
There is no evidence of any of that.

He was paid to work on it. When that stopped, he continued working on it in the hope that someone would hire him to keep doing so.

That didn't happen; no one funded it.

So due to the economic system he no longer maintains it.

That's your economic system at work. No one is pretending it isn't there; this is the outcome of it.


Why is it the responsibility of the person working for free?

Why is it never the responsibility of the people using it?

If anyone cares enough, they will. People didn't care enough to pay, so maybe no one cares enough to fork it and be the new unpaid custodian.


Well said, accurate framing.

The reason for something to exist is not to be used. He was paid while doing it, and that pay stopped, and he kept doing it. Now he wishes to stop.

The reason for something to exist is someone finds joy doing it. Especially when they are unpaid.

The sadness should be focused on his inability to support himself with a tool that a lot of companies and people are clearly using and gaining value from.


The reason for a tool to exist is to be used, even if only by a single person. Other projects that aren't tools definitely fit the criterion "just for the joy of it," but a tool, by definition, has at least one use, and building a tool gives someone joy precisely because the tool is useful.

The sadness doesn't need to be focused anywhere; you can feel sad about more than one thing at a time. People can be sad that a tool they think is great, have relied on, and that has been important for their use case is going away, while also being sad that such a great tool doesn't get enough support from companies. Both can be true; there's no need to control what people can or should feel.


It can still be forked. There is no salting the ground here. If you maintain the project and have for a long time, and you wish to stop, you can stop.

If no one cared enough to support the project, why does anyone care enough now? It all sounds hollow. Nothing bitter about it.

When you work on a project, any project, you have a responsibility. At some point we all can stop, and become free to not have that responsibility.


I mean, skills also include calling Python scripts. That's determinism.

Anything that can be deterministic should be.
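To make the determinism point concrete: a skill can delegate a mechanical step to a script rather than asking the model to work it out in-context, so the result is identical on every run. A hypothetical sketch (the task and function name are invented for illustration):

```python
# Hypothetical helper a skill could call instead of having the model
# "reason" about version strings: the output is the same every run.
import re

def bump_patch(version: str) -> str:
    """Deterministically bump the patch component of a semver string."""
    major, minor, patch = map(
        int, re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version).groups()
    )
    return f"{major}.{minor}.{patch + 1}"

if __name__ == "__main__":
    print(bump_patch("1.4.9"))  # always 1.4.10, no sampling involved
```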


Skills are not like hooks. Skills can and will inevitably be ignored.

Skills are not ignored if you put a router in front of them that ensures they are actually called.

The problem is that the base harnesses don't call them aggressively enough, not that they don't work.
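A minimal sketch of what "a router in front of them" could mean: match the incoming request against each skill's trigger terms before the model sees the turn, and inject the matching skills explicitly so invocation doesn't depend on the model remembering them. All skill names and keywords here are hypothetical; real harnesses differ.

```python
# Hypothetical pre-model router: force-match requests to skills by keyword
# so skill invocation doesn't rely on the model choosing to call them.
SKILLS = {
    "deploy": {"keywords": ["deploy", "release", "ship"]},
    "migrate-db": {"keywords": ["migration", "schema", "alembic"]},
}

def route(request: str) -> list[str]:
    """Return the skill names whose keywords appear in the request."""
    text = request.lower()
    return [name for name, spec in SKILLS.items()
            if any(kw in text for kw in spec["keywords"])]

print(route("please ship the new release"))  # ['deploy']
```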


The same thing I've been doing all along has now used up a third of my weekly limit in one day on Max 20.

So yes, for the same tasks, usage runs out faster (currently).


It's unsurprising, given this is the first day that token usage has been crazy like this.

All of us doing crazy agentic stuff were fine on Max before this. Now with Opus 4.7, we're no longer fine, and we're troubleshooting and working through options.


> were fine on max before this

Ya...you may be who I'm talking about though (if you're speaking from experience). If your methodology is "I used 4.6 max, so I'm going to try 4.7 max" this is fully on you - 4.7 max is not equivalent to 4.6 max, you want 4.7 xhigh.

From their docs:

> max: Max effort can deliver performance gains in some use cases, but may show diminishing returns from increased token usage. This setting can also sometimes be prone to overthinking. We recommend testing max effort for intelligence-demanding tasks.

> xhigh (new): Extra high effort is the best setting for most coding and agentic use cases.


Sorry, in that case I misunderstood "max" to mean the subscription, Max 20.

I am on xhigh.


Ah - xhigh is probably what you want. Their docs suggest xhigh for agentic coding, though judging by their blog high should be better than 4.6 max (ymmv)

I've always used high, so maybe I should be using xhigh


I'm actually in the process of switching all of my agents to Sonnet, and I'm going to try dropping down to medium.

I used up a third of my weekly usage in less than a day. I am working diligently to do whatever I can to lower token usage.


I'm at 35% :(

The biggest issue I have with these systems is that I don't want a blanket memory. I want everything embedded in skills and progressively discovered when required.

I've been playing around with doing that with a cron job for a "dream" sequence.

I really want to get them out of main context asap and into skills, where they belong.

https://github.com/notque/claude-code-toolkit
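A rough sketch of what such a "dream" pass might do, assuming memory lives as dated note files with `topic: note` lines and skills are per-topic markdown files (the layout and naming are invented for illustration, not taken from the linked repo):

```python
# Hypothetical nightly consolidation: move free-floating memory notes into
# per-topic skill files so they load on demand instead of in main context.
from collections import defaultdict
from pathlib import Path

def consolidate(memory_dir: Path, skills_dir: Path) -> dict[str, int]:
    """Group 'topic: note' lines from memory files into per-topic skills."""
    buckets = defaultdict(list)
    for note_file in sorted(memory_dir.glob("*.md")):
        for line in note_file.read_text().splitlines():
            if ":" in line:
                topic, note = line.split(":", 1)
                buckets[topic.strip().lower()].append(note.strip())
    skills_dir.mkdir(parents=True, exist_ok=True)
    for topic, notes in buckets.items():
        (skills_dir / f"{topic}.md").write_text("\n".join(notes) + "\n")
    return {t: len(n) for t, n in buckets.items()}
```

Running this from cron keeps main context lean while nothing is lost: each note ends up in exactly one topic file that can be progressively discovered later.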


Isn't this the idea behind holographic memory? Chopping the image in half gets you the same image at half the resolution? Or so I've heard...

What you want is a context mipmap.

Then there was the Claude article describing using filesystem hierarchy to organize markdown knowledge, which apparently beats RAG.
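The filesystem-hierarchy idea can be sketched as a drill-down: at each level the agent reads only the names of the subdirectories (a small index) and picks one, instead of retrieving over the whole corpus at once. The directory layout and the `choose` callback (standing in for the model's decision) are assumptions for illustration:

```python
# Hypothetical drill-down over a markdown knowledge tree: at each level only
# one small index is read, rather than searching everything up front.
from pathlib import Path

def drill_down(root: Path, choose) -> str:
    """Descend the tree, letting `choose` pick a subdirectory per level,
    then return the leaf's notes.md content."""
    node = root
    while True:
        subdirs = sorted(p.name for p in node.iterdir() if p.is_dir())
        if not subdirs:
            return (node / "notes.md").read_text()
        node = node / choose(subdirs)
```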


Blanket memory doesn't scale, totally agree. I built something similar in Atmita (https://atmita.com). Agents see short summaries of each other instead of full memory dumps, and automation run logs live in their own layer.

