I always wondered why ASTs weren't a bigger part of both editing and scoping changes/parsing code. I thought I read an article where they said 'grep' was just as effective. It kinda made sense for the case they were talking about.
I think we should use ASTs more, not for performance, but for easier code review.
Changes that are primarily code refactorings, like breaking up a large module into a bunch of smaller ones, or renaming a commonly-used class, are extremely tedious to review, both in LLM-generated diffs and human-written PRs. You still have to do it: LLMs have a habit of mangling comments when moving code across files, while for a human, an unassuming "rename FooAPIClient to LegacyFooAPIClient" PR is the best place to leave a backdoor when taking over a developer's account. Nevertheless, many developers just LGTM changes like this because of the tedium involved in reviewing them.
If one could express such changes as a simple AST-wrangling script in a domain-specific language, which would then be executed in a trusted environment after being reviewed, that would decrease the review burden considerably.
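A minimal sketch of what I mean, using Python's stdlib ast module in place of a dedicated DSL (the class names are just the hypothetical rename from above):

    import ast

    class RenameClass(ast.NodeTransformer):
        """Rename a class definition and every reference to it."""
        def __init__(self, old, new):
            self.old, self.new = old, new

        def visit_ClassDef(self, node):
            if node.name == self.old:
                node.name = self.new
            self.generic_visit(node)  # also rewrite names inside the class body
            return node

        def visit_Name(self, node):
            if node.id == self.old:
                node.id = self.new
            return node

    tree = ast.parse("client = FooAPIClient()")
    tree = RenameClass("FooAPIClient", "LegacyFooAPIClient").visit(tree)
    print(ast.unparse(tree))  # client = LegacyFooAPIClient()

Reviewing a dozen lines like that once is far cheaper than reviewing the same mechanical change repeated across hundreds of call sites, and the mangled-comment and hidden-backdoor failure modes largely disappear.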
I believe that with agentic development, the most important constraint we have is human time. Making the LLM better and faster won't help us much if the human still needs to spend the majority of their time reading code. We should do what we can to give ourselves less code to read, without losing confidence in the changes the LLM makes.
Grep is effective for the most part, except in situations where you have a huge codebase and the thing you're looking for is used in too many places, both as a symbol and as a non-symbol (in strings, comments, and unrelated identifiers).
Another annoying thing about plain grep: LLMs often end up pulling in bundled packages when grepping, where a single line can be large enough to ruin the context window.
It's very effective in well-written and well-designed code bases, where concepts tend to be well-formed enough not to be named the same as everything else, so grepping for symbols gives you good search results.
Projects where the god object or core concepts have generic names like "Tree", "Node", or other things that are used everywhere tend to be little short of impossible to search with grep and friends.
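A toy example of the difference, using Python's stdlib ast module (made-up snippet): grep for "Node" also hits strings and comments, while an AST walk only finds real identifier usages:

    import ast

    # "Node" appears once inside a string and twice as a real identifier:
    source = 'label = "Node count"\nroot = Node(children=[Node()])'

    hits = [n.lineno for n in ast.walk(ast.parse(source))
            if isinstance(n, ast.Name) and n.id == "Node"]
    print(hits)  # [2, 2] -- grep would also match the string on line 1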
It's not intuitive to humans, even after learning parsing theory. I can do basic name refactorings. I've even written Neovim plugins to do one specific thing with the AST (DFS down and delete one subtree, which I understand). Those are fine.
I would not be comfortable doing an on-the-fly "rewrite all subtrees that match this pattern" kind of edit.
It seems like a tool that's good for LLMs, though.
I happen to have written both a tool and a blog post about this topic. It's more about the different technical approaches to solving the problem, but it might still interest you :)
This is interesting - I have been working on the same thing, building contextual data, LSP-style.
I saw the tools page, where if I understand right, `get-symbol-context` is actually the main useful tool for what you provide? The others seem to be metadata that's easy to get already (?), but that tool provides the extra info.
I had been working on exposing mine as more high-level, i.e. multiple APIs to query different kinds of metadata about symbols, types, etc. But I am still not sure of the best approach; my thinking was about not overloading the AI with too many different tools. They accumulate quickly.
I definitely share the same sentiment. I don't want to overload the LLM with many tools. Better to have a few opinionated and flexible ones, but yeah, keeping the balance is hard.
I would say the main two tools are get-symbol-context and get-repository-overview. The latter is actually the more complex and sophisticated one. I'm running some graph algorithms to rank the symbols in terms of relative importance based on centrality metrics, i.e. how well connected they are in the symbol graph.
The idea behind that is to allow the LLM to infer the general structure and architecture of the project with just one tool call.
I guess you could achieve something similar if you had a good Agents.md or docs detailing that for your project, but this was more meant to get there on the fly.
The symbol-context tool is basically a graph query tool (without a DSL or Cypher support yet), but yeah, here the question is also whether it makes more sense to give the AI the ability to run Cypher queries itself or to abstract that away behind a thinner API.
The main underlying asset of my tool, however, is the graph I'm building and the metadata that can be extracted from it (connections, type of connection, etc.) :)
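To make the repository-overview ranking concrete, here's a toy sketch of the centrality idea (my simplification, not the actual implementation; networkx PageRank over a made-up symbol graph):

    import networkx as nx

    # Hypothetical symbol graph: an edge A -> B means "A references B".
    g = nx.DiGraph([
        ("main", "Server"), ("tests", "Server"),
        ("Server", "Router"), ("Server", "Config"),
        ("Router", "Handler"), ("Handler", "Config"),
    ])

    # Well-connected symbols float to the top -- a cheap proxy for
    # "architecturally central", which is what the overview surfaces.
    for symbol, score in sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1]):
        print(f"{symbol:8} {score:.3f}")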
Metadata: I feel like LSP focuses on human-style things (like locating a symbol), which are useful but not necessarily exactly what an LLM needs. Instead I want to do things like show the inheritance chain. Is a virtual method overriding something, or being overridden later? What is the class/polymorphism situation? My feeling is that this will help the model understand the shape of the code, plus help catch some bugs.
So a query on a symbol would:
* Return its type declaration, not (just) its location (and I'm considering a summary version that pulls in the ancestors too, so you directly see everything the symbol has available, not just the immediate declaration, because leaf nodes in an inheritance hierarchy often don't add much and the key behaviour is elsewhere)
* Return info about inheritance: the shape of how this symbol modifies other code and how other code modifies it.
With variations when the symbol is a variable, a type, etc. I'm currently using tree-sitter for this, bypassing LSP to (for the language I'm working on) build a full symbol table and more, to get something closer to the LSP info you mention in your blog but not limited to what LSP makes available. I don't want to rely on an LSP server; I think first-class support per language is better. It's probably possible to generate this with a set of LSP calls, but it might take some heuristics and guesswork and... :/
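As a tiny illustration of the inheritance queries I mean (stdlib ast here standing in for the tree-sitter pipeline; everything simplified):

    import ast

    source = (
        "class Base:\n"
        "    def render(self): ...\n"
        "class Leaf(Base):\n"
        "    def render(self): ...\n"
    )
    classes = {c.name: c for c in ast.parse(source).body
               if isinstance(c, ast.ClassDef)}

    def overriding(cls_name, method):
        # Which direct bases also define `method`? A real tool would walk
        # the whole ancestor chain and resolve imports across files.
        bases = [b.id for b in classes[cls_name].bases if isinstance(b, ast.Name)]
        return [b for b in bases if b in classes and any(
            isinstance(n, ast.FunctionDef) and n.name == method
            for n in classes[b].body)]

    print(overriding("Leaf", "render"))  # ['Base'] -- Leaf.render overrides Base.render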
I do have a graph of file-level dependencies, but not yet a graph of what calls what at the symbol or type or method level. And while I build an index of all symbols I haven't yet sorted that by count.
I get the sense we're thinking along similar lines, with slightly different approaches?
Edit: if you would like to chat on this, I'm up for it! You can find me at my username at gmail (easy to lose emails there due to volume and spam!) or my profile has my website which has my LinkedIn (horribly, more reliable :D)
That sounds great, thanks for sharing your thoughts!
It sure sounds like we have similar things in mind. I basically try to build a proper graph representation of the code at runtime, so all caller/callee relationships plus type inheritance chains, etc.
This is basically what I call a semantic code graph in the blog post.
From the things I tried with tree-sitter, I think I would have a hard time achieving the same, because by nature tree-sitter can only make educated guesses about real connections and will run into problems if things are named ambiguously.
But yeah, will definitely reach out and am looking forward to chatting :) Hope I find the time during this week!
I just realized that the fact that LLMs work so well for me in Clojure might be partly because of the clojure-mcp tools. They provide structural browsing and editing.
Has anybody thought about encoding AST tokens as LLM tokens, similar to how different words can have different meanings and that's reflected in their embeddings?
Language keywords are almost certainly individual tokens already. But I think you mean more than that: basically replacing identifiers with special tokens as well. It's worth a shot, but there are some practical problems.
An immediate downside is that mapping variable names to tokens and back would probably require indexing the whole codebase. You'd need a 1:1 mapping for every name that was in scope, and you'd probably need to be clever about disambiguating names that come in and out of scope.
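A quick sketch of that mapping, and of why scope makes it hard (Python stdlib ast; scoping deliberately ignored):

    import ast

    class Anonymize(ast.NodeTransformer):
        def __init__(self):
            self.table = {}  # original identifier -> placeholder token

        def visit_Name(self, node):
            # One flat mapping; a real scheme would have to disambiguate
            # the same name bound in different scopes.
            node.id = self.table.setdefault(node.id, f"<ID{len(self.table)}>")
            return node

    anon = Anonymize()
    tree = anon.visit(ast.parse("total = price * quantity"))
    print(ast.unparse(tree))  # <ID0> = <ID1> * <ID2>
    print(anon.table)         # {'total': '<ID0>', 'price': '<ID1>', 'quantity': '<ID2>'}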
...I've said this a few times, and sometimes I get downvoted for it, sometimes I do not... This is what happens when you only hire CS people with no real-world engineering experience. Sure, they can build ML models, but I see how they improve upon them after years, and it's always some really old "lesson learned" elsewhere in the industry. There are a thousand projects that make things like Claude Code use fewer tokens and edit more efficiently, and nobody at Anthropic or Codex implements a single one of these approaches.
It screams inexperience building real software. If I were Anthropic, I'd hire devs for Claude Code who aren't just AI builders but tool builders, who care about UX and systems.
Building ML model training and serving infrastructure is real-world engineering. Never mind the user-facing apps and supporting services.
> Sure, they can build ML models, but I see how they improve upon them after years, and it's always some really old "lesson learned" elsewhere in the industry. There are a thousand projects that make things like Claude Code use fewer tokens and edit more efficiently, and nobody at Anthropic or Codex implements a single one of these approaches.
They have fully internalized the bitter lesson; the result is that they get better returns from improving the next model than from squeezing performance out of the current one.
> Building ML model training and serving infrastructure is real-world engineering. Never mind the user-facing apps and supporting services.
Looking at Anthropic's status page for the last 90 days only serves to prove that they aren't hiring the right people for the right roles.
> They have fully internalized the bitter lesson; the result is that they get better returns from improving the next model than from squeezing performance out of the current one.
Sure, but there are so many things they could be doing that don't require tweaking the model directly to improve it. The community builds all sorts of tools that improve Claude Code directly, and yet nobody at Anthropic takes any initiative in those directions. It feels like either they don't care about building user-facing software, or they don't have any UX experience.
> Looking at Anthropic's status page for the last 90 days only serves to prove that they aren't hiring the right people for the right roles
Look at any[1] dashboard over the past 6 months. It's less about the people working there and more about what leadership is demanding, industry-wide: Productivity* is the only metric that matters now, measured by how quickly the teams under them can squirt out new features. Leadership desperately needs a win because of the amounts invested, so stability becomes what it is now. Nothing to do with the rank and file, though it's Engineering that will be blamed when the time comes. I hope the multitudes of CTOs are earning enough to justify being sacrificed to appease shareholders.
1. If your company has a SEV or SLA dashboard, look at it and compare the levels before and after mandated AI-productivity pushes by management.
Not a technical answer, but when we started up the system (ZX 16K) we booted into a prompt. We would enter commands with line numbers. After each line number you'd start with one of the possible commands, which were embossed on the keyboard (IF, PEEK, POKE, etc.). What you could complete was limited by that. Edit: BASIC programming
Are the Claude Code (desktop) models very different from what Bedrock has? I thought you could hook up VSCode (not Claude Desktop) to Bedrock Anthropic models. Are there features in Claude Desktop that are not in VSCode/CLI?
Interesting... what is the use case for the AI that is querying it? Is it to develop additional features for integration with your app, or do you have some other use case: code review/audit/debugging/etc.? For an AI developing against an API, I would think an OpenAPI JSON file would do the trick.
Is the Roslyn method called as part of the build/publish?
For a long time I thought my RTX 2060 was just not capable, and then the other day I did an ffmpeg GPU transcode and was surprised by how well it did. So now I am thinking about running some of Google's new Gemma edge models (probably the smallest will work with my 6 GB VRAM + 2 GB setup). I am not 100% sure what that 2 GB is, but I think it is borrowed from system memory in some manner.
I like the Claude desktop interface: the color scheme, presentation, fonts, etc. Is there a CSS file I can find for the desktop version? I assume it's using some kind of web rendering engine and CSS is part of it.
Can highly recommend pixi. It really is the "uv but for Conda", and actually quite a bit more imo. Don't know how relevant this is for you, but many packages like PyTorch are no longer being built for Intel Macs, and some packages like OpenCV are built such that they require macOS 13+. That's usually not too much of a problem on your most likely pretty modern dev machine, but when shipping software to customers you want to support old machines. In the Conda ecosystem, stuff is built for a wide variety of platforms, many more than the wheels you'd find on pypi.org, so that can be very useful.
So I can really recommend trying pixi. You can even mix dependencies from pypi.org with ones from Conda, so in most cases you really get the best of both worlds. Also, the maintainers are really nice and responsive on their Discord server.
I had no idea it was AI-assisted (as another comment put it). However, I am fine with this... I would certainly enhance my long-form content like the author described. The author mentioned the use of a world bible and style guides, and it shows in the consistency and tightness of the article. And that is key: taking something AI-generated (based on a prompt) and reworking it systematically in an iterative, human-in-the-loop process. The end result was a great read.