Hacker News | new | past | comments | ask | show | jobs | submit | vinhnx's comments

OpenAI is having fun, love this.

It really is. I tried to submit it, but HN stripped the key word from the title. It's a good article though.

The way to do it, as I understand it, is to submit, notice that the title was filtered, and then edit the submission within the edit window to include the filtered words.

Thank you for checking out VT Code, and great questions! Somehow I only saw this comment now (4 hours late), sorry!

I'll answer your questions:

> does ast-grep run first as a structural filter, with ripgrep for content matching, or are they used independently depending on the query type?

ast-grep and ripgrep are used independently depending on the query type, not in a sequential filtering relationship. In VT Code, the `unified_search` tool provides two distinct search actions: `grep` and `structural`. The `grep` action uses ripgrep for broad text matching and quick file-content sweeps; it calls the system `rg` binary or falls back to a default grep (I also use my own perg, a grep CLI I wrote). The `structural` action uses ast-grep (https://ast-grep.github.io/) for syntax-aware search, read-only project scans, and rule tests; it executes the ast-grep binary directly, without any ripgrep preprocessing.

The routing is based on your search needs and is designed around semantics:

- Plain text search -> `grep` action (ripgrep)

- Syntax-aware search -> `structural` action (ast-grep)

I designed VT Code's system prompt to explicitly state a preference for `grep` (rg) for broad text search and `structural` (ast-grep) for syntax-sensitive search.
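As a rough sketch of that routing (the type and function names here are hypothetical, not VT Code's actual API), the two actions simply dispatch to different external binaries and never form a pipeline:

```rust
// Hypothetical sketch of query-type routing between ripgrep and ast-grep.
// `SearchAction` and `command_for` are illustrative names only.

#[derive(Debug, Clone, Copy, PartialEq)]
enum SearchAction {
    Grep,       // plain-text search -> ripgrep
    Structural, // syntax-aware search -> ast-grep
}

/// Pick the external binary for a given action. The two tools run
/// independently of each other; neither preprocesses for the other.
fn command_for(action: SearchAction) -> &'static str {
    match action {
        SearchAction::Grep => "rg",
        SearchAction::Structural => "ast-grep",
    }
}

fn main() {
    println!("{}", command_for(SearchAction::Grep));       // rg
    println!("{}", command_for(SearchAction::Structural)); // ast-grep
}
```

In a real agent loop, the model picks the action per call, so a text query never pays ast-grep's parsing cost and a structural query never risks regex false positives.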

> How are you handling the schema translation?

VT Code handles schema translation through a unified abstraction layer with provider-specific translation methods, avoiding provider-specific code paths in the main application logic. (https://github.com/vinhnx/vtcode/blob/main/vtcode-core/src/l...) The system uses a unified `LLMProvider` trait with shared request/response types, then handles provider-specific translations at the boundary layer.
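A minimal sketch of that boundary-layer pattern (illustrative names and a toy wire format; the real trait lives in vtcode-core):

```rust
// Illustrative sketch: one shared request type, with each provider
// owning its own schema conversion behind a common trait.

struct UnifiedRequest {
    model: String,
    messages: Vec<String>,
}

trait LLMProvider {
    fn name(&self) -> &'static str;
    // Each provider serializes the shared request into its own wire
    // format; callers never branch on the provider themselves.
    fn to_wire(&self, req: &UnifiedRequest) -> String;
}

struct Anthropic;

impl LLMProvider for Anthropic {
    fn name(&self) -> &'static str {
        "anthropic"
    }
    fn to_wire(&self, req: &UnifiedRequest) -> String {
        // Real code would build the provider's JSON schema here.
        format!("anthropic:{}:{} msgs", req.model, req.messages.len())
    }
}

fn main() {
    let provider = Anthropic;
    let req = UnifiedRequest {
        model: "some-model".into(),
        messages: vec!["hello".into()],
    };
    println!("{}", provider.to_wire(&req));
}
```

The payoff is that the main agent loop only ever sees `UnifiedRequest` and the trait; swapping providers is a construction-time decision, not a code path.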

Different providers have varying role support, handled through the `MessageRole` enum with provider-specific string conversion methods:

- OpenAI: Supports `system`, `user`, `assistant`, and `tool` roles directly, following the role definitions of OpenAI's Chat Completions/Responses APIs.

- Anthropic: Converts tool responses to `user` messages, handles system messages separately.

- Gemini: Maps `assistant` to `model`, handles system messages as `systemInstruction`.

Each provider implements the `LLMProvider` trait and handles its own schema conversion internally. For example, the Anthropic provider converts requests in `convert_to_anthropic_format()`.
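The per-provider role mapping described above can be sketched roughly like this (enum and function names are hypothetical, not VT Code's exact API):

```rust
// Hypothetical sketch of per-provider role string conversion.

#[derive(Debug, Clone, Copy, PartialEq)]
enum MessageRole {
    System,
    User,
    Assistant,
    Tool,
}

/// Map a unified role to a provider-specific role string.
fn role_str(provider: &str, role: MessageRole) -> &'static str {
    match (provider, role) {
        // OpenAI supports all four roles directly.
        ("openai", MessageRole::System) => "system",
        ("openai", MessageRole::Assistant) => "assistant",
        ("openai", MessageRole::Tool) => "tool",
        ("openai", MessageRole::User) => "user",
        // Anthropic: tool results travel as user messages; system
        // content is sent in a separate top-level field instead.
        ("anthropic", MessageRole::Assistant) => "assistant",
        ("anthropic", _) => "user",
        // Gemini renames assistant to "model"; system content goes
        // into `systemInstruction` rather than the message list.
        ("gemini", MessageRole::Assistant) => "model",
        ("gemini", _) => "user",
        _ => "user",
    }
}

fn main() {
    println!("{}", role_str("gemini", MessageRole::Assistant));
    println!("{}", role_str("anthropic", MessageRole::Tool));
}
```

Keeping the mapping in one table-like match makes the divergences between providers easy to audit in a single place.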

> ACP support is interesting. I haven't seen many agents implement it yet, mostly MCP. Is your read that ACP is going to gain adoption, or is including both more about hedging?

I have supported and adopted ACP since it was first announced and introduced by Zed. Currently, VT Code runs with almost all ACP-compatible clients (Zed, Minamo Notebook, JetBrains, Toad...). See: https://github.com/vinhnx/VTCode/blob/main/docs/guides/zed-a.... I'm not sure about ACP adoption metrics, but ACP integration helps VT Code run on more surfaces (i.e., environments).

> The local inference angle (LM Studio, Ollama) matters for use cases where source code can't leave the network. Have you benchmarked which open models hold up reasonably for tool-calling-heavy workflows? In my experience most local models below 70B struggle with multi-turn tool use even when their raw code generation is decent.

Currently, local inference support via LM Studio and Ollama is still early and experimental in VT Code. I haven't run it much and don't yet have benchmarks with open models, since I don't have enough VRAM. But it's definitely on my checklist, and I could use help from the community if anyone is interested.

Thank you!


The king is back! I vividly remember being amazed and feeling deep appreciation while reading DeepSeek's reasoning on Chat.DeepSeek.com, even before the DeepSeek moment the following January. I can't quite remember the date, but it was one of the most profound moments I've had. After OpenAI's o1, no other model had "reasoning" capability yet, and DeepSeek opened the full trace for us. Seeing DeepSeek's "wait, aha..." moments is hard to describe. I learned strategy and reasoning skills for myself, too. I am always rooting for them.

Instead of King DeepSeek we got DeepShit Clown

Anthropic reverted the "prosumer" A/B testing, but the damage is done.

Oh absolutely and you're welcome! Btw, fish sauce in scrambled eggs over rice is one of the simplest, most satisfying meals you'll find across Southeast Asia, in my country Vietnam especially. It's my favorite meal also.

A simple mental model for Claude's new adaptive thinking: it is the recommended way to use extended thinking, since adaptive thinking wraps extended thinking. It applies to Opus 4.7, 4.6, and Sonnet 4.6, and is the default mode on Claude Mythos Preview.


They do now. The /effort command is in the latest Claude Code version; run `claude update`, then use the `/effort` slash command.


/effort is only per-session though, not persistent.


You're welcome! I shared it from AnimationObsessive's blog; they did a great deal of research and showed the image boards drawn on paper by Mr. Hayao Miyazaki himself and the studio. I find this article unique, and some of the images have not been seen publicly before. I recommend subscribing to their blog for more animation articles.


Thank you!

