Hacker News | antirez's comments

Context: he is one of the MLX developers, a skilled ML researcher.

1. The huge jump from Opus to GPT 5.3. Game changer. GPT 5.4 and 5.5 were better, but only incrementally.

2. Nope, I don't give it much of a personality, but I use subtle prompt differences to elicit the responses I want: to make the model focus on a given detail, or act with a specific kind of engineering mindset.

3. The AI never slowed me down, since I always had the full context and the code details of what was happening in mind. I believe this happens more when you don't have a clear idea. Also, GPT >= 5.3/4 is not the past generation of models: it is very hard to trap it into a situation where it seems unable to understand what you mean.

4. A few times the AI provided fresh insights that I really liked. Most of the time it was the other way around. Certain implementations were written by the AI at a very impressive level of quality.

5. I don't use general skills; I build skills with deep search when needed for specific projects, and build an AGENT.md that works as a knowledge base as I work with the AI. One thing I use a lot, when there is a very complex problem, is to tell GPT that I have a friend called Machiavelli who is an incredible computer scientist, and to write him an email in /tmp/letter.md describing the problem we are facing; I'll try to get a reply. Then I ask GPT 5.5 Pro on the web with extensive reasoning turned on. It sometimes takes 30 minutes or more to reply. Often, after I feed back the reply, the agent is able to see things a lot more clearly.


Thanks a lot for the insights. I like the Machiavelli thing.

> Then I ask GPT 5.5 Pro on the web with extensive reasoning set on. It will take sometimes 30 minutes or more to reply.

Any reason why Codex can't do that?


If Pro is the same model (hard to tell, I'm not sure), it has a token budget for thinking (test-time scaling) which is huge compared to the Codex endpoint.

The code is 5000 lines in total, comments included:

~2000 lines for the sparse array.

~2000 lines for the t_array commands and the upper-layer implementation.

~500 lines of AOF / RDB code.

Everything else is tests, JSON command descriptions, and the TRE library under "deps".


I might be the outlier, but this PR feels like heaven to review. It's a complete, all-encompassing PR that I can work through with the entire context right in front of me.

If the initial development bar is relatively high, it's far, far easier to identify flaws and gaps when you have the whole thing in front of you all at once.


The obsession with forcing one coherent feature into X PRs reviewed over Y days often feels like process theater.

It rewards visible incrementalism over actually understanding the change. Sometimes the best review is to sit down, build the full model in your head, and work through the whole thing in context.

"Easier to review with minimal effort" is not the same as "better reviewed."


I think the point GP is making is that this PR smells like a solo dev working on their own project, not like how a community-driven project adds major new functionality. I'm sure docs and descriptions (or at least a discussion of tradeoffs and design decisions, if not ADRs) exist somewhere, but they aren't linked handily from the PR. There is a lot of explanation in the blog post and PR, but it's unilateral-looking.

cf. valkey and others


Redis was completely built in this way since the start. I believe this is a better way to create software. Compromise in design is, in my opinion, something to avoid: feedback is important, but often a single person who has studied the problem a lot and has design taste can come up with a great solution. Mediating such a solution, even between two stellar solutions A and B, will not produce a C solution that is better, since you can't produce such a solution by interpolation. It is simpler to damage A and B. And: it is rare that in a big set of people everyone has stellar ideas, so you often have to mediate with people who have poor ideas as well. Not worth the effort for the way I'm wired. What works better for me is to provide hints about what I'm doing, then I receive feedback, and sometimes there are really great ideas in this feedback, and I incorporate the parts I like.

Thanks, I think I'm all caught up now. The timeline is like this if I understand correctly: your successors (Yossi Gottlieb and Oran Agra) explicitly announced a new governance model in 2020, saying the project had "outgrown the BDFL-style of management" and that they wanted to "promote more teamwork and structure". With the relicensing in 2024, however, external contributors with five or more commits to Redis dropped to zero in the first six months (basically, community contribution collapsed). In late 2024, you came back in the role of "Redis evangelist", and a year ago there was an additional licensing change, adding AGPLv3 as an option (8.0's tri-license). So now Redis has your steady hand on the wheel again.

I was confused because the last time I checked on things, it was still about fostering community input and advancement but not necessarily consensus. Things have tipped back in the original direction since then. I don't think "Redis was completely built in this way since the start" is completely accurate, but also the community effort under the new governance model never got very deeply entrenched while you were away.


First of all, Redis is amazing, and your 4-month development process speaks to the fact that you've already designed and verified correctness super thoroughly.

... just speaking as someone who sometimes has to review very long PRs, though, I feel like 25% is a roughly normal signal-to-noise level. 5,000 lines of core logic is a LOT, and the tests and dependencies do still need to be read.

EDIT: I feel like the problem, as a reviewer, is processing 4 months of intensive research/development and providing useful feedback. At that point, there's probably not much major input you can have into the core architecture or strategy, so you're probably not providing much more than a bugbot would.


> At that point, there's probably not much major input you can have into the core architecture or strategy

Sure you can? In this concrete case, Redis is very "flat" — there's the data structure implementations, and there's the commands that use them. 1+N. You could have feedback about the data structure (i.e. whether it's optimal for the use-cases); or about any of the commands (i.e. not just their impls, but also whether they're the best core API surface to lock in long-term, or even whether they're worth including at all.)

Any given feedback would necessitate fairly limited rework to address, as you're either modifying the data structure (and its tests) or a command (and its tests and docs.)


Fair point that there might be some functional changes you can suggest, but I continue to suspect that by the time this PR hit GitHub, all the most important decisions had already been finalized.

I think where we went wrong in understanding this PR is in the assumption that it's designed to invite review because that's how a lot of other team- or community-driven projects work.

Unfortunately not; sorted sets are actually a bit on the other side of the spectrum: they are semantically sound, but absolutely wasteful because of the combined skiplist + array. Also, if the underlying representation is not an array, range queries and ring buffers will never be as efficient and compact as they should be. In theory you can do everything with everything, but segmenting what each API can do lets you exploit the use cases to provide the best underlying implementation.

Redis sets the locale at startup to avoid issues, so it should be OK, but we will document that, for instance, è will not match È when nocase is used.

Once I realized arrays were a great fit for text files, many use cases I could conceive were always limited by the fact that we need to grep files. So I thought: what is the AROP equivalent for files? ARGREP. Then I made sure to add both fast exact matching and regexp matching, so that depending on the use case the best tool could be used. I then discovered that for many OR-ed strings, regexps could be the faster way if well optimized. And then I specialized TRE a bit.

Are there other existing Redis data types and features that might benefit from integrating TRE?

KEYS comes immediately to mind :)

Haha, ~5000 LOC with comments. The rest is tests + TRE code + TRE tests.

Checking, thanks. EDIT: it works very well on my iPhone, so without being able to reproduce it, it's not easy to fix.

Same here, I need to turn off content blockers for the article content to load.

I should probably remove the Adsense JS which I don't use anyway...

Oh shoot. Sorry I didn't even think about having a content blocker running on my phone. Sorry for the distraction.

Well, Redis is a data structures server, and it has very complicated and edgy data structures like the HyperLogLog, so I have very little doubt that a fundamental data type like the Array will fit :) Also, the actual complexity added is mostly two C files that are well commented and understandable.

    wc -l t_array.c sparsearray.c
        2012 t_array.c
        2063 sparsearray.c
        4075 total (including comments)
Sure, there is also the AOF / RDB glue, the tests, and the vendored TRE library for ARGREP. But all in all it's self-contained complexity with little interaction with the rest of the server.

A quick note: if we focus only on that part of the implementation, skipping tests and the persistence code (which is not huge), 4075 lines in 4 months is an average of about 33 lines per day, which is quite low.


I’m a big fan of your work, and I honestly didn’t expect to receive a reply from you. Thank you. Also, thank you for pointing out exactly where I was misunderstanding the issue. In the past, I used Redis for temperature measurements in a smart farm project. I used Hashes back then, but it seems like Array would fit that use case much better.

This looks like a very useful feature. Thank you again for the reply.


I appreciate your kind reply as well :)

Yep, I will release it; it's a bit out of sync at this point, but I'll do an updating pass and release it.

It’s always a great HN thread when an author of a widely used lib/app engages on a technical level.

antirez - you inspire a generation of devs. Thanks for all you do.

