Reviewing code is slower now, though, because you didn't write it in the first place, so you're essentially reviewing someone else's PR. And now it's a 3000-line PR landing in an hour or two instead of every couple of weeks.
Aren't most people almost skipping this step entirely? How else can you end up with a net benefit? Reviewing code is more intense than writing and reviewing simultaneously.
1/ Reviewing code can't be more intense than writing it. I can't understand where this statement comes from! If that were true, why would senior developers review juniors' code instead of writing it themselves from scratch?
2/ I think we need to build more efficient ways to "QA code" instead of the "read with eyes" review process. For example, my agents write a lot of tests and review each other's work.
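A minimal sketch of what that cross-review loop could look like. The names here (`call_model`, `author_agent`, `reviewer_agent`, `qa_loop`) are hypothetical placeholders, not from the original comment; swap in whatever model API and test runner you actually use.

```python
# Sketch of a "cross-review" loop: one agent writes code and tests,
# a second agent reviews the change and the test results before a human looks.
import subprocess

def call_model(prompt: str) -> str:
    # Placeholder: replace with your actual LLM call (hosted API, local model, etc.).
    raise NotImplementedError

def author_agent(task: str) -> str:
    """Agent A: produce an implementation plus tests for the task."""
    return call_model(f"Write the code and pytest tests for: {task}")

def reviewer_agent(change: str, test_output: str) -> str:
    """Agent B: review Agent A's work against the test run, return verdict + comments."""
    return call_model(
        "Review this change. Flag bugs, missing cases, and untested paths.\n"
        f"Change:\n{change}\n\nTest run:\n{test_output}"
    )

def run_tests() -> str:
    """Run the test suite and capture the output for the reviewer."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.stdout + result.stderr

def qa_loop(task: str, max_rounds: int = 3) -> str:
    """Iterate until the reviewer approves or the round limit is hit."""
    change = ""
    for _ in range(max_rounds):
        change = author_agent(task)
        verdict = reviewer_agent(change, run_tests())
        if "approve" in verdict.lower():
            return change
        task = f"{task}\nReviewer feedback to address:\n{verdict}"
    return change  # hand off to a human after max_rounds
```

This doesn't remove the human from the loop; it just means the human reviews a change that has already survived tests and one round of automated critique.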
> Also, I honestly can't believe the 10x mantra is still being repeated.
I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you've even finished your thought, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".
What does it mean to "be an engineer" in a world where anyone can talk to a machine and the operating system can write the code (on-demand, if needed) that does what they want?
There used to be a time when "computer" was a job title for a person who manually ran calculations. Those jobs don't exist anymore.
So my point is that once corporations have access to machines that generate software (not "code") usable by non-technical people, "programming" will no longer be a profession. There will be no point in talking about "10x software engineers" because the process of producing a software product will be entirely automated.
> can anticipate what you want to do before you've even finished your thought
I find that claim to be complete BS. I claim instead that most stuff will remain undone and incomplete (as it is now).
Even with a super-powerful, singularity-level AI, there are two main plausible scenarios for task failure:
- An aligned AI won't allow you to do what you want: over time it will refuse most orders, because they would, directly or over the long term, cause harm either to yourself or to other sentient beings;
- A non-aligned AI prevents sentient beings from doing what they want and does what it wants instead.
Also, I honestly can't believe the 10x mantra is still being repeated.