Hacker News | Eliah_Lakhin's comments

I don't quite understand what "language extrapolation" actually means.

Personally, I don't use AI because I like to program myself. There are many other reasons, but this simple one dominates.


Programming with the use of AI can still be seen as programming but at a different level of abstraction.

Depending on your "job to be done", you will prefer one level of abstraction or another.

Example from before AI: I've always hated JavaScript frameworks like nest.js because they were doing too much magic under the hood. But for a simple CRUD application in an MVP, I might use one.


From a purely utilitarian position maybe you are right, but my point was about arts and entertainment. Using content generation tools might be useful in certain situations, but it's not joyful when the robots do the job for you.

Imagine that you purchase a video game, but then use an LLM to play this game for you. Another example is chess. Stockfish is more efficient than most chess players, but playing chess using a programming assistant (even a little bit) is no longer a sporting competition.

I also agree that not everyone likes programming; some see it just as a job to be done.


Now I see the arts and entertainment point, and it makes sense!

But now I'm wondering why programming with AI doesn't feel like art (which is how it feels to me at this point).

For me, I'd say the answer is related to feeling in control. Either I don't know enough about using AI to code to feel I'm in control, or the tools are not at the level I need to feel like I'm in control.


Language extrapolation is extrapolation applied to language.

Example: you try to come up with the name for the most promising next tool (as in, a concept), as you personally judge it should best be named. (If you know how to talk, you should have no problem with this.) Then you ask the AI to explain that term. If the AI misses significantly, you make your term more precise. Pro tip: this is inspired by my learning style. An example can be seen in my latest submission.

I like to program by hand myself too, but you can use AI when programming for others.


Exactly this. Doing programming as a hobby is very different from doing it from a money-making perspective, though the two aren't incompatible. After all, people do many things that are literally "useless" for fun (and most already know there is a better way to do practically anything they are doing, same as with cooking), so there is no reason this doesn't apply to programming. However, not using AI nowadays for anything business-related is counterintuitive.


what is the economic foundation of the entity/phenomenon "business"?


why do you think that it is counterintuitive?


Because I have myself 10-50x'd my productivity, and I have close to 20 years of experience in dev. I'm able to manage 50 repos at once (with heavy adversarial loops and many other automated procedures: e2e, regressions, and everything you can think of), and I've spent about $50K on tokens lately (plus many subscriptions). I also have a grid of 8 monitors where windows are automatically switched to ask me questions non-stop. I'm a "bot", yes :p, but I've never been able to even dream of managing projects this big while maintaining consistency and quality.

I say it's counterintuitive in the sense that, for a business, the sheer output now possible (not just in code, but in marketing automation, documentation, testing, and prototyping) brings a competitive advantage that just can't be matched. I understand that you can still make money without AI, but if you can, you can also make so much more money with AI. And it's not only about the money: you can solo-handle complex projects (assuming you have the mental bandwidth to manage 30 threads in parallel).

I completely respect the OP's position for personal projects and hobbyist work, but I don't see how you can make more money without using AI. Quality of code is rarely a success factor in business (and even there, I'd argue that 20 adversarial rounds with 10 models is hard to beat).


Well, I can ask you the same question. I don't see how you can keep making money using AI in the foreseeable future either. Even if it is possible today, once a critical mass of people masters this technology, the opportunity is gone.

In the past there was a "human computer" profession[1] where people earned money by calculating by hand. Eventually they were replaced by actual computers. But computers were rare and expensive; not everyone could afford one. So there was a period when big business had an opportunity. That opportunity was also gone once home computers became widespread.

Businesses may have an advantage in selling products made with AI. But I foresee that the value of these products will eventually degrade, because everyone will manage to do more or less the same at home. Ordinary software will no longer be a thing you can sell or advertise easily.

You would have to try harder to produce something unordinary. And even if you do, it will be very easy to replicate for competitors using AI.

I think that, maybe not today but in a few years, the Internet will be full of generated content that most users won't trust and mostly won't be interested in. And there will be no novel ideas available to the public, because any know-how will be carefully hidden.

To conclude, it's not just about "code quality". If you are making something unusual by hand, something the AI cannot imagine, you have very serious reasons not to disclose it in this new reality.

[1]: https://en.wikipedia.org/wiki/Computer_(occupation)


I actually agree with you. The future is uncertain and even worrisome in many ways (and bleak :/ goodbye, Internet the way it was), but the 1-2-3 year window now is real, where practically anyone with motivation & skills can succeed.


I upvoted your comment. I share your view and just wanted to say you're not the only one who thinks this way.


There are dozens of us. Dozens!


> As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff.

Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.

The language semantics force you to write programs in a way that is most convenient for the optimizing compiler.

Not always, but in many cases, it's likely that a program written in Rust will be highly and deeply optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you'd always have to think about what the compiler is doing under the hood.

It's true that neither safety nor performance is critical for many applications, but from this perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.


A huge part of China's success stems from the fact that Nixon opened up U.S. markets to China, along with U.S. investment capital. It was a key strategic decision made by the United States to counter Soviet influence in Asia. China's reforms were merely an adaptation to the new opportunities that arose from this shift.

Another important factor was China's significantly larger population compared to the Soviet Union, combined with notably lower labor costs. All these factors eventually propelled the country to great prosperity. Without them, I think China today wouldn't look much different from any other East Asian country.

The Soviets simply didn't have such opportunities. Leaving aside the fact that Western countries never offered them a similar deal, Soviet labor simply couldn't match the industrial productivity enabled by the cheap workforce of East Asian countries.

The USSR had vast land and abundant natural resources, but its population density was relatively low. Additionally, it already possessed advanced technologies and a well-developed industrial base. From the U.S. perspective, such a country looked more like a potential (and actual) competitor rather than just another member of the Western economic system.

I'm not a big fan of a planned economy. And I believe that the lack of social freedoms and democratic institutions, typical of Western countries, was a major factor in the Soviet collapse. But regardless of the decisions and reforms Soviet authorities could have made after World War II, I think the country was doomed either way.

The Soviet Union simply didn't have a large enough population to effectively develop such an enormous landmass. After WWII, significant male losses and the effects of the second demographic transition led to continuous population decline. The only reasonable course of action would have been to relinquish part of its global influence and territory, which it eventually did — but perhaps too late. However, the authorities of any country rarely want to give up power, and the Soviets were no exception.

As for turning points, I don't think it was the NEP. More likely, the Communist (October) Revolution itself was the crucial historical moment. The Russian Empire was a relatively promising state, evolving in the right direction. It was gradually building democratic institutions and transitioning to a liberal economy. Its industrial development was progressing similarly to other European countries — perhaps with some lag, but still moving forward.

Perhaps the real turning point in Russian history was when radicals, driven by controversial economic and social ideas, inherited a wealthy country and used its potential for large-scale social experiments.


>After WWII, significant male losses and the effects of the second demographic transition led to continuous population decline.

Wait, but this simply isn't true. The USSR's population grew continuously until the '90s.

There was a "war echo" in this growth, but it wasn't declining.

You can't really beat a 3-year maternity leave and free kindergarten from the age of 3.

All that really worked, and being a single mother was normalized too, because of the distorted gender ratio.


What really worked was that people were not invested in their jobs. It was a dead end: you don't care, you don't have ambitions, and you try to go home as soon as possible. The 3-year maternity leave stayed after communism ended, and the free kindergarten stayed. But capitalism made work feel less like a "sleepy dead end" for everyone.

Otherwise said, people stopped having kids when economic opportunities opened up.


I actually lived through the whole thing, and I don't agree with your take. The hard drop in the '90s has more to do with the Soviet system collapsing and taking away the floor than with capitalism opening up the top. It was a time of struggle where some more adaptable people reaped the benefits, but for everybody else it was a very tough time. Instead of working 9 to 5 and going home, a lot of people had to work two or three jobs, drastically change their lifestyle, and take on the burden of caring for their parents because the state basically defaulted on their pensions.

If you think that people didn't have ambitions, that's also not true. You could still climb the ladder inside the system; it's just that the monetary reward was capped at middle-class comfort instead of being unbelievably rich. You could always be the head of something, or go the party route to have more nice things, or have friends who would bring stuff from the West.


Childbirth did not go up all that much even as economic conditions got better. The largest driver of children under communism was really people checked out of public and professional life, retreating to hobbies and family.

Your options to climb the ladder inside the system were severely limited, especially if you were young. Working under communism was sleepy, and having ambitions was seen as negative; just about the only way to use your ambitions was to be political, denounce whoever you had to denounce, and do the dirty work for the state.

> You could always be a head of something or go by the party route to have more nice things or have friends to bring stuff from the west too.

Of course, that implies comfort with a lot of dirt and amorality. Yep, some people did that. Everybody knew what being in the party meant, and some of these people are still the sleaziest politicians out there. Not everyone had the stomach for that.

Plus, you could not do this if you were from the wrong family.


> A huge part of China's success stems from the fact that Nixon opened up U.S. markets to China, along with U.S. investment capital.

Nixon arrived a full year after Gough Whitlam established Australian-Chinese trade, which prompted Henry Kissinger to make a secret visit to China to negotiate the terms for Nixon's mission.


> develop such an enormous landmass

Why would that matter? Australia and Canada are doing reasonably well regardless of the overwhelming majority of their territories being empty "wasteland" (economically).

If anything relatively low population coupled with huge amounts of natural resources per capita is one of the best positions to be in.

>relinquish part of its global influence and territory

By and large, they relinquished the more highly populated areas, not the empty ones.


It works as long as a country primarily spends its income on internal development. Today's Russia is a relatively prosperous country with high living standards, at least in big cities. Perhaps these standards are even higher than in some EU countries. And they are certainly higher than at any point in Soviet history.

However, to a large extent, this is the result of cutting expenditures on projecting Soviet influence abroad. The Soviet Union had enormous spending on subsidizing friendly regimes and their economies around the world, as well as maintaining a military presence. The same applies to some former Soviet republics that the USSR had to subsidize for decades.

I think we are observing similar processes in the United States today. They are attempting to cut spending and perhaps even reduce their military presence simply because they cannot afford it in the long term without sacrificing their own prosperity.


> What does the developer's experience with incremental parsing feel like?

It's essentially the experience most of us already have when using Visual Studio, IntelliJ, or any modern IDE on a daily basis.

The term "incremental parsing" might be a bit misleading. A more accurate (though wordier) term would be a "stateful parser capable of reparsing the text in parts". The core idea is that you can write text seamlessly while the editor dynamically updates local fragments of its internal representation (usually a syntax tree) in real time around the characters you're typing.

An incremental parser is one of the key components that enable modern code editors to stay responsive. It allows the editor to keep its internal syntax tree synchronized with the user's edits without needing to reparse the entire project on every keystroke. This stateful approach contrasts with stateless compilers that reparse the entire project from scratch.

This continuous (or incremental) patching of the syntax tree is what enables modern IDEs to provide features like real-time code completion, semantic highlighting, and error detection. Essentially, while you focus on writing code, the editor is constantly maintaining and updating a structural representation of your program behind the scenes.

The article's author suggests an alternative idea: instead of reparsing the syntax tree incrementally, the programmer would directly edit the syntax tree itself. In other words, you would be working with the program's structure rather than its raw textual representation.

This approach could simplify the development of code editors. The editor would primarily need to offer a GUI for tree structure editing, which might still appear as flat text for usability but would fundamentally involve structural interactions.

Whether this approach improves the end-user experience is hard to say. It feels akin to graphical programming languages, which already have a niche (e.g., visual scripting in game engines). However, the challenge lies in the interface.

The input device (the keyboard) is designed for natural text input and has limitations when it comes to efficiently interacting with structural data. In theory, these hurdles could be overcome with time, but for now, the bottleneck is mostly a question of UI/UX design. And as of today, we lack a clear, efficient approach to tackle this problem.


This is likely the first gamedev project written entirely in Rust and Vulkan to achieve significant financial success on Steam.

I'm genuinely proud of the authors — they've set an inspiring example and given us hope for a bright future where the Rust ecosystem serves as a foundation for unique and creative game development projects.


The Gnorp Apologue (mentioned in another thread here) was also notably written in Rust. https://store.steampowered.com/news/app/1473350?emclan=10358...


Lovely game, would recommend.

I play it in the background when chatting with friends on weekly game nights!


Beat me to it. There are some more titles, but honestly they're not all that memorable :'). Gnorp is worth the money, though!


It'll be interesting to see if its success leads to it being ported to consoles. To my knowledge Rust has yet to ship in a console game, obviously there's a number of roadblocks to making that happen but the biggest one is that Sony apparently has a strict approval process for new languages/compilers to be used on their platforms.


Yes. It's a good success story. And a cute little game.


Using the term "open source" for every project published in source code form might not be ideal — not just because of the "zealots", but also because it may not serve the author’s best interests.

The term "open source" has a well-established reputation as "free as in beer", whether we like it or not. So why attach such a label to a commercial product?

Commercial software isn't inherently a bad thing. In fact, it's even better if the author or business can afford to publish it in source code form, making their services more transparent to end users.

As for the term "source available", it isn't as well-established as "open source". Its meaning may not be clear to the audience, and there's a certain lack of trust associated with it. However, this could change over time if more projects identify as "source available" and maintain clear and honest distribution and usage policies.


The current state of web development engineering is largely the result of how startup economics have functioned over the past decade.

A startup's market value is often closely tied to its number of employees. From an investor's perspective, a company with 1,000 employees is typically valued much higher than a small team of 37 programmers — regardless of the revenue generated per employee, or even if the company isn’t generating revenue at all. This is largely because interest rates remained very low for a long time, making it reasonable to borrow investment funds for promising companies with large staffs.

However, those employees need to be kept busy with something that appears useful, at least in theory. I believe this is one of the primary reasons we see such complex solutions for relatively simple tasks, which sometimes might not require a large team of advanced web developers or sophisticated technologies at all.


I'm currently unemployed, and my last job ended over two years ago, so my experience is more related to the pre-AI boom era.

Back then, I often felt that the software products we were developing could have been created by much smaller teams of experienced programmers, or even by a single programmer. I'm referring specifically to direct programming, excluding management, QA, and devops. My professional experience is primarily with startups and small companies, but I believe this idea could extend to some larger products as well.

This raises the question of whether I, as a programmer, was productive enough. I believe that my colleagues and I were quite productive, and we performed our daily tasks honestly and fairly. However, I feel that our responsibilities were artificially limited. I think my productivity could have been much higher if my responsibilities within the company had been expanded. At least, this is what my personal, non-commercial experience with my pet projects in my spare time suggests.

I understand that a pet project is not the same as a business solution, but I believe the core issue is not that AI affects programmers' productivity, but that AI has helped management realize that increasing the number of programmers does not necessarily improve product quality.

I also found Josh Christiane's video on this topic very insightful: https://www.youtube.com/watch?v=hAwtrJlBVJY


You can publish your project under your own licensing terms. In your license, you can prohibit the creation of derivative works based on your source code, except for building executables for personal use, provided your clients obtain a commercial license from you. This way, GitHub users can audit and fork your repository, but they won't be able to sell your software under their own terms.

I'm not a lawyer, and it's generally a good idea to consult a specialist when drafting licensing terms. However, in my personal opinion, it's often better to draft a project-specific license yourself to start, rather than using a popular open-source license, most of which are not aligned with typical commercialization goals.


That's an interesting idea, but wouldn't it be seen as a bit of a red flag? I thought most devs, if they see a license other than MIT, will frown upon it :)


On a more general note, it's often challenging to sell anything to the software developer community, regardless of the licensing terms you choose. I believe you'll gain commercial traction only if your product is closer to the real market. In this sense, using the MIT license may expose your project to the risks you mentioned.

As for community feedback, it doesn't necessarily have to be negative. Recently, I published my project under a non-standard license and received generally positive feedback[1], despite my project being in a very niche field.

[1]: https://news.ycombinator.com/item?id=40747845


makes sense, thanks a lot for your advice.

