I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do all of that as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, and to siphon even more wealth from those who have little to those who have a lot.
Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to produce tasty soybean feed. They forget that it was not for the soybean feed that their owner bought the machine, and that their owner expects a return on that investment.
> What matters in the end is that this tech is not here to empower the common person (although it could).
How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers!
A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others.
But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years.
More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on and I think it's best to remain optimistic, keep an open mind, and make the most of it.
It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
$20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."
Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.
That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.
> More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
Similar things happened with the adoption of computers in the workplace. Perhaps there's a case for banning all digital technology and hiring typists and other assistants to perform the work using typewriters and mechanical calculators? There would certainly be less multitasking when you have 8 hours' worth of documents to retype and file/mail. Perhaps there would be less overtime when your boss can see you have a high workload by the state of the papers piled upon your desk. Or maybe we can solve these problems in a different way.
> It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
Can you share those studies? I'm pretty skeptical of this effect. I find that AI has made my job easier and less stressful.
In general, I think your attitude isn't realistic; it's just general pessimism about the world ("everything new is bad") that is basically unfounded.
> How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person".
Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see.
I agree that AI was not the _actual_ reason; however, it did allow them to do massive layoffs without admitting they were doing poorly, and without taking a massive hit to their stock price.
> I think it's mostly due to pandemic overhiring and higher interest rates
It's not because of pandemic overhiring; if that were true, the layoffs in 2021-2022 would have handled it. It's 2026. The people getting laid off (on average) weren't even at these companies before the pandemic; they were hired in ~2023 (average tenure at a tech company is ~3 years).
It's not because of AI either. Nobody is replacing jobs with AI; AI can't do anyone's job.
It's not because of interest rates. People hired like crazy when interest rates were this high in the oughts.
It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine. It's not true, but the downsides come later and the cash influx comes now, so they're doing it anyway.
> It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine.
I agreed with you up to this point. Twitter largely operated in the red for its entire existence prior to his "restructuring" to make it leaner and profitable. In my opinion, Twitter went to shit when the incentive for creating engagement switched from gaining social capital to gaining... erm... actual capital. The laissez-faire attitude toward allowing fairly terrible behavior on there gave it a PR black eye with advertisers, which probably didn't help either.
If I had to guess what happened with Block (and that's what we're all doing, guessing): a CEO's job is to make the line go up, and saying you introduced tools to increase productivity with half the staff (especially if you're overstaffed) seems to me a pretty easy way to do that. I saw someone on here refer to it as "Vibe CEOing", which I think is pretty on point. Again, just my opinion/guess.
Yeah, and for good reason. The invention of the light bulb meant factory owners could force workers into 16-hour shifts. The main beneficiaries of new tech were always the capital owners. Workers had to literally fight and die for the 8-hour workday and the 5-day workweek.
This is still going on today: the massive gains from automation are being hoarded by the wealthy capital owners, while workers struggle to make ends meet.
No, it hasn't. Work mechanisation throughout history has resulted in a shift from manual labour to labour that's more intellectual in nature. Modern AI believers pretend that it will soon take over those jobs as well.
This would essentially bring us to a crossroads between, on one hand, a utopia with UBI where people don't need to work (because their labour is unnecessary), and, on the other, a dystopia where a few technocratic "lords" own the means of work automation and rule over a submissive world.
I don't think it takes a genius to guess where this is heading in our current political climate.
Personally, I'm not scared about any of that, because I don't believe LLMs to be very potent as an AI tool. Robotic militias (remotely controlled by BI or AI) seem a much more tangible threat.
> It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position
Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”
It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside of that group.
I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that allow me to spend my time on things I care about (both within the business and outside it, like spending time with my kids).
It is, fortunately and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians.
> It's also worth noting that the "our" in that sentence is just SWEs
It isn't; it's just a matter of seeing ahead of the curve. Delegating stuff to AI and agents by necessity leads to atrophy of the skills being delegated. Using AI to write code reduces people's capability to write code. Using AI for decision-making reduces capability for making decisions. Using AI for math reduces capability for doing math. Using AI to formulate opinions reduces capability to formulate opinions. Using AI to write summaries reduces capability to summarize. And so on. And, by nature, less capability means less agency.
Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
Not to mention utilizing AI for control, spying, surveillance and coercion. Do I need to explain how control is opposed to agency?
I'll grant that it does extend beyond SWEs, but whether AI atrophies skills is entirely up to the user.
I used to use a bookkeeper, but I got Claude a QuickBooks API key and have had it doing my books since then. I give it the same inputs and it generates all the various journal entries, etc. that I need. The difference between using it and my bookkeeper is I can ask it all kinds of questions about why it's doing things and how bookkeeping conventions work. It's much better at explaining than my bookkeeper and also doesn't charge me by the hour to answer. I've learned more about bookkeeping in the past month than in my entire life prior - very much the opposite of skill atrophy.
Claude does a bunch of low-skill tasks in my business, like copying numbers from reports in different systems into a centralized Google Sheet. My muscle memory for running reports and pulling out the info I want has certainly atrophied, but who cares? It was a skill I used because I needed the outcome, not because the skill itself was valuable.
You say that using AI reduces all these skills as though that's an unavoidable outcome over which people have no control, but it's not. You can mindlessly hand tasks off to AI, or you can engage with it as an expert and learn something. In many cases the former is fine. Before AI ever existed, you saw the same thing as people progressed in their careers. The investment banking analyst gets promoted a few times and suddenly her skill at making slide decks has atrophied, because she's delegating that to analysts. That's a desirable outcome, not a tragedy.
Less capability doesn't necessarily mean less agency. If you choose to delegate a task you don't want to do so you can focus on other things, then you are becoming less capable at that skill precisely because you are exercising agency.
Now in fairness I get that I am very lucky in that I have full control of when and how I use AI, while others are going to be forced to use it in order to keep up with peers. But that's the way technology has always been - people who decided they didn't want to move from a typewriter to a word processor couldn't keep up and got left behind. The world changes, and we're forced to adapt to it. You can't go back, but within the current technological paradigm there remains plenty of agency to be had.
> but whether AI atrophies skills is entirely up to the user
The thing with society is that we cannot simply rely on the self-discipline and self-control of individuals. For the same reason we have a universal, legally enforced education system. We would still live in a mostly illiterate society if people were not forced to learn, or to send their children to school.
Analogies to past inventions are limited by the fact that AI doesn't automate physical labor, hard or light - it automates, or at least its overlords claim it automates, a lot of cognitive and creative labor. Thinking itself, at least in some of its aspects.
From a sociological and political perspective there is a huge difference between the majority of the population losing the capability to forge swords or sew dresses by hand, and losing the capability to formulate coherent opinions and communicate them.
> It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
Lmfao, LLMs can barely count rows in a spreadsheet accurately; this is just batshit crazy.
edit: also, the solution here isn't that everyone writes their own software (based on open source code available on the internet, no doubt); we just use that open source software, and people learn to code and improve it themselves instead of off-loading it to a machine
This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s.
LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet".
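For what it's worth, the kind of script that prompt yields is trivial to inspect and re-run. Here's a hand-written sketch of roughly what you'd expect back, assuming the spreadsheet is exported as CSV and "data.csv" is a placeholder filename:

    import csv

    # Count the data rows in a CSV export, skipping the header
    # and ignoring completely empty rows.
    with open("data.csv", newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row, if present
        row_count = sum(1 for row in reader if any(cell.strip() for cell in row))

    print(f"{row_count} data rows")

That's the difference in a nutshell: the model's in-head count can be silently wrong, but a ten-line script either works or fails visibly.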
Yes, for some definition of OS. It could build a DOS-like or other TUI, or a list of installed apps that you pick from. Devices are built on specifications, so that's all possible. It could define and refine a system API as it goes. General utilities like file management are basically a list of objects with actions attached. And so on... the more that is rigidly specified, the better it will do.
It'll fail miserably at making it human-friendly, though, and attempt to pilfer existing popular designs. If it builds a GUI, it'd be a horrible mashup of Windows 7/8/10/11, various versions of OSX / MacOS, iOS, and Android. It won't 'get' the difference between desktop, laptop, mobile, or tablet. It might apply HIG rules, but that would end up with a clone at best.
In short, it would most likely make something technically passable but nightmarish to use.
Given 100 years though? 100 years ago we barely had vacuum tubes and airplanes.
Given a century, the only unreasonable part is oneshotting it with no details, context, or follow-up questions. If you tell Linus Torvalds "write a python script that generates an OS", his response won't be the script, it'll be "who are you and how did you get into my house".
Considering how simple "an OS" can be, yes, and in the 2020s.
If you're expecting OSX, AI will certainly be able to make that and better "in the next 100 years". Though perhaps not oneshotting off something as vague as "make an OS" without followup questions about target architecture and desired features.
JFYI, LLMs still can't solve 7x8, and quite possibly never will. A more rudimentary text processor shoves that into a calculator for consumption by the LLM. There's a lot going on behind the scenes to keep the illusion flying, and that lot is a patchwork of conventional CS techniques that have nothing to do with cutting-edge research.
To many interested in actual AI research, LLMs are known to be the very flawed and limiting technique they are, and the increasing narrative disconnect between this and the table stakes - where they are front and center at every AI shop, carrying a big chunk of the global GDP on their back - is annoying and borderline scary.
This is false. You can run a small open-weights model in ollama and check for yourself that it can multiply three-digit numbers correctly without having access to any tools. There's even quite a bit of interpretability research into how exactly LLMs multiply numbers under the hood. [1]
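This is easy to try yourself. A minimal sketch using the ollama Python client - the model name is just an example, and it assumes the ollama daemon is running with that model already pulled:

    # Ask a small local model, with no tools attached, to multiply
    # two three-digit numbers. "llama3.2" is a placeholder for
    # whatever open-weights model you have pulled.
    import ollama

    resp = ollama.generate(
        model="llama3.2",
        prompt="Compute 347 * 862. Reply with only the number.",
    )
    print(resp["response"])  # the correct product is 299114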
When an LLM does have access to an appropriate tool, it's trained to use the tool* instead of wasting hundreds of tokens on drudgery. If that's enough to make you think of them as a "flawed and limiting technique", consider instead evaluating them on capabilities there aren't any tools for, like theorem proving.
* Which, incidentally, I wouldn't describe as invoking a "more rudimentary text processor" - it's still the LLM that generates the text of the tool call.
> Heck, one company built a (prototype but functional) web browser
No, they built something which claimed to be a web browser but which didn't even compile. Every time someone says "look an LLM did this impressive sounding thing" it has turned out to be some kind of fraud. So yeah, the idea that these slop machines could build an OS is insane.
I personally observe AI creating phenomenally good code, much better than I can write. At insane speed, with minimal oversight. And today's AI is the worst we will ever have.
Progress in AI can easily be measured by the speed at which the goalposts move - from "it can't count" to "yeah, but the entire browser it wrote didn't compile in the CI pipeline".
What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime.
Look, from the point of view of a person outside the US, you are all fascists, "democrats" and trumpists alike. Don't take this as trolling, but as a sincere opinion (I don't care about your internal brawls, I care about what you do to others).
But on the flip side, the first danger with AI is people.
Over the longer term it will look like this. The rich 'win' the world by using AI to enslave the rest of mankind and claim ownership over everything. This will suck and a lot of us will die.
The problem is that this doesn't solve the greed that caused the problem in the first place. The world will still be limited in some resource or other, which will end with the rich in a dick-measuring contest, and to win that contest they will put more and more power into AI as they connive and fight each other. Eventually the AI has enough power that it kills us all, intentionally or not.
We'll achieve nearly unlimited capability long before we solve the problem of unlimited greed and that will spell our end.
This is entirely assumptions about a future that has not happened.
I've worked in "AI" for 20 years, through 2 winters, and run an alignment shop and AIRT... The problem is people. People will use the problem as a scapegoat.
Dinosaurs lived 100 million years, before they didn't.
And walls between France and Germany were effective, until they weren't.
Hell, even "people are the problem" doesn't work well for things like Moloch problems. Which people? The problem can no longer be pinned on any individual; it calls for a super-organizational response. Once you have an issue that is abstracted from its base components, any agent capable of parsing the abstraction can be part of influencing it and become part of Moloch.
At some point, if most people lose their jobs, you have no market to sell your services to. So either new jobs have to be created to keep the capitalism machine running, or you have to provide for the needs of every human being from whatever you're doing with your AI. Otherwise, a lot of hungry people revolt and you have violence against these businesses.
I think new jobs will be created, because AI is always limited by hardware and its current capabilities. Businesses, in order to compete, want to do things their competitors aren't currently doing. Those business needs always go beyond the current technological capabilities until the tech catches up, and then it's lather, rinse, repeat.
The individual has never had as much ability to take on large projects as they do now. They’ve never been able to learn as easily as they can now.
> to make it easier to fire us
As of now, the technology increases productivity for the average user. The companies that take advantage of that and expand their offerings will outperform the ones that simply replace workers and don't expand or improve their offerings.
More capable employees make companies more money in general. Productivity increases lead to richer societies and yes, even more jobs, just as it always has.
> It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, and to siphon even more wealth from those who have little to those who have a lot.
You could say this is the story of society: it makes us dependent on each other, reduces our agency, puts us in precarious positions (like WW2). But nobody would argue against society like that.
What happens here is that we become empowered by AI and gain some advantages which we immediately use and become dependent on, eventually not being able to function without them - like computers and even thermostats.
Has anyone thought about how the economy would operate without thermostats? No fridges, no data centers, no engines... they all need thermostats. We have lost some freedom by depending on them. But we have also gained.
The post I commented on was arguing that what sets Macs apart from other options with 8 GB of RAM, and what makes them more expensive, is that they are seen as a status symbol. I made a point against that by mentioning two areas in which Macs are truly superior.
> and the worst have died away (things like goto and classical inheritance)
What's so wrong with classical inheritance, and how has it died away while still being well supported in most of today's popular programming languages (Python, C++, Java, C#, TS, Swift)?
In a sense, it's like global variables. Just about every complex program [1] has a few of them, so languages have to support them, but you shouldn't have too many of them, and people tend to say "don't use globals".
[1] some languages, such as classical Java, made it technically impossible to create them, but you can effectively create one with
    class Foo {
        public static int bar;
    }
If you're opposed to that, you'll end up making the field non-static and introducing a singleton instance of "Foo", again effectively creating a global.
In some Java circles, programmers will also wrap access to that field in getters and setters, and then use annotations to generate those methods, but that doesn’t make such fields non-global.
Who are the Wirths, Dijkstras, Hoares, McCarthys and Kays of today? I mean - who represents the current generation of such thinkers? Genuinely asking. Most of the stuff I see here and in other places consists of blog posts, videos and rants by contemporary "dev influencers" and bloggers (some of them very skilled and capable, of course, often more so than I am), but I would like to be in touch with something more thoughtful and challenging.
Contemporary PL designers who have inspired my programming language design journey the most are people like Chris Granger (Eve), Jamie Brandon (Eve/Imp/others), Bret Victor (Dynamicland), Chris Lattner (Swift / Mojo), Simon Peyton Jones (GHC/Verse), Rich Hickey (Clojure), and Jonathan Edwards (Subtext). My favorite researcher is Amy J. Ko for her unique perspective on the nature of languages. Check out her language "Wordplay" which is very interesting.
There is a lot of good computer science, but the computer science community today is vastly larger than it was in the 1960s and 1970s when Dijkstra, Knuth, Wirth, and others became legends. There are so many subfields of CS, each with its own deep literature and legendary figures. It’s difficult to be a modern Dijkstra or Knuth due to these factors, though to be fair, it is an impressive feat for Dijkstra to be Dijkstra and for Knuth to be Knuth even in their heydays. It’s just easier to get famous in an upstart field compared to getting famous in a mature field.
I think there are two typical paths to widespread visibility across CS subfields: (1) publishing a widely-adopted textbook, and (2) writing commonly-used software. For example, many computer scientists know about Patterson and Hennessy due to their famous computer architecture textbooks, and many computer scientists know about people like Jeff Dean due to their software.
Reading more academically-oriented literature such as the ACM’s monthly periodical “Communications of the ACM” is also a good way to get acquainted with the latest developments of computer science.
I can't claim to be equal to the greats, but I do run a Discord server where I think and talk a lot about both the philosophy and practice of language design while building tools that I hope will change the state of the art: https://discord.gg/NfMNyYN6cX
very hot and edgy take: theoretical CS is vastly overrated and useless. as someone who actively studied the field, worked on contemporary CPU archs and still does some casual PL research - aside from VERY FEW instances of theoretical CS about graphs/algos, there has been little to zero impact on our practical developments in the overall field since the 80s. all the modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funds into garbage papers. most deep CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis and I can clearly see it from the CS perspective
Well, I think there is something to it. Computers were at some point newly invented, so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real-life mines, at some point they get depleted, and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running, and so does the production of PhDs.
You made a preposterous statement, got called out, and are now making excuses.
Anybody who claims to have studied "Theoretical Computer Science" can/will never make the statements that you did (and that too in a thread about the achievements of Niklaus Wirth, who was one of the most "practical" of "theoretical computer scientists"!).
I assume that you are talking about modern "theoretical CS", because among the "theoretical CS" papers from the fifties, sixties, seventies, and even some that are more recent, I have found a lot that remain very valuable. And I have seen a lot of modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of ancient research results that were well known in the past.
I especially hate those who attempt to design new programming languages today but demonstrate a complete lack of awareness of the history of programming languages, introducing into their languages a lot of design errors which had been discussed decades ago and for which good solutions had been found at that time - but those solutions were implemented in languages that never reached the popularity of C and its descendants, so only a few know about them today.
If you had really followed the research on type systems and seen how it *factually* intersects with practical reality, you wouldn't joke about it. What they do in "research" is bizarre nonsense, and sane implementations (only slightly grounded in formalisms) are what actually get used.
I do, and I hope that one day stuff like dependent types and formal proofs are everyday tools, alongside our AI masters, which also don't use any learnings from scientific research.
> There is hope that with AI we get to better tested, better written, better verified software.
And that is one thing we surely won't get.
This tech, in a different world, could be empowering common people and taking some weight off their shoulders. But in this world its purpose is quite the opposite.
But it is not something new that came with AI, even if AI is the most recent and most visible symptom of the sickness. We keep buying tons of useless crap and converting it into tons of trash. We waste tremendous amounts of energy on the most trivial whims. Frugality was never the dominant idea.
As far as I can tell after a quick Google, you can't share your Qt UI with the browser version of your app. Considering that "lite" browser-based versions of apps are a very common funnel to a more featureful desktop version, it makes sense to just use the UI tools that already work and provide a common experience everywhere.
The same search incidentally turned up that Qt requires a paid license for commercial projects, which is surprising to me and obviously makes it an even less attractive choice than Electron. Being less useful and costing more isn't a great combo.
> you can't share your Qt UI with the browser version of your app
You can with WASM (but you shouldn't).
> Qt requires a paid license for commercial projects
It doesn't; it requires a paid license if you don't want to abide by the (L)GPL license, which should be a fair deal, right? You want to get paid for your closed-source product, so you should have no reservations about paying for their product that enables you to create yours, right? Or is it "money for me, but not for thee"?
> Being less useful and costing more isn't a great combo.
Very nice, but now explain why you are talking about using Qt to create apps, whereas the grandparent talks about the experience of using apps created with Qt.
I looked up the WASM Qt target and it renders to a canvas, which hampers accessibility. The docs even call out that this approach barely works for screen readers [0], and that it provides partial support by creating hidden DOM elements. This creates a branch of differing behavior between your desktop and browser app that doesn't have to exist at all with Electron.
It should go without saying that the requirements of the LGPL license are less attractive than the MIT one Electron has; fairness doesn't really come into it. Beyond the licensing hurdles that Qt devotes multiple pages of its website to explaining, they also gate commercial features such as "3D and graphs capabilities" [1] behind the paid license, which are more use cases that are thoroughly covered by more permissively licensed web projects that already work everywhere.
On your last point I'm completely lost; it's late here so it might be me but I'm not sure what distinction you're making. I guess I interpreted dmix' comment generally to be about the process of producing software with either approach given that my comment above was asking for details on alternatives from the perspective of a developer and not a user. I don't have any personal beef with using apps that are written with Qt.
I do frontend work, so I struggle to get over how bad most Qt GUIs are. They are far out of date compared to GNOME or macOS in a lot of the small widget details and menus.
Plus I use a Mac these days, and Qt apps just never looked right on that platform.