The lowest-threshold nuclear fusion reaction, deuterium–tritium (D–T), used by ITER and Commonwealth Fusion Systems, releases up to 80% of its energy in the form of neutrons. These designs have to convert the neutrons' energy to electricity indirectly, via heat.
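For reference, that ~80% figure is just the standard energy split of the reaction (textbook values, nothing specific to those projects):

    \mathrm{D} + \mathrm{T} \;\to\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}),
    \qquad \frac{14.1}{17.6} \approx 0.80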
Since it is simpler to convert the energy of charged particles into electrical power than it is to convert energy from uncharged particles, an aneutronic reaction would be attractive for power systems. However, the conditions required to harness aneutronic fusion are much more extreme than those required for deuterium–tritium (D–T) fusion.
It's currently socially/politically unpalatable for authors to admit superintelligent AI is a possibility. I frequent some writer forums. As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.
Folks working in software can more readily track the progress of frontier model performance.
I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.
Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.
It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.
My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmark, fuzzing, memory leak, etc.), self-assessments/code-reviews, adversarial AIs critiquing other AIs, etc., with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." ad nauseam.
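If it helps, here's a minimal sketch of the shape of that stack. Every function name is a hypothetical placeholder; wire in whatever test runner, fuzzer, and critic model you actually use:

    # Toy sketch of a "trust nothing the model says" gate.
    # All checks below are stand-ins; replace them with real tooling.

    def run_unit_tests(change: str) -> bool:       # placeholder for e.g. your pytest run
        return "def " in change                    # stand-in check only

    def run_fuzzer(change: str) -> bool:           # placeholder for a real fuzz pass
        return True                                # stand-in check only

    def critic_model_review(change: str) -> bool:  # placeholder: second AI critiques the first
        return "TODO" not in change                # stand-in check only

    CHECKS = [
        ("unit tests", run_unit_tests),
        ("fuzzing", run_fuzzer),
        ("adversarial AI review", critic_model_review),
    ]

    def gate(change: str) -> bool:
        """Run every independent control; none of them trusts the model's claims."""
        results = [(name, check(change)) for name, check in CHECKS]
        for name, ok in results:
            print(f"{name}: {'ok' if ok else 'FAILED'}")
        # The human stays the ultimate judge: this gates, it never auto-merges.
        return all(ok for _, ok in results)

    if __name__ == "__main__":
        print("merge candidate?", gate("def add(a, b):\n    return a + b\n"))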
BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.
Do you not... remember? The US life expectancy is 79 years. 7.9 years ago (a tenth of a lifetime) was late May 2018. The best LLM was... wait, there weren't any. There was ELMo, an embedding model. It wasn't just not smart at agentic coding, it wasn't even just not smart at writing code snippets, it wasn't even just not smart at answering questions of any kind, it wasn't even just not good at producing a coherent output, it wasn't even just not good at producing coherent sentences, it was _not even at the point where people thought unconstrained text output was a thing machines did_.
There is no step along the ladder which has remotely evidenced or supported the idea that the next step is going to be ten, twenty, a hundred times harder than the last, yet there has been a constant chorus of people singing at every moment, each moment wrong, that the next step is the one.
There's nothing I've seen that cannot be modeled as an asymptotic approach to highish human intelligence. Which makes sense, since it's essentially a parroting model, and the limit of that is, by definition, the same highish human intelligence. I don't think one can assume it's self-evident that it will push beyond that.
Put more succinctly: You can't win a race by following the leaders. Predicting the next token based on training input is literally "following" (plus some random variation).
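To make "following plus some random variation" concrete, here's a toy sketch of next-token sampling (a generic illustration, not any particular model's code): scores learned from the training data pick the likely continuations, and temperature adds the constrained randomness:

    import math, random

    def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
        """Toy sampler: softmax over learned scores, then a weighted random draw."""
        scaled = {tok: s / temperature for tok, s in logits.items()}
        m = max(scaled.values())
        weights = {tok: math.exp(s - m) for tok, s in scaled.items()}  # softmax numerators
        r = random.random() * sum(weights.values())
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # floating-point edge case

    # Made-up scores for the continuation of "You can't win a race by ..."
    print(sample_next_token({"following": 3.2, "leading": 1.1, "sleeping": -2.0}))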
Every day I deal with bad judgement calls from humans (sometimes my own!), but I don't screenshot them because it's not polite.
I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.
Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.
Personally I don't think coding agents will regress significantly as long as there is competitive pressure and independent benchmarks. Regulation is a risk because coding may be equivalent to general reasoning, and that might be limited for political / "safety" reasons.
Social media "regressed" from the point of view of users because the success metric from the network's point of view was value extraction per eyeball-minute. As long as there continue to be strong financial incentives to have the strongest coding model I think we'll see progress.
I think the best phrase from the article is "the current (admittedly impressive) statistical techniques". These statistical techniques are so impressive that they seem to cause some users to stop evaluating them and assume there's intelligence there. Landing at this conclusion is really lazy, but most people are really lazy. The societal damage from LLMs comes not from their intelligence, but from the public perception of their intelligence.
Similar to how damaging it is that people believe airplanes can “fly” when they in fact do nothing of the sort. After more than a hundred years of effort we have only managed to mimic flying yet billions of dollars continue to get poured into airplanes.
If you want me to admit that machines will never be conscious — that's fine — I just need you to admit that lots of humans are not conscious, then, either.
----
I have never had a better bookclub participant than an LLM — if becoming a great reader correlates with becoming a great writer, then no human can compare.
----
Michael Pollan recently released A World Appears [0], which explores consciousness through the minds of writers, scientists, philosophers, and plants (among other "inanimates").
I'm only on page 15, but his introduction explores distinctions between sentience, consciousness, and intelligence. Two of these are possible without brains – perhaps all three?
As usual, this author's footnotes keep you thinking: what is it like to be a sentient plant (e.g. the "chameleon vine" [1] which mimics its host leaf patterns/shape/color)?
As somebody in software, I find my fellow tech folks have the opposite bias.
There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.
The burden of proof is on the side making the grand prophecies.
The problem here, on both sides, is one of definitions. Intelligence is a big wibbly-wobbly mishmash of different things, and not all of them are required in any entity that is considered intelligent. Of course this leads to a lot of issues, with differing groups having wildly different definitions of intelligence.
So this leads them/you into another problem: are you saying that superintelligence isn't possible as they define it, or as you define it?
What makes you think a sustainable negative social/political trend laser focused on AI is even possible?
Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that it just doesn't work?
I met some of my best friends in VR, and had life-changing social interactions in some VR games on the Oculus platform... Especially EchoVR. I'm still a true believer in VR as a medium. It'll go mainstream one day, and there will be a Meta-sized company built upon it.
They had lightning in a bottle and somehow lost it. Honestly it might have been hiring Carmack that sent them down this path. Moving away from PCVR expanded the market, but it also killed the magic. Now the Quest store is a wasteland of what look like low-budget mobile apps.
There were two villagers and one werewolf. The werewolf started the round by saying "I'm the werewolf, vote for me," and then the game ended with a villager win.
Overnight he had successfully taken out the doctor. It made no sense in my opinion.
There were some funny bits, like one of the Anthropic models forgetting a rule, which led to everyone piling on and accusing him of being a werewolf. He wasn't a werewolf; he genuinely forgot the rule. Happens in nearly every human game of Werewolf.
Thanks for sharing that story; it's a great cautionary tale about apps that drift from “helpful” into prescriptive/social enforcement.
Dlog takes the opposite approach:
• Agency first: The Coach proposes; you decide. No phone trees, no task assignments, no social mechanics. You can ignore the guidance, skip it entirely, or set it to a low cadence.
• Explainability over prescriptions: Suggestions come with a brief “why” based on your SEM reports (which factors moved and by how much) plus charts, so you can sanity-check against lived experience (a simplified sketch of this is further down).
• Local-first privacy: Journals live on-device (EventKit). Scoring + SEM run locally. By default no raw text leaves the device; the Coach is optional, but using it does involve a bit of a leap of faith with OpenAI until I enable on-device LLMs in due course.
• No hidden incentives: No affiliate nudges, upsells, or growth hacks. It doesn’t decide what you wear/eat or route calls to you; it surfaces patterns (e.g., “energy dips after external calls”) so you can choose actions that fit your constraints.
If that story raised a specific worry (loss of autonomy, privacy creep, or community spam), does the above address it? I’m especially interested in whether the “why” behind recommendations is clear enough or needs to be tighter.
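To give a feel for what the “why” amounts to, here's a deliberately simplified toy sketch (not the production code, and the factor names are made up); it just ranks which factors moved the most between reports:

    # Toy illustration only: rank factor deltas between reports into a "why" string.
    def explain(previous: dict[str, float], current: dict[str, float], top_n: int = 2) -> str:
        deltas = {k: current[k] - previous.get(k, 0.0) for k in current}
        movers = sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
        parts = [f"{name} {'up' if d > 0 else 'down'} {abs(d):.1f}" for name, d in movers]
        return "Suggested because: " + ", ".join(parts)

    print(explain(
        previous={"sleep": 6.5, "external_calls": 2.0, "energy": 7.0},
        current={"sleep": 6.4, "external_calls": 5.0, "energy": 5.5},
    ))
    # -> Suggested because: external_calls up 3.0, energy down 1.5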
How is it possible that text-to-score/notation is lagging text-to-audio in music generation? Generating audio seems wildly more complicated!
Since you are working in this space, I wonder if you could comment on my pet theories for why this is true: 1. Not enough training data (scores not available for most songs), or 2. Difficulty with tokenization of musical notation vs. audio
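On theory 2, just to make the comparison concrete, here's a rough sketch of one naive event-based tokenization of notation (one of many possible schemes). A bar of melody is a handful of discrete tokens, whereas the equivalent audio is tens of thousands of raw samples (or hundreds of codec tokens) per second:

    # Toy event-based tokenization of symbolic music (one of many possible schemes).
    NOTE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    def tokenize(notes: list[tuple[str, int, float]]) -> list[str]:
        """notes: (pitch name, octave, duration in beats) -> flat token stream."""
        tokens = []
        for name, octave, beats in notes:
            midi = 12 * (octave + 1) + NOTE[name]   # e.g. C4 -> 60
            tokens += [f"PITCH_{midi}", f"DUR_{beats}"]
        return tokens

    # Opening of "Twinkle Twinkle": a whole bar is only eight notation tokens.
    print(tokenize([("C", 4, 1), ("C", 4, 1), ("G", 4, 1), ("G", 4, 1)]))
    # ['PITCH_60', 'DUR_1', 'PITCH_60', 'DUR_1', 'PITCH_67', 'DUR_1', 'PITCH_67', 'DUR_1']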
I started a PhD in 2017 studying neuroscience + ML. I thought studying the brain would help me understand ANNs better. I was wrong. Ended up applying ML to analyzing EEG, MRI and similar.
Is this because we're misapplying the analogy to ML? I.e., in an effort to communicate and understand ANNs, we "pretend" they're like a brain. Just like before, when we used "file retrieval systems" to understand the brain, or "water in a pipe" to understand electricity, which were also wrong. Analogies often only go so far, beyond which they do more harm than good.
What you're describing is endemic across HN (and tech, tbh). Lots of people on here "know" computers/programming/CS very well. They, naturally, tend to use analogies to computers/programming/CS when trying to explain or "think out loud" in their comments. That's fine. It's what they know. The common problem arises when people forget they're analogizing and begin to see their analogy as ontologically and conceptually identical to the thing they were making an analogy for. This requires a certain amount of ego, echo chambers, and self-valorization, so that they never have to face the actual issues with these analogies.
But as many comments here have pointed out, studying neuroscience, for example, usually makes those analogies seem painfully inadequate. The same is true in philosophy of mind, for example.
I'm sure that there exist people who get lost in the analogies. Practitioners are generally not confused that ANNs are simplifications of the brain. The questions are which simplifications are most relevant and whether complexities can be added that yield better results. My own research was about reintroducing absolute location. In standard ANNs, location is relative within a graph model of the network. In the real brain, blood vessels and other macrostructures deliver materials used to grow and modify the neurons, and these affect the network based on physical location. In fact, by adding these back in we bypassed the XOR limitation (i.e. Minsky's result that led to backpropagation). Concretely, we observed XOR over the inputs being learned within a Hopfield network using Hebbian learning modulated by a spatially modulated trophic factor.
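A toy illustration of the general mechanism (not the original research code, and all constants are invented): a plain Hebbian update whose per-synapse learning rate is scaled by a trophic factor that depends on each neuron's physical position, e.g. distance to a vessel:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    positions = rng.uniform(0, 1, size=(n, 2))        # physical locations of the neurons
    vessel = np.array([0.5, 0.5])                     # hypothetical trophic-factor source
    trophic = np.exp(-np.linalg.norm(positions - vessel, axis=1) / 0.3)  # decays with distance

    W = np.zeros((n, n))

    def hebbian_step(state, eta=0.1):
        """Hebbian outer-product update, scaled per synapse by the geometric
        mean of the two neurons' local trophic factor."""
        global W
        modulation = np.sqrt(np.outer(trophic, trophic))
        W += eta * modulation * np.outer(state, state)
        np.fill_diagonal(W, 0.0)                      # Hopfield-style: no self-connections

    for _ in range(20):
        pattern = rng.choice([-1.0, 1.0], size=n)     # store random bipolar patterns
        hebbian_step(pattern)

    print(np.round(W[:3, :3], 2))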
I believe, at its best, it’s an incomplete model (which may be enough for most people). But it leaves out important aspects, like magnetic field effects and probably a host of aspects from quantum theory.
Have we hit the limit of the analogy, or have we hit a limit in our understanding? Both neural networks and actual brains have behaviors that emerge from the interactions of smaller components. Neural networks have trivial connections compared to brains, but our understanding of the emergent behaviors seems very limited. To me, this is a sign not that the analogy has reached a point of breaking down, but that our tools aren't sufficient to work on even the trivial connections. I do expect the analogy will break at some point, but I'm not sure we have reached that point yet.
I hoped neuroscience, as a field, was on the cusp of a physical theory of learning and memory. I dreamt of an intersection of information theory, neuroscience, and ML.
Alas, state of the art in neuroscience / neural engineering is closer to bloodletting than a mechanistic theory of learning and memory.
You can slow down those particles against an electric field and harvest the energy as electricity directly. No steam turbine. No Carnot limit.
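Rough numbers, as an idealized sketch (ignoring losses and the spread in particle energies): a charged fusion product with kinetic energy E and charge q gives up E as electrical work if it is decelerated through a retarding potential V = E/q, so a ~3 MeV alpha (charge 2e) needs a collector grid at roughly 1.5 MV:

    E_{\mathrm{kin}} = qV \;\Rightarrow\; V = \frac{E_{\mathrm{kin}}}{q} \approx \frac{3\ \mathrm{MeV}}{2e} = 1.5\ \mathrm{MV}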