
> Full stop.

Why do people not edit out obvious sloppification, and still expect to have readers left?


Third line into the article: "But there’s one result in the benchmarks I keep coming back to."

I hear this sort of thing all the time now on YouTube from media/news personalities:

“And that’s the part nobody seems to be talking about.”

"And here's what keeps me up at night."

“This is where the story gets complicated.”

“Here’s the piece that doesn’t quite fit.”

“And this is where the usual explanation starts to break down.”

“Here’s what I can’t stop thinking about.”

“The part that should worry us is not the obvious one.”

“And that’s where the real problem begins.”

“But the more interesting question is the one no one is asking.”

“And this is where things stop being simple.”

It doesn't really worry me, but I think it's interesting that LLM-speak sounds so distinctive, and that these media personalities are so willing to be so obvious in reading out on TV what the LLM spat out.

I've never studied what LLMs say in depth, but it is interesting that my brain recognises the speech pattern so easily.


I think this kind of language predates widespread LLM use, and has been picked up from that kind of writing. It's a "and here's where it gets interesting" pattern that people like Malcolm Gladwell and Freakonomics have used, even if the same thing could be said in a way that makes it sound much less intriguing.

There's even a word for it: “cliché”

How banal

10 EASY WAYS TO SPOT A LLM~ THE 10TH ONE WILL SURPRISE YOU!

Isn't this the format of "hook-driven media": a constant stream of "second-act pivots", where some new twist is added to a story to re-engage the reader and keep them reading?

BuzzFeed and Upworthy etc. pioneered this for web 'news stories', then it spread to LinkedIn, Twitter, and everywhere else where views are more important than the content.


The language of drama and import without meaningful substance. Words statistically likely to be used in a segue, regardless of the preceding or subsequent point. Particularly effective when it seems like you’re getting let in on a secret. Really fatiguing to read.

A writing teacher once excoriated me for saying that something was important. “Don’t tell me it’s important, show me, and let me decide, and if you do your job I’ll agree”

I don’t know how a completion can tell when it needs to do this. So far it mostly doesn’t seem capable.


Maybe the solution is to cull the bad, cliché writing from the training data.

You can just instruct the LLM not to write like an LLM.

Ugh, you're making me remember the last time I listened to NPR. It's so bad.

I listen to NPR daily and I don't think I've ever heard any of them use that phrasing.

I notice this very often in LinkedIn posts, and it's annoying, but I had not realized it was LLM-speak? Isn't it possible that people write like this naturally?

I think LLMs have that sort of "summarise, wrap it in a bow tie, give a little dramatic punch as a preview to the next few points".

Guys, LLMs are built on all these social cues, which were developed pre-model. There's at least 10 years of pre-LLM gibberish.

This is to say: marketers and spammers repeat the same things over and over, and these models are built on coalescing repetition into the basis.

So yeah, of course people talked like this before, but it was always in some known context like LinkedIn or a spam website.


Sure, but RLHF ended up emphasizing this to a level beyond normal human writing.

Arguably it's exactly because it was used naturally so often that the LLMs parrot it so frequently.

Yes. Some people are very trigger happy in attributing human slop to LLMs.

Nate B Jones's YouTube channel, "AI News and Strategy Daily", uses all of these. Every video.

I listened to a lot of NPR podcasts before LLMs were around, and most of them are full of these kinds of filler phrases.

The general concept of a hook with delayed payoff is far from new, and generally one of the better ways of keeping attention.

It's also exactly the MrBeast playbook, and it got him the largest channel on YouTube.

Any system attempting to capture human attention will use these techniques, nothing LLM-specific here at all.


Apparently John Oliver was an LLM before they were even invented.

So are we saying it's fine that the article is written by an LLM as long as it doesn't have the tell-tale signs of LLMs?

It's more about curating the things you're publishing. Why would I bother reading what you couldn't bother to read?

They could easily have read it and thought, "that communicates the information that it needs to."

No point creating busywork for yourself just shuffling words around when the information is there, no?

I guess it depends on what you want out of the article. Substance, or style?


> They could easily have read it and thought, "that communicates the information that it needs to."

If they aren't self-aware enough, or smart enough, to determine that what they wrote is indistinguishable from text generation, how probable is it that they have something of value to add to any thought?


I don't really see reason to complain about tool use, so long as the result is cohesive and accurate, and that ultimately means a human has at least read their own output before publishing. It's a bit like receiving a supposedly personal letter that starts "Dear [INSERT_FIRST_NAME_FIELD]," are you really going to read such a thing?

An article without telltale signs of an LLM is indistinguishable from an article written by a human, so yes.

My opinion is that literature and art will continue pushing the envelope in the places they always pushed the envelope. LLMs will not change this, humans love making art, and they love doing it in new ways.

Corporate announcements were never the places that literature and art were pushing the envelope. They were slop before, and they're slop now.


Are you referring to the literal use of the expression "full stop"? I don't see it anymore in the article, maybe they edited it out?

These seem amazing for hobbyists, but given the performance, that TDP might be an issue when deploying a lot of them.

Its performance is pretty unbalanced. If you're using it for the couple of things that it's good at, the TDP is competitive.

That's not an ideal tone for here. From my perspective, the most incredible thing is the concentration of IO. I might like, at some point, for elements of my computer usage to remain private; it would be nice if that ability were preserved. That's a bit hard to accomplish when 1 out of every 4 bits processed globally runs through the same network.

It's literally a distinct model with a different optimisation goal compared to normal chat. There's a ton of public information around how they work and how they're trained

Unironically, the best method to implement that browser feature you're looking for is probably also AI. Which tells a meta-story: AI isn't just a new feature, it's also a new medium. It can be used to turn cave-speak into works of literature just as easily as it can turn voluminous spew into one-liners (Ed Zitron just popped into mind for some reason). You can't ignore it once it exists, but it sounds like the problem you have genuinely can be solved by it, and I expect over the next decade we'll see a lot more of exactly that.

Here's to reading HN projected through the lens of manga comic strips sometime after we solve the GPU shortage..


He goes way beyond saying it's a test, he's legitimising the change in the follow-up rationale

I'm excited for Taalas, but the worry with that suggestion is that it would blow out energy per net unit of work, which kills a lot of Taalas's buzz. Still, it's inevitable: if you make something an order of magnitude faster, folks will just come along and feed it an order of magnitude more work. I hope the middle ground with Taalas is a cottage industry of LLM hosts with small-to-mid-sized budgets hosting last-gen models for quite cheap. Although if they're packed to max utilisation with all the new workloads they enable, latency might not be much better than what we already have today.


SPARK does static analysis (proof) of Absence of Runtime Errors (AoRTE).

Yes, but that requires eliminating aliasing and expressions with side effects?

.get() will bounds-check, and the compiler will optimize the check away if it can prove safety at compile time. That leaves you the 3 options made available in Rust:

- Explicitly unsafe

- Runtime crash

- Runtime crash w/ compile-time avoidance when possible


https://play.rust-lang.org/?version=stable&mode=debug&editio...

Catch the panic & unwind, safe program execution continues. Fundamentally impossible in Fil-C.
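For concreteness, here's a minimal standalone sketch (not taken from the linked playground; the values are hypothetical) of two of the options above: a checked `.get()` lookup that returns `None` instead of crashing, and `std::panic::catch_unwind` recovering from the panic that plain indexing raises, so execution continues:

```rust
use std::panic;

fn main() {
    let v = vec![1, 2, 3];

    // Checked access: .get() returns an Option instead of panicking.
    assert_eq!(v.get(10), None);

    // Silence the default panic-hook output for this demo.
    panic::set_hook(Box::new(|_| {}));

    // Plain indexing panics at runtime on out-of-bounds;
    // catch_unwind turns that panic into an Err value.
    let result = panic::catch_unwind(|| v[10]);
    assert!(result.is_err());

    // Safe program execution continues after the caught panic.
    println!("still running");
}
```

This recovers from a safe, detected bounds failure (the runtime-crash options above), which is the recovery mechanism the parent comment says Fil-C doesn't offer.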


Seems like a niche use case. If it needs code to handle, it's also not apples to apples...

It's an apple to non-existent-apple comparison. Fil-C can't handle it even with extra code because Fil-C provides no recovery mechanism.

I also don't think it's that niche a use case. It's one encountered by every web server or web client (scope exception to single connection/request). Or anything involving batch processing, something like "extract the text from these 10k PDFs on disk".


Sure, it's not implemented in Fil-C because it is very new and the point of it is to improve things without extensive rewrites.

Generally, I think one could want to recover from errors. But error recovery is something that needs to be designed in. You probably don't want to catch all errors, even in a loop handling requests for an application. If your application isn't designed to handle the same kinds of memory access issues as we're talking about here, the whole thing turns into non-existent-apples to non-existent-apples lol.


The root comment here said this:

> All this "rewrite it in rust for safety" just sounds stupid when you can compile your C program completely memory safe.

All of the points about Rust were made in that context, and they've pushed back against it successfully enough that now you're trying to argue from the other side as if it disproves their point. No one here is saying that there's no point in having safer C code or that literally everything needs to get rewritten; they're just pointing out that yes, there is a concrete advantage that something in Rust has over something in C today even with Fil-C available.


There are many concrete disadvantages to writing things in Rust too, not to mention rewriting. But you are right, these are different solutions and they thus have different characteristics.

As for your "as if it disproves their point" claim: that's wrong. The fact is, a reply to one comment in a thread is not a reply to a different one. You are implicitly setting up a straw man like "See, you are saying there are NO advantages to using Rust over Fil-C", and I never said that at any point. I also didn't say that you said there was no advantage to using Fil-C.


My point is that no one here defending Rust is trying to say that Fil-C doesn't offer anything useful or claim that Rust is better in all circumstances. When one person says "A is strictly better than B", and some people respond "Here are some cases where you'd still get benefits from B over A", coming in and saying "A is better in these other circumstances" isn't saying anything that people aren't already aware of.

>coming in and saying "A is better in these other circumstances" isn't saying anything that people aren't already aware of.

Oh so you're a psychic now too? I think all kinds of people read these threads. Most of them probably aren't as aware as you're claiming, even the ones actively commenting on the topic.


I do not know how Fil-C handles this, but it could raise a signal that one can then catch.

Reminds me of a commercial project I did for my old University department around 1994. The GUI was ambitious and written in Motif, which was a little buggy and leaked memory. So... I ended up catching any SEGVs, saving state, and restarting the process, with a short message in a popup telling the user to wait. Obviously not guaranteed, but surprisingly it mostly worked. With the benefit of experience and hindsight, I should have just (considerably) simplified it: I had user-configurable dialogs creating widgets on the fly etc., none of which was really required.

I originally switched to Opus because it could reliably write Rust. As of 2 weeks ago, I'm using Codex because it writes way more compact and idiomatic Rust. Just another anecdote for the pile. I detest ChatGPT's persona, but Codex definitely feels better than Claude Code for anything I throw at it

    $ pbpaste | wc -w
    62508
    $ pbpaste | grep -oi mythos | wc -w
    331
    $ pbpaste | grep -oi opus | wc -w
    809

