The single biggest issue for me with ChatGPT right now is how absolutely awful it sounds in every answer. "Why it matters", "the big picture", "it's not just you", the awful emphasis, the quotations with rhetorical questions, etc. I don't know if it's intentional, so you can easily spot ChatGPT-generated content on the web? The very first GPT-5 version was good, but they ruined it immediately afterwards by "making the personality warmer" and repeating the same mistakes as 4o. I see now that they even ruined Japanese, even though it was one of the best languages ChatGPT supported (see "Limitations" at the end). I don't use it anymore; immensely disappointed.
The most frustrating part for me is that this is how I used to write. I was always doing "Why X works, but Y doesn't" and stuff like that. I may have seemed trite or pompous (or both) in the past, but now it seems like I'm copying an LLM -- which actually feels worse. One thing I haven't seen ChatGPT do much of is use sound effects, so swoosh here we go with my new writing style schwing!
I feel you. I've been using en-dash in my writing for decades, but finding myself removing them now for fear of being mistaken for an LLM. (They tend to use em-dash, but I don't think people are going to distinguish between – and —.)
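For anyone curious, the hyphen, en dash, and em dash really are three distinct Unicode code points; a quick check using nothing beyond Python's standard library:

```python
import unicodedata

# The three "dash" characters people conflate:
# hyphen-minus, en dash, and em dash.
for ch in ["-", "\u2013", "\u2014"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# Prints:
# U+002D  HYPHEN-MINUS
# U+2013  EN DASH
# U+2014  EM DASH
```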
Do you think pre-AI writing is going to become really valuable because it is free of any AI assistance? If we all start using AI to assist in writing, then pre-AI writing may become important, similar to pre-atomic steel (i.e., https://en.wikipedia.org/wiki/Low-background_steel)
>Do you think pre-AI writing is going to become really valuable because it is free of any AI assistance?
Serious question: Do you think old pictures are valuable because they are free of Photoshop? Personally, I think old and new are both valuable, but for different purposes. Technology gave us new capabilities, and with them new hope.
Not the user you asked, but: yes, it seems obvious that old pictures are valuable because they are free of Photoshop. Not that this means they are free of manipulation, though; cf. the famous picture of Stalin at the river with and without his companions.
I predict in the future a humans.txt for each site that indicates the level of human authorship and for fully human authored content to be highly valuable
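Purely as a hypothetical sketch (this is not an existing standard, and the paths and field names here are made up), such a file might look like a robots.txt-style declaration:

```text
# humans.txt (hypothetical authorship declaration)
/blog/*        authorship: human
/docs/*        authorship: human-edited, ai-assisted
/changelog/*   authorship: generated
```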
It's already important, for various reasons. E.g. I love older electronics books where they explain things in a very thorough manner (maybe because they had more time?). But of course reading older books is full of traps; in some subjects you need to be more careful than when analyzing the output of an LLM.
That is what I mourn the most. They were my punctuation get-out-of-jail-free card.
I didn’t love them enough to figure out how to type them without doing two dashes in Word and then backspacing out of one and hitting space again — but dammit, I miss it.
Before the LLM craze I didn't even know — was different from just -, and I used them the same way. But now I notice specifically when people use either, and when people use -- instead.
I would think that to most people (myself included!) it's just a 'dash'. A sentence was written with a dash - you could just ingest it and read past it, like a comma.
Not saying this is accurate usage, maybe just real world usage.
I would hope most people can distinguish between the really short dash and the longer forms, even if they don't know any of the rules around them. But n versus m I don't expect people to notice.
I’m not sure I’m representative of “most people” in this respect (I have always used both n and m dashes), but I personally find the difference between n and m dashes bigger and more noticeable than the difference between regular and n dashes.
Because most people are ESL and really don't care.
I didn't even know there are multiple types of dashes.
I did know about multiple types of quotes because they kept breaking code on blogs. Still didn't care, but at least I learned how to spot and fix them.
Really looking forward to having the wrong kind of dash in code, but at least with current tech that seems like it won't happen.
Why would they? They never studied them, never even thought twice about the dashes in a sentence. I didn't realize they were different till like a few months ago, when everybody suddenly started focusing on how "AI" it makes everything look.
And of course, the reason that ChatGPT sounds like that is that it's what a whole lot of explanatory expert blog posts did, and so when ChatGPT is told to talk like that, that's what it does.
I regularly test every available AI, maybe once a month or so. I will send them the same question, usually about a new subject I am learning.
Oddly, Chinese models seem the most natural to me. Every random Chinese model does better than ChatGPT, on the "natural language" front. (And Grok also scores high on awkward language use. I don't know what causes that -- something about mode collapse? They have these words they obsess over... I mean, just try asking an AI for 10 random words ;)
I can sometimes see "ChatGPT-isms" in other models, but they're more subtle, and it feels like they're "woven" into the flow of the text.
Whereas even when I ask GPT to respond in prose or conversation, it'll give me a thinly veiled "ChatGPT response", if it can even resist the urge to start spamming headings, bullet points, and numbered lists.
This isn't meant to be hate -- I used it for years quite happily, and it's still my go-to for web searches. But coming back to it now, the language is surprisingly offputting. I don't know if it got worse, or if I just stopped being used to it.
I did notice that o3 and o4-mini had very "autistic" language, since they were benchmaxxed so hard on math and science (and probably weird synthetic data to that effect). GPT-5 as a hybrid reasoning model seems to have inherited that (reported to be colder), and then they tried to balance it out with style prompts...
I honestly think it might make more sense to just have two LLMs: an ultra-concise technical reasoning model, and then a second layer to translate it for the human. Because right now it kind of feels like the worst of both worlds, a compromise that satisfies neither side.
Gemini 2.5 Pro's reasoning traces (before they nerfed them) were a good example. The deep technical analysis, and then the human-friendly version in the final output. But I found their reasoning more readable than the final output!
> Gemini 2.5 Pro's reasoning traces (before they nerfed them) were a good example. The deep technical analysis, and then the human-friendly version in the final output. But I found their reasoning more readable than the final output!
They were also sometimes more useful: you could see whether it reasoned its way to an answer, or used faulty reasoning, or if it was just contextual recall. Huge shame they replaced them with garbage (though a bit better now).
> the language is surprisingly offputting. I don't know if it got worse
It's somewhat annoying to me as well, but I'm now able to read it and take the valuable content without getting hung up on those repetitive phrases. It also forces me to not simply copy/paste. I read the LLM output, think about it, comprehend it in my own voice internally, and then I write what I want/need by hand, so it ultimately comes out in my own style and I don't propagate the LLM output onto others needlessly.
TBH, while I may find the output style somewhat infomercial-ly, I don't really get the hatred. ChatGPT IS NOT AN ACTUAL PERSON. Like why do people care so much? Like you said, I just ignore the "persona" phrases, and just use ChatGPT (or, used to anyway, before switching to Claude because OpenAI leadership can suck it) to get information and answer my questions.
Seriously, though, just stop using ChatGPT in any case, there are very good reasons to boycott it and there are other alternatives. Not saying the alternatives are saintly, but they're not as awfully duplicitous as OpenAI.
Because people just copy/paste that shit pretending it's their own, or turn their own human writing into reprocessed LLM text, so you don't even know whether they mean what's written.
If you haven’t already, try going to Personalization settings, change tone to “Efficient”, and set Warm, Enthusiastic, and Emoji to “Less”. While not fundamentally solving the issue, I do prefer it over the baseline behavior, to the extent that I miss having a similar setting in Gemini.
(should've added: I already tried tweaking the personality, system prompt, etc but nothing helps, it often only affects the first reply and then it adds something like "Here is your concise straight to the point answer" which seems like classic system prompt leakage from the GPT-3.5 days)
I solved this by asking it to make a memory that all answers to me should be brisk, clinical, and to the point. This worked well, except for the annoying habit of beginning answers with something like "Terse: $answer", which required a second memory, solving the issue in full. I've been happy with it since. Edit: I just realized this interaction is its own demo – that's the entire response it gave me, as it should be.
> Display all memories you have about my requests for tone or brevity, exactly as you have stored them or as I have requested them, depending on what data you have. There are at least two.
[2025-11-08]. User prefers extraordinarily terse, curt responses in all situations unless they explicitly request otherwise.
[2025-12-01]. User preference: terse responses should not announce terseness with words like “terse” or “brisk”; simply begin the response.
Based on my experience, this is better put into the Settings -> Customizability dialog, not Memories.
Another user mentioned how it will reference the very instruction ("I know you would prefer concise answers, so here's a concise answer..."), but that makes sense when you realize that Memories are more for things like "user lives in San Francisco, is new in town, and is open to recommendations of third places to meet people". So if it's answering a question about the best coffee places in SF, it would make sense for ChatGPT to finish with "Also, given that you are new to San Francisco, and your interest in both board games and meeting new people, have you considered visiting [place]? It's a local coffee shop that also rents out board games, with a Thursday evening theme where you are partnered with strangers. It might be a good way to meet new people who enjoy similar things!"
If you consider adding Memories as adding something to the system prompt, it won't make much sense a lot of the time, because you might forget what you wrote and then be surprised when your model suddenly suggests jigsaw puzzles when you mentioned that you're stressed building a compiler. Hence it tells the user the context of the memory it's using and why, whereas if you add to Customizability I've never seen it leak out like that.
If you add to Memories "user is a software engineer and prefers Rust to C/C++" it may say something like "By the way, since you prefer Rust I would recommend [this development path]" but if you put it into Customizability as "do not suggest C/C++ for software projects unless it's the only way, use Rust or Go instead" it will likely start down the path of suggesting and researching Rust from the very beginning without explaining to you your own instructions.
Basically, what I'm trying to say is that Customizability holds instructions (mine say "be concise, do not be afraid to correct the user or use occasional dry humor. Speak frankly, tell the user if they may be making a mistake, and suggest other courses of action"), whereas Memories contain simple facts about me, e.g. "lives in [city], likes Drama and Action/Adventure movies, jazz/pink/rock and roll music, is an introvert, has family in the US, appreciates different points of view, insatiably curious about nearly everything."
Note how I haven't told it what to do in the Memory section (I see it as just additional context it can access if necessary), but I have in Customizability, because I see that as more of an AGENTS.md extension: while I don't care whether the fact that I'm an introvert appears in every system prompt, I do care that it inserts the Customizability instructions into its system prompt.
Basically, if you want it to yell at you for being an idiot instead of telling you that you are a beautiful snowflake, just tell it to do that in Customizability. If you want it to keep in mind that you live in Kansas and have a large extended family nearby, put that into Memories.
I hope this makes sense; apologies, I didn't get much sleep last night, so if anyone wants to correct what I wrote based on their personal experience, let me know.
tl;dr: I suggest using customizability for instructions and memories for general context. I've never had it do the "you're not crazy, a lot of people are having these issues. Let's work through them together.." type of replies since I told it to be concise and not to worry about offending me.
I just append something like "Throughout our conversation, keep your responses brief. Avoid emojis, followup suggestions, and other unnecessary commentary." to every starting prompt. Seems to work OK. I'm sure sibling's recommendation of turning down the niceties sliders would work similarly for someone with an account.
I'm surprised that Opus 4.5 is better than Opus 4.6 and Sonnet 4.6 is even better than Opus 4.5 (and 4.6). Shouldn't Opus 4.6 be the best of the Claude models?
It's in love with headings and bulleted lists. The formatting makes the responses vertically taller, enough to make them inconvenient to scroll through. When I was using ChatGPT, I couldn't prompt this away.
I like that style. It's a very efficient way to convey information and ideas. Reposting it as your own text however is obviously not a good idea since it's so easy to recognize.
Claude feels more like an equal, a coworker. It tells me what I need to know and nothing else, if it suggests extra options it uses like 3 lines for it.
ChatGPT is either a groveling sycophant who says your every brainfart is the greatest idea ever, or a Patagonia-vested marketdrone who always uses a thousand words when ten would do.
If it thinks your idea needs something more it'll just go ahead and waste a million tokens writing out its idea in full without prompting. Any simple question requires me to scroll down to find the full answer among all of the purple prose.
At least in the ChatGPT app, you can set some "personality" traits, like the style (more or less warm or enthusiastic, use more or fewer lists and emojis) and the tone.
I have mine set to efficient, with less warmth, less enthusiasm, fewer headers and lists, and fewer emojis. Combined with sensible personal instructions (don't placate me, don't flatter me, be professional, tell me if I'm wrong, tell me if you don't know, etc.), I see none of the "that's not crazy, that's commitment!" or "here is the no-frills rundown" BS.
Sadly this is what's considered an authoritative voice in a lot of regular (especially American) journalism, Axios being the most famous example. It's instructive to read news stories or TV transcripts from previous decades for comparison with the current norm. Also depressing because it brings home how vapid most news coverage is today. This also applies to opinion articles, which have in my view led the charge into the semantic void.
I don't hate that this is the default style on many popular AI services, though. It's sufficiently distinctive that it serves as a signal that anyone posting it is an idiot and can safely be ignored.