

> I asked Chat what it thinks private systems would cost and it suggests around half+.

I am genuinely unsure why someone would think an LLM could accurately estimate that kind of cost and would like to know your reasoning.


I like it when I see people cite LLMs as a source, because it's an easy heuristic for dismissing them.


Likewise, and that's despite me finding LLMs impressive.

I only trust LLMs for a first draft, where I can actually fact-check everything, or light copy-editing for tone and style.

I wouldn't expect a fresh graduate to be dotting all the i's and crossing all the t's on their research; and as LLMs are like a fresh graduate at coding, I assume they're like that at everything else, too.

Useful, sure, but not what I'd call a "high quality source".


Why? Do you dismiss people who use Google? Or Wikipedia? Or Britannica? They are all just different sources of info with strengths and weaknesses.


LLMs lie far, far more often. It would take a year or more of good experiences for me to start reconsidering my bad experiences with them on any technical topic outside of programming.

I similarly dismiss people's opinions that rely on LLMs to inform them on important subjects, rather than using them as a search engine to find more reliable sources.

You can't trust a person that educated themselves with an LLM, because they have filled their head with bad foundations. That's very difficult to correct later on.


Having worked in the private satellite industry, I would dismiss most of the information on Google, Wikipedia, and Britannica on this topic, yes. Most of the information online about satellite operations is very, very wrong.


It’s funny; I should’ve just said I work in the satellite industry and everyone would’ve accepted my ballpark :p


Yes, if you lie people will be deceived. Good to have this in your comment history I suppose.


Not really? If one seeks basic information, these are reasonable sources, but they are not authoritative, nor are they deep enough to generate a rich understanding of most phenomena.

I was taught since at least high school that you don’t cite encyclopedias as sources because they are not sources - they are summaries of sources.

LLMs are similar but worse. They are a statistical approximation of truth, without a concept of truth. They are fuzzing sources that you and I already have access to. I find calling that handoff of agency and responsibility “being informed” deeply disturbing.


My guess is that people lend 'Chat' credibility since it's definitely read more of the internet than any one person.

Of course (to your point), that credibility overlooks the possibility of algorithmic bugs.


Yeah the whole point is it’s a quick way to synthesize a lot. It’s not foolproof but neither is anything else on the internet.


I've set some of these models to work synthesising summaries etc. based on my own blog posts, and LLMs are surprisingly middling at synthesising info from documents. I've seen even good models elide significant content, go off on distracted rambles about other topics in the same area, and even invert the meaning of the points being made.

Use them the way you'd have used Wikipedia in 2008: a starting point from which you can do actual research, but you have to watch out for a lot of unverified junk as well.


> The math for privatization does not make sense to me

When everything is considered, privatization rarely makes sense.


> The math for privatization does not make sense to me?

It does when you assume that some rich person will end up taking it over while the governments of the world pay for it.

Always, especially with this admin, assume grift of one kind or another behind anything.

