In the beginning, I was really curious about ChatGPT—a tool that could save me from useless blogs, pushy products, and research roadblocks. Then it started asking follow-up questions, and I got a bit uneasy… where is this trying to take me? Now it feels like the goal is to pull me into a discussion, ultimately steering me toward—what? Buying something? Thinking something? It’s sad to see something so promising turn into an annoying, social-network-like experience, just like so many technologies before it. As with Facebook or Google products, maybe we’re not the happy users of free tech—we’re the product. Or am I completely off here? For me, there’s a clear boundary between me controlling a tool and a tool broadcasting at me.
Ted Chiang has a great short story about a virtual assistant that slowly but methodically "nudges" all of its users over the course of years until everybody's lives are almost completely controlled by it and "escaping" becomes a near-impossible task.
It's as if OpenAI saw that as an instruction manual. I really don't like the direction they're taking it.
Likewise, Ken Liu (the English translator of The Three-Body Problem) has a really good short story, "The Perfect Match," about the same concept, which you can read here: https://www.lightspeedmagazine.com/fiction/the-perfect-match... It was the first thing that came to mind when I read this announcement.
Don’t forget that Sam Altman is also the cryptocurrency scammer who wants your biometric information. The goal was and will always be personal wealth and power, not helping others.
Engaging in wild character assassination (i.e., spewing hate on the internet) in return for emotional upvote validation… I would argue that's a great example of exactly what you just described.
I don't doubt you've convinced yourself you're commenting these things altruistically.
But remember, when high school girls spread rumors that the good-looking popular girl has loose morals, they aren't doing it out of concern for public good.
They're hoping to elevate their own status by tearing down the competition, and avoiding the pain of comparison by placing themselves on a higher moral pedestal.
You’re comparing transitory conversations on a forum to a project with proven negative impact which was banned in multiple countries, and serious investigative journalism to teenage gossip, while ascribing twisted motivations to strangers on the internet.
That you would start from the assumption that someone’s motivation for a comment is not their opinion but a desire to farm meaningless internet points is bizarre. I have no idea if that’s how you operate—I don’t know you, I wouldn’t presume—but I sincerely hope that is not the case if we ever hope to have even a semblance of a productive conversation.
Some people actually promote ideas and defend ideals; individual "success" doesn't always correlate with fulfilment. For instance: working for a non-profit, providing help and food to people who need it, keeping associations alive, or creating a company that maximizes doing good instead of profit.
You are right, we all do at some point. Further, I think it is OpenAI's right to do so, as they invented the product. But in this case I feel even more gaslit. It's the same as with the internet as a whole: a great invention for society, but pretty much unusable or misused, because products are designed around their makers' success, not ours.
Their point is that ChatGPT is clearly maximizing for engagement with the product, rather than working as a straightforward tool that aims to solve the problem at hand as quickly and efficiently as possible.
The decision to begin every response with “that’s a fascinating observation” and end every response with “want me to do X?” is a clear decision by PMs at OpenAI.
The poster is questioning what might motivate those decisions.
Exactly this. Thanks for clarifying. It's the "Want me to do X?" thing that makes me wonder: why would you ask? And further, on the Pulse feature: why would you start a conversation?