> It's funny because you ascribe some reasoning in ChatGPT's answer “Gerhard Schröder”, but somehow missed that it's also able to give you two other answers that are unambiguously wrong
Schröder is the only answer it gave when using functionality that would bring some representation of a list into its context first, and it did so across different prompts and mechanisms for bringing a list into its context.

That LLMs are bad at counting-related tasks without doing that is well known, and not a point I felt needed belaboring.