I haven't read the PDF and I'm naive on this topic, so take this for what it's worth. But my recent thought was how important language must be for our thought, or how it might even be our thoughts, based on how well LLMs work. Given that LLMs work on nothing more than words and prediction, the fact that they almost feel like a real human makes me think our thoughts are heavily influenced by language, maybe even purely based on it and massively defined by it.
Can you replicate an algorithm just by looking at its inputs and outputs? Yes, sometimes.
Will it be a full copy of the original algorithm - the exact same implementation? Often not.
Will it be close enough to be useful? Maybe.
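A toy sketch of the idea (Python; the black-box function and all names are mine, purely for illustration): fit a replica from observed input/output pairs alone. The replica reproduces the outputs well, but its implementation has nothing in common with the original's.

    import numpy as np

    # A "hidden" algorithm we can only observe through inputs and outputs.
    def black_box(x):
        return 3 * x**2 + 1  # the replica never sees this source

    xs = np.linspace(-5, 5, 50)   # inputs we choose to feed in
    ys = black_box(xs)            # outputs we observe

    # Fit a replica purely from the I/O pairs (here, a degree-2 polynomial).
    replica = np.poly1d(np.polyfit(xs, ys, deg=2))

    # Close enough to be useful, but a different implementation entirely.
    print(black_box(10), replica(10))  # 301 vs ~301.0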
LLMs use human language data as inputs and outputs, and they learn (mostly) from human language. But they have non-language internals. It's those internal algorithms, trained on the relations seen in language data, that give LLMs their power.
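As a minimal illustration of that boundary (a toy character-level bigram model in Python, my own example, nowhere near a real transformer): text goes in and text comes out, but everything in between is counts, matrices and probabilities, not words.

    import numpy as np

    # Language at the boundary: training text in, generated text out.
    text = "the cat sat on the mat "
    vocab = sorted(set(text))
    idx = {ch: i for i, ch in enumerate(vocab)}
    V = len(vocab)

    # The non-language internals: a matrix of bigram transition probabilities.
    counts = np.zeros((V, V))
    for a, b in zip(text, text[1:]):
        counts[idx[a], idx[b]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)

    # Generation is arithmetic over those numbers, not operations on words.
    rng = np.random.default_rng(0)
    ch, out = "t", "t"
    for _ in range(20):
        ch = rng.choice(vocab, p=probs[idx[ch]])
        out += ch
    print(out)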
Maybe the structure and operation of LLMs is a somewhat accurate model of the structure and operation of our brains, even while the actual representation of “thought” differs between the human brain and LLMs. Then what makes an LLM “feel human” might depend not so much on the actual thinking stuff as on how that stuff is related and how the process of thought unfolds.
I personally believe that our thinking is fundamentally grounded/embodied in abstract/generalized representations of our actions and experiences. These representations are diagrammatic in nature, because only diagrams allow us to act on general objects in (almost) the same way we act on real-world objects. By “diagrams” I don't necessarily mean visual or static artefacts; they can be much more elusive, kinaesthetic and dynamic. Sometimes I am conscious of them when I think, sometimes they are more “hidden” underneath a symbolic/language layer.