Do LLMs communicate? No, of course not. No more than a book or a street sign communicates with you. Yes, these things communicate information, but it would sound funny if someone walked up to you and said "the street signs communicate with me..." If they went on to say that the signs talk to them, then you'd really start to think they'd gone mad.

Consider ELIZA. It's primitive by today's standards, but when you use the program, you are communicating with it. Is it communicating back to you, though? Certainly not in the same sense that you are communicating with it; ELIZA is just a computer program, after all. But it uses the words and phrases people use when they communicate over text, so it feels much closer to communication with another person than, say, a Google search does, even though a Google search communicates far more information back to you from a simple text query.
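
To make that concrete, here is a toy ELIZA-style exchange: keyword matching plus pronoun reflection. The rules and reflections below are my own illustrative stand-ins, not the original ELIZA script, but they show why the output feels conversational even though nothing is "understood."

```python
import re

# Swap first-person words for second-person ones so the echo sounds like a reply.
reflections = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword-triggered templates, checked in order; the last rule catches everything.
rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(fragment):
    return " ".join(reflections.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    for pattern, template in rules:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel like my code ignores me"))
# -> Why do you feel like your code ignores you?
```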

An LLM's training wasn't guided by communication -- it was guided by completion. The biggest improvements in LLMs so far have come from scaling up the training data. Why would communication suddenly arise when the only objective is completion? When I write text for this post, my goal isn't to replicate what a human would say. My goal is to lay out my thinking for myself and for anyone who might be reading so that it can be understood. That's the basis of communication -- conveying something in a way that can be understood by the intended parties. LLMs themselves can't be communicating, since during inference they have no target for the quality of their output.
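
To see how little the completion objective actually asks for, here is a minimal sketch of next-token loss. The corpus, "model" (a smoothed unigram count standing in for a neural network), and function names are all toy stand-ins I made up for illustration, not any real training code.

```python
import math

# Toy training corpus and vocabulary.
corpus = "the sign says stop the sign says yield".split()
vocab = sorted(set(corpus))

# Toy "model": add-one-smoothed unigram counts standing in for a network's logits.
counts = {w: 1.0 for w in vocab}
for w in corpus:
    counts[w] += 1.0
total = sum(counts.values())

def next_token_probs(_context):
    """Probability over the vocabulary for the next token.
    A real LLM conditions on the context; this toy model ignores it."""
    return {w: c / total for w, c in counts.items()}

# Training loss = average negative log-likelihood of each actual next token.
# Nothing in this objective mentions a reader, a listener, or whether the
# output would be understood -- only how well the next token was predicted.
loss = 0.0
for i in range(len(corpus) - 1):
    context, target = corpus[: i + 1], corpus[i + 1]
    probs = next_token_probs(context)
    loss += -math.log(probs[target])
loss /= len(corpus) - 1
print(f"average next-token loss: {loss:.3f}")
```

Scaling the model and the data drives that number down, but the target never changes: predict the next token, nothing more.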

Note: By LLM, I mean classic autocomplete-only models.