What Counting Reveals About How AI Really Works

We often describe large language models as “advanced autocomplete.”

They predict the next token. They don’t reason. They don’t calculate. They don’t truly “understand.”

That’s the common framing.

But a recent research paper challenged that assumption using a surprisingly simple task:

Can a language model count characters and decide where to break lines of text?

On the surface, this sounds trivial. It isn’t.

The model in the study sees only token IDs, not characters. Tokens don't map neatly to letters: some represent whole words, others word fragments, punctuation, or combinations of these.
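To make this concrete, here is a toy illustration (not from the paper; real tokenizers such as BPE have vocabularies of tens of thousands of entries) of why character counts are hidden from a model that sees only token IDs:

```python
# Toy vocabulary: each token ID maps to a string of varying character length.
# (Illustrative only -- the IDs and strings here are made up.)
vocab = {101: "The", 102: " model", 103: " token", 104: "izer", 105: "."}

token_ids = [101, 102, 103, 104, 105]  # what the model actually "sees"

# The character length of each token is not recoverable from the IDs
# themselves -- the model would have to learn each token's length implicitly.
text = "".join(vocab[t] for t in token_ids)
lengths = [len(vocab[t]) for t in token_ids]

print(text)     # "The model tokenizer."
print(lengths)  # [3, 6, 6, 4, 1]
```

Note that the per-token lengths vary from 1 to 6 characters, so "position in the line" cannot be read off from the number of tokens seen so far.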

Yet the model successfully performed fixed-width line breaking, a task that requires tracking character counts across token boundaries.
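For reference, the task itself can be stated in a few lines of ordinary code. This is a generic greedy sketch of fixed-width line breaking, not the paper's experimental setup; note the running character count that any solver, human or model, has to maintain:

```python
def break_lines(words, width):
    """Greedy fixed-width line breaking: start a new line whenever
    adding the next word would exceed `width` characters."""
    lines, current = [], ""
    for word in words:
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:   # the running character count
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(break_lines("the quick brown fox jumps over".split(), 10))
# ['the quick', 'brown fox', 'jumps over']
```

The explicit `len(candidate)` check is exactly what a language model lacks: it must somehow represent that running count internally, from token IDs alone.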

That means it wasn’t just predicting statistically plausible formatting. It had to internally represent something like “character count.”

And that’s where it gets interesting.

If a language model can internally encode a variable like character position without being explicitly programmed to do so, then something more structured than simple pattern matching is happening inside these systems.

This paper isn’t really about counting.

It’s about what counting reveals.

In the next article, I’ll unpack what the researchers discovered inside the model, and why geometry plays a central role.