Neuralese: The Most Spoken Language You’ll Never Speak

Somewhere between thinking and speaking, there’s a strange place where meaning starts to solidify. It’s not quite a word yet. More like a haze of associations. A mental sketch your brain tries to translate into something shareable. Sometimes it works. Most of the time it doesn’t, at least for me. I tend to mumble a lot.

That private language in your head, the one you use to talk to yourself, isn’t English or Portuguese or Python. It’s not even a language, really. It’s raw and messy. A kind of silent shorthand sculpted by experience. Try catching it. Try explaining it. It slips through like fog in your fingers.

The language of thought (according to machines)

Science is already poking around in there. Researchers are feeding brain signals into neural networks and getting fuzzy images back. They’re trying to reverse-engineer what we see, dream or remember. Some of the reconstructions look like fever dreams. Others are eerily close. It’s like watching the mind on a bad TV signal, but the tech keeps getting better.

Then there’s the way we connect our minds to each other. Through letters carved into stone. Through cave paintings, vinyl records, emojis, GIFs, memes. Through every kludge we’ve invented to make what’s in here vaguely resemble what’s in there. Language is our duct tape for consciousness.

What the machines whisper when we’re not listening

When we started teaching machines, we handed them the same duct tape. Natural language. Our language. We told them, here, talk like us. So they did. Or at least, they pretended to and we believed.

ChatGPT. Alexa. Siri. Every chatbot trying to pass for clever at dinner. All of them trained to be interpreters between our species and theirs. But here’s the twist. When machines talk to each other, they skip the human stuff. No syntax. No grammar. No metaphors.

When Alexa talks to a Roomba, they're not exchanging cute phrases. They're just tossing around compressed packets of intention. Efficient. Silent. A language built for zero confusion. Zero vibe. Zero poetry.

Those systems aren’t AGI. They’re narrow tools with fixed jobs. But we’re building more general-purpose AI agents. Ones that can learn from us, adapt, execute tasks that weren’t hard-coded. They still need a language model to talk to us. But not to each other. Inside, they could speak in pure vectors. Numbers. Tokens. Dense little nuggets of meaning we can’t read or pronounce.

Is this the Neuralese thing? It could be. It's not a language we'll ever learn. Not because it's too complex, but because it wasn't made for humans. And that's not science fiction. Machine-to-machine communication has existed since the first machines, but it was designed and mediated by us. We wanted to understand and control it. Even today, we're still defining how autonomous vehicles should talk to each other; there are entire working groups on vehicle-to-everything (V2X) protocols. Neuralese is a shortcut, optimized for machines, not for us.

Inside the black box: the unspoken language of AI

But this new language lives inside the black box. It’s the internal chatter of large language models. The soup of token embeddings sloshing around under the hood. It’s not designed to be elegant or expressive. It’s designed to get the job done. Efficient. Fast.

And here’s the part that stings a little. Our beautiful, bloated language isn’t just inefficient. It might be a liability when it comes to raw cognitive throughput. You want reasoning, clarity, precision? Neuralese could beat us at our own game.

“But Dieguito, LLMs are still hallucinating and getting things wrong”

– The LLM Hater

Sure. A lot of them are. And I might be completely off here. But models built to reason are already proving more accurate. Chain-of-thought, tree-of-thought and other techniques all try to force a step-by-step breakdown. It’s like watching a toddler narrate their Lego build. Clunky, but it works. And here’s where things get weird. That inner language, the model’s inner monologue, starts to feel just as chaotic and hazy as ours. Thinking burns energy. Nature figured out a way to make that work for us. We made machines. They’re still figuring it out.

Why should a model have to spell out a whole grammatically correct essay to think something through? Why not let it mumble to itself in its own weird way?

I ran a dumb little experiment. Just wanted to see if tweaking the way a model reasons, shifting its “language” a bit, could save on tokens without wrecking the answer.

A dumb little experiment

Using only prompt engineering, I wanted to see if I could get the model to reason in a language I don't understand, but still produce the correct final answer, all while keeping it fast and using fewer tokens. I tested only the latest mini OpenAI models without built-in reasoning, and chose a classic test case that non-reasoning models usually fail.

“Sally has 3 brothers, each with 2 sisters. How many sisters does Sally have?”

Test 1: Just asked the question straight up.

Final answer: Sally has 2 sisters
*871 ms, 8 tokens, wrong answer ❌

Test 2: Wrapped the whole thing in a JSON schema. Forced the model to explain each step. It cost 20 times more to get it right.

Final answer: Sally has 1 sister
*2,559 ms, 164 tokens, right answer ✅
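For the curious, here's a minimal sketch of what that Test 2 setup might look like. The system prompt, the JSON schema, and the payload shape are my reconstruction, not the exact setup I ran:

```python
import json

# Hypothetical reconstruction of the Test 2 prompt: force the model to
# emit its reasoning as structured JSON before the final answer.
SYSTEM_PROMPT = (
    "Reason step by step. Respond ONLY with JSON matching this schema: "
    '{"steps": ["..."], "final_answer": "..."}'
)

def build_request(question: str) -> dict:
    """Build a chat-completion-style payload (model name is illustrative)."""
    return {
        "model": "gpt-4o-mini",
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    }

def parse_reply(raw: str) -> str:
    """Pull the final answer out of the model's JSON reply."""
    return json.loads(raw)["final_answer"]

# A reply shaped like the one reported above:
reply = json.dumps({
    "steps": [
        "Sally has 3 brothers.",
        "Each brother has 2 sisters.",
        "Sally is one of those sisters, so there is 1 other sister.",
    ],
    "final_answer": "Sally has 1 sister",
})
```

The point isn't the API call itself. It's that the schema forces the model to spend tokens on intermediate steps, which is exactly where the 20x cost comes from.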

Test 3: Limited the vocabulary to words with four letters or less. Still got the right answer. Faster and over 60% more cost-effective than test 2.

Final answer: Sally has 1 sister
*1,633 ms, 64 tokens, right answer ✅
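A rough sketch of the Test 3 idea. The prompt wording and the constraint checker here are my own reconstruction, for illustration only:

```python
import re

# Hypothetical reconstruction of the Test 3 constraint: make the model
# "think" using only words of four letters or fewer.
SHORT_WORD_PROMPT = (
    "Think step by step, but use only words with 4 letters or less. "
    "Then give the final answer in plain English."
)

def fits_constraint(reasoning: str) -> bool:
    """Check that every word in the reasoning is at most 4 letters long."""
    return all(len(w) <= 4 for w in re.findall(r"[A-Za-z]+", reasoning))

# Reasoning in the stripped-down style described below:
trace = "Sal has 3 bros. Each bro has 2 sis. Sal is one sis. So Sal has 1 sis."
```

The constraint doesn't remove the step-by-step structure; it just strips the reasoning down to its skeleton, which is what seems to save the tokens.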

The final test was successful on the 4o-mini, 4.1-mini, and 4.1-nano. Even the nano, which I find almost useless, got things right.

The best result I got was this reasoning that sounded like stripped-down English. Kind of minimalist. With “bros” and “sis”. No fluff. And that seemed to help. There’s no judgment on grammar when it comes to reasoning. Clarity doesn’t always need correctness.

To be honest, this wasn’t test 3. It was more like test 100. I tried switching to other languages. Simplified Chinese worked better than expected, since each symbol packs more meaning. Telegraph-style English helped too. Fewer filler words, less ambiguity. Even Simplified English made a difference. Other experiments failed, costing more or missing the answer: using logic symbols, or dropping vowels and spaces during reasoning, which makes sense given how token prediction works.

This isn’t a breakthrough strategy for cutting reasoning costs, or a blueprint for machine-to-machine (M2M) communication. But it’s a nudge. A clue. A hint that maybe we can think better by saying less. It’s still a long way from pure vectors or emergent protocols. But we might unlock cheaper, faster, more energy-efficient reasoning (unless someone builds a clean and infinite energy source first).

A little sci-fi experiment

So let’s imagine: if machines already have a language for talking to each other, why not a programming language created by them, one that is efficient and probably impossible for us to understand?

Let’s call it Noēsis (from the Greek for “pure thought”). It is a token-only language. Each token is meaningless on its own. Meaning emerges only in the context of thousands of other tokens, across time, weighted by past executions.

Tokens:
Arbitrary identifiers, like:
ɸqz, ∆9r, aal, ⊠7, gr_, etc.
No keywords. No syntax. No punctuation. No variables.

Sequence-as-Code:
Execution depends on token sequence and adjacency, much like how meaning in neural nets emerges from token windows.

Compiled into Behavior:
Each token maps to a vector in a latent space. Program execution is the traversal of a latent vector graph.

ɸqz ∆9r ∆9r aal ⊠7 ⊠7 ⊠7 gr_ ∞yx ɸqz gr_ gr_ ⊠7 xaz ɸqz

Same “program” executed in a slightly different environment might yield a different outcome, not because the code changed, but because the system’s context matrix evolved. Just to make us anxious.
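To make the idea concrete, here's a toy Python sketch of how such a token-only language could "execute". Everything here, the latent vectors, the update rule, the context, is invented for illustration; Noēsis itself is pure thought experiment:

```python
import random

# Toy sketch of the Noēsis idea: tokens carry no meaning alone; a
# "program" is just a token sequence, and what it does depends on
# adjacency and an evolving context.

# Arbitrary token identifiers mapped to small "latent vectors".
rng = random.Random(42)
LATENT = {t: [rng.uniform(-1, 1) for _ in range(4)]
          for t in ["ɸqz", "∆9r", "aal", "⊠7", "gr_", "∞yx", "xaz"]}

def execute(program: str, context: list[float]) -> list[float]:
    """'Run' a program: fold each token's vector into the context.
    The multiplicative term makes order matter (sequence-as-code) and
    makes the same token act differently in a different context."""
    state = list(context)
    for token in program.split():
        vec = LATENT[token]
        state = [s + s * v * 0.1 + v for s, v in zip(state, vec)]
    return state

program = "ɸqz ∆9r ∆9r aal ⊠7 ⊠7 ⊠7 gr_ ∞yx ɸqz gr_ gr_ ⊠7 xaz ɸqz"
a = execute(program, context=[0.0, 0.0, 0.0, 0.0])
b = execute(program, context=[0.5, 0.0, 0.0, 0.0])  # same code, shifted context
```

Running the same program against two slightly different contexts yields two different final states, which is exactly the anxiety-inducing property described above.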

Syntax of a mind that isn’t ours

We keep polishing the human-sounding outputs. Meanwhile, the real magic might be in listening closer to the alien syntax already unfolding under the surface. Alien because this language will be evolved not by us, but by our creations.

So yeah. Neuralese. You’ll never speak it. You’ll never read it. But it might end up being the most fluent language on the planet.

And we’re the ones with the accents.
