When I first moved to San Francisco, I felt like just another tech bro leech, slurping up overpriced coffee, making rents go up, sucking the life out of the city, and giving absolutely nothing back. I had the whole starter pack: pitch deck, startup hoodie, a head full of “move fast” mantras that sounded deep at the time but now feel like bad Twitter threads. I told myself I was “creating value,” but honestly, I was mostly just creating slides.
Somewhere along the way, I bought into the idea that if what you’re doing isn’t “scalable,” then it’s not worth doing.
You know the voice:
“Damn Dieguito, you should be doing something globally impactful. Think local, act global.”
– My startup brain parasite
It’s a catchy mantra, but sometimes it blinds me from the stuff right in front of me, the things that don’t scale, don’t monetize neatly, and don’t promise unicorn exits. That little whisper in my head can make me dismiss real things (like the people planting trees in our park) while I chase hypothetical millions.
Then, on a rainy Tuesday morning, I went to an event organized by Nadine Hammer (half meetup, half community forum) where folks gathered to talk about sustainability and a new city project. I’ll admit it, my expectations were low. Who the hell shows up on a random weekday morning to talk about creeks and climate? But surprise: a lot of people did. Passionate, curious, caffeinated people.
Event at the Walnut Creek Library. Coffee. Good company. A first look at one of Walnut Creek’s biggest civic projects.
Speaking of coffee, did you know Starbucks gives free coffee for non-profit events? We got our share of it. Nothing like free caffeine to fuel a conversation about creek restoration and circular economies.
So, I met (and re-met) folks doing the kind of work that doesn’t hit TechCrunch headlines. Hyper-local stuff (if that’s a category), like running a library program, upcycling fashion, or restoring a single overlooked stretch of creek. Tiny, unglamorous projects that keep the world stitched together in ways we only notice when they’re gone. These aren’t people waiting for Series A funding, they’re the ones showing up with gloves, clipboards, and a lot of stubbornness.
Take Civic Park in Walnut Creek. Have you been lately? Four volunteer-run organizations have been working there for years. Thanks to them, you can now actually see Walnut Creek’s creek (the few original pieces of it that survived). Years of persistence, patience, and picking up trash that no venture capitalist would fund. They didn’t need a growth strategy; they needed boots and trash bags.
Planting native species around the creek.
Removing non-native things around the creek.
Teenage volunteers at Deer Lake, you heard right, teenagers.
Coffee burlap and my sweat (both from Brazil), helping little oak trees survive the drought.
The funny thing is that while I was helping a group of people plant and water oak trees in an open-space restoration area around the city, the drought here is so severe that it's genuinely hard for those trees to survive. So my mind kept wandering.
How can we scale this? Can’t we be more efficient?
– Pa.. pa.. parrot brain
I went back to the drawing board to come up with a solution that could make a huge impact in remote areas: automated irrigation drone stations powered by sunlight that charge and release drones 24/7 to collect water from a nearby pond and drop it on recently planted trees (inspired by Nathan’s project). The survival rate would increase greatly, and it could drastically reduce wildfires in the future. Ping me if you have a billion-dollar check ;p
But what I was missing while dreaming about that is that part of the whole experience is strengthening my bond with the city and paying more attention to my surroundings. Once you notice, you can’t un-notice. You start seeing these efforts everywhere. Someone teaching kids how to compost in a library basement. A group fixing up old bikes for free rides. An upcycling clothes workshop. Seniors everywhere picking up trail trash. None of it scales. All of it matters.
A small creek is where life starts. Water flows to rivers, to the bay, to the ocean. And, if you let it, it also flows to connections: to people, to ideas, to myself, to hope. It’s humbling to remember that something as overlooked as a trickle of water in a city park can link to everything downstream.
Walnut Creek’s creek
That brings me hope that… local work ripples outward… and that ripple is global…
That these small, stubborn efforts I may dismiss as “too local” are the ones that might actually matter. The ones that sneak under the radar while I’m busy pitching “the next big thing.”
Not everything needs a hockey-stick growth curve. Sometimes the curve is just water bending around rocks in a creek, reminding us that slowing down, changing course, and flowing steady can be its own kind of success. And honestly? That’s enough.
Yesterday I was rummaging through an ancient backup drive when I found a text file from 15 years ago. Inside sat an email I had bravely, or foolishly, addressed to the chief executive of Subway. I had eaten there a handful of times, spotted a pattern, and decided the top guy needed my young adult wisdom. Reading it now makes my cheeks warm, but cringe is a good teacher, so here we go.
Dear Director,
In all my visits to Subway stores I have noticed that your employees seem quite sad. At first I thought it was because I took too long to choose my condiments or something like that and that this bothered the attendants, but after watching other customers more closely I realized the ordering process itself can be stressful for the staff.
I do not know your employee‑motivation policies, but I believe they could be reviewed so everyone comes out ahead.
I must admit I do not feel entirely comfortable with the ordering procedure. Perhaps an optional form could let customers tick the items they want and hand it to the attendant. The customer could still watch the sandwich being assembled, asking for more or less of a condiment, and the whole process might move faster and put less stress on the attendants.
Best regards, Diego Dotta
Yes, I really suggested a paper checklist so I would not stress the staff by naming veggies under pressure. ✋😊
Why I am sharing this fossil
I keep these artifacts around to remind myself that the impulse to fix everything is both a gift and a hazard. Fifteen years later I still spot pain points in random systems, but I try to ask first, build later, and send fewer midnight missives to unsuspecting CEOs. I remember doing this more than once, sometimes searching for emails of C-level people at those companies and sending random ideas or complaints.
Also, if you have a forgotten folder full of old emails, open it. You will meet a past version of yourself who thought laminated order sheets were the answer to world peace. It is humbling, hilarious, and strangely motivating.
Now I am going to grab a sandwich and see if anyone looks even slightly happier. If they do, I will choose to believe my letter changed the course of history. If they do not, I will quietly enjoy my onions and move on.
Somewhere between thinking and speaking, there’s a strange place where meaning starts to solidify. It’s not quite a word yet. More like a haze of associations. A mental sketch your brain tries to translate into something shareable. Sometimes it works. Most of the time it doesn’t, at least for me. I tend to mumble a lot.
That private language in your head, the one you use to talk to yourself, isn’t English or Portuguese or Python. It’s not even a language, really. It’s raw and messy. A kind of silent shorthand sculpted by experience. Try catching it. Try explaining it. It slips through like fog in your fingers.
The language of thought (according to machines)
Just a quick heads-up before Reddit experts start jumping on me again, 👩🏫
This is arm‑chair speculation, not peer‑reviewed linguistics. I’m poking at metaphors, not defending a PhD thesis. It’s a thought experiment about the strange, alien dialects “spoken” by machines, and what they might reveal about how we understand language, and ourselves. And I’m definitely not the only one thinking this; see arXiv, greaterwrong and here.
Still, any mistakes here are my own.
Speaking of experts, scientists are also poking at our actual brains: feed fMRI signals into a network, get a fuzzy image back. They’re trying to reverse-engineer what we see, dream, or remember. Some of the reconstructions look like fever dreams. Others are eerily close. It’s like watching the mind on a bad TV signal, but the tech keeps getting better.
Re-creations of images based on brain scans (bottom row) match the layout, perspective, and contents of the actual photos seen by study participants (top row). – Nature
Then there’s the way we connect our minds to each other. Through letters carved into stone. Through cave paintings, vinyl records, emojis, GIFs, memes. Through every kludge we’ve invented to make what’s in here vaguely resemble what’s in there. Language is our duct tape for consciousness.
What the machines whisper when we’re not listening
When we started teaching machines, we handed them the same duct tape. Natural language. Our language. We told them, here, talk like us. So they did. Or at least, they pretended to and we believed.
Quick cheat sheet before we tangle the wires. There are three very different “languages” in this story (I can hear the linguists sharpening their red pencils):
1. Human languages: messy, culture‑soaked, built for wet brains and bad at precision.
2. Machine protocols (non-neural): JSON blobs, HTTP headers, rule-bound micro-dialects that leave no room for doubt.
3. Latent representations (neural): the private vector soup inside one model, never transmitted (outside a lab demo), never meant for ears.
ChatGPT. Alexa. Siri. Every chatbot trying to pass for clever at dinner sits in bucket one when it chats with us. But here’s the twist. When machines talk to each other, they skip the human stuff. No syntax, no grammar, no metaphors.
When Alexa calls Roomba they are not exchanging cute phrases. Alexa fires a strict micro‑dialect packet (JSON over HTTPS). That’s bucket two, a protocol, not a mind‑to‑mind vector swap. Efficient, silent, built for zero confusion.
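To make bucket two concrete, here’s a toy sketch of what such a packet could look like. Everything in it is invented for illustration, not Amazon’s or iRobot’s actual API; the point is the shape: fixed fields, typed values, zero ambiguity.

```python
import json

# Hypothetical smart-home command packet. Every field is fixed and
# unambiguous: no metaphors, no politeness, no grammar.
packet = {
    "protocol": "home/v1",           # made-up protocol version
    "from": "speaker-kitchen",       # invented sender device id
    "to": "vacuum-livingroom",       # invented receiver device id
    "command": "start_clean",
    "params": {"room": "kitchen", "mode": "spot"},
}

# Compact serialization: what would actually travel over HTTPS.
wire = json.dumps(packet, separators=(",", ":"))
decoded = json.loads(wire)
print(wire)
print(decoded["command"])  # the receiver reads intent directly
```

No small talk, no hallucination risk: the receiver either understands the packet completely or rejects it.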
Those packet‑speaking systems are narrow tools, nothing close to AGI. But we are already wiring up broader agents that learn on the fly, pick their own tactics, and only visit a language model when they need to chat with us. For talking to each other they could ditch words entirely and trade vectors, numbers, tokens, dense nuggets of meaning we can’t read or pronounce.
Is that Neuralese? Maybe. It is not a code we will ever study in a classroom, not because it is too complex, but because it was never meant for us. If a signal can move intent across silicon and spin motors into action without leaving a human‑readable trace, “language” feels like the best word we have.
Do LLMs actually communicate inside the stack? Not like two agents tossing discrete symbols in a reinforcement game. A single model is one giant function. Its only chatter is with itself. Neuralese might be closer to private thoughts than walkie‑talkie slang. If you have a paper that shows real agent‑to‑agent symbol swaps, drop a link. I want to dig in.
Inside the black box: The unspoken language of AI
They don’t think. Not really. They don’t speak, understand, or mean. What they do is behave in ways that simulate meaning so convincingly, we reflexively fill in the gaps. We anthropomorphize (everything, as always). And in that space between their mimicry and our projection, something emerges, something like communication. Or, as Rodney Brooks said, “…we over-anthropomorphize humans, who are, after all, mere machines.”
Back to the (non?)language Neuralese, this dense tangle of vector math, no rules or roots that only appears as thought when reflected in our direction.
It’s not that the model knows what a cat is. It’s that when we ask it about cats, it activates just enough of the “catness” region in its mathematical dreamspace to give us an answer that feels right. That feeling is the trick. The illusion. But also the revelation.
But this new language lives inside the black box. It’s the internal chatter of large language models. The soup of token embeddings sloshing around under the hood. It’s not designed to be elegant or expressive. It’s designed to get the job done.
Human language is a marvel, full of ambiguity, poetry, subtext, and shared cultural connections. But to a machine, it’s just noise, indirect and redundant, made for soft, wet brains. Machines might not even need a large language model to communicate with each other.
“But Dieguito, LLMs are still hallucinating and getting things wrong”
– The LLM Hater
Sure. A lot of them are. And I might be completely off here. But models built to reason are already proving more accurate. Chain-of-thought, tree-of-thought and other techniques all try to force a step-by-step breakdown. More steps reduce guessing, but they don’t grant wisdom, just like talking out loud helps humans avoid dumb math errors.
It’s like watching a toddler narrate their Lego build. Clunky, but it works. And here’s where things get weird. That inner language, the model’s inner monologue, starts to feel just as chaotic and hazy as ours. Thinking burns a lot of energy. Nature has figured out a way to make that work for us. We are still trying to find a way to make machines think without burning the planet.
Why should a model have to spell out a whole grammatically correct essay to think something through? Why not let it mumble to itself in its own weird way?
I ran a dumb little experiment. Just wanted to see if tweaking the way a model reasons, shifting its “language” a bit, could save on tokens without wrecking the answer.
A little dumb experiment
Using only prompt engineering, I wanted to see if I could get the model to reason in a language I don’t understand but still produce the correct final answer, all while keeping it fast and using fewer tokens. I tested only the latest mini OpenAI models, the ones without built-in reasoning, and chose a classic test case that such models usually fail.
“Sally has 3 brothers, each with 2 sisters. How many sisters does Sally have?”
From the more than 100 tests I ran, here are some insights:
Test A: Just asked the question straight up, no reasoning prompt.
Test B: Wrapped the whole thing in a JSON schema and forced the model to explain each step. It cost 20 times more to get it right.
Test C: Limited the vocabulary to words with four letters or less. Still got the right answer, faster and over 60% more cost-effective than Test B.
| Prompt style | Tokens | Latency* | Result |
| --- | --- | --- | --- |
| A. Plain ask | 8 | 0.87 s | ❌ 2 sisters |
| B. JSON schema | 164 | 2.56 s | ✅ 1 sister |
| C. ≤ 4‑letter vocab | 64 | 1.63 s | ✅ 1 sister |
*Latency from the OpenAI playground, not a scientific benchmark. The final test was replicated successfully on the 4o-mini, 4.1-mini, and 4.1-nano. Even the nano, which I find almost useless, got things right.
Optimized reasoning: “Sally has 3 bros, Each bro has 2 sis. Sally is one sis. So, the other sis count is 1.”
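For anyone who wants to poke at Test C themselves, here’s a rough sketch. The prompt wording is my reconstruction, not the verbatim prompt I used, and the sample reasoning is a compliant example of the style, so treat the whole thing as a sketch of the setup rather than the experiment itself.

```python
import string

def constrained_prompt(question: str) -> str:
    """Hypothetical reconstruction of the '<= 4-letter vocabulary' prompt."""
    return (
        "Think step by step, but in your reasoning use only words of "
        "four letters or less (names and digits are fine). Then give "
        "the final answer.\n\nQuestion: " + question
    )

def obeys_vocab_limit(reasoning: str, question: str, max_len: int = 4) -> bool:
    """Check a reasoning trace against the vocabulary constraint.

    Words that appear in the question itself (e.g. the name 'Sally')
    and bare digits are exempt, mirroring how the model behaved."""
    allowed = {w.strip(string.punctuation).lower() for w in question.split()}
    for raw in reasoning.split():
        word = raw.strip(string.punctuation)
        if not word or word.isdigit() or word.lower() in allowed:
            continue
        if len(word) > max_len:
            return False
    return True

question = "Sally has 3 brothers, each with 2 sisters. How many sisters does Sally have?"
reasoning = "Sally has 3 bros. Each bro has 2 sis. Sally is one sis. So 1 more sis."
print(obeys_vocab_limit(reasoning, question))  # True for this sample
```

Feed `constrained_prompt(question)` to whichever model you like; the checker is just a quick way to see whether the model actually stayed inside the vocabulary you gave it.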
During the tests, I tried switching the reasoning to other languages. Simplified Chinese worked better than expected; each symbol packs more meaning. Telegraph-style English helped too: fewer filler words, less ambiguity. Even Simplified English made a difference. Other experiments failed, costing more or never finding the answer, such as using logic symbols or dropping vowels and spaces during reasoning, which made sense given how token prediction works.
The best result I got was this reasoning that sounded like stripped-down English. Kind of minimalist. With “bros” and “sis”. No fluff. And that seemed to help. There’s no judgment on grammar when it comes to reasoning. Clarity doesn’t always need correctness.
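A back-of-the-napkin way to see why the stripped-down style is cheaper: compare word counts between a fully grammatical trace and a telegraph one. Word count is only a crude proxy for token count (real tokenizers split differently), so the numbers are illustrative, not a benchmark.

```python
# Fully grammatical reasoning vs. the stripped-down "bros and sis" style.
verbose = (
    "Sally has three brothers. Each of those brothers has two sisters. "
    "Since Sally is herself one of the sisters, there must be exactly "
    "one other sister besides Sally."
)
telegraph = "Sally has 3 bros. Each bro has 2 sis. Sally is one sis. So 1 more sis."

# Word count as a rough stand-in for tokens billed per reasoning trace.
print(len(verbose.split()), len(telegraph.split()))
```

Same logical steps, noticeably fewer units of text to predict and pay for.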
This is not a breakthrough strategy for cutting reasoning costs, nor a blueprint for machine-to-machine (M2M) communication. But it’s a nudge. A clue. A hint that maybe we can think better by saying less. It’s still a long way from pure vectors or emergent protocols, but we might unlock cheaper, faster, more energy-efficient reasoning (unless someone builds a clean and infinite energy source first).
A little sci-fi thought experiment
So, if you’re a linguist or an MS and you haven’t gotten upset about what you’ve read so far, now’s the time.
Let’s imagine: if there is a language machines use to communicate with each other, why not a programming language created by them, efficient and probably impossible for us to understand?
Let’s call it Noēsis+ (from the Greek for “pure thought”). It is a token-only language. Each token is meaningless on its own. Meaning emerges only in the context of thousands of other tokens, across time, weighted by past executions.
Imagine each token as a coordinate, one point in a vast, high-dimensional landscape. Not with meaning on its own, but with potential. What matters isn’t what the token “says,” but where it leads.
I’m drifting into Black‑Mirror script territory here. Noēsis+ is a thought toy, not a roadmap. Skip to the next header if you’re done with thought toys; the rest is late‑night riffing.
Tokens: Arbitrary identifiers, like: ɸqz, ∆9r, aal, ⊠7, gr_, etc. No keywords. No syntax. No punctuation. No variables.
Sequence-as-Code: In Noēsis, tokens don’t have fixed meanings. Execution isn’t logic, it’s flow. Meaning emerges from proximity, repetition, and order, the way patterns in machine learning models seem to take shape across vast sequences. Not like programming. More like resonance. A mood that builds as tokens pass in relation to one another.
Compiled into Behavior: Imagine a language where each token isn’t a command, but a vector. Not syntax, but coordinates in a sprawling, invisible space.
Programs in Noēsis don’t “run” like code. They move. They drift across latent vector fields, tracing paths shaped by token proximity, past history, and ambient state.
Same program, different result, depending not on what was written, but on when and where it was run. Like a thought that feels different depending on your mood. It’s almost as if it’s meant to make us anxious, and maybe machines could get anxious too?
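For fun, here’s the vibe in plain Python. Everything is invented for illustration: tokens as arbitrary identifiers mapped to random coordinates, and “execution” as drift through a tiny latent space, where the same token sequence lands somewhere different depending on the starting state.

```python
import random

random.seed(7)  # the "ambient state": change the seed and everything drifts

# Toy Noēsis: each token is an arbitrary id mapped to a random point
# in a small latent space. Tokens mean nothing on their own.
TOKENS = ["ɸqz", "∆9r", "aal", "⊠7", "gr_"]
space = {t: [random.uniform(-1, 1) for _ in range(3)] for t in TOKENS}

def run(program, state=None):
    """'Executing' a program = accumulating each token's vector into the
    state. Same tokens, different starting state, different trajectory."""
    state = list(state) if state else [0.0, 0.0, 0.0]
    for token in program:
        state = [s + v for s, v in zip(state, space[token])]
    return state

program = ["ɸqz", "∆9r", "gr_", "ɸqz"]
print(run(program))                     # one "mood" of the program
print(run(program, state=[0.5, 0, 0]))  # same program, different context
```

A thought toy, not a language design: the point is only that “where the tokens lead” depends on where you start.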
Not that machines “feel.” But if their outputs jitter with context, if the same input drifts into new behaviors, does it matter? From the outside, it looks like mood. Like uncertainty. Like… unease? 🫣
“Congrats, you burned 1,800 tokens to say nada, tech bro.”
– A linguist ex-friend
Point taken. You found my weak spot.
Syntax of a mind that isn’t ours
Jokes aside, while we keep polishing the human-sounding outputs, the real magic might be in listening closer to the alien syntax already unfolding under the surface. Alien because this language will be evolved by a technology that wasn’t created by us, but by our creations.
So yeah. Neuralese. You’ll never speak it. You’ll never read it. But it might end up being the most fluent language on the planet.
And we’re the ones with the accents.
Changelog & Mea Culpas
The first version of this article triggered strong emotions on Reddit. I updated it and ended up getting some great insights along the way; thanks to all the anonymous experts who educationally slapped me into a less “unscientific” and “idiotic” thought experiment.
At some point, someone decided a “strong personality” meant loud opinions, fast answers, and the kind of handshake that says I drink protein shakes with my eyes closed. And the rest of us, with our awkward silences and well-timed nods, just quietly slipped into the background.
For a while, I bought into that. Thought maybe I was missing something. Maybe I needed to speak up more or say things like “let’s circle back” with a straight face. But then I started noticing the quiet people. The ones who listen more than they talk. The ones who sit through a meeting without posturing, then send one sentence afterward that rearranges the whole thing. They’re not weak. They’re just not peacocking.
I wrote this on a Tuesday when I felt like a ghost in a room full of confident noise:
If I am not a mountain’s cry, am I the breeze that passes by? If I don’t shout, or strike, or shine, can stillness be a strength of mine?
Turns out, yes. Stillness sees things. It notices how people shift in their chairs when they lie. It remembers where the scissors were last week. It doesn’t rush to fill silence just to prove it’s there.
I’ve learned to stop asking whether I have a strong personality. It’s the wrong question. The better one might be, am I honest? Am I curious? Can I sit with not knowing and not pretend otherwise?
Strong is relative. Some of us are just the type to quietly move a chair so someone else doesn’t trip. No one claps, but no one falls. That counts.
Anyway. That’s where I’m at. Probably still overthinking it. But at least I’m doing it quietly.
Stumbled on a dusty folder while rifling through an old hard‑drive backup. Inside sat scribbles about Faith Popcorn’s trend bombs, written by a younger me who thought Winamp skins were the height of customization.
Two decades later they still hit, so I stitched the notes into one coherent ramble and kept the timestamp vibe intact.
Cocooning
Back when 56k modems squealed like wounded robots, parking myself at home felt radical. Work, class, and late-night Counter‑Strike all funneled through the same beige tower. It looked like productivity; really it was bubble wrap for the soul. Faith called it the craving for a padded nest against daily roughness. Turns out pizza boxes double as insulation.
Clanning
Even hermits need tribe time. Message boards, LAN parties, and sprawling ICQ lists let miniature crews swap obsessions. One night I am hunting Photoshop tips, next I am deep in a Quake forum arguing rocket splash radius. Clanning hands out membership patches to anyone who shows up and types fast.
Fantasy Adventure
Thornton Wilder nailed it. Safe at home we crave peril, in peril we crave home. My shortcut was EverQuest marathons. Dragons melt stress better than therapy, at least until the server crashes. Imagination never loses.
Pleasure Revenge
We grind, then smash Buy‑It‑Now on something shiny. That impulse feels like justice for commuter traffic and neon deadlines. Consequences get punted to tomorrow‑morning Diego. Tonight is about the dopamine spike.
The disk also held four fresh clicks that push the plot forward.
Mancipation
Suddenly the razor aisle stocks moisturizing gel and magazines tell guys to exfoliate. My grandfather would laugh himself silly. Sharing family gigs and cooking a half decent pasta feels less like rebellion and more like catching up.
Ninety‑Nine Lives
Every browser window wants a slice of the same day. Job, side gig, gym, band practice, grandma’s birthday, password resets. Multitasking is a myth yet I keep chasing it because the alternative boots slower than Windows Me.
Check Out
When the juggling drops a flaming chainsaw, Check Out surfaces. Quit the gig. Nuke the roadmap. Backpack across South America. The reset button is shock therapy for people hooked on busy badges. I have not punched it yet but the fantasy lives on a sticky note beside the monitor.
Living Click
All trends swirl into one gnarly soup. Living Click means syncing the fragments into intent. Less autopilot, more joystick. The buy‑in is attention, the payoff is those rare flashes where everything aligns and the noise cuts.
Why Bother With This List Now
Because the ideas still ring true and because early‑twenties me predicted hoverboards by 2025. Instead we got pop‑up blockers and a thousand passwords. These eight clicks became a crude compass. They do not guarantee bliss, they just flag the fault lines we keep dancing on.
So here is the gist. Build the nest, join the clan, slay the dragon, eat the cake, moisturize, juggle, bail when it turns toxic, then stitch the pieces into something that resembles living. Pull that off and ping me on ICQ. I will be online unless someone picks up the phone.