Justify my mistake, please!

If you’re a non-engineer experimenting with prompt engineering, here’s the deal: understanding how LLMs actually work isn’t just a nice-to-have; it’s non-negotiable.

Take the Chain of Thought technique as an example. Asking the model to walk through its reasoning after it has already committed to an answer is like trying to drive a bolt with a hammer: it’s just not going to work. At that point, the model isn’t “thinking” or “deliberating” anymore. It has already locked in its answer, and anything it produces is post-hoc rationalization, quite possibly of something wrong.
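
To make the timing concrete, here’s a minimal sketch in plain Python. The question, the wording of the prompts, and the wrong answer are all made up for illustration; the only point is where the reasoning request sits relative to the answer.

```python
# A toy question that's easy to get wrong without deliberate reasoning.
question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Chain of Thought used at the right time: the reasoning is requested BEFORE
# the answer, so the intermediate steps can still influence the conclusion.
cot_prompt = (
    f"{question}\n"
    "Think through the problem step by step, then give your final answer "
    "on the last line."
)

# Chain of Thought used too late: the answer is already fixed, so all the
# model can do is rationalize it, even though it's wrong ($0.10 here).
post_hoc_prompt = (
    f"{question}\n"
    "The answer is $0.10. Explain the reasoning that leads to this answer."
)

print(cot_prompt, post_hoc_prompt, sep="\n---\n")
```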

Sounds obvious, right? But plenty of articles and tools (looking at you, LangChain) get this wrong. They miss the timing and end up asking models to evaluate answers they’ve already committed to, in ways that don’t make sense, and the conclusions are garbage.
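
For contrast, here’s a sketch of that self-evaluation trap, again with made-up prompt text rather than any real library’s templates: if the verdict is requested before the reasoning, the “evaluation” is just the same post-hoc justification in a different costume.

```python
# A placeholder draft answer we want the model to double-check.
draft_answer = "The ball costs $0.10."

# Anti-pattern: the model is asked to judge its own answer with the verdict up
# front, which mostly invites it to agree with itself and then rationalize.
verdict_first_eval = (
    f"You previously answered: {draft_answer}\n"
    "Was that correct? Reply yes or no, then justify your reply."
)

# Better timing: ask for an independent solution first, and only then a
# comparison, so the evaluation has a chance to contradict the draft.
reason_first_eval = (
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more than "
    "the ball. Work out the ball's price step by step from scratch, then "
    f"say whether this draft answer holds up: {draft_answer}"
)

print(verdict_first_eval, reason_first_eval, sep="\n---\n")
```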

If you want to build better AI solutions, get the basics down. Knowing how LLMs tick will save you headaches, produce smarter results, and make for happier users.

