  • Gut Feeling Prioritization

    I’ve always found myself at a crossroads when it comes to roadmap prioritization. Juggling the wait for conclusive A/B test results, the flood of customer feedback, and the need to keep strategic planning moving has been a constant challenge for me.

    I recently rewatched one of my favorite Goodwater Masterclass episodes featuring Sean Rad (Tinder). It reminded me of stories about successful companies where, during the early stages of their product development, the most impactful ideas were top-down and based on gut feelings.

    Then, as the company grew, formal processes became necessary to deal with the increase in customer requests and the influx of new ideas from the team, which made decision-making slower.

  • 545-mile ride

    When I signed up for the @AIDSLifeCycle 545-mile ride from SF to LA, I underestimated the training and fundraising challenge. As usual! So, I’m building some stuff with FlutterFlow, @Strava and @DonorDrive to keep me accountable (a rough sketch of the Strava piece is below).

    I’m enjoying it. As usual! :p
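
    Here’s a rough Python sketch of the Strava half of that accountability loop, separate from the FlutterFlow app. It calls Strava’s standard /athlete/activities endpoint; the STRAVA_TOKEN environment variable and the weekly mileage goal are placeholders for illustration, not details of the actual build.

      # Rough accountability check: sum recent ride mileage from Strava.
      # Assumes a valid OAuth access token with activity:read scope.
      import os
      import requests

      STRAVA_TOKEN = os.environ["STRAVA_TOKEN"]   # placeholder: your own token
      GOAL_MILES_PER_WEEK = 100                   # placeholder training target

      resp = requests.get(
          "https://www.strava.com/api/v3/athlete/activities",
          headers={"Authorization": f"Bearer {STRAVA_TOKEN}"},
          params={"per_page": 30},                # most recent 30 activities
          timeout=10,
      )
      resp.raise_for_status()

      # Strava reports distance in meters; keep only rides and convert to miles.
      miles = sum(a["distance"] for a in resp.json() if a["type"] == "Ride") / 1609.34

      print(f"Logged {miles:.1f} mi across the last 30 activities "
            f"(target: {GOAL_MILES_PER_WEEK} mi/week).")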

  • Is my brain LLM-based?

    When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought: of course I’m not a robot.

    On the other hand, when my keyboard guesses the next word I’m about to text, I start to doubt myself.

    Just thinking out loud about building something to help me type with more confidence and sound more natural.

    Food for thought: https://fortune-com.cdn.ampproject.org/c/s/fortune.com/2023/06/13/chatgpt-like-human-language-robots-linguistics-artificial-intelligence/amp

  • Justify my mistake, please!

    If you’re a non-engineer experimenting with prompt engineering, here’s the deal: understanding how LLMs actually work isn’t just a nice-to-have, it’s non-negotiable.

    Take the Chain of Thought technique as an example. Using it after the model has already decided is like trying to screw in a bolt with a hammer: it’s just not going to work. At that point, the model isn’t “thinking” or “deliberating” anymore. It’s already locked in its answer and might just be scrambling to justify something wrong. (There’s a small sketch of the difference at the end of this post.)

    Sounds obvious, right? But tons of articles and tools (looking at you, LangChain) get this wrong. They miss the timing and end up using models to evaluate their own responses in ways that don’t make sense, leading to garbage conclusions.

    If you want to build better AI solutions, get the basics down. Knowing how LLMs tick will save you headaches, produce smarter results, and make for happier users.
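
    To make the timing point concrete, here’s a minimal sketch contrasting the two orderings. call_llm() is a hypothetical stand-in for whatever chat-completion client you use, and the discount question is just an illustration.

      # Minimal sketch of why ordering matters with Chain of Thought prompting.
      # call_llm() is a hypothetical stand-in for a real chat-completion client;
      # it returns a canned (wrong) answer so the script runs end to end.
      def call_llm(prompt: str) -> str:
          return "90"  # mocked reply; swap in a real API call

      question = "A jacket costs $120 after a 25% discount. What was the original price?"

      # Right: ask for the reasoning BEFORE the answer, so the final answer is
      # conditioned on the step-by-step work the model just produced.
      cot_prompt = (
          f"{question}\n"
          "Reason step by step, then give the final answer on the last line "
          "as 'Answer: <value>'."
      )

      # Wrong: grab a bare answer first, then ask for the reasoning. The answer
      # is already locked in, so the "explanation" can only rationalize it,
      # even when it's wrong (the correct answer here is $160).
      bare_answer = call_llm(f"{question} Answer with just a number.")
      post_hoc_prompt = (
          f"{question}\n"
          f"The answer is {bare_answer}. Explain the reasoning behind it."
      )

      print("--- reason-then-answer prompt ---\n" + cot_prompt)
      print("\n--- answer-then-justify prompt ---\n" + post_hoc_prompt)

    The first prompt lets the answer depend on the reasoning the model just wrote; the second can only dress up whatever came out of the bare call, which is exactly the failure mode above.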