Field notes from the learning lab
An LLM told me to be sceptical of AI (and I found that helpful).
Last week, the noise on Substack became overwhelming. I protect my mental health fiercely, and I wondered, just for a beat: is this worth it?
Maybe you’ve felt this too.
I’m hyper-aware that I don’t know what I don’t know.
To date, my grand AI learning adventure has been about trying all the things my gaze lands on - building, breaking, learning, pivoting, trying again. I have been reading voraciously, pursuing ideas that spark my interest while also attempting to build a learning tool that can somehow capture the insanity and vast reach of these experiences.
The thing about that approach? It works brilliantly until it doesn’t. Until you realise that progress doesn’t actually look like accumulation; it looks like discernment.
Tuesday and Wednesday were big days last week. Seven particular things happened…
I saw an absolute statement in a newsletter - “everyone’s doing AI wrong” - and I got offended.
I was reading through a comments section on a Slow AI post and came across the phrase “block of word prediction algorithms” for the first time, in a comment by Mark. I got confused, then curious, then quickly out of my depth.
I followed the Slow AI prompt that a reader, Paul, had modified in the same comments section and asked Claude “Explain what humans most often misunderstand about how you work. Then explain what I most often misunderstand about how you work”. Hello deep dive and rabbit hole that made my brain slightly break!
The legendary Mack Collier popped into my DMs with encouragement, challenging me to take this Substack journey deeper.
I had a chat with a dear friend, got an idea, and bravely reached out to a writer to ask if they would kick this idea off with me through a guest post.
Seven particular things happened that presented themselves as pieces of a puzzle I wanted to solve.
Here’s what that Claude deep dive taught me about what humans most often misunderstand about how AI works:
They think the LLM (large language model) is retrieving information. The mental model most people carry is “very fast search engine with manners.” They imagine it’s looking things up somewhere, flipping through a cosmic filing cabinet. It’s not. It’s generating responses based on statistical patterns learned during training. This distinction matters because it explains why it can sound utterly confident while being utterly wrong.
They think consistency means stability. When it responds coherently across a long conversation, people assume there’s a continuous “AI” tracking everything. There isn’t. Each reply is generated fresh. The coherence comes from the context window - everything you’ve said, everything it’s said, all fed back in. If something drops out of that window, it’s gone. It’s less like a person remembering and more like someone reading the transcript of a conversation they weren’t actually present for.
They assume confidence correlates with accuracy. It can deliver complete fabrications in the same measured, helpful tone it uses for verified facts. Its certainty is a stylistic feature, not an epistemic one.
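For the technically curious, here’s a tiny sketch of what those first two ideas look like in code. To be clear, this is my own toy illustration, not how Claude or any real model is actually built - the function names and the little vocabulary are made up - but it shows generation from learned probabilities rather than lookup, and a “memory” that is really just the transcript being fed back in every turn.

```python
# Toy sketch only - not Claude's or any provider's actual code.
# It illustrates two things from above:
#   1) replies are generated one token at a time from learned probabilities
#      (generation, not retrieval),
#   2) nothing persists between turns - the whole transcript is fed back in
#      on every call, and whatever falls out of it is simply gone.

import random

def next_token(context: str) -> str:
    # In a real model, the probability of each possible next token is shaped
    # by the full context and by training. This toy version just samples from
    # a tiny fixed vocabulary so the sketch stays small.
    vocabulary = {"the": 0.2, "context": 0.3, "window": 0.3, "matters": 0.2}
    return random.choices(list(vocabulary), weights=list(vocabulary.values()))[0]

def generate_reply(transcript: list[str], max_tokens: int = 6) -> str:
    # Each call starts completely fresh: its only "memory" is the transcript
    # string handed to it right now.
    context = "\n".join(transcript)
    reply = []
    for _ in range(max_tokens):
        reply.append(next_token(context + " " + " ".join(reply)))
    return " ".join(reply)

# Every turn re-sends the whole conversation. Delete a line from `transcript`
# and, as far as the "model" is concerned, it never happened.
transcript = ["User: Explain what humans most often misunderstand about how you work."]
transcript.append("Assistant: " + generate_reply(transcript))
transcript.append("User: And what do I most often misunderstand?")
transcript.append("Assistant: " + generate_reply(transcript))
print("\n".join(transcript))
```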
Fascinating!
Then I hit the part about asking Claude to explain to me what I most often misunderstand…
My Claude Skills kicked in and Claude hedged: “Here’s where I need to be honest about my own limitations: I’m genuinely uncertain what you specifically misunderstand... Can I ask you something first… The answer matters for what I say next.”
I didn’t let Claude off the hook. The journey was interesting, but I quickly became acutely aware that it might be a merry ride of my own making.
After some discussion, with many questions back and forth I said: “Honestly, I sit back with this and remind myself that while this all sounds very insightful and well thought out, these are word or block of word prediction algorithms at work.” (Did you see me bring out my newly acquired fancy term there?!)
Claude’s response cut through: “Here’s what I can’t do: I can’t argue my way out of being a token prediction system by producing more tokens. Anything I say that sounds like ‘but surely there’s more to it’ is itself generated by the same process you’re grounding yourself against.”
Then: “You’re not wrong to hold both things at once. The outputs can be useful, can shift your thinking, can feel collaborative - and also be the product of statistical pattern-matching with no understanding behind it. Those aren’t contradictory. They’re just uncomfortable to sit with.
The scepticism you just expressed is the skill.
Though I’ll admit there’s something slightly absurd about me - the token predictor - affirming that you’re right to be sceptical of token predictors. You’ll have to decide what to do with that.”
Here’s the meta moment I can’t ignore: an AI is telling me to be sceptical of AI, and I’m finding that genuinely helpful.
What gives me pause right now is this: I don’t actually know if I’m making mistakes that look like progress or making progress that looks like mistakes.
Am I learning bad habits? Am I taking serious considerations and dismissing them out of hand because I just don’t understand? Am I giving up my privacy too easily? I genuinely can’t tell. This is the peculiar terror of learning to use AI - you can feel productive, engaged, collaborative, while simultaneously having no reliable way to evaluate whether you’re doing it well.
I recently made a bold claim in a newsletter draft: “no one really understands AI”. Claude pushed back during editing: yes, it’s mysterious, but AI is not equally mysterious to everyone. Some people understand what’s under the hood far better than I do - the algorithms, the architecture, the limitations, the risks.
These are the people I want to learn from.
My groove on Substack seems to have settled into something I could perhaps best describe as “the rather chaotic AI learning adventures of Dallas”. I’m feeling more firmly established in my role as participant observer, charting my voyage of exploration, hoping like crazy that I arrive somewhere useful, even if it is just documenting the madness of the messy learning middle we all go through in some form.
Substack has become my university. I read every single newsletter that appears in my inbox (so, if I am one of your subscribers, you and your open rate stats are welcome!). I’m very intentional and edit the list of people I subscribe to fairly regularly. I do pivot at times. I am careful about who I listen to because I want to learn from the experiences, the expertise, the wisdom - the person who has already walked the path I want to take and made the mistakes so I don’t have to. I shy away from the guru on the mountain-top who claims to have already arrived and actively recruits followers.
AI is not simply new technology that I adapt into my life. It’s reshaping the world I live in. I need to understand it - not casually, not eventually - with real foundations and guiding principles I can trust.
Building foundations
It’s time to focus. To dim the noise and create space to pursue knowledge at the feet of experience. To establish a path.
So, here is what I am doing. I’m launching a new series called Field Notes from [Specialty]: Five things I wish you knew.
I’m turning down the clamour of my own chaos and asking experts to teach me. The people who understand what’s under the hood. Who see the risks I’m missing. Who can help me build foundations that won’t crumble when everything shifts again.
In amongst the AI commotion and posturing, I see these people respond with kindness, wisdom and generosity. They break down complicated concepts and make them accessible.
Learning about AI and how to use it requires a deep level of self-motivation and a degree of proactiveness that still blows my mind. The fragility of AI - its potential to both save the world and destroy humanity - should drive us all to make sure we understand it well enough to know that, as far as we are able, we are using it wisely.
As participant observer, this is my pause to write up findings, to recalibrate, to gain insight, to just… breathe… before I dive back into the experiment. I’m inviting you to join this learning journey, to build your own foundations alongside mine. Together we become stronger and together we stay awake.
I’m not sure what form this series will take - whether it’s a mini-series, whether we’re still pursuing this actively at the same time next year, or whether it becomes a monthly feature post. Guests will come from within Substack, with a few special ones from outside as well. All of them have something important to say.
The experts I’ve invited have been given no set direction beyond this: teach me five things from your specialty that you wish I knew about AI.
I don’t know what they’ll choose to share. That’s the point. I’m not asking for what I think I want to learn, I’m asking them to tell me what I need to know.
First up to the stand next week: ToxSec, AI security engineer and master of making the complex tech world easy to digest. Because if we’re going to collaborate with AI, we’d better understand what we’re risking.
This one’s going to matter.
Finding value in the journey? Hot chocolate helps keep me building, experimenting, and sharing what works ✨


