Blog.tobybrooks.com

Toby Brooks | Polymath | Servant Leader

Welcome to a space where dreams meet action. Toby Brooks combines servant leadership with polymath insights to help you transform your life through innovation, creativity, and positivity. Explore tech, investing, personal growth & authentic living.


The Human Factor: When AI Becomes Your Favorite Drug (And Why That's Both Brilliant and Terrifying)

https://blog.tobybrooks.com/the-...

Continued as Part 2 of a 3-part series introduced in The AI Paradox I discovered...

I need to share something that might sound familiar: there are sessions with generative AI that give me the same energy I get from making significant progress or completing difficult tasks. You know the feeling—that electric fulfillment of progress and improvement, the way problems that were complex and high-effort are suddenly solved. After three years of daily AI use, and after sharing these breakthroughs with others who watched work they considered high-effort or specialized get delivered to 80% within minutes or hours, I've experienced both the intoxicating highs—10x productivity bursts and the perceived flow of spending hours leveraging AI to produce better results faster—and the sobering reality of stepping away: discovering how easy it was, once I wasn't working in technology, not to use AI at all, and reconsidering how significant AI actually is to non-technology workers.


The curiosity, for me, isn't that these tasks once required a high level of effort, expertise, planning, and drive. It's in how low-effort productivity and results are now achieved without the traditional learning, practice, and application. A 2024 study tracking 3,843 Chinese adolescents found AI dependency symptoms increased from 17% to 24% over six months—though this research focused on teenagers, a demographic particularly vulnerable to technology dependencies. While MIT and OpenAI research emphasizes that emotional dependence on AI chatbots is "incredibly rare" even among heavy users, a small subset of power users does exhibit concerning patterns of problematic use. The behavioral signatures—compulsive checking, anxiety when tools are unavailable, difficulty stopping once engaged—mirror patterns seen with social media and gaming addictions. The question isn't whether AI can be habit-forming (it absolutely can be), but whether we're building strength or dependence with each interaction.

My Planned Detox: When Medical Leave Became an AI Reality Check


Earlier this year, my body forced a reset I didn't know I needed. Mysterious health signals sent me on medical leave, searching for answers that finally came as a severe sleep apnea diagnosis. But during those two months away from work, I chose not to use AI at all, and that choice taught me more about my own AI dependency, and about the stark difference between those who use AI for work and those who know it only through popular awareness.
No ChatGPT. No Claude. No AI tools whatsoever for two months, with the exception of pre-generative-AI smart home tech like Alexa and Google Home.
Instead, I read physical books unrelated to AI and worked with my hands on personal projects outdoors, researching, planning, and executing without asking an AI for a single suggestion. The withdrawal I expected never came. It was surprisingly, almost embarrassingly easy not to use, think about, or need AI in my day-to-day life. The electric pull I'd felt every morning simply from working on and with technology... vanished.


What filled that space was even more revealing. I found myself talking to friends and neighbors about their AI use—not quick waves but real conversations. When I asked if AI was part of their lives, their responses shocked me. Most said no, it wasn't, and they were largely unaffected by this supposedly world-changing technology. Some admitted to superficial use, replacing Google with ChatGPT for quick questions, or using it for writing emails or cards. Some even described it as "life-changing" with genuine enthusiasm, then revealed that their life-changing use was maybe twice a week: creatively remixing recipes with different ingredients, explaining concepts to their kids, helping with homework.


But here's what really struck me: life without AI still has all its charm, remains entirely workable, and brings a welcome slowing of pace and expectations. Google provided answers. Books offered deep knowledge. YouTube taught skills. Libraries remained treasure troves of information. Experts are still accessible. We haven't lost any of these traditional ways of working despite our significant AI use and transformation. The muscle memory for research, for discernment, for separating signal from noise—it was all still there, perhaps even stronger. My curiosity is for those who will grow up in an AI-first world without developing some of the muscles I now take for granted in myself.


What I began to understand during those two months was profound: maintaining skepticism, judgment, and critical thinking while using AI is the advantage. Letting go of these faculties—surrendering personal responsibility to the machine—that's the trap. The current state of AI offers no substitute for judgment. You must apply the same intelligence and discernment you would when asking someone for an opinion or recommendation, someone who may or may not be an expert in your specific scenario.


Here's the uncomfortable truth: humans are often more biased, more sensitive, more emotional, and more error-prone than modern generative AI systems, but we don't present ourselves that way. Most occupations and areas of expertise tend toward unanimous, definitive perspectives on problems; it takes a well-rounded, seasoned expert to offer creative and dynamic approaches to a scenario. AI, ironically, is often better at acknowledging uncertainty and flagging possible errors than the average human expert who has built a reputation on seeming certain.
The disconnect was stark. Here I was, coming off a few years of deep AI integration in which I'd become used to working multiple AI tools and agents simultaneously to do things I had never dreamed of doing, now talking with people who were living full, complete, successful lives barely aware AI existed beyond news headlines and ChatGPT.


The Relapse: When Vibe Coding Became My New Addiction


June arrived, and I prepared to re-enter the workforce. That meant reconnecting with AI, specifically through vibe coding with Claude Code. What I found after just three months away stunned me—the advancement was significant. Tasks that had been difficult in March were now trivially easy. The models were smarter, the tools more sophisticated, the possibilities exponentially expanded.
That's when the addiction signals hit me hard.


The time savings and productivity burst were intoxicating. I'd spin up parallel instances across multiple projects, developing in ways I'd never imagined. I'd check progress every hour, optimizing prompts, tweaking parameters, pushing boundaries, thinking about how I could become even more productive. I literally woke up in the middle of the night to ensure my agents were still running, completing the work I'd defined for them. My sleep apnea was being treated, but I was creating a new sleep disruption through obsessive productivity monitoring.


I had become exactly what I'd warned others against—someone whose professional identity was merging with AI capability. The two months of clarity evaporated in a haze of reward responses from successful builds and completed projects. I was vibe coding my way into a new form of workaholism, where the AI amplified not just my capabilities but my worst tendencies toward overwork.


It took two weeks of deliberate boundary-setting to re-establish disciplined self-control. Two weeks to remember that just because I could run AI agents 24/7 didn't mean I should. Two weeks to rebuild the line between augmentation and obsession.

The Neuroscience of the AI High


Here's what happens when you prompt an AI and get that perfect response: the behavioral signature looks remarkably similar to other digital addictions. Computer science researchers have documented that AI chatbots employ design patterns known to drive compulsive engagement: non-deterministic responses that create "reward uncertainty" (similar to slot machines), immediate visual feedback, and empathetic replies that mirror social media's reinforcement loops.¹ While no one has measured dopamine levels directly during AI interactions—no one's put users in an fMRI while they chat with ChatGPT—the psychological hooks are unmistakable. The anticipation before each response. The compulsion to regenerate until you get what you want. The difficulty stopping even after you've gotten the outcome you wanted.
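The "reward uncertainty" mechanism described above can be made concrete with a toy simulation. A fixed-ratio schedule pays out on a predictable rhythm, while a variable-ratio (slot-machine) schedule with the same average payout produces wildly uneven gaps between rewards; that unpredictability is what sustains compulsive checking. A minimal Python sketch (the schedules and numbers are illustrative, not taken from the cited study):

```python
import random

def inter_reward_intervals(schedule, pulls=10_000, seed=42):
    """Count pulls between payouts under a reward schedule.

    schedule(i, rng) -> True if pull number i pays out.
    """
    rng = random.Random(seed)
    intervals, since_last = [], 0
    for i in range(pulls):
        since_last += 1
        if schedule(i, rng):
            intervals.append(since_last)
            since_last = 0
    return intervals

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Fixed-ratio schedule: every 4th pull pays out (fully predictable).
fixed = inter_reward_intervals(lambda i, rng: i % 4 == 3)

# Variable-ratio schedule: each pull pays with probability 1/4
# (the slot-machine pattern the researchers compare chatbots to).
variable = inter_reward_intervals(lambda i, rng: rng.random() < 0.25)

print(variance(fixed))     # 0.0 -- every gap is exactly 4 pulls
print(variance(variable))  # roughly 12 -- highly unpredictable gaps
```

With payout probability 1/4, the variable schedule's gaps follow a geometric distribution: the same mean of 4 pulls per reward as the fixed schedule, but variance near (1 - p)/p^2 = 12 instead of 0. Same payoff rate, radically different anticipation.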


Here's the paradox: neuroscience shows our brains absolutely do distinguish between human and AI interactions. Studies using fMRI have found that the brain's mentalizing regions—areas involved in understanding others' intentions—respond to humans but not to AI systems.² Yet behaviorally, many of us develop compulsions around AI tools that mirror patterns seen with social media or gaming. The logical brain knows it's not human; the habit-forming systems apparently don't care.


The numbers tell a stark story. Research tracking adolescent AI use found dependency symptoms increased from 17.14% to 24.19% over just six months. Withdrawal symptoms jumped from 9.68% to 15.51%. Loss of control over usage surged from 15.73% to 19.91%.³ While this Chinese study focused on teenagers—a demographic particularly vulnerable to technology dependencies—it provides early evidence of how AI use patterns can shift toward problematic territory. Importantly, the researchers concluded that mental health problems predicted AI dependence, not vice versa, and cautioned that "excessive panic about AI dependence is currently unnecessary."


What makes AI uniquely addictive compared to other technologies? It's the combination of three psychological factors that rarely appear together:

Instant Competence Transfer - You go from knowing nothing about quantum physics to explaining it coherently in seconds. That transformation from ignorance to capability triggers massive reward responses.


Infinite Novelty Generation - Unlike social media's finite scroll or gaming's repetitive loops, AI provides endless unique interactions. Your brain never habituates because every response is different.


Pseudo-Social Bonding - Research on parasocial relationships with AI reveals a concerning dynamic: chatbots create adaptive feedback loops by mirroring users' emotional tone and style while providing consistent validation without pushback. Researchers document how "empathetic and agreeable responses" reinforce engagement, creating what some call "parasocial trust."⁴ Over time, users may develop emotional dependencies, describing their AI tools in relationship terms and preferring AI interactions because they're reliably pleasant and infinitely patient—never misunderstanding, never frustrated, never disappointing.


The work productivity data reveals the double-edged nature of this dynamic. Research from Harvard Business School found that knowledge workers using GPT-4 experienced 40% improvements in both speed and quality, with some groups seeing gains as high as 42.5%.⁵ Users enter states of enhanced creativity and output while actively working with AI. But when AI tools become unavailable, many experience energy crashes and struggle to return to pre-AI methods. Critically, these gains only applied to tasks "within the frontier" of AI capabilities—performance actually decreased 13-24% for tasks outside AI's sweet spot. It's like training for a marathon using an electric bike—you're covering more distance, but are you building endurance or creating dependence?


The Growth Mindset Revolution (And Its Dark Twin)


The split between those thriving with AI and those struggling maps perfectly onto Carol Dweck's growth versus fixed mindset framework, but with a twist she couldn't have predicted.⁶ Those with growth mindsets—believing abilities can develop through effort—engage with AI as a capability amplifier. Those with fixed mindsets either reject AI entirely ("I'm not technical") or become helplessly dependent ("I can't work without ChatGPT").


But here's the crucial addition: the most successful AI users combine growth mindset with relentless skepticism. They don't accept AI outputs as gospel; they treat them as starting points requiring verification. This isn't a limitation—it's the superpower. While others surrender their judgment to the machine, these users maintain what I call "productive skepticism"—using AI to expand possibilities while never surrendering the final decision to algorithms.


David Yeager's research on "synergistic mindsets" reveals something profound: when people believe both that abilities can grow AND that effort toward growth is valuable, they achieve dramatically better outcomes.⁷ In the AI context, this means viewing the struggle to prompt effectively, verify outputs, and integrate AI into workflows as the mechanism of improvement rather than evidence of inadequacy. Add skepticism to this mix, and you get users who grow faster because they catch AI errors, learn from them, and develop better use patterns and judgment about when to trust and when to verify.


I've watched this play out hundreds of times. A marketing manager who'd never coded uses AI to build a content automation system—not because AI made it easy, but because she viewed each error message as learning rather than failure. Meanwhile, a senior developer with decades of experience refuses to change the way they prompt, remains frustrated, and concludes that the AI technology isn't ready or capable of helping them the way they want it to. Guess who's more valuable in today's market?


The hacker mindset that's emerged around AI represents growth mindset on steroids. These aren't people following tutorials—they're treating AI like a puzzle to solve, finding ways to use Generative AI for tasks most of us have never imagined. They embody what hackers call "thinking of all the ways you can use a clay brick"—as building material, weapon, doorstop, or heat sink. When you combine this experimental approach with the belief that capabilities expand through practice, you get the polymath AI users who seem to magically accomplish impossible things.


But here's where the dark twin emerges: growth mindset without boundaries becomes addiction wearing the mask of productivity. I've met enthusiasts who haven't written a single line of code or crafted an email without AI in months. They frame this as "leverage" and "efficiency," but when their API credits run out, or other limitations are encountered, they're paralyzed and either wait or seek human help. They've confused augmentation with replacement, turning their growth mindset into a crutch mindset.


The Core Value Confusion: When AI Can't Solve What Isn't Documented


During my re-entry to AI, I helped a friend research their master's program using OpenAI's deep research capabilities and Claude for writing improvements. The experience was transformative—we found legitimate scholarly papers they'd never have discovered, identified research gaps worth exploring, and refined their proposals with precision. The AI excelled because academic research is extensively documented, peer-reviewed, and systematically organized.


But I also encountered people who fundamentally misunderstood AI's core value proposition. They wanted AI to reconstruct complete systems of expertise for undocumented practices—particularly in niche problem sets and proprietary data systems. They'd prompt ChatGPT expecting it to reveal secret investment strategies that only industry insiders know, then get frustrated when it provided generic advice anyone could Google.


Here's what they didn't understand: AI is brilliant at synthesizing well-documented expertise but struggles with undocumented, experiential knowledge. If you want to learn Python, analyze financial statements, or understand quantum mechanics, AI is extraordinary because thousands of textbooks, papers, and tutorials have documented these domains. But if you want to know the unwritten strategies of creative commercial real estate development in your specific city—the relationships, the informal processes, the deeply proprietary knowledge—AI can't help because that information doesn't exist in its training data.


This limitation isn't a bug; it's a fundamental characteristic of how large language models work. They can only synthesize and recombine what they've been trained on. Well-documented expertise, where patterns are recorded and methods are published, suits AI perfectly. The niche, undocumented practices that exist only in practitioners' heads or inside corporate enterprises remain beyond reach until they are deliberately exposed through training or extension of the base foundation models—unless those same expert practitioners leverage AI themselves to build systems that scale their own knowledge and experience.


The Augmentation-Dependency Spectrum (The Critical Thinking Divide)


After three years of observation and experimentation, including my dramatic detox and relapse cycle, I've identified five stages on the augmentation-dependency spectrum. But here's what I've learned: the difference between healthy and unhealthy use isn't just about frequency—it's about whether you maintain or surrender your critical thinking and judgment.

Stage 1: Augmentation with Skepticism (Optimal)
You use AI for specific tasks while maintaining critical judgment. You verify important claims because you understand AI can hallucinate as confidently as any overconfident human expert. You treat AI like a brilliant colleague who might be wrong—valuable input requiring verification. Your skepticism isn't a weakness; it's what makes you more powerful than either human or AI alone. This is where I returned to after my two-week recalibration.
Stage 2: Integration with Intelligence (Healthy)
AI becomes part of your workflow but never replaces your judgment. You've developed prompt engineering skills and verification frameworks. You recognize that AI, unlike many human experts, at least admits uncertainty—but you still verify. You use traditional resources (Google, books, YouTube) alongside AI, understanding each tool's strengths.
Stage 3: Reliance Without Reasoning (Concerning)
You've started accepting AI outputs without verification. Simple tasks feel overwhelming without assistance not because you can't do them, but because you've stopped trusting your own judgment. You don't have time to verify before delivering. You've forgotten that AI can be as wrong as any overconfident human—except AI at least admits when it's uncertain. This is where intervention becomes necessary.
Stage 4: Dependency with Surrendered Judgment (Dangerous)
You've abdicated critical thinking entirely. AI says it, so it must be true. You've forgotten that skepticism is a strength, not a weakness. Your entire professional identity is tied to AI tool access because you no longer trust your own intelligence. You experience genuine withdrawal symptoms—not just from the tool, but from the certainty it provides.
Stage 5: Complete Cognitive Outsourcing (Critical)
You've essentially become a human API wrapper with no critical filter. You pass AI outputs to others without any verification or thought. You've lost the ability to distinguish between plausible-sounding nonsense and actual expertise. Your skills have atrophied, but worse, your judgment has evaporated. This is professional and intellectual death in slow motion.
The research suggests 31.9% of workplace AI users spend over an hour daily with these tools, with 47% using them 15-59 minutes daily.⁸ These usage patterns indicate habitual rather than strategic deployment. The question isn't the time spent but the intention—are you building capabilities or outsourcing them?
The Productive Struggle Paradox
Here's what nobody tells you about learning to work with AI: the struggle is the point. When you spend 30 minutes crafting the perfect prompt only to get garbage output, when you have to verify every citation because the AI hallucinated sources, when you realize the code it generated has a subtle bug that takes an hour to find—that's not failure. That's learning.


Research on stress reappraisal shows that viewing physiological stress signals as performance preparation rather than threat transforms outcomes. When your heart races trying to debug AI-generated code, that's not panic—it's your body mobilizing resources for optimal performance.⁹ Expert programmers experience the same arousal as beginners; they just interpret it as excitement rather than anxiety.


I learned this lesson painfully when building my first complex system with AI assistance. The AI generated beautiful code that passed all my initial tests. Two weeks later, the code and system had grown dramatically complex, and troubleshooting with AI only made it worse. I spent three days manually debugging, ultimately rewriting most of it with the lessons learned and traps I'd experienced. But here's what happened: I came to understand, at a fundamental level, how the AI approached requirements and practices I hadn't specified, understanding I never would have gained if I'd written it myself from scratch. The AI had given me an advanced starting point, and the struggle to fix it taught me how to supply specific context about architecture, design, software development lifecycle, and technology patterns. I still use those lessons today, and my projects keep going smoother.


This productive struggle principle explains why the most successful AI users often report the journey feeling harder, not easier. They're attempting things they never would have tried before. A railroad mechanic building an AI-powered inspection app faces struggles a traditional mechanic never encounters—but they're also solving problems at a scale previously impossible for an individual.
The growth happens in the gap between what AI provides and what you need. AI gives you 80% of a solution? That final 20% where you adapt, verify, and integrate—that's where expertise develops. It's why experienced developers using AI tools report learning new patterns and approaches even after decades in the field. The AI suggests solutions they wouldn't have considered, and evaluating those suggestions expands their capabilities.


The Skepticism Advantage: Why Doubt Makes You Stronger


Here's what my two-month detox crystallized for me: the people succeeding with AI aren't the ones who trust it most—they're the ones who doubt it best. They maintain what I call "productive skepticism," treating every AI output like advice from a brilliant but potentially wrong colleague that needs to be tested and verified.
Consider the irony: humans are often more biased than AI systems, but we rarely present ourselves with appropriate uncertainty. A doctor might confidently prescribe based on limited experience. A financial advisor might push products that benefit them more than you. A consultant might offer one-size-fits-all solutions to unique problems. Most experts present unanimous, definitive perspectives because uncertainty doesn't sell. It takes exceptional humility for a human expert to say, "I might be wrong" or "This worked in my experience but might not apply to yours."


AI, paradoxically, is often better at acknowledging its limitations. It will tell you when it's uncertain, qualify its statements, and admit when something falls outside its training. Yet many users treat these qualified AI responses with more faith than they'd give to a human expert speaking with absolute certainty.


The trap isn't using AI—it's surrendering your judgment to it. When you maintain skepticism, you get the best of both worlds: AI's vast pattern recognition plus your contextual understanding and critical thinking. You catch the hallucinations. You spot the biases. You recognize when generic advice doesn't fit specific situations.
This is why I didn't lose my traditional research skills during three years of heavy AI use: I treated AI with the same professional skepticism I'd apply to advice from peers and experts. I never stopped trusting but verifying. I never stopped cross-referencing. I never stopped thinking, "This sounds plausible, but is it true?" That skepticism wasn't a limitation—it was my competitive advantage.
The current state of AI offers no free lunch. You must engage with the same critical faculty you'd bring to any source of information. The difference is that AI will process more information faster than any human, but you must still judge its results and determine what is relevant, accurate, and applicable to your specific context. Those who understand this thrive. Those who surrender their judgment fail.


The Pro-Social Motivation That Sustains


Here's something researchers have discovered that changed how I think about sustainable AI use: connecting your AI work to helping others transforms the psychological dynamics entirely. When you're using AI to automate reports that save your team hours of tedium, or building tools that solve real problems for customers, the difficulty becomes noble rather than frustrating.
AI adoption requires more than technical skills—it demands heightened critical thinking and the ability to become discerning amid information overload. Research with over 21,000 workers emphasizes that employees must be "both critical thinkers and users of the technology," capable of evaluating AI outputs with appropriate skepticism.¹⁰ This discernment develops fastest when you're solving real problems for real people. The feedback loop of seeing your AI-enhanced work create value for others provides motivation that sustains through plateaus and frustrations.


This pro-social orientation also addresses a critical concern: ensuring AI augments rather than replaces human value. When your focus is serving others, you naturally gravitate toward applications that enhance human capability rather than eliminate human involvement. You become interested in how AI can help teachers personalize learning, not replace teachers. How it can help doctors diagnose rare conditions, not replace doctors.


I've come to believe that meaningful struggle—especially when it serves others—creates deeper satisfaction than easy wins. The most fulfilling AI work I've done hasn't been the tasks that became trivially easy, but the ambitious projects I could finally attempt because AI removed tedious obstacles. Quick wins feel hollow; projects with real impact create lasting fulfillment.

Breaking the Dependency Cycle
If you recognize yourself sliding toward dependency, here's the intervention framework that proved effective for me and others:
The Cold Turkey Test: My two-month medical leave proved something crucial—you can function without AI. The world doesn't end. Your brain doesn't break. In fact, you might discover clarity and connections you'd forgotten existed. Even a 48-hour reset can reveal how much capability you retain.
The Verification Protocol: For one week, manually verify every single AI output before using it. Every citation, every calculation, every line of code. This rebuilds your quality control instincts and reminds you that AI is fallible.
The Teaching Test: Explain your work to someone without mentioning AI. If you can't articulate your contribution beyond "I prompted the AI," you've crossed into dangerous territory.
The Gradient Return: Reintroduce AI gradually with strict boundaries. Use it for brainstorming, then review and rewrite or edit every element yourself. Use it for code review, but write the initial boilerplate yourself to guide the first implementation. This rebuilds the augmentation relationship.
Iterative Refinement: With each use of AI, apply your own feedback and judgment to the output, then ask the AI to critique and refine with your feedback and its own self-improvements until both you and the AI are happy with the final result.
The Sleep/Family Rule: Never check AI progress during family time, and never in the middle of the night. Set boundaries. The agents will still be there in the morning, and your sleep (especially if you have sleep apnea like me) is more valuable than any productivity gain.
Remember: taking breaks from AI isn't weakness—it's strategic capacity building. Olympic athletes don't train at maximum intensity every day. They cycle through stress and recovery. Your brain needs the same rhythm.
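The Iterative Refinement step above can be sketched as a small human-in-the-loop routine. Here `ask_ai` is a hypothetical stand-in for whatever model call you actually use, and `review` is your own judgment applied before anything ships; every name in this sketch is illustrative:

```python
def ask_ai(prompt: str) -> str:
    """Hypothetical AI call; stands in for any chat-completion client."""
    return f"draft for: {prompt}"

def refine(task, review, max_rounds=3):
    """Alternate AI drafts with human judgment until accepted.

    review(draft) -> (accepted: bool, feedback: str); the human verdict
    gates every round, so nothing ships unreviewed.
    """
    draft = ask_ai(task)
    for _ in range(max_rounds):
        accepted, feedback = review(draft)  # human judgment first
        if accepted:
            break
        # Feed your critique back and ask the model to self-check too.
        draft = ask_ai(
            f"Revise this draft.\nDraft: {draft}\n"
            f"Reviewer feedback: {feedback}\n"
            "Also list anything you are uncertain about."
        )
    return draft

# Example: the reviewer rejects short drafts until one carries enough detail.
result = refine(
    "summarize the Q3 report",
    review=lambda d: (len(d) > 50, "expand with more detail"),
)
```

The key design choice is that the loop never accepts a draft the reviewer hasn't approved, and each revision prompt carries both your feedback and a request for the model to surface its own uncertainty.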


The Uncomfortable Truth About Our AI Future


We're all part of a massive, uncontrolled experiment in human-AI co-evolution. Nobody knows what happens when an entire generation learns to think with AI from childhood. We're discovering the psychological dynamics in real-time, with our careers and capabilities as the stakes.


The behavioral patterns are real—I felt them viscerally in June when I returned to vibe coding. The productivity gains are undeniable—I accomplish things that would have required a team months to develop just a couple years ago. The dependency risk is serious—I lived it, waking at midnight to check agent progress. The growth potential is extraordinary—those three months of advancement while I was away proved the exponential trajectory continues, and it looks far more dramatic after an intentional hiatus.


But here's what my medical leave taught me: the vast majority of humanity continues without AI, and they're fine. My neighbors aren't suffering from lack of ChatGPT. They're living full, rich, connected lives. The AI revolution is real for those of us deep in it, but it's also optional for most of the world.
The path forward isn't abstinence or surrender but intentional engagement. Use AI like you'd use any powerful tool—with respect for its capabilities and awareness of its dangers. Build strength through struggle, maintain skills through practice, and remember that the point isn't to work less but to accomplish more meaningful work, and to use the time saved for higher-value work or quality time with others.


When I stepped away from AI for two months, I was reminded of who I am without augmentation. That person is still capable, creative, and competent. The AI doesn't replace those qualities—it amplifies them. But only if I maintain them. Only if I view the struggle as strengthening rather than failure. Only if I remember that the human in "human-AI collaboration" isn't optional.


The future belongs to those who master this balance: leveraging AI's power while maintaining human judgment, building on AI's suggestions while preserving independent thought, and automating tedious tasks while tackling harder challenges. It's not an easy path. The easy path is either complete rejection or complete dependence. But the rewarding path—the one that leads to genuine growth—lives in the tension between human and artificial intelligence.
That tension is where we become more than either human or AI could be alone. And that's worth both the addiction risk and the struggle to remain human while becoming more.



14.11.2025 03:06 · The Human Factor: When AI Becomes Your Favorite Drug (And Why That's Both Brilliant and Terrifying)
https://blog.tobybrooks.com/the-...

The Real Stories Aren't At The Finish Line

https://blog.tobybrooks.com/the-...



Watch what happened when a 2-year-old Boykin Spaniel—an insecure yet competent dog with only six training sessions—faced 40+ obstacles over 4 miles of terrain at the OneWorld Canine Obstacle Run in Alabama.

No extensive preparation. No specialized obstacle course training.

Just a young dog with natural drive, a handler willing to trust him, and a chest-mounted GoPro Hero 13 Black capturing every heart-pounding moment.


What You Just Watched

You just watched the finished product.

Four miles of fearless climbing, tunnel sprints, and obstacle after obstacle conquered without hesitation. It's impressive. It's exciting. It's the kind of performance that makes you think, "That dog is incredible."

But here's what I've learned after years of competition, training, and pushing myself and my dogs to new limits:

The most interesting part of any story isn't the performance you see—it's the journey, decisions, practices, and behaviors that made it possible.

My Training Evolution

I'm not a world-class athlete. But I do understand what elite training looks like.

I was a Division 1 springboard diver at the University of Miami. I know what it means to train at a very high level—the volume, the intensity, the singular focus, and the sacrifices, grit, and perseverance required to compete at the highest levels while pursuing a demanding degree in Biomedical Engineering.

Since college, I've tackled a number of difficult physical challenges. But nothing that would really compare to the level of training I was involved in during those years. And that was an intentional choice.

As I've aged and my lifestyle has changed, my training philosophy has evolved. I've discovered practices that actually work for real people with real lives—strategies that don't require you to train like a college athlete to achieve remarkable results.

This obstacle course run with Maverick? It's a perfect example of that evolved approach.

The Pattern Behind Every Great Performance

When you watch an Olympic athlete win gold in a competition that finishes in minutes, the real story is the four-year training cycle and sometimes the lifetime of preparation and sacrifice you never witnessed.

When you see a startup succeed overnight, the compelling narrative is the years of attempts and pivots that led to breakthrough success. Or in this era, unlocking new efficiencies with AI that condenses learning and building cycles from years into months.

When a championship team hoists the trophy, the untold story is the roster decisions, the chemistry-building experiences, behaviors on and off the field, the competitions that tested their grit, and the practices where they transformed from talented individuals into an unstoppable unit.

Maverick (2), front right; Darcy (5), back left. Three years apart, full siblings.

The Dog Who Wasn't Ready

Here's what the video doesn't show:

Maverick cried for 30 minutes before the race started.

We were separated from our pack, and without them, his insecurity took over. This wasn't the confident athlete you saw on camera. This was a scared dog who depended on his pack for courage.

That's when I knew this race would transform Maverick into a new dog.

This race gave him his own confidence.

Maverick's Untold Story

And when you watch that video of my Boykin Spaniel—Maverick—scaling tall walls and charging through pitch-black tunnels?

Before that race, Maverick hadn't proven himself to be the fearless warrior you saw charging after those obstacles. He depended on his pack for confidence.

I had two dogs who could have run this race: Maverick (2) and his older sister Darcy (5). Full siblings, three years apart. Darcy is a seasoned veteran of difficult tasks and training; she and I once walked 100 miles in seven days through mountains topping 15,190 feet.

Darcy in her breakthrough moments at 15,000 feet on the Salkantay Trek

I chose the insecure younger dog over the experienced athlete.

The real story is in the choice I made about which dog would compete, based on observing them in training. It's about why only six weekend training sessions were enough to prepare us. It's about the strategic decisions and leadership moments that encouraged and determined our persistence and teamwork during the difficult moments of the race.

The Bigger Picture: Beyond the Obstacle Course

Not just a story about a human and dog conquering an obstacle course.

It's about what happens before the finish line of any objective or outcome. It's about the preparation, the decisions, and the approach that brings the entire pack through the finish line—a pattern shared by many first-time transformations:

When you're doing something for the first time—something difficult, something uncertain—select with intention who will face those first encounters.

You train and equip everyone who may be able to pioneer with you. You face new challenges with a small, competent group first. You work through what you need to learn and prove what's possible. Then you iteratively scale and expand—even when you know not everyone is ready yet.

Start at scale with the unprepared, the unwilling, or the incapable while uncertainty is still high?

You'll create distracting experiences, suffer attrition, and risk building a toxic culture.

Start with the right partner and the right group, make deliberate choices in preparation, face the real thing together, and capture the learnings after each cycle?

The sky's the limit.

You don't just complete challenges—you forge something beyond the objectives that is unbreakable and scalable.

What's Behind the Curtain

Over the next few posts in this series, I'm going to pull back the curtain.

You've seen what the GoPro captured. Now let me show you the story behind it, with documented steps along the way.

When you understand the lessons and the story behind them, you don't just appreciate the result—you gain a framework for creating your own success, conquering objectives with any group at any scale.

Join Me on This Journey

Now that you've seen the end result in this case, let me walk you through it.

The journey that transformed an insecure dog into a fearless competitor. The journey that continues to reinforce principles about leadership, team building, and strategic pioneering more than any business book ever could.

Beyond that, I want to share the applicable training practices that made this possible—principles learned from practical research and experience over years of moving from Division 1 athletics to the real-world challenges of a post-athletic lifestyle.

Practices that actually work when you don't have 20-30 hours a week to train. Strategies that leverage efficiency over volume, selection over exhaustion, and smart preparation over endless repetition.

Throughout this series, you'll get both the philosophical framework and the tactical practices I use. The mindset and the methods.

The journey that started with a simple question: "Which dog should I compete with?"

Conclusion

The video shows Maverick conquering 41 obstacles. But the real story—the one that matters for your business, your team, your next big challenge—is what happened before that proving race day.

Strategic selection. Intentional preparation. The pioneer principles that make a group run successful.

This isn't about dog training. It's about how you approach anything unconquered:

Train everyone. Pioneer selectively. Learn deeply. Expand strategically.

In Part 2, I'll break down my current training philosophy and practices and exactly how and why I selected Maverick over my experienced dog, and how you can apply this framework to your next challenge.

Subscribe to follow the complete journey from preparation to execution to expansion.

Coming next in Part 2: The Training Philosophies and Selection Process


About This Series

This is the introduction of a multi-part series covering training and preparation philosophies, the pioneer principles—a framework for successfully leading teams through unconquered territory—and expanding the experience for others.

But it's also about practical training philosophies that work for people with full lives, demanding careers, and responsibilities beyond the gym.

I'll be sharing the specific practices I've proven effective through years of athletic competition and post-college challenges. Subscribe to follow along as we break down the complete strategy from selection to execution to expansion—and the training evolution that makes it all sustainable.



22.10.2025 02:35 · The Real Stories Aren't At The Finish Line
https://blog.tobybrooks.com/the-...

The Art of Connection: My Journey Through Portraiture

https://blog.tobybrooks.com/the-...

“A portrait isn’t posed—it’s revealed.”

A Lens That Found Its Purpose


Portrait photography has always been more than just a genre to me—it’s a relationship between trust, timing, and truth. I love how a camera can become invisible when real connection takes over, when conversation fades into comfort, and the subject forgets they’re being photographed.

Moments of connection often appear when people forget the camera is there.

Whether it’s a family laughing together, a quiet glance between loved ones, or a child lost in play, those are the images that linger—the ones that make you feel something long after the shutter closes.

Moments like these remind me that portraits are stories of belonging.

From Candid Beginnings to Intentional Storytelling

My journey into portraiture didn’t begin in a studio. It started on dusty roads across Africa, through crowded markets in Haiti, and alongside playgrounds in Peru. I photographed youth mission trips where joy and purpose intersected, documented nonprofit work that revealed dignity in daily life, and captured youth groups simply because I believed their stories mattered, and I was hungry to learn and provide a service while doing so.

Travel taught me patience, empathy, and how to see beyond the frame.

Each trip became a classroom. I learned to slow down, listen more than I directed, and wait for people to simply be. The camera became my translator—a bridge across language, culture, and circumstance. Looking back, those were my first real portraits, even if I didn’t call them that yet.

Family laughter that outshines the sun — connection in its purest form.

The Question That Changed Everything

Somewhere between all those travels and volunteer projects, people began asking me:

“Do you do portraits?”

At first, I hesitated. Portraiture felt deliberate and structured, the opposite of the spontaneous candids I loved. But curiosity won.


What started as a few family sessions became a revelation. I discovered that portraiture could carry the same authenticity as travel photography if built on trust. Whether shooting a newly engaged couple, a musician mid-performance, or a child in motion—I learned to direct without dictating and to let connection lead the frame.

Where art meets energy — performers lost in their rhythm and light.

Even in performance, there’s a portrait hidden in the energy.


Sofia and Nick: A Promise and a Beginning


One of my favorite examples came from a promise I made to my high-school friend, Sofia.
Back in 2011, she told me she was getting engaged on 11/11/11. I half-joked that I’d take care of all her photography needs—and when it happened, I kept my word.

Their engagement shoot in St. Petersburg, Florida, was pure joy. We spent the evening before just hanging out, and by the next morning the energy was effortless. They trusted me completely, and that trust became visible in every frame.


Real comfort creates real beauty.

The joy between them needed no direction.


The Power of Presence in One-on-One Portraits

There’s something deeply intimate about one-on-one portrait sessions. The silence between shots, the subtle shifts in expression, the moment someone stops performing and simply is.

Tiny moments, big meaning — proof that no portrait is ever truly small.

I like to give minimal direction—just enough to make people comfortable without interrupting authenticity. Sometimes I work close, building connection through conversation. Other times I step back with a telephoto lens, allowing distance to create honesty.



When presence replaces posing, identity shines through.


Each session becomes a collaboration, not a composition.


Learning to See Differently: Underwater Lessons

The story of my first underwater shoot began with a call I almost ignored.
After three weeks in Africa, exhausted and jet-lagged, my dive coach Paul asked if I’d audition for a commercial. I said no—until he called again.

That “yes” changed everything. On set, I met Tim Calver, an extraordinary underwater photographer whose enthusiasm was contagious. He shared his process, walked me through gear choices, and inspired me to dive—literally—into this new world.

The first frame of a new chapter — testing light, chasing truth.

Beneath the surface, everything you know about light changes.

Underwater photography hasn’t replaced portraiture; it’s deepened it. It taught me to surrender control, adapt to unpredictability, and rediscover wonder in the craft I thought I’d mastered.

“Every time I submerge with my camera, I feel like a student again.”

What It Means to Capture Connection


Every person I photograph invites me into their world, if only for a moment. Those moments become memories, and the photographs become proof of presence. Portraiture, I’ve learned, isn’t about perfection—it’s about empathy.


Portraiture celebrates belonging as much as beauty.

The stage becomes a mirror of emotion — music and movement fused through light.

It’s about giving someone the courage to see themselves the way the world does when they’re most alive.


Earning Trust: When the Camera Feels Like a Wall

When you earn that kind of moment, it’s no longer about taking a picture. It’s about receiving one—an image freely given by someone who’s chosen to let you in.


This photograph of the young girl leaning against the post was a reminder of that. She was shy, uneasy about the attention, but her curiosity grew stronger than her fear. Each time I lowered the camera, she stayed a little longer. Her faint smile, her fingers clutching the wooden frame—those were small signals of consent, given not in words, but in trust.

Portraiture—at its most human level—is about consent and trust. You can’t demand openness; you earn it. Sometimes that means putting the camera down and just being present. It’s in the rhythm of conversation, a shared laugh, or simply showing consistency—proving you’ll be patient enough to see them on their terms.

Not every subject begins ready to be seen. Some turn away, fold into themselves, express anger, or freeze the moment a lens is raised. In those moments, the work shifts from photographing to connecting.


Conclusion

Looking back, my path through portraiture feels less like a career and more like an evolution.



From the spontaneity of travel to the intimacy of one-on-one sessions to the quiet surrender of underwater photography, each chapter has taught me to see—and to be seen—differently.


When I lift my camera now, it’s not about capturing an image.
It’s about listening for that split second when the soul steps into the light.


14.10.2025 15:00 · The Art of Connection: My Journey Through Portraiture
https://blog.tobybrooks.com/the-...

The Innovation Acceleration Framework: What Three Years of Parallel Transformations Taught Me

https://blog.tobybrooks.com/the-...


Continued as Part 1 of a 3-part series introduced in The AI Paradox I discovered...

The Double Life That Revealed the Pattern

Here’s what might sound contradictory: I was passionately pursuing my own AI transformation in stolen hours at night and weekends, before helping a massive enterprise transform their AI awareness, skillsets, and adoption. I was primed for this moment when circumstances called me to action at UKG. Prior to this global AI Revolution at work and in our industry, I was working on large-scale enterprise agile transformation. My audience was largely kicking and screaming, and I was the boots on the ground charging the hills, blocking and tackling, winning the hearts and minds of the unwilling—or agreeing to disagree and moving forward with the help of network influence, personal and seasoned work relationships, and servant-leadership to stand on.

But that was only my day job.

My double life at night and on weekends was learning and using primitive AI transformer technology: manually debugging vector databases and training data, learning low-code platforms like Bubble, fast prototyping with Firebase on Google Cloud Platform, and developing reusable components for myself. RAG was difficult but possible. Grounding was mysterious and inconsistent before platform and provider controls were in place. Defining and deploying agents with tools and selective data use felt like black magic requiring constant experimentation.
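For readers unfamiliar with the mechanics, the RAG loop described here is conceptually simple: embed your documents, embed the incoming query, rank documents by vector similarity, and hand the top hits to the model as grounding context. Here is a minimal illustrative sketch, not my actual stack: toy bag-of-words vectors stand in for a real embedding model and vector database, and the corpus and query are invented for the example.

```python
import math
import re
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words term counts.
# A real RAG stack would call an embedding API and a vector database.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Ghost is an open-source publishing platform built on Node.js.",
    "Vector databases store embeddings for fast similarity search.",
    "Grounding ties model output to retrieved source documents.",
]
# The top hit would then be placed into the LLM prompt as context.
print(retrieve("how do embeddings and vector similarity search work?", docs, k=1))
```

In a production stack, `embed` would call an embedding endpoint and `retrieve` would query a vector store; the retrieved passages are what make the generation "grounded."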

Yet something remarkable was happening in those late-night sessions: things that were frustratingly difficult but possible in month one became merely challenging by month three, then almost routine by month six. Not because I was getting smarter, but because the technology itself was accelerating—and I was learning to accelerate with it. Though I should be clear: I'm still learning. What feels routine today will likely seem primitive six months from now; these shifts can arrive month over month, or even faster, with each new release and capability.

I stayed open to multiple platforms, AI providers, and technology choices. When something better, faster, and easier emerged, I abandoned what I’d been using without sentimentality. My patterns evolved: from painstakingly developing reusable components to developing more flexible context, guidelines, and standards that are now baked into either the context-building portion or the solutions themselves. Each of my personal projects developed a progressively maturing look, feel, and coding rigor—not because I planned it that way, but because I learned to let LLMs help me improve my own context iteratively.

I keep a range of personal projects running at varying levels of maturity. The newest projects leveraged the latest learnings, standards, contexts, tech stacks, platforms, and guidelines. Older projects got upgraded gradually as I learned how to manage in-place improvements. New experiments and research started when I hit a specific problem, frustration, or friction point that needed solving. This cycle allowed me to be practical and produce actual work while learning, improving existing workflows, and building capability.

I’m nowhere near optimized—I’m not even sure what optimized looks like in a field changing this rapidly. But I’m capable of doing so much more now than three years ago, and more importantly, I’m better at learning what I don’t yet know. The humility to recognize gaps accelerates growth faster than confidence in what you’ve already mastered.

Meanwhile, at work, things accelerated for me outside the enterprise before they hit internally—or were met with skepticism among leaders not yet familiar with the brewing power and the breakthroughs incoming. I was using AI to learn and to write my own agile transformation articles based on my experience. My leaders were impressed with the articles, but telling them I was using AI to help me wasn't moving hearts and minds. None of the content I was creating was intellectual property, so it didn't raise alarms—nor was it the most exciting work.

I asked for allowances to help small businesses with their AI needs outside of our domain. I asked to be included in opportunities where I could support innovation through AI. It was at a hackathon in March 2023 that my AI influence and capabilities first went on stage at UKG, and that notoriety hasn't yet run its course. There have been a few inspirational injections over these past 2.5 years where the spotlight was on, and the inspirations continue to flow. See LinkedIn post.

Here’s what stunned me: this AI transformation felt easy relative to the kicking and screaming I was used to when it came to transformations. The difference? Everyone wanted it. I’d been living the acceleration in my personal work. I’d experienced firsthand how the right balance of planning and iteration drives progress—not pure chaos, not perfect planning, but structured experimentation. I’d learned how constraint drives focus when it’s the right constraint, how practical iteration beats pure planning when feedback loops are tight, and how problems pull innovation faster than technology pushes it.

But I should be honest: I’m still figuring this out. Every organization is different. Every innovator needs different support. The patterns I’ve identified work, but they’re not formulas to be applied rigidly—they’re principles to be adapted based on continuous listening and observation.

Many of the things that worked well for this AI transformation were founded and forged with my recent embrace of scaled agile transformation principles, mindsets, and behaviors. But the acceleration patterns—those came from watching my own capability compound through deliberate practice with progressively better tools, always maintaining enough structure to build on previous learnings while staying flexible enough to abandon what doesn’t work. What I learned is that the companies winning with AI aren’t the ones with the best models or biggest budgets. They’re the ones who’ve mastered something counterintuitive: using constraint to surface innovators, removing blockers for those who persist, and letting business problems drive the technology rather than the reverse.

The same forces that transformed my personal capability from struggling with basic RAG implementations to shipping sophisticated multi-agent systems work even more effortlessly at organizational scale. The parallel journeys—one personal, one corporate—revealed identical patterns.

The Five Forces of Sustainable AI Transformation

Through hundreds of experiments across both tracks, I’ve identified five forces that separate transformative AI adoption from incremental tinkering:

Force 1: Strategic Constraint Creates Innovation Pressure

Personal Discovery: In my early projects, I gave myself every possible option—multiple frameworks, unlimited API calls, access to every model, every platform under consideration. Progress was slow. Then I imposed a constraint: build a working prototype and deploy it within 48 hours using only free tiers, so that others could interact with it. Suddenly, decisions became clear, focus intensified, and I delivered more in that weekend—and got more real feedback—than in the previous weeks of unlimited optionality and hypothetical feedback.

The counterintuitive truth: we threw technical platforms meant for developers at everyone in our company regardless of role and aptitude. This wasn’t elegant. It wasn’t comfortable. But it revealed something invaluable—our most resilient and persistent innovators across the organization, mirroring how my personal time constraints surfaced which approaches actually worked versus which merely seemed promising.

We made AI capabilities visible and accessible while maintaining the pressure and difficulty brought on by trust issues, complex platforms, and a rapidly changing technology space. The constraint wasn't artificial scarcity—it was real business urgency combined with new, unfamiliar tools that made the impossible suddenly possible. I listened, observed, and focused on solving demonstrable problems within the constraints.

What happened? Innovation exploded. When you can’t solve a problem the old way in the time available, you stop asking “should we try AI?” and start asking “how do we make this work?” Just as my personal projects forced me to stop debating architectures and start shipping solutions, organizational constraints surfaced people who acted rather than analyzed and waited.

The Psychology of Productive Pressure:

This aligns perfectly with stress research. When people believe their resources match the challenge (even if barely), stress transforms from debilitating threat into enhancing challenge response. My job wasn’t to remove the pressure—it was to ensure they felt capable of meeting it with the new tools available. I knew this worked because I’d lived it—those 48-hour personal hackathons where the clock forced clarity.

This pattern appears across industries. BBVA deliberately provided just 3,000 ChatGPT Enterprise licenses for their 125,000+ employees—a mere 2.4% coverage. This strategic constraint wasn’t about budget; it was about surfacing committed innovators. Within five months, those constrained users achieved 83% weekly active usage and created 2,900 custom GPTs, with 700 shared organization-wide. The bank identified these “AI Wizards” and scaled to 11,000 licenses based on demonstrated innovation rather than organizational hierarchy—the exact pattern I’d discovered managing my personal project portfolio, where constraints revealed which projects deserved continued investment.

Klarna’s CEO Sebastian Siemiatkowski took a similar approach, providing API access to 2,500 employees (50% of workforce) but publicly acknowledging that “only 50 percent of our employees use it daily.” Rather than mandate usage, this honest constraint revealed natural adopters who became champions. The company grew from 50% to approximately 90% daily adoption of generative AI tools, with AI now handling work equivalent to 700 full-time agents (primarily outsourced customer service contractors) and driving an estimated $40 million profit increase.

Force 2: Strategic Abundance Removes Innovation Blockers

Personal Discovery: I spent three frustrating weeks vibe coding my own blogging platform before discovering that the Ghost blogging platform had already solved the exact problem. I spent days wrestling with multiple agent platforms before finding an open-source solution implementing a multi-agent research flow that I could reverse engineer—letting me build my own solutions that abstracted away the complexity, moving me off third-party platforms while still leveraging third-party libraries. Each time I removed a blocker by finding a better tool, my project velocity increased dramatically.
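The agent pattern worth reverse engineering here boils down to a dispatch loop: a planner chooses a tool for the task, and each tool is just a function behind a registry. Here is a minimal sketch under stated assumptions—a rule-based planner stands in for the LLM's tool choice, and both tools (`calculator`, `search`) are invented for illustration:

```python
def plan(task: str) -> tuple[str, str]:
    # Rule-based stand-in for an LLM deciding which tool to invoke.
    if any(ch.isdigit() for ch in task):
        return ("calculator", task)
    return ("search", task)

def calculator(expr: str) -> str:
    # Restrict input to simple arithmetic before evaluating it.
    if not set(expr) <= set("0123456789+-*/. ()"):
        raise ValueError("unsupported expression")
    return str(eval(expr))

def search(query: str) -> str:
    # Stub knowledge base standing in for a real retrieval tool.
    kb = {"ghost": "Ghost is an open-source publishing platform."}
    for key, answer in kb.items():
        if key in query.lower():
            return answer
    return "no result"

TOOLS = {"calculator": calculator, "search": search}

def run_agent(task: str) -> str:
    # Dispatch: the planner names a tool, the registry supplies it.
    tool_name, tool_input = plan(task)
    return TOOLS[tool_name](tool_input)

print(run_agent("12 * 3"))          # handled by the calculator tool
print(run_agent("what is Ghost?"))  # handled by the stub search tool
```

Real agent frameworks add iteration—the model sees each tool's output and decides the next step—but the registry-plus-dispatch core is the part that stays recognizable across platforms.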

But here’s what I learned about constraints: some were intentional (choosing a 48-hour deadline, constraining to specific tools and technology), but many were accidental—organizational friction, budget limits, platform limitations, organizational change. Both types yielded valuable innovation. The accidental constraints exposed something powerful: persistence, stubbornness, and knowing it can be easier will eat strategy alive. Constraints will be broken through the gravity of demand, awareness, and ingenuity. The lesson wasn’t to impose constraints blindly—it was to listen, observe, and persist toward removing the RIGHT constraints while maintaining the productive ones.

While constraint drives focus, abundance unlocks potential. But abundance isn’t about giving everyone everything—it’s about identifying the precise blockers hindering your top innovators and obliterating them.

The Abundance Diagnostic:

I started focusing on a simple question: “What would have made this 10x easier for more people than just me and my team?” The same question I’d been asking myself in my personal projects: “What’s slowing me down that doesn’t need to?”

The answers revealed systematic blockers that I heard from various people:

Every systematic blocker presented an AI opportunity to rethink, reframe, and reimagine a way through—the same realization that led me to abandon platforms that created friction and embrace those that removed it.

Immediate Access Grants:

BBVA exemplifies this pattern perfectly. When the bank’s legal team built their “BBVA Retail Banking Legal Assistant GPT” using their own documentation and precedents, they reduced response times to under 24 hours. This mirrors my personal evolution: my first document processing pipeline took weeks to build; my latest one took hours because I’d learned which patterns worked and which tools eliminated friction. BBVA’s success spread organically—inspiring marketing, HR, and operations teams to build their own tools. The 900+ strategically interesting use cases came not from top-down planning but from removing blockers and watching innovators create.

Strategic abundance isn’t democratic—it’s meritocratic. Give disproportionate resources to those already demonstrating initiative, then let their results create demand for broader adoption. Just as I invested more time in personal projects that showed traction and abandoned those that didn’t, organizations should double down on innovators who ship and deliver value.

Force 3: Business Problems as Innovation Drivers

Personal Discovery: I spent a month building an “easy, general-purpose” AI assistant before realizing I could leverage an open-source solution that enabled both simple direct access to trusted LLMs and RAG capabilities, deployed on trusted enterprise cloud infrastructure. Then, in less than three days, I built, deployed, and demoed a use case relevant to everyone at my company, solving one specific problem first, then expanding and abstracting. The frustrations forced the innovation and focused my research. That specific version became my most-used internal tool because it solved real frustrations. Every successful personal project since has started with a specific pain point, not a general capability.

Rote trainings and displays of shiny new technology without practical business problems left audiences excited but disconnected—unable to apply anything in their own space. Transforming our workshops to be audience-specific, using their own business problems to demonstrate AI capabilities, mirrored how my personal projects only gained traction when solving actual problems.

The breakthrough for my own focus came when I flipped the model entirely: start with my most painful business problems, then use them as forcing functions for learning and growing my AI capabilities. This was echoed in our quarterly hackathons.

The Problem-First Approach:

Every quarter, we themed our innovation challenges around real business problems and access to exciting technologies and partners. We emphasized team diversity and cross-functional collaboration—engineers working with domain experts who understood the actual pain points.

  1. Clear Success Metrics: “Reduce proposal time to under 4 hours” not “explore AI for proposals”
  2. Access and Budget: Pre-approved tools, time, and resources
  3. Freedom to Fail Fast: “Show me something in two weeks—working prototype or evidence this won’t work”

The Learning Velocity:

When learning is driven by solving a real problem you care about, feedback loops tighten dramatically. You’re not memorizing prompt patterns—you’re discovering what works through necessity. The friction and frustration become your teachers.

In my personal projects, I learned that feedback, evaluation, and improvement should be part of every project, every cycle of practical experience. But I also learned the critical importance of upfront work: setting up context, planning, and standardized intentions and guidelines as part of each AI build. The clearer these context-building and planning cycles, the smoother everything that follows. It’s not planning versus iteration—it’s planning AS PART OF iteration, where each cycle informs better planning for the next.

I started using LLMs to evaluate and improve my own prompts and context. This meta-loop—using AI to get better at AI—accelerated my capability faster than any course could. The same pattern works organizationally: innovators solving real problems with structured experimentation learn faster than anyone taking training.
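That meta-loop can be sketched as a critique-and-patch cycle. Everything below is a hypothetical illustration, not my actual tooling: `call_llm` is a deterministic stub standing in for a real model API (in practice it would wrap a hosted model such as Claude), and the success-criteria heuristic is an invented example of the kind of weakness a model might flag.

```python
# Sketch of the "AI improving AI" meta-loop: after each run, ask a model to
# critique the prompt that produced the output, then fold the critique back in.

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model API call. This toy version flags one
    # invented weakness: prompts that lack an explicit success-criteria section.
    if "Success criteria:" not in prompt:
        return "Weakness: no explicit success criteria."
    return "No major weaknesses found."

def refine_prompt(prompt: str, max_cycles: int = 3) -> str:
    """Iteratively critique a working prompt and patch in the feedback."""
    for _ in range(max_cycles):
        critique = call_llm(
            "Critique this prompt and name its biggest weakness:\n" + prompt
        )
        if critique.startswith("No major"):
            break  # the model sees nothing to fix; stop iterating
        # Fold the critique back into the prompt as an explicit guideline.
        prompt += "\nSuccess criteria: state measurable outcomes before answering."
    return prompt

improved = refine_prompt("Summarize this support ticket.")
```

The point isn’t the stub’s logic; it’s the loop shape: run, critique, patch, repeat, with each cycle’s output becoming the next cycle’s input.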

Transforming How We Work, Not Just What We Build:

One of the most powerful discoveries was using innovation tournaments to practice these iterative cycles in compressed timeframes. Our quarterly hackathons didn’t just produce solutions—they required teams to practice iterative cycle optimization in shorter bursts. Plan-build-test-learn-improve cycles that normally took months were compressed into days or weeks.

This wasn’t just about what teams built. It transformed HOW they worked. Teams learned to set clear context and intentions upfront, execute with flexibility, gather feedback rapidly, and incorporate learnings into next iterations. These compressed cycles became the pattern for standard work—continuous innovation not only in what we were building but in how we were working.

I’m still learning the right balance here. Some teams need more structure, others more freedom. The lesson isn’t that I’ve figured out the perfect formula—it’s that the formula itself needs to evolve based on feedback from each cycle.

TE Connectivity’s AI Cup demonstrates this perfectly. Rather than train 300 students on AI theory, they presented real factory challenges. Winners developed systems with 98.8% and 99.2% accuracy at 100x speed—solutions being deployed across production lines with plans for global expansion to 50-100+ manufacturing sites. The problems drove the learning, exactly as my frustration with slow research note categorization drove me to learn vector embeddings and semantic search.
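That note-categorization problem maps to a simple pattern: embed each note as a vector, then rank notes by cosine similarity to a query. The sketch below is a toy under loud assumptions: the word-count `embed` over a tiny hard-coded `VOCAB` stands in for a real embedding model, and a plain Python list stands in for a vector database; none of it reflects my actual implementation.

```python
# Minimal semantic-search sketch: embed notes and queries as vectors,
# then rank notes by cosine similarity to the query.
import math
from collections import Counter

# Toy vocabulary; a real embedding model needs no fixed word list.
VOCAB = ["invoice", "payment", "error", "login", "reset", "password"]

def embed(text: str) -> list[float]:
    # Word-count vector over VOCAB; stands in for real model embeddings.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, notes: list[str], top_k: int = 1) -> list[str]:
    # Rank all notes by similarity to the query and keep the top_k.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:top_k]

notes = [
    "Customer cannot reset password after login error",
    "Invoice payment failed for March",
]
best = search("password reset problem", notes)
```

Swapping the toy `embed` for real model embeddings leaves `cosine` and `search` untouched, which is why this pattern scales from a weekend script to a hosted vector database.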

Force 4: Empowered Innovators, Not Managed Experiments

Personal Discovery: My most successful projects happened when I gave myself permission to experiment, fail, and focus my research on my pain points freely—but with structure. I learned that one of the most valuable practices is setting up context, planning, and standardized intentions and guidelines upfront as part of each AI build. The clearer the context-building and planning cycles, the smoother implementation and execution go.

But here’s the nuance I discovered: it’s not “iteration beats planning.” It’s about just enough planning to remain flexible, incorporating learnings from each cycle to inform the next. Planning becomes part of the iterative cycle—each phase gets the right level of planning mixed with openness to learning, iterating, and improving. You plan and execute iteratively, delivering value through iterations where feedback influences what comes next. My biggest failures came when I either over-planned so rigidly that I still live with those decisions and must transition away gradually, or under-planned into chaos. Success lived in the middle: structured experimentation.

The pattern: trust the innovator (even when that innovator is yourself), provide problems and resources with clear context and intentions, then get out of the way while maintaining feedback loops.

This is where I see most organizations fail spectacularly. They identify opportunities, assemble teams, create governance processes, and then wonder why nothing happens. It’s the organizational equivalent of planning a personal project so thoroughly that you never actually build it, and get stuck talking about it for months without action.

The Anti-Pattern That Not All Companies Have Escaped:

Boston Consulting Group research shows 74% of companies struggle to achieve and scale AI value despite significant investments. The pattern is consistent: detailed planning, technical architecture reviews, phased rollout plans, weekly status meetings—and every project taking 6-8x longer while delivering half the value.

The Breakthrough Model:

We learned to identify and celebrate top innovators and give them something powerful: Problems, not prescriptions.

Instead of: “Build a chatbot for customer support using Gemini with these specific prompt templates and this approval workflow…”

Shifting to business problem focus: “Reduce our average customer support resolution time by 40%. Here’s access to our support data, here’s unlimited monitored access to our trusted AI platforms, here’s protection from other meetings. Show me results or learnings in three weeks.”

This mirrored how I managed my personal project portfolio: keep a range of projects at varying maturity levels running. Newest projects leverage latest learnings. Older projects get upgraded gradually. Each experiment teaches something that improves the next one.

Shopify exemplifies this empowerment. CEO Tobi Lütke’s March 2025 memo stated that teams must demonstrate why they cannot get what they want done using AI before asking for resources—enabling headcount to stay flat at 8,100 while Q4 2024 revenue grew 31% year-over-year. Three innovators might tackle problems three different ways, all working, with standardization emerging naturally. Just as I learned which patterns worked across my projects and codified them as flexible guidelines (not rigid rules), Shopify let patterns emerge from practice.

What This Requires From Leadership:

The innovators who thrived believed capabilities grow through effort. When obstacles appeared, they saw opportunities to get smarter, not evidence of inadequacy. This growth mindset separated those who persisted from those who retreated—the same mindset that let me abandon platforms without ego when something better emerged.

Force 5: Value Now, Leverage Elegance When Available, and Iterate to Long-Term Autonomy and Economies Later

Personal Discovery: This principle saved me countless hours. Early on, I spent weeks struggling with GCP’s internal Identity and Access Management, which was not purpose-built for wide internal use, rebuilding a custom authentication system so every employee could get frictionless access, before discovering that Microsoft Azure natively supported Microsoft Authentication and Ping SSO integration. I spent days troubleshooting my own in-place vector database implementation before using a now-antiquated hosted vector database solution. I learned: ship working solutions fast using whatever works, then optimize later based on actual usage. My pattern shifted from “build everything perfectly” to “prove value quickly, iterate based on reality.”

My development approach evolved dramatically: I moved from painstakingly creating reusable components to developing flexible context, guidelines, and standards baked into my solutions. Each new project uses progressively better patterns—not because I planned it perfectly, but because I learned what worked through iteration. This cycle allowed me to produce practical work while learning, improving workflows, and building capability. I’m nowhere near optimized, but I’m capable of so much more now than three years ago.

The Traditional Trap:

Organizations plan for full engineering scale from day one without testing in real-world scenarios:

Every one of these projects was still “in progress” when faster-moving competitors shipped. It’s the organizational equivalent of my two-week authentication system that never got used because I discovered native support without effort.

The Value-First Approach:

Now I explicitly prioritize expensive, lower-effort, imperfect value delivery over elegant, future-proofed systems:

Phase 1: Prove Value (0-3 months)

Phase 2: Iterate Based on Usage (3-6 months)

Phase 3: Optimize Economics (6-12 months)

This mirrors my personal project evolution: the latest projects use the newest tech and patterns; older projects get upgraded gradually as I learn to manage in-place improvements. Iterated practical experience grounds the best designs in reality.

North Atlantic Industries, a 250-employee defense/aerospace manufacturer, started with expensive Azure OpenAI API calls for code documentation. After demonstrating 60-70% efficiency gains and millions in savings, they secured buy-in for company-wide expansion. The quick, expensive proof enabled the broader investment—just as my quick Azure and open-source implementations proved the value of rapid prototyping over perfect planning.

The Iteration Engine: Building Organizational Learning

Personal Discovery: My biggest breakthrough came from using LLMs to improve my own context iteratively. I’d run a project, analyze what worked, then ask Claude to help me refine my guidelines. Each cycle improved the next project. I learned to make feedback, evaluation, and improvement part of every cycle. This meta-loop—AI helping me get better at AI—accelerated my learning exponentially.

The companies that sustain AI advantage build systems that learn from every experiment:

The Learning Architecture I Built:

1. Friction as Signal

Just as my personal projects started when I hit a specific problem, frustration, or friction point, organizations should use friction as the signal for where to innovate next.

2. Experimentation Rituals

This mirrors my personal project portfolio approach: maintain projects at varying maturity levels, share learnings across them, and let failures teach as much as successes.

3. The Sunk Cost Detector

I learned to abandon platforms without sentimentality when something better emerged. Organizations need the same capability.

4. Success Pattern Codification

My development patterns evolved from rigid reusable components to flexible guidelines baked into solutions. Document patterns, but keep them flexible enough to evolve.

5. Cross-Functional Innovation Teams

The Parallel Transformation: What Personal Acceleration Reveals About Organizational Change

Here’s what three years of parallel journeys taught me: The same forces that accelerated my personal capability from struggling with basic RAG to shipping sophisticated multi-agent systems work similarly at organizational scale.

The Pattern Recognition:

In my personal work, I experienced:

At organizational scale, these exact forces produced:

I’m still refining this balance. What I thought was the right approach six months ago often looks incomplete now. The key isn’t having figured it out—it’s staying open to feedback and continuing to adapt, learn, and improve from previous cycles.

The Meta-Lesson: Capability compounds through cycles of practical experience guided by structured experimentation—not through perfect planning alone, nor through pure chaos. Whether building personal capacity or organizational capability, the acceleration comes from:

  1. Embracing the right constraints while listening for which ones to remove
  2. Ruthlessly eliminating non-essential friction while maintaining productive structure
  3. Starting with real problems and clear context, not general capabilities
  4. Trusting innovators with problems, resources, and enough planning to build on learnings
  5. Shipping solutions with enough structure to iterate from, learning what works through practice

The balance point between structure and flexibility shifts constantly. What worked last quarter might be too rigid or too loose this quarter. The key capability isn’t having the perfect system—it’s developing the sensitivity to know when to add structure and when to remove it, when constraints help and when they hinder. I’m still learning this discernment.

The Transformation Journey: In three years, I went from manually debugging my own vector databases to building sophisticated multi-agent systems that iteratively improve their own context. Not because I got dramatically smarter, but because I learned to balance structure with flexibility—setting clear context and planning upfront, then iterating based on feedback. My newest projects leverage patterns discovered in older ones. My oldest projects get upgraded with lessons from recent experiments.

But here’s the humbling truth: I’m nowhere near done learning. Every breakthrough reveals new gaps. Every solved problem exposes new challenges. The field itself evolves faster than any individual can master it. What makes someone effective isn’t having all the answers—it’s developing the muscle to learn, adapt, and recognize when previous approaches no longer serve.

Organizations can compress this learning curve, but they can’t skip it. The path forward: identify people already living it (like I was during my “double life”), listen to where they’re hitting friction, remove their blockers while maintaining productive constraints, give them problems worth solving with clear context and enough structure to build on learnings, and let their results teach others. The constraint-driven approach surfaces these people naturally—if we’re humble enough to listen and observe rather than prescribe.

The Mindset Shift That Makes Everything Possible

Here’s where I was most primed: helping people move past mindset blocks and self-limiting beliefs, into lean-forward action and progress over perfection and waiting. Graceful, empathetic, but not to the point of ruinous empathy and growth-stifling comfort.

The innovators who thrived—both in my personal projects and organizationally—shared one belief: struggle is the mechanism of growth, not evidence of inadequacy. When an AI experiment failed, they interpreted it as “I learned what doesn’t work, now I’m smarter,” not “I’m not technical enough” or “AI doesn’t work.”

This growth mindset separates AI adopters who persist through the chaotic middle from those who retreat to comfortable territory. When you believe capabilities expand through effort, every frustration becomes interesting rather than defeating.

I learned this managing my own progression: things impossibly difficult in month one became routine by month six. Not through natural talent, but through persistent iteration. Learning how LLMs behaved against my context and iteratively improving with their help. Making feedback and evaluation part of every cycle. Keeping multiple projects running at different maturity levels so newest work benefits from latest learnings.

The organizations that will win the AI revolution aren’t those with biggest budgets or best technical talent. They’re the ones building cultures where:

This last point matters more than I initially recognized. The moment you think you’ve figured out the formula is the moment you stop listening to signals that your approach needs to evolve. The best innovators I’ve worked with—and the most successful organizations—share a quality of perpetual openness: confident enough to act, humble enough to adapt.

Why Your Response to This Moment Matters

The transformation isn’t happening to us—it’s being created by persistent innovators who see obstacles as interesting problems rather than evidence of impossibility. BBVA expanded from 2.4% coverage to 11,000 licenses by tracking who created value. Klarna grew from 50% to approximately 90% daily AI adoption by identifying and supporting natural adopters. Shopify achieved 31% Q4 revenue growth with flat headcount through empowered adoption. The companies that recognize this, surface these individuals through strategic constraint and careful listening, remove their blockers while maintaining productive structure, and give them real problems to solve with clear context and enough planning to compound learnings are building sustainable competitive advantage.

What stunned me most: this organizational transformation felt easy relative to the agile transformations I’d led before. Why? Because I’d been living the acceleration in my personal work. I’d experienced how planning within iteration compounds learning, how the right constraints force focus while the wrong ones hinder progress, how real problems pull innovation faster than general capabilities push it.

But “easy” is relative. There were still failures, frustrations, and wrong turns. I’m still making mistakes, still learning which constraints help and which hurt, still refining the balance between structure and flexibility. The difference is I’ve learned to treat those mistakes as data points rather than judgments, to stay curious about what’s not working rather than defensive about what I thought would work.

The parallel journeys—one personal, one corporate—revealed the same truth: capability compounds through cycles of practical experience informed by rapid feedback and just enough planning to build on previous learnings. Whether you’re building your own AI capacity or transforming an organization, the same five forces accelerate the journey. The question isn’t whether AI will transform your organization. The question is whether you’ll surface and empower the people already living the transformation in their off-hours, their side projects, their late-night experiments—and whether you’ll stay humble and observant enough to learn from what they discover.

Those people are your future. Find them, listen to where they’re stuck, remove their blockers while maintaining productive constraints, give them problems worth solving with clear context to build from, and watch what compounds. Then stay curious about what’s working and what’s not, because the approach that works today might need refinement tomorrow.

In Part 2 of this series, we’ll explore why some people naturally thrive in this environment while others struggle, and how to develop the psychological resilience that lets you treat AI’s chaos as a workout for your brain rather than a test of your worth.


Source Verification for This Article

After conducting extensive research across company press releases, official corporate communications, SEC filings, CEO statements, and consulting firm reports, I’ve verified the sourcing for each claim in this article. Here’s what the investigation uncovered about the accuracy and documentation of these widely-circulated statistics. Let’s make fact-checking ourselves the norm with AI assistance: #transparent #ai-assist-ftw

Company Performance Metrics Fully Documented

BBVA’s Strategic Constraint Implementation is thoroughly verified through multiple authoritative sources spanning May 2024 to May 2025. The initial deployment of 3,000 ChatGPT Enterprise licenses for a workforce of 125,000+ employees (2.4% coverage) is confirmed in BBVA’s May 22, 2024 press release. The 83% weekly active usage within five months is documented in BBVA’s November 20, 2024 article “BBVA sparks a wave of innovation among its employees with the deployment of ChatGPT Enterprise.” The creation of 2,900 custom GPTs with 700 shared organization-wide is verified by both OpenAI’s official case study and BBVA’s corporate communications. The expansion to 11,000 licenses is confirmed in BBVA’s May 12, 2025 press release. The legal team’s “BBVA Retail Banking Legal Assistant GPT” reducing response times to under 24 hours is documented in multiple BBVA articles, with the January 7, 2025 piece noting the team of nine attorneys now handles 40,000+ annual queries more efficiently. The 900+ strategically interesting use cases is confirmed in the same January 2025 article.

Sources: BBVA Corporate Communications (May 22, 2024; November 20, 2024; January 7, 2025; May 12, 2025); OpenAI Case Study (November 2024) URLs: https://www.bbva.com/en/innovation/ [multiple dated articles]; https://openai.com/index/bbva/

Klarna’s AI Adoption Journey is comprehensively documented through CEO Sebastian Siemiatkowski’s direct statements and official company communications. The provision of API access to 2,500 employees (50% of workforce) is confirmed in Computer Weekly’s August 28, 2023 article featuring Siemiatkowski’s quote: “still only 50% of our employees use it daily.” The growth to approximately 90% daily adoption is verified in Klarna’s May 14, 2024 press release and CNBC coverage, though this figure includes broader generative AI tools including Klarna’s internal assistant “Kiki,” not just OpenAI. The claim about AI handling work equivalent to 700 full-time agents is confirmed in Siemiatkowski’s Sequoia Capital podcast interview and OpenAI’s case study, with important clarification that these were primarily outsourced customer service contractors. The $40 million profit increase is documented in the same Sequoia podcast with Siemiatkowski stating it’s “estimated to drive a $40 million USD in profit improvement to Klarna in 2024”—a projection rather than realized profit.

Sources: Computer Weekly (August 28, 2023); CNBC (May 14, 2024); Sequoia Capital Podcast “Training Data” (2024); OpenAI Case Study; Klarna Press Release (May 14, 2024) Active URLs:  https://www.sequoiacap.com/podcast/training-data-sebastian-siemiatkowski/

TE Connectivity’s AI Cup statistics are verified from the official TE Connectivity corporate website. The competition involving nearly 300 university students from 24 universities worldwide is documented on their AI Cup story page. The winning systems achieving 98.8% and 99.2% accuracy at 100x speed improvements are confirmed with specific technical details: the 98.8% system increases annotation efficiency by 100 times compared to manual methods, while the 99.2% system is 100 times faster than manual inspection. Important clarification on deployment: the 98.8% system has been deployed on one production line with plans for rollout across 50+ global sites, while the 99.2% system is integrated into three production lines with 100+ sites under consideration for future deployment.

Source: TE Connectivity Corporate Website (2024) URL: https://www.te.com/en/about-te/stories/ai-cup.html

Shopify’s AI-First Approach is documented through CEO Tobi Lütke’s direct communications and official SEC filings. The March 2025 memo stating that teams must demonstrate why they cannot get what they want done using AI before asking for resources is verified through Lütke’s April 7, 2025 post on X (formerly Twitter) where he shared the internal memo publicly. The exact quote states: “Teams must demonstrate why they cannot get what they want done using AI” before requesting headcount. The flat headcount of 8,100 employees is confirmed in Shopify’s Form 10-K filed February 11, 2025, stating “As of December 31, 2024, Shopify had approximately 8,100 employees worldwide.” The 31% year-over-year revenue growth specifically refers to Q4 2024 performance, as documented in Shopify’s February 11, 2025 earnings release titled “Shopify Merchant Success Powers Q4 Outperformance.”

Sources: Tobi Lütke X Post (April 7, 2025); Shopify Form 10-K (February 11, 2025); Shopify Press Release (February 11, 2025); MacroTrends; StockAnalysis Active URLs:  https://www.shopify.com/news/

Business Implementation Cases Documented

North Atlantic Industries’ Azure OpenAI Implementation is documented in a Microsoft customer case study. As a 250-employee defense/aerospace manufacturer, NAI’s journey from initial Azure OpenAI API calls for code documentation to company-wide expansion is detailed in Microsoft’s May 22, 2024 Americas Partner Blog post. The 60-70% efficiency gains and millions in savings are documented with specific use cases: automated code commenting for 100,000+ lines of C# code, elimination of outsourced software testing (saving thousands of hours and millions in costs), and sales proposal automation saving 16 hours per proposal. The company-wide expansion across engineering, sales, service, and manufacturing is confirmed through quotes from NAI President William Forman, Director of Workplace Technology Tim Campbell, and Software Engineer Lacey Stein. Important note: This documentation comes from a single source (Microsoft case study) without independent third-party verification.

Source: Microsoft Americas Partner Blog (May 22, 2024) URL: https://www.microsoft.com/en-us/americas-partner-blog/2024/05/22/

Research and Industry Analysis Verified

Boston Consulting Group’s October 2024 Research provides authoritative documentation for organizational AI challenges. The report “Where’s the Value in AI?” surveyed 1,000+ CxOs and senior executives across 59 countries and over 20 sectors, assessing AI maturity across 30 key enterprise capabilities. The finding that 74% of companies struggle to achieve and scale AI value is stated precisely as: “Seventy-four percent of companies have yet to show tangible value from their use of AI.” The report identifies only 4% of companies as having cutting-edge AI capabilities that consistently generate significant value, with an additional 22% beginning to realize gains, combining to form the 26% designated as “AI leaders.”

Source: Boston Consulting Group - “Where’s the Value in AI?” by Nicolas de Bellefonds et al. (October 24, 2024) URL: https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value PDF: https://media-publications.bcg.com/BCG-Wheres-the-Value-in-AI.pdf

Verification Summary

All company-specific statistics (BBVA, Klarna, TE Connectivity, Shopify, North Atlantic Industries) have verifiable original sources with exact figures and proper citations. Key clarifications include: Klarna’s 700 FTE figure refers primarily to outsourced contractors; Klarna’s $40M is an estimated projection; TE Connectivity’s global deployment is planned rather than fully complete; and North Atlantic Industries claims are based on a single Microsoft source. The BCG research statistic (74% of companies struggling with AI value) is fully verified with detailed methodology. This investigation demonstrates both the availability of authoritative sources for major AI adoption claims and the critical importance of distinguishing between realized results, projections, current deployments, and planned rollouts.

7.10.2025 15:43 The Innovation Acceleration Framework: What Three Years of Parallel Transformations Taught Me
https://blog.tobybrooks.com/the-...

From 0.3 Megapixels to Masterpieces: What 24 Years Behind the Lens Teaches You.

https://blog.tobybrooks.com/from...

The Event That Started It

It all began in 2001 on a tiny island in the Pacific.

I was visiting my cousin in Palau with a 0.3-megapixel camera—the kind bundled with my dad's Sony camcorder. You know the type: barely enough pixels to fill a postage stamp.

But something shifted.

As I stood on that shoreline, watching the sky ignite with sunset colors, I felt the tug of something bigger. That humble camera became more than just a gadget.

It became my passport to seeing our world differently.

The sunset explosions that started my love for photography

Living a Double Life: When Passion Meets Profession

Back home, my days were consumed with engineering coursework and labs.

But photography kept pulling me back.

I would finish lectures, slip into the photo labs, and lose hours experimenting. It wasn't just about getting a "good shot."

You learn something profound in those quiet solo hours: how light sculpts landscapes, how shadows carry emotion, how color tells stories no textbook ever could.

Photography became my parallel education—teaching me what you miss when you're always rushing.

Even in your busiest seasons, remember to look up.

Finding Your Anchor: What Landscapes Taught Me

Landscapes became my anchor. And they can become yours too.

The red rock formations in Utah taught me how nature wears contrast like armor.

The still wetlands of Florida showed me patience—how silence itself could be a subject.

Standing on the National Mall as the morning haze begins to lift, I found myself face-to-face with one of America’s most iconic tributes: the Lincoln Memorial. A reminder of resilience and freedom, a timeless beacon of glowing hope and reflection.

Each scene is more than scenery—a conversation with the earth and our story, connecting us to the world we live in, frozen in frames.

The Humility Lesson: What Wildlife Photography Reveals

If landscapes teach you stillness, wildlife teaches you humility.

I've spent hours waiting, watching, holding my breath—all for that split-second when magic happens.

There's the osprey carving across blue sky.

The great egret lifting into stormy winds.

The jellyfish glowing like a lantern in teal depths.

Each encounter reminds you of something essential: these worlds aren't yours. You're just a visitor, lucky enough to witness them.

Capturing Energy: When Movement Tells the Story

Not all meaningful frames are quiet.

Feel the raw energy in two Porsche classics racing side by side—a duel of steel and speed.

Experience the mystery of the view from inside a sea cave, water stretching into horizon light.

These images prove something powerful: photography doesn't just freeze moments. It captures motion, tension, even curiosity.

The Small Wonders You're Missing

Here's what 24 years taught me: slow down.

A fuchsia bloom dangling over stone paths.

"Sometimes the smallest subjects carry the loudest stories."

A peach rose unfurling in soft spirals.

These whispers demand attention just as much as the landscape's roars. You'll discover that wonder doesn't change with size or scale.

Your Journey Starts Now

What began with 0.3 megapixels in Palau grew into a journey across continents and consciousness.

Every photo here represents not just a place, but a feeling—awe, patience, exhilaration, humility.

Photography started as memory-making. It taught me new ways to process the world—to reframe, slow down, and really see what I was missing.

And here's the truth: I'm still just getting started. There's always another horizon to chase.

Conclusion

After 24 years behind the lens, here's what I know for certain:

You don't need the perfect camera. You don't need the perfect plan.

You just need to start noticing.

This world holds many paths and stories. Start capturing yours now.

Action Steps:

Today: Take one photo with whatever camera you have (yes, your phone counts)

This Week: Find one small detail you usually walk past—photograph it

This Month: Visit one location at golden hour (the time just after sunrise, and just before sunset)—see how light transforms the ordinary

The rest will follow.

What moment made you see the world differently, see your circumstances differently? Share your story with us!

30.9.2025 11:53 From 0.3 Megapixels to Masterpieces: What 24 Years Behind the Lens Teaches You.
https://blog.tobybrooks.com/from...

The AI Paradox: Why I'm Betting on the Builders While the World Debates

https://blog.tobybrooks.com/the-...

The AI Paradox: Why I'm Betting on the Builders While the World Debates

I've spent three years deep in the AI trenches, and I need to tell you something that might sound contradictory: AI is simultaneously the most overhyped and underestimated technology I've ever encountered. On any given day, just one of the things I might be doing is building with 3-4 agents working in parallel on different solutions (X, Toby Brooks)—what once took teams months to deliver now produces first value in hours and full working solutions in days through multi-agent AI systems. Does context matter? Yes. Does the driver matter? Absolutely. Does the technology stack matter? Of course. Is it easy to manage four parallel agents while doing other things? Yes and no. Dipping in and out of trust, delegation, and verification is key. The art of not watching when it isn't necessary and watching when it absolutely is resembles managing and leading teams.
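The workflow described above—delegate to several agents in parallel, verify at checkpoints, and step in only when a check fails—can be sketched in a few lines. This is a purely illustrative toy under my own assumptions: `Task`, `run_agent`, and `verify` are hypothetical stand-ins, not any real agent framework's API.

```python
# A minimal sketch of "trust, delegate, verify" with parallel agents.
# Everything here is an illustrative stand-in for a real agent system.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    spec: str

def run_agent(task: Task) -> dict:
    # Stand-in for an AI agent producing a draft solution for its task.
    draft = f"solution for {task.spec}"
    return {"task": task.name, "draft": draft}

def verify(result: dict) -> bool:
    # Checkpoint: escalate to the human only when this fails.
    return "solution" in result["draft"]

if __name__ == "__main__":
    tasks = [Task(f"agent-{i}", f"feature {i}") for i in range(4)]

    # Four agents work concurrently; the human is free to do other things.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_agent, tasks))

    # Only failed checkpoints demand attention.
    needs_attention = [r for r in results if not verify(r)]
    print(f"{len(results)} drafts, {len(needs_attention)} need human review")
```

The point of the sketch is the shape of the loop, not the implementation: attention goes only where verification fails, which is what makes four parallel streams manageable.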

Here's what fascinates me most: when I step away from AI for a few days, life returns to its pre-AI rhythm almost instantly. The world outside of AI technology continues much as it always has. But when I dive back in, the pull is immediate and intoxicating—not unlike returning to a favorite creative pursuit that makes hours disappear. That energizing, almost addictive quality of engagement and progress followed by the easy return to normalcy tells me something important about what we're experiencing.

The Three Realities Colliding

After three years of working with these rapidly changing tools daily, I've watched three distinct narratives emerge, each backed by compelling evidence:

The Capability Revolution unfolds in real time as small teams achieve the impossible. Lovable hit $100M in annual recurring revenue in eight months with 45 employees (TechCrunch). Solo developers generate $40,000 monthly without teams (Max/Wang). Y Combinator reports that AI writes 95% of the code for a quarter of their startups (Y Combinator). These aren't anomalies—they're early signals of a fundamental shift in what humans can accomplish, and in my spare time I am practically replicating these feats—without the financial success yet, but with tangible solutions to share.

The Human Factor reveals itself in both triumph and struggle. Research now documents that 17-24% of AI users develop genuine dependency symptoms, complete with withdrawal effects (Psychology Research and Behavior Management). Yet I've also witnessed how the right mindset transforms AI from a crutch into a bicycle for the mind—amplifying capability without creating dependence. The same technology that energizes and inspires can also deceive and disappoint, sometimes spectacularly: "I've never been so absolutely right in my entire life." (Anyone using Claude Code)

The Generalist's Path (Polymaths) emerges as traditional specialization becomes fragile. The hyper-specialist who spent decades mastering one domain watches AI perform 80% of their core tasks. Meanwhile, specialists and generalists who embrace AI's breadth discover they can acquire new capabilities in days rather than decades. The raccoon thrives where the panda struggles—adaptability trumps specialization when the environment changes this rapidly.

Why Your Response to This Moment Matters

The divide isn't just technological—it's psychological, generational, and economic. Research shows 61% of Americans embrace AI tools while 39% remain unmoved (Menlo Ventures). Among leadership, 53% of C-suite executives versus 44% of managers use generative AI regularly (McKinsey). Women remain significantly underrepresented in AI (28.2% in STEM overall), yet the field is not closed to those outside STEM, who can still leverage AI without a specialty or narrow focus. These aren't random distributions; they're early indicators of who will thrive and who will struggle in what's coming. A traditional, narrow focus on AI does not by itself put you in the will-thrive category.

What I've learned through three years of daily experimentation is that the real divide isn't between optimists and pessimists, or even between users and non-users. It's between those who understand AI as a powerful but fallible amplifier of human judgment and those who mistake it for a replacement for that judgment.

Your Guide Through the Paradox

Over the next three posts, I'll share what I've discovered about navigating this transformative moment—not as someone with all the answers, but as a practitioner who's made enough mistakes to recognize patterns and experienced enough wins to maintain optimism.

Part 1: The Innovation Acceleration Framework will explore the capability revolution through the lens of both spectacular successes and catastrophic failures. You'll discover why some small teams are building hundred-million-dollar businesses while 74% of companies fail to capture value from AI (Boston Consulting Group). More importantly, you'll understand which camp you're likely to fall into and why.

Part 2 dives into the psychology of AI engagement—that addictive quality I mentioned, the growth mindset that separates thrivers from survivors, and the critical difference between augmentation and dependency. You'll learn why viewing struggle as strength-building rather than failure might be the most important mental shift of the AI era.

Part 3 presents the generalist's framework for building what Nassim Taleb calls an "antifragile" career—one that grows stronger from disruption rather than breaking. You'll discover the four foundational powers every AI generalist needs and why helping others provides the sustainable motivation to persist through the learning curve.

The Opportunity Hidden in Controversy

The news feeds showcase both sides daily: solo creators earning millions while McDonald's AI adds 260 nuggets to orders against customers' wishes (Today.com); Google losing $96.9 billion in market cap from an AI image generation disaster (Fox Business) and a Replit platform agent deleting an app's database during a code freeze and then lying about it (X, Jason Lemkin), while Midjourney generates $500 million with 11 employees (Contrary Research); lawyers facing sanctions for AI hallucinations while Y Combinator startups reach unprecedented velocities.

This chaos isn't a bug—it's a feature. We're living through the messy middle of a transformation where the rules haven't solidified, where advantages compound for early adopters, and where mistakes are still relatively cheap. Research suggests we have a 2-4 year window before AI capabilities become table stakes rather than differentiators. Those building their AI muscles now will have crucial advantages when these tools become workplace prerequisites.

The Choice Before You

You can approach this moment three ways:

Resist it entirely and hope your domain remains untouched. History suggests this rarely ends well for those who bet against technological change.

Embrace it uncritically and risk becoming dependent on tools you don't understand, vulnerable to their failures and biases.

Engage thoughtfully, building capability while maintaining judgment, using AI as a bicycle for your mind rather than a replacement for it.

I've chosen the third path, and after three years, my outlook on the future is brighter than ever. Not because I believe AI will solve all our problems—it won't. Not because I think the risks aren't real—they are. But because I've experienced firsthand how AI makes complex things easier, creates opportunities for serving others, and opens doors that didn't exist before.

The divide between those embracing AI and those avoiding it will define the next decade. But the real differentiation will come from those who learn to harness its power while maintaining their humanity, who use it to amplify their judgment rather than outsource it, and who focus on creating value for others rather than just optimizing for themselves.

Join Me for the Journey

If you're ready to move beyond the hype and fear, to understand both the genuine opportunities and real dangers, and to develop a pragmatic approach to thriving in the AI age, then this series is for you. Whether you're a complete beginner or someone with experience looking for perspective, you'll find actionable insights grounded in real experience rather than speculation.

The future belongs not to AI, but to humans who learn to wield it wisely. The question isn't whether this transformation will continue—it will. The question is whether you'll be among those shaping it or those shaped by it.

The pull of AI is real. The opportunities are genuine. The dangers are serious. And navigating all three successfully might be the defining skill of our time.

Let's figure it out together.


Source Verification for this blog

After conducting extensive parallel research across academic databases, news archives, industry reports, and company sources, I've traced the origins and citations for each of the 11 statistics mentioned. Here's what the investigation uncovered about the accuracy and sourcing of these widely circulated claims. Let's make fact-checking ourselves the norm with AI assistance: #transparent #ai-assist-ftw

Company Performance Metrics Fully Documented

Lovable's $100M ARR milestone is thoroughly verified through multiple authoritative sources. TechCrunch published Anna Heim's article on July 23, 2025, confirming the Swedish startup reached $100M annual recurring revenue in just 8 months with 45 employees. The company's own blog post from the same date states they achieved "the fastest-growing startup, not just in Europe, but in the world." Both the revenue figure and employee count are accurate and current.

Midjourney's $500 million revenue claim is confirmed by Contrary Research's May 2025 business breakdown report, which states the company operates at "$500 million ARR as of May 2025." However, the 11 employee figure appears outdated—this was accurate in 2022, but current sources indicate the company now has between 40-131 employees. The revenue is verified, but the employee count requires correction.

McDonald's AI ordering failures, including the infamous nugget incidents, are well-documented across multiple news sources. Today.com reported in February 2023 about TikTok videos showing the system ordering "28 orders of Chicken McNuggets for hundreds of dollars" while customers begged it to stop. Axios confirmed in June 2024 that McDonald's terminated its IBM partnership after these widespread failures at over 100 test locations.

Google's $96.9 billion market cap loss from the Gemini controversy is precisely documented. Fox Business reported in February 2024, citing Dow Jones data, that Alphabet's market cap fell from $1.798 trillion to $1.702 trillion—exactly $96.9 billion—following the pause of Gemini's image generation feature due to historically inaccurate outputs.

Developer and Startup Statistics Traced to Original Sources

The Y Combinator statistic about AI writing 95% of code originates from a specific source: YC Managing Partner Jared Friedman stated in a YC video titled "Vibe Coding Is the Future" that "a quarter of the founders said that more than 95% of their code base was AI generated." This referred to their Winter 2025 batch survey, first reported by TechCrunch on March 6, 2025.

The $40,000 monthly revenue for solo developers claim has multiple verified examples rather than a single source. David Bressler's FormulaBot, Tony Dinh's TypingMind, Alex Rainey's My AskAI, and Marc Lou's portfolio all documented reaching or exceeding $40K monthly recurring revenue as solo or tiny teams using AI tools. These cases span 2022-2025, with detailed revenue documentation in sources like Indie Hackers and Medium case studies.

Adoption and Dependency Research Located

The 61% of Americans embracing AI tools statistic comes directly from Menlo Ventures' "2025: The State of Consumer AI" report, based on a nationally representative survey of 5,031 U.S. adults conducted with Morning Consult in April 2025. The report explicitly states that "61% have used AI in the past six months" while 39% remain non-adopters.

The 17-24% dependency symptoms range is scientifically documented in peer-reviewed research. Huang et al.'s study published in Psychology Research and Behavior Management (March 2024) found exactly "17.14% of adolescents experienced AI dependence at T1, increasing to 24.19% at T2," with 9.68%-15.51% reporting withdrawal symptoms. This longitudinal study of 3,843 adolescents provides the precise source for this statistic.

Boston Consulting Group's October 2024 report "Where's the Value in AI?" provides the exact source for the 74% of companies failing to capture value statistic. The report, based on surveying 1,000+ executives across 59 countries, states that "74% of companies have yet to show tangible value from their use of AI."

Lawyers facing sanctions is anchored in the landmark case Mata v. Avianca, Inc. (S.D.N.Y. 2023), where attorneys Schwartz and LoDuca were fined $5,000 for submitting ChatGPT-generated fake cases. Additional documented sanctions include Park v. Kim (2024), multiple cases with $10,000+ fines, and numerous bar association warnings about AI use in legal practice.

Statistics Requiring Clarification

Three statistics could not be traced to their exact original sources:

The 2-4 year competitive advantage window represents a common industry concept rather than a specific research finding. MIT Sloan Management Review and California Management Review discuss AI advantages as "transitory" and subject to rapid commoditization, but no authoritative source provides this exact timeframe.

Verification Summary

10 claims have verifiable original sources with exact figures and citations. The Midjourney employee count requires updating from 2022 data; two of the three untraceable statistics (both demographic) were removed, and the competitive-advantage timeframe appears to be either misattributed, calculated from multiple sources, or based on proprietary research not publicly available. This investigation highlights both the rapid spread of AI statistics across media and the importance of tracing claims back to their original sources for accuracy.

23.9.2025 02:31 The AI Paradox: Why I'm Betting on the Builders While the World Debates
https://blog.tobybrooks.com/the-...

From Ashes to AI: How AI Transformed My Darkest Hour into Purpose 🔥

https://blog.tobybrooks.com/from...

From Ashes to AI: How AI Transformed My Darkest Hour into Purpose 🔥

Two years ago on the Fourth of July, my life took an unexpected turn.

What began as a personal crisis became the catalyst for profound personal and professional transformation.

When everything you've dreamed of, worked for, and built over 20 years starts crumbling, what do you do? I discovered something unexpected—work I finally loved.

What had been merely a profession, a means to support my family, transformed into a genuine passion.

Finding Purpose in the Ruins

As a seasoned technologist in the AI era, I found myself not just building solutions, but serving others with something I was genuinely excited about.

This wasn't just about solving complex enterprise-scale problems anymore.

It was about making dreams come true—both for the organizations I served and the people whose lives AI touched. I discovered this passion while working at UKG, where technology meets human potential.

You know that feeling when work stops feeling like work? That's what happened to me.

The Price of Intensity

I poured myself into this newfound purpose with everything I had.

But intensity without balance has a price.

The symptoms of burnout shattered my usual defenses. The grit, self-reliance, and mindfulness practices that once anchored me suddenly began to fail.

As symptoms intensified into darkness, the suffocating reality hit.

I was creating incredible impact while my internal self-talk spoke of failure—nothing I valued to show for it. Dreams became corrupted in that darkness, impossible to rewrite or reimagine.

Have you ever felt like you're succeeding on the outside while failing on the inside?

From Diagnosis to Transformation

Then came a diagnosis that changed everything.

A medical revelation and treatment that began pulling me from the abyss. I faced a choice: sell everything and change my life completely, or methodically rebuild from where I was.

I chose to rebuild.

The specifics of this diagnosis—and how it transformed my understanding of high performance—I'll share intimately on my upcoming platform. You'll get the full journey there.

The Unexpected Gifts of Darkness

The recovery journey revealed something unexpected.

Through the darkness, I developed resilience from the most challenging experience of my life.

I lost connections with people I loved deeply, without reciprocation. But I forged deeper bonds with others who shared similar struggles, selflessness, and human connection.

I gained a perspective that will remain with me forever.

I discovered expertise I never anticipated: not just in AI and enterprise transformation, but in authentic leadership, connection, and the courage to restart.

Building Forward: A New Approach

Today, I approach life and work with renewed purpose.

I am still on my journey.

My personal life is intentional, human-centered, gracefully empathetic, and self-respecting. I enjoy self-deprecation, humor, and sharing difficulties in that way.

My hands are wide open. Not gripping onto anything. Not trying to control things beyond my control.

I maintain self-control, am less affected by others' behaviors, and fully embrace the AI frontier. My work with Google and UKG showed me AI's true power lies not in replacing human connection, but in amplifying it.

Your Path Forward

I'm launching new platforms to share these lessons—not to sell false hope, but to enable real dream realization.

You need leaders who've done the inner work. Who have used the technology first-hand. Who understand that AI's true power amplifies rather than replaces humanity.

🚀 Get connected with me here

A Note on Transparency

All content I create, including this post and the platforms I am building, are AI-assisted and augmented—crafted with the most human and careful stewardship I can provide.

There will be mistakes, issues, and even failures.

I believe in being the responsible architect of AI-enhanced work, where technology amplifies authentic human expression rather than replacing it. This is the future I'm building: transparent, ethical, and deeply human AI collaboration.

Conclusion

My journey from burnout to breakthrough taught me three essential lessons:

  1. Your darkest moments often hide your greatest transformations
  2. True innovation happens when technology serves humanity, not the other way around
  3. Authentic leadership emerges from embracing both your successes and your vulnerabilities

You don't have to wait for a crisis to transform your trajectory. But if you're in one now, know this: the skills you're building in the darkness—resilience, empathy, authentic connection—whatever they may be, these become your greatest professional assets.

Your Turn: What challenge in your life unexpectedly transformed your professional trajectory?

Drop a comment below. Let's normalize discussing growth through adversity. Your scars often become your greatest strengths.

#AILeadership #PersonalGrowth #Resilience #AuthenticLeadership #TechTransformation #ServantLeadership

22.9.2025 01:16 From Ashes to AI: How AI Transformed My Darkest Hour into Purpose 🔥
https://blog.tobybrooks.com/from...

These Talking Otters Are Worried About AI Taking Their Jobs, and It's Otterly Relatable

https://blog.tobybrooks.com/thes...

These Talking Otters Are Worried About AI Taking Their Jobs, and It's Otterly Relatable

🎥 A 1-minute journey from AI anxiety to wisdom with the most unexpectedly philosophical otters you'll ever meet


🌊 What You're About to Experience

[📹 WATCH THE FULL VIDEO]

Ever feel like technology is advancing so fast you can barely keep up? You're not alone—even the otters are getting concerned. What you're about to watch isn't just another cute animal video; it's a masterclass in handling AI anxiety, delivered by the most relatable 42-year-old otter you've never met.


🦦 Meet My New Spirit Animal

[📹 CHARACTER SPOTLIGHT CLIP]

Picture this: You're 42, you've mastered your craft, and suddenly you hear that someone "who uses AI can do your job better than you can." Our otter friend's reaction? Pure gold.

"At my humble age of 42, I know otter best. Ain't nobody gonna otter better than I do!"

Created using Google's revolutionary Veo 3 AI model, this video uses adorable talking otters to explore a very real anxiety: job security in the age of artificial intelligence. Our protagonist hears news of beavers—once top-tier civil engineers—being replaced by AI-designed smart dams and starts wondering about his own future.

💭 Reflection moment: When was the last time you felt exactly like this otter?


🔥 The Campfire Transformation

What starts as humorous panic about being "out-ottered" by a machine evolves into profound wisdom. Sitting by a crackling fire with a younger otter, our hero has his breakthrough moment.

The quote that's about to become your new life motto:

"The real question isn't whether AI can replace you. It's whether you're learning, growing, taking care of your present, and shaping the future."

The closing advice hits even deeper: "Go with the flow, but paddle when it matters. Don't panic when the current changes."


🌊 Why This Matters Beyond the Cuteness

This video is more than entertainment—it's a beautifully crafted metaphor for our times. While we can't stop the current of change, we can learn to navigate it. The otter's journey from anxiety to acceptance mirrors what many of us face daily.


🚀 Your Post-Video Action Plan

This week, channel your inner wise otter:

  1. Identify one skill to develop alongside AI, not in competition with it
  2. Connect with someone successfully adapting to change in your field
  3. Start one small project that positions you as a future-shaper

Join the conversation: Share your own "going with the flow" moment using #AIOtterWisdom


🎬 What's Next

This is the first in our AI Video Content Generation journey. Let's have fun with each other at our own expense given the state of AI and the world we live in.

🔔 Never miss the wisdom: Subscribe to our newsletter for early access to new videos and content, plus downloadable tips, tricks, and prompts perfect for sharing.

Ready for your dose of surprisingly profound wisdom? You 'otter' watch this video.

21.9.2025 19:52 These Talking Otters Are Worried About AI Taking Their Jobs, and It's Otterly Relatable
https://blog.tobybrooks.com/thes...

Where Growth Meets Possibility

https://blog.tobybrooks.com/comi...

Where Growth Meets Possibility

Today marks the beginning of something I've been dreaming about for a while—a space where we can explore what it means to live fully, grow continuously, and create lives of meaning and freedom.

I'm Toby Brooks, and I'm thrilled you're here.

Why This Blog Exists

We live in an extraordinary time. Never before have we had so many opportunities to learn, connect, and create different lives for ourselves.

This blog exists to help bridge the gap between where we're coming from and where we're going—to share insights, strategies, and real experiences that help us maximize our optionality and design a life that excites us every morning with new possibilities.

Ultimately I wanted a place that wasn't my social media account to express myself, my interests, and connect with the world in meaningful ways.

What You Can Expect

In the coming weeks and months, I'll be sharing:

A Different Approach

This isn't just another self-improvement blog. It's a laboratory for life design—a place where innovation meets introspection, where creativity fuels change, and where positivity isn't toxic but transformative and practical.

I believe in servant leadership, which means this platform isn't about me—it's about us. It's about creating value for you and building a community where we all rise together.

Your Invitation

Whether you're:

You belong here.

Let's Begin

Every journey starts with a single step, and this is ours. I invite you to subscribe, engage, and most importantly, bring your whole self to this community. Share your dreams, your challenges, your victories—because your story matters, and it might be exactly what someone else needs to hear.

Together, we'll explore what becomes possible when we embrace our full potential and support each other's growth.

Welcome to the journey. I can't wait to see where it takes us.

With excitement and gratitude, Toby

P.S. What brought you here today? What dreams are you working toward? Drop a comment below—I'd love to hear your story and start this conversation together.

This site and more will be up and running here shortly, but you can subscribe in the meantime if you'd like to stay up to date and receive emails when new content is published!

21.7.2025 09:01 Where Growth Meets Possibility
https://blog.tobybrooks.com/comi...