Gianfranco's Best of February 2025 Reading List

The top essays on AI, finance, startups, and personal development in this curated February 2025 reading list.

Welcome to the February 2025 edition of my monthly reading list.

This month, I’ve curated my favorite essays across various categories, including artificial intelligence, finance, startups, and personal development.

If you only have a few minutes, these three posts were my favorites; all three also appear in the categorized list below:

  • Why I think AI take-off is relatively slow - Tyler Cowen
    • Tyler Cowen argues that even dramatic AI breakthroughs face human, institutional, and regulatory bottlenecks—like the FDA or slow-moving sectors—limiting rapid economic gains. He cites the O-Ring model, cost disease, and historically smooth growth trends to suggest that while AI will add a half-point or so to annual growth, real-world adoption still hinges on messy human factors. Despite optimism about AI’s technical capabilities, Cowen sees incremental rather than explosive change in the near term.
    • Read More
  • Always Start with a Model - Byrne Hobart
    • Byrne Hobart argues that even flawed models are essential for navigating economic complexity, asserting that “even if the thesis is wrong, acting on it means testing it, and that’s the only way to get it right.” Through examples like MySpace’s chaotic social experiments and Reddit’s libertarian-to-mainstream evolution, he illustrates how iterative critique of initial assumptions—not market perfection—drives progress. The piece champions intellectual humility, framing models as provisional tools for uncovering hidden incentives and redefining societal-scale paradigms.
    • Read More
  • Every Productivity Thought I've Ever Had, as Concisely as Possible - Alexey Guzey
    • In a wide-ranging post, Alexey Guzey offers practical strategies for tackling procrastination, structuring daily work into short cycles, and rotating between new contexts (like coffee shops) to maintain focus. He emphasizes that all systems eventually break, so building flexible rules and embracing trial and error—rather than feeling guilty—keeps you moving forward. By combining clear “work vs. break” intervals with a forgiving mindset, Guzey shows how to break free of stagnation and refine one’s approach to productivity.
    • Read More

Artificial Intelligence

  • A Conversation with One of My 8090 Co-Founders - Chamath Substack
    • Chamath and Sina discuss how “grokking”—an unexpected outcome of pushing transformer training beyond typical overfitting points—revealed emergent reasoning, bridging the simplicity of artificial neurons and the complexity of biological brains. They note that massive scale and fresh “reward modeling” methods still spark surprising intelligence, hinting at deeper optimization and domain-specific data as AI’s next frontier. Yet they caution that U.S.-based efforts may slow under tighter ethical constraints, while China’s bolder approach could accelerate risk-taking and experimentation.
    • Read More
  • Deep Research and Knowledge Value - Stratechery
    • OpenAI’s new “Deep Research” agent acts like a synthetic research assistant, sifting the entire web to build extensive, on-demand reports—but its power highlights how public information is losing competitive edge, as AI can easily surface even obscure facts. Meanwhile, truly proprietary or offline data (the “unknown known”) gains new value, and prediction markets or specialized knowledge work become more attractive as strategies for monetizing secrets. As these capabilities expand, questions emerge around jobs, privacy, and how to contextualize AI’s incomplete (and sometimes incorrect) outputs.
    • Read More
  • Why I think AI take-off is relatively slow - Tyler Cowen
    • Tyler Cowen argues that even dramatic AI breakthroughs face human, institutional, and regulatory bottlenecks—like the FDA or slow-moving sectors—limiting rapid economic gains. He cites the O-Ring model, cost disease, and historically smooth growth trends to suggest that while AI will add a half-point or so to annual growth, real-world adoption still hinges on messy human factors. Despite optimism about AI’s technical capabilities, Cowen sees incremental rather than explosive change in the near term.
    • Read More
  • The End of Search, The Beginning of Research* - Ethan Mollick
    • Ethan Mollick heralds the convergence of AI “Reasoners” (models that “think” via extended inference-time compute) and narrow agents like OpenAI’s Deep Research, which can autonomously produce graduate-level academic analysis—citing legitimate sources, synthesizing contradictions, and uncovering novel connections—at machine speed. He observes that while general-purpose agents like Operator still falter, specialized AI now rivals human researchers: “I would have been satisfied to see something like it from a beginning PhD student.” This shift signals a near-term future where AI handles high-value white-collar tasks, forcing experts to evolve from doers to orchestrators of machine intelligence.
    • Read More
  • Google’s Future in Search & AI - Tomasz Tunguz
    • Google’s latest earnings reveal a strong focus on AI-driven search experiences, with “AI Overviews” monetizing similarly to classic search ads. Hardware constraints remain critical—Google reports data centers delivering 4x more compute per watt vs. 5 years ago, yet capacity lags surging training and inference demand. Meanwhile, Gemini models and Vertex see rapid adoption, foreshadowing 2025 as a turning point for agentic AI systems that transform how users interact with technology—especially around multimodal search and automation.
    • Read More
  • The LLM Curve of Impact on Software Engineers - SerCe
    • SerCe argues that AI’s usefulness for software engineers follows a “U-shaped” curve: junior devs can rapidly accelerate coding and learning but risk stunting deeper skill growth; mid-level devs get big productivity boosts for everyday tasks yet still find the models lacking for on-call work or ambiguous specs; senior devs are often unimpressed, as LLMs lack the domain context for non-coding tasks or complex debugging; but staff-level engineers again see big benefits, leveraging LLMs to spin up prototypes and proofs of concept quickly. This explains why some devs rave about LLMs while others see little value—the payoff depends heavily on your role and responsibilities.
    • Read More
  • Does OpenAI’s Deep Research Put Me Out of a Job? - Every.to
    • OpenAI’s new “Deep Research” agent can rapidly generate research reports—threatening to automate substantial swaths of knowledge work. Yet historical examples show that while automation often depresses wages and eliminates certain tasks, market forces and worker adaptability shape its ultimate impact. Thus, analysts and writers must lean into the intangible, interpretive, and managerial aspects of their roles that AI has yet to master.
    • Read More
  • Understanding Reasoning LLMs - Sebastian Raschka
    • Sebastian Raschka examines four approaches—inference-time scaling, pure RL, RL plus supervised fine-tuning (SFT), and distillation—for developing LLMs that excel at multi-step “thinking” tasks like advanced math or coding. He uses DeepSeek R1 as a blueprint, illustrating how emergent reasoning can arise through pure RL and be refined via additional SFT, while noting that smaller teams can still build capable models on limited budgets with careful distillation or “journey learning.” Ultimately, Raschka underscores matching the right strategy (and scale) to a model’s intended use case, balancing efficiency, cost, and performance in the rapidly evolving AI landscape.
    • Read More
  • Datacenter Anatomy Part 2 – Cooling Systems ($) - SemiAnalysis
    • This comprehensive deep dive explores how modern datacenters manage heat, detailing everything from legacy air-cooled setups to increasingly popular liquid-cooling methods driven by AI compute demands. SemiAnalysis breaks down water-cooled chillers, airside and waterside economizers, evaporative cooling, and advanced designs at Microsoft, Meta, Google, and more—showing how each hyperscaler balances PUE, WUE, and time-to-market. Ultimately, as chips push into 1,500W+ territory, the report forecasts a massive shift toward direct liquid cooling and immersion, with new winners in the supply chain (e.g., quick disconnects, CDUs) and potential obsolescence of older architectures.
    • Read More

Finance and Economics

  • AI Eats Equity Research ($) - Net Interest
    • OpenAI’s new “Deep Research” agent automates much of an equity analyst’s workflow—data gathering, summarizing, and drafting research—in minutes, threatening analysts’ roles in a business long squeezed by regulations and falling commissions; yet humans still matter, as seasoned professionals offer the skeptical eye that AI currently lacks, especially when holding management to account and providing deep contextual insight.
    • Read More
  • Popular econ books: What to read, what not to read - Noah Smith
    • Noah Smith rounds up his recommendations for recent, reader-friendly economics books, spotlighting titles that distill modern economic concepts (like *The Undercover Economist*, *Mastering ‘Metrics*, and *Good Economics for Hard Times*). He also provides a brief “anti-reading” list, cautioning that some popular works (*Power and Progress*, *Debt: The First 5000 Years*, *The Deficit Myth*, *Freakonomics*) offer ideas or evidence he views as either deeply flawed or simply not holding up over time.
    • Read More
  • Always Start with a Model - Byrne Hobart
    • Byrne Hobart argues that even flawed models are essential for navigating economic complexity, asserting that “even if the thesis is wrong, acting on it means testing it, and that’s the only way to get it right.” Through examples like MySpace’s chaotic social experiments and Reddit’s libertarian-to-mainstream evolution, he illustrates how iterative critique of initial assumptions—not market perfection—drives progress. The piece champions intellectual humility, framing models as provisional tools for uncovering hidden incentives and redefining societal-scale paradigms.
    • Read More
  • Why Do Companies Even Exist? - Byrne Hobart
    • Byrne Hobart revisits Ronald Coase’s transaction-cost theory, arguing companies exist to minimize friction in coordination—exemplified by Walmart’s centralized control over logistics and branding versus fragmented models like Airbnb’s lodging ecosystem. He dissects how technology redraws corporate boundaries: scale-driven industries centralize (e.g., data platforms), while tools like POS systems enable restaurant fragmentation. The piece frames companies as evolving “Schelling Points” where shifting trade-offs between internal efficiency and market flexibility dictate organizational form.
    • Read More

Startups

  • The Complete Guide to SaaS Pricing Strategy - Tomasz Tunguz
    • Tomasz Tunguz frames SaaS pricing as a constantly evolving strategy—balancing short-term revenue, market share, and premium skimming—while factoring in AI’s growing value, potential Veblen effects for premium tiers, usage-based frictions, and buyer demands for predictability, all in service of product-marketing alignment and margin expansion over time.
    • Read More

Personal Development

  • Every Productivity Thought I've Ever Had, as Concisely as Possible - Alexey Guzey
    • In a wide-ranging post, Alexey Guzey offers practical strategies for tackling procrastination, structuring daily work into short cycles, and rotating between new contexts (like coffee shops) to maintain focus. He emphasizes that all systems eventually break, so building flexible rules and embracing trial and error—rather than feeling guilty—keeps you moving forward. By combining clear “work vs. break” intervals with a forgiving mindset, Guzey shows how to break free of stagnation and refine one’s approach to productivity.
    • Read More
  • Productivity - Sam Altman
    • Sam Altman advocates combining meaningful, self-directed work with deliberate lifestyle choices—like blocking out high-energy hours, skipping low-value tasks, and optimizing sleep, diet, and exercise—to compound daily gains into transformative long-term results. He emphasizes that truly important outcomes stem from working both hard and smart, while preserving time for relationships and personal passions. Ultimately, focus on what matters most, refine your routine, and avoid productivity “porn” that distracts from real progress.
    • Read More
