
Kabir's Tech Dives

By: Kabir

About this listen

I’m always fascinated by new technology, especially AI. One of my biggest regrets is not taking AI electives during my undergraduate years. Now, with consumer-grade AI everywhere, I’m constantly discovering compelling use cases far beyond typical ChatGPT sessions.

As a tech founder of more than 22 years focused on niche markets, and the author of several books on web programming, Linux security, and performance, I’ve experienced the good, the bad, and the ugly of technology from Silicon Valley to Asia.

In this podcast, I share what excites me about the future of tech, from everyday automation to product and service development, helping to make life more efficient and productive.

Please give it a listen!

© 2025 EVOKNOW, Inc.
Categories: Economics, Leadership, Management & Leadership
Episodes
  • The Illusion of Thinking in Large Reasoning Models
    Jun 21 2025

    This episode investigates the reasoning capabilities of Large Reasoning Models (LRMs), a new generation of language models designed for complex problem-solving. The authors evaluate LRMs using controllable puzzle environments to systematically analyze how performance changes with problem complexity, unlike traditional benchmarks that often suffer from data contamination. Key findings reveal three performance regimes: standard LLMs surprisingly excel at low complexity, LRMs gain an advantage at medium complexity, and both models experience complete collapse at high complexity, often exhibiting a counter-intuitive decline in reasoning effort despite having a sufficient token budget. The analysis also examines the internal reasoning traces, uncovering patterns like "overthinking" on simpler tasks and highlighting limitations in LRMs' ability to follow explicit algorithms or maintain consistent reasoning across different puzzle types.

    Send us a text

    Support the show


    Podcast:
    https://kabir.buzzsprout.com


    YouTube:
    https://www.youtube.com/@kabirtechdives

    Please subscribe and share.

    14 mins
  • The Lucrative World of AI Careers
    Jun 21 2025

    This episode highlights the extremely high demand and corresponding astronomical salaries for professionals skilled in Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs). This AI talent war is driven by a limited supply of qualified individuals and the immense potential for significant financial returns that AI implementation offers to businesses across various sectors. While experience in LLMs and AI can lead to salaries well over $150,000, some top-tier roles command compensation packages exceeding $1 million annually, particularly at leading tech companies like OpenAI, Google, and Meta. To excel in these lucrative positions, candidates need a blend of strong technical skills such as programming (especially Python), advanced mathematics (linear algebra, statistics), machine learning fundamentals, data processing, and cloud computing, alongside crucial soft skills like communication, problem-solving, continuous learning, and business acumen.



    18 mins
  • The Illusion of Thinking in Large Reasoning Models (LRM)
    Jun 19 2025

    This episode investigates the reasoning capabilities of Large Reasoning Models (LRMs), a new generation of language models designed for complex problem-solving. The authors evaluate LRMs using controllable puzzle environments to systematically analyze how performance changes with problem complexity, unlike traditional benchmarks that often suffer from data contamination. Key findings reveal three performance regimes: standard LLMs surprisingly excel at low complexity, LRMs gain an advantage at medium complexity, and both models experience complete collapse at high complexity, often exhibiting a counter-intuitive decline in reasoning effort despite having a sufficient token budget. The analysis also examines the internal reasoning traces, uncovering patterns like "overthinking" on simpler tasks and highlighting limitations in LRMs' ability to follow explicit algorithms or maintain consistent reasoning across different puzzle types.


    14 mins