Promises, Profits, and People: The AI Promise
AI is being sold as salvation: faster work, smarter decisions, cheaper everything, a better future.
But behind every promise is an incentive… and behind every incentive is a human cost.
In this episode, we unpack “The AI Promise” from three angles—what companies say AI will do, what they profit from it doing, and what it actually does to people when it hits the real world. We’ll explore the gap between marketing and reality: productivity that turns into layoffs, personalization that turns into surveillance, “alignment” that turns into control, and innovation that concentrates power in fewer hands.
This isn’t anti-AI or pro-AI. It’s a reality check for builders, founders, and anyone whose work AI is about to change.
In this episode, you’ll learn:
Why AI hype is structurally inevitable (and who benefits most)
The hidden trade: convenience vs. autonomy, scale vs. trust
How “productivity gains” often translate into fewer jobs, not better lives
What ethical AI looks like in practice (not slogans)
The opportunity: building tools that expand human agency, not replace it
Episodes

2 days ago
Open source AI feels like freedom: no vendor lock-in, full control, unlimited experimentation. Then you meet the real gatekeeper: hardware.
In this episode, we unpack the uncomfortable reality that “running it yourself” often turns into a hidden tax—GPUs, VRAM ceilings, CUDA driver hell, kernel mismatches, brittle dependencies, slow inference, and surprise bills that make the “free model” anything but free. We’ll break down why open source wins on ownership but can lose on operations, and how teams accidentally trap themselves in an endless cycle of upgrades, optimizations, and infrastructure babysitting.
This isn’t an anti–open source episode. It’s a strategy episode: how to get the upside of open models without becoming a part-time data center.
In this episode, you’ll learn:
Why “open weights” doesn’t mean “cheap to run”
The real bottlenecks: VRAM, bandwidth, latency, and concurrency
The hidden costs nobody budgets for: ops time, debugging, reliability, retries
When renting compute beats owning—and when it doesn’t
How to escape the trap: quantization, routing, caching, smaller models, hybrid stacks
If you’re building agents (whether on Omni-Rogue or your own setup), this episode will save you money, time, and a whole lot of GPU pain.
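Two of the escape hatches above, caching and routing, can be sketched in a few lines. This is a hypothetical illustration, not code from the episode: the model names, the length-based difficulty heuristic, and the `call_model` stub are all made up for the example. The idea is simply that repeated prompts cost nothing and easy prompts never touch the expensive model.

```python
# Hypothetical sketch: cache answers and route cheap prompts to a small
# model, reserving the big model for hard ones. Model names are invented.

cache = {}

def call_model(name, prompt):
    # Stand-in for a real inference call; returns a tagged string.
    return f"{name}:{prompt}"

def route(prompt, hard_threshold=200):
    # 1. Cache hit: zero GPU cost for a repeated prompt.
    if prompt in cache:
        return cache[prompt]
    # 2. Crude difficulty heuristic: long prompts go to the big model.
    model = "big-70b" if len(prompt) > hard_threshold else "small-7b"
    answer = call_model(model, prompt)
    cache[prompt] = answer
    return answer
```

In a real stack the heuristic would be a classifier or a confidence score rather than prompt length, but the cost structure is the same: most traffic should never reach your most expensive weights.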

4 days ago
AI is starting to look less like a tool you buy… and more like a toll you pay forever.
In this episode, we unpack why so many AI products are drifting toward rent-seeking: usage limits that force upgrades, “must-have” features locked behind higher tiers, pricing that scales faster than the value delivered, and businesses that become dependent on vendors who can change terms overnight. It’s not always malicious—sometimes it’s just the economics of compute—but the result can feel the same: your workflow gets captured, and you’re paying rent to keep your own operations running.
We’ll break down the patterns to watch for, how to audit total cost of ownership (not just the monthly sticker price), and the alternatives that give you leverage: model routing, caching, open-source components, portability, and building “AI infrastructure” you control—so your company doesn’t become a tenant inside someone else’s platform.
In this episode, you’ll learn:
What rent-seeking looks like in AI products (and why it’s spreading)
The hidden costs: seats, tokens, overages, rate limits, and lock-in
How to spot “upgrade pressure” baked into the product design
When subscriptions are fair vs. when they’re a trap
How to build leverage: portability, backups, fallbacks, and hybrid stacks
If AI is becoming as essential as electricity, you don’t want your business running on a pricing page you don’t control.
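The "leverage" point above is concrete enough to sketch. Here is a hypothetical fallback chain, not the episode's code: the provider functions are invented stand-ins, with one simulating a vendor failure. The point is that when any single provider can rate-limit you or change terms overnight, your workflow should degrade to the next option instead of stopping.

```python
# Hypothetical sketch: a provider fallback chain so one vendor's outage
# or pricing change can't capture your workflow. Providers are made up.

def primary(prompt):
    # Simulate the hosted vendor failing or rate-limiting you.
    raise RuntimeError("rate limited")

def local_fallback(prompt):
    # Stand-in for a self-hosted or open-weights backup.
    return f"local:{prompt}"

def with_fallback(prompt, providers):
    # Try each provider in order; only fail if the whole stack fails.
    for call in providers:
        try:
            return call(prompt)
        except RuntimeError:
            continue  # move to the next provider in the stack
    raise RuntimeError("all providers failed")
```

The design choice that matters is owning the routing layer yourself: the moment the fallback list lives in your code rather than a vendor's dashboard, you are a customer again instead of a tenant.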

Friday Feb 27, 2026
When AI can produce “the answer” in seconds, the old-school model collapses. Because the product is no longer proof of learning.
In this episode, we explore the shift education has to make in an AI-native world: grading the process instead of the final output. We break down why essays, problem sets, and take-home assignments are becoming weak signals—and what replaces them: decision logs, drafts, voice explanations, real-time work sessions, oral defenses, version history, and project-based evaluation that measures thinking, not typing.
We’ll also cover what this means for teachers, students, and parents: how to keep learning honest without turning school into surveillance, how to reward curiosity and iteration, and how AI can become a coach that improves thinking—rather than a shortcut that replaces it.
In this episode, you’ll learn:
Why “perfect work” is no longer evidence of mastery
How to evaluate reasoning: drafts, checkpoints, and oral defense
Practical classroom systems that don’t require spying on students
How AI can support process-based learning instead of replacing it
What a modern grading rubric looks like when the output is cheap
If we keep grading the product in an AI world, we’ll graduate people who can’t do the work. This episode is the redesign.

Wednesday Feb 25, 2026
Automation is creating an enormous “dividend”—more output with fewer human hours. The question is simple: who gets paid?
In this episode, we argue that if AI and agents keep compounding productivity while wages lag behind, we don’t just get better software—we get a wider inequality gap. We break down how automation quietly shifts leverage away from labor (even when companies call it “efficiency”), why the winners won’t be the most technical—but the most strategically positioned—and what it would look like to redesign the economy so the gains don’t concentrate in a handful of platforms and shareholders.
This isn’t a political rant. It’s a systems episode: incentives, power, and the practical mechanisms that can distribute the upside—profit-sharing, employee ownership, shorter workweeks, portable benefits, outcome-based pay, “agent-as-a-coworker” models, and new safety nets that match an AI-speed world.
In this episode, you’ll learn:
What the “automation dividend” is—and why it’s already here
How productivity gains can increase inequality without intentional design
The difference between “job loss” and “bargaining power loss”
Real-world models for sharing the upside: ownership, dividends, time, and benefits
What builders and founders can do now to make AI expand human freedom—not shrink it
If AI is going to replace labor, then the value it creates has to replace the paycheck too.

Monday Feb 23, 2026
Your model “crushed” the benchmark. The eval dashboard looks perfect. Everyone celebrates. Then reality shows up… and the system quietly fails in ways the score never measured.
In this episode, we break down why top AI scores often create false confidence—and how “high performance” can hide brittle behavior, metric gaming, and catastrophic edge-case errors. We’ll expose the traps behind popular eval setups (clean test sets, narrow tasks, average-based metrics, and feedback loops that reward style over truth), then give you a practical framework to tell whether a model is actually reliable—or just optimized to look good.
In this episode, you’ll learn:
Why benchmarks and leaderboards routinely overstate real-world capability
How models “pass” while still hallucinating, failing tools, or breaking under pressure
The difference between accuracy and safety, and why averages can be dangerous
How to design evals that catch edge cases, regressions, and real production risk
The new gold standard: reliability, verification, and “catastrophe-aware” testing
If you’ve ever trusted a “top score” and later got burned, this episode will show you exactly why—and how to audit what matters.
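The "averages can be dangerous" point above fits in a few lines of arithmetic. This is an illustrative sketch with invented scores, not data from the episode: two models share the same mean, but one hides a catastrophic edge case that only a worst-case metric surfaces.

```python
# Illustrative sketch: two models with identical average scores but
# very different production risk. Scores are invented for the example.

def mean(scores):
    return sum(scores) / len(scores)

def worst_case(scores):
    # The number an average-based leaderboard never shows you.
    return min(scores)

steady  = [0.80, 0.82, 0.78, 0.80, 0.80]  # unremarkable but reliable
brittle = [0.95, 0.95, 0.95, 0.95, 0.20]  # great average, one disaster
```

Both lists average 0.80, so a leaderboard ranks them as equals, while the brittle model's worst case (0.20) is the one your users will eventually hit. That is the argument for catastrophe-aware testing: report the floor, not just the mean.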

Wednesday Feb 18, 2026
Everyone’s “AI-powered” now. But when you look under the hood… a lot of it is theater.
In this episode, we teach you how to audit AI products like a pro—so you can instantly tell the difference between real capability and glossy marketing. We break down the most common gimmicks (fake automation, manual ops disguised as “agents,” shallow prompts packaged as “platforms,” and demos that crumble outside perfect conditions), then give you a simple framework to stress-test any AI tool in minutes.
Whether you’re buying AI software, investing, or building your own product, this episode helps you avoid expensive mistakes—and spot the few products that are actually doing something defensible.
In this episode, you’ll learn:
The biggest red flags that scream “AI theater”
The demo-to-production gap: how to test reliability fast
What real AI automation looks like (tools, workflows, and guardrails)
Questions to ask vendors that instantly expose weaknesses
A quick “capability checklist” you can use on every AI pitch
If you’ve ever thought, “This seems impressive… but I don’t trust it,” this episode will give you the audit lens.
