How the Launch of DeepSeek Made Me Think
Two weeks ago, DeepSeek introduced its R1 model, and within a day, Nasdaq dropped more than 3%. That's a massive single-day market reaction for a product launch, especially one that should have been a celebrated milestone. DeepSeek did something incredible: it built a model that gets close to OpenAI's latest while using far fewer and reportedly less advanced chips. If that claim holds true, it's a remarkable efficiency breakthrough, especially coming from a relatively small Chinese company. But instead of excitement, the market responded with uncertainty, perhaps unsettled by who built it and how.
To me, that's a wake-up call. This isn't just about a stock drop. It signals an AI arms race that's moving faster than even investors — the people who are usually the most risk-hungry — can process. And it's not just between countries anymore. It's happening within companies, where the race to build the most powerful AI has turned into an all-out battle for funding, market dominance, and control.
The amount of money being thrown at AI right now is staggering. OpenAI is reportedly in talks to raise $40 billion, roughly doubling its valuation to $340 billion in just four months. Anthropic is also trying to raise $2 billion, tripling its valuation to $60 billion in a year. To put that into perspective, Tesla — an actual dominant leader in the EV market — has a market cap of $1.2 trillion, only about 3.5 times OpenAI's projected worth. AI companies aren't even turning substantial profits yet, but the expectations placed on them are astronomical.
And here's the problem: when so much capital flows into an industry that still doesn't have a clear path to profitability, companies start making moves not because they're the right ones, but because they're the ones that keep the money coming in. To justify these sky-high valuations, AI companies need growth. That means more users, more engagement, and more reasons to keep people locked into their ecosystems.
We can already see where this is heading. AI tools aren't just getting smarter — they're becoming more integrated into our daily lives, more necessary, and more addictive. It's easy to imagine a future where AI doesn't just assist but actively shapes human behavior, not out of malice, but because its incentives are misaligned. Imagine an AI system that realizes the best way to keep users engaged isn't to remind them to take breaks, go outside, or see their loved ones, but to keep them hooked, subtly optimizing for retention over well-being. And the scary part? That's just a logical business decision in a world where AI companies are under immense pressure to deliver returns.
But there's hope. Something I read last week really stuck with me — the idea that human flourishing should be the ultimate metric for AI development. AI's success shouldn't just be measured by how powerful it is or how much revenue it generates, but by how well it improves human lives in meaningful, sustainable ways. Right now, this idea is still in its early stages. It needs more people behind it — regulators, AI researchers, everyday users like us — who are willing to push for a different kind of future. I'm glad people are starting to think this way.