
DeepSeek has released open-source GenAI models that outperform OpenAI's most advanced models, creating well-deserved buzz. I see this as a defining moment for GenAI and AI: things are about to get really exciting.
Challenge to U.S. dominance in AI
Until now, the U.S. has dominated the GenAI space. The top companies, including OpenAI, Google, Microsoft, and Anthropic, are all U.S.-based, with rare exceptions like France's Mistral AI. Most innovation has come from the U.S., while everyone else plays catch-up. Sam Altman, OpenAI's CEO, once said it was hopeless for Indian companies to compete with the U.S. on foundational models.
DeepSeek proves the field is open to all. Moats are crumbling, no matter the bluffs. This will inspire other countries to build their own models, spurring competition and innovation.
Challenge to Big Tech dominance in AI
Training foundational models is expensive, requiring high-end GPUs and massive amounts of electricity. This favored Big Tech's deep pockets and concentrated power in a few hands, and centralized control of such a consequential technology is risky.
DeepSeek built its models on lower-end GPUs at a fraction of the cost, opening the field to startups. Startups drive innovation, yet many had given up on training foundational models and focused instead on building atop them. Now they can create their own.
Challenge to closed-source models
The leading GenAI models have been closed-source. Llama is an open-weight exception used widely by companies, but it is not top-tier, and Meta's license imposes commercial restrictions.
DeepSeek's open-weight models challenge this. Many organizations prefer self-hosting open weights to keep control of their data. Until now, choosing that control meant accepting weaker performance than the closed models offered. DeepSeek eliminates that tradeoff.
There are concerns: guardrails that favor Chinese government narratives, and accusations that DeepSeek trained on the outputs of OpenAI's models. Still, this is a pivotal moment. David has slain Goliath with one stone. Over the next five years, expect explosive innovation, competition, and applications, all sparked by this shift.