We Don’t Need Smarter AI. We Need Time to Catch Up.
- Chris Odell
- Jul 24, 2025
- 5 min read

In the fast-evolving world of artificial intelligence, the race to develop stronger and more sophisticated foundation models is relentless. With the imminent arrival of GPT‑5, alongside the rapid advancements from companies like xAI, Google, Anthropic, Perplexity, and Meta, the pace of development is staggering.
However, before we press on to create even more powerful and all-encompassing models, it's crucial that we pause to let businesses, governments, and society catch up and fully understand the implications of this new power at our fingertips.
The future of AI isn't just about capability. It's about responsibility. And for that, we need time to refine products, update regulation, and reflect on how AI is reshaping our lives.
Better Products with Current Models
While many AI leaders remain fixated on developing the next breakthrough foundation model, organizations of all kinds are still struggling to integrate the powerful tools already available. AI-driven applications for medical billing, clinical summarization, protein discovery, and business operations are often still in pilot phases. Even AI voice agents, touted as a revolution in customer service, have yet to reach full commercial maturity. A recent conversation I had with a VC leader in AI healthcare reinforced this point: when asked about product fit and scale, they said most teams are still selling a vision of an AI future rather than delivering usable, scalable solutions.
Rather than racing to invent the next model that’s 20% more powerful, there’s a stronger business case for crafting tools that are 200% more usable.
This signals a tremendous economic opportunity, not in building ever-larger models but in building better products. Companies stand to gain far more right now by focusing on good design, strong user experience, and clear problem-solving with today’s models. (Of course, we first need to get our knowledge management in order.) There is untapped value in applying existing AI capabilities to real-world use cases with care and intentionality. From customer service to drug discovery to clinical documentation, the models are already strong enough; what’s missing is thoughtful implementation and trust-building.
The future of AI success won’t be defined by who has the strongest model—it will be defined by who builds the most useful, ethical, and widely adopted product.
Regulation Needs to Catch Up
As foundation models grow more powerful and more widely used, regulatory oversight lags shockingly behind. Initiatives such as the EU AI Act, and the fractious debate over the proposed ban on state-level regulation in the US’s “big, beautiful bill,” show that attention is growing, but much of the current landscape remains reactive and fragmented. In the U.S., there is no unified federal framework, leaving states to fill the void, often with conflicting or uneven rules that frustrate compliance across jurisdictions.
To fill this regulatory gap, industry coalitions have begun stepping forward. For example, the Information Technology Industry Council (ITI) and Americans for Responsible Innovation (ARI) led a coalition of over 60 stakeholders—including corporations, academia, and nonprofits—to call on Congress to establish a U.S. AI Safety Institute inside NIST.
As ARI President Brad Carson put it:
“For the U.S. to lead on the responsible development of AI, our government has to lead on the development of voluntary standards and testing that underpin AI safety.” [1]
Similarly, more than 140 organizations—including advocacy groups, academic institutions, and labor coalitions—sent a letter urging Congress to reject a proposed federal moratorium on state-level AI regulations, warning:
“This moratorium would mean that… the company making that bad tech would be unaccountable to lawmakers and the public.” [2]
In the healthcare sector, the Coalition for Health AI (CHAI) has brought together leading institutions including Stanford, Johns Hopkins, Mayo Clinic, and Microsoft to proactively define trustworthy AI guidelines tailored to clinical care. Their work illustrates how sector-specific partnerships can offer structure and standards in areas where legislation is still catching up.
And in Europe, where the AI Act is coming into force, companies represented by CCIA Europe recently warned:
“With critical parts of the AI Act still missing just weeks before rules kick in, we need a pause to get the Act right, or risk stalling innovation altogether.” [3]
Altogether, these coalitions show the private sector trying to shape the rules of the road, while governments struggle to match the pace of technological progress.

Understanding AI’s Social Toll
Most importantly, before we rush to build the next generation of foundation models, we must take a collective pause to understand what today’s AI systems are already doing to our society.
We’ve seen this story before. When social media platforms like Facebook and MySpace first emerged, they promised deeper connection, better communication, and a more informed world. And for a time, they delivered. But years later, we’re grappling with an epidemic of polarization, misinformation, loneliness, and youth mental health decline—consequences we neither fully anticipated nor adequately mitigated.
This isn’t about slowing down AI—it’s about not repeating the same mistakes we made with social media.
Today’s AI tools may follow a similar trajectory—only faster, deeper, and more intimate. One of the fastest-growing uses of AI isn’t productivity or search; it’s companionship. Apps like Replika and Character.AI are being used not just by teens but by adults seeking friendship, therapy, romantic interaction, and even sexual connection. A recent Common Sense Media survey found that over 70% of U.S. teens have tried AI companions—but the use extends well beyond youth. Lonely adults, caregivers, and people navigating mental health struggles are increasingly turning to AI for emotional support. [4]
The social implications of this shift are profound and still poorly understood. What happens when people form emotional bonds with entities that cannot truly reciprocate? What does it mean for human relationships, empathy, and accountability when an always-available chatbot becomes a preferred substitute for a friend, a partner, or a therapist? How will increased bonds with AI impact our trust of neighbors, governments, or social systems?
Researchers have already begun sounding alarms: early findings link heavy use of AI companions with increased isolation, reduced social engagement, and heightened symptoms of depression. [5][6]
Even Mark Zuckerberg recently framed AI as a potential answer to America’s loneliness crisis. [7] But emotional intimacy cannot be programmed, and digital intimacy does not make our society better as a whole.
This is not about demonizing the technology—AI may very well help people in pain. But without a broader societal conversation and real longitudinal research, we risk repeating the errors of the social media era: building powerful tools that reshape human behavior, norms, and mental health faster than we can understand or respond.
The real question isn’t whether we can build something smarter. It’s whether we understand what it’s doing to us now.
Conclusion
The pace of AI development is astonishing, but speed without reflection is a liability, not a strength. We already have immensely powerful models at our disposal.
What we need now is time: time to build better products with real utility and trust; time to establish clear, adaptive regulation; and time to understand the social implications of a technology that is quickly becoming embedded in our most personal spaces.
This is not a call to slow down innovation; it’s a call to mature it. By focusing not just on what AI can do, but on what it should do, we can ensure that the next wave of progress is not only impressive, but responsible, sustainable, and human-centered.
Footnotes
1. ITI and ARI, “Coalition Calls for U.S. AI Safety Institute,” July 2025.
2. The Hill, “House leaders urged to remove AI provision in ‘big, beautiful bill’ to prevent ‘unfettered abuse,’” May 2025.
3. AIApps, “Tech Lobby Group Urges EU to Pause AI Act,” July 2025.
4. Common Sense Media, “Talk, Trust, and Trade-offs,” 2025.
5. Zhang et al., “The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being,” arXiv, June 2025.
6. Lai et al., “Depression and Use of Conversational AI for Companionship,” Frontiers in Public Health, 2025.
7. TIME, “Mark Zuckerberg Wants AI to Solve America’s Loneliness Crisis. It Won’t,” June 2025.