The recent call by OpenAI for the US government to consider licensing and registration requirements for AI with specific capabilities has stirred up a mix of emotions and concerns among AI enthusiasts, including myself.
OpenAI CEO Sam Altman argues that regulation is essential for maintaining safety standards, but there is a valid concern that this move could create a corporate stronghold around AI, stifling open-source AI tools and models.
The Push for AI Regulation: Weighing the Pros and Cons
In his testimony before Congress, Sam Altman advocated a governance regime flexible enough to adapt to new technical developments, one that would regularly update safety standards and hold companies accountable for their AI systems.
Additionally, an OpenAI staffer proposed the creation of a licensing agency for AI, potentially called the Office for AI Safety and Infrastructure Security (OASIS). While there’s merit to these ideas, it’s essential to weigh the pros and cons and consider alternative approaches.
The Dangers of a Corporate Stronghold: What’s at Stake?
Licensing could lead to several negative consequences that warrant consideration:
- Stifling Innovation: Licensing requirements might make it difficult for smaller players to enter the AI market, potentially limiting competition and hindering the development of novel AI solutions. We must ensure that regulation doesn’t inadvertently hamper innovation and progress.
- Outpacing Regulation: The rapid pace of AI development raises concerns that licensing requirements may become obsolete as new advancements emerge. Policymakers must be vigilant and adaptive to ensure that regulation remains relevant and practical.
- Threat to Open Source: Open-source AI tools and models democratise access to powerful AI technologies, and it’s crucial to protect this. Stringent licensing requirements might make it difficult for the open-source community to continue thriving.
Financial Barriers and Open Source: Finding a Middle Ground
One of the critical concerns for the open-source community is the potential financial burden that licensing requirements could impose. Open-source projects often lack the financial resources of large corporations, and licensing fees could create a barrier to entry for open-source AI initiatives. We need to find a middle ground that promotes safety without stifling the growth and accessibility of open-source AI projects.
While it is argued that licensing would only apply to large, powerful models like GPT-4, there is a valid concern: a growing number of open-source AI projects and models are fast approaching similar levels of capability and could soon be held back by the same regulation.
Trust in Corporate Players: Balancing Skepticism and Collaboration
Many have expressed their distrust of OpenAI and other significant players in the AI field, questioning their motives and the long-term implications of their actions. While it’s crucial to maintain a healthy level of scepticism, we should also recognise the potential for collaboration between corporate and open-source AI initiatives in driving advancements that benefit society.
Dig beneath the surface of companies like OpenAI and, despite the technological advances they have undoubtedly pioneered, you will find they are built on open source in some way. Open source powers most of the internet; it powers Amazon, Facebook, Google, Microsoft and almost every profitable corporation you could name.
Open Source AI and Collaboration: Fostering a Thriving AI Ecosystem
Open-source AI projects have the potential to surpass proprietary AI developments by fostering creativity and driving advancements. By maintaining an open and collaborative AI ecosystem, we can ensure that the evolution of AI benefits a wide range of industries and applications rather than being monopolised by a select few.
Open Source Is a Threat to Corporations
Open source is a threat to companies like OpenAI. A leaked Google memo even admitted that open source would ultimately win the AI arms race in many different areas. The truth is that many exciting developments in open source are happening right now:
“We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
“But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
“I’m talking, of course, about open source.
“Plainly put, they are lapping us. Things we consider ‘major open problems’ are solved and in people’s hands today.”
While Google might just be saying this to get ahead of future monopoly lawsuits and other legal claims, there is truth in that memo. I’ve seen some incredible things happening in open source, and GPT4All is one of those exciting projects.
Mandatory Open Source for AI: An Idea Worth Exploring
I think we should approach this from the opposite direction: make AI open source by law. This would force big companies like OpenAI to be transparent about their AI advancements, prevent giant corporations from monopolising AI technology and ensure that those advancements remain accessible to everyone. While this idea has its own challenges, it highlights the importance of exploring alternative governance structures and fostering a diverse AI ecosystem.
The moat that Altman is proposing comes from a corporate perspective, not an open one. At this point, OpenAI should rename itself ClosedAI: its actions are anything but open, and it is moving in an increasingly closed direction.
OpenAI’s push for AI licensing and regulation may be well-intentioned, but it’s essential to balance safety concerns against the need for open collaboration and innovation. Policymakers and industry leaders must consider the broader implications of AI licensing and work toward a more equitable AI landscape that benefits everyone, not just a few powerful corporations.