2023 will be remembered as the year everyone and their uncle heard about generative artificial intelligence. The explosive popularity of ChatGPT showed millions the power such systems can have, fueling both fear and excitement about how they might develop.
From President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to the European Union's Artificial Intelligence Act, approved in March 2024, legislative bodies responded by attempting to regulate this growing field.
Voices from academia and industry are reflecting on how to ensure the best possible outcomes: ethically and fairly developed AI, balanced with the need to maintain healthy innovation and market competition. In this debate, David Gray Widder, Sarah West and Meredith Whittaker published a position paper titled “Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI.”
The authors are critical of corporations instrumentalizing the promises of Open Source in their AI programs, while not offering true transparency in line with the principles of Open Source.
Further, they highlight the concern that these actors benefit from the free labor of Open Source enthusiasts who develop within parameters given by companies. However, in offering these critiques, Widder, West and Whittaker come to diminish the role Open Source should play in the development of AI systems. Yes, it’s not only necessary but absolutely crucial to critically evaluate the extent to which Open Source values are being upheld when these claims are made. But in doing so, the authors downplay the central role that the transparency of an “open” AI system can offer.
A concern that the authors pose is that creating AI models from the bottom up is far more resource intensive than creating “traditional” software, causing AI development to happen within big tech companies’ programs, such as Meta’s Llama 2. This allows already powerful companies to profit off the labor of others. They are frank about their skepticism of openness in AI being democratic, stating that, “While maximally ‘open’ AI is a necessary condition of any hypothetically democratized and level AI playing field, it is not a sufficient condition,” with more than one of the authors questioning whether AI can ever be democratized.
Open for all, not just for business
While limited participation by Open Source enthusiasts is a valid concern, Open Source itself doesn’t promise democratization. Open Source can be a tool for democratization efforts, but it needs to be paired with additional political initiatives to be truly effective.
Furthermore, the concern that big tech is profiting from the free labor of Open Source enthusiasts is not unique to openness in AI, though AI challenges its parameters due to its inherent exclusiveness. Building on the labor of others and standing on the shoulders of giants is at the heart of Open Source communities. The problem lies not in openness, but in the massive wealth these companies have amassed and the failure of those resources to trickle down into the community. This is a political issue rather than an Open Source issue.
The trio of researchers also discuss “open washing,” where corporations use “Open Source” and even “AI” as marketing terms. Bigger corporations championing buzzwords, in response to a more politically aware consumer, is not exclusive to AI or even the tech industry. Take, for example, “greenwashing” in the fashion industry, referring to companies using unregulated terms (such as “sustainable” or “natural”) to mislead consumers while polishing their public image. While recognizing that the scope of AI in the public and private spheres might be more invasive, this is a problem of corporations being left unregulated in their marketing, especially as they use terms that are familiar to the general population even though there is no set understanding of what they mean (in fact, there isn’t any set definition of what openness in AI is).
If we abandon what Open Source looks like for AI models, what kind of tool shall we consider when evaluating whether models are available for everyone and every purpose?
Openness is not the only tool for evaluating the safety and efficacy of a system, though it remains a relevant one. The concern is, if we indeed diminish the importance of openness as an influence, what interest groups, outside of large tech companies, will take its place? The bottom-up approach that openness offers is in danger of being replaced by a more technocratic style of development and evaluation. If we’re prepared to reject Open Source within AI, then we must also ask: If not openness, then what standard will we use to judge the extent to which a program’s architecture is representative, safe and effective?
There’s no need to abandon openness. In the paper, the authors offer important points about how Open Source in AI needs to be approached differently.
They point out that developing AI models requires a large amount of capital, and that confusion over what AI actually is serves as a tool for profit, enabling “open washing.”
As a standard, openness should be upheld while recognizing that its implementation will look different in the context of AI. While it’s crucial to be critical of big tech’s instrumentalization of openness, that criticism shouldn’t diminish the relevance of Open Source today.
Openness is crucial in this era of rapid AI development. It wouldn’t be the first time a new technology came along but Open Source proponents missed the boat — we can’t let it happen again.
As we define Open Source AI, let’s consider in what ways that definition needs to be applied differently from openness as we know it today.