Evan O'Donnell

A Small Win for Open-Source AI


This issue outlines key insights from a recent report on the risks and benefits of open versus closed AI models.


 

01 |  A small win for open-source AI

There’s an ongoing debate in Silicon Valley around open versus closed AI models.


Yesterday, the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce, released a report weighing the benefits and risks of open models. This work was mandated by President Biden's October 2023 Executive Order on “Safe, Secure, and Trustworthy AI.”


TL;DR: it’s too early to propose any restrictions on open models.


This is a small, somewhat unexpected victory for open-source advocates.


(Although “too early” is the operative phrase here – this was more of a “let’s punt and continue to evaluate” than anything else.)


Beyond the final recommendation, the report does offer some early insight into an important question – how, as a society, should we think about open access to this powerful new technology?


 

02 |  Definitions and methodology

The report defines “open models” as AI systems whose model weights [1] are available to the public.


Model weights are numerical values that determine a model’s output for a given input. They act as the AI’s “blueprint,” allowing the underlying model to be used, replicated, integrated directly into an application, and adjusted for various use cases. Examples of open models include Llama (by Meta) and Mistral.
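To make that concrete, here’s a minimal sketch of what “weights available to the public” means in practice: you download the weights and run the model entirely on your own hardware, with no API between you and the model. This assumes the Hugging Face transformers library and sufficient local memory; the Mistral 7B checkpoint is just one open-weight example.

```python
# Running an open-weight model locally: the weights are downloaded to your
# machine, so prompts and outputs never leave it at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # open weights published by Mistral AI

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # weights land on local disk

inputs = tokenizer("Open models allow", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```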


In contrast, “closed” models, like the most recent ones provided by OpenAI, keep these weights private. Closed models only allow users to submit inputs and receive outputs via web interfaces or APIs.
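The closed pattern, by contrast, is a hosted endpoint you can query but never inspect. A sketch using OpenAI’s Python client (the model name is illustrative):

```python
# Closed-model access: the weights stay on the provider's servers; you only
# exchange inputs and outputs over the network.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative closed model
    messages=[{"role": "user", "content": "Explain model weights in one sentence."}],
)
print(response.choices[0].message.content)
```

You can build on top of this, but you can’t replicate it, audit the weights, or run it offline.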


Open models allow for easier customization and access. They enable local data processing, which enhances privacy since data does not have to be sent to the model developer. Open models are generally cheaper to run than closed models for similar use cases.


However, because their weights are openly available, they pose challenges in monitoring and preventing misuse by malicious actors. This includes the potential development of weapons (bio, nuclear, cyber), illegal content, and dis/misinformation.


This report specifically examined:


(a) large, dual-use foundation models (excluding models below 10 billion parameters), and


(b) the marginal impact of open models, i.e., their risks and benefits beyond those of closed models and other pre-existing technologies. 


 

03 |  Key takeaways

The report concludes that the benefits of open models—developer contributions, customization, data privacy, democratized access, and competition that drives down costs—currently outweigh the marginal risks of misuse.


But I found the fine print most interesting:


  • Today, closed models typically have a performance advantage. However, open models are only a few months behind, and the time to performance parity is shrinking. Footnote 32 on page 53 of the report has some great insight from industry operators on this point. [2]


  • The risks of open models, such as malicious actors fine-tuning or removing safety features for harmful purposes (see the fine-tuning sketch after this list), are offset by the benefits of developer collaboration to identify vulnerabilities and implement safety measures. Where models are used inappropriately, the transparency around weights and source code should create better conditions for auditing, accountability, and response time.


  • For risks related to societal wellbeing (e.g., open models being used to generate illegal sexual material, dis/misinformation, or discriminatory outputs), the negative impact may stem more from existing technology than from direct public access to model weights. For instance, open models might make it easier to create “deep fake” content, but focusing regulation and resources on controlling the spread of those “deep fakes” through existing distribution channels like social networks may yield a better ROI. Instead of focusing solely on what started the fire, focus on the conditions that allowed it to spread.


  • Open models may not substantially exacerbate misuse, as closed models are also prone to manipulation (albeit to a lesser extent). For example, the non-consensual, AI-generated intimate images of Taylor Swift that spread across the internet in early 2024 were made using a closed model. Similarly, OpenAI recently reported that malicious nation-state-affiliated actors were using ChatGPT, a closed model, for their cyber operations.


  • Regulating open models is unlikely to change the trend towards an oligopoly of a few dominant foundation models. High barriers to compute access, capital, and talent will have a greater influence on these dynamics.


  • Having a few open models among the preferred set of large providers should foster healthy competition further up the AI supply chain, particularly at the tooling and application layers. Open models are easier to customize, which should lead to a more robust ecosystem of specialized applications and reduce the systemic risk of over-reliance on a single system (we all experienced the risks of over-reliance during the recent CrowdStrike failure).
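As flagged in the list above, here is the fine-tuning point in miniature. Once weights are public, anyone can adjust a model’s behavior locally and cheaply – for good or ill. This is a sketch assuming the Hugging Face transformers and peft libraries, with GPT-2 standing in as a small, public open-weight model:

```python
# Attaching a LoRA adapter to an open-weight model: only a tiny fraction of
# parameters is trained, which is why customizing open weights is cheap.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any open-weight checkpoint
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")

model = get_peft_model(base, lora)  # wraps the base model with trainable adapters
model.print_trainable_parameters()  # roughly 0.2% of weights are trainable here
# ...a standard training loop on custom data would follow...
```

The same mechanics that let a developer adapt a model to private data also let a bad actor tune away safety behavior – which is exactly the trade-off the report weighs.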


At this juncture, it’s far too early to declare whether open versus closed will “win,” or which is ultimately “better.” And perhaps that is the wrong debate to have in the first place. Both have risks and benefits. Great companies will be built on either – or, more likely, both.


What is important is that policymakers continue to monitor the risks and benefits, and respond appropriately to protect people and allow for healthy market dynamics as this technology evolves.


As for my own investment thesis:


1. I don’t have a fixed point of view on whether a startup should build on a closed or an open model. I care a lot more about whether there’s a clear narrative for why an individual model (or a stitching together of multiple models) is the right fit for a particular product based on pricing, utility, and developer functionality.


2. If closed models are unlikely to win long-term on a performance edge alone, I expect they will invest more in commercial integrations, developer tooling, and network effects on top of their core models. We’re already seeing this with OpenAI’s launch of SearchGPT. Going after broad use cases (e.g., AI-powered personal assistants or search) is a very risky endeavor for startups. Closed-model providers will throw virtually unlimited resources at these obviously large markets as they grow and try to differentiate beyond sheer performance.


Philosophically, I believe open models are an important part of the ecosystem. They foster healthy competition, developer optionality, and flexibility. They also offer cost efficiencies, which are important in this early stage of technology development.


If the marginal risk to society is not material, let the market dictate how and where open and closed models get used.


I’m glad that’s where the NTIA landed – at least for now.



 

[1] A quick primer on model weights from this report (page 8): “An AI model processes an input—such as a user prompt—into a corresponding output, and the contents of that output are determined by a series of numerical parameters that make up the model, known as the model’s weights. The values of these weights, and therefore the behavior of the model, are determined by training the model with numerous examples. The weights represent numerical values that the model has learned during training to achieve an objective specified by the developers. Parameters encode what a model has learned during the training phase, but they are not the only important component of an AI model. For example, foundation models are trained on great quantities of data; for large language models (LLMs) in particular, training data can be further decomposed into trillions of sub-units, called tokens. Other factors also play a significant role in model performance, such as the model’s architecture, training procedures, the types of data (or modalities) processed by the model, and the complexity of the tasks the model is trained to perform.”
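If the primer’s vocabulary is new, this small sketch shows both pieces – tokens and parameters – using GPT-2, a small public model chosen purely for illustration (assumes the Hugging Face transformers library):

```python
# Tokens: the sub-units a language model actually processes.
# Parameters (weights): the learned numbers that determine its outputs.
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.tokenize("Model weights determine a model's output."))
# e.g., ['Model', 'Ġweights', 'Ġdetermine', 'Ġa', 'Ġmodel', "'s", 'Ġoutput', '.']

model = AutoModelForCausalLM.from_pretrained("gpt2")
print(sum(p.numel() for p in model.parameters()))  # ~124 million parameters
```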


[2] A few notable quotes:


Center for AI Policy: “We find that the timeframe between closed and open models right now is around 1.5 years. We can arrive at this conclusion by analyzing benchmark performance between current leading open weight AI models and the best closed source AI models.”


Unlearn.ai: “At the moment, it takes about 6 months to 1 year for similarly performing open models to be successfully deployed after the deployment of OpenAI’s closed models. The time gap between proprietary image recognition models and high-quality open-source alternatives has narrowed relatively quickly due to robust community engagement and significant public interest. In contrast, more niche or complex applications, such as those requiring extensive domain-specific knowledge or data, might see longer timeframes before competitive open models emerge.”


Databricks: “Databricks believes that major open source model developers are not far behind the closed model developers in creating equally high performance models, and that the gap between the respective development cycles may be closing.”


Meta: “It is not possible to generally estimate this timeframe given the variables involved, including the model deployment developers’ business models and whether, in the case of Llama 2, they download the model weights from Meta directly or accessed it through third-party services like Azure or AWS.”
