

  • Will Coding Automation Create a New Paradigm for Open Source?

This issue explores how coding automation could reshape the future of open source software.

01 | Code as a commodity

The way we build technology is undergoing a profound shift. New development standards and advancements in coding automation are making software development significantly faster and more accessible. Code, once the most proprietary and costly component of the software stack, is rapidly becoming a commodity. I've been reflecting on where this will most affect our industry, and I believe open source software will be at the center of this transformation. I predict that our conception of open source – how it is used, who uses it, how it gets integrated into products – might look fundamentally different than in any prior era of technology.

02 | The pros and cons of open source

Open source software (OSS) is software whose source code is freely available for anyone to inspect, use, and modify for their own use case. It offers several benefits:

• Integrating OSS into in-house applications allows for deeper customization and modularity compared to buying all-in-one, proprietary software, because the underlying code can be directly copied and edited to meet specific business needs.

• A large community of contributors leads to rapid improvements, organic adoption, robust tooling, and (oftentimes) enhanced security, as large networks of developers identify and patch vulnerabilities.

• OSS is often more cost-effective, allowing firms to circumvent all-in-one licensing fees and vendor lock-in. 97% of firms report net cost savings from using OSS, and without it, companies would need to spend 3.5x more on software.

However, OSS has historically been a tough sell for investors. Generating revenue and building a competitive edge are challenging when the core product (the codebase) is free and accessible. Only a few open source companies – like MongoDB and Databricks – have surpassed $1B in revenue, with business models relying on peripheral services like support, hosting, or premium security. Additionally, OSS implementation requires technical expertise, which limits broader adoption. Closed source alternatives – with their plug-and-play functionality, user-friendly APIs, and dedicated support – appeal to buyers seeking ease of use. As a result, open source companies receive less than 5% of global software spend [1] and under 5% of U.S. VC investment.

03 | A spike in open source

In the past several years, open source projects have gained ground, especially vis-a-vis closed-source alternatives. For example, open source data management systems are now as preferred as proprietary ones, a sharp contrast from a decade ago when proprietary solutions were nearly twice as popular. [2] Two-thirds of companies increased their use of OSS last year, particularly in (a) machine learning, where frameworks like TensorFlow and PyTorch dominate, and (b) data processing, where companies like Netflix and Uber find Apache Hadoop and Spark now suitable for enterprise-scale workflows. Moreover, open source's highly engaged communities and large user bases are often driving faster, more efficient revenue growth than closed-source software, which usually relies on traditional sales and marketing. [3]

What's driving this spike? This rise in open source adoption and commercialization is closely tied to the standardization of web development. I've written before about how frameworks like React / Next make front-end development easier.
In addition, standardized tools like Docker and Kubernetes, RESTful APIs, and microservices [4] have made integrating and customizing open source software significantly faster and more efficient. As coding automation continues to advance, we may be entering a new era, one where open source is orders of magnitude more powerful and accessible.

04 | Will coding automation create a new paradigm for open source?

AI-driven coding automation is poised to revolutionize software development. Programming assistants like GitHub Copilot are automating code generation, review, and deployment. (Already, Copilot estimates 55% time savings on task completion for developers.) Like past technology cycles, where digital distribution and data storage costs fell to near-zero, the structural barriers that have always limited open source adoption – the costs of creating, customizing, debugging, and deploying code – are rapidly declining.

As this trend accelerates, software's value proposition may shift in favor of open source. If managing OSS becomes more cost-effective, more customizable, and simpler than closed-source solutions, it may well become the dominant model for software development. This may be especially true as software budgets get strained, buyers get more discerning about ROI, and sales cycles extend.

Here are key areas where AI tooling could reshape open source:

• Simplified integration: Coding assistants – already used by half of developers on a daily basis – will simplify OSS integration, allowing smaller teams to deploy and customize code much more quickly and with fewer errors.

• Wider access: Natural language processing will allow non-technical users to manipulate open source code using plain language. This opens OSS to a much wider audience, rapidly expanding the market for software development.

• Automated maintenance: AI will streamline the maintenance and security of OSS in live environments by automating updates, proactively resolving bugs, and eliminating the need for manual oversight. This will make OSS even more reliable and secure, addressing concerns that have historically slowed its adoption in enterprise settings.

• New business models: AI-enabled customization, security, and performance monitoring could create new, recurring revenue streams for OSS companies – and a more scalable cost structure for providing those services. For example, open source provider Elastic began charging for AI-powered features like performance monitoring and security in its Elastic Cloud business, boosting that segment's revenue by 29% year-over-year and growing its share of total revenue to 43% in 2024, up from 35% in 2022. [5]

• Faster R&D: Open source relies on global networks of developers to maintain code. However, collaboration at large scale often faces bottlenecks, such as problems with code merging, conflict resolution, and quality control. AI tools are providing automated solutions for these issues, enabling open source to innovate faster than closed source in both quality and deployment speed.

AI is reducing the cost of code generation to near-zero, breaking down the long-standing barriers to OSS integration and monetization. In this new paradigm, the fundamental value in software could shift – from the code itself to the unique ways it is shaped and customized directly by the end user. As a result, open source may soon have an enduring competitive edge over proprietary software.
[1] In 2022, OSS spend was estimated at $25B (source) and the overall software market was estimated at $583.5B (source).
[2] Source: https://db-engines.com/en/ranking_osvsc. The DB-Engines Ranking measures popularity by combining factors like web mentions, search trends, technical discussions, job postings, professional profiles, and social media activity. These metrics are standardized and averaged to create a relative popularity score for each database system.
[3] Source: https://www.bvp.com/atlas/roadmap-open-source.
[4] Microservices are a software architecture in which an application is built from small, independent services, each handling a specific function. Unlike monolithic systems, microservices can be developed, deployed, and scaled separately, offering greater flexibility, modularity, and easier maintenance.
[5] Elastic N.V. (2024). Q4 2024 shareholder letter. Source. Pages 21, 59.
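To make footnote [4] more concrete, here is a minimal sketch of a single microservice, my own illustration rather than anything from the original post. The framework (FastAPI), service name, and endpoint are all hypothetical; the point is that a service like this owns one narrow function and can be built, deployed, and scaled independently of the rest of an application.

```python
# Minimal, illustrative microservice (hypothetical example).
# It owns one narrow function and its own data, and other services talk to it over HTTP.
from fastapi import FastAPI

app = FastAPI(title="inventory-service")

# In-memory store stands in for the service's own database.
_inventory = {"sku-123": 42}

@app.get("/inventory/{sku}")
def get_stock(sku: str) -> dict:
    """Return the stock level for a SKU; other services call this endpoint over REST."""
    return {"sku": sku, "units": _inventory.get(sku, 0)}

# Run locally with: uvicorn inventory_service:app --port 8001
```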

  • A Small Win for Open-Source AI

This issue outlines key insights from a recent report on the risks and benefits of open versus closed AI models.

01 | A small win for open-source AI

There's an ongoing debate in Silicon Valley around open versus closed AI models. Yesterday, the National Telecommunications and Information Administration (NTIA), an agency within the Department of Commerce, released a report weighing the benefits and risks of open models. This work was mandated by President Biden's October 2023 Executive Order on "Safe, Secure, and Trustworthy AI."

TL;DR: it's too early to propose any restrictions on open models. This is a small, somewhat unexpected victory for open-source advocates. (Although "too early" is the operative phrase that best summarizes NTIA's conclusions. This was more of a "let's punt and continue to evaluate" than anything else.) Beyond the final recommendation, the report does offer some early insight into an important question – how, as a society, should we think about open access to this powerful new technology?

02 | Definitions and methodology

The report defines "open models" as AI systems whose model weights [1] are available to the public. Model weights are numerical values that determine a model's output based on a given input. They act as the AI's "blueprint," allowing the underlying model to be used, replicated, integrated directly into an application, and adjusted for various use cases. Examples of open models include LLaMA (by Meta) and Mistral. In contrast, "closed" models, like the most recent ones provided by OpenAI, keep these weights private. Closed models only allow users to submit inputs and receive outputs via web interfaces or APIs.

Open models allow for easier customization and access. They enable local data processing, which enhances privacy since data does not have to be sent to the model developer. Open models are generally cheaper to run than closed models for similar use cases. However, because their weights are openly available, they pose challenges in monitoring and preventing misuse by malicious actors. This includes the potential development of weapons (bio, nuclear, cyber), illegal content, and dis/misinformation.

This report specifically examined: (a) large, dual-use foundation models (excluding smaller models below 10 billion parameters), and (b) the marginal impact of open models, i.e., their risks and benefits beyond those of closed models and other pre-existing technologies.

03 | Key takeaways

The report concludes that the benefits of open models – developer contributions, customization, data privacy, democratized access, and competition that drives down costs – currently outweigh the marginal risks of misuse. But I found the fine print most interesting:

• Today, closed models typically have a performance advantage. However, open models are only a few months behind, and the time to performance parity is shrinking. Footnote 32 on page 53 of the report has some great insight from industry operators on this point. [2]

• The risks of open models, such as malicious actors fine-tuning or removing safety features for harmful purposes, are offset by the benefits of developer collaboration to identify vulnerabilities and implement safety measures. In instances where models are used inappropriately, the transparency around weights and source code should create better conditions for auditing, accountability, and response time.
• For risks related to societal wellbeing (i.e., open models being used to generate illegal sexual material, dis/misinformation, or discriminatory outputs), the negative impact may come more from existing technology than from direct, public access to model weights. For instance, open models might make it easier to create "deep fake" content. But focusing regulation and resources on controlling the spread of these "deep fakes" through existing distribution channels like social networks may yield a better ROI. Instead of focusing solely on what started the fire, focus on the conditions that allowed it to spread.

• Open models may not substantially exacerbate misuse, as closed models are also prone to manipulation (albeit to a lesser extent). For example, the non-consensual, AI-generated intimate images of Taylor Swift that spread across the internet in early 2024 were made using a closed model. Similarly, OpenAI recently reported that malicious nation-state-affiliated actors were using ChatGPT, a closed model, for their cyber operations.

• Regulating open models is unlikely to change the trend towards an oligopoly of a few dominant foundational models. High barriers to compute access, capital, and talent availability will have a greater influence on these dynamics. Having a few open models among the preferred set of large providers should foster healthy competition further up the AI supply chain, particularly at the tooling and application layers.

• Open models are easier to customize, which should lead to a more robust ecosystem of specialized applications and reduce the systemic risk of over-reliance on a single system (we all experienced the risks of over-reliance during the recent CrowdStrike failure).

At this juncture, it's far too early to declare whether open versus closed will "win," or which is ultimately "better." And perhaps that is the wrong debate to have in the first place. Both have risks and benefits. Great companies will be built on either – or, more likely, both. What is important is that policymakers continue to monitor the risks and benefits, and respond appropriately to protect people and allow for healthy market dynamics as this technology evolves.

As for my own investment thesis:

1. I don't have a fixed point of view on whether a startup should build on a closed or open model. I care a lot more about whether there's a clear narrative about why an individual model (or a stitching of multiple models) is the right fit for a particular product based on pricing, utility, and developer functionality.

2. If closed models are unlikely to win long-term based solely on a performance edge, I expect they will invest more in commercial integrations, developer tooling, and network activity on top of their core models. We're seeing it already with OpenAI's launch of SearchGPT. Going after broad use cases (e.g., AI-powered personal assistants or search) is a very risky endeavor for startups. Closed models will throw virtually unlimited resources at these obviously large markets as they grow and try to differentiate beyond sheer performance.

Philosophically, I believe open models are an important part of the ecosystem. They foster healthy competition, developer optionality, and flexibility. They also offer cost efficiencies, which are important in this early stage of technology development. If the marginal risk to society is not material, let the market dictate how and where open and closed models get used. I'm glad that's where the NTIA landed – at least for now.
[1] A quick primer on model weights from this report (page 8): "An AI model processes an input—such as a user prompt—into a corresponding output, and the contents of that output are determined by a series of numerical parameters that make up the model, known as the model's weights. The values of these weights, and therefore the behavior of the model, are determined by training the model with numerous examples. The weights represent numerical values that the model has learned during training to achieve an objective specified by the developers. Parameters encode what a model has learned during the training phase, but they are not the only important component of an AI model. For example, foundation models are trained on great quantities of data; for large language models (LLMs) in particular, training data can be further decomposed into trillions of sub-units, called tokens. Other factors also play a significant role in model performance, such as the model's architecture, training procedures, the types of data (or modalities) processed by the model, and the complexity of the tasks the model is trained to perform."

[2] A few notable quotes:

• Center for AI Policy: "We find that the timeframe between closed and open models right now is around 1.5 years. We can arrive at this conclusion by analyzing benchmark performance between current leading open weight AI models and the best closed source AI models."

• Unlearn.ai: "At the moment, it takes about 6 months to 1 year for similarly performing open models to be successfully deployed after the deployment of OpenAI's closed models. The time gap between proprietary image recognition models and high-quality open-source alternatives has narrowed relatively quickly due to robust community engagement and significant public interest. In contrast, more niche or complex applications, such as those requiring extensive domain-specific knowledge or data, might see longer timeframes before competitive open models emerge."

• Databricks: "Databricks believes that major open source model developers are not far behind the closed model developers in creating equally high performance models, and that the gap between the respective development cycles may be closing."

• Meta: "It is not possible to generally estimate this timeframe given the variables involved, including the model deployment developers' business models and whether, in the case of Llama 2, they download the model weights from Meta directly or accessed it through third-party services like Azure or AWS."
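As a concrete illustration of the open-versus-closed access patterns described in section 02 above, here is a minimal sketch of my own (not from the NTIA report): loading an open model's published weights locally with Hugging Face transformers versus exchanging inputs and outputs with a closed model through a hosted API. The specific model names and prompt are placeholders.

```python
# Open-weight model: the weights are downloaded and run locally, so prompts never leave
# your machine and the model can be fine-tuned or modified.
from transformers import AutoModelForCausalLM, AutoTokenizer

open_model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example of a publicly released checkpoint
tokenizer = AutoTokenizer.from_pretrained(open_model_id)
model = AutoModelForCausalLM.from_pretrained(open_model_id)

inputs = tokenizer("Summarize the NTIA report in one sentence.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Closed model: the weights stay with the provider; you only submit inputs and
# receive outputs over an API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the NTIA report in one sentence."}],
)
print(response.choices[0].message.content)
```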

  • The New Vertical SaaS Playbook

This issue explores how more open technology standards are transforming the playbook for scaling vertical software.

01 | The old vertical SaaS playbook

Vertical SaaS (vSaaS), software tailored to specific industries, has long been a compelling category in venture. Industry specialization—the hallmark of vSaaS—drives product stickiness and capital efficiency, as illustrated by the following benchmarks (shown as top quartile median / full data set median):

| | Privately held (<$50M ARR) [1] | Privately held ($50-100M ARR) [1] | Public comps [2] |
|---|---|---|---|
| LTM revenue growth | 175% / 115% | 115% / 80% | 30% / 13.5% |
| Gross margin | 86% / 74% | 73% / 70% | 82% / 61% |
| Net revenue retention | 125% / 105% | 150% / 115% | 119% / 112% |
| Net magic number (current period net new ARR / prior period S&M) | 1.8x / 1.1x | 1.9x / 1.4x | 2.8x / 1.3x |
| Enterprise value (EV) | n/a | n/a | $20.8B / $4.7B |
| EV / LTM revenue | n/a | n/a | 11.8x / 6.7x |
| EV / EBITDA | n/a | n/a | 50.2x / 28.7x |

vSaaS companies typically grow by first achieving product-market fit with workflow software or point solutions, and over time layering on new products that enhance coordination between users and data assets. As a result, network effects – where a product gains value as more users and data are added – can take years to fully develop.

Toast (NYSE: TOST) is an example of this playbook in action. Two years after launch (2013), Toast solidified its core software product – restaurant workflow and operations software – with its point-of-sale and kitchen display system, which automated order processing and routing for a given restaurant location. By 2015, the company had scaled that product to ~1k restaurants and achieved $3M in LTM recurring revenue. [3] Over time, Toast layered on new products, such as multi-location management (2015), APIs for third-party integrations (2016), inventory management (2016), labor reporting (2017), payroll (2019), and marketing tools (2019). [4] These features catalyzed network effects by bringing new types of users, data assets, and interactions into the Toast ecosystem. From Toast's 2021 S-1: [5]

"This tightly integrated platform has positive network effects for our customers… Higher spend from happier guests is correlated with higher wages for employees, which in tandem with the wage and benefits access enabled by our products, drives happier employees, lower turnover, improved quality of service, and enhanced operational efficiency."

As network effects took hold, growth and account value compounded. Toast's recurring revenue grew at a +111% CAGR, and recurring ACV at a +11% CAGR, from 2015 to 2023. [6] [7] [8] This is the old vSaaS playbook in action – start with a point solution or workflow software for a tailored market segment, and over time expand into a multi-product platform to disrupt an industry.

02 | The drivers transforming vertical SaaS

There's a direct line between this vSaaS playbook and the technology standards of the last decade. Centralized data storage, proprietary APIs, and SSO / in-app identity management help generate closed-loop ecosystems by allowing companies to maintain strict control over user flows and data assets. Building with these technologies requires extensive product roadmaps and custom development to achieve data liquidity, feature synchronization, and third-party integrations, all of which are essential for orchestrating network activity.
However, in the next tech cycle, I believe newer, more open technology standards will shorten the timeline for creating network effects in vSaaS products. The following are driving this transition:

• Declining barriers in product development – the rise of AI and automated low/no-code tooling simplifies third-party integrations and expands the pool of network contributors to non-technical users.

• Increasing standardization – standard frameworks (e.g., React/Next for front-end development, Rust for ML workflows) [9] make cross-network integrations easier, reducing the need for custom builds.

• Decentralized compute – blockchain architecture is scaling, providing clear incentives for companies to transition data transactions onto these protocols. This will enhance interoperability, allowing seamless movement of users and data across applications leveraging the same Layer 1 protocol. [10]

• Deeper personalization – highly customized digital experiences, powered by on-device data processing and secure LLM prompting, can dramatically reduce onboarding time, increase transaction velocity, and improve collaboration on even more complex projects.

• Existing digital rails – users are already engaged on existing tech platforms (unlike the last cycle, where users were transitioning to software from pen and paper). New vSaaS companies can leverage these existing digital workflows to accelerate user growth and adoption.

• Growing externalities – heightened social awareness and rising negative externalities, particularly in sectors like energy, healthcare, and logistics, will create economic incentives for new types of cooperation and value exchange among stakeholders.

These shifts are especially well-positioned to take hold in specialized verticals, where industry-specific data needs and fine-tuned workflows create sticky use cases for these emerging technologies.

03 | The new vertical SaaS playbook

As these dynamics take hold, the vSaaS playbook will inevitably shift. Instead of building point solutions that can grow into platforms over time, entrepreneurs will build products that power robust network effects from day one. This playbook is especially relevant to industries with (i) high fragmentation, (ii) significant negative externalities, (iii) persistent barriers like information asymmetry and prohibitive pricing, and (iv) outdated technology still handling essential workflows (e.g., payments, read/write pathways, or mission-critical tasks).

It's impossible to predict exactly where the next, power law-defining vertical SaaS companies will come from and what the precise form factor will look like. But there are some spaces I'm exploring and excited about.

New networks might emerge in industries like construction and logistics, where better, more flexible software and customization can improve coordination and bring new user archetypes into the market who currently sit on the periphery (e.g., regulators, auditors, project financiers). In pockets of healthcare, particularly where value (outcomes per unit cost) lags behind peer OECD countries, new networks can solve problems that require buy-in from multiple stakeholders—patients, payers, providers, suppliers, and pharma. For example, Alma is driving new network activity between providers, patients, and payers to reduce cost and access barriers in mental health.
The growing costs and externalities of climate change in sectors like energy and agriculture can incentivize the adoption of decentralized protocols for transparent data tracking, multi-party coordination, and risk monitoring. Epoch is driving this type of data architecture in supply chains for large multinationals. In high-security industries such as legal and finance, leveraging AI for automated threat detection or new encryption techniques for data masking can break down long-standing data silos. More intimate, verticalized social networks can build lucrative, high-transaction ecosystems on top of larger networks with portable identity management (e.g., Farcaster). The alienation and misinformation users experience on existing platforms can drive adoption of these new networks.

These are just a few examples. As we look to the future, the convergence of open technology standards and evolving market dynamics presents a massive opportunity for entrepreneurs in specialized markets. Startups that capitalize on these shifts – and build products capable of facilitating new network dynamics from the outset – will be well-positioned to drive outsized value creation in this next cycle.

[1] Source. Data aggregated across 22 private vertical SaaS companies as of Aug 2023.
[2] Source, source. Market data as of June 7, 2024. Data from 28 public companies: 2U (TWOU), Alkami Technologies (ALKT), Appfolio (APPF), Aspen Technology (AZPN), Autodesk (ADSK), Bentley Systems (BSY), Blackbaud (BLKB), Blend (BLND), CS Disco (LAW), Doximity (DOCS), EverCommerce (EVCM), Guidewire (GWRE), Instructure (INST), Intapp (INTA), LegalZoom (LZ), Lightspeed Commerce (LSPD), nCino (NCNO), Olo (OLO), PowerSchool (PWSC), Procore (PCOR), Q2 (QTWO), Sabre (SABR), Samsara (IOT), Shopify (SHOP), SmartRent (SMRT), Toast (TOST), Tyler Technologies (TYL), Veeva Systems (VEEV).
[3] Source. Recurring revenue defined as annual software subscription revenue + annual payments gross margin. This excludes revenue related to hardware and professional services.
[4] Source. See page 6 for products released by year.
[5] Source. See page 139 for more details on Toast's network effects.
[6] Recurring revenue = LTM subscription revenue + LTM fintech gross profit.
[7] Source. See page 22 for 2019 financials and page 6 for 2019 total locations.
[8] Source. See page 78 for 2023 financials and page 70 for 2023 total locations.
[9] React is preferred by 42% of professional developers, and Next.js has quickly moved from 11th place to 6th place in terms of preferred architectures. Source.
[10] A Layer 1 protocol is the base layer of a blockchain network, responsible for the fundamental operations upon which additional applications can be built and for the underlying consensus mechanism that validates and secures transactions. Examples of Layer 1 protocols include Bitcoin, Ethereum, and Solana.

  • The Next Chapter of Consumer Marketplaces

This issue explores emerging trends in consumer marketplaces and forecasts what the future of the category might look like.

01 | A brief history of consumer marketplaces

Consumer marketplaces are digital platforms that facilitate transactions between individual customers and various sellers, such as peers, freelancers, and businesses. These companies have been among the most consequential in venture capital. Examples include Amazon, Uber, Airbnb, and Etsy. It's a model almost perfectly designed for the connective tissue of the internet – linking diverse users across the globe into a singular hub of commercial activity.

Successful consumer marketplaces do four things remarkably well:

1. Define a novel, recurring experience. Amazon created the first destination for purchasing everyday goods (starting with books) through the web browser.

2. Drive better unit economics and scalable growth through new technology. Uber used GPS, mobile distribution, and real-time data processing to achieve economies of scale, on-demand delivery, and a dramatically better cost structure.

3. Harness network effects to create high switching costs. GoodRx's extensive pharmacy network, discounts, and integrated telehealth services made it difficult for consumers to find comparable savings and convenience elsewhere.

4. Run a creative go-to-market strategy to crack (pun intended!) the chicken-and-egg problem (where users on one side of the platform only find it useful if the other side is already active). Airbnb did this by scraping Craigslist to automate outreach to its early cohort of hosts, then thoughtfully curating its initial listings to attract renters.

Timing is key to evaluating new consumer marketplaces (although often overlooked). That's because, historically, these models grow in tandem with technological shifts that unlock new types of commercial activity.

02 | Underlying patterns in consumer marketplaces

When examining the evolution of consumer marketplaces, several patterns emerge.

// As automation improves, digital marketplaces are both (a) specializing and (b) handling more complex transactions. Early platforms like eBay managed relatively simple transactions for second-hand, commodity goods. Now, with better data processing and advanced search capabilities, platforms are building products that can match heterogeneous supply (in Fiverr's case, freelancers with diverse skills) to more complex projects. This trend will accelerate as advancements in AI and improvements in data processing drive even deeper automation.

// Strong network effects lead to industry concentration. In the last several years, 2-3 companies have consistently accounted for 40-60% of total annual GMV among the top 100 private consumer marketplaces. [1] Public companies reflect a similar trend – Amazon, for example, commands nearly 40% of e-commerce market share. For entrepreneurs, this dynamic highlights (i) the importance of timing, to capitalize on new technology shifts and secure a first-mover advantage, and (ii) the need for a clear product thesis that accounts for precisely how this moment in technology can service unmet needs for buyers and sellers.

// Data ownership and customization are increasingly important for suppliers. SaaS platforms like Shopify and social networks like Instagram are unbundling the traditional marketplace model.
Instead of having a central intermediary trafficking transaction flows, sellers are now building custom storefronts and distributing them directly to various devices and social networks in order to retain more control over their data and customer experience. To compete, new marketplaces must also address these needs. This can be achieved by using decentralized infrastructure and/or better self-custodial tools.

// Historically, the most transformative marketplaces build new types of network endpoints. These endpoints serve as the essential building blocks that give rise to entirely new product categories. A network endpoint acts as a gateway that ingests and processes data into a marketplace network. Examples include distribution channels like web browsers, IoT devices, mobile phones, and APIs that pull and integrate data from external sources. These new types of endpoints introduce unique data assets into the product experience, which then unlock entirely new categories of commerce. For example, Uber tapped into a new type of endpoint (mobile distribution) to feed a new type of data asset (real-time location data) into its product, which was essential for creating a new category (on-demand rides). Similarly, GoodRx built its own proprietary API to feed a new type of data asset (real-time prescription drug pricing mapped against insurance plan formularies [2]) into its product. This created an entirely new market around prescription savings and price comparison.

I expect these trends to continue in this next wave of technology development.

03 | New trends in consumer marketplaces

Recently, I have been spending time with companies that are in the early innings of building new consumer marketplaces. A few years ago, this space felt stale – too many "Uber for [insert niche, existing product here]." But today feels different. Founders are in experimentation mode, testing use cases and exploring the potential for new technology to open up entirely new commercial categories.

One trend I'm digging into is portable identity management. Startups are developing systems for longitudinal digital identities, which integrate seamlessly across different marketplaces. Imagine Craigslist 2.0, where your verified information, preferences, and transaction history travel with you into different niche marketplace verticals. This makes interactions smoother and more personalized, and means fewer fraud issues and enhanced trust between buyers and sellers.

Another trend is harnessing the power of artificial intelligence to create new economies around very complex workflows. Most existing marketplaces focus on one-to-one matching. But AI can unlock more complex, multi-party coordination. Picture, for example, a personal finance management marketplace that unlocks access for a whole new customer segment that is unwilling to pay traditional advisor fees. An AI agent could organize advisors, tax consultants, investment products, insurance agents, and estate planners. The AI handles administrative tasks and sequencing, ensuring each step is assigned to the right professional at the right time. This frees up professionals to focus on their unique skills, lets them take on significantly more clients (and compensation) for their time, and creates greater accountability and transparency for the end buyer.

One last trend is creator augmentation. In the last several years, brands have increasingly looked to influencers to market their products. In 2016, $1.7B was spent on influencer marketing.
That figure reached $21.1B in 2023 (a 43% CAGR). [3] As this market continues to grow and large foundational models improve, influencers will be able to capture more rent from these arrangements and scale their likeness, tone, and personalized recommendations across the internet. Imagine a creator-led marketplace with:

• WYSIWYG tooling, [4] so creators can easily build custom, dynamic storefronts that are unique to their brand and voice

• a personalized dashboard, powered by models fine-tuned with creator data, enabling creators to communicate at scale across multiple channels and giving them insight into what is motivating their fan base

• bespoke product recommendations that align not just with the creator's brand, but also with the specific follower and their specific relationship to that creator

• full creator control over their tooling and data, to build trust, increase the likelihood of adoption, and provide an authentic user experience.

If Sam Altman envisions a world where a one-person team can build a unicorn company, perhaps a more immediate opportunity is for individual creators to build and manage their own mini-marketplaces, all on a singular back-end network.

Conceptually, none of these trends feel entirely new. Many have been in the entrepreneurial imagination for some time. But with recent technology advances, we're finally seeing products materialize and deliver tangible functionality. If you are building along any of these themes, or have a different take on what the next, landmark consumer marketplace will look like, I'd love to connect.

[1] Sources: 2022, 2021, 2020 a16z Marketplace 100 reports. Instacart represented 64.2% of private marketplace GMV in 2022 and 71.5% in 2021, and Airbnb, DoorDash, Instacart, and Postmates collectively represented 76% of private marketplace GMV in 2020. (Shout out to Bennett Carroccio, co-founder of Canal and an entrepreneur I have partnered with, who first ran this marketplace analysis in 2020.)
[2] A formulary is a list of prescription medications approved for use and covered by a particular health insurance plan or provided by a healthcare provider.
[3] Source.
[4] WYSIWYG stands for "what you see is what you get." It refers to a software interface that allows users to see what the end result will look like while the document or content is being created. This is commonly used in the context of text editors, website builders, and content management systems, where the user can format text, insert images, and make other changes in a visual editor that directly reflects how the content will appear when published.

  • An AI R/evolution

This issue frames AI as an evolution in software development, and uses that foundation to construct a framework for how AI startups can implement creative business models to find their edge.

01 | An AI r/evolution

Lately I've been hearing variations of this sentiment reverberate across the tech ecosystem: "AI will fundamentally restructure what we are capable of. Software will move from work efficiency to doing all the work for you."

The implication: AI represents a totally new paradigm, something categorically different from yesterday's software. Yes, the capabilities of AI represent a substantial improvement in automation. And yes, adoption is happening at a record pace (thanks, ChatGPT!). But I like to think of AI as a moment in the evolution of our tooling, not something fundamentally novel. After all, "work efficiency" and "doing work for you" are just two different points on the same productivity curve.

Like prior technology shifts, it will take time for the underlying infrastructure and applications to specialize and reach significant adoption. It took enterprise SaaS 20+ years to eclipse 50% penetration. [1] Even at 5x that pace, it will take until the end of the decade for our economy to incorporate AI at scale. As such, the questions we need to ask to evaluate new, AI-native applications should remain largely the same as for traditional SaaS:

• What's the "10x" upgrade in user experience? Why should a user care?

• What is the theory for how the product will unlock latent or non-obvious demand to grow a new market, [2] or quietly steal share from incumbents?

• What are the core, defensible assets, and at what level of scale will those assets materialize?

• What's the unique wedge for initial adoption, and how does that wedge drive long-term embeddedness?

• Where is there leverage in the growth model, through network effects, a viral coefficient, brand halo, or compounding revenue?

An "AI-first" lens that doesn't account for these aspects may lead to investments in trendy technology. But it risks backing products that fail to meet real user needs, or are vulnerable to competition from incumbents who integrate LLMs and already have broad distribution and deep network moats. On that last point, look no further than the velocity of Stripe's successful rollout of AI tools into its product stack. Within 12-18 months, Stripe has deployed Sigma Assistant (which converts natural language into SQL queries), a Copilot integration, neural net-powered fraud detection, LLM-powered merchant risk scoring, and intelligent subscription reactivation. That's a massive headwind for any AI startup offering similar tooling that has not solved for the above questions.

Technical capability is only one side of the coin. Entrepreneurs must now innovate around their positioning and business model if they wish to build the next iconic, AI-first platform.

02 | A new framework for AI business models

How application-layer business models will adapt in the AI era boils down to two key factors:

1. What makes AI distinct from traditional SaaS is the ability to derive utility from unstructured data. [3] Unstructured data doesn't fit neatly into "rows and columns" for simple retrieval, manipulation, and distribution (what SaaS applications were great at). Extracting value demands complex inferences, translation between formats, and accounting for context – which language models and computer vision can unlock.
2. As machines get better at reading unstructured data, users will move beyond execution-based tasks to operate at a higher level of abstraction. With SaaS, the impact of our keystrokes and clicks was amplified by swift data processing and zero-cost distribution. AI-powered applications will automate the decision-making and judgment underlying those very actions. Users will progress from manual tasks – drafting and distributing a blog post – to more advanced work – an AI agent automatically generating and sending a blog post to a tailored audience, based on a brief outline. (Note: this blog is not quite there yet!)

As a result, software KPIs could shift. Greater output per user could mean fewer seats. Customers may begin to demand pricing aligned with outcomes and quality – e.g., throughput rate, workload completion, accuracy, business results – rather than feature counts or the number of licenses. Product love might be gauged by time saved and work delivered, rather than the scale of a retentive, active user base. (Imagine: for some products, daily active user count could be inversely correlated with ARR!) Software providers can take on more risk (and upside) around service level agreements.

The below framework outlines how business models can align with shifting user behavior in AI-powered applications:

Of course, these models will take time to proliferate. Buyers will need to adjust to more pricing complexity. Enhanced measurement techniques and reporting need to be integrated into the product GUI, so that customers can trace business outcomes to what they're paying. Software vendors will need to experiment to mitigate revenue volatility. But as AI transforms the way customers experience value, business models should inevitably follow suit.

03 | Finding an edge

As companies explore these business models, it's important to keep in mind:

(1) The success of any new venture hinges largely on the quality of its product offering. Any of these business models – absent a game-changing product – won't deliver long-term value.

(2) Frameworks like usage-, value-, and risk-based pricing are not new in and of themselves; there's a long history of their application across the economy. However, marrying these business models with the unique value AI applications provide can help startups build a go-to-market advantage.

Value-based care (VBC) models in healthcare provide an interesting analog. [4] VBC refers to a payment method whereby providers are reimbursed based on the quality and effectiveness of the care they deliver, rather than the quantity of services provided (what's called "fee-for-service"). This (i) shifts the fundamental unit of payment to outcomes, and (ii) provides economic rewards and penalties to hold healthcare providers accountable for delivering those outcomes.

VBC proliferated in the U.S. over the last decade in the wake of the Affordable Care Act of 2010. Transitioning to these arrangements was not easy. Healthcare providers needed to invest in new care coordination workflows, define outcome benchmarks, and establish data infrastructure for tracking results. Introducing risk and variability into the revenue model further complicated the transition from fee-for-service. However, newer, more innovative providers like Oak Street Health (founded in 2012 and acquired by CVS in 2023 for $10.6 billion) embraced VBC, competing on value from day one.
In successful cases, implementing VBC contracts and delivering a superior product helped new players achieve structural advantages over their fee-for-service counterparts through longer-term payer contracts, higher margins, and economic incentives to continue improving their offering over time.

As AI shifts the software performance metric from features and seats to outcomes, I can imagine similar dynamics playing out. Take a crude example: an AI-powered marketing product that auto-generates and optimizes digital ad campaigns, guaranteeing $100k in net new revenue and taking a 20% cut, might be preferable to a SaaS company charging $500 per user per year to automate work for 40 ad marketers. In both scenarios, ACV starts at $20k. But the former shifts risk onto the software provider, incentivizes ongoing performance, and fosters deeper, longer-term alignment with the buyer. Startups integrating these models into their DNA can gain an edge over incumbents with static, per-seat pricing, who would need to invest in new market positioning, revamp parts of their operational infrastructure, and possibly cannibalize existing business lines in order to transition.

It's early days, but we're already seeing signs of outcome- and value-based business models penetrating the technology sector. As AI shifts how we work and what we expect from software, we may see this trend accelerate. Given the rapid pace of investment from high-growth and large-cap companies into AI, technical moats – in and of themselves – are unlikely to be a reliable source of defensibility for new entrants. That's why I'm excited to see the creative ways entrepreneurs will marry new, AI-native products with these pricing frameworks in order to craft their own unique edge.

[1] Salesforce launched in 1999. By 2021, 50% of corporate data was stored in the cloud.
[2] See Jensen Huang's "zero billion dollar markets."
[3] "Unstructured data is information that is not arranged according to a preset data model or schema, and therefore cannot be stored in a traditional relational database or RDBMS. Text and multimedia are two common types of unstructured content. Many business documents are unstructured, as are email messages, videos, photos, webpages, and audio files." (source)
[4] Value-based care is a complex topic – there is a vast body of literature defining VBC, its implementation, efficacy, policy frameworks, and the various contract models between payers and provider networks. This primer from the University of Pennsylvania provides a good overview. Comparing value-based care to software business models offers a conceptual framework for how to restructure and reprice offerings in order to optimize incentives and value distribution in a market. I acknowledge the limitations here – it's not a direct equivalence or one-to-one comparison between the two domains.
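A tiny sketch of my own, using only the hypothetical numbers from the pricing example in section 03 above, showing how the two models arrive at the same starting ACV while distributing risk differently:

```python
# Illustrative comparison of the two pricing models from the example above.
# All figures are the hypothetical numbers quoted in the post, not real benchmarks.

def seat_based_acv(price_per_seat: float, seats: int) -> float:
    """Traditional SaaS: revenue scales with licensed users, independent of results."""
    return price_per_seat * seats

def outcome_based_acv(guaranteed_new_revenue: float, take_rate: float) -> float:
    """Outcome-based: the vendor is paid a share of the result it delivers."""
    return guaranteed_new_revenue * take_rate

saas = seat_based_acv(price_per_seat=500, seats=40)                              # $20,000
ai_outcome = outcome_based_acv(guaranteed_new_revenue=100_000, take_rate=0.20)   # $20,000

print(saas == ai_outcome)  # True: same starting ACV...
# ...but under the outcome model the vendor earns nothing if the $100k result isn't
# delivered, and earns more if it over-delivers - shifting risk (and upside) onto
# the software provider.
```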

  • AI Models are Converging

This issue examines new research around how large AI models are converging, and the implications for investors.

01 | AI models are converging

I recently came across a fascinating paper on AI model convergence from MIT researchers: The Platonic Representation Hypothesis. The crux: large, generalized language and vision models, the backbone of a growing number of AI-powered applications, are converging on the same statistical model of reality. [1] This happens even when models are trained with different objectives, on different data sets (language vs. images), and with different architectures and training methods. In other words, these models aren't just completing tasks with similar proficiency; they're interpreting the underlying structures, patterns, and relationships of data in the same way.

What's even more interesting is that model size [2] has a positive correlation with convergence – the larger these models become, the more they "think" the same way. At first, this seemed counterintuitive. Wouldn't we expect greater differentiation and specialization as models with different training methods and architectures scale up? But on deeper reflection, it makes sense. The paper explains that the tasks human brains perform to understand the world—breaking down information, detecting patterns, and classifying objects—are the same tasks we train neural networks to do. So, it's logical that these models would start to mirror the ways our brains process information, even when they're built in fundamentally different ways.

Two thousand years ago, Plato envisioned a universal truth underlying our human perceptions. Today, AI model convergence suggests that these technologies are uncovering universal patterns in our data.

02 | Implications for investing

I'm thinking about these findings in two ways:

1. Standardization in model performance. As general-purpose AI models scale, they will become more proficient (fewer hallucinations, [3] higher accuracy), but we might expect less differentiation between models. Early-stage investors should focus on the unique parts of the stack—e.g., applications that apply standardized models to proprietary data sets and specialized tasks, dev tools that reduce the barriers to building on top of models and can expand the addressable market by attracting new or non-technical users, and devices and new UX that offer a distinct form factor. Developers should prioritize building on models that (a) offer the best cost structure for their specific needs and (b) are supported by a strong ecosystem of tools and a robust network of developers. Model proficiency – in and of itself – may become a less differentiating metric.

2. Growing model interoperability. The paper finds that as AI models start to process and represent information in similar ways, it will become easier to integrate and stitch [4] them together. This interoperability allows for more modular AI systems, where several different models can be combined and ripped-and-replaced without extensive reconfiguration or additional training. Startups like Kindo, OpenPipe, and Flowise are making this easy, automated, and secure for both technical and non-technical users.

The future of AI lies not just in the power of individual models, but in the seamless integration and specialized application of these technologies. Model convergence makes this even more imperative.
[1] In this paper, researchers define model convergence as the degree to which vector embeddings from different models match one another given the same prompt. The more the vectors match, the more the models align in the ways they understand, interpret, and represent data. Vector embeddings are mathematical representations that transform words, images, or other data into numerical vectors, capturing their meanings and relationships and allowing that data to be processed and understood by computers. Each dimension captures some aspect of the data point's properties or its relationship to other data points. For example, if an LLM is tasked with evaluating the sentiment of a movie review, words are converted into vectors, processed through the model, and transformed into an output vector indicating sentiment. A movie review like "I loved the movie" is transformed into a sequence of vectors. Let's say "I" is represented as [0.1, 0.2], "loved" as [0.8, 0.7], "the" as [0.2, 0.1], and "movie" as [0.9, 0.8]. These vectors are input into the AI model. Words like "happy" and "joyful" might have similar vectors because they share similar meanings. The model processes these numerical representations through layers of neurons, where each layer transforms the vectors into new representations by combining and modifying them. For example, it might combine the vectors for "loved" and "movie" to create a new vector that emphasizes the sentiment of the phrase. The final layer of the model might output a single vector that represents the overall sentiment of the review – for instance, a vector like [0.9] to indicate a high probability that the review is positive. Researchers measure similarity in model performance by comparing these output vectors. If different models produce similar vectors for the same input, they are understanding and representing the data similarly. If two different models process the same review and produce similar vectors, such as [0.9] and [0.85], it means they both think the review is positive and are representing the sentiment in a similar way.

[2] Model size is defined as the number of parameters within the model. Larger models have more parameters, which means they have more capacity to learn and represent complex patterns in the data. These parameters are the weights and biases that are adjusted during training to optimize the model's performance on a given task.

[3] A hallucination refers to the generation of information that appears plausible but is actually incorrect or fabricated. This can include: false facts (the model generates information that is not true or has no basis in reality), inconsistent information (the model provides contradictory output), or misleading details (the model includes specific details that are not accurate or verifiable). Hallucinations occur because the model relies on patterns in the training data rather than understanding or verifying facts. As a result, the output might be linguistically coherent and contextually appropriate but factually incorrect.

[4] Model stitching is the process of combining parts of different AI models to create a single, more powerful system. A modular AI system is designed with separate, interchangeable components or modules, each responsible for specific tasks or functions.
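To make the measurement described in footnote [1] concrete, here is a minimal sketch of my own (not the paper's actual metric, which uses more robust representational alignment measures): embed the same input with two different models, then score how similar the resulting vectors are. The embedding values below are hypothetical placeholders.

```python
# Minimal illustration of comparing how two models represent the same input:
# embed the input with each model, then measure how aligned the vectors are.
# Higher similarity = more "convergent" representations.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of the same review ("I loved the movie") from two different models.
model_a_embedding = np.array([0.90, 0.12, -0.31])
model_b_embedding = np.array([0.85, 0.10, -0.28])

print(f"alignment: {cosine_similarity(model_a_embedding, model_b_embedding):.3f}")
# A score near 1.0 suggests the two models represent the review in a similar way;
# the paper applies this style of comparison, at scale, across many models and inputs.
```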

  • My Story

In this issue, I share a piece of my personal story — growing up as a member of the LGBTQ+ community and how those experiences influence my work as an early-stage investor.

I recently sat down with Angie D'Sa for her podcast, The Clarifier. Angie serves as a partner at Talentism and coaches founders and CEOs at high-growth startups. I've known Angie for over a decade, since our time together at Bridgewater. She's a master at understanding people through their personal narrative.

I've come to believe that stories are the best tool we have for evaluating people and teams. To really understand who someone is and where they're headed, it's worth the time to explore the cause-and-effect relationships between their environment, life experiences, and the evolution of their values, abilities, and decision-making.

With that in mind, I want to share a part of my own story — growing up as a member of the LGBTQ+ community (happy Pride Month!), the impact that had on me, and how those experiences influence my work as an early-stage investor.

Angie, thank you for having me on. I came away with a deeper, more refined sense of self. Shout out to Jessi, Rocio, John, Nate, and the rest of the Talentism team for making this happen. You can listen here, on Spotify, Apple, or wherever you get your podcasts.
