Part I took us through the Crypto Wars, the fight for the open standards that shaped the early internet. This Part II digs into implications for this coming tech cycle – as software becomes more autonomous.
01 | Lessons from the Cypherpunks
The Cypherpunks – and their fight for open encryption – offer some timeless lessons about technology adoption:
01/ Foundational technologies tend to move toward open access and standardization – especially when their principal value hinges on interoperability.
We saw this with open-source encryption and protocols like HTTP. Because they were free and accessible, they scaled faster than proprietary alternatives. And this broad adoption was essential for their utility – adding value across an entire network, not just isolated cases.
02/ When a new technology standard gets in the hands of developers and delivers value to end users, attempts to centralize or suppress it tend to fail.
Government efforts to control encryption through (a) international export restrictions and (b) a “backdoor” with the Clipper Chip didn’t work – developer collaboration and free distribution outpaced these measures.
03/ Once a technological breakthrough becomes a foundational standard – and costs asymptote – value accrual moves to the application layer.
In the 1990s, open-source encryption enabled secure transactions at scale, which laid the foundation for commercial applications that could channel and monetize users, transactions, and data.
02 | Applications come in waves
Technology breakthroughs like TCP/IP and the first transformer model are unpredictable.
Research and developer forums hint at what's mathematically possible, but step-function improvements remain hard to forecast until the prototype is built, the computation is run, and the benchmarks are tested.
But new breakthroughs in software often lead to a follow-on wave of open standards, which help get technology into the hands of users.
As the most commoditized parts of the stack, open-source protocols and standardized frameworks serve as the “plumbing and wiring” of digital products – setting a baseline for performance, guiding how data flows, and shaping how systems interact in any given technological moment.
They inform what’s possible – and what isn’t.
For example, early Internet protocols like TCP/IP, HTTP, and SMTP enabled basic digital communication for the first time. But these protocols handled data in a static and siloed manner. Applications were limited to basic content delivery. Interoperability required developers to build custom middleware.
In the early 2000s, commercial-scale virtualization – the ability to abstract computing resources from physical hardware – ushered in the cloud era. A wave of new, open standards – RESTful APIs, containers, and open-source databases – streamlined software integrations and scaled complex data processing. This directly enabled platforms like Shopify, Uber, and Netflix.
But these new protocols lacked inherent mechanisms for security, identity management, quality assurance, and distribution. And so proprietary platforms – like AWS and iOS – filled those gaps with managed services, cloud infrastructure, and app stores. (Even today, security, authentication, and performance monitoring are the most purchased SaaS tools – reinforcing the limits of open standards in the cloud era and the ongoing dependence on proprietary solutions for these critical functions.)
By studying the open protocols that follow each breakthrough – their specific capabilities and limitations – entrepreneurs and investors can anticipate the business models, distribution vectors, and product primitives that will win.
And history shows that each wave of open standards builds on the last, increasing the total value of the application layer with every new technology cycle. (A rough analysis suggests a 13-15x increase in the value of the application layer from the early Internet to the cloud era.[1])
If this pattern holds, the next wave – driven by autonomous software – will introduce new open standards that radically expand the scale and scope of value creation for new startups, beyond what we’ve seen in prior technology cycles.
03 | Open protocols for the autonomous web
Gradually, over the next several years, we expect AI agents to become a core component of the software stack.
Unlike traditional workflows that rely on human input and predefined rules, AI agents are autonomous, performing tasks and interacting with applications, services, and data with minimal human oversight. This introduces two key vulnerabilities that existing standards fail to address:
Unbounded data manipulation: AI agents autonomously query, generate, and transmit data across systems, often combining information in ways that were not explicitly designed or anticipated.
Static access controls: Existing permissioning frameworks are static and rule-based, designed for human users rather than adaptive AI agents. Without more dynamic, context-aware protocols, there's a risk of over- or under-restricting access – leading to data leaks, misuse, and slow performance.
Addressing these challenges requires rethinking our existing protocols. From my conversations with early-stage companies, I'm seeing a fresh wave of open standards coming to market that can support AI-native applications and drive real agentic autonomy. Specifically:
Context-aware permissioning and authentication: Universal frameworks to verify agent identity and intention before granting access.
New, privacy-preserving computation: Decentralized methods like homomorphic encryption[2] that allow agents to process encrypted data without decryption or exposing raw inputs.
Automated communication protocols: Frameworks that enhance data with universal schemas, enabling deeper cross-platform interoperability.
Transparent decision logging: Tamper-proof logs of agent actions and decisions, for auditability and accountability. These can be complemented with tools that auto-detect anomalies and restrict operations in response to unexpected agentic behavior.
Open-source models:[3] AI models with open architectures, weights, and training code, which provide more flexibility, lower costs, and reduce lock-in with proprietary systems.
Automated observability and remediation: Open frameworks for real-time monitoring, enabling automated anomaly detection, response, and recovery.
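To make the first of these concrete, here is a minimal sketch of what a context-aware permission check might look like. Everything here – the `AgentRequest` and `Policy` names, the intent strings, the record cap – is a hypothetical illustration, not an existing standard; the point is that the decision weighs the agent's declared intent and the scale of the request, rather than applying a static allow/deny rule.

```python
from dataclasses import dataclass

# Hypothetical sketch: authorization that considers an agent's declared
# intent and the context of the request, not just a static role rule.

@dataclass
class AgentRequest:
    agent_id: str
    declared_intent: str      # e.g. "summarize_invoices"
    resource: str             # e.g. "billing/invoices"
    records_requested: int

@dataclass
class Policy:
    allowed_intents: set
    max_records: int          # a contextual cap, not a binary yes/no

def authorize(req: AgentRequest, policies: dict) -> bool:
    """Grant access only if the agent's intent matches the resource's
    policy AND the request stays within contextual limits."""
    policy = policies.get(req.resource)
    if policy is None:
        return False                          # default-deny unknown resources
    if req.declared_intent not in policy.allowed_intents:
        return False                          # intent mismatch
    return req.records_requested <= policy.max_records

policies = {
    "billing/invoices": Policy({"summarize_invoices"}, max_records=100),
}

ok = authorize(AgentRequest("agent-7", "summarize_invoices",
                            "billing/invoices", 50), policies)
too_many = authorize(AgentRequest("agent-7", "summarize_invoices",
                                  "billing/invoices", 5000), policies)
print(ok, too_many)  # True False
```

A real protocol would also need verifiable agent identity (signatures or attestations) behind `agent_id`; the sketch only shows the intent- and context-sensitive decision itself.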
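"Processing encrypted data without decryption" can sound abstract, so here is a toy Paillier cryptosystem – a classic additively homomorphic scheme – with tiny, deliberately insecure parameters. It is illustrative only: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a third party can compute on data it never sees in the clear.

```python
import math
import random

# Toy Paillier cryptosystem with tiny, INSECURE primes -- for illustrating
# additive homomorphism only, never for real use.
p, q = 61, 53
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key component
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # modular inverse mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(5), encrypt(7)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt((c1 * c2) % n2) == 12
```

Production-grade schemes (and the fully homomorphic variants that also support multiplication) use far larger parameters and specialized libraries, but the core idea is the same.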
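On the "universal schemas" point, JSON-LD (defined in the endnotes) is one existing example of this pattern. The sketch below wraps a plain application record in a shared vocabulary; the choice of schema.org as that vocabulary is an assumption about what an agent ecosystem might standardize on.

```python
import json

# Sketch: a plain, app-local record...
plain = {"name": "Acme Corp", "url": "https://acme.example"}

# ...enriched into JSON-LD, so any system that speaks the shared
# vocabulary can interpret it without custom middleware.
linked = {
    "@context": "https://schema.org",  # shared vocabulary (assumed choice)
    "@type": "Organization",           # universal type, not an app-local one
    **plain,
}

print(json.dumps(linked, indent=2))
```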
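Tamper-proof decision logging can be sketched with a simple hash chain: each entry's hash covers the previous entry's hash, so rewriting any part of the history breaks verification. The entry fields below are hypothetical; real systems would add signatures and durable storage.

```python
import hashlib
import json

# Sketch of a tamper-evident agent decision log via hash chaining.

def append_entry(log: list, action: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    log.append({"action": action, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"action": entry["action"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "agent-7", "op": "read", "resource": "invoices"})
append_entry(log, {"agent": "agent-7", "op": "write", "resource": "report"})
assert verify(log)

log[0]["action"]["op"] = "delete"  # tamper with history...
assert not verify(log)             # ...and verification fails
```

Anomaly-detection tooling of the kind described above would sit on top of such a log, watching verified entries for unexpected agent behavior.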
Achieving the above objectives will require a major redesign of data standards – one that replaces convenience for humans with interoperability for machines as the primary success metric.
The Crypto Wars taught us a vital lesson – during major technological shifts, open standards consistently outpace closed ones. Markets reward applications built on open, cost-efficient protocols and punish those that try to lock them down.
We're at the start of a new wave of open standards, as automation and decentralization become the new paradigm.
The entrepreneurs who understand and adapt to this shift will be the long-term winners in this next cycle. That’s where I’m placing my bets.
Endnotes:
[1] This analysis estimates the comparative value of the application layer across two key technology eras: the early Internet era (1985-2001) and the cloud era (2002-2020). To do this, I identified the top 15 public technology companies by revenue in each era (excluding companies that did not generate meaningful revenue from software applications). For each company, I examined the 10-K from the year in which its revenue peaked within the era and isolated the portion derived from application-layer products. Revenue from other business lines, such as hardware, consulting, and infrastructure, was excluded. The total application-layer revenue across these 15 companies was then aggregated and compared between the two periods – 14.1x greater in the cloud era than the early Internet era. This analysis is designed for relative comparison rather than absolute measurement of the application layer's total market size. There are several limitations here: (i) company selection is based on revenue rather than market cap or other measures of industry impact; (ii) financial reporting varies across companies and time periods, making precise revenue isolation imperfect; (iii) the long-tail of application-layer businesses are underrepresented due to lack of information or accounting constraints; and (iv) the definition of the application layer evolves, making cross-era comparisons inherently approximate. Despite these limitations, the approach provides a directional sense of how the application layer's relative value has shifted over time.
[2] Homomorphic encryption is an advanced encryption technique that allows computations to be performed directly on encrypted data without decrypting it, preserving privacy and security in data transmission.
[3] Open-source AI models make their architecture, pre-trained weights, training code, and inference code freely available for use, modification, and redistribution. Unlike proprietary models, which restrict access through APIs, open-source models provide the raw components developers can directly integrate, fine-tune, and deploy within their own applications.
[4] Secure multiparty computation refers to a cryptographic technique that enables multiple parties to jointly compute a function over their inputs without revealing the inputs to each other.
[5] JSON-LD is a lightweight format for linking and structuring data using JSON, designed to make data machine-readable across disparate systems while maintaining human readability.