Two years after departing Twitter in the wake of Elon Musk’s takeover, Parag Agrawal has arrived at an unconventional thesis about artificial intelligence. The former Twitter chief, who once presided over a platform of some 330 million users, argues that the web isn’t becoming obsolete, it’s becoming more essential than ever. On that conviction, his startup Parallel Web Systems just raised $100 million in Series A funding, valuing the company at $740 million.
The round was co-led by Kleiner Perkins and Index Ventures, with participation from Spark Capital and existing backers Khosla Ventures, First Round Capital, and Terrain. Unlike most AI startups chasing consumer applications or large language models, Parallel is building something far more fundamental: the plumbing that lets AI agents use the web the way humans do.
When Parallel launched in August 2025, most observers dismissed Agrawal’s central premise as contrarian nonsense. The prevailing Silicon Valley wisdom held that large language models, trained on the entirety of the internet, had rendered web search obsolete. Why would AI agents need to search when they already “knew” everything?
Agrawal saw it differently. He believed AI agents would depend on live web access far more than humans ever have. An M&A lawyer uses the web constantly to verify current market conditions, regulatory filings, and recent precedent. Why would their AI counterpart be deprived of the same access?
“How many jobs are there where we could turn off web access and ask you to do the same job fully?” Agrawal asked in an interview with Reuters. “You can’t deprive an M&A lawyer from not being able to use the web, so why would you deprive their agents?”
This simple question unlocked a market opportunity. Most AI systems today rely on static training data, often months or years old. When they encounter situations requiring current information, they hallucinate, fabricating plausible-sounding but false information.
For enterprise applications where accuracy matters, hallucinations are unacceptable. Insurance underwriters can’t guess. Lawyers need verified precedent. Salespeople require accurate data.
Building Search for Machines
Traditional search engines like Google optimize for human users. They rank URLs for people to click, measure engagement metrics, and prioritize ad impressions. But an AI agent doesn’t need ranked links; it needs optimized tokens.
Parallel’s innovation is architectural. Instead of returning the “best” link for a human to click, the system identifies the highest-quality tokens from the web and places them directly into an AI model’s context window. This requires rethinking every layer of the search stack, from crawling and indexing to retrieval and ranking, all purpose-built for machine reasoning rather than human browsing.
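To make that distinction concrete, here is a minimal, hypothetical sketch. It is not Parallel’s published API; every URL, passage, and token count below is invented. It only contrasts the two output shapes: ranked links for a person versus source-attributed excerpts packed into a model’s context window under a token budget.

```python
# Hypothetical sketch only: not Parallel's actual API. All URLs, passages,
# and token counts are invented to illustrate the architectural difference.
from dataclasses import dataclass


@dataclass
class WebExcerpt:
    url: str      # source page, kept for citation and verification
    text: str     # passage extracted as relevant to the query
    tokens: int   # rough token count of the passage


def human_style_results(query: str) -> list[str]:
    """A classic engine returns ranked links for a person to click through."""
    return [
        "https://example.com/merger-filing-2025",
        "https://example.com/regulator-extends-review",
    ]


def agent_style_results(query: str, token_budget: int) -> str:
    """A machine-oriented layer returns the most useful passages themselves,
    packed into the model's context window up to a token budget."""
    candidates = [
        WebExcerpt("https://example.com/merger-filing-2025",
                   "The proposed merger was filed on 2025-03-14 and values ...", 120),
        WebExcerpt("https://example.com/regulator-extends-review",
                   "The regulator extended its review period to 90 days ...", 95),
    ]
    packed, used = [], 0
    for ex in candidates:          # assume candidates arrive ranked by relevance
        if used + ex.tokens <= token_budget:
            packed.append(f"[source: {ex.url}]\n{ex.text}")
            used += ex.tokens
    return "\n\n".join(packed)     # this string goes straight into the LLM prompt


if __name__ == "__main__":
    print(agent_style_results("status of the example merger review", token_budget=200))
```

The output of the second function is not a list of destinations but context: compact, attributed evidence the model can reason over immediately.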
The results speak for themselves. In benchmarks, Parallel’s Deep Research API achieved 58% accuracy on the BrowseComp multi-hop web navigation test, compared to 41% for GPT-5 and 25% for humans with a two-hour time limit. On the DeepResearch Bench, which spans 22 academic disciplines, Parallel achieved 82% versus GPT-5’s 66%. These aren’t marginal improvements; they’re fundamental.
Agrawal’s Series A announcement included validation from Fortune 100 companies, plus venture-backed startups like Clay (which uses Parallel for GTM workflows), Sourcegraph (coding agents), and Genpact (workflow automation).
These customers report that Parallel reduces hallucination rates and cuts operational costs by optimizing API calls and token consumption, the primary drivers of AI inference expense.
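The cost lever is simple arithmetic: an agent’s input bill scales with how many tokens each call carries, so tighter, pre-filtered context shrinks the bill. The sketch below is purely illustrative; the prices and volumes are invented, not figures from Parallel or its customers.

```python
# Purely illustrative arithmetic: prices, call volumes, and token counts are
# invented and are not vendor or customer figures.
PRICE_PER_MILLION_INPUT_TOKENS = 10.00   # hypothetical model pricing, in USD


def monthly_input_token_cost(calls: int, tokens_per_call: int) -> float:
    """Input-token spend for a month of agent runs."""
    return calls * tokens_per_call / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS


# Stuffing raw pages into every prompt versus passing only curated excerpts:
raw_pages = monthly_input_token_cost(calls=100_000, tokens_per_call=12_000)
excerpts = monthly_input_token_cost(calls=100_000, tokens_per_call=2_000)
print(f"raw pages: ${raw_pages:,.0f}/month")   # $12,000/month
print(f"excerpts:  ${excerpts:,.0f}/month")    # $2,000/month
```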
The use cases are diverse and concrete. Insurance companies automate underwriting by verifying claims with current web data. Sales teams use AI agents to research prospects with live firmographic information.
Lawyers identify relevant precedent by searching live legal databases and news sources. Developers debug code by cross-referencing up-to-date documentation. None of these tasks works well on static training data; all of them require live web access.
Parallel still faces a significant obstacle. The web is increasingly hostile to machine access. Publishers, worried about AI companies extracting their content without compensation, have deployed paywalls, login barriers, and terms-of-service restrictions. OpenAI, Perplexity, and others have been accused of web scraping, triggering lawsuits and throttled access from major content platforms.
Agrawal acknowledged this challenge and signaled that part of the $100 million will fund solutions. He mentioned developing an “open market mechanism,” an economic model to incentivize publishers to keep content accessible to AI systems.
Details remain sparse, but the ambition is clear: instead of a zero-sum battle between AI companies and publishers, build value-sharing arrangements in which content owners benefit when AI agents access their data.
If Parallel can solve this, it becomes the bridge between two hostile constituencies. If it cannot, the company faces a future in which key web content becomes increasingly gated, undercutting the effectiveness of the agents it serves.
Parallel’s $100 million Series A validates the bet that enterprise AI agents need dedicated infrastructure for live web access. The company’s early customer traction and benchmark performance suggest the technology works. The remaining question is whether Agrawal can navigate the coming content wars and establish the economic model needed to keep the web open to machines.
For investors and enterprise builders, the bet is clear. In a world where AI agents become the web’s primary users, the infrastructure layer that enables that transition will be enormously valuable. Parallel may not be OpenAI or Anthropic, but it’s building something equally essential: the connective tissue between thinking machines and human knowledge.