
A 20-year-old founder from Mumbai has attracted backing from senior Google figures for a new AI startup designed to help large language models remember what users tell them.
Supermemory
Supermemory, founded by developer Dhravya Shah, is building what he calls a “universal memory layer” for artificial intelligence: a tool that allows AI apps to retain and recall information across different sessions.
Google Investor
The company has now raised around $3 million in seed funding, supported by investors including Google’s Chief Scientist Jeff Dean, Cloudflare’s Chief Technology Officer Dane Knecht, and executives from OpenAI and Meta.
Tackling One Of AI’s Hardest Problems
For all their sophistication, it seems that current AI systems still have remarkably short memories. For example, each time a user starts a new conversation, most models forget the details of previous ones. Even with growing “context windows” (i.e. the amount of data a model can process at once), the ability to sustain meaningful long-term context remains limited.
Supermemory is trying to fix this problem, but rather than rebuilding models, it acts as an intelligent memory system that connects to existing AI tools. The platform analyses a user’s files, chats, emails, notes and other unstructured data, identifies key relationships and facts, and then turns that information into a kind of knowledge graph. When an AI system queries the memory layer, it can instantly access relevant past context, making the interaction more accurate and personal.
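To make the pattern concrete, here is a minimal, self-contained sketch of how an app might consult a memory layer before calling a model. The class and method names, and the naive string matching, are invented for illustration; this is not Supermemory’s API, which builds a far richer knowledge graph from unstructured data.

```python
# A toy sketch of the retrieve-then-prompt pattern described above.
# Names and logic are illustrative only, not Supermemory's implementation:
# real systems extract facts with ML models rather than string rules.

from collections import defaultdict


class ToyMemoryLayer:
    """Stores simple (subject, relation, object) facts and retrieves them by entity."""

    def __init__(self):
        self.facts = defaultdict(list)  # entity -> list of fact strings

    def remember(self, subject: str, relation: str, obj: str) -> None:
        fact = f"{subject} {relation} {obj}"
        self.facts[subject.lower()].append(fact)
        self.facts[obj.lower()].append(fact)

    def recall(self, query: str) -> list[str]:
        # Naive retrieval: return facts mentioning any entity named in the query.
        hits = []
        for entity, facts in self.facts.items():
            if entity in query.lower():
                hits.extend(facts)
        return hits


def build_prompt(memory: ToyMemoryLayer, user_message: str) -> str:
    """Prepend recalled facts so the model sees past context it would otherwise lack."""
    context = memory.recall(user_message)
    context_block = "\n".join(f"- {fact}" for fact in context) or "- (no stored context)"
    return f"Known context about the user:\n{context_block}\n\nUser: {user_message}"


if __name__ == "__main__":
    memory = ToyMemoryLayer()
    memory.remember("Dana", "works at", "Acme Robotics")
    memory.remember("Dana", "prefers", "weekly summary emails")

    # In a real app this prompt would be sent to any LLM; here we just print it.
    print(build_prompt(memory, "Draft an email update for Dana's team"))
```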
Shah describes the concept as giving AI “self-learning context about your users that is interoperable with any model.” He says this is where the next wave of AI innovation will focus: not on larger models, but on personalised, context-rich systems that actually remember.
From Mumbai To Silicon Valley
Originally from Mumbai, Shah began programming as a teenager, building small web apps and chatbots. One early creation, a bot that turned tweets into neatly formatted screenshots, was acquired by the social media tool Hypefury. The sale gave him early experience of product building and enough financial headroom to pursue further projects.
He was preparing for India’s elite engineering entrance exams when he decided instead to move to the United States and study computer science at Arizona State University. There, he challenged himself to create a new app every week for 40 weeks. During one of those weeks, he built an experimental tool that let users chat with their Twitter bookmarks. The concept later evolved into Supermemory.
Internship at Cloudflare
In 2024, Shah secured an internship at Cloudflare, working on AI and infrastructure projects, before joining the company full-time in a developer relations role. Mentors there encouraged him to turn Supermemory into a serious product, leading him to leave university and focus on it full-time.
“I realised the infrastructure for memory in AI simply didn’t exist,” he explained in a company blog post. “We built our own vector database, content parser and extractor, all designed to make memory scalable, flexible and fast, like the human brain.”
How It Works
The Supermemory platform can ingest a wide range of content types, including documents, messages, PDFs, and data from connected services such as Google Drive, OneDrive, and Notion. Users can add “memories” manually, via a chatbot or a Chrome extension, or allow apps to sync data automatically.
Once uploaded, the system extracts insights from the content and indexes them in a structure that AI models can query efficiently. It can then retrieve context across long timespans (from emails written months earlier to notes saved in other tools), allowing different AI agents to maintain a coherent understanding of users and projects.
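As a rough illustration of that cross-source retrieval, the sketch below stores records tagged with a source and timestamp and returns the best matches for a question. The record fields, scoring rule and class names are assumptions made for this example; a production system would use embeddings and purpose-built indexes rather than keyword overlap.

```python
# A sketch of cross-source, timestamped retrieval. Field names and the
# scoring rule are invented for illustration, not taken from Supermemory.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class MemoryRecord:
    text: str
    source: str            # e.g. "email", "notion", "drive"
    created_at: datetime


class SharedMemoryIndex:
    """One index that any agent (email assistant, note taker, etc.) can query."""

    def __init__(self):
        self.records: list[MemoryRecord] = []

    def ingest(self, text: str, source: str, created_at: datetime) -> None:
        self.records.append(MemoryRecord(text, source, created_at))

    def query(self, question: str, top_k: int = 3) -> list[MemoryRecord]:
        # Score by keyword overlap; break ties by recency so old but relevant
        # items (e.g. months-old emails) still surface.
        words = set(question.lower().split())
        scored = [
            (len(words & set(r.text.lower().split())), r.created_at, r)
            for r in self.records
        ]
        scored.sort(key=lambda item: (item[0], item[1]), reverse=True)
        return [r for score, _, r in scored[:top_k] if score > 0]


if __name__ == "__main__":
    index = SharedMemoryIndex()
    index.ingest("Quarterly budget approved at 40k", "email", datetime(2024, 3, 2))
    index.ingest("Client prefers Tuesday calls", "notion", datetime(2024, 6, 18))

    for record in index.query("When does the client want to schedule calls?"):
        print(record.source, "->", record.text)
```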
Shah claims the company’s purpose-built infrastructure gives it a technical edge. The system has been benchmarked for low latency, meaning responses arrive quickly even at scale. This speed, he argues, will be key to making memory-driven AI practical in everyday applications.
As Shah says, “Our core strength is extracting insights from any kind of unstructured data and giving apps more context about users.” He adds: “As we work across multimodal data, our solution can support everything from email clients to video editors.”
The Investors
Supermemory’s $3 million seed round was led by Susa Ventures, Browder Capital, and SF1.vc. It also drew high-profile individual investors including Google AI’s Jeff Dean, DeepMind product manager Logan Kilpatrick, Cloudflare CTO Dane Knecht, and Sentry founder David Cramer.
Joshua Browder, the founder of legal automation firm DoNotPay, invested through his personal fund, Browder Capital. “What struck me was how quickly Dhravya moves and builds things,” Browder said publicly. “That prompted me to invest in him.”
Early Customers
The startup already lists several enterprise and developer customers, including the AI productivity tool Cluely, AI video editor Montra, search platform Scira, Composio’s multi-agent tool Rube, and the real estate data firm Rets. One robotics company is reportedly using Supermemory to help machines retain visual memories captured by onboard cameras, an early sign of how the technology could extend beyond software.
While the app has some consumer-facing tools for note-taking and bookmarking, the broader ambition is to make Supermemory the default memory engine for AI agents, providing a universal layer that different applications can plug into.
Not The Only One
Several other startups are also exploring long-term AI memory. For example, companies such as Letta, Mem0 and Memories.ai are developing their own frameworks for building memory layers into AI systems. Some target specific use cases such as customer support or industrial monitoring, while others focus on consumer productivity.
What Makes Supermemory So Different?
Shah argues Supermemory’s technical foundations are its main differentiators. For example, by building its own underlying infrastructure, rather than relying on third-party databases, the company claims to offer faster and more reliable performance than rivals. Early customers reportedly send billions of tokens of data through the platform each week.
Analysts have noted that as AI assistants become embedded across daily workflows, effective memory systems will be essential to making them useful. Without memory, users must constantly repeat information or re-train models for every new task. The growing number of investors and engineers now entering the “AI memory” space reflects that urgency.
From Side Project To Infrastructure Company
It seems, therefore, that what began as a teenager’s personal productivity experiment has quickly become a serious infrastructure business. The original open-source version of Supermemory attracted over 50,000 users and 10,000 stars on GitHub, making it one of the fastest-growing projects of its kind in 2024. That early traction revealed the technical limits of existing tools and gave Shah the confidence to rebuild it from the ground up.
The company now describes its product as “interoperable with any model” and capable of scaling across billions of data points. It is hiring engineers, researchers and product designers to continue improving its platform.
Shah, who recently turned 20, says he sees memory as the next defining challenge in AI. “We have incredibly intelligent models,” he wrote on his blog, “but without memory, they can’t truly understand or personalise for the people they serve.”
What Does This Mean For Your Business?
The growing interest in memory infrastructure highlights how the next advances in AI will not come solely from bigger models, but from systems that can learn and recall over time. Supermemory’s approach to context retention gives developers and enterprises a practical route towards that goal. For AI to be genuinely useful across sectors such as healthcare, education and business operations, the ability to remember earlier inputs securely and accurately will be critical. This is the gap Shah’s technology is aiming to close, and its progress is already attracting serious attention from investors and other AI developers.
For UK businesses, the implications could be significant. For example, many organisations are now experimenting with generative AI tools for writing, analysis, and customer engagement, yet find themselves limited by the absence of memory between sessions. A reliable layer that provides long-term contextual understanding, therefore, could make those tools far more effective, whether in automating reports, managing client communications or maintaining project continuity. If Supermemory delivers the speed and scalability it claims, it could simplify how businesses integrate AI into daily workflows without constantly retraining or re-prompting systems.
There are also questions that the technology community will need to address. Any system designed to ingest and store personal or corporate data at scale will face scrutiny over privacy, compliance and data security. How Supermemory and its competitors handle that responsibility will help define the credibility of this emerging market. Investors appear confident that Shah and his team are aware of those challenges, and that their focus on infrastructure gives them a technical edge.
For now, Supermemory’s rapid evolution from side project to venture-backed platform shows how quickly new layers of the AI ecosystem are forming. It is a story about a young founder spotting one of the field’s most persistent gaps and convincing some of the world’s leading technologists that he has a credible solution. Whether the company can translate that promise into long-term commercial success remains to be seen, but its emergence signals a clear direction of travel for the next stage of AI development: towards systems that don’t just process information, but remember it.