Can AI truly remember like humans?

Video has always been the toughest format for AI: messy, dense, and hard to index.
In September 2025, Shawn Shen, co-founder and CEO of Memories.ai, said his team was “building an AI model that can see and remember just like humans.” That comment came as the San Francisco startup began an aggressive hiring push, offering compensation packages of up to $2 million to recruit top researchers from Meta, Google, Microsoft, Anthropic, xAI, and other AI labs. Among the new hires was Chi-Hao Wu, a former Meta research scientist who joined as Chief AI Officer to advance the company’s long-context video understanding work.
Now, that vision is entering the hardware phase. Through a new partnership with Qualcomm, Memories.ai will integrate its Large Visual Memory Model (LVMM) 2.0 into Snapdragon-powered phones, PCs, cameras, and wearables starting in 2026, marking Qualcomm’s first public collaboration with a company focused entirely on visual memory. AIM Media House spoke exclusively to Shen about the partnership and future plans.
Giving Machines Visual Recall
The idea behind LVMM, Shen explained, came from a simple observation: video data is enormous and difficult to search, while text is small and inherently indexable. “The idea was to add a missing visual memory layer for video,” he said. “For text, the files are small and searchable by default. Video isn’t: a ten-minute clip can be more than a hundred gigabytes, with different voices, faces, and locations. LVMM turns those raw frames into structured memory so AI can find exact moments and answer broader questions.”
LVMM 2.0 compresses and encodes frames, stripping away noise while fusing video, audio, and image information into a unified format. “Video has always been the toughest format for AI: messy, dense, and hard to index,” Shen said. “LVMM 2.0 encodes frames, compresses them, and builds a sub-second index. It fuses video, audio, and images so results keep context. No more hoping transcripts match what the eyes see.”
That means a user could type or ask something as specific as “friends eating dinner in Seoul” and instantly find the relevant clip without relying on manual tags or transcripts.
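To make that concrete, here is a minimal, hypothetical sketch of text-to-moment retrieval over a single video, using OpenCV and an open CLIP checkpoint (via sentence-transformers) as stand-ins for LVMM’s proprietary encoder and index. It illustrates the concept only, not Memories.ai’s actual pipeline; the file name and sampling rate are made up for the example.

```python
# Hypothetical sketch only: an open CLIP model stands in for LVMM's encoder.
# Requires: pip install opencv-python pillow sentence-transformers
import cv2
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # maps images and text into one embedding space

def index_video(path: str, every_n_seconds: float = 2.0):
    """Sample frames and store (timestamp, embedding) pairs: a toy 'visual memory'."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    memory, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            emb = model.encode(Image.fromarray(rgb), normalize_embeddings=True)
            memory.append((frame_no / fps, emb))
        frame_no += 1
    cap.release()
    return memory

def search(memory, query: str, top_k: int = 3):
    """Rank stored moments by cosine similarity to a natural-language query."""
    q = model.encode(query, normalize_embeddings=True)
    ranked = sorted(memory, key=lambda m: -float(np.dot(m[1], q)))
    return [(round(t, 1), float(np.dot(e, q))) for t, e in ranked[:top_k]]

# Example: find the moment matching a free-form description.
memory = index_video("trip.mp4")  # hypothetical file
print(search(memory, "friends eating dinner in Seoul"))
```

In this toy version the “memory” is just a list of timestamped embeddings; a production system would compress them and persist a proper index so queries stay sub-second even over very large libraries of footage.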
Partnering With Qualcomm
The Qualcomm partnership moves Memories.ai from research into commercial deployment. “We’re thrilled to partner with Qualcomm,” Shen said. As one of the world’s largest chipmakers across mobile, IoT, and AI devices, Qualcomm was the natural fit to bring LVMM 2.0 directly onto hardware. The collaboration is meant to ensure the model runs natively on devices, reducing latency and improving privacy, and it lets Memories.ai work directly with manufacturers so that LVMM can be embedded into products from the outset rather than added later.
Under the collaboration, LVMM’s encoder runs on the device’s neural-processing unit, while retrieval operates through the CPU. Qualcomm has not yet specified which processors will feature the integration first, but the Snapdragon X2 Elite for PCs and the Snapdragon 8 Elite Gen 5 for smartphones are being prepared for testing. Running LVMM directly on-device eliminates dependence on external cloud systems, addressing latency and privacy simultaneously.
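As a rough illustration of that split (a sketch under assumptions, not the actual Qualcomm or Memories.ai runtime), the example below treats the frame embeddings as already computed, the expensive encoder-side work, and shows why retrieval can stay on the CPU: with a compact int8 index, answering a query is just an integer dot product over the whole memory. The dimensions and data are invented for the demonstration.

```python
# Hypothetical sketch: an int8-quantized embedding index queried on the CPU.
# The frame embeddings stand in for whatever the NPU-side encoder produces.
import numpy as np

def quantize(embs: np.ndarray):
    """Quantize float embeddings to int8 per row so the on-device index stays small."""
    scale = np.abs(embs).max(axis=1, keepdims=True) / 127.0
    return np.round(embs / scale).astype(np.int8), scale.squeeze()

def cpu_search(index_q: np.ndarray, scales: np.ndarray, query: np.ndarray, top_k: int = 5):
    """Rank frames by approximate dot-product similarity using the quantized index."""
    scores = (index_q.astype(np.int32) @ query) * scales
    return np.argsort(-scores)[:top_k]

# Pretend the encoder has already produced 10,000 frame embeddings of dimension 512.
rng = np.random.default_rng(0)
frame_embs = rng.normal(size=(10_000, 512)).astype(np.float32)
index_q, scales = quantize(frame_embs)

query_emb = rng.normal(size=512).astype(np.float32)  # would come from a text encoder
print(cpu_search(index_q, scales, query_emb))
```

Quantization is only one way to keep such an index small and local; the broader point is that once encoding is done, search over the index is cheap and never has to leave the device, which is what makes the latency and privacy claims plausible.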
Shen expects the first deployments to focus on improving tools users already rely on, such as faster photo and video search on devices and smarter flagging for security cameras. “It’ll start with products we’re already familiar with: better photo and video search on device,” he said. Over time, these capabilities are expected to extend into newer areas unlocked by AI and edge computing, including wearable glasses able to capture and recall their surroundings, and home or factory robots programmed to remember specific processes with precision.
For now, the company’s roadmap centers on concrete, near-term integrations rather than experimental ones, ensuring LVMM 2.0 can deliver measurable efficiency and accuracy improvements across existing ecosystems.
Funding and Expansion
Memories.ai closed an $8 million seed round earlier this year, led by Susa Ventures with participation from Samsung Ventures. Shen called the investment “a great endorsement” of the company’s mission. “This allowed us to make key hires and investments in our technology,” he said. “We’re now generating healthy revenue from our various partnerships and customers to prove what it’s capable of.”
The company’s initial traction is strongest where video data accumulates quickly: social media platforms, enterprise archives, and consumer devices. “We think the first few years will see the most adoption in improving technologies people already use, like AI search for videos and photos, or analytics for large volumes of content,” Shen said.
Privacy and Architecture
A defining element of LVMM 2.0 is its on-device structure. “LVMM 2.0 runs on-device, so data stays local,” Shen said. By eliminating routine cloud round trips, the system allows recall and search features to be built directly on the device without the need to export raw video. This architecture delivers faster performance and improved privacy simultaneously, creating a unified memory framework that keeps sensitive information local.
This architecture fits within Qualcomm’s wider shift toward on-device AI, in which computation occurs where data is created. It also offers manufacturers a way to improve AI performance without adding the regulatory complexity of remote storage or transmission.
Shen sees LVMM as a complement to large language models rather than a competing system. “Text is easy; video is hard,” he said. While many existing technologies have helped LLMs retain and retrieve massive amounts of text, achieving the same level of memory across hundreds or thousands of gigabytes of video had been considered nearly impossible before LVMM. As artificial intelligence moves beyond screens and into physical environments, such systems are now being built to enable machines to process and recall the world with human-like continuity.
Memories.ai provides the indexing and recall layer that structures visual data and makes retrieval faster and more private, while Qualcomm supplies the chip-level optimization to run it locally and efficiently, and gains software that demonstrates the capabilities of its new generation of NPUs. Both companies describe the relationship as an engineering partnership focused on integration and performance testing ahead of next year’s launch schedule.
Key Takeaways
- Memories.ai is partnering with Qualcomm to integrate its visual memory AI (LVMM 2.0) into Snapdragon devices starting in 2026.
- Memories.ai aims to give AI visual recall, enabling machines to process and remember video data like humans.
- LVMM 2.0 addresses the challenge of video data by transforming raw frames into structured, searchable memories for AI.
- The company recruited top AI talent, offering significant compensation, to advance its long-context video understanding.