Memories.ai Partners With Nvidia to Build Visual Memory for AI Wearables

By alex2404

Memories.ai, a startup founded by two former Meta engineers, has announced a collaboration with Nvidia to build visual memory infrastructure for AI-powered wearables and robotics — technology the founders say is entirely missing from the physical AI stack.

The partnership, announced at Nvidia's GTC conference, gives Memories.ai access to Nvidia's Cosmos-Reason 2, a reasoning vision language model, and Nvidia Metropolis, a video search and summarization application. According to the announcement, the tools feed into the company's core product: a system that allows AI devices to store, index, and retrieve what they have seen.

CEO Shawn Shen and co-founder and CTO Ben Zhou say the idea came directly from their previous work building the AI system behind Meta's Ray-Ban glasses. The experience exposed a gap: AI glasses could record video, but users had no way to recall it meaningfully. The pair left Meta to build the solution themselves.

“AI is already doing really well in the digital world. What about the physical world?” Shen said. “AI wearables, robotics need memories as well. … Ultimately, you need AI to have visual memories.”

Text Memory Won’t Cut It for Physical AI

Memory features have been spreading through the AI industry — OpenAI added chat memory to ChatGPT in 2024 and refined it in 2025, while xAI and Google Gemini have launched their own memory tools in the past two years. But Shen argues those advances are built on text, which is structured and easy to index but of limited use to devices that navigate the world visually.

Visual memory demands different infrastructure. The company says it requires embedding and indexing video into a retrievable data format — a substantially harder problem than logging text conversations.
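Memories.ai has not published how its model works internally, but the general pattern the company describes — embed frames, index them, retrieve them by query — is straightforward to sketch. The illustration below is purely an assumption-laden stand-in: it uses an open CLIP model rather than the company's LVMM, samples frames at a fixed interval, and retrieves with a brute-force cosine index instead of a production vector store; the file name and sampling rate are invented.

```python
# Illustrative sketch only: embed sampled video frames and retrieve them by
# text query. The embedding model, sampling rate, and index are assumptions,
# not Memories.ai's actual pipeline.
import cv2
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_video(path: str, every_n_seconds: float = 2.0):
    """Sample frames at a fixed interval and embed each into a unit vector."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    embeddings, timestamps = [], []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            inputs = processor(images=image, return_tensors="pt")
            with torch.no_grad():
                feat = model.get_image_features(**inputs)[0]
            embeddings.append((feat / feat.norm()).numpy())
            timestamps.append(i / fps)
        i += 1
    cap.release()
    return np.stack(embeddings), timestamps

def search(query: str, embeddings: np.ndarray, timestamps, top_k: int = 3):
    """Return the timestamps whose frames best match a text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        q = model.get_text_features(**inputs)[0]
    q = (q / q.norm()).numpy()
    scores = embeddings @ q  # cosine similarity; everything is unit-normed
    best = np.argsort(scores)[::-1][:top_k]
    return [(timestamps[j], float(scores[j])) for j in best]

# Hypothetical usage: index a day's recording, then ask where something happened.
# embs, ts = embed_video("day_recording.mp4")
# print(search("where did I leave my keys", embs, ts))
```

Even this toy version makes the scaling problem visible: a day of wearable footage sampled every two seconds yields tens of thousands of embeddings per user, which is why a dedicated indexing layer, rather than a chat-style text log, sits at the center of the company's pitch.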

To build that infrastructure, Memories.ai launched its large visual memory model (LVMM) in July 2025. Shen describes it as comparable to a smaller version of Google's Gemini Embedding 2, a multimodal indexing model. A second-generation LVMM is already out, and the company has signed a partnership with Qualcomm to run the model on its processors later this year.

Hardware Built to Train, Not to Sell

For training data, the company built a proprietary wearable device called LUCI, worn by employees who record video to feed the model. Shen was explicit that Memories.ai has no intention of becoming a hardware company — the device exists because commercial video recorders prioritized high-definition formats that consumed too much battery for the company’s needs.

The company says it is already working with several large wearable manufacturers, though it declined to name them.

Founded in 2024, Memories.ai has raised $16 million in total — an $8 million seed round in July 2025 followed by an $8 million extension. Susa Ventures led the round, with participation from Seedcamp, Fusion Fund, and Crane Venture Partners, among others.

Shen acknowledges the market isn’t fully there yet. “We are more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it’s probably just not now,” he said.

Photo by Jerry Zhang on Unsplash

This article is a curated summary based on third-party sources.
