Delivery robots in dense cities have long struggled with GPS unreliability — signals bounce off buildings, drift by dozens of meters, and routinely place a device on the wrong block entirely. A new partnership is now putting a surprising solution to work on those streets.
Niantic Spatial, an AI company spun out of Niantic in May of last year, has teamed up with Coco Robotics to deploy a visual positioning system built on data gathered from hundreds of millions of Pokémon Go players. The model, according to the announcement, can pinpoint a device’s location to within a few centimeters — based on nothing more than a handful of snapshots of nearby buildings or landmarks.
The origin of that capability is unusual. When Pokémon Go launched in 2016, it sent players into streets worldwide, phones raised toward buildings, parks, and public landmarks. “Five hundred million people installed that app in 60 days,” says Brian McClendon, CTO at Niantic Spatial. Each of those images came tagged with precise location data. Over the years, combined with images from Ingress — Niantic’s earlier AR game launched in 2013 — the company assembled a training dataset of 30 billion images captured in urban environments. The game still drew more than 100 million players in 2024, eight years after release, according to Scopely, the video-game firm that acquired Pokémon Go from Niantic at the same time the AI spinout was created.
From Pikachu to Pavement
The dataset clusters around more than a million locations worldwide — places that served as key in-game sites, such as battle arenas, where players repeatedly returned. That repetition matters. For each location, the system holds thousands of images taken from slightly different angles, distances, and lighting conditions, giving the model a dense, multi-perspective view of the same physical space. “We know where you’re standing within several centimeters of accuracy and, most importantly, where you’re looking,” says McClendon.
Coco Robotics operates around 1,000 flight-case-sized robots across Los Angeles, Chicago, Jersey City, Miami, and Helsinki. Built to carry up to eight extra-large pizzas or four grocery bags, the robots move along sidewalks at roughly five miles per hour and have completed more than half a million deliveries, covering several million miles to date. CEO Zach Rash is direct about the core operational challenge: “The best way we can do our job is by arriving exactly when we told you we were going to arrive.” Dense urban corridors — high-rises, underpasses, freeways — make standard GPS too imprecise to meet that standard.
“The urban canyon is the worst place in the world for GPS,” says McClendon. “If you look at that blue dot on your phone, you’ll often see it drift 50 meters, which puts you on a different block going a different direction on the wrong side of the street.”
A Shared Problem, an Unexpected Dataset
Visual positioning — determining location from what a camera sees — is not new technology. But scale changes everything. “It’s obvious that the more cameras we have out there, the better it becomes,” says Konrad Wenzel at ESRI, a digital mapping and geospatial analysis firm. Niantic Spatial’s dataset is, by any measure, an outsized one.
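At its simplest, the idea behind visual positioning can be reduced to a lookup: each reference image is boiled down to a compact descriptor tagged with the coordinates where it was captured, and a query snapshot is localized by finding the most similar reference descriptors. The toy sketch below illustrates only that retrieval step, with entirely synthetic descriptors and coordinates; production systems like Niantic Spatial’s use learned image features and full camera-pose estimation, not a nearest-neighbor average.

```python
import math

# Toy reference database: each "image" is a short feature descriptor
# paired with the (latitude, longitude) where it was captured.
# All values are synthetic, for illustration only.
reference_db = [
    ((0.10, 0.90, 0.30), (34.0522, -118.2437)),
    ((0.12, 0.88, 0.33), (34.0523, -118.2436)),
    ((0.80, 0.20, 0.70), (41.8781, -87.6298)),
]

def localize(query, db, k=2):
    """Estimate position as the mean location of the k reference
    images whose descriptors are closest to the query descriptor."""
    ranked = sorted(db, key=lambda item: math.dist(query, item[0]))
    nearest = [pos for _, pos in ranked[:k]]
    lat = sum(p[0] for p in nearest) / k
    lon = sum(p[1] for p in nearest) / k
    return (lat, lon)

# A snapshot taken near the first two reference images resolves
# to a position between them, not to the distant third one.
print(localize((0.11, 0.89, 0.31), reference_db))
```

The reason a dense, multi-angle dataset matters is visible even in this sketch: the more overlapping reference views a location has, the more neighbors the query can be matched against, tightening the estimate.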
John Hanke, CEO of Niantic Spatial, frames the connection plainly: “It turns out that getting Pikachu to realistically run around and getting Coco’s robot to safely and accurately move through the world is actually the same problem.”
The partnership with Coco Robotics represents the first large-scale real-world deployment of Niantic Spatial’s positioning technology.
This article is a curated summary based on third-party sources.