A few hundred protesters marched through London’s King’s Cross neighborhood on Saturday, February 28, targeting the city’s concentration of major AI company offices in what organizers called the largest anti-AI demonstration of its kind.
The march was organized by two activist coalitions, Pause AI and Pull the Plug, whose members chanted slogans and carried signs outside the UK headquarters of OpenAI, Meta, and Google DeepMind. Protesters directed much of their criticism at generative AI products, particularly OpenAI’s ChatGPT and Google DeepMind’s Gemini.
Researchers have raised concerns about the harms of generative AI for years. What the London march signals is a shift in how those concerns are being expressed: moving from academic papers and policy debates into organized street demonstrations capable of drawing significant public participation.
What’s Orbiting Earth Right Now
Separate from the protest news, a new feature examines the growing volume of human-made objects in low Earth orbit. The numbers are striking: over the past five years, the count of active satellites has risen from roughly 3,000 to approximately 14,000, and the figure continues to climb.
The inventory includes research telescopes, the International Space Station, and an expanding field of commercial satellites. It also includes a substantial and growing volume of debris. Scientists describe this accumulating layer as the “anthroposphere,” a band of human-built material wrapped around the planet just beyond the atmosphere.
Humanity began placing objects in orbit in 1957. What started as a rare and expensive feat has become routine, driven largely by the falling cost of satellite launches and rising commercial demand for connectivity infrastructure.
AI’s Energy Footprint Earns Awards Recognition
The American Society of Magazine Editors named MIT Technology Review a finalist for a 2026 National Magazine Award in the reporting category. The shortlisted piece, titled “We did the math on AI’s energy footprint. Here’s the story you haven’t heard,” was produced by senior AI reporter James O’Donnell and senior climate reporter Casey Crownhart, who spent six months reviewing hundreds of pages of reports and interviewing industry experts.
Other Developments Worth Tracking
- Anthropic and the Pentagon failed to reach a data-sharing agreement after the Defense Department sought access to bulk data collected from Americans. OpenAI subsequently signed a separate deal with the Pentagon.
- DeepSeek is preparing to release a new multimodal AI model, designated V4, timed ahead of China’s annual parliamentary sessions.
- Iranian news sites and a religious app were compromised following US-Israeli strikes, with the attackers displaying anti-military messages and urging personnel to abandon the government.
- The UK is piloting a social media restriction program targeting users under 16, testing overnight digital curfews and screen time limits with hundreds of teenagers.
The question of what follows large language models is also drawing attention. Many researchers argue the next significant advances in AI may not resemble current language model architectures at all, with multiple emerging technical directions competing to define the field’s next phase.
This article is a curated summary based on third-party sources.