LinkedIn Replaced Five Feed Systems With One LLM at 1.3B Scale

By alex2404

Global professional networks have been under mounting pressure to justify the relevance of their feeds as user behavior grows more fragmented and harder to predict. Against that backdrop, LinkedIn has spent the past year dismantling and rebuilding the core architecture that determines what 1.3 billion members see when they open the platform.

The old system was a product of years of layered engineering. Five separate retrieval pipelines operated in parallel — each with its own infrastructure, its own optimization logic, pulling from different content sources: a chronological network index, geographic trending topics, interest-based filters, industry-specific content, and embedding-based systems. It worked. But according to the announcement, maintenance costs climbed steadily as each source required independent upkeep.

Engineers replaced that structure with a single LLM-based retrieval system. The redesign touched three distinct layers: content retrieval, ranking, and the underlying compute management.

Converting Professional Data Into Something a Model Can Use

Tim Jurka, vice president of engineering at LinkedIn, described running hundreds of tests over the past year before reaching what he called a reinvention of a large chunk of the company's infrastructure. "Starting from our entire system for retrieving content, we've moved over to using really large-scale LLMs to understand content much more richly on LinkedIn and be able to match it in a much more personalized way to members," Jurka said.

One early obstacle was format. To feed the LLM, the team had to convert platform data into text sequences. They built a prompt library using templated structures. For posts, the template draws on format, author information, engagement counts, article metadata, and post text. For members, it pulls profile data, skills, work history, education, and — notably — a chronologically ordered sequence of posts that member had previously engaged with.

That last element points to a deeper design philosophy. The company says traditional ranking models treat engagement as random, missing the patterns that emerge from a person’s professional trajectory. LinkedIn’s new proprietary Generative Recommender model is built to treat those patterns as meaningful signal.

What the Model Got Wrong — and How the Team Fixed It

Testing surfaced a specific failure mode worth attention. When a post carried an engagement figure such as 12,345 views, the model read “views:12345” as plain text — stripping the number of any meaning as a popularity signal. The fix was structural: engagement counts were broken into percentile buckets and wrapped in special tokens, separating them from unstructured text. According to the report, this intervention meaningfully improved how the system weighs post reach.

The underlying challenge Jurka identified is two-sided. LinkedIn must reconcile what members say they care about — title, skills, industry — with how they actually behave over time. It must also surface content from outside a member’s immediate network when that content is more relevant. Those two signals frequently pull against each other.

The company notes that members use the platform in distinct ways: some prioritize industry connection, others focus on thought leadership, and a separate group uses it primarily for hiring or job searching. A single retrieval model now handles all of those contexts.

The stated outcome is a feed the company says is more precisely matched to professional context and less expensive to operate at scale. The next step, according to Jurka, involves continued work on how member context is maintained within the model prompt and how data points are sampled for fine-tuning the LLM.


This article is a curated summary based on third-party sources.
