Article by: KarenZ, Foresight News
Did Elon Musk change Twitter’s recommendation system from “rule-based heuristics and manually designed features” to “a pure AI large model guessing your preferences”?
On January 20, Twitter (X) officially disclosed the new recommendation algorithm, which is the logic behind the “For You” timeline on the Twitter homepage.
In simple terms, the current algorithm works like this: mix “content from the people you follow” with “content from across the platform that might appeal to you,” rank the mix based on your previous actions on X (likes, comments, and so on), filter twice along the way, and finally present the recommended feed.
Below is a plain-language translation of the core logic:
Build a Profile
The system first collects user context information to establish a “profile” for subsequent recommendations:
User behavior sequence: historical interaction records (likes, retweets, dwell time, etc.).
User features: follow list, personal preference settings, etc.
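A minimal sketch of what such a “profile” might hold, assuming a simple data structure; every field name here is an illustrative assumption, not X’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical user-context container mirroring the article's description.
@dataclass
class UserContext:
    user_id: str
    follows: set = field(default_factory=set)            # follow list
    blocked_authors: set = field(default_factory=set)
    muted_keywords: set = field(default_factory=set)
    # behavior sequence: (post_id, action, dwell_seconds)
    history: list = field(default_factory=list)

ctx = UserContext(user_id="u1", follows={"alice", "bob"})
ctx.history.append(("p42", "like", 12.5))
```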
Where does the content come from?
Every time you refresh the “For You” timeline, the algorithm sources content from two places:
Familiar circle (Thunder): tweets from people you follow.
Stranger circle (Phoenix): posts from people you don’t follow, retrieved by AI from the platform’s vast pool of users because they match your taste.
These two sets of content are mixed together, forming candidate tweets.
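The two-source mix above can be sketched as follows; the “Thunder”/“Phoenix” names come from the article, but the fetch functions are placeholder assumptions:

```python
# Stand-in for the "Thunder" source: recent posts from followed accounts.
def fetch_in_network(follows):
    return [{"id": f"in-{u}", "author": u} for u in sorted(follows)]

# Stand-in for the "Phoenix" source: AI-retrieved posts from authors
# the user does not follow.
def fetch_out_of_network(user_id):
    return [{"id": "out-1", "author": "stranger"}]

# Mix both sources into one candidate pool.
def gather_candidates(user_id, follows):
    return fetch_in_network(follows) + fetch_out_of_network(user_id)

candidates = gather_candidates("u1", {"alice", "bob"})
```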
Data Completion and Initial Filtering
After retrieving thousands of posts, the system fetches the complete metadata of each post (author info, media files, core text). This process is called Hydration. Then, a quick cleaning pass removes duplicate content, old posts, posts authored by the user themselves, content from blocked authors, or content containing blocked keywords.
This step saves computational resources and prevents invalid content from entering the core scoring phase.
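The quick cleaning pass can be sketched as a single filter function; the post field names, the 48-hour staleness cutoff, and the keyword check are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def initial_filter(posts, *, user_id, blocked_authors, blocked_keywords,
                   max_age=timedelta(hours=48), now=None):
    now = now or datetime.now(timezone.utc)
    kept, seen_ids = [], set()
    for p in posts:                                   # p is a hydrated post
        if p["id"] in seen_ids:
            continue                                  # duplicate
        if p["author"] == user_id or p["author"] in blocked_authors:
            continue                                  # own post / blocked author
        if now - p["created_at"] > max_age:
            continue                                  # stale post
        text = p.get("text", "").lower()
        if any(k in text for k in blocked_keywords):
            continue                                  # blocked keyword
        seen_ids.add(p["id"])
        kept.append(p)
    return kept
```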
How is scoring done?
This is the most critical part. Using the Phoenix Grok Transformer model, the system evaluates each candidate post that survived the initial filtering, calculating the probability that you will take various actions on it. It’s a game of positive and negative feedback:
Positive feedback (score boosting): AI predicts you might like, retweet, reply, click on images, or visit the author’s profile.
Negative feedback (score reduction): AI predicts you might block the author, mute, or report.
Final score = (Like probability × weight) + (Reply probability × weight) - (Block probability × weight)…
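In code, that weighted sum might look like the following; the action set and the weight values are invented for illustration and are not X’s real weights:

```python
# Hypothetical engagement weights: positive actions boost the score,
# negative actions reduce it. Values are illustrative only.
WEIGHTS = {
    "like": 1.0, "reply": 2.0, "repost": 1.5, "profile_click": 0.5,
    "block": -10.0, "mute": -5.0, "report": -15.0,
}

def final_score(predicted):
    """predicted: {action: probability} output by the ranking model."""
    return sum(WEIGHTS.get(action, 0.0) * p for action, p in predicted.items())

s = final_score({"like": 0.3, "reply": 0.05, "block": 0.01})
# 1.0*0.3 + 2.0*0.05 + (-10.0)*0.01 = 0.3
```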
It’s worth noting that in the new recommendation algorithm, the Author Diversity Scorer intervenes after the model computes the final score. When it detects multiple posts from the same author within a candidate batch, it automatically lowers the scores of that author’s subsequent posts, increasing content diversity.
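One way to sketch such a diversity pass is a multiplicative discount on repeated authors; the 0.7 decay factor is an invented example, not X’s actual value:

```python
def apply_author_diversity(scored, decay=0.7):
    """scored: list of (post, score) pairs, highest score first.
    Each additional post by the same author is discounted by `decay`."""
    seen = {}
    adjusted = []
    for post, s in scored:
        n = seen.get(post["author"], 0)
        adjusted.append((post, s * (decay ** n)))     # first post untouched
        seen[post["author"]] = n + 1
    return sorted(adjusted, key=lambda item: item[1], reverse=True)

ranked = apply_author_diversity([
    ({"id": "a1", "author": "alice"}, 0.9),
    ({"id": "a2", "author": "alice"}, 0.8),   # discounted to 0.56
    ({"id": "b1", "author": "bob"}, 0.6),
])
```

Note how alice’s second post drops below bob’s, even though its raw score was higher.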
Finally, the posts are sorted by score, and the top-scoring batch is selected.
Secondary Filtering
The system rechecks the top-scoring posts, filtering out violations (such as spam, violent content), deduplicating multiple branches of the same thread, and then sorts them from high to low score to form the information stream you see.
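That second pass might look like the following sketch; the `violation` flag and `thread_id` field are hypothetical stand-ins for whatever signals the real system uses:

```python
def secondary_filter(scored):
    """scored: list of (post, score). Drops violations, keeps one post
    per thread, and returns the rest sorted high-to-low by score."""
    seen_threads = set()
    feed = []
    for post, s in sorted(scored, key=lambda item: item[1], reverse=True):
        if post.get("violation"):
            continue                          # spam / violent content
        tid = post.get("thread_id")
        if tid is not None:
            if tid in seen_threads:
                continue                      # another branch of same thread
            seen_threads.add(tid)
        feed.append(post)
    return feed

feed = secondary_filter([
    ({"id": "p1", "thread_id": "t1"}, 0.9),
    ({"id": "p2", "thread_id": "t1"}, 0.7),   # same thread, dropped
    ({"id": "p3", "violation": True}, 0.8),   # violation, dropped
    ({"id": "p4"}, 0.5),
])
```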
Summary
X has eliminated all manually designed features and most heuristic algorithms from its recommendation system. The core advancement of the new algorithm is “letting AI autonomously learn user preferences,” achieving a leap from “telling the machine what to do” to “letting the machine learn how to do it.”
First, recommendations are more accurate, with “multi-dimensional pre-judgment” that better fits real needs. The new algorithm relies on the Grok large model to predict various user behaviors — not only whether you will like or retweet but also whether you will click links, how long you stay, whether you will follow the author, and even whether you will report or block. This refined judgment greatly enhances the alignment of recommended content with users’ subconscious needs.
Second, the algorithm mechanism is relatively fairer, to some extent breaking the “monopoly of big accounts” and giving new and small accounts more opportunities. Past heuristic algorithms had a fatal flaw: big accounts gained high exposure through high historical engagement regardless of content quality, while new accounts with good content were buried for “lack of data.” The candidate isolation mechanism scores each post independently, regardless of whether other content in the batch is a viral hit. Meanwhile, the Author Diversity Scorer curbs the spamming of multiple posts by the same author within a batch.
For X Inc.: This is a cost reduction and efficiency enhancement move — using computing power to replace manpower, and AI to improve retention. For users, we are facing a “super brain” that constantly tries to understand human psychology. The better it understands us, the more dependent we become. But because it knows us so well, we risk falling deeper into the “information cocoon” woven by algorithms and becoming more easily targeted by emotionally charged content.