The "Friend Finding" Feature No One Asked For: OpenAI’s Pivot to Surveillance Capitalism
How the February 2026 privacy update transforms ChatGPT from a tool into a tracking ecosystem.
It arrived in my inbox at 8:00 PM on Valentine's Day—a cynical timeslot often reserved for news that companies hope you won't notice. The subject line was innocuous enough: "Updates to OpenAI's Privacy Policy".
It framed the changes as a benevolent exercise in transparency, claiming the update was designed to "give you more information about what data we collect" and "how you can control it". It highlighted three seemingly helpful pillars: a new feature for "Finding friends on OpenAI services", "Age prediction" mechanisms to provide "safeguards for teens", and details on "New tools" like the Atlas browser and Sora 2.

On the surface, it reads like standard housekeeping from a scaling tech company. But if you set aside the friendly summary and conduct a forensic line-by-line comparison of the new 28-page policy against the archived 2024 version, a different story emerges.