Letting users stitch their own data together: LLM agents for cross‑platform personalization
This paper argues that personalization should move from being controlled by individual platforms to being governed by the user. The authors show that large language model (LLM) agents — AI systems that can read, reason, and act across mixed kinds of personal data — make this shift practically possible. In short, users can gather their scattered data from many services and let an LLM agent turn that combined view into personalized recommendations that a single platform could not produce on its own.
The starting point is the data barrier that limits platform-centric personalization. Each service only sees the slice of a person’s life that happens inside its own app. Platforms have improved their methods over decades, from collaborative filtering and deep learning to in-house LLM approaches, but they still rely on data generated inside their boundaries. Competitive pressures, technical limits, user privacy concerns, and regulation keep platforms from assembling a truly complete picture. The authors point to examples such as Apple and Google building personalization inside their own ecosystems, and to laws like the European Union’s Digital Markets Act and a 2025 fine against Meta as signs that combining cross‑platform data is legally and politically constrained.
The alternative the paper proposes is user‑governed personalization. Here the user collects exports of their data from many services and gives an LLM agent access to that combined context. The agent reasons over heterogeneous records — messages, calendars, purchases, and other traces — to produce personalized choices that reflect the user’s full life. The authors provide proof‑of‑concept evidence that users who pair cross‑platform data exports with an off‑the‑shelf LLM agent can outperform single‑platform personalization baselines. These results show the idea works in limited tests, and they suggest that the user, rather than any platform, can become the one party positioned to integrate their fragmented contexts.
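The paper does not prescribe a concrete data format or pipeline, but the workflow above can be sketched as code. The following is a minimal illustration, not the authors' implementation: it assumes hypothetical JSON-style exports where each record carries a `timestamp` and a `summary` field, normalizes them into one chronological context, and assembles a prompt that an off-the-shelf LLM agent could consume.

```python
def normalize(source, records):
    """Map one service's export onto a hypothetical common schema.

    Real exports differ per platform; this assumes each record already
    has a 'timestamp' (ISO 8601 string) and a short 'summary'.
    """
    return [
        {"source": source, "when": r["timestamp"], "text": r["summary"]}
        for r in records
    ]

def build_context(exports):
    """Merge per-service exports into one chronological context string."""
    merged = []
    for source, records in exports.items():
        merged.extend(normalize(source, records))
    # ISO 8601 timestamps sort correctly as plain strings.
    merged.sort(key=lambda r: r["when"])
    return "\n".join(
        f'[{r["when"]}] ({r["source"]}) {r["text"]}' for r in merged
    )

# Toy exports standing in for real per-platform data downloads.
exports = {
    "calendar": [{"timestamp": "2025-03-02T09:00", "summary": "Dentist appointment"}],
    "shopping": [{"timestamp": "2025-03-01T18:30", "summary": "Bought running shoes"}],
    "messages": [{"timestamp": "2025-03-02T12:15", "summary": "Friend suggested a 10k race"}],
}

context = build_context(exports)

# The combined view is handed to an LLM agent as context; the prompt
# wording here is illustrative, not taken from the paper.
prompt = (
    "You are a personal recommendation agent. Using the user's combined "
    "activity below, suggest one relevant activity for this weekend.\n\n"
    + context
)
```

The point of the sketch is the integration step: no single service in `exports` sees all three records, but the merged context lets the agent connect a purchase on one platform with a message on another.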