How Meta Turned Its Payroll Into a Training Set
The Model Capability Initiative is not a productivity tracker in disguise — it is a data-acquisition pipeline aimed at a very specific technical problem. Current large language models are surprisingly weak at the unglamorous parts of computer use: finding the right dropdown, remembering a keyboard shortcut, clicking precisely inside a modal that opened in an unexpected place. The open web, the substrate most frontier models were trained on, contains almost none of that labeled interaction data. Meta's statement is unusually blunt about this: 'our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus.' MCI is how Meta plans to generate those examples — by instrumenting the computers of tens of thousands of knowledge workers who already use Gmail, VS Code, GitHub, Slack, Atlassian, Metamate, and a long tail of other sanctioned apps every day.
Mechanically, the tool sits on company-issued laptops and captures four signal types on an allowlist of apps: keystrokes, mouse coordinates, click targets, and periodic screenshots. Because each screenshot records the UI state at the moment an action was taken, the stream yields exactly the (observation, action) pairs that agent training pipelines consume. Meta's Chief AI Officer Alexandr Wang runs Superintelligence Labs, and Maher Saba leads the Applied AI Engineering team building autonomous agents; MCI feeds both. The program's rename from 'AI for Work' to 'Agent Transformation Accelerator' is not cosmetic: it reframes the entire corpus of Meta's internal labor as fuel for an agent rollout, with CTO Andrew Bosworth articulating the end-state explicitly: 'the vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve.'
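To make the data shape concrete, here is a minimal sketch of how the four captured signal types could be folded into (observation, action) training pairs. Everything below is illustrative: the `Event` schema, field names, and pairing logic are assumptions for exposition, not Meta's actual pipeline; the allowlist is a subset of the apps the article names.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical allowlist -- a subset of the sanctioned apps named above.
ALLOWLIST = {"Gmail", "VS Code", "GitHub", "Slack"}

@dataclass
class Event:
    t: float       # capture timestamp, seconds
    app: str       # application in focus
    kind: str      # "keystroke" | "mouse" | "click" | "screenshot"
    payload: dict  # e.g. {"key": "s"} or {"x": 412, "y": 88}

def to_training_pairs(events: list[Event]) -> list[tuple[Event, Event]]:
    """Pair each screenshot (the observation of UI state) with the next
    user action, skipping anything from apps outside the allowlist."""
    pairs: list[tuple[Event, Event]] = []
    obs: Optional[Event] = None
    for ev in events:
        if ev.app not in ALLOWLIST:
            continue
        if ev.kind == "screenshot":
            obs = ev                 # latest observed UI state
        elif obs is not None:
            pairs.append((obs, ev))  # one (observation, action) example
            obs = None               # wait for the next screenshot
    return pairs
```

The design point is the pairing itself: raw keystrokes or clicks alone are nearly useless for agent training; it is the adjacency of a screenshot (what the user saw) and the action taken in response that makes the record a supervised example.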



