On January 11, 2026, Wired reported that OpenAI, working with training-data provider Handshake AI, now requires third-party contractors to submit verifiable outputs from their previous and current projects. The policy is intended to improve the quality of training data so that AI systems can automate a wider range of white-collar professional tasks. Contractors must describe the work they performed in prior roles and provide concrete samples of it, including Word documents, PDFs, presentation slides, Excel spreadsheets, graphics, and code repositories.
To limit intellectual property risks, OpenAI instructs contractors to redact any proprietary or personally identifiable information before submission, and suggests using ChatGPT's 'Super Cleaning Tool' to assist with the data sanitization. Intellectual property attorneys caution, however, that this approach leans heavily on contractors' own judgment to correctly identify and strip out sensitive material, a reliance that leaves significant gaps and could expose those involved to legal and ethical trouble. An OpenAI spokesperson declined to comment further.
