Responding to growing concerns about the impact of AI-generated material on this year’s global elections, Microsoft-backed OpenAI said Tuesday it has released a tool that can identify images produced by its DALL-E 3 text-to-image generator, according to Reuters.
In internal tests, the company said the tool correctly identified about 98% of images generated by DALL-E 3, and its performance was largely unaffected by common modifications such as compression, cropping, and saturation changes.
The ChatGPT maker also plans to add tamper-resistant watermarking, which tags digital content such as audio or images with a signal that is difficult to remove.
As part of the effort, OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA), an industry group that includes Google, Microsoft, and Adobe and is developing a standard to help trace the provenance of various types of media.
A fake video of two Bollywood stars criticizing Prime Minister Narendra Modi went viral online amid India’s general election, which began in April.
Deepfakes and other AI-generated content are increasingly being used in elections around the world, including in Indonesia, Pakistan, the US, and India.
OpenAI and Microsoft have also announced a $2 million “societal resilience” fund to advance AI education.