Meta said it is working with industry partners on common technical standards for identifying AI content, including video and audio.
Meta has been at the cutting edge of AI development for more than a decade, and the company says it has been hugely encouraging to witness the explosion of creativity from people using its new generative AI tools, such as the Meta AI image generator, which helps people create pictures with simple text prompts.
Meta has been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Detecting these signals will make it possible for Meta to label AI-generated images that users post to Facebook, Instagram and Threads.
Meta is building this capability now, and in the coming months it will start applying labels in all languages supported by each app. Meta is taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, it expects to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What it learns will inform industry best practices and Meta's own approach going forward.
When photorealistic images are created using the Meta AI feature, Meta does several things to make sure people know AI is involved: it places visible markers on the images and embeds both invisible watermarks and metadata within the image files. Using invisible watermarking and metadata together improves the robustness of these invisible markers and helps other platforms identify them. This is an important part of the responsible approach Meta is taking to building generative AI features.
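Meta has not published the internals of this pipeline, but the metadata half can be illustrated with a short sketch. The IPTC Photo Metadata Standard expresses "created by generative AI" through the DigitalSourceType property with the value trainedAlgorithmicMedia; the sketch below writes that property using the exiftool command-line utility, which must be installed separately. The file name is hypothetical.

```python
# Minimal sketch: tag an image as AI-generated via the IPTC
# DigitalSourceType property. This is an illustration of the kind of
# metadata the standards describe, not Meta's actual pipeline.
import subprocess

# IPTC NewsCode meaning "created by a trained algorithm (generative AI)"
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def tag_as_ai_generated(path: str) -> None:
    """Write the IPTC Extension DigitalSourceType field into the file's XMP."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            "-overwrite_original",
            path,
        ],
        check=True,
    )

if __name__ == "__main__":
    tag_as_ai_generated("generated.jpg")  # hypothetical file name
```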
Since AI-generated content appears across the internet, Meta has been working with other companies in the industry, through forums like the Partnership on AI (PAI), to develop common standards for identifying it. The invisible markers Meta uses for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI's best practices.
Meta is building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so it can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.
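As a rough illustration of the detection side, here is a minimal sketch that scans a file's raw bytes for the IPTC trainedAlgorithmicMedia URI carried in its embedded XMP packet. It is a byte-level heuristic, not Meta's production detector; verifying C2PA manifests, which are cryptographically signed, would require a dedicated library and is omitted here. The file names are hypothetical.

```python
# Minimal sketch: check whether an image file carries the IPTC
# "trained algorithmic media" signal anywhere in its raw bytes
# (the marker normally lives in the embedded XMP packet).
from pathlib import Path

TRAINED_ALGORITHMIC_MEDIA = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the file contains the IPTC AI-generated marker."""
    return TRAINED_ALGORITHMIC_MEDIA in Path(path).read_bytes()

if __name__ == "__main__":
    for name in ("upload1.jpg", "upload2.png"):  # hypothetical uploads
        if Path(name).exists():
            verdict = (
                "label as AI-generated"
                if looks_ai_generated(name)
                else "no IPTC marker found"
            )
            print(f"{name}: {verdict}")
```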
While companies are starting to include signals in their image generators, they haven't started including them in AI tools that generate audio and video at the same scale, so Meta can't yet detect those signals and label this content from other companies. While the industry works towards this capability, Meta is adding a feature for people to disclose when they share AI-generated video or audio so it can add a label to it. Meta will require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and it may apply penalties if they fail to do so. If Meta determines that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, it may add a more prominent label, so people have more information and context.
This approach represents the cutting edge of what's technically possible right now. But it's not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers. So Meta is pursuing a range of options. It is working hard to develop classifiers that can automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, Meta is looking for ways to make it more difficult to remove or alter invisible watermarks. For example, Meta's AI research lab FAIR recently shared research on an invisible watermarking technology it is developing called Stable Signature.
Stable Signature integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models because the watermarking can't simply be disabled.
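Stable Signature itself works by fine-tuning the latent decoder of an image generator so that every output carries a hidden message; that method is beyond a short example, but the underlying idea of marking images inside the generator rather than afterwards can be shown with a deliberately simplistic least-significant-bit scheme. Everything below (the stand-in random-noise "generator", the 8-bit message) is illustrative only, and a real scheme would be far more robust to cropping and compression.

```python
# Toy illustration of in-generation watermarking: the "generator"
# embeds a hidden bit string into pixel LSBs before returning the
# image, so the mark exists in every output rather than being added
# (and strippable) afterwards. Not FAIR's Stable Signature.
import numpy as np

MESSAGE = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit ID

def generate_watermarked(h: int = 64, w: int = 64) -> np.ndarray:
    """Stand-in 'generator': random grayscale image with MESSAGE written
    into the least significant bits of its first few pixels."""
    img = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)
    flat = img.reshape(-1)  # view onto the same buffer
    flat[: MESSAGE.size] = (flat[: MESSAGE.size] & 0xFE) | MESSAGE
    return img

def extract_message(img: np.ndarray) -> np.ndarray:
    """Recover the embedded bits from the pixel LSBs."""
    return img.reshape(-1)[: MESSAGE.size] & 1

if __name__ == "__main__":
    img = generate_watermarked()
    assert np.array_equal(extract_message(img), MESSAGE)
    print("recovered watermark:", extract_message(img))
```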
This work is especially important because this is likely to become an increasingly adversarial space in the years ahead. People and organizations that actively want to deceive others with AI-generated content will look for ways around the safeguards that are put in place to detect it. Across the industry and society more generally, everyone will need to keep looking for ways to stay one step ahead.
In the meantime, it’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.
These are early days for the spread of AI-generated content. As it becomes more common in the years ahead, there will be debates across society about what should and shouldn't be done to identify both synthetic and non-synthetic content. Industry and regulators may move towards ways of authenticating content that hasn't been created using AI as well as content that has. What Meta is setting out today are the steps it thinks are appropriate for content shared on its platforms right now. But it will continue to watch and learn, and it will keep its approach under review as it does. Meta will keep collaborating with its industry peers, and it will remain in dialogue with governments and civil society.
AI-generated content is also eligible to be fact-checked by Meta's independent fact-checking partners, and Meta labels debunked content so people have accurate information when they encounter similar content across the internet.
Meta has been a pioneer in AI development for more than a decade, and it says progress and responsibility can and must go hand in hand. Generative AI tools offer huge opportunities, and Meta believes it is both possible and necessary for these technologies to be developed in a transparent and accountable way. That's why Meta wants to help people know when photorealistic images have been created using AI, and why it is being open about the limits of what's possible. It will continue to learn from how people use its tools in order to improve them, and it will continue to work collaboratively with others through forums like PAI to develop common standards and guardrails.