Tech News & Podcast | Africa

Meta’s Watermarking Feature: How to Create and Manage AI Content

Facebook parent Meta Platforms (META.O) announced on Wednesday that it will improve transparency by adding invisible watermarking to its text-to-image generation product, Imagine with Meta AI, in the coming weeks.

In late September, the social media company launched consumer-facing artificial intelligence (AI) products, including chatbots that produce lifelike images and conversational smart glasses.

Meta’s watermarking feature is a technique for embedding invisible information into images created by open-source generative AI models. The watermark can identify the source, version, or owner of the model that generated the image. It is not visible to the human eye, but it can be detected by algorithms even if the image is edited or modified. Because the watermark is embedded during the generation process rather than added afterwards, it cannot be removed simply by deleting a line of code. The approach is based on a method called Stable Signature, which fine-tunes a small part of the generative model to embed a given watermark for each user.
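The attribution side of this idea can be sketched in a few lines: each user's model variant carries a distinct binary signature, and a detector extracts bits from an image and matches them against a registry of known keys. This is a toy illustration only; the 48-bit payload length, the registry, and the names below are assumptions for the sketch, not Meta's actual implementation.

```python
import numpy as np

KEY_BITS = 48  # illustrative payload length

rng = np.random.default_rng(0)

# Hypothetical registry: each user's fine-tuned model has its own signature.
user_keys = {
    "user_a": rng.integers(0, 2, KEY_BITS),
    "user_b": rng.integers(0, 2, KEY_BITS),
}

def match_user(extracted_bits, keys, threshold=0.9):
    """Attribute an extracted bit string to the closest registered key,
    or to no one if the best bit accuracy falls below the threshold."""
    best_user, best_acc = None, 0.0
    for user, key in keys.items():
        acc = float(np.mean(extracted_bits == key))
        if acc > best_acc:
            best_user, best_acc = user, acc
    return best_user if best_acc >= threshold else None

# Simulate bits extracted from an edited image: a few bits corrupted
# by cropping, compression, and the like.
extracted = user_keys["user_a"].copy()
extracted[:3] ^= 1  # 3 of 48 bits flipped -> 93.75% bit accuracy
```

The threshold is what makes attribution robust: edits that flip a few bits still leave the extracted string far closer to the true key than to any other, while an unwatermarked image matches no key well enough to be attributed.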

In a blog post, Meta stated that the watermark can withstand common image manipulations such as cropping and screenshots.

Spurred by the popularity of ChatGPT, the chatbot developed by Microsoft-backed OpenAI (MSFT.O), companies are now utilising large language models to drive innovation, draw in new investors, and communicate with existing and potential customers through AI-powered solutions.

Meta built Meta AI on a bespoke model based on its powerful Llama 2 large language model, which the company released for commercial use in July.

The firm is testing more than twenty new ways that generative AI can enhance the user experience across its social media platforms, including WhatsApp and Instagram.

The Menlo Park, California-based company is also testing generative AI to enhance search across its many products, and is expanding access to Imagine beyond chats by making it available in the U.S.

Stable Signature: A Watermarking Method for AI Images that Resists Fine-Tuning

Stable Signature embeds a watermark that can be used to trace an image back to the model that generated it, and the watermark cannot be stripped out even by fine-tuning the model.

Fine-tuning is a common practice in AI that adapts foundational models to specific use cases or preferences. For example, you can fine-tune a model with images of your dog and then ask it to generate new images of your dog in different scenarios. Crucially, fine-tuning does not strip the Stable Signature watermark, because the watermark is embedded at the model level rather than added to individual images. Stable Signature works with popular image modeling methods like VQGANs and Stable Diffusion, which use vector quantization and latent diffusion to generate realistic and diverse images. Stable Signature does not modify the generation process of these methods, so it preserves the quality and diversity of the images. It can also be applied to other image modeling methods with some adaptation.
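As a rough illustration of watermarking "at the model level", the toy decoder below hides a payload by adding low-amplitude pseudorandom carrier patterns while the image is rendered, and a blind extractor recovers the bits by correlating against the same secret carriers. This is a classic spread-spectrum sketch under assumed names and parameters, not Stable Signature's actual training procedure (which fine-tunes the latent decoder against a learned extractor network).

```python
import numpy as np

H, W = 64, 64        # toy "image" size
N_BITS = 16          # illustrative payload length
ALPHA = 4.0          # watermark strength: imperceptibility vs. robustness

rng = np.random.default_rng(7)
# Fixed secret carrier patterns, one zero-mean pattern per payload bit.
carriers = rng.standard_normal((N_BITS, H, W))

def decode_with_watermark(latent_output, bits):
    """Toy 'decoder': the rendered image carries the payload as a
    low-amplitude sum of carrier patterns, added during generation."""
    signs = 2 * np.asarray(bits) - 1             # map 0/1 -> -1/+1
    wm = np.tensordot(signs, carriers, axes=1)   # weighted carrier sum
    return latent_output + ALPHA * wm / np.sqrt(H * W)

def extract_bits(image):
    """Blind extraction: correlate the image with each secret carrier
    and read each bit off the sign of the correlation."""
    scores = np.tensordot(carriers, image, axes=([1, 2], [0, 1]))
    return (scores > 0).astype(int)

# A stand-in for the model's unwatermarked output, plus a payload.
content = rng.standard_normal((H, W))
payload = rng.integers(0, 2, N_BITS)
image = decode_with_watermark(content, payload)
recovered = extract_bits(image)
```

Because the carriers are pseudorandom and roughly uncorrelated with image content, the correlation score for each bit is dominated by the embedded sign, which is why extraction works without access to the original unwatermarked image.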

Identifying and Labelling AI-Generated Images

Generative AI is a potent technology that can produce a wide range of realistic visuals from text inputs. Nevertheless, the industry lacks uniform guidelines for labelling and identifying AI-generated material. At Meta, we think that conducting responsible AI research is crucial to improving products and guaranteeing the ethical application of generative AI.

We are thrilled to present our work on Stable Signature, a technique that allows us to identify the owner, version, or source of the model that produced an image by embedding invisible information into it. The information is invisible to the human eye, but algorithms can still detect it even if the image is edited or modified.

The information is not easily removed, since it is integrated during the generation process rather than added afterwards. To encourage cooperation and creativity in this area, we are opening our tools and code to the AI research community.

While the current focus of our work on Stable Signature is images, we intend to eventually extend it to other generative AI modalities. Our approach is compatible with several widely used open-source models, but it has notable limitations: it does not apply to non-latent generative models, so it might not work with some newer architectures. We are committed to investing in this research because we believe we can help shape a responsible, creative future for generative AI.
