For those who are interested in AI, and especially for those who aren’t, leaks suggest that common platforms plan to release new tags identifying which content is AI-generated. On Twitter/X, reverse engineer Alessandro Paluzzi has shared a sneak peek of this, showing “generated by Meta AI” labels within the app.
As AI becomes more commonly used, it can be hard to distinguish what was made by a real human from what was made by a bot. Labels like these will let people make more informed choices and keep platforms transparent with their users. Many companies moving into AI have promised to maintain user trust by applying tags like these to their content.
In a perfect world, AI would be used as an assistive tool to help creators, businesses and everyone in between develop the kind of content they want to create. However, in the wild west that is the current AI movement, things will remain a little rogue until standards are in place. These labels, along with other measures, give us all some guide rails as we walk into this unknown future.
#Instagram is working to label the contents created or modified by #AI in order to be identified more easily 👀 pic.twitter.com/bHvvYuDpQr
— Alessandro Paluzzi (@alex193a) July 30, 2023
Aside from the generative AI labels, Paluzzi has also discovered other AI-powered tools in the works for Instagram, such as a summary feature for direct messages that provides a brief synopsis of what a message contained. Tools like these can reduce cognitive load for users with disabilities, or help people with ADHD by breaking large tasks into smaller ones.
It’s therefore of the utmost importance that any AI-generated content, including summary features or lists, carries proper disclosure. Disclaimers help users understand the content they are interacting with, regardless of whether it was generated or merely influenced by AI. This transparency between users and the platform lets people engage with the content they actually want to see, and it is crucial for maintaining trust: users should be well informed about the technologies they’re using and aware of what is behind the content they encounter.
As Meta continues to innovate and explore the possibilities of generative AI, implementing clear disclaimers will remain critical to ensuring these powerful tools are used responsibly and ethically.