Artificial Intelligence promises to revolutionise the way we use the internet and digital services. But it also has a dark side we need to keep in check.

The arrival of next-generation Artificial Intelligence (AI) services such as ChatGPT and Google Bard provides unprecedented opportunities as well as challenges. Amongst these is the risk that AI will be misused for information manipulation.

In a world dominated by an overabundance of information, separating fact from fiction can be very challenging. The reach of ubiquitous social media platforms such as Facebook, YouTube, TikTok and Instagram means the global information environment is increasingly complex.

Fact-checkers are now confronted with the potential mass production of synthetic ‘facts’ generated by AI software. A number of top tech entrepreneurs and leading AI researchers have gone so far as to call for a pause on the rush to launch large-scale AI services until we understand more about their potential effects and have robust regulation in place.

A European approach to artificial intelligence

While AI will be a great boost to research, science and industry, the Digital Services Act (DSA) and the Digital Markets Act (DMA) aim to protect the fundamental rights of all users and to secure a trustworthy information environment.

The DSA will provide legal protections to ensure the internet remains a fair and open environment both for communication and for trade. The manipulation of information has a huge impact on our daily lives and will continue to do so as digital services develop and grow.

· Learn more about creating a safer digital space with the Digital Services Act.

· Learn more about the European approach to artificial intelligence.

The current generation of AI technology has demonstrated the ability to produce seemingly plausible conversational content and well-written essays on a wide range of topics in just a few seconds.

AI can also be used to fabricate images and videos that are engaging enough to go viral and spread rapidly on social media. Whether depicting a celebrity in an unexpected situation or portraying a political leader doing something provocative, these computer-generated illusions easily distort our perceptions of reality.

The large volume of content flowing through social media platforms makes it very difficult to fact-check everything. Compounding the problem, providing reliable, factual information takes far more time and effort than spreading sensational claims or falsehoods.

If used with malicious intent, AI has the potential to flood the information environment with false narratives generated on an industrial scale, overwhelming the public discourse.

AI can and will be used for good in countless ways, but only if we remain on guard for untruths generated by AI technology.

Things are moving incredibly fast, and more and more fact-checking AIs are in the pipeline. In the meantime, there are some techniques that can help test the authenticity of all types of online content, whether AI-generated or not.

Five ways to check online content

  1. Breathe. Allow your fast-acting emotional response to pass. Take the time to engage your critical thinking skills.
  2. Remember the rule of thumb that, if it’s too good (or bad) to be true, it’s probably not true.
  3. Seek a second source. Cross-reference using a reliable news provider.
  4. When in doubt, use the advanced searches available on most search engines. For example, a reverse image search will reveal whether a picture was posted previously in another context.
  5. Use an AI fact-checking service. One way to catch an AI is to use another AI that looks for tell-tale patterns indicating whether content was AI-generated (a minimal code sketch of this idea follows below). Services such as Deepware can be used to detect ‘deepfake’ videos.
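
For the technically curious, point 5 can be illustrated with a short Python sketch using the open-source Hugging Face transformers library. The detector model named below is one public example, trained to separate human text from GPT-2 output, and is chosen purely for illustration; no detector of this kind is reliable enough for its verdict to be treated as proof.

    # Minimal sketch: flag possibly AI-generated text with an
    # off-the-shelf detector from the Hugging Face Hub.
    # The model name is illustrative; detectors can be wrong in
    # both directions, so treat the score as a hint, not proof.
    from transformers import pipeline

    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    def check_text(text: str) -> None:
        # Print the detector's label ('Real' or 'Fake') and its score.
        result = detector(text, truncation=True)[0]
        print(f"label={result['label']}  score={result['score']:.2f}")

    check_text("The moon landing was staged in a film studio in 1969.")

In practice, such a check is best combined with the habits above: a verdict from one detector is never a substitute for cross-referencing a claim with a reliable source.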

Article source: EEAS