
MEPRA RELEASES EIGHT GUIDELINES FOR USING AI IN THE COMMUNICATIONS INDUSTRY

THE FIRST FRAMEWORK FOR APPLYING ARTIFICIAL INTELLIGENCE IN THE COMMUNICATIONS SECTOR


The Middle East Public Relations Association (MEPRA), the region’s leading body for public relations and communication professionals, has released the industry’s first whitepaper on artificial intelligence (AI) usage guidelines within the communications sector.

Developed by MEPRA’s AI Committee – with ChatGPT helping to inform topline recommendations – the whitepaper sets out eight guiding points that provide a framework for the use of AI by MEPRA’s members and others working in communications in the Middle East.

The guidelines, which took regional and global AI guidance resources into consideration, are intended to be used alongside other guidance, policies and references to help inform the ethical and legal use of the technology in daily content production.

Commenting on the release, Margaret Flanagan, Co-Founder of Tales & Heads and Head of MEPRA’s AI Committee, stated:

“With the aim of optimising the use of AI and protecting the output of the public relations and communications industry in the region, we have developed the first framework that intends to help the industry standardise the way it benefits from the use of AI tools. We wanted to ensure that everyone working in communications in the region is aware of the ethical and legal ramifications of using these technologies, and the responsibilities we have to the organisations we work with and to other stakeholders, including media and the wider public. Given the speed at which AI is evolving, as well as the dynamic regulations that govern its use, we are aware that these guidelines and resources will also need to evolve. MEPRA will continue to provide practical guidance and updates in line with these changes”.

The members of the MEPRA AI Committee are Ibrahim Al Mutawa, Co-Founder and Managing Director, Jummar PR; Tala Abu Taha, Director of PR, Viola Communications; Andrea Gissdal, Independent Communications Consultant; Natasha Hatherall-Shaw, Founder and CEO, TishTash; Henry Jakins, SVP, Head of Brand & Business Marketing, First Abu Dhabi Bank; Professor Dahlia Mahmoud, University of Europe for Applied Sciences; Joseph Nalloor, Discipline Lead, School of Communications, Murdoch University Dubai; Kirsty O’Connor, Regional Director of Innovation, Hill & Knowlton; Neda Shelton, SVP Group Communications, Mubadala; Inka Stever, Professor of Practice at Zayed University, College of Communication and Media Sciences; and Denise Yammine Chouity, Director of Operations at GREY Qatar.

The guidelines released by MEPRA are listed below.

  1. Be honest and open: Always inform people when you’re using AI in PR and communication to strengthen trust.

This means having a conversation with your client or line manager to let them know if and how you are using AI, before you use it. This doesn’t necessarily mean informing them in each instance, but it does mean discussing the parameters of use. If asked directly whether you used AI on a specific project, be honest and transparent.

If you are asked not to use AI for a specific project, don’t. We recommend consulting best-practice guidelines, such as Cambridge University’s, on how best to use text, audio and visual AI tools. It would also be prudent to keep up with relevant legal developments, as proposed legislation continues to further define intellectual property and the fair use of creative content.

  2. Be responsible and credible: It’s essential for our industry to be truthful about the use of AI in content shared with the media.

It’s important that transparency about our use of AI extends to materials shared with media. Media images, videos and audio created using AI should always be labelled accurately, while text content for media should ideally be written wholly by a human and, at the very minimum, have significant human intervention. If not, it should be labelled as AI-generated text.

Concerns around misinformation and disinformation are increasing, trust in the media is decreasing, and AI-generated content has the potential to further undermine that trust. Communications professionals have a responsibility to the media outlets they work with to provide content that does not risk damaging an outlet’s reputation or lessening its credibility among audiences. It is important to have a plan in place to immediately rectify any issue or crisis caused by sharing AI-generated content with media.

  3. Respect privacy: Make sure any AI tools you use follow the rules on keeping people’s information private and on the use of copyrighted content.

It’s worth remembering that the data you give to AI tools – including prompts and files, as well as your own account information – is kept and used by the companies behind those tools as standard practice. Understanding privacy settings and what you need to opt in or out of to keep information safe is essential. As an industry, we handle a great deal of sensitive information for clients and our organisations – including governments and listed companies. To reduce risk, only input data and files that are non-confidential or already in the public domain. Your clients or organisation may also have data security protocols and NDAs relating to the use of AI that you need to adhere to, so make sure you are aware of your responsibilities.


  4. Get it right: Double-check that any content or data AI helps create is accurate before sharing it, so you don’t spread false information.

Always check facts using trusted sources. Reference scientific research, reputable third parties and trusted media. Look for original sources and research findings wherever possible rather than second-hand references. AI has been known to invent facts and figures, and even links to non-existent media stories and research, so don’t assume that the information provided is accurate. It’s important that our industry does not help to propagate misinformation. If your content is in Arabic, it’s also worth remembering that AI tools will be working with a much smaller content base than they have for English, so their output will need more human intervention and editing, as well as additional fact-checking.


  5. Treat everyone fairly: Watch out for unfair biases and cultural sensitivities in AI programs and content, and do your best to make sure everyone feels included and represented.

Machine learning has a history of building on our own implicit human biases and magnifying them, which means that AI-generated content can be misrepresentative or exacerbate stereotypes. Always look at text, images or audio created or complemented by AI with a critical eye. Using thoughtful prompts can help create more inclusive content, but as many of the biases need to be tackled within the existing technologies, it’s up to us to gatekeep what’s created and ensure it accurately represents our clients and our organisations as well as our cultures and communities. Consider whose voices are not represented in the AI-generated content, and whether there are other expert sources that should be included.

  6. Keep it real: Use AI to help your work, but remember that real human creativity and connection are what make communication effective and genuine.

Remember that AI can only draw on existing material and references rather than truly create. While detailed prompts can fine-tune tone and technical outcomes, machines make links and connections in different ways from humans. They can’t see, they can’t smell, they can’t taste, they can’t feel. The best copy and the best images are the ones that stand out and make us think or feel differently. Right now, nothing does that better than creative people.

  7. Think about people: Your use of AI might affect society, so make sure you’re using it in a way that helps people, not just your bottom line. Encourage your team and others in the industry to use AI responsibly and fairly.

The implications of the use of AI are going to be huge and far-reaching. Right now, they are also largely unknown and unforeseen. Whether they bring job losses or job gains, create greater equity or further division will only be known in time. In the meantime, we can ensure our own use of AI technologies remains considered. That means practical things like not billing clients or logging hours at the human-equivalent rate for work that was done by AI. It means using the skills of human editors, copywriters, graphic designers, photographers and videographers where we know the value and veracity they bring are critical (including in our submissions to media). And it means using AI to help us make better decisions, rather than outsourcing those decisions to the technology itself.

  8. Keep learning and improving: Stay on top of how AI is changing PR and communication. Be ready to adjust how you use it to make sure you’re doing the right thing.

Not only is AI itself constantly evolving, but so are the rules and regulations that govern its use across industries and markets. We’re likely to see many more changes in the future, as the technology develops and as we apply its power in new ways. We’re also likely to see increased debate around ownership and copyright – such as the New York Times’ lawsuit against Microsoft and OpenAI, which could have significant implications for media and communications – as well as heightened concerns about AI being used to spread misinformation and disinformation. It’s up to all of us to be aware of what’s happening, and of our own role in ensuring that our use of AI is both ethical and legal.
