Middle East PR Industry Issued With 8 AI-Use Guidelines
In A First For MEPRA
The Abu Dhabi-headquartered Middle East Public Relations Association (MEPRA) has unveiled new guidelines to regulate the use of Artificial Intelligence (AI) within the communications sector in the Middle East region.
The guidelines, a product of MEPRA’s AI Committee and ChatGPT’s expertise, are a valuable resource for PR industry professionals who want to navigate AI technology’s ethical and legal complexities.
Given the rapid advancements in AI and the evolving regulatory landscape, MEPRA stresses that the guidelines are designed to adapt: they will be updated continually to reflect emerging AI trends and regulatory changes. This commitment should reassure PR practitioners in the Middle East that the industry can keep pace with AI advancements.
Transparency emerges as a cornerstone principle within the guidelines, urging practitioners to openly communicate with stakeholders regarding the use of AI in their work. It underscores the importance of clearly labelling AI-generated media content to maintain trust and integrity in communications.
Furthermore, the guidelines stress the responsibility of practitioners to safeguard privacy and adhere to data protection regulations when employing AI tools. They caution against the indiscriminate use of sensitive information and emphasise the need for rigorous fact-checking to mitigate the risks of misinformation.
Addressing concerns about bias and representation, MEPRA advises PR practitioners to critically assess AI-generated content for potential biases and cultural sensitivities. They advocate for inclusive practices that reflect diverse perspectives within communications.
MEPRA also highlights AI’s transformative impact on societal norms and professional practices, urging stakeholders to adopt AI responsibly. They underscore the crucial role of human creativity and judgment, valuing the unique expertise of PR professionals in complementing AI technologies to ensure that authentic and effective communication strategies are maintained.
Below are the eight guidelines newly issued by MEPRA for using AI in the regional PR communications industry. Middle East News 247 editors have edited these guidelines:
1. Be Honest And Open
Always inform people when using AI in PR and communication to strengthen trust. This means talking to your client or line manager to let them know if and how you intend to use AI before you use it. This does not necessarily mean informing them in each instance, but it does mean agreeing the parameters of use.
Be honest and transparent if you use AI on a specific project, and do not use AI on a project if you have been asked not to. We recommend consulting best-practice guidelines, such as Cambridge University’s, on using text, audio, and visual AI tools. It is also prudent to keep up with relevant legal developments, as bills seeking to further define intellectual property and the fair use of creative content are constantly being proposed.
2. Be Responsible And Credible
Our industry must be truthful about using AI in content shared with the media. Transparency about our use of AI must extend to materials shared with media. Media images, videos and audio created using AI should always be labelled accurately.
In contrast, text content for media should ideally be written entirely by a human or, at the very minimum, involve significant human intervention; if not, it should be labelled as AI-generated text. Concerns about misinformation and disinformation are growing while trust in the media is declining, and AI-generated content has the potential to undermine that trust further.
Communications professionals have a responsibility to the media outlets they work with to provide content that does not risk damaging an outlet’s reputation or lessening its credibility among its audiences. It is essential to have a plan to immediately rectify an issue or crisis caused by sharing AI-generated content with the media.
3. Respect Privacy
Make sure any AI tools you use comply with the rules on keeping people’s information private and on the use of copyrighted content. Remember that the data you give to AI tools, including prompts, files, and your account information, is retained and used by the companies behind those tools as standard practice.
Understanding privacy settings and what you need to opt in or out of to keep information safe is essential. As an industry, we handle sensitive data for clients and our organisations, including governments and listed companies.
To reduce risk, only input data and files that are non-confidential or already in the public realm. Your clients or organisation may also have data security protocols and NDAs relating to the use of AI that you must adhere to, so make sure you are aware of your responsibilities.
4. Get It Right
Double-check that any content or data AI helps create is accurate before sharing it so you do not spread false information. Always check facts using trusted sources. Reference scientific research, reputable third parties, and trusted media. Seek original sources and research findings wherever possible rather than second-hand references.
AI has been known to invent facts, figures, and links to non-existent media stories and research, so do not assume the information is accurate. Our industry must not help propagate misinformation. If your content is in Arabic, remember that AI tools draw on a much smaller content base than they do for English, so the output will need more human intervention, editing, and additional fact-checking.
5. Treat Everyone Fairly
Watch out for any unfair biases or cultural sensitivities in AI programs and content, and do your best to ensure that everyone feels included and represented. Machine learning has a history of building on our own implicit human biases and magnifying them, which means that AI-generated content can be misrepresentative or exacerbate stereotypes.
Always look at text, images, or audio created or complemented by AI with a critical eye. Using thoughtful prompts can help create more inclusive content. Still, as many of the biases need to be tackled within the existing technologies, we must gatekeep what is produced and ensure it accurately represents our clients, organisations, cultures, and communities. Consider whose voices are not represented in the AI-generated content and if other expert sources should be included.
6. Keep It Human
Use AI to support your work, but remember that natural human creativity and connection are what make communication effective and genuine. AI can only draw on existing material and references rather than truly create. While detailed prompts can fine-tune tone and technical outcomes, machines make links and connections in different ways from humans.
They cannot see, smell, taste, or feel. The best copy and images make us think or feel differently. Right now, nothing does that better than creative people.
7. Think About People
AI might affect society, so use it to help people, not just your bottom line. Encourage your team and others in the industry to use AI responsibly and moderately. The implications of using AI will be vast and far-reaching; right now, they are also largely unknown and unforeseen.
Only in time will we know whether these technologies cause job losses or gains, create greater equity or further division. In the meantime, we can ensure our use of AI technologies remains considered. That means practical things like not billing clients or logging hours at the human-equivalent rate for work that AI did.
It means using the skills of human editors, copywriters, graphic designers, photographers, and videographers, where we know that the value and integrity this brings are critical (including in our submissions to media). It also means using AI to help us make better decisions rather than outsourcing them to the technology itself.
8. Keep Learning And Improving
Stay on top of how AI changes PR and communication. Be ready to adjust how you use it to ensure you are doing the right thing. Not only is AI itself constantly evolving, but so are the rules and regulations that govern its use across industries and markets.
We are likely to see many more changes as the technology evolves and as we apply its power in new ways. We can also expect increased debate around ownership and copyright, which could have significant implications for media and communications, as well as heightened concerns about AI being used to spread misinformation and disinformation. It is up to all of us to stay aware of what is happening and of our role in ensuring that our use of AI is ethical and legal.
Featured image: MEPRA’s AI-use guidelines are aimed at PR communications professionals in the Middle East region. Credit: Kaitlyn Baker
Last updated 7 months ago by News Desk 2