In this episode of FlashPoint, we speak with Michele Ewing, professor of public relations at Kent State University's School of Media and Journalism, about public relations, ethics and AI.
Below is an excerpt of the conversation between Ewing and Eric Mansfield, assistant vice president of content at Kent State University.
The [public relations] industry you described is one that has always had to be at the front of technology, whether it was the rise of the internet or social media. Is that fair?
Our curriculum is constantly evolving because as technology changes, it changes the way we communicate with audiences. It's important that we understand how to use that technology in a responsible way and use it to better communicate with all of those audiences.
You talk about ethics, and we're going to talk about the ethics of AI, but ethics have been part of your industry since long before AI got here, correct?
Absolutely. We have an ethics course that all our students are required to take, but we intentionally integrate ethics into every one of our classes. In a media relations class, we talk about how to ethically work with journalists. In our campaigns class, we examine ethical decisions when working with a client. The largest trade organization in public relations is the Public Relations Society of America. I sat on the Board of Ethics and Professional Standards for six years, which is designed to monitor the industry and promote ethical practice. You can't build relationships without trust, and ethics plays a key role in building that trust.
How does the public relations industry react as AI starts to grow?
You begin by asking how AI tools can help us do our jobs. Public relations uses AI in several ways. One is gaining audience insights: learning about the audiences we communicate with. Another key use is brainstorming. In my research with public relations professionals, they call AI their "brainstorm buddy," a place to start ideas. It can help with content creation, personalization and planning programs. But it's a tool that requires human oversight.
One of the key ethical challenges is misinformation. AI has improved, but it can still make up information or sources, often called hallucination. Everything pulled from AI needs to be checked. Another concern is bias. AI scrapes the internet, which includes false and biased information. You don't always know if the data represents the audience you're working with or if it introduces stereotypes. Understanding and checking sources is one of the most important ethical considerations.
AI creates new opportunities, but it sounds like there should be guardrails.
Absolutely, and safeguards. I'm working on a study with Professor Stefanie Moore, for which we interviewed 23 PR practitioners and educators. A key theme is that communicators and critical thinkers are still essential. AI is not going to replace that. Students sometimes worry AI will replace them, but we still need human oversight for emotional context, fact-checking and ethical judgment. Human oversight ensures information is factual, unbiased and not drawn from protected or unauthorized data.
Because it鈥檚 a writing-intensive career, do you teach students how to use AI ethically for writing?
In some classes, there may be reasons not to use AI, but in others, students need to understand how to use it responsibly. I tell students to start with their own ideas. AI can help brainstorm or enhance work, but it shouldn't replace original thinking. I also ask students to share what prompt and what tool they used, and to reflect on how it helped them. Transparency is important. Plagiarism is a concern, but teaching responsible use makes students more likely to use AI ethically. Being open allows us to talk about what is and isn't appropriate.
How much push is there for transparency about AI use in the industry?
There are mixed opinions. Some practitioners compare it to using a word processor, something that is simply assumed. Others believe organizations should clearly state how AI is used and how responsible use is ensured. Transparency builds trust and encourages responsible use.
AI and ethics also play a role in internal communications, correct?
Yes. Internal communications is one of my research areas. Organizations need AI policies because employees are already using it. Clear guidelines help employees understand appropriate use and prevent sharing confidential information. Public relations helps create a culture of responsible AI use by raising ethical questions and ensuring transparency. AI can also help personalize internal communications, making messages more relevant and effective. Research shows AI will help with mundane tasks like list creation and monitoring, freeing professionals for higher-level thinking. Used responsibly, AI can make us better communicators.
PR professionals set the example for AI use within companies. Is that fair?
Yes. Research shows public relations plays a critical role in AI training. I recently attended a workshop with the Associated Press where a reporter discussed AI use in social work. Social workers relied on AI-generated criteria without understanding how those criteria were created. That lack of understanding created ethical risks. PR professionals help ensure employees ask questions and are involved in AI development, which reduces bias and improves effectiveness.
How does someone know they鈥檙e in a gray area, and what should they do?
There are many sets of AI ethics guidelines, and I've helped co-write some for public relations. Students should familiarize themselves with these guidelines. Ethics is about conversation and perspective. Organizations should also have internal AI policies. Using both industry guidance and internal policy helps guide ethical decisions.
Why should people take ethics and AI seriously?
We need to avoid sharing misinformation, reinforcing bias, or unfairly targeting audiences. Human oversight and expertise are essential. AI is a tool, but people remain responsible for how it is used.
