Fri., Sep. 5, 2025
By Simon Erskine Locke
I recently met a leader in the communications industry, and as we were chatting over coffee, he shared that he’s been hearing the phrase “two things can be true at the same time” a lot recently. This is also something I’ve been saying for a couple of years in discussions around politics, AI, and a variety of other issues.
In a polarized world in which opinions are shared as fact, data and statistics are made to fit ideologies, and the truth doesn’t seem to matter, expressing the view that two seemingly contradictory perspectives can both be true is a pragmatic way to find common ground. It recognizes that there are different ways to look at the same issues.
While making the effort to recognize different perspectives is healthy, ideologues (on either side of the political spectrum) are rarely interested in admitting that there may be another side to an argument. When you are devoted to a particular position, the idea of an alternate version — or even the acknowledgement that there may be grey between black and white — creates cognitive dissonance.
Why bring this up? In part, because many of the discussions around AI seem strikingly binary.
For many, AI is still the shiny new tool that will write great emails, automate the lengthy process of engaging with journalists, or lead to faster and easier content generation. For others, AI will kill jobs, dumb down the industry, or lead us to an existential doomsday in which the rise of content leads to the fall of engagement.
As someone who has spent significant time with AI companies, building tools, working with various LLMs, and discussing the impact of AI with lawmakers, I firmly believe that there are reasons to be optimistic and pessimistic. It’s not all black and white.
One way to frame the discussion of AI is to think of it like electricity. Electricity is key to powering the economy and it drives machines that do a lot of different things. Some of those are good. Some are not. Electricity gives us light, but it can also kill us.
AI, like electricity, is not intrinsically good or bad. It’s what we do with it that matters. As communicators, we have agency. We decide which choices will shape the future of the industry. We are not powerless.
We are responsible for making decisions about how AI is employed. And, consequently, if we get this wrong, shame on us. If communicators ultimately put the industry out of business by automating the engagement process with journalists, mass producing content to game LLM algorithms, and delegating thinking to chatbots — rather than helping the next generation of communicators hone their writing, editing, fact checking, and critical thinking skills — that will be on us.
Equally, if we don’t leverage AI, we will miss an opportunity. AI can help streamline workflows and its access to the vast body of knowledge on the internet can lead to smarter, more informed engagement with reporters and impactful content.
A key takeaway from conversations with AI startups is that they are now able to do things that were simply not possible two years ago. One is making the restaurant booking process more efficient, leading to greater longevity of the businesses it works with, which keeps staff employed. Another company’s voice technology is enabling local government to serve constituents at any time and in any language.
As with every other generational technology shift, some jobs will disappear, and others will be created. Communicators need to avoid both Panglossian optimism and the trap of seeing AI as the end of days.
Finding the right use cases and effectively implementing the technology will be essential. The customer service line of one major financial institution announces, “We are using AI to deliver exceptional customer service,” only to require callers to repeat the same basic information three times. This underscores the distance between AI’s potential and the imperfect experience most of us encounter every day.
Pragmatic agency and corporate communications leaders will continue to experiment and invest time in understanding what is now possible with AI. They will need to implement tools selectively, while carefully considering the impact of their decisions on the industry in the years to come.
At this stage, there is an element of the blind leading the blind with AI. Startups are not omniscient. Communicators looking at applications as a magic bullet are going to be sorely disappointed. We are already seeing questions about the returns on the rush of gold into AI, significant gaps between the vision and the experience, and the dark side of the technology in areas such as rising fraud and malicious deepfakes. As I have written previously, AI is creating new problems to solve, and it is a driving force behind new solutions, including content provenance authentication.
Just because you can do something doesn’t mean you should, at least not without careful consideration of use cases, consequences and implementation. AI has enormous potential, but it also brings a whole new set of challenges and, potentially, existential risks. The idea that these two seemingly opposite things can both be true underscores the weight of responsibility we have to get this right.
***
Simon Erskine Locke is founder & CEO of CommunicationsMatch™ and cofounder & CEO of Tauth.io, which provides trusted content authentication based on C2PA standards. He is a former head of communications functions at Prudential Financial, Morgan Stanley and Deutsche Bank, and founder of communications consultancies.
Category: Artificial intelligence
More Artificial intelligence posts from O'Dwyer's:
• Companies Lag on Setting Up AI Policies (Wed., Aug. 20, 2025)
• AI is Creative Poison (Tue., Aug. 5, 2025)
• Why Employees Are Pushing Back on AI, and What to Do About It (Fri., Jul. 25, 2025)