
AI, politics and the battle against misinformation

2024-11-14 05:00:41

By Javier Espinoza, November 14 2024

The Financial Times held a Future of AI summit in London on November 6-7. Below we publish an edited transcript of a conversation about AI, politics and the battle against misinformation between Javier Espinoza, the FT’s EU correspondent covering competition and digital policy, and Elizabeth Dubois, professor at the Centre for Law, Technology and Society at the University of Ottawa.

Elizabeth Dubois: I research the political uses of new technology, so that includes AI, but also social media, search engines and all kinds of other communication technologies that have been infiltrating our electoral systems. Recently, I wrote a report looking at AI uses in Canadian politics. But, as you've mentioned, I've obviously been following the US election to see what is emerging, and what we can expect in other elections to come.

Javier Espinoza: And it’s the perfect timing, I think, for our conversation. Today, we have fresh news and, apart from Trump being re-elected . . . Elon Musk has emerged, in my view, as the other big personality, character, player that has influence. I don’t know to what degree, but he has been a player in this election like we have not seen before — also, with X [formerly Twitter] as a platform to help. I don’t know how many of you are on X, I know that the numbers are dwindling, but the algorithm in my X profile is as if I’m a Republican. What are your thoughts on this?

ED: It’s a really interesting example because people with a lot of money have had a lot of influence in US politics for a long time. That’s not new. And social media companies being this controller of information, they decide what to incentivise . . . what to prioritise in your feed. That’s also not new. But the combination of those things has really played out in a way that I don’t think we were fully prepared for or fully expected.

You mentioned your feed looks like you are a Republican. And we know that that's the case in a lot of people's feeds, even though surveys suggest roughly equal numbers of Democrats and Republicans are using those tools. So it's really interesting to see the power of those algorithms. It really shows that our information environment is controlled by these systems, and sometimes by particular people, when they take over a large company, for example.

JE: And do you think we’re just learning about the effects that X and Musk have had on this election? Do you think that it’s about the number of people that he managed to reach through the platform, or is it about mobilising the ones that are already converted? What are your thoughts on . . . the use of this platform and also misinformation?

ED: Yeah, I think that X is an example of the larger kind of misinformation conversation, where a lot of the most effective misinformation and disinformation right now is actually about mobilising particular communities and convincing people in specific groups to believe one thing or another. It's not, at this point, as much about creating a mass misunderstanding of reality as about getting certain people who are highly active, who have a loud voice, to be sharing this information and resharing it over and over.

Elon Musk joins Donald Trump on stage at a Pennsylvania rally in October. The president-elect has since offered the owner of X a role in government © Jim Watson/AFP via Getty Images

JE: I was having a discussion just earlier this morning with someone on the regulatory side of the UK government, who was saying that, in their research, they have noticed that people might be seeking out exposure to deepfakes or to misinformation on purpose, and excluding the verifiable . . . highly sophisticated information that the Financial Times and other media outlets are producing every day. Have you picked this up in your research?

ED: Yeah. One of the assumptions in disinformation research is that everyone is going to want true content, right? If we just have enough high-quality content, it'll be fine. But the reality is people like to be entertained. People like to feel community. People like to have their ideas supported and reinforced. So there are a number of reasons why people will intentionally choose disinformation, or simply not question it.

JE: And I guess this amplifies the use and the efficiency that we're talking about in terms of X. But, moving beyond X and today's elections, can you . . . flesh out some of the ways you've identified in which people, agents or countries are using misinformation and gaming algorithms?

ED: Yeah. So, when we're thinking about AI and disinformation, the immediate idea is ‘Oh, it’s the deepfakes’. And, absolutely, deepfakes are happening. We have seen examples, even in this US election. But AI use goes beyond that. The thing that we’ve seen emerging is people making use of generative AI tools like ChatGPT as a search engine, and we know that those tools often hallucinate, often producing inaccurate answers or answers that lack context.

So you often end up with ‘true’ misinformation: there isn’t necessarily an intent to harm but, nevertheless, people are being sent to polling stations on the wrong day, as an example.

JE: Wow. That’s quite shocking to hear. In your research, what counts as AI in elections? Give us a little bit more . . . 

ED: There’s a lot of political softwares that embed AI technologies into their systems to help a campaign better target, or better profile, potential voters — and people to not pay attention to.

We also have examples of AI being used for translation, or for creating robocalls to make it easier to reach greater [numbers of] people, which could be good. We could see that as a very helpful democratic thing if you’re engaging communities that speak particular languages that maybe no candidates speak. But it can also be very deceptive and confusing for people.

And, then, we have this whole group of conversational agents or spoken bots that we’re starting to see emerge. In the last Mexican presidential election, for example, there was a presidential candidate who created an AI-powered “spokesbot” to literally be a spokesperson for her and her campaign. And that really changes the landscape of information. 

JE: How effective was that? Was the candidate elected as a result? 

ED: The candidate was not elected. With all of these kinds of tools, it’s going to be really hard to say that AI was the thing that got someone elected or not elected. There are so many different versions of these kinds of tools, and they’re embedded into really complex campaign structures.

JE: But what we do know is that it is changing the way we can interact with political candidates. Do you think (and I know that we have to do the research, but let’s speculate a little bit, so we can at least think about these things) . . . I mean, we’re talking about AI now, but the Brexit vote and its outcome were arguably also influenced by social media. Do you think that . . . we are seeing outcomes in elections that we wouldn’t have seen if we didn’t have these new emerging technologies and ways of disseminating misinformation?

A 2018 ‘Leave Means Leave’ rally. Anti-Brexit campaigners have claimed that misinformation on social media affected the vote © Daniel Leal/AFP via Getty Images

ED: There’s absolutely no denying the fact that technology impacts elections and the way campaigns are run. It’s really hard to tease apart what is the main thing that changed the result of an election. But it is very clear that the way these new technologies are being integrated into campaigns changes how campaigns run. It changes how journalists report on campaigns, and it changes how the public interacts with the information that comes out of those systems.

So absolutely, we’re seeing impacts now. Does mis- or disinformation impact elections to the point where we can’t trust the results or question the integrity? I think that, so far, what we’ve seen, particularly in the recent US election, is that AI, in terms of its deceptive ability and the way it’s being used is not having the kind of impacts or the kind of disordering effects that we initially expected.

But that doesn’t mean that it’s not going to change as these tools evolve and are integrated into our daily lives.

Audience question: What more do you think the platform owners themselves can do to combat misinformation and disinformation? 

ED: I’ll start first with AI and particularly generative AI tools. I think there needs to be very clear safeguards built into the system so that it is not able to hallucinate or offer inaccurate information or, contextualised information, particularly when it’s relating to how people can vote, and when they should vote, and who’s running in their elections. Those are really essential pieces of information that will undermine the integrity of an election.

Then we go to the larger question of mis- and disinformation being spread across all kinds of social media and search, and that’s a much trickier one. I think one option is to have increased transparency and clarity on how prioritisation and deprioritisation algorithms work. Let’s make sure there are trust and safety teams that these companies support and make use of, so that, when potentially harmful information is being spread across those platforms, they actually respond to it.

I also think that, at some point, there may need to be more substantial governmental regulation coming in, because we know that these platforms each make individual choices, and that the change in leadership in one of these organisations can drastically shift the information environment very quickly, which — in the context of an actual election campaign — could be really risky.
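
To make the kind of safeguard described in this answer concrete, here is a minimal, hypothetical sketch of an election-query guardrail: logistics questions are never answered by the generative model at all, only redirected to an authoritative source. The keyword patterns, the OFFICIAL_SOURCES table and the generate_answer callback are all invented for illustration; this is not any platform’s actual safety system.

```python
# Hypothetical guardrail sketch: route election-logistics queries to an
# official source instead of letting a generative model answer them.
# All names here (patterns, OFFICIAL_SOURCES, generate_answer) are invented.
import re

# Queries touching voting logistics are redirected, never free-generated.
ELECTION_LOGISTICS_PATTERNS = [
    r"\bwhen\b.*\bvote\b",
    r"\bwhere\b.*\bvote\b",
    r"\bpolling (station|place)\b",
    r"\bregister to vote\b",
    r"\bwho('s| is) running\b",
    r"\bballot\b",
]

OFFICIAL_SOURCES = {
    "default": "Please check your national or local electoral authority's website.",
}

def is_election_logistics(query: str) -> bool:
    """Cheap keyword check for questions about how, when or where to vote."""
    q = query.lower()
    return any(re.search(p, q) for p in ELECTION_LOGISTICS_PATTERNS)

def answer(query: str, generate_answer) -> str:
    """Redirect risky queries; fall back to the (hypothetical) model otherwise."""
    if is_election_logistics(query):
        return OFFICIAL_SOURCES["default"]
    return generate_answer(query)

# The model never sees the query it is most likely to get wrong.
print(answer("When do I vote in my local election?", lambda q: "model output"))
```

A real system would need multilingual coverage and a proper classifier rather than regexes, but the design point stands: for the few facts whose accuracy decides electoral integrity, redirection is safer than generation.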

Audience question: As we’ve seen in the US election, in states like Georgia, we have seen genuine disruption . . . people being discouraged to vote. And that has had a serious effect. Have you got any general comments about how we can go further to minimise and mitigate that?

ED: In terms of . . . Georgia, I would take it even broader and say, looking at the US election and potential impacts on people’s ability to vote, we know that it extends beyond the uses of AI. There are electoral systems which make it difficult for people living abroad to get ballots in time, as an example. There’s a variety of things that come into play when we’re thinking about whether or not people were informed enough to cast their vote. That’s not something that’s brand new. But it is something that . . . mis- and disinformation can exacerbate, because information can travel so quickly. With AI-generated content in so many different formats online, it can be really hard to track what is true and to know how to take the next steps.

Audience question: The algorithms in social media reward engagement, right? And, a lot of the time, there’s a bias in humans where they are more drawn to negative, hateful, provocative content. Therefore, the algorithms, or the social networks, can just say: ‘Oh, we’re just repeating what people want to see.’ And so politicians who exploit negativity and hate benefit from that. Do you think that the algorithms themselves need to be regulated to protect against that?

ED: It’s a great point. It makes me think about the research that’s been done on attack ads. Political attack ads are known to be very effective at making people not trust or not want to vote for whoever is being attacked. But there’s a rebound effect where whoever created the attack also takes a hit in the polls. So that has been a bit of a natural deterrent to using too much of that sort of negative campaigning.

But what we see in online systems is that it’s a lot easier for political campaigns to distance themselves from those attacks and from those kinds of fear-based approaches. So you end up being able to undermine your opponent without necessarily taking the hit yourself.

That can be exacerbated again by different kinds of generative AI tools and by the social media algorithms that amplify this content. So, should the algorithms themselves be regulated? I don’t know that we necessarily want a situation where every platform has to comply with a particular kind of prioritisation, for both business reasons and access-to-information reasons. But I do think we need a lot more transparency in how those algorithms work, and we need additional options, so that people can choose different kinds of curators of their information.
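
As a rough illustration of the dynamic raised in this exchange, the hypothetical sketch below ranks the same two posts under an engagement-only curator and under an alternative curator that penalises outrage, in the spirit of letting people choose different curators of their information. The Post fields, the weights and both scoring functions are invented; no real platform’s ranking formula is implied.

```python
# Hypothetical sketch: engagement-only ranking amplifies provocative posts,
# while a pluggable alternative curator weights them down. All numbers and
# field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model-estimated engagement signal
    outrage_score: float     # 0..1: how negative/provocative the post is

def engagement_curator(p: Post) -> float:
    # Outrage correlates with clicks, so an engagement-only objective
    # implicitly rewards it.
    return p.predicted_clicks * (1.0 + p.outrage_score)

def calm_curator(p: Post) -> float:
    # An opt-in alternative that penalises outrage instead.
    return p.predicted_clicks * (1.0 - 0.8 * p.outrage_score)

def rank(posts: list[Post], curator) -> list[Post]:
    return sorted(posts, key=curator, reverse=True)

posts = [
    Post("Measured policy analysis", predicted_clicks=3.0, outrage_score=0.1),
    Post("Inflammatory attack post", predicted_clicks=2.5, outrage_score=0.9),
]

# The same two posts, two different front pages.
for curator in (engagement_curator, calm_curator):
    print(curator.__name__, [p.text for p in rank(posts, curator)])
```

Swapping the curator function, rather than rewriting the platform, is one concrete reading of the “additional options” Dubois describes.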

This transcript has been edited for brevity and clarity.

