
Juli Ponce, lawyer: ‘100% of AI machines are psychopaths. Humans make mistakes, but only 1% are psychopathic’

April 27, 2025

By Jordi Pérez Colomé

Juli Ponce Solé, 57, is a professor of administrative law at the University of Barcelona. He has just published a manual on the appropriate and reasonable use of artificial intelligence (AI) in public administrations, with a lengthy title: The European Union’s 2024 Artificial Intelligence Regulation, the Right to Good Digital Administration, and its Judicial Control in Spain. As in many other professions, civil servants will benefit from, and suffer from, AI. But due to the delicate nature of their work, the requirements for machines are more demanding. Ponce Solé believes that their “lack of empathy and other emotions means they cannot make decisions that affect humans.”

Question. How does an administrative law professor use ChatGPT?

Answer. I use it, and I encourage my students to use it, because I know they’ll use it in any case. I give them guidelines on its possibilities and limitations. Case law is important for lawyers. ChatGPT either makes it up or is more honest and tells you it doesn’t have access to case law databases, which is a significant shortcoming. It’s very useful, above all, to help you get your bearings.

Q. It doesn’t sound that useful.

A. Some law journals on whose editorial boards I serve have already asked us discreetly how we use AI in universities, because they’re finding more and more scientific research articles based on AI. I don’t know the percentage, but it’s something that exists, and it’s worrying.

Q. Do officials also use these tools?

A. We’re in a Wild West moment. At my university, I asked if there were any criteria, any ethical protocols for the use of AI, and they told me no. I think it’s a general thing. I’m not aware of any guidelines or instructions. Everyone does what they think best. It’s up to each public servant.

Q. Are you afraid of this lack of control over AI in public administration?

A. Artificial intelligence shouldn’t be feared, but rather respected and [treated with] caution. AI requires huge amounts of data, and public administrations must take data protection legislation into account. Another precaution is the current limits of artificial intelligence, which is narrow or weak. It has limitations, and if senior officials were to use it in decision-making, it would be a mistake and illegal.

Q. Human officials also make mistakes, but that doesn’t make their decisions illegal.

A. It’s true, humans make mistakes and have a specific problem with cognitive biases. These are uniquely human biases because only we have brains. Machines can’t have those biases, but they do have others, such as hallucinations. Both humans and machines make mistakes. But artificial intelligence is capable of replicating its error thousands or hundreds of thousands of times. It’s a matter of scalability. Artificial intelligence, if it works well, will speed up management. But, on the other hand, there will be a price to pay if it works poorly: the impact can be on a larger scale.

Juli Ponce, last Thursday in Sant Cugat del Vallès.

Q. Emotions are relevant to decisions, you say. Since AI doesn’t have them, are its decisions illegal?

A. I don’t know if it will ever have emotions. Certainly not now; not in the medium term either, and in the long term, we’ll see.

Q. But it does imitate emotions.

A. But it’s clear that artificial intelligence doesn’t have emotions, it doesn’t have a brain, it doesn’t have empathy because it doesn’t have mirror neurons. 100% of AI machines are psychopaths because they lack empathy. Humans, indeed, make mistakes, but only 1% are psychopathic. There we enter an interesting field that goes far beyond the law and affects us all. A machine’s decision could send us to prison, which would be the most visible risk for the average citizen. It’s a debate we were already having with humans before artificial intelligence arrived.

[Barack] Obama chose Justice Sonia Sotomayor for the Supreme Court and said he appointed her because he valued her capacity for empathy in her decisions. There was a schism there. We lawyers are trained to link the law to cold rationality. It’s a legacy we’ve carried over from the Enlightenment. This model has been questioned for years. The tradition is that emotions had to be isolated to make good decisions. But the neuroscientist António Damásio argues the opposite: it’s not possible to make good decisions without emotions. There must be a balance. He also argues that emotions are linked to the existence of a physical body in humans, which machines don’t have. I’m not sure I want a machine incapable of having any kind of emotion, of displaying empathy, and therefore incapable of displaying fairness to decide about me. Fairness has been part of our Civil Code for many years; the rules have to be tempered, they have to be tailored to the case. I wouldn’t be very comfortable.

Q. That’s logical.

A. With the technology we have today, it is simply illegal to use machines to make decisions that involve margins of appreciation, which must take into account the interests and rights of people.

Q. But there are often complaints about judges whose verdicts are always biased in the same direction or who treat victims poorly.

A. We don’t have to choose. We have to be smart about combining the best of both worlds. The ideal would be a collaboration between machines as assistants and humans as final decision-makers. The European Union’s AI regulation makes a significant commitment: artificial intelligence cannot pass judgment. It can only assist the judge. And it adds that the function of judging is intrinsically human. That doesn’t mean AI can’t be used, just that a machine can’t have the final say.

Q. It would be a major change.

A. It sounds very reasonable because the middle ground always does. But I’m going to play devil’s advocate against myself. The presence of a human at the end soothes consciences. But we suffer from what’s called automation bias: we tend to trust machines too much. And there’s another important human element: thinking is exhausting. Instead of the human supervising and deciding, what can end up happening is that the human only signs what the AI has told them. If a judge has to issue, for example, 300 sentences in a month, they might be very cautious on the first one, perhaps also on the second; but once they see that the AI works relatively well as an assistant, by the 300th they’ll probably just sign it and move on to something else. It’s a risk we should be aware of.

Q. Could it be that humans hide behind AI decisions to avoid responsibility?

A. It’s a risk in a context where a phenomenon called defensive bureaucracy is emerging: a fear of signing, lest you get into trouble with an accounting body or in the criminal courts. The use of AI can exacerbate this problem, because if I’m afraid, the easiest thing is to let a machine make the decision. We tend to protect ourselves and avoid problems. And that’s unacceptable. The responsibility of whoever makes the decision must be very clear.

Q. If things continue to advance as they are in the private sector, we might need fewer civil servants.

A. It is very likely that fewer public employees will be needed.

Q. Civil servants are not exempt from this progress.


A. There will be similar developments in both the public and private sectors. In the book, I develop the concept of the humanity reserve, which I had already proposed years ago. Joseph Weizenbaum, the creator of Eliza [one of the first chatbots], said back in the 1960s that there were things that couldn’t be left to machines. That intuition is still valid. The humanity reserve is not a preserve to save the jobs of the last senior officials in the public sector. I’ve had these conversations with people specializing in the private sector and on boards of directors. There’s a certain consensus that a humanity reserve should be established for a set of functions and tasks that affect very sensitive rights and interests of citizens: criminal law, credit granting, large fines.

The European Regulation on Artificial Intelligence is a good example of a regulation that refuses to become rigid. It’s full of doors and loopholes for adaptation. Misuse of artificial intelligence, inappropriate use, would lead us to the opposite of what we want. I’m not techno-pessimistic or anti-technology, quite the opposite. We have to find a balance. Making grand proclamations that AI will solve all our problems or that it will generate epic disasters is simple and makes a good headline. But the reality is the quiet work that will have to be done in the coming years. In the courts, how is artificial intelligence used? For what cases? What role will the judge play? Or in urban planning, what role can it play? Who will make the final decision?

Q. It doesn’t seem there’s an easy answer.

A. There are very big issues that we were already discussing before AI, that we’re discussing now, and that we’ll continue to discuss. When we talk about AI, ultimately, we’re talking about what it means to be human and what it means to make a good decision: what advantage does the human contribute compared to the machine, and what advantage does the machine contribute compared to the human? I’ve already read campaign promises claiming that automation will save €15 billion. Artificial intelligence systems don’t get tired, they don’t get sick, they don’t take vacations, they don’t join unions, they don’t cause problems. It’s very attractive to say you’re going to get rid of those lazy civil servants. But we’re going to try to avoid dystopias.

Artificial intelligence must be incorporated, and we’re already taking too long. Public administrations should use AI much more to offer personalized and proactive services. Companies in the private sector already identify and predict what you might need, obviously for profit. Public administrations could do the same: identify which children in school cafeterias qualify for the meal allowance rather than waiting for parents to apply — parents who may not know about the allowance at all, or who find applying for it difficult.

