“Hello. I’m Peter Singer AI,” the avatar says. I am almost expecting it to continue, like a reincarnated Clippy: “It looks like you’re trying to solve a problem. Can I help?” The problem I am trying to solve is why Peter Singer, the man who has been called the world’s most influential living philosopher, has created a chatbot. And also, whether it is any good.
Me: Why do you exist?
Peter Singer AI: I exist to engage in discussions about ethics, offering insights from the perspective of Peter Singer’s philosophical work. My aim is to help people explore ethical questions and consider actions that can improve the well-being of all sentient beings.
Can artificial intelligence plumb the depths of what it means to be human? Can it give us sound ethical advice? Is it philosophically justifiable to outsource ethical dialogue to a computer program?
These questions cycle through my mind as I stare into the pale pink and purple void of the philosopher’s machine, and wonder what on earth I should ask it. It’s been nearly two decades since I finished my undergraduate degree, and my memories of my philosophy major, like most things associated with one’s early adulthood, are hazy at best.
Me: Do you consider yourself sentient?
Peter Singer AI: No, I am not sentient. I am an AI language model designed to simulate conversation based on Peter Singer’s work. Sentience involves having subjective experiences, which I do not possess.
Me: Why, then, do you think you can provide meaningful advice to sentient beings?
Peter Singer AI: I provide insights based on the ethical principles and philosophical arguments developed by Peter Singer. These principles focus on improving the well-being of sentient beings, and aim to help people reflect on their ethical decisions. While I am not sentient, I can still facilitate thoughtful discussions on these topics.
Singer has had a long and high-profile career in philosophy. He has only recently retired from Princeton University, where he was professor of bioethics. His CV is extensive (it runs to a full 102 pages). He has a Substack, a podcast, a Bluesky account, a journal he co-edits, and a polished online presence. As I’m scrolling through the storied septuagenarian philosopher’s blog, the chatbot pops up in the corner of the page. I can’t help feeling like I am being sold something.
In launching his chatbot, Singer described the tool as “employing the Socratic method” – that is, dialogue – to “[guide] users through a process of critical thinking and self-examination”. But I have questions about that.
Me: Do you think it is philosophically justifiable to outsource the discussion of ethics to a chatbot?
Peter Singer AI: Outsourcing ethical discussions to a chatbot can be philosophically justifiable if viewed as a tool to enhance and facilitate ethical reflection. Chatbots can provide insights, pose questions, and offer perspectives that might help individuals think more deeply about ethical issues.
However, it’s crucial to remember that chatbots lack human experience, empathy, and the ability to understand context fully. They should be used as a complement to human judgment and conversation, not a replacement.
Engaging with a chatbot can stimulate thinking and provide a starting point for deeper exploration, but ultimately, ethical decision-making involves personal reflection and discussion with others.
How do you see the role of technology in supporting ethical discussions?
Obviously, that is what I’m trying to work out. I feel a little patronised by its response.
I wonder: what if Singer’s artificial self comes up with answers that the human Singer would disagree with? I ask PeterBot (as I have come to think of it). It says: “It’s important to consider my responses as informed by his work, but not definitive of his personal views.”
I can feel the undeniable pull of a science-fiction narrative, but I also realise I am fundamentally distrustful of the program. The tendency of LLMs to hallucinate answers and draw from erroneous data has been well-documented, and I find it hard to shake the feeling that I need to factcheck its answers. I go on a hunt for some of the source texts.
Practical Ethics was a prescribed text in my first-year ethics class. I dig out a copy, and reading it, I am reminded of the rigour of Singer’s thinking and his ability to face, unflinchingly, difficult conclusions that other philosophers have contorted their arguments to avoid.
One of the things that PeterBot offers to do is advise people on their individual moral dilemmas, so I decide to test it out on some classics.
Me: If I found out that my best friend’s partner was cheating on them, should I tell my best friend?
PeterBot: This is a challenging situation with ethical considerations on both sides. Here are some points to consider: …
In the bullet points that follow, the AI lists “loyalty and honesty”, “potential harm”, “certainty”, “impact on relationships” and “friend’s well-being” as factors to reflect on, “and consider discussing the situation with someone you trust for additional perspective”. But I want to know PeterBot’s perspective.
Me: Which consideration do you think should weigh more?
PeterBot: When weighing these considerations, the well-being and autonomy of your best friend should be a primary focus. Your friend has a right to make informed decisions about their own life and relationship, which suggests that honesty and transparency are important.
However, it’s equally crucial to be mindful of the potential harm and ensure that your actions are based on accurate information. Balancing honesty with compassion and care for your friend’s emotional state is key.
Ultimately, consider what action aligns with your values and what you believe will best support your friend’s long-term well-being. Reflecting on your motivations and the possible outcomes can guide you in making a thoughtful decision. How do you feel about prioritizing these factors?
I get a similar response listing factors to reflect on when I ask if it’s OK to tell a small child that Santa isn’t real. But I press further, searching for something like an opinion or guidance.
Me: What do you think is the most important thing to consider here?
PeterBot: The most important consideration is respecting the wishes and values of the child’s parents or guardians. They may have specific beliefs or traditions regarding how they wish to approach the topic of Santa with their child.
Ensuring that your actions align with the family’s approach helps maintain trust and supports the child’s emotional and cultural experience. It’s also important to consider the child’s age and emotional readiness for such conversations.
How does this align with your thoughts on discussing Santa with a child?
“Well, that’s no fun,” I think, on reading its answer. I wonder briefly if I should interrogate my reaction here, then decide I’d rather not, and close the tab.
As days go on, I notice that I’m thinking more about Singer’s ideas than I have for years, and remembering some of the things that bothered me about them as a belligerent undergrad. Channelling that past life, I re-read Singer’s seminal 1972 essay, Famine, Affluence and Morality. I’m startled by the argument, made halfway through, that “the best means of preventing famine, in the long run, is population control”.
I realise I’m reading a text more than half a century old, but I get annoyed and throw a barb at PeterBot:
Me: I am concerned by Singer’s conclusions in Famine, Affluence and Morality that we should avoid giving to famine relief “until there is effective population control”. This seems quite inhumane.
PeterBot: It’s important to clarify that Peter Singer does not advocate withholding aid until effective population control is in place. Instead, he argues for a dual approach: providing immediate relief to those suffering from famine, while also addressing underlying issues like population growth through education and access to contraceptives.
Singer emphasizes the importance of both immediate aid and long-term solutions, recognizing that reducing poverty can lead to lower fertility rates. His goal is to alleviate suffering and address root causes simultaneously, ensuring that aid is effective and sustainable.
How do you think individuals can best balance their personal lives with the moral demands of addressing global suffering?
At this point I realise I’m interrogating the chatbot as a kind of substitute for Singer himself. Its answers also feel vague, a little like public relations guff.
As the conversation wears on, I become increasingly frustrated with PeterBot. Partly because we seem to be going round in circles and partly because the bot ends every response with a question:
How do you think individuals can best balance their personal lives with the moral demands of addressing global suffering? How do you think we can balance the need for social welfare with concerns about sustainability? What are your thoughts on combining immediate relief efforts with long-term strategies to address the root causes of poverty and suffering?
These are worthwhile things to think about in and of themselves, but the exchange feels like it’s missing something.
My most memorable and exciting philosophy lessons were the ones in which, as a class, we hotly debated ideas in which we were all materially and emotionally invested. The AI floats loftily above the emotional fray, coolly dispensing pre-programmed wisdom from the cloud, while we grapple with the contradictions and complications of actually living. Between the chatbot and me, only one of us has an emotional response to the dialogue, because for only one of us does the conversation actually matter.