
Black Box AI: What It Is and Why It Matters to Businesses | PYMNTS.com

2025-06-23 23:31:31

By PYMNTS

Highlights

Black box AI refers to AI systems whose decision-making processes are opaque, raising concerns about transparency.

Businesses have to carefully balance the gains they get from AI with its risks, including lack of explainability.

Explainable AI, or XAI, is a growing field that addresses this problem.

Artificial intelligence (AI) is behind many of today’s most cutting-edge capabilities. It helps power everything from voice assistants to online shopping recommendations and drug discovery.

But some of the most advanced AI systems are so complex that not even the people who build them fully understand how the AI makes decisions. This is what experts call black box AI.

The term “black box” refers to a system that takes in information and produces results but keeps what happens in between hidden. Imagine a borrower applies for a loan and the AI decides whether or not to approve it. You can’t easily see the reasoning behind the decision.

That’s because today’s AI models, especially deep learning systems, are complex. These systems are made up of layers of mathematical formulas and millions or even billions of connections that work together to answer queries or solve problems, in ways even their builders cannot fully trace.
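As a rough illustration, the toy network below turns a loan application into an approve/deny score through stacked layers of matrix math. The weights are random and the applicant features are made up for the sketch; the point is that even in this tiny version, no individual number corresponds to a human-readable reason.

```python
# A minimal sketch of why a deep network is a "black box": the decision
# emerges from layers of matrix math, and no single weight maps to a
# human-readable reason. Weights and features here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy applicant: [income, debt ratio, years employed, credit score] (scaled)
applicant = np.array([0.62, 0.35, 0.80, 0.71])

# Three hidden layers of 64 units each, then one output unit.
sizes = [4, 64, 64, 64, 1]
weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(sizes, sizes[1:])]

x = applicant
for w in weights[:-1]:
    x = np.maximum(0, x @ w)                   # ReLU: each layer remixes all inputs
score = 1 / (1 + np.exp(-(x @ weights[-1])))   # sigmoid -> approval score

n_params = sum(w.size for w in weights)
print(f"{n_params} parameters produced score {score[0]:.3f}")
print("approve" if score[0] > 0.5 else "deny")
# Production models have millions or billions of such parameters;
# inspecting them one by one reveals nothing about the reasoning.
```

Scale that structure up by several orders of magnitude and the opacity problem becomes clear: the answer is correct or incorrect, but the path to it is buried in the arithmetic.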

According to IBM, many of today’s AI chatbots are black boxes, including OpenAI’s ChatGPT. The same can be said of Google’s Gemini, Anthropic’s Claude, Perplexity AI, Meta’s Llama and other systems built on deep neural networks with billions of parameters.

AI decisions that can’t easily be understood create uncertainty for organizations. For example, a hospital patient is flagged as at risk for sepsis, but the AI can’t explain exactly why. Can the doctors trust the recommended actions?

This lack of clarity can lead to bigger risks. If an AI system makes a mistake, how can a company figure out what went wrong? If it turns out to be unfair, like favoring one group of people over another, how can it be fixed? In areas like lending, hiring, healthcare and law enforcement, these questions draw the attention of regulators.

See also: AI Models Arm Payment Processors With Real-Time Risk Intelligence

Why It Matters to Businesses

For businesses, using black box AI comes with both opportunities and risks. On the plus side, these complex systems can do things that were almost impossible before.

They can help companies detect fraud more effectively, predict customer behavior, hyper-personalize marketing and optimize supply chains. They can spot patterns in data that humans would miss, and they often do it faster and more accurately.

But if companies can’t explain how their AI makes decisions, they could run into trouble. Let’s say the company uses AI to screen job applicants. If candidates start complaining that the system is biased, how will the company defend its process?

Regulators are paying attention. New York City has enacted a law that requires auditing of automated job hiring tools for bias. In Europe, the GDPR gives people the right to an explanation if they’re affected by automated decisions.

According to a PYMNTS Intelligence report, middle-market CFOs said regulatory pressures heighten uncertainty for their companies, with smaller firms feeling the greatest impact. The results of this uncertainty are operational disruptions, heightened legal risks and increased spending on compliance and risk management.

The internal risks are real as well. As AI becomes central to business strategy, companies are expected to make sure it’s being used responsibly. But that’s tough to do if they can’t see inside the black box. Trust, accountability and brand reputation are all at stake.

The good news is that researchers are working hard to make AI easier to understand. There’s a growing field called Explainable AI (XAI), which tries to give people insights into how these systems work. For example, French AI company Dataiku lets data scientists test different scenarios and show the results to business users so they can trust the AI system.
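To make the idea concrete, here is a minimal sketch of one widely used, model-agnostic XAI technique: permutation feature importance, run with scikit-learn on synthetic data. This is a generic illustration with assumed feature names, not the workflow of Dataiku or any other vendor mentioned here.

```python
# A minimal XAI sketch: permutation feature importance. Shuffle one
# feature at a time and measure how much the model's accuracy drops;
# a large drop means the opaque model leans heavily on that feature.
# The dataset is synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a loan-approval dataset.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
names = ["income", "debt_ratio", "years_employed", "credit_score", "age"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```

Techniques like this don’t open the black box itself, but they give business users and auditors a ranked, testable account of which inputs drive the model’s decisions.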

Today, governments and regulators are pushing for more transparency. The EU AI Act aims to set clear rules for how high-risk AI can be used, including requirements for transparency and accountability.

Read more: GenAI and Voice Assistants: Adoption and Trust Across Generations

Read more: The Enterprise Reset: Tariffs, Uncertainty and the Limits of Operational Response


Summary

Businesses face challenges with "black box" AI systems, which make decisions without clear transparency into the decision-making process. This opacity raises concerns about accountability and fairness, particularly in regulated sectors like healthcare and finance. While black box AI offers significant opportunities for enhancing business efficiency, it also brings risks related to explainability and regulatory compliance. To address these issues, the field of Explainable AI (XAI) is growing, aiming to provide clarity on how AI systems operate and make decisions. Governments and regulators are increasingly demanding transparency in AI use, particularly for high-risk applications, to ensure ethical and responsible deployment.