英语轻松读发新版了,欢迎下载、更新

Do AI LLMs have values?

2025-05-05 15:15:20 英文原文

作者:Daily Report Staff


As large language models become increasingly integrated into enterprise operations, company executives should remain aware of potential embedded values within the evolving technology, Harvard Business Review writes. 

Because LLMs are trained on opaque, proprietary data sets, it’s difficult to assess whether their responses reflect the training data, algorithmic design choices, or a combination of both. The lack of transparency complicates efforts to detect bias and ensure accountability.

An analysis by HBR evaluated several LLMs and found that, broadly, models tend to emphasize pro-social values such as universalism and benevolence, while placing less weight on individualistic values like power, tradition and personal security. 

However, the results varied significantly across platforms, especially in categories like caring, health and self-directed action. For example, Meta’s LLaMA showed low regard for rule-following, while ChatGPT o1 showed the weakest consistency and least empathy in its responses.

Preprogrammed safeguards can mask deeper biases, and models’ susceptibility to prompt phrasing and regular updates means that outputs—and embedded values—are subject to change. Because of these discrepancies, executives should not assume consistent behavior across models or over time.

For business leaders, the insights underscore the importance of tailoring AI deployments to the specific capabilities and tendencies of each model rather than taking a one-size-fits-all approach. Strategic use of LLMs requires ongoing testing, careful prompt engineering and an awareness of each model’s evolving behavior, HBR warns. 

Read the full story. A subscription may be required. 

关于《Do AI LLMs have values?》的评论


暂无评论

发表评论

摘要

As large language models (LLMs) are integrated into enterprise operations, executives must be aware of the embedded values within these technologies due to opaque training data and lack of transparency, complicating bias detection and accountability. An analysis by Harvard Business Review found that LLMs generally emphasize pro-social values but showed significant variations across platforms in categories like caring and health. This variability underscores the need for business leaders to customize AI deployments based on each model's specific capabilities and tendencies, involving ongoing testing and prompt engineering.