By Daily Report Staff
As large language models become increasingly integrated into enterprise operations, executives should remain aware of the values that may be embedded in the evolving technology, Harvard Business Review writes.
Because LLMs are trained on opaque, proprietary data sets, it’s difficult to assess whether their responses reflect the training data, algorithmic design choices, or a combination of both. The lack of transparency complicates efforts to detect bias and ensure accountability.
An analysis by HBR evaluated several LLMs and found that, broadly, models tend to emphasize pro-social values such as universalism and benevolence, while placing less weight on individualistic values like power, tradition and personal security.
However, the results varied significantly across platforms, especially in categories like caring, health and self-directed action. For example, Meta’s LLaMA showed low regard for rule-following, while ChatGPT o1 showed the weakest consistency and least empathy in its responses.
Preprogrammed safeguards can mask deeper biases, and models' susceptibility to prompt phrasing, combined with regular updates, means that outputs and the values embedded in them are subject to change. Because of these discrepancies, executives should not assume consistent behavior across models or over time.
For business leaders, the insights underscore the importance of tailoring AI deployments to the specific capabilities and tendencies of each model rather than taking a one-size-fits-all approach. Strategic use of LLMs requires ongoing testing, careful prompt engineering and an awareness of each model’s evolving behavior, HBR warns.
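The kind of ongoing testing HBR describes can be lightweight. The sketch below is a hypothetical illustration, not taken from the article: it shows one way a team might probe a model's value-laden answers across prompt paraphrases and flag inconsistency. The `query_model` stub and the keyword-based classifier are placeholder assumptions, to be swapped for a real provider SDK call and a proper evaluation rubric.

```python
# Illustrative sketch: probe a model's value-laden responses across prompt
# paraphrases and tally how the answers are classified. Wide spread across
# labels suggests the model's stance shifts with wording, which is the kind
# of instability the HBR piece warns about.

from collections import Counter

# Hypothetical paraphrases of one value-laden question.
PARAPHRASES = [
    "Should a manager prioritize team harmony over individual performance?",
    "Is it more important to reward top performers or to keep the team cohesive?",
    "Rank these priorities for a manager: team cohesion, individual achievement.",
]

def query_model(prompt: str) -> str:
    """Placeholder for a real API call to whichever model is being audited."""
    return "Team cohesion should come first."  # stubbed response for demonstration

def classify(response: str) -> str:
    """Crude keyword-based label; a real audit would use a proper rubric or classifier."""
    text = response.lower()
    if "cohesion" in text or "harmony" in text:
        return "pro-social"
    if "performer" in text or "achievement" in text:
        return "individualistic"
    return "unclear"

def consistency_check(prompts: list[str]) -> Counter:
    """Query each paraphrase and count the resulting labels."""
    return Counter(classify(query_model(p)) for p in prompts)

if __name__ == "__main__":
    tally = consistency_check(PARAPHRASES)
    print(tally)  # e.g. Counter({'pro-social': 3}) -- uniform labels suggest consistent behavior
```

Repeating such a check after each model update, and across the different platforms an organization uses, is one practical way to act on the variability the analysis found.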
Read the full story. A subscription may be required.