
How the generative AI boom opens up new privacy and cybersecurity risks

2025-09-03 07:34:30

By Raquel C. Pico

Corporate strategy will need to take these potential issues into account, both by protecting ownership of the data and by preventing AI from becoming a security breach.

It was one of the viral tech news stories at the start of July: WeTransfer, the popular file-sharing service used massively by companies and end users alike, had changed its terms of use.

It’s the kind of change that is usually accepted without reading too deeply into it, but on this occasion the company had added an element connected to artificial intelligence. As of early August, WeTransfer reserved the right to use the documents it managed to “operate, develop, market and improve the service or new technologies or services, including improving the performance of machine learning models.” The understanding was that user information, whatever it was, could be used to train AI.

The scandal was huge, and WeTransfer ended up backtracking, explaining to the media that what it actually wanted to cover was the possibility of using AI to moderate content, not what its users had understood.

However, the WeTransfer scandal became a very visible sign of a potential new risk to cybersecurity, privacy and even the protection of sensitive information. AI needs a lot of data and consumes a lot of data, and the privacy policies of very popular online services are changing to adapt to this new environment.

Add to that the fact that artificial intelligence is being incorporated in real time, by testing and trying things out. This opens up potential problems, such as company workers using services they know from their personal lives, such as ChatGPT, for work purposes where they should not. All the corporate privacy policies matter little if a worker then uploads confidential information to ChatGPT to have a translation done or a letter written.
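
To make that risk concrete, here is a minimal sketch of the kind of guardrail this implies: a pre-submission scrubber that redacts obviously sensitive strings before a draft ever leaves the company for a public chatbot. The patterns, placeholder format and function names are illustrative assumptions, not a real data-loss-prevention policy.

```python
import re

# Illustrative patterns only -- a real DLP policy would be far broader
# and would be enforced at the network or browser level, not on trust.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    draft = "Translate this: contact jane.doe@acme.example, card 4111 1111 1111 1111."
    print(redact(draft))
    # -> Translate this: contact [REDACTED EMAIL], card [REDACTED CARD].
```

The point of the sketch is the design choice rather than the patterns: redaction has to happen before the text reaches the third-party service, because once it is submitted the company no longer controls how it is stored or reused.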

Thus, this new context opens up new questions, both for end users at the personal level and for CIOs and CISOs at the corporate level as those responsible for IT strategy and security.

The owners of the data

One such issue is that of information: who owns the data and to whom it may belong. This is leading different services to update their terms of use in order to be able to use the data their users have generated to train AI. This has happened, for example, with social networks such as Meta’s, but it is also happening with services widely used in corporate environments. Panda reminds us that Slack uses customer data by default for its machine learning models.

This state of affairs is not exactly new. Public data is no longer enough for organizations to develop their AIs, and they need new sources of data. “The datasets collected in their applications are worth a lot,” Hervé Lambert, global consumer operations manager at Panda Security, explains in an analysis. “Which explains why most of these companies are rushing to modify their privacy policies to be able to make use of them, and to adapt to new data protection regulations that force them to be more transparent about the use of the information they collect and store,” he adds.

Of course, this is first and foremost a problem for the IT and cybersecurity managers of the companies concerned, who have to change their rules of use. But it can then become a headache for companies that use their services in one way or another, or that know their employees will do so regardless.

“They want to open the door to new ways of exploiting data in areas such as AI, advanced marketing and product development,” Lambert points out, “but at the same time they need to be in good standing with legislation.” As a result, terms of use and privacy conditions end up with broad wording, and the line separating one use from another becomes “very thin.”

Privacy and cybersecurity risks

Another major problem lies in potential privacy and cybersecurity breaches, both for end users and for the companies themselves.

Panda warns that AIs fed with large amounts of personal data can, if they fall into the wrong hands, become a gateway to fraud or to much more sophisticated and effective attacks. “When we dump personal data into AI tools without proper control, we are exposing ourselves to that information being copied, shared or used without our consent,” notes its head of security operations.

Sometimes the data doesn’t even have to fall into the wrong hands; end users’ lack of expertise is enough to let sensitive information end up exposed on the web. Take the case of ChatGPT conversations indexed by Google. “When the ‘make this chat discoverable’ option is activated, users of certain AI solutions such as ChatGPT agree to make their chats public, accessible from Google or other search engines, and liable to appear in search results, which generates controversy because some of these chats may contain sensitive data, business ideas, commercial strategies or personal experiences,” explains Lambert.
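
For context on the mechanics behind this kind of exposure: a public share page is eligible for indexing whenever it carries none of the standard opt-out signals that search engines honour. The sketch below, using only Python’s standard library, checks the two usual signals (the X-Robots-Tag response header and the robots meta tag) for a given URL; the URL is a hypothetical placeholder, and nothing here describes ChatGPT’s actual markup.

```python
import re
from urllib.request import Request, urlopen

def indexing_signals(url: str) -> dict:
    """Report the standard opt-out signals search engines honour.

    A page that sets neither signal is eligible for indexing, which is
    how 'discoverable' shared conversations can end up in search results.
    """
    req = Request(url, headers={"User-Agent": "index-check/0.1"})
    with urlopen(req, timeout=10) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        body = resp.read(65536).decode("utf-8", errors="replace")
    # Simplified: assumes the meta tag's name attribute precedes content.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)',
        body,
        re.IGNORECASE,
    )
    return {
        "x_robots_tag": header or None,   # e.g. "noindex" blocks indexing
        "robots_meta": meta.group(1) if meta else None,
    }

if __name__ == "__main__":
    # Hypothetical share link for illustration; any public URL works.
    print(indexing_signals("https://example.com/share/abc123"))
```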

In fact, AI is already one of the issues that most worry CISOs, who are beginning to show signs of burnout in an increasingly complex work environment. While 64% of security managers believe that enabling the use of generative AI tools will be a strategic goal within two years, they are also concerned about the risks those tools pose. This is confirmed by data from Proofpoint’s fifth annual Voice of the CISO report.

“AI has gone from a concept to a fundamental element, transforming the way defenders and adversaries alike operate,” explains Ryan Kalember, chief strategy officer at Proofpoint. “CISOs now face a dual responsibility: to leverage AI to strengthen their security posture while ensuring its ethical and responsible use,” he adds. To do so, they will have to make “strategic decisions” but with the added complexity that CISOs are not the only decision-makers in the implementation of this resource.

The secure use of generative AI is already a priority for 48% of CISOs.



Summary

The news highlights the growing concerns over data privacy and cybersecurity risks associated with the integration of artificial intelligence (AI) in popular online services like WeTransfer and Slack. Companies are updating their terms of use to include AI-related clauses, leading to potential issues regarding who owns user-generated data and how it can be used for training AI models. This shift raises significant concerns for both individual users and corporate IT managers responsible for cybersecurity policies. The incorporation of real-time AI testing poses risks such as employees misusing personal AI services like ChatGPT for work-related tasks, potentially compromising confidential information. CISOs are increasingly worried about the dual challenge of leveraging AI to enhance security while ensuring ethical use, making strategic decisions in a complex environment where they may not be the sole decision-makers.
