All News
AI agents are coming for your industry: Here's who's first in line
The rise of autonomous AI agents is set to transform business operations across industries, with sectors such as technology, finance, and retail adopting the technology faster than others thanks to their infrastructure, regulatory environment, and cultural acceptance. These agents can operate 24/7 without interruption or error, creating new business opportunities and products. Adoption is hindered, however, by factors such as regulatory uncertainty, the lack of a clear business case, and resistance to technology integration. Industries must assess their readiness through questions about regulatory compliance, business viability, and cultural adaptation in order to harness the benefits of agentic AI effectively.
Top 5 ways attackers use generative AI to exploit your systems
Cybercriminals are leveraging generative AI technologies to enhance the sophistication and efficiency of their attacks, including phishing, malware development, vulnerability hunting, and authentication bypass. This abuse of generative AI lowers the technical barrier to entry for cybercrime, makes phishing emails more convincing, and accelerates the exploitation of vulnerabilities. Criminals are also turning to new Chinese AI models such as DeepSeek and Qwen, as well as developing their own large language models, to generate malicious content, further escalating the threat landscape. To counter these challenges, enterprises need to adopt more advanced security measures and use AI themselves for defense.
Artificial intelligence is revolutionizing the technology industry, and this is equally true for the cybercrime ecosystem, as cybercriminals increasingly leverage generative AI to improve their tactics, techniques, and procedures and deliver faster, stronger, and sneakier attacks. But as with legitimate use of emerging AI tools, abuse of generative AI for nefarious ends isn't so much about the novel and unseen as it is about productivity and efficiency, lowering the barrier to entry, and offloading automatable tasks in favor of higher-order thinking on the part of the humans involved. "AI doesn't necessarily result in new types of cybercrimes, and instead enables the means to accelerate or scale existing crimes we are familiar with, as well as introduce new threat vectors," Dr. Peter Garraghan, CEO/CTO of AI security testing vendor Mindgard and a professor at the UK's Lancaster University, tells CSO. Garraghan continues: "If a legitimate user can find utility in using AI to automate their tasks, capture complex patterns, lower the barrier of technical entry, reduce costs, and generate new content, why wouldn't a criminal do the same?" Here is a look at various ways cybercriminals are putting gen AI to use in exploiting enterprise systems today.
Taking phishing to the next level
Gen AI enables the creation of highly convincing phishing emails, greatly increasing the likelihood of prospective marks handing over sensitive information to scam sites or downloading malware. Instead of sending a reasonably generic and unconvincing email, often with grammatical and formatting inconsistencies and errors, cybercriminals can use AI to quickly generate more sophisticated and legitimate-looking emails, with the potential for greater personalization to target the recipient. Gen AI tools help criminals pull together different sources of data to enrich their campaigns, whether through group social profiling or targeted information gleaned from social media. "AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate," Mindgard's Garraghan explains. As phishing attacks branch out in kind, AI-generated audio and video deepfakes can be used as part of more sophisticated social engineering attacks. In the most high-profile example to date, a finance worker at design and engineering company Arup was tricked into authorizing a fraudulent HK$200 million ($25.6 million) transaction after attending a videoconference call during which fraudsters used deepfake technology to impersonate its UK-based chief finance officer.
Facilitating malware development
Artificial intelligence can also be used to generate more sophisticated, or at least less labour-intensive, malware. For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI. "The loader's detailed line-by-line description suggests it was crafted using generative AI," according to the latest edition of HP Wolf Security's Threat Insights Report.
In addition, the "design of the HTML webpage delivering XWorm is almost visually identical to the output from ChatGPT 4o after prompting the LLM to generate an HTML page that offers a file download," HP Wolf Security adds. Similar techniques were in play with the earlier AsyncRAT campaign, according to HP's enterprise security division. Elsewhere, ransomware group FunkSec, an Algeria-linked ransomware-as-a-service (RaaS) operator that takes advantage of double-extortion tactics, has begun harnessing AI technologies, according to Check Point Research. "FunkSec operators appear to use AI-assisted malware development, which can enable even inexperienced actors to quickly produce and refine advanced tools," Check Point researchers wrote in a blog post.
Accelerating vulnerability hunting and exploits
The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. "Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically," Mindgard's Garraghan says. Gen AI may be behind a 62% reduction in the time between a vulnerability being discovered and its exploitation by attackers, from 47 days to just 18 days, according to a recent study by threat intelligence firm ReliaQuest. "This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds," ReliaQuest writes. Adversaries are leveraging gen AI alongside pen-testing tools to write scripts for tasks such as network scanning, privilege escalation, and payload customization. AI is also likely being used by cybercriminals to analyze scan results and suggest optimal exploits, effectively allowing them to identify flaws in victim systems faster.
"These advances accelerate many phases in the kill chain, particularly initial access," ReliaQuest concludes. CSO's Lucian Constantin offers a deeper look at how generative AI tools are transforming the cyber threat landscape by democratizing vulnerability hunting for pen-testers and attackers alike.
Escalating threats with alternative platforms
Cybercriminals are rapidly shifting from ChatGPT to new AI models from China, DeepSeek and Qwen, to generate malicious content. "Threat actors are openly sharing techniques to jailbreak these models, bypass security controls, and create malware, info-stealers, and spam campaigns with minimal restrictions," according to Check Point Research. "Some are even discussing how to use these AI tools to evade banking anti-fraud protections — a significant escalation in cyber threats." "Multiple discussions and shared techniques on using DeepSeek to bypass banking system anti-fraud protections have been found, indicating the potential for significant financial theft," Check Point warns in a technical blog post. China-based AI company DeepSeek, whose recent entry has sent shockwaves through the industry, is weakly protected against abuse compared to its Western counterparts. Check Point Research explains: "While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology." Cybercriminals have also begun developing their own large language models (LLMs), such as WormGPT, FraudGPT, DarkBERT, and others, built without the guardrails that constrain criminals' misuse of mainstream gen AI platforms. These platforms are commonly harnessed for applications such as phishing and malware generation.
Moreover, mainstream LLMs can also be customized for targeted use. Security researcher Chris Kubecka recently shared with CSO how her custom version of ChatGPT, called Zero Day GPT, helped her identify more than 20 zero-days in a matter of months.
Breaking in with authentication bypass
Gen AI tools can also be abused to bypass security defences such as CAPTCHAs or biometric authentication. "AI can defeat CAPTCHA systems and analyze voice biometrics to compromise authentication," according to cybersecurity vendor Dispersive. "This capability underscores the need for organizations to adopt more advanced, layered security measures."
Countermeasures
Collectively, the misuse of gen AI tools is making it easier for less skilled cybercriminals to earn a dishonest living. Defending against this attack vector challenges security professionals to harness the power of artificial intelligence more effectively than attackers do. "Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity," Mindgard's Garraghan says. In a blog post, Lawrence Pingree, VP of technical marketing at Dispersive, outlines preemptive cyber defenses that security professionals can take to win what he describes as an "AI ARMS (Automation, Reconnaissance, and Misinformation) race" between attackers and defenders. "Relying on traditional detection and response mechanisms is no longer sufficient," Pingree warns. Alongside employee education and awareness programs, enterprises should be using AI to detect and neutralize generative AI-based threats in real time. Randomization and preemptive changes to IP addresses, system configurations, and so on can act as an obstacle to attack. Leveraging AI to simulate potential attack scenarios and predict adversary behavior through threat simulation and predictive intelligence also offers increased resilience against potential attacks.
A quiet AI revolution is underway at work
A recent Deloitte survey found that 47% of respondents were familiar with using AI for personal activities, while 23% had used it for work purposes within two years of ChatGPT's release. Many professionals now routinely use chatbots to draft emails and other communications, often without their company's knowledge or approval. While AI is expected to boost productivity, concerns have been raised about accuracy and data security. Companies must address the lack of clarity around AI usage policies to prevent risks such as miscommunication and the unauthorized use of sensitive information.
AI: How ChatGPT is changing cheating in schools
The widespread use of OpenAI's ChatGPT by French students, including young high schoolers, has raised concerns among teachers, especially in the humanities, where the tool is particularly effective. While some students use it judiciously to complete homework or improve their performance in difficult subjects, others rely on it heavily to avoid the work altogether, leading to divergent academic outcomes.
Prediction: Despite the DeepSeek disruption, Nvidia will still be the first artificial intelligence (AI) stock to reach a $4 trillion valuation. Here's why.
OpenAI commercially launched ChatGPT on November 30, 2022, marking the start of the AI revolution. Between November 2022 and January 2025, Nvidia's stock surged 743%, adding nearly $3 trillion in market capitalization. However, the emergence of Chinese startup DeepSeek, which claims to offer a cheaper alternative to ChatGPT, wiped nearly $600 billion off Nvidia's market value. Despite the downturn, the author argues that sustained demand for AI infrastructure, potentially amplified by falling costs (the Jevons Paradox), could make Nvidia the first company to reach a $4 trillion valuation. The article suggests this is a good time to invest in Nvidia stock based on potential future earnings and market performance.
Prediction: Despite the DeepSeek disruption, Nvidia will still be the first artificial intelligence (AI) stock to reach a $4 trillion valuation. Here's why. | The Motley Fool
Nvidia's stock has fallen sharply on concerns about new Chinese startup DeepSeek, which claims to deliver ChatGPT-like AI capabilities at a lower cost. Despite the recent decline, the author argues that Nvidia remains well positioned to become Wall Street's first $4 trillion company, suggesting that lower AI development costs could actually increase demand for Nvidia's infrastructure.
SoftBank eyes debt financing for its $500 billion AI push
SoftBank founder Masayoshi Son is considering project finance, a model typically used for large infrastructure projects, to fund the $500 billion Stargate project involving OpenAI. This approach would require a smaller initial investment from lead investors, relying instead on long-term financing backed by expected cash flows. SoftBank is exploring a funding structure in which it and partners Oracle and Abu Dhabi's MGX would contribute roughly 10% in equity, with the remainder sourced from the debt markets.
Grimes celebrates pioneers using AI to create "magic"
Grimes was recognized at the Time100 AI Impact Awards in Dubai for her pioneering work with artificial intelligence in music. She celebrated fellow innovators such as Anima Anandkumar, Refik Anadol, Arvind Krishna, David Holz, Aza Raskin, Sam Zeloof, and Holly Herndon, emphasizing AI's transformative potential while highlighting its "magic". Grimes stressed the importance of recognizing those who have a positive impact on the world through technology.
Groundbreaking BBC research reveals problems with more than half of artificial intelligence (AI) assistants' answers
BBC research has revealed significant accuracy problems with AI assistants when answering news-related questions. Key findings: 51% of AI answers contained significant errors or misrepresentations; 19% of answers that cited BBC content introduced factual inaccuracies; and 13% of quotes attributed to BBC articles were altered or fabricated. The month-long study evaluated four AI assistants: ChatGPT, Copilot, Gemini, and Perplexity. Errors ranged from incorrect facts to misattributed sources. Pete Archer, the BBC's Programme Director for Generative AI, stressed the importance of responsible AI use while highlighting the technology's challenges in delivering accurate information. Cited examples include distortions of political figures' identities and misleading health advice. The study underscores the need for publishers and AI companies to collaborate on ensuring content accuracy.
NTT Data reveals global insights on GenAI adoption in banking: diverging strategies between boosting productivity and cutting costs
NTT Data's global research report, "Intelligent Banking in the AI Era", shows that while adoption of generative AI (GenAI) in the banking sector is increasing, banks are split between using it to boost productivity and using it to optimize costs. About 50% view GenAI as a tool for improving efficiency, while nearly half aim to reduce spending. The study highlights regional differences: US banks focus more on cost-cutting, while European banks prioritize productivity. Key performance indicators for evaluating GenAI initiatives include improved productivity (46%), competitive advantage (42%), and cost reduction (43%). Strategies vary widely across organizations, with some aiming for full automation and others focusing on human-machine collaboration. Achieving ROI remains a key challenge, underscoring the need for tailored implementations and strong governance.