All News

This Is the Top Artificial Intelligence (AI) Stock I'm Buying Right Now | The Motley Fool
2025-02-11 09:07:00
Qualcomm (QCOM) faces challenges such as the loss of Huawei as a customer and the potential replacement of its chips in iPhones by Apple's own silicon by 2027, but its strategic pivot could drive future growth. Recent financial results show revenue up 17% year over year to $11.7 billion, with net income up 15%. Compared with its peers, Qualcomm's stock looks undervalued at a price-to-earnings ratio of about 18, suggesting substantial upside if AI demand increases.
IBM and C40 Cities Partner on New AI Project for Resilient Cities - C40 Cities
2025-02-11 09:02:40
IBM and C40 Cities have launched a collaboration through the IBM Sustainability Accelerator to develop AI solutions for urban resilience, focusing on challenges related to extreme heat and the urban heat island effect. The initiative aligns with UN Sustainable Development Goal 11 and aims to deliver data-driven tools that help cities build adaptation strategies to mitigate the health risks and economic burdens of climate change. IBM will provide C40 Cities, selected from more than 100 applicants, with up to $3 million in funding and technology services to support the development of urban resilience solutions.
Noxus Raises $1.5 Million to Build AI Workforces for Enterprises
2025-02-11 09:00:45
Noxus, a SaaS platform that helps enterprises build AI workforces, has secured $1.5 million in pre-seed funding led by SFC Capital, with participation from other investors. The funds will go toward platform development and senior hires. Founded by João Pedro Almeida, Jorge Pessoa, Gonçalo Ferreira, and Miguel Ribeiro, Noxus aims to provide full-stack AI tooling for data-driven insights and process orchestration. CEO João Pedro emphasized the platform's role in democratizing AI adoption among European enterprises, aiming to close the productivity gap through accessible AI integration. Vegard Medbo of Antler praised Noxus's innovative approach to implementing AI effectively without complexity or risk.
Four Companies Tap Artificial Intelligence (AI) to Find and Fix Cybersecurity Vulnerabilities
2025-02-11 08:45:13
U.S. military researchers at DARPA have awarded contracts under the INGOTS program, which aims to use AI to measure and mitigate cybersecurity vulnerabilities in complex systems. Contracts went to Two Six Labs ($18.6 million), RTX BBN Technologies ($10.3 million), Kudu Dynamics ($7.1 million), and NARF Industries ($6.8 million). The INGOTS program seeks to automate the identification and remediation of vulnerabilities before attackers can exploit them, with a focus on developing datasets for rapid risk assessment and reducing the need for human intervention in vulnerability management.
Cisco Showcases Innovation at Cisco Live Amsterdam
2025-02-11 08:34:09
**Press Release:**

### **Cisco Unveils Comprehensive AI Solutions at Cisco Live EMEA 2025 to Accelerate Digital Transformation and Enhance Customer Experience**

**Santa Clara, Calif. – January 16, 2025 –** Cisco (NASDAQ: CSCO), the worldwide leader in secure, intelligent networking, today unveiled a suite of innovative solutions announced at Cisco Live EMEA, spanning AI-powered infrastructure, observability tools, and customer experience enhancements.

#### **Accelerating AI Adoption with Sustainable Solutions**

To address growing energy-efficiency concerns around AI workloads, Cisco launched a series of engineering alliances to deliver liquid cooling solutions in partnership with Vertiv, DeepCoolAI, Green Revolution Cooling, and Asperitas. These partnerships tackle the thermal density challenges of AI data centers, enabling sustainable growth without compromising performance. Cisco is also working with Panduit, MHT Technologies, and ThinLabs to deliver end-to-end direct-current microgrid solutions that leverage Power over Ethernet (PoE) and Fault Managed Power (FMP). These integrations aim to reduce energy consumption in smart buildings, advancing a more sustainable digital future.

#### **Enhancing Digital Resilience with Observability Solutions**

Cisco expanded its observability capabilities with the introduction of ThousandEyes Traffic Insights. The tool extends visibility deeper into local networks by correlating traffic flows with synthetic measurements to assure every digital experience end to end. Available in March 2025, ThousandEyes Traffic Insights strengthens digital resilience by helping customers accelerate remediation and quickly pinpoint performance issues. Cisco also announced new integrations between its security solutions and Splunk's enterprise platform, including real-time threat intelligence from Cisco Talos and improved visibility for security investigations. Together, these updates provide a comprehensive approach to threat detection and management.

#### **Building Critical AI Skills Through Education Programs**

Recognizing the growing skills gap in the rapidly evolving field of AI, Cisco is launching its latest AI skills journey, a program designed to build critical AI infrastructure skills from entry to expert level on u.cisco.com. In addition, Cisco Learning and Certifications has updated its existing certifications (CCNA, CCNP, and Cisco Certified Internetwork Expert) to reflect the latest AI and cybersecurity trends, ensuring professionals have the knowledge needed to deliver impactful business outcomes.

#### **AI-Powered Customer Experience Innovation**

Cisco announced a partnership with Mistral AI, one of Europe's leading AI solution providers, to co-develop an AI renewals agent. The agent streamlines the renewal proposal creation process by consolidating data from 50 signals and sources, delivering real-time sentiment analysis, personalized recommendations, and intelligent automation tied to customer outcomes and key performance indicators (KPIs). This marks another step in Cisco's vision of delivering personalized, proactive, and predictive customer experiences that exceed expectations.

#### **Empowering Collaboration with BYOD Capabilities**

In response to the growing use of bring-your-own-device (BYOD) setups in video-enabled spaces, Cisco introduced the BYOD Room. This cost-effective solution lets users join meetings from their laptops via USB-C, improving collaboration and productivity across diverse work environments.

#### **Conclusion**

Cisco's comprehensive AI solutions, engineering alliances, observability tools, and customer experience innovations position it at the forefront of digital transformation. By tackling energy-efficiency challenges, strengthening digital resilience, and equipping professionals with critical skills, Cisco continues to transform how organizations connect and protect in the AI era.

---

**About Cisco**

Cisco (NASDAQ: CSCO) is the worldwide leader in secure, intelligent networking that empowers businesses and communities around the globe. With four decades of innovation, Cisco remains committed to creating a more connected and inclusive future for all. Discover more in the [Newsroom](http://www.cisco.com/go/newsroom/) and follow us on X at [@Cisco](https://twitter.com/cisco).

**Media Contact:** - **Name:** Jane Doe - **Email:** jdoe@cisco.com - **Phone:** +1 408-992-3365

---

**Note to Readers**

*The Cisco logo and product names mentioned in this document are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. The use of the word "partner" does not imply a partnership relationship between Cisco and any other company. Cisco assumes no responsibility for any errors in this document, nor any liability for damages arising from the information contained herein.*
AI Agents Are Coming for Your Industry: Here's Who's First in Line
2025-02-11 06:47:52
The rise of autonomous AI agents is set to transform business operations across industries, with sectors such as technology, finance, and retail adopting the technology faster than others thanks to advantages in infrastructure, regulation, and cultural acceptance. These agents can operate 24/7 without interruption or error, creating new business opportunities and products. Adoption, however, is hampered by regulatory uncertainty, the lack of a clear business case, and resistance to technology integration. Industries must assess their readiness by working through questions of regulatory compliance, business viability, and cultural adaptation to capture the benefits of agentic AI effectively.
Top 5 ways attackers use generative AI to exploit your systems
2025-02-11 06:02:09
Cybercriminals are leveraging generative AI technologies to enhance the sophistication and efficiency of their attacks, including phishing, malware development, vulnerability hunting, and authentication bypass. This abuse of generative AI for cybercrime lowers the technical barrier to entry, makes phishing emails more convincing, and accelerates the exploitation of vulnerabilities. At the same time, criminals are turning to new Chinese AI models (such as DeepSeek and Qwen) and building their own large language models to generate malicious content, further intensifying cybersecurity threats. To meet these challenges, enterprises need to adopt more advanced security measures and use AI for defense.
Artificial intelligence is revolutionizing the technology industry and this is equally true for the cybercrime ecosystem, as cybercriminals are increasingly leveraging generative AI to improve their tactics, techniques, and procedures and deliver faster, stronger, and sneakier attacks. But as with legitimate use of emerging AI tools, abuse of generative AI for nefarious ends isn’t so much about the novel and unseen as it is about productivity and efficiency, lowering the barrier to entry, and offloading automatable tasks in favor of higher-order thinking on the part of the humans involved. “AI doesn’t necessarily result in new types of cybercrimes, and instead enables the means to accelerate or scale existing crimes we are familiar with, as well as introduce new threat vectors,” Dr. Peter Garraghan, CEO/CTO of AI security testing vendor Mindgard and a professor at the UK’s Lancaster University, tells CSO. Garraghan continues: “If a legitimate user can find utility in using AI to automate their tasks, capture complex patterns, lower the barrier of technical entry, reduced costs, and generate new content, why wouldn’t a criminal do the same?” Here is a look at various ways cybercriminals are putting gen AI to use in exploiting enterprise systems today.
Taking phishing to the next level
Gen AI enables the creation of highly convincing phishing emails, greatly increasing the likelihood of prospective marks handing over sensitive information to scam sites or downloading malware. Instead of sending a reasonably generic and unconvincing email, often with grammatical and formatting inconsistencies and errors, cybercriminals can use AI to quickly generate more sophisticated and legitimate-looking emails, with the potential for greater personalization to target the recipient. Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. As phishing attacks branch out in kind, AI-generated audio and video deepfakes can be used as part of more sophisticated social engineering attacks. In the most high-profile example to date, a finance worker at design and engineering company Arup was tricked into authorizing a fraudulent HK$200 million ($25.6 million) transaction after attending a videoconference call during which fraudsters used deepfake technology to impersonate its UK-based chief financial officer.
Facilitating malware development
Artificial intelligence can also be used to generate more sophisticated, or at least less labour-intensive, malware. For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling that contains malicious code to download and run the malware, bears the hallmarks of development via AI. “The loader’s detailed line-by-line description suggests it was crafted using generative AI,” according to the latest edition of HP Wolf Security’s Threat Insights Report.
In addition, the “design of the HTML webpage delivering XWorm is almost visually identical to the output from ChatGPT 4o after prompting the LLM to generate an HTML page that offers a file download,” HP Wolf Security adds. Similar techniques were in play with the earlier AsyncRAT campaign, according to HP’s enterprise security division. Elsewhere, ransomware group FunkSec — an Algeria-linked ransomware-as-a-service (RaaS) operator that takes advantage of double-extortion tactics — has begun harnessing AI technologies, according to Check Point Research. “FunkSec operators appear to use AI-assisted malware development, which can enable even inexperienced actors to quickly produce and refine advanced tools,” Check Point researchers wrote in a blog post.
Accelerating vulnerability hunting and exploits
The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says. Gen AI may be behind a 62% reduction in the time between a vulnerability being discovered and its exploitation by attackers, from 47 days to just 18 days, according to a recent study by threat intelligence firm ReliaQuest. “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. Adversaries are leveraging gen AI alongside pen-testing tools to write scripts for tasks such as network scanning, privilege escalation, and payload customization. AI is also likely being used by cybercriminals to analyze scan results and suggest optimal exploits, effectively allowing them to identify flaws in victim systems faster.
“These advances accelerate many phases in the kill chain, particularly initial access,” ReliaQuest concludes. CSO’s Lucian Constantin offers a deeper look at how generative AI tools are transforming the cyber threat landscape by democratizing vulnerability hunting for pen-testers and attackers alike.
Escalating threats with alternative platforms
Cybercriminals are rapidly shifting from ChatGPT to new AI models from China — DeepSeek and Qwen — to generate malicious content. “Threat actors are openly sharing techniques to jailbreak these models, bypass security controls, and create malware, info-stealers, and spam campaigns with minimal restrictions,” according to Check Point Research. “Some are even discussing how to use these AI tools to evade banking anti-fraud protections — a significant escalation in cyber threats.” “Multiple discussions and shared techniques on using DeepSeek to bypass banking system anti-fraud protections have been found, indicating the potential for significant financial theft,” Check Point warns in a technical blog post. China-based AI company DeepSeek, whose recent entry has sent shockwaves through the industry, is weakly protected against abuse compared to its Western counterparts. Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.” Cybercriminals have also begun developing their own large language models (LLMs) — such as WormGPT, FraudGPT, DarkBERT, and others — built without the guardrails that constrain criminals’ misuse of mainstream gen AI platforms. These platforms are commonly harnessed for applications such as phishing and malware generation.
Moreover, mainstream LLMs can also be customized for targeted use. Security researcher Chris Kubecka recently shared with CSO how her custom version of ChatGPT, called Zero Day GPT, helped her identify more than 20 zero-days in a matter of months.
Breaking in with authentication bypass
Gen AI tools can also be abused to bypass security defences such as CAPTCHAs or biometric authentication. “AI can defeat CAPTCHA systems and analyze voice biometrics to compromise authentication,” according to cybersecurity vendor Dispersive. “This capability underscores the need for organizations to adopt more advanced, layered security measures.”
Countermeasures
Collectively, the misuse of gen AI tools is making it easier for less skilled cybercriminals to earn a dishonest living. Defending against this attack vector challenges security professionals to harness the power of artificial intelligence more effectively than attackers. “Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity,” Mindgard’s Garraghan says. In a blog post, Lawrence Pingree, VP of technical marketing at Dispersive, outlines preemptive cyber defenses that security professionals can take to win what he describes as an “AI ARMS (Automation, Reconnaissance, and Misinformation) race” between attackers and defenders. “Relying on traditional detection and response mechanisms is no longer sufficient,” Pingree warns. Alongside employee education and awareness programs, enterprises should be using AI to detect and neutralize generative AI-based threats in real time. Randomization and preemptive changes to IP addresses, system configurations, and so on can act as an obstacle to attack. Leveraging AI to simulate potential attack scenarios and predict adversary behavior through threat simulation and predictive intelligence also offers increased resilience against potential attacks.
A Quiet AI Revolution Is Under Way at Work
2025-02-11 05:01:10
A recent Deloitte survey found that, within two years of ChatGPT's release, 47% of respondents had used AI for personal activities and 23% for work purposes. Many professionals now routinely use chatbots to draft emails and other communications, often without their company's knowledge or approval. While AI is expected to boost productivity, concerns have been raised about accuracy and data security. Companies must address the lack of clarity around AI usage policies to prevent risks such as miscommunication and the unauthorized use of sensitive information.
Artificial Intelligence: How ChatGPT Is Changing Cheating in School
2025-02-11 03:38:20
The widespread use of OpenAI's ChatGPT by French students, including those as young as high-school age, has raised concerns among teachers, particularly in the humanities, where the tool is especially effective. While some students use it judiciously to complete homework or improve in difficult subjects, others rely on it extensively for their coursework, leading to divergent academic outcomes.
Prediction: Nvidia Will Still Be the First Artificial Intelligence (AI) Stock to Reach a $4 Trillion Valuation, Even After the DeepSeek Disruption. Here's Why.
2025-02-11 02:00:00
OpenAI's commercial launch of ChatGPT on November 30, 2022, marked the start of the AI revolution; between November 2022 and January 2025, Nvidia's stock rose a remarkable 743%, adding nearly $3 trillion in market value. Then Chinese startup DeepSeek emerged, claiming to offer a cheaper ChatGPT alternative and wiping nearly $600 billion off Nvidia's market value. Despite the sell-off, the author argues that Nvidia could become the first company to reach a $4 trillion valuation, as falling costs sustain demand for AI infrastructure, a dynamic potentially driven by the Jevons paradox. The article concludes that now is a good time to invest in Nvidia stock based on its potential future earnings and market performance.