All News

China bans foreign AI chips in state-funded data centres, sources say
2025-11-05 08:08:08
Chinese authorities now require new data-centre projects receiving state funding to use only domestically produced AI chips. Data centres that are less than 30% complete must remove foreign chips or cancel plans to purchase them. The move aims to reduce dependence on foreign technology and raise domestic chip self-sufficiency; it could erode Nvidia's market share in China while benefiting local rivals such as Huawei. How widely the directive applies nationwide remains unclear.
European markets fall and Asian chipmakers plunge in global stock sell-off on AI bubble fears — as it happened
2025-11-05 07:55:00
Key recent news and developments:

1. **Global stock markets fall**: Global equities dropped sharply on fears that artificial intelligence (AI) companies are overvalued. Major tech stocks including Nvidia, Amazon, Apple, Microsoft, Tesla, Alphabet (Google) and Meta (Facebook, Instagram, WhatsApp) posted large single-day percentage declines.

2. **UK Chancellor hints at tax rises**: Ahead of the 26 November Budget, Chancellor Rachel Reeves signalled that an income tax increase may be necessary to help rebuild the economy and repair the public finances.

3. **Drax power station subsidy deal**: Drax power station in North Yorkshire has signed a new 2027-31 subsidy agreement with the government, agreeing to burn less wood in exchange for a reduced strike price of £157.46/MWh in today's money. Critics argue the plant remains the UK's largest emitter despite these measures.

4. **Water industry bonuses blocked**: Six water companies have been barred from paying £4m in bonuses to their bosses under new rules governing environmental performance and pollution prevention.

5. **Pharma forecasts cut**: Novo Nordisk, maker of the popular diabetes and weight-loss drugs Ozempic and Wegovy, lowered its sales and profit forecasts amid intensifying competition from Eli Lilly's Mounjaro and Zepbound.

6. **M&S profits halved**: Marks & Spencer (M&S) reported half-year profits down by half after a major cyber attack earlier this year disrupted online orders for more than six weeks, severely denting sales.

7. **JD Wetherspoon sales report**: JD Wetherspoon reported higher overall sales but expressed caution about upcoming government policy and potential additional costs ahead of the November Budget.

8. **Government commitments on the economy and public finances**: UK Education Secretary Bridget Phillipson stressed that while the government takes its manifesto commitments seriously, meeting economic challenges requires necessary decisions for public welfare and long-term economic stability.

9. **Retailers push back against tax rises**: M&S boss Stuart Machin urged Chancellor Rachel Reeves not to levy more tax on everyday goods, warning this could dampen consumer spending without boosting growth.

These points highlight recent economic, financial, environmental and corporate developments affecting the UK and markets worldwide.
CNBC Daily Open: When AI hype meets gravity
2025-11-05 06:21:01
Optimism around artificial intelligence is running into real-world constraints, fuelling market volatility. The Nasdaq fell despite strong earnings reports from AMD and Palantir, reflecting Wall Street's narrowing patience with AI. Major tech stocks such as Nvidia and Amazon retreated as investors grew more cautious about the high valuations of AI-related companies. SoftBank's market value dropped by $32 billion amid a broad slump in Asian AI-related shares. Meanwhile, Nvidia announced a $2 billion deep-tech alliance to support AI startups in India, expanding its global footprint.
Car dealer lays off 350 workers, denies robots are taking jobs
2025-11-05 06:02:53
CarMax, the largest used-car retailer in the US, has laid off 350 customer-service employees amid a challenging auto market, with sales down 6.3% and investors disappointed. The company attributed the cuts to modernisation rather than AI investment. CarMax shares have fallen nearly 48% this year as high vehicle costs squeeze consumers and weigh on used-car retailers. The move follows recent layoffs at other major companies across industries, reflecting broader economic challenges.
Could a ‘grey swan’ event bring down the AI revolution? Here are 3 risks we should be preparing for
2025-11-05 05:39:47
The term "black swan" denotes unforeseen events with significant impacts, while "grey swans" refer to rare but more foreseeable risks that are often inadequately prepared for. Grey swans in the AI industry include security threats from malicious use of AI, legal challenges over intellectual property rights, and technological breakthroughs that could disrupt market stability. These risks underscore the need for greater resilience in dealing with AI's unpredictable future.

The term “black swan” refers to a shocking event on nobody’s radar until it actually happens.
This has become a byword in risk analysis since a book called The Black Swan by Nassim Nicholas Taleb was published in 2007. A frequently cited example is the 9/11 attacks.

Fewer people have heard of “grey swans”. Derived from Taleb’s work, grey swans are rare but more foreseeable events. That is, things we know could have a massive impact, but we don’t (or won’t) adequately prepare for. COVID was a good example: precedents for a global pandemic existed, but the world was caught off guard anyway.

Although he sometimes uses the term, Taleb doesn’t appear to be a big fan of grey swans. He has previously expressed frustration that his concepts are often misused, which can lead to sloppy thinking about the deeper issues of truly unforeseeable risks. But it’s hard to deny there is a spectrum of predictability, and it’s easier to see some major shocks coming. Perhaps nowhere is this more obvious than in the world of artificial intelligence (AI).

Putting our eggs in one basket

Increasingly, the future of the global economy and human thriving has become tied to a single technological story: the AI revolution. It has turned philosophical questions about risk into a multitrillion-dollar dilemma about how we align ourselves with possible futures.

US tech company Nvidia, which dominates the market for AI chips, recently surpassed US$5 trillion (about A$7.7 trillion) in market value. The “Magnificent Seven” US tech stocks – Amazon, Alphabet (Google), Apple, Meta, Microsoft, Nvidia and Tesla – now make up about 40% of the S&P 500 stock index.

The impact of a collapse for these companies – and a stock market bust – would be devastating at a global level, not just financially but also in terms of dashed hopes for progress.

[Image: Jensen Huang, chief executive of Nvidia, which has become the world’s most valuable company. Lee Jin-man/AP]

AI’s grey swans

There are three broad categories of risk – beyond the economic realm – that could bring the AI euphoria to an abrupt halt. They’re grey swans because we can see them coming but arguably don’t (or won’t) prepare for them.

1. Security and terror shocks

AI’s ability to generate code, malicious plans and convincing fake media makes it a force multiplier for bad actors. Cheap, open models could help design drone swarms, toxins or cyber attacks. Deepfakes could spoof military commands or spread panic through fake broadcasts.

Arguably, the closest of these risks to a “white swan” – a foreseeable risk with relatively predictable consequences – stems from China’s aggression toward Taiwan. The world’s biggest AI firms depend heavily on Taiwan’s semiconductor industry for the manufacture of advanced chips. Any conflict or blockade would freeze global progress overnight.

2. Legal shocks

Some AI firms have already been sued for allegedly using text and images scraped from the internet to train their models. One of the best-known examples is the ongoing case of The New York Times versus OpenAI, but there are many similar disputes around the world.

If a major court were to rule that such use counts as commercial exploitation, it could unleash enormous damages claims from publishers, artists and brands. A few landmark legal rulings could force major AI companies to press pause on developing their models further – effectively halting the AI build-out.

3. One breakthrough too many: innovation shocks

Innovation is usually celebrated, but for companies investing in AI, it could be fatal. New AI technology that autonomously manipulates markets (or even news that one is already doing so) would make current financial security systems obsolete. And an advanced, open-source, free AI model could easily vaporise the profits of today’s industry leaders.

We got a glimpse of this possibility in January’s DeepSeek dip, when details about a relatively cheaper, more efficient AI model developed in China caused US tech stocks to plummet.

[Image: Artificial intelligence investment has driven remarkable growth on stock markets. Seth Wenig/AP]

Why we struggle to prepare for grey swans

Risk analysts, particularly in finance, often talk in terms of historical data. Statistics can give a reassuring illusion of consistency and control. But the future doesn’t always behave like the past. The wise among us apply reason to carefully confirmed facts and are sceptical of market narratives.

Deeper causes are psychological: our minds encode things efficiently, often relying on one symbol to represent very complex phenomena. It takes us a long time to remodel our representations of the world into believing a looming big risk is worth taking action over – as we’ve seen with the world’s slow response to climate change.

How can we deal with grey swans?

Staying aware of risks is important. But what matters most isn’t prediction. We need to design for a deeper sort of resilience that Taleb calls “antifragility”. Taleb argues systems should be built to withstand – or even benefit from – shocks, rather than rely on perfect foresight.

For policymakers, this means ensuring regulation, supply chains and institutions are built to survive a range of major shocks. For individuals, it means diversifying our bets, keeping options open and resisting the illusion that history can tell us everything.

Above all, the biggest problem with the AI boom is its speed. It is reshaping the global risk landscape faster than we can chart its grey swans. Some may collide and cause spectacular destruction before we can react.
Jeopardy!'s Ken Jennings apologizes to John Pork after backlash over "AI-generated" label
2025-11-05 04:41:00
Jeopardy! host Ken Jennings has apologized after a 17 October episode aired incorrect information about internet personality John Pork. The clue wrongly stated that the viral meme was AI-generated; fans pointed out that it was created before AI tools became widely available to the public. Jennings issued the apology on TikTok, acknowledging the error and asking forgiveness for any offence caused.
Nvidia deepens India presence with $2 billion deep-tech alliance to mentor AI startups
2025-11-05 03:30:00
Leading chipmaker Nvidia has joined the India Deep Tech Alliance (IDTA) as a founding member and will invest $2 billion to train and mentor India's emerging deep-tech startups. The company plans to offer technical talks and training through its Nvidia Deep Learning Institute, focusing on AI systems, developer support and responsible deployment. The move comes amid growing global interest in India's expanding AI market and government initiatives to promote deep-tech research and innovation.
Jim Cramer recently recommended these 4 AI stocks, saying "sensible speculation" can change your life
2025-11-05 02:01:00
CNBC host Jim Cramer advised younger investors to consider picking individual stocks for sizeable gains, while warning against speculative names with no clear path to profitability. He recommended Apple (AAPL) and Nvidia (NVDA), praising their potential and leadership despite market concerns. Cramer stressed the importance of sensible speculation, suggesting investors over 25 hold one speculative stock, while younger investors who have done thorough research could hold at most two out of five.
AI "godmother" Fei-Fei Li says she is "proud to be different"
2025-11-05 00:18:15
Professor Fei-Fei Li, known as the "godmother" of AI, is one of seven pioneers awarded the 2025 Queen Elizabeth Prize for Engineering, presented by King Charles III. Li said she embraces the title as a way to highlight women's contributions to science and technology. She is honoured alongside Professor Yoshua Bengio, Professor Geoffrey Hinton, Professor John Hopfield, Nvidia founder Jensen Huang, Dr Bill Dally and Meta's chief AI scientist Dr Yann LeCun, all recognised for their work on modern machine learning. Li was singled out for ImageNet, a project that drove major advances in computer vision. She stressed the importance of data-driven AI and looked ahead to future milestones in which AI can interact with its environment and "surpass" humans in various fields. The award ceremony reflected the ongoing debate among scientists about AI's potential impact.
AI study reveals why super-recognisers excel at identifying faces
2025-11-05 00:01:00
Research using AI analysis of eye-tracking data from typical recognisers and super-recognisers reveals why super-recognisers excel at face identification: they explore faces more intelligently, not merely more broadly. The study shows that super-recognisers focus on regions carrying more identity cues, improving recognition performance rather than simply increasing the amount of information processed.