All News

Exclusive: Trump says Nvidia can sell powerful H200 AI chips to China
2025-12-09 10:50:00
U.S. President Trump announced that Nvidia will be allowed to sell its H200 chips to approved Chinese customers, subject to a 25% surcharge, in a bid to balance export-control policy with maintaining America's global technology leadership. The Biden administration imposed strict export restrictions intended to curb China's AI progress, but given the advances China has made in developing AI models and hardware despite those measures, some argue the curbs have fallen short.
Opinion | This is the arms race of the 21st century. Can the US keep up?
2025-12-09 10:03:27
In Ukraine, cheap remotely operated weapons dominate the battlefield, while China has tested synchronized drone flights that may be capable of autonomously hunting and killing. A meeting between President Joe Biden and Xi Jinping highlighted deepening technological distrust, with Xi's team wiping down surfaces to prevent the collection of DNA that could be used to design a bioweapon targeting him. Emerging technologies such as AI-designed bioweapons and cyberweapons are advancing rapidly, raising concerns about arms control and the need for ethical development of such tools. To stay ahead in these technological arms races while ensuring safety and ethics, the United States must coordinate public-private cooperation, invest in research and development, and weigh export controls.
House Democrats form AI task force to boost the industry's presence in Washington
2025-12-09 10:00:01
House Democrats are forming an AI task force to position themselves as leaders in the evolving field of AI policy. The House Democrats' task force on AI and the innovation economy will work with AI companies, stakeholders, and relevant congressional committees to build policy expertise. The move comes in response to intensifying lobbying and major campaign contributions from AI companies ahead of the 2026 midterm elections. Democrats aim to address state laws regulating AI that may conflict with federal standards, emphasizing their commitment to public health, safety, and economic well-being. Representatives Ted Lieu, Josh Gottheimer, and Valerie Foushee will lead the task force, with Zoe Lofgren and Frank Pallone serving as ex officio co-chairs.
Is it a bubble?
2025-12-09 08:00:00
This memo offers a detailed and thoughtful analysis of the potential impact of artificial intelligence (AI) on financial markets and society at large. The key points:

### Financial markets and an AI bubble
1. **Historical pattern**: The author argues that transformative technologies tend to generate excessive enthusiasm, producing bubbles in which infrastructure is overbuilt and asset prices become inflated.
2. **Current enthusiasm for AI**: While today's excitement around AI is real, it remains uncertain whether it will produce a bubble that fits the historical pattern.
3. **Losses when bubbles burst**: Amid the enthusiasm, companies tend to be assessed too favorably, which makes it hard to predict which will emerge as winners once the dust settles.
4. **Balancing risk and reward**: The author recommends a moderate stance on AI investing, noting that full participation in the potential upside also carries substantial risk of loss if the enthusiasm proves overdone.

### Social impact: job loss and purposelessness
1. **A labor-saving device**: The central concern is that AI will be an extraordinarily effective labor-saving device, potentially eliminating jobs by raising productivity.
2. **Skepticism about job creation**: While some argue that new jobs will emerge, as they have after past technological advances, the author is skeptical about where such jobs would come from in the context of AI.
3. **Productivity and GDP growth**: The worry is that AI may raise productivity (and thus GDP) while also eliminating many jobs, with no clear path for displaced workers to find new employment or afford the increased output.
4. **Universal basic income (UBI)**: The author speculates about government responses such as UBI, but worries about how to fund such programs and about the broader social problems tied to joblessness, including mental-health effects and drug addiction.
5. **Human talent vs. AI**: In professions that require judgment and insight beyond routine tasks, the author wonders whether humans can really be replaced, and whether there is a limit to how many highly skilled positions (e.g., doctors, lawyers) are needed.
6. **Social division**: Technology created by a small number of highly educated billionaires could displace millions of people, potentially deepening social and political divides.

### Conclusion
The memo closes with the author expressing serious concern about these issues, even while acknowledging the progress made over his lifetime. He asks optimists to explain why he might be wrong, and highlights specific worries about job losses, economic inequality, and the social consequences of technological change.

### Legal information and disclosures
The document includes standard legal disclaimers on usage rights and distribution restrictions, along with the disclaimer that past performance does not guarantee future results. It is for educational purposes only and should not be construed as an offer or solicitation to invest in any product or service described.
'Godfather of AI' to engineers: computer science degrees will hold their value for a long time, as long as you learn... - The Times of India
2025-12-09 07:55:00
Geoffrey Hinton, known as the "Godfather of AI," advises students and engineers not to abandon computer science degrees despite advances in AI. He stresses that a computer science degree builds foundational skills such as systems thinking, mathematics, and problem-solving, which will remain valuable for decades. Hinton also encourages secondary school students to learn to code, comparing it to learning Latin as an intellectually enriching exercise. He further suggests that aspiring AI researchers focus on critical thinking and core mathematical skills. Separately, Hinton recently expressed surprise at how long it took Google to catch up with OpenAI in the AI race, suggesting that Google is now beginning to pull ahead.
Anthropic researchers say the industry should stop building masses of AI agents: the real breakthrough is something simpler
2025-12-09 07:28:00
Anthropic researchers argue that the tech industry doesn't need more AI agents so much as "skills" that equip general-purpose agents with domain expertise and reusable workflows. Speaking at the AI Engineer Code Summit, Anthropic's Barry Zhang and Mahesh Murag said these skills help fill the context gaps of today's agents and are already being adopted by Fortune 100 companies for internal use. Despite enthusiasm about AI agents transforming office work, some industry experts caution against overstating their capabilities.
Amazon doesn't want you to be able to contact them
2025-12-09 06:16:41
Daily Kos is a website offering content on politics and activism. Key sections include the front page, comics, RSS feeds, subscription options, and information on privacy, terms of use, and advertising. The site emphasizes solidarity with the Black community and offers users various ways to connect and get involved.
'I feel it's a friend': quarter of teenagers turn to AI chatbots for mental health support
2025-12-09 05:00:00
A study by the Youth Endowment Fund reveals that about 40% of victims and perpetrators of youth violence in England and Wales aged 13-17 turn to AI chatbots for mental health support, driven by factors like availability, privacy, and perceived lack of judgment. The research highlights concerns over the adequacy of conventional mental health services and the potential risks associated with prolonged use of chatbots, leading experts to call for children to be offered human rather than bot support.

It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but "chat", as she came to know her AI "friend", felt safer, less intimidating and, crucially, more available when it came to handling the trauma from the deaths of her young friends.

As she started consulting the AI model, the Tottenham teenager joined about 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to research among more than 11,000 young people. It found that both victims and perpetrators of violence were markedly more likely to be using AI for such support than other teenagers.

The findings, from the Youth Endowment Fund, have sparked warnings from youth leaders that children at risk "need a human not a bot". The results suggest chatbots are fulfilling demand unmet by conventional mental health services, which have long waiting lists and which some young users find lacking in empathy. The supposed privacy of the chatbot is another key factor in driving use by victims or perpetrators of crimes.

After her friends were killed Shan, 18, not her real name, started using Snapchat's AI before switching to ChatGPT, which she can talk to at any time of day or night with two clicks on her smartphone. "I feel like it definitely is a friend," she said, adding that it was less intimidating, more private and less judgmental than her experience with conventional NHS and charity mental health support. "The more you talk to it like a friend it will be talking to you like a friend back. If I say to chat 'Hey bestie, I need some advice'. Chat will talk back to me like it's my best friend, she'll say, 'Hey bestie, I got you girl'."

One in four 13- to 17-year-olds have used an AI chatbot for mental health support in the past year, with black children twice as likely as white children to have done so, the study found. Teenagers were more likely to go online for support, including using AI, if they were on a waiting list for treatment or diagnosis or had been denied it than if they were already receiving in-person support.

Crucially, Shan said, the AI was "accessible 24/7" and would not tell teachers or parents about what she had disclosed. She felt this was a considerable advantage over telling a school therapist, after her own experience of what she thought were confidences being shared with teachers and her mother. Boys who were involved in gang activities felt safer asking chatbots for advice about other, safer ways to make money than a teacher or parent who might leak the information to police or other gang members, putting them in danger, she said.
Another young person, who has been using AI for mental health support but asked not to be named, told the Guardian: "The current system is so broken for offering help for young people. Chatbots provide immediate answers. If you're going to be on the waiting list for one to two years to get anything, or you can have an immediate answer within a few minutes … that's where the desire to use AI comes from."

Jon Yates, the chief executive of the Youth Endowment Fund, which commissioned the research, said: "Too many young people are struggling with their mental health and can't get the support they need. It's no surprise that some are turning to technology for help. We have to do better for our children, especially those most at risk. They need a human not a bot."

There have been growing concerns about the dangers of chatbots when children engage with them at length. OpenAI, the US company behind ChatGPT, is facing several lawsuits, including from families of young people who have killed themselves after long engagements. In the case of the Californian 16-year-old Adam Raine, who took his life in April, OpenAI has denied it was caused by the chatbot. It has said it has been improving its technology "to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support". The startup said in September it could start contacting authorities in cases where users start talking seriously about suicide.

Hanna Jones, a youth violence and mental health researcher in London, said: "To have this tool that could tell you technically anything – it's almost like a fairytale. You've got this magic book that can solve all your problems. That sounds incredible." But she is worried about the lack of regulation. "People are using ChatGPT for mental health support, when it's not designed for that," she said. "What we need now is to increase regulations that are evidence-backed but also youth-led. This is not going to be solved by adults making decisions for young people. Young people need to be in the driving seat to make decisions around ChatGPT and mental health support that uses AI, because it's so different to our world. We didn't grow up with this. We can't even imagine what it is to be a young person today."
Trump allows Nvidia to sell advanced AI chips to China
2025-12-09 02:13:58
U.S. President Donald Trump lifted the ban on Nvidia selling its H200 AI chips to approved Chinese customers, with the stated aims of protecting national security and supporting American jobs. The decision followed lobbying by Nvidia CEO Jensen Huang and requires the company to pay the U.S. government 15% of its China revenue. While some see the move as good for the U.S. chip industry and for global access to the technology, concerns remain about the possibility of China using advanced AI chips in military applications.
Leonardo DiCaprio says AI can be an "augmenting tool" for filmmakers but often "dissipates" into "internet junk": "It has no humanity"
2025-12-09 02:06:00
Leonardo DiCaprio, named Time magazine's Entertainer of the Year and recognized for his role in "One Battle After Another", discussed AI's potential impact on filmmaking. He sees AI as both an augmenting tool for young filmmakers and a threat to authentic artistic expression, since it cannot replicate human uniqueness. Like other Hollywood figures, DiCaprio voiced concern about AI's place in filmmaking, while acknowledging that it could reduce costs in technical areas such as visual effects.