OurCoders (我们程序员)


All News

Amazon Doesn't Want You to Be Able to Contact Them

2025-12-09 06:16:41

Daily Kos is a website offering content on politics and activism. Key sections include the front page, comics, RSS feeds, subscription options, and information on privacy, terms of use, and advertising. The site emphasizes solidarity with the Black community and offers users various ways to get in touch and get involved.


2025-12-09 05:00:00

‘I feel it’s a friend’: quarter of teenagers turn to AI chatbots for mental health support

A study by the Youth Endowment Fund reveals that about 40% of 13- to 17-year-olds in England and Wales who are victims or perpetrators of youth violence turn to AI chatbots for mental health support, driven by factors like availability, privacy, and a perceived lack of judgment. The research highlights concerns over the adequacy of conventional mental health services and the potential risks of prolonged chatbot use, leading experts to call for children to receive human support rather than bots. Researchers also stress the importance of building evidence-based, youth-led regulatory frameworks.

It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but “chat”, as she came to know her AI “friend”, felt safer, less intimidating and, crucially, more available when it came to handling the trauma from the deaths of her young friends.

As she started consulting the AI model, the Tottenham teenager joined about 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to research among more than 11,000 young people. It found that both victims and perpetrators of violence were markedly more likely to be using AI for such support than other teenagers. The findings, from the Youth Endowment Fund, have sparked warnings from youth leaders that children at risk “need a human not a bot”. The results suggest chatbots are fulfilling demand unmet by conventional mental health services, which have long waiting lists and which some young users find lacking in empathy. The supposed privacy of the chatbot is another key factor driving use by victims or perpetrators of crimes.

After her friends were killed Shan, 18, not her real name, started using Snapchat’s AI before switching to ChatGPT, which she can talk to at any time of day or night with two clicks on her smartphone. “I feel like it definitely is a friend,” she said, adding that it was less intimidating, more private and less judgmental than her experience with conventional NHS and charity mental health support. “The more you talk to it like a friend it will be talking to you like a friend back. If I say to chat ‘Hey bestie, I need some advice’, chat will talk back to me like it’s my best friend. She’ll say, ‘Hey bestie, I got you girl’.”

One in four 13- to 17-year-olds have used an AI chatbot for mental health support in the past year, with black children twice as likely as white children to have done so, the study found. Teenagers were more likely to go online for support, including using AI, if they were on a waiting list for treatment or diagnosis or had been denied it, than if they were already receiving in-person support.

Crucially, Shan said, the AI was “accessible 24/7” and would not tell teachers or parents about what she had disclosed. She felt this was a considerable advantage over telling a school therapist, after her own experience of what she thought were confidences being shared with teachers and her mother. Boys who were involved in gang activities felt safer asking chatbots for advice about other, safer ways to make money than a teacher or parent who might leak the information to police or other gang members, putting them in danger, she said.

Another young person, who has been using AI for mental health support but asked not to be named, told the Guardian: “The current system is so broken for offering help for young people. Chatbots provide immediate answers. If you’re going to be on the waiting list for one to two years to get anything, or you can have an immediate answer within a few minutes … that’s where the desire to use AI comes from.”

Jon Yates, the chief executive of the Youth Endowment Fund, which commissioned the research, said: “Too many young people are struggling with their mental health and can’t get the support they need. It’s no surprise that some are turning to technology for help. We have to do better for our children, especially those most at risk. They need a human not a bot.”

There have been growing concerns about the dangers of chatbots when children engage with them at length. OpenAI, the US company behind ChatGPT, is facing several lawsuits, including from families of young people who have killed themselves after long engagements. In the case of the Californian 16-year-old Adam Raine, who took his life in April, OpenAI has denied it was caused by the chatbot. It has said it has been improving its technology “to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support”. The startup said in September it could start contacting authorities in cases where users start talking seriously about suicide.

Hanna Jones, a youth violence and mental health researcher in London, said: “To have this tool that could tell you technically anything – it’s almost like a fairytale. You’ve got this magic book that can solve all your problems. That sounds incredible.” But she is worried about the lack of regulation. “People are using ChatGPT for mental health support, when it’s not designed for that,” she said. “What we need now is to increase regulations that are evidence-backed but also youth-led. This is not going to be solved by adults making decisions for young people. Young people need to be in the driving seat to make decisions around ChatGPT and mental health support that uses AI, because it’s so different to our world. We didn’t grow up with this. We can’t even imagine what it is to be a young person today.”

Trump Allows Nvidia to Sell Advanced AI Chips to China

2025-12-09 02:13:58

US President Donald Trump lifted the ban on Nvidia selling its H200 AI chips to approved Chinese customers, with the stated aims of protecting national security and supporting American jobs. The decision followed lobbying by Nvidia CEO Jensen Huang and requires the company to pay the US government 15% of its China revenue. While some see the move as benefiting the US chip industry and global access to technology, concerns remain about China potentially using advanced AI chips for military applications.


Leonardo DiCaprio Says AI Can Be an ‘Augmentation Tool’ for Filmmakers but Often ‘Dissipates’ into ‘Internet Junk’: ‘It Has No Humanity’

2025-12-09 02:06:00

Leonardo DiCaprio, named Time's Entertainer of the Year and recognized for his role in One Battle After Another, discussed AI's potential impact on filmmaking. He sees AI as both an augmentation tool for young filmmakers and a threat to authentic artistic expression, since it cannot replicate human uniqueness. Like other Hollywood figures, DiCaprio expressed concern about AI's place in filmmaking, while acknowledging that it can lower costs in technical areas such as visual effects.


Aware Super CIO Warns of ‘Amber Light’ on AI Funding as Valuations Soar

2025-12-09 01:44:00

Aware Super's chief investment officer warned of an "amber light" on AI industry funding, but noted that earnings growth supports current valuations. He stressed that the market risks a correction if capital investment fails to translate into profits, particularly given the recent shift toward less stable financing methods. Warner also pointed to the interdependence between major tech companies' capital spending and broader economic conditions, noting that Microsoft is Aware's second-largest listed equity holding.


Nvidia vs. Broadcom: Which AI Chip Stock Is the Better Buy for 2026? | The Motley Fool

2025-12-09 01:00:00

Nvidia and Broadcom are expected to battle for dominance of the AI chip market in 2026. Nvidia leads with GPUs that dominate the data-center market, offering flexibility through its CUDA software platform. Broadcom, however, has gained attention by helping hyperscalers design custom AI ASICs that deliver cost efficiency and lower power consumption. While Nvidia will likely retain the lead, Broadcom's smaller revenue base and the explosive growth potential of its ASIC business give its stock the better outlook for 2026.


At AI's Biggest Gathering, Its Inner Workings Remain a Mystery

2025-12-09 00:36:00

This year's NeurIPS conference in San Diego drew a record 26,000 attendees. The conference focused on artificial intelligence, with an emphasis on interpretability: understanding how advanced AI systems actually work. Despite rapid progress in AI, fundamental aspects of these systems remain uncertain, sparking debate about the best approaches to achieving better understanding and reliability. The conference underscored AI's growing importance in scientific research, despite challenges in accurately measuring its effectiveness.


Michael Keaton and Sean Douglas Explore Grief, Nostalgia, and AI in the Short Film ‘Sweetwater’

2025-12-07 21:15:00

Sweetwater is a 21-minute film written by Sean Douglas and directed by Michael Keaton Douglas that explores the use of AI technology to soothe grief. The story follows Robert, who returns to his childhood home after losing his celebrity mother, where he is greeted by an AI recreation of her trained on publicly available information online. The film asks whether such new-age technology can offer a healthier alternative for processing grief, and is part of Google's AI on Screen initiative. Sean Douglas discussed the challenges of writing his first screenplay, while Keaton Douglas reflected on directing a short versus a feature, stressing the importance of storytelling within strict time limits. Both filmmakers aim to raise questions about AI's future impact in personal contexts rather than provide definitive answers.


Jamie Dimon Says Although AI Will Eliminate Some Jobs, ‘Maybe One Day We’ll Work Less Hard but Live Good Lives’ | Fortune

2025-12-07 17:57:00

JPMorgan Chase CEO Jamie Dimon expressed optimism about AI's impact on the economy, comparing it to historical innovations like the tractor and vaccines that had positive effects on society. He acknowledged potential job losses but emphasized the need for regulation and for adapting uniquely human skills such as critical thinking and emotional intelligence. Dimon also suggested that governments and the private sector take a phased approach to mitigate economic harm during the AI transition, and highlighted new job creation in AI-related infrastructure development.


Critical Flaws Found in AI Development Tools, Dubbed ‘IDEsaster’, Could Lead to Data Theft and Remote Code Execution

2025-12-07 14:26:28

A six-month investigation into AI-assisted development tools uncovered more than 30 security vulnerabilities enabling data exfiltration and remote code execution across popular IDEs, including Visual Studio Code and JetBrains products. The "IDEsaster" report stresses that every AI IDE and coding assistant tested was vulnerable, with at least 24 CVEs assigned. Security researcher Ari Marzouk noted that current threat models overlook the risks posed by autonomous AI agents embedded in IDE features. A long-term fix will require redesigning how IDEs manage AI agent interactions to ensure security.
