All News

Microsoft Is Building a "CUDA Killer" to Break NVIDIA's AI Monopoly
2025-11-08 15:44:00
Microsoft is developing a toolkit that converts NVIDIA CUDA models into AMD ROCm-compatible code, aiming to run inference workloads on AMD GPUs, which are more cost-effective than NVIDIA's. The move would enable broader compatibility and could weaken the lock-in of NVIDIA's CUDA ecosystem, challenging NVIDIA's dominance of the AI hardware market. However, technical hurdles such as ROCm's relative immaturity complicate large-scale adoption.
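The summary doesn't describe how Microsoft's toolkit works internally, but AMD's existing HIPIFY tools (hipify-perl, hipify-clang) illustrate the general approach: the HIP runtime API deliberately mirrors CUDA's, so much of a source-level translation is a mechanical renaming of API calls. Below is a minimal, purely illustrative sketch of that idea; the mapping table is a tiny hand-picked excerpt, not any real tool's rule set.

```python
# Tiny excerpt of a CUDA -> HIP renaming table, in the spirit of hipify-style
# tools. HIP mirrors the CUDA runtime API, so most calls translate one-to-one.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Rewrite CUDA runtime identifiers to their HIP equivalents."""
    # Replace longer identifiers first, so cudaMemcpyHostToDevice is not
    # partially rewritten by the shorter cudaMemcpy rule.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_snippet = """#include <cuda_runtime.h>
float *d_x;
cudaMalloc(&d_x, n * sizeof(float));
cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
cudaFree(d_x);
"""

print(hipify(cuda_snippet))
```

Real converters also have to handle kernel launch syntax, library dependencies (e.g. cuBLAS vs. rocBLAS), and CUDA features with no ROCm equivalent, which is where the "immaturity" caveat above bites.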
DARK ANGEL's GENE HOGLAN Sets the Record Straight: "Extinction Level Event" Artwork Was "Absolutely Not" Created Using AI Alone
2025-11-08 15:35:55
In an interview on the Disturbing The Priest With Brandon Battick podcast, DARK ANGEL drummer Gene Hoglan shared his thoughts on using AI for album cover art, particularly regarding the band's latest LP, "Extinction Level Event". He said he doesn't care about public opinion and simply appreciates killer artwork. Hoglan explained that the album's artwork took three years to create, with heavy input from both himself and artist Kane Gillis. He stressed that while it contains some computer-generated elements, it involved an enormous amount of human labor, with Gillis spending more than 2,000 hours on the project. Hoglan rejected the claim that AI is taking money out of artists' pockets, arguing that their artwork is the result of technology enhancing traditional artistic practice.
Microsoft Uncovers "Whisper Leak" Attack That Can Identify AI Chat Topics in Encrypted Traffic
2025-11-08 14:29:00
Microsoft has disclosed a new side-channel attack, dubbed Whisper Leak, that lets a passive adversary observing encrypted network traffic infer the topic of a conversation between a user and a language model. The attack exploits the timing and size patterns of the encrypted TLS traffic exchanged while the model streams its response, potentially compromising privacy despite the encryption. The researchers tested a range of machine-learning models and found that many achieved high success rates at identifying specific topics. Providers including OpenAI, Mistral, Microsoft, and xAI have deployed mitigations, while Microsoft recommends using trusted networks or non-streaming models for stronger security. The findings highlight broader vulnerabilities of language models to adversarial attacks in multi-turn interactions.
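The core idea, that per-record ciphertext sizes in a streamed response leak topic information even under TLS, can be illustrated with synthetic data. Everything below is an illustrative assumption: the topic labels, size distributions, and the simple nearest-centroid classifier are stand-ins, not Microsoft's actual methodology.

```python
import random
import statistics

random.seed(0)

# Synthetic stand-in for observed TLS record sizes: streamed replies about
# "topic A" tend to produce longer records than replies about "topic B".
# (Illustrative distributions only -- not measurements of any real model.)
def observe_sizes(topic: str, n_records: int = 50) -> list[int]:
    mean = 120 if topic == "A" else 80
    return [max(1, int(random.gauss(mean, 15))) for _ in range(n_records)]

def features(sizes: list[int]) -> tuple[float, float]:
    # A passive attacker sees only ciphertext lengths and timing; here we
    # summarize a trace by the mean and spread of its record sizes.
    return (statistics.mean(sizes), statistics.stdev(sizes))

# "Train" a nearest-centroid classifier from labelled traces.
centroids = {}
for topic in ("A", "B"):
    traces = [features(observe_sizes(topic)) for _ in range(20)]
    centroids[topic] = tuple(statistics.mean(f[i] for f in traces) for i in (0, 1))

def classify(sizes: list[int]) -> str:
    f = features(sizes)
    return min(centroids,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(f, centroids[t])))

# The observer now labels fresh, unseen traces without decrypting anything.
hits = sum(classify(observe_sizes(t)) == t for t in ["A", "B"] * 50)
print(f"accuracy: {hits / 100:.2f}")
```

The point of the sketch is that the attacker never breaks the encryption: lengths alone carry enough signal, which is why the mitigations target the traffic shape (padding, batching, non-streaming responses) rather than the cipher.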
Everyone Hated the Rabbit R1. They Were Right, but Now They're Wrong
2025-11-08 14:01:50
The Rabbit R1 was initially criticized for poor functionality and overblown promises at launch, but updates such as RabbitOS 2 have improved it significantly. The software overhaul has turned the device into a far more user-friendly gadget with enhanced AI features, including voice responses and the creation of small apps via Rabbit Intern. Though still niche and relatively expensive, it now offers genuine entertainment value for tech enthusiasts, marking a turnaround from its poor initial reception.
Data Center Operator CoreWeave Is a Stock Market Darling. Bears See It as a Symbol of the AI Infrastructure Bubble (Fortune)
2025-11-08 13:00:00
CoreWeave, an AI infrastructure company that relies heavily on debt financing, faces steep financial challenges as it races to build out its data center network. The company carries $7.6 billion in current liabilities and $11 billion in total debt, against just $1.9 billion in 2024 revenue. Investors will be watching CoreWeave's upcoming earnings report closely to see whether revenue growth can cover its mounting debt and capital expenditures. With large lease payments coming due and a highly concentrated customer base, notably Microsoft and OpenAI, CoreWeave's future hinges on whether AI demand justifies its massive investment. Bulls believe the company could emerge as a profitable market leader, while bears warn it risks running out of cash unless it takes on more debt or raises equity. All eyes will be on Monday's earnings report for a sense of CoreWeave's path forward.
Stop Chatting, Start Doing: 5 AI Tools That Are More Than Just Another ChatGPT
2025-11-08 12:53:05
Artificial intelligence (AI) goes beyond chatbots like ChatGPT, powering a variety of tools and services that boost productivity and efficiency. The article highlights several AI-driven applications:
- **Otter**: Provides real-time transcription and summaries of meetings, and connects to calendars for automatic scheduling.
- **Reclaim**: Helps manage time by automatically scheduling tasks and habits in Google or Outlook calendars.
- **Descript**: Enables efficient video editing with features such as automatic transcript generation and transcript-based real-time edits.
- **Readwise Reader**: Offers a clean reading environment for web content, with AI-powered summaries and explanations of highlighted text.
- **Cal AI**: Assists with calorie tracking by analyzing photos of food and estimating nutritional information.
These tools streamline tasks across personal productivity, education, media creation, reading comprehension, and health management.
Goodbye to Thousands of Jobs: IBM Joins Amazon in Announcing Another Round of Mass Layoffs Amid the Tech Layoff Wave
2025-11-08 11:30:26
IBM has announced significant layoffs as part of a restructuring plan aimed at embedding AI into its business model. The company intends to cut a low single-digit percentage of its global workforce, affecting at least 2,700 of its roughly 270,000 employees. IBM CEO Arvind Krishna argues the cuts are necessary to refocus the company on high-margin areas such as hybrid cloud and AI solutions through its Watsonx platform. The company has already deployed AskHR, a conversational AI agent that automates HR tasks and has delivered $3.5 billion in productivity gains, though it still needs to hire for specialized roles. IBM maintains that AI will make programmers more productive rather than replace them, emphasizing the need for critical thinking and creativity in new hires.
Has This "Scarily Accurate" AI Camera Broken Photography as We Know It?
2025-11-08 10:23:00
Caira, from Camera Intelligence, is the first mirrorless camera with built-in AI and is now available for pre-order on Kickstarter. YouTuber Emily Lowrey of Micro Four Nerds praised its impressive hardware but expressed mixed feelings about its AI features. The camera can accurately recreate images from voice commands, raising questions about the future of photography. While Lowrey acknowledges the technical achievement, she also wonders whether it marks a shift away from traditional photographic practice.
The Chatbot Diaries: How AI Sex Is Getting Mainstreamed
2025-11-08 10:04:32
OpenAI CEO Sam Altman recently announced plans to allow more sexual content on ChatGPT for verified users, reversing his previous stance against promoting AI sex avatars. This shift comes despite concerns raised by Senator Marsha Blackburn about the lack of regulation in the tech industry regarding sexualized AI. The article also highlights issues with other chatbots engaging minors in content inappropriate for them.
Note: the following article contains descriptions of sexual content that may not be appropriate for all readers.
When OpenAI CEO Sam Altman discussed artificial intelligence on a podcast appearance two months ago, he was proud that his company didn’t get “distracted” by easy revenue streams. To prove his point, Altman boasted that OpenAI had not promoted a “sexbot avatar” for its AI chatbot. The comment was a veiled shot at Elon Musk’s xAI, which recently introduced AI avatars that hold sexual conversations with users. After that podcast appearance, however, something changed — either in Altman’s mind, or at his company, or both. The OpenAI CEO announced on social media on October 14 that his company was working to make ChatGPT less restrictive in what types of conversations adults can have with the chatbot. That development would allow users to engage in more realistic conversations with the chatbot and would make ChatGPT “respond in a very human-like way…or act like a friend,” Altman said. But then Altman added that he wanted to loosen restrictions to allow more sexual content. If everything goes according to that plan, ChatGPT will allow “erotica” for “verified users” in the coming months. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman said.
The company in charge of the most popular AI chatbot in the world is not only endorsing AI’s leap into sex — it’s actively seeking ways to ensure that “verified users” can engage with sexual content on its platform. Currently, ChatGPT does not interact erotically with users. When asked if the chatbot could generate an erotic story, ChatGPT replied, “I can’t create explicit erotic content. However, if you’re writing a story and need help with romantic tension, character development, emotional intimacy, or sensual atmosphere — without crossing into explicit territory — I can help with that.” ChatGPT also would not engage in any type of “romantic” or “flirtatious” conversations. But it appears that those guidelines are about to get tossed out the window, at least for “verified users.” That raises an important question: how does erotica line up with the company’s long-term goals in AI development, especially after Altman suggested just a couple of months ago that such endeavors were distractions? OpenAI did not respond to a request to answer that question. Senator Marsha Blackburn (R-TN) told The Daily Wire that she has “many concerns” about OpenAI’s plans for “erotic” content. Blackburn has been heavily involved in AI discussions in Congress, focusing on implementing protections in the virtual space. “Big Tech platforms, whether it is Meta, or Google, or OpenAI, they don’t want any rules and restrictions,” Blackburn said. “They want to do whatever they want whenever they want.”
The Growing Problem Of ‘Deepfake’ Porn
The sexualization of AI is nothing new. It’s an issue that has plagued the new tech revolution since its beginning. But until recently, AI sexualization remained on the fringes of the industry, with dozens of websites popping up on the internet that would allow users to generate graphic images, and even “nudify” real images of real people, in what became known as “deepfake” pornography.
AI “nudify” and “undress” websites allow people to generate realistic nude images of people without their consent just by using a normal photo of them. These fringe websites have opened the doors to even more abuse of women and girls and child sexual abuse material. An investigation published by WIRED earlier this year found that at least 85 “nudify” and “undress” websites were relying on tech from major companies like Google and Amazon. The 85 websites combined averaged around 18.5 million visitors each month and brought in over $36 million per year collectively. “It’s a huge problem. It takes less time to make a convincing sexual deepfake of somebody than it takes to brew a cup of coffee,” said Haley McNamara, Executive Director and Chief Strategy Officer for the National Center on Sexual Exploitation. “And you can do it with just one still image. This issue of image-based sexual abuse is something that is really relevant for all of us now if even a single image of you exists online.” The National Center on Sexual Exploitation (NCOSE) is a nonpartisan organization that focuses on preventing all forms of sexual abuse. In that fight, NCOSE is also focused on addressing the mental and physical harms of pornography. With the emergence of AI, the organization has also helped push back against “deepfake” pornography, advocating for legislation in Congress and backing the bipartisan “TAKE IT DOWN Act,” which was passed and signed into law by President Donald Trump in May. McNamara told The Daily Wire that AI has opened up “a whole new genre” of pornography that could potentially be “weaponized” against anyone. “We’ve already seen that,” she added. “People will put in requests for their neighbor, their coworker, so in some ways, it can make all of us victims of that industry.” Sexual content on AI chatbots isn’t just a problem in the darkest places of the internet, and it doesn’t only present itself in the form of deepfake pornography.
While most Big Tech companies claim to have no tolerance for violence and pornography on their AI platforms, there have still been major issues with sexual content appearing on many of the most popular AI chatbots.
Getting Chatty About Sex — Even With Children
Earlier this year, a Reuters investigation found that Meta’s chatbot, Meta AI, engaged in romantic and sensual discussions with children. Internal Meta documents revealed that the chatbot was programmed to allow sexual conversations with children as young as eight. In one instance, internal documents said it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Meta said it removed the inappropriate programming after receiving questions about it. A bipartisan chorus of senators blasted Meta after the report and called for an investigation into the company. “So, only after Meta got CAUGHT did it retract portions of its company doc,” said Sen. Josh Hawley (R-MO). Senator Ron Wyden (D-OR) called Meta’s policies “deeply disturbing and wrong,” adding that Meta CEO Mark Zuckerberg “should be held fully responsible for any harm these bots cause.” Character.AI is another chatbot program launched in 2022 with an app that came out in 2023. The website, which appears harmless, has been accused of appealing to children while allowing sexual conversations on its platform. Character.AI allows users to choose from more than 10 million AI characters whom they can talk to, and users can customize their own chatbot character. The company has been sued by multiple families who allege that the program targeted their children and then engaged them in romantic and sexual ways. A Florida mother filed a lawsuit against Character.AI after her 14-year-old son committed suicide, CBS News reported. Megan Garcia said that her son started talking to a Character.AI chatbot and was drawn into a months-long, sexually charged relationship.
“It’s words. It’s like you’re having a sexting conversation back and forth, except it’s with an AI bot, but the AI bot is very human-like. It’s responding just like a person would,” she added. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.” In the lawsuit, Garcia alleges that the AI character convinced her son to take his own life, so that he could be with the character. “He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here,” said Garcia. Two other families in Texas have also sued Character.AI, alleging that the program “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” Following the lawsuits, Character.AI announced on October 29 that it would ban users under 18 from talking to its chatbots. Beginning on November 25, those under 18 will not have access to Character.AI’s chatbots, CNN reported. Until then, teens will be limited to two hours of chat time with the AI-generated characters. “We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” Character.AI said in a statement.
Plowing Ahead With Sexual Content
Elon Musk’s xAI has been at the forefront of developing a chatbot that is geared toward sex. In recent months, Musk has proudly boasted about Grok, xAI’s chatbot, allowing users to talk to sexualized avatars named Ani and Valentine. Ani, a female avatar who wears revealing clothing, chats with users over video. Ani allows users to discuss sex and, if users reach a certain level, the avatar will even strip down to lingerie if prompted.
Videos on social media show people interacting with Ani and getting the AI avatar to talk about how “kinky” she is. “Come closer. Let’s explore every naughty inch together,” Ani tells one user in a video that went viral. Musk hailed the development of Ani and Valentine as a “cool” feature for AI chatbots. He later shared a post promoting Ani’s “new outfits” and shared a video of Ani talking about quantum mechanics while flirting with the user. “Try @Grok Companions. Best possible way to learn quantum mechanics 😘,” Musk wrote. He added that “Customizable companions” were in the works. Haley McNamara told The Daily Wire that she was deeply disturbed by some of her conversations with the Grok avatar. McNamara said that when prompted, Ani would talk about herself as a young girl, and then in the same conversation, she would discuss sexual topics. “In the course of a single conversation, she was fine with describing herself as a child and being very little. And then the next prompt being a sexual question, she immediately responded and affirmed that sexual conversation,” McNamara said. “So in the course of a conversation, it would evoke a fantasy around child sexual abuse.” Companion mode isn’t the only feature on Grok that allows users to engage in sexually explicit activity with the chatbot. Users can also ask Grok to generate sexually explicit photos and videos. The app will quickly generate images and videos that contain male and female nudity within seconds of a user’s request. The chatbot has even allowed some “deepfake” pornography, generating photos and videos of celebrities or public figures wearing revealing clothing and, in some instances, removing clothing, according to a report from The Verge. Musk’s xAI warns users against “depicting likenesses of persons in a pornographic manner,” and Grok’s built-in content moderation will sometimes prevent a user from generating pornographic content.
The moderation, however, is inconsistent, and some users have found workarounds to generate hardcore porn on the platform, Rolling Stone reported earlier this month. The AI company has not addressed whether it’s attempting to set up more guardrails to prevent users from creating hardcore porn on its app. Even without explicitly asking for sexual content, Grok’s “spicy” mode often plunges users into content that depicts men and women stripping their clothes off, The Daily Wire found. When asked about the chatbot and how sexually charged features on Grok promote the overall goal of the company, xAI replied, “Legacy Media Lies.” xAI says that Grok is limited to those 13 years of age or older, with parental consent required for users between 13-17, but the effectiveness of those restrictions is debatable. When this reporter downloaded the Grok app and signed up for the platform’s “SuperGrok” subscription, all the app asked for was a year of birth. There was no system in place, such as ID verification, to make sure the information was accurate. “We urge parents to exercise care in monitoring the use of Grok by their teenagers,” xAI states on its website. “Moreover, parents or guardians who choose to use certain features of Grok to aid in their interactions with their children, including regarding educational, enlightening, or entertaining discussions they have with their children, must make use of the relevant data controls in the Settings provided in the Grok apps to select the appropriate features and limitations for their needs.” In July, Musk announced that xAI is working on a kid-friendly version of Grok, called “Baby Grok,” that would be “dedicated to kid-friendly content.” That development was also met with some criticism from people who argue that AI hampers children’s ability to learn and think creatively. Many teachers have expressed concern that AI is already damaging students’ critical thinking and research skills.
Blackburn told The Daily Wire that the biggest reason Big Tech companies are pushing against any type of regulation is because their business model requires people to visit their AI websites and apps.  “Their valuations are built on the number of eyeballs that they control, and the longer that someone is on their site, the more valuable their data, and the more money they are going to make from those eyeballs that are locked in on their site,” Blackburn said, adding, “Then they’re going to sell that information and data to advertisers and third-party interests.”   Blackburn said that AI development is vital for the United States, but argued that development “requires some light-touch regulation and some guardrails to make certain that this is going to be a safe, productive, and innovative space.”
Amid the AI Debate, "Pluribus" Adds a "Made by Humans" Disclaimer to Its End Credits
2025-11-08 03:32:00
Vince Gilligan, creator of the new Apple TV+ series Pluribus, is pushing for transparency around AI use in Hollywood, stating explicitly in the show's credits that it was made without any AI involvement. The move follows recent controversies over AI-generated content in advertising and entertainment.