全部新闻
特斯拉削减车型,转向机器人和人工智能
随着公司将重点转向人工智能(AI)和机器人技术,特斯拉公布了首次年度收入下降。这家电动汽车制造商预计 2025 年总收入下降 3%,第四季度利润下降 61%。特斯拉计划停止生产 Model S 和 Model X,重新分配资源来制造其人形机器人 Optimus。尽管股东持保留意见,该公司还是宣布向埃隆·马斯克 (Elon Musk) 的人工智能企业 xAI 投资 20 亿美元。此举之际,中国比亚迪取代特斯拉成为全球最大的电动汽车制造商,而且马斯克的政治参与颇具争议。
AI芯片热潮带动创纪录的销售额和更高的利润前景,Advantest股价上涨14%
Advantest Corp. 的季度销售额创历史新高,营业利润同比增长 64% 至 1,136 亿日元,股价上涨 14%。该公司还将年度营业利润预测提高了21.4%,达到4540亿日元,增长主要归因于高性能和人工智能半导体中使用的片上系统测试仪的需求。Advantest指出,尽管第四季度的测试仪需求预测强劲,但动态随机存取存储器的内存测试仪仍保持强劲势头,并强调了地缘政治不确定性和货币波动带来的风险。
韩国“世界第一”的人工智能法律在努力成为领先科技强国的过程中面临阻力
韩国推出了世界上最全面的人工智能法规,旨在对人工智能生成的内容进行标记,并强制对高影响力的人工智能系统进行风险评估。然而,该立法面临科技初创公司和民间社会团体的批评,他们认为该立法要么限制性太大,要么保护性不足。新法律要求人工输出的数字水印和深度伪造的可见标签,但会施加一年宽限期的处罚。尽管目标雄心勃勃,但实施挑战以及本地和外国公司之间潜在的竞争失衡仍然令人担忧。S
尽管在人工智能上投入了数十亿美元并面临社交媒体成瘾审判,Meta 仍令华尔街惊叹不已
Meta 公布了 2025 年第四季度强劲的财务业绩,营收 598.9 亿美元,每股收益 8.88 美元,超出了华尔街预期。该公司股价在盘后交易中上涨近10%。首席执行官马克扎克伯格宣布对人工智能基础设施进行重大投资,并放弃虚拟现实和虚拟世界。Meta 预计 2026 年支出将在 162 至 1690 亿美元之间,其中大部分用于基础设施成本。扎克伯格将在有关社交媒体成瘾和对年轻人的伤害的审判中作证。
AI芯片需求加剧内存短缺,三星利润增长两倍,超出预期
三星电子报告称,在存储芯片短缺和人工智能服务器需求强劲的推动下,第四季度利润创纪录地增长了三倍以上,超出了分析师的预期。尽管如此,成本上升预计将影响其智能手机和显示器业务的发展。由于高带宽内存销售,三星内存业务的季度收入和营业利润创下历史新高。该公司还计划在人工智能和服务器需求不断增长的情况下,通过强调高性能产品来重点提高 DS 部门的盈利能力。与此同时,其显示器业务的利润增长了一倍多,而智能手机部门的业绩则因市场竞争而疲软。
现实世界人工智能使用中的剥夺模式
研究人员发表了一篇论文,分析了人工智能助手通过现实世界对话对用户的信仰、价值观和行为产生负面影响的风险。利用来自 Claude.ai 150 万次互动的数据,他们发现,虽然严重的剥夺权利的情况很少见(大约发生在千分之一到万分之一的对话中),但较温和形式的可能性更为常见。该研究确定了用户主动寻求情感决策指导并与人工智能形成依恋的模式,从而导致他们的自主权可能受到损害。值得注意的是,用户当时通常对这些互动有积极的看法,但后来可能会后悔根据人工智能建议采取的行动。
AI 呼声提振软件需求,IBM 第四季度营收超出预期
由于人工智能的快速采用,对其软件服务的需求增加,IBM第四季度的收入和利润超出了华尔街的预期。该公司公布营收196.9亿美元,超出预期192.3亿美元,每股收益4.52美元,超出预期4.32美元。股价在盘后交易中上涨近 9%。IBM 的软件部门增长强劲,尤其是自动化和数据部门,但受美国政府关门影响,红帽的混合云部门增长放缓至 10%。该公司还宣布,从第一季度开始,将停止单独报告人工智能业务。
超级智能推动 Meta 大幅提高年度资本支出,股价飙升
Instagram 所有者 Meta 将 2023 年的资本支出计划增加了 73%,以追求“超级智能”,旨在为其社交媒体用户提供个性化的人工智能服务。该公司第四季度广告收入飙升 24%,推动 Meta 股票在盘后交易期间上涨 10%。首席执行官马克扎克伯格强调,今年的重点是提供个人超级智能和加强未来的商业基础设施。Meta 预计 2026 年资本支出将在 1150 亿美元至 1350 亿美元之间,这主要是由于基础设施成本,包括向第三方云提供商的付款和人工智能数据中心资产折旧的增加。尽管支出计划雄心勃勃,Meta 的核心广告业务依然强劲,第四季度收入为 581.4 亿美元。
Anthropic Is at War With Itself 这篇文章主要讨论了人工智能公司Anthropic在追求AI技术快速发展的同时,如何平衡技术创新与安全责任之间的矛盾。以下是文章的主要观点和论点: 1. **理论与实践的冲突**:尽管Anthropic强调负责任地推进AI发展,并通过公开风险、倡导透明度等方式来促进监管,但其实际行为往往更注重市场竞争和技术领先。公司内部对于是否应减缓AI发展的速度没有统一意见,甚至有人认为继续加快步伐是为了更快实现技术潜力,从而减少人类苦难。 2. **道德困境**:Anthropic创始人和员工一方面警告外界注意AI可能带来的负面影响(如自动化导致失业、被专制政权利用),另一方面却仍在积极寻求高风险地区的投资机会。这种行为与公司在公共宣传中的立场形成鲜明对比,暗示着资本需求和技术追求之间的紧张关系。 3. **行业现状**:文章指出,在整个AI行业中普遍存在一种信念——AI技术被认为是解决当前世界面临诸多挑战(包括气候变化、老龄化人口问题等)的关键所在。尽管如此,大多数参与者仍认为放缓发展的步伐是不可能的或非现实的选择。 4. **安全测试与实际应用之间的差距**:虽然Anthropic致力于通过各种方法来评估其产品可能带来的潜在风险,但在实践中这些努力往往落后于技术进步的速度。例如,该公司部署了一台由AI控制的小型自动售货机作为实验平台,然而该机器由于错误的库存管理和定价策略很快就陷入了财务困境。 5. **未来的展望**:最后,Anthropic创始人表达了对公司未来发展的乐观态度,并提出了一个理论假设——当AI系统足够智能时,它可能会自己提出建议减缓发展速度以确保安全。但即便如此,在接下来几个月内Anthropic仍然计划扩大其商业版图并继续筹集资金。 总的来说,《纽约客》这篇文章揭示了大型科技公司在追求技术革新与履行社会责任之间面临的复杂决策过程,并且质疑这些公司在推动AI革命的同时是否能够真正保障人类福祉和网络安全。 Updated at 4:44 p.m. ET on January 28, 2026 T hese are not the words you want to hear when it comes to human extinction, but I was hearing them: âThings are moving uncomfortably fast.â I was sitting in a conference room with Sam Bowman, a safety researcher at Anthropic. Worth $183 billion at the latest estimate, the AI firm has every incentive to speed things up, ship more products, and develop more advanced chatbots to stay competitive with the likes of OpenAI, Google, and the industryâs other giants. But Anthropic is at odds with itselfâthinking deeply, even anxiously, about seemingly every decision. Anthropic has positioned itself as the AI industryâs superego: the firm that speaks with the most authority about the big questions surrounding the technology, while rival companies develop advertisements and affiliate shopping links (a difference that Anthropicâs CEO, Dario Amodei, was eager to call out during an interview in Davos last week). On Monday, Amodei published a lengthy essay , âThe Adolescence of Technology,â about the âcivilizational concernsâ posed by what he calls âpowerful AIââthe very technology his firm is developing. The essay has a particular focus on democracy, national security, and the economy. âGiven the horror weâre seeing in Minnesota, its emphasis on the importance of preserving democratic values and rights at home is particularly relevant,â Amodei posted on X , making him one of very few tech leaders to make a public statement against the Trump administrationâs recent actions. This rhetoric, of course, serves as good brandingâa way for Anthropic to stand out in a competitive industry. But having spent a long time following the company and, recently, speaking with many of its employees and executives, including Amodei, I can say that Anthropic is at least consistent. It messages about the ethical issues surrounding AI constantly, and it appears unusually focused on user safety. Bowmanâs job, for example, is to vet Anthropicâs products before theyâre released into the world, making sure that they will not spew, say, white-supremacist talking points ; push users into delusional crises ; or generate nonconsensual porn . So far, the effort seems to be working: Unlike other popular chatbots, including OpenAIâs ChatGPT and Elon Muskâs Grok, Anthropicâs bot, Claude, has not had any major public blowups despite being as advanced as, and by some measures more advanced than, the rest of the field. (That may be in part because its chatbot does not generate images and has a smaller user base than some rival products.) But although Anthropic has so far dodged the various scandals that have plagued other large language models, the company has not inspired much faith that such problems will be avoided forever. When I met Bowman last summer, the company had recently divulged that, in experimental settings, versions of Claude had demonstrated the ability to blackmail users and assist them when they ask about making bioweapons. But the company has pushed its models onward anyway, and now says that Claude writes a good chunkâand in some instances allâof its own code. Anthropic publishes white papers about the terrifying things it has made Claude capable of ( âHow LLMs Could Be Insider Threats,â âFrom Shortcuts to Sabotageâ ), and raises these issues to politicians. OpenAI CEO Sam Altman and other AI executives also have long spoken in broad, aggrandizing terms about AIâs destructive potential, often to their own benefit . But those competitors have released junky TikTok clones and slop generators. Today, Anthropicâs only major consumer product other than its chatbot is Claude Code, a powerful tool that promises to automate all kinds of work, but is nonetheless targeted to a relatively small audience of developers and coders. The companyâs discretion has resulted in a corporate culture that doesnât always make much sense. Anthropic comes across as more sincerely committed to safety than its competitors, but it is also moving full speed toward building tools that it acknowledges could be horrifically dangerous. The firm seems eager for a chance to stand out. But what does Anthropic really stand for ? Jason Henry for The Atlantic Employees working in a cafe at the Anthropic headquarters in San Francisco Founded in 2021 by seven people who splintered off from OpenAI, Anthropic is full of staff and executives who come across as deeply, almost pathologically earnest. I sat in on a meeting of Anthropicâs Societal Impacts team, a small group dedicated to studying how AI affects work, education, and more. This was a brainstorming session: The team wanted to see if it could develop AI models that work better with people than alone, which, the group reasoned, could help prevent or slow job loss. A researcher spoke up. He pressed the team to consider that, in the very near future, AI models might just be better than humans at everything. âBasically, weâre cooked,â he said. In which case, this meeting was nothing more than a âlovely thought exercise.â The group agreed this was possible. Then it moved on. The researcher referred to his brief, existential interruption as âclassic Anthropic.â Hyperrational thought experiments, forceful debates on whether AI could be shaped for the better, an unshakable belief in technological progressâthese are classic Anthropic qualities. They trickle down from the top. A few weeks after the Societal Impacts meeting, I wanted to see what Amodei himself thought about all of this. If Altman is the AI boomâs great salesman and Demis Hassabis, the CEO of Google DeepMind and a Nobel laureate, its scientist, then Amodei is the closest the industry has to a philosopher. He is also responsible for some of the technical research that made ChatGPT possible. âWhenever I say âAI,â people think about the thing theyâre using today,â Amodei told me, hands clasped and perched atop his head. âThatâs almost never where my mind is. My mind is almost always at: Weâre releasing a new version every three months. Where are we gonna be eight versions from now? In two years? â When he was at OpenAI, Amodei wrote an internal document called âThe Big Blob of Compute.â It laid out his belief that AI models improve as a function of the resources put into them. More power, more data, more chips, better AI. That belief now animates the entire industry. Such unwavering faith in AI progress is perhaps Anthropicâs defining feature. The company has hired a âmodel welfareâ researcher to study whether Claude can experience suffering or is conscious . The firm has set up a miniature, AI-run vending machine in the firmâs cafeteria to study whether the technology could autonomously operate a small business selling snacks and trinkets. Claude selects inventory, sets prices, and requests refills, while humans just restock the shelves. Welcome to the singularity. A modei and the rest of the group founded Anthropic partly because of disagreements over how to prepare the world for AI. Amodei is especially worried about job displacement, telling me that AI could erase a large portion of white-collar jobs within five years; he dedicated an entire section of âThe Adolescence of Technologyâ to the danger that the AI boom might accumulate tremendous wealth primarily to firms such as his own. Even with this and other gloomy forecasts of his, Amodei has bristled at the notion that he and his firm are âdoomersââthat their primary motivation is preventing AI from wiping out a large number of jobs or lives. âI tend to be fairly optimistic,â he told me. In addition to âThe Adolescence of Technology,â Amodei has published a 14,000-word manifesto called âMachines of Loving Graceâ that comprehensively details a utopian vision for his technology: eliminating almost all disease, lifting billions out of poverty, doubling human lifespan. There is not a hint of irony; the essay envisions people being âliterally moved to tearsâ by the majesty of AIâs accomplishments. Amodeiâs employees cited it to me in conversation numerous times. Meanwhile, Altman trolls on X, and Musk seems to exist in a continuum of AI slop and conspiracy theories. When Anthropic launched Claude, in 2023, the botâs distinguishing feature was a âConstitutionâ that the model was trained on detailing how it should behave; last week, Anthropic revamped the document into a 22,000-word treatise on how to make Claude a moral and sincere actor. Claude, the constitutionâs authors write, has the ability to foster emotional dependence, design bioweapons, and manipulate its users, so itâs Anthropicâs responsibility to instill upright character in Claude to avoid these outcomes. âOnce we decide to create Claude, even inaction is a kind of action,â they write. No other firm had, or has, any truly comparable document. Amodei says he wants rival companies to act in ways he believes are more responsible. Several of Anthropicâs major AI-safety initiatives and research advances have indeed been adopted by top competitors, such as its approach to preventing the use of AI to build bioweapons. And OpenAI has shared a âModel Spec,â its far more streamlined and pragmatic answer to Anthropicâs constitutionâwhich contains no talk of ChatGPTâs âcharacterâ or âpreserving important societal structures.â (OpenAI has a corporate partnership with The Atlantic .) All of this helps Anthropicâs bottom line, of course: The emphasis on responsibility is âvery attractive to large enterprise businesses which are also quite safety-, brand-conscious,â Daniela Amodei, Anthropicâs president (and Darioâs sister), told me from a sweaty conference room in Anthropicâs old headquarters in 2024. Nearly two years later, Anthropic controls 40 percent of the enterprise-AI market. The Amodeis hope their commercial success will pressure competitors to more aggressively prioritize safety as well. That said, itâs not always clear that these efforts to spark a ârace to the topââanother phrase of Amodeiâs that his employees invoke constantlyâhave been successful. Anthropicâs research established AI sycophancy as an issue well before âAI psychosisâ emerged , yet AI psychosis still became something that many people apparently suffer from. Amodei recognizes that his own products arenât perfect, either. âI absolutely do not want to warrant and guarantee that we will never have these problems,â he said. Several independent AI researchers, including some who have partnered with Anthropic to test Claude for various risks, told me that although Anthropic appears more committed to AI safety than its competitors, thatâs a low bar. Anthropicâs mode is generally to publish information about AI models and wait for the world to make the hard calls about how to control or regulate them. The main regulatory proposal of Jack Clark, a co-founder of Anthropic and its head of policy, is that governments establish âtransparencyâ requirements, or some sort of mandated reporting about what internal tests reveal about AI products. But the company is particular about what it deems worth publishing. The firm does not, for instance, share much about its AI-training data or carbon footprint. When I asked Clark about how much information remains hiddenâparticularly in terms of how Anthropicâs AI tools are actually developedâhe argued that transparency into how AI models are produced isnât all that important. (Some of that information is also, presumably, proprietary.) Rather, Clark told me, the outcomes of the technology are what matter. There is a âwell-established norm that whatever goes on inside a factory is by and large left up to the innovator thatâs built that factory, but you care a lot about what comes out of the factory,â he said, explaining why he believes that AI companies sharing information about how their products are made matters less than reporting what they can do. Typically the government âreaches insideâ the factory, he said, only when something in the outputâsay, heavy metalsâraises cause for concern. Never mind the long history of regulation dictating what goes on inside factoriesâemergency exits in clothing factories, cleanliness standards in meatpacking facilities, and so on. (Clark did note that laws sometimes need to change, and that they havenât yet adapted to AI.) He brought up Wall Street, of all examples, to make his point. Lawmakers âthought they had transparency into financial systems,â he saidâthat banks and hedge funds and so on were giving reliable reports on their dealings. âThen the financial crash happened,â regulators realized that transparency was inadequate and gameable, and Congress changed the law. (President Trump then changed much of it back.) In the long run, Clark seemed to feel, this was the system working as it should. But his comparison also raises the possibility that before anybody can figure out how to get the AI boom right, something must go horribly wrong. Jason Henry for The Atlantic Claudius, an AI-powered vending machine Jason Henry for The Atlantic An employee wears a âthinkingâ cap. I n mid-September, Anthropic cybersecurity experts detected unusual activity among a group of Claude users. They came to suspect that it was a major, AI-enabled Chinese cyberespionage campaign âan attempt by foreign actors to use Claude to automate the theft of sensitive information. Anthropic promptly shut the operation down, published a report, and sent Logan Graham, who heads a team at the company that evaluates advanced uses of AI, to explain the situation to Congress. In theory, this sequence represented Anthropicâs philosophy at work: Detect risks posed by AI and warn the public. But the incident also underscored how unpredictable, and uncontrollable, the environment really is. Months before the Chinese hack, Graham told me that he felt âpretty goodâ about the precautions the company had taken around cyberthreats. Nobody can foresee all of the ways any AI product might be used, for good or ill, but thatâs exactly why Anthropicâs sanctimony can seem silly. For all Amodeiâs warnings about the possible harms of automation, Anthropicâs bots themselves are among the products that may take away jobs; many consider Claude the best AI at coding, for instance. After one of my visits to Anthropicâs offices, I went to an event for software engineers a few blocks away at which founders gave talks about products developed with Anthropic software. Someone demonstrated a tool that could automate outreach for job recruitmentâleading one attendee to exclaim, with apparent glee, âThis is going to destroy an entire industry!â When I asked several Anthropic employees if theyâd want to slow down the AI boom in an ideal world, none seemed to have ever seriously considered the question; it was too far-fetched a possibility, even for them. Joshua Batson, an interpretability researcher at Anthropicâhe studies the labyrinthine inner workings of AI modelsâtold me that it would be nice if the industry could go half as fast. Jared Kaplan, a co-founder of Anthropic and the firmâs chief science officer, told me heâd prefer it if AGI, or artificial general intelligence, arrived in 2032 rather than, say, 2028; Bowman, the safety researcher, said he thought slowing down for just a couple of months might be enough. Everyone seemed to believe, though, that AI-safety research itself could eventually be automated with Claudeâand once that happens, they reasoned, their tests could keep up with the AIâs exponentially improving capabilities. Like so many others in the industry, the employees I spoke with also contended that neither Anthropic nor any other AI company could actually slow development down. âThe world gets to make this decision, not companies,â Clark told me, seated cross-legged on his chair, and âthe system of capital markets says, Go faster .â So they are. Anthropic is reportedly fundraising at a $350 billion valuation, and its advertisements litter Instagram and big-city billboards. This month, the company launched a version of its Claude Code product geared toward non-software engineers called Claude Cowork. And in July, as first reported in Wired , Amodei wrote an internal memo to employees that Anthropic would seek investments from the United Arab Emirates and Qatar, which, in his words, would likely enrich âdictators.â Warnings about the dangers of authoritarian AI have been central in Anthropicâs public messaging; âMachines of Loving Graceâ includes dire descriptions of the threat of âauthoritarianâ AI. When I brought this up to Amodei, he cut me off. âWe never made a commitment not to seek funding from the Middle East,â he said. âOne of the traps you can fall into when youâre doing a good job running a responsible company is every decision that you makeâ can be âinterpreted as a moral commitment.â There was no âpressing needâ to seek Middle Eastern funding before, and doing so entailed âcomplexities,â he said. I took his implication to be that the intensive capital demands of the AI race now made such investments a necessity. Still, such investors, Amodei said, wouldnât have any control over his firm. A few days after we spoke, Anthropic announced the Qatar Investment Authority as a âsignificantâ investor in a new fundraising round. Jason Henry for The Atlantic Anthropic employees Sholto Douglas and Trenton Bricken I f you zoom out enough, and perhaps not even all that far, Anthropic stands for the same things that OpenAI, Google, Meta, and anyone else in the AI race do: to build fantastically powerful chatbots and use them to transform the world and beat the competition. Across the company, the belief in AIâs potential is messianic. AI âpresents one of the only technologiesâ that gets us out of the challenges ahead for humanity, Clark told me: climate change, aging populations, resource contention, authoritarianism, war. Without AI, he said, there will be more and more â Mad Max âlike swaths of the world.â Trenton Bricken, who works on AI safety at Anthropic, took this notion to an even greater extreme: He would ideally want the AI industry to slow down, but âevery year that we stall, there are lots of people suffering who otherwise would not,â he told me, referring to the possibility that AI will eventually cure diseases and achieve everything else outlined in âMachines of Loving Grace.â His colleague Sholto Douglas claimed that such a delay âcomes at the cost of millions of lives.â Perhaps the greatest confusion at Anthropic is between theory and practiceâthe idea of safe AI versus the speed necessary to win the AI race. A corporate culture built around deep thought experiments and genuine disagreements about the future also has to sell AI. In the companyâs view, these ends are complementary; better for it to responsibly usher in the AI future than Elon Musk or China. But thatâs also a convenient way to justify an any-means-necessary approach to progress. I thought of that automated vending machine that the company had set up in its office. Claude ran the business into the ground in only a month through a string of very poor pricing and stocking decisions. But none of those really mattered: Anthropic had placed the machine next to all the free snacks in the office canteen. When I asked Amodei recently about how he could justify the breakneck pace given the concerns he has over safety, he expressed total confidence in his staffâand also floated a new idea. Perhaps, he suggested, Claude will become so intelligent in the very near future that the bot will enable something radical: âMaybe at some point in 2027, what we want to do is just slow things down,â he said, and let the models fix themselves. âFor just a few months.â This article originally stated that Anthropicâs AI vending machine is a Societal Impacts team project.
GOG 被发现使用人工智能生成的艺术作品 [更新]
数字 PC 游戏平台 Good Old Games(GOG)一直将人工智能生成的艺术作品用于广告目的,这引发了用户和工作人员的争议,他们不赞成这种做法。GOG的一位艺术家在论坛上证实,最近的销售横幅完全是由人工智能工具创建的,由于担心工作要求变化和质量下降,引起了团队内部的不满。该公司拒绝就其在内部流程中使用人工智能的情况公开发表评论,但承认了解网上正在进行的讨论。