
All News

Meta Wins Price-Target Boost Amid AI Push
2025-06-26 16:39:00
UBS has raised its price target for Meta Platforms (META) from $683 to $812, citing the company's potential to monetize AI products despite recent controversy. Analyst Stephen Ju maintained a Buy rating on Meta stock, noting that while capital expenditures are reflected in forecasts, long-term AI revenue benefits are not yet included. Ju highlighted Meta's reduced need for enterprise AI spending, since it primarily uses its own technology. Meta also agreed to invest $14.9 billion for a 49% stake in Scale AI, which provides data-labeling services essential for training large language models. The stock rose more than 2% on Thursday, bringing its year-to-date gain to 22%.
At Work, at School, and Online, It's Now AI vs. AI
2025-06-26 16:30:06
As corporations and startups race for future dominance in an AI arms race, chatbots battling chatbots has become the new norm. This dynamic is evident both in large-scale corporate battles and in everyday situations such as job hunting and education, where individuals use AI to gain a competitive edge, prompting others to employ AI in defense or as a countermeasure. These developments point to underlying systemic problems, but they also suggest potential paths toward de-escalation, even though near-term escalation seems likely as entrenched systems increasingly mediate human-AI interaction.
AI Earnings Do Not Yet Justify the Investment Mania
2025-06-26 16:29:43
AI Adoption Stalls Amid Lack of Training and Support
2025-06-26 16:02:00
A new survey by Boston Consulting Group shows that enterprise adoption of artificial intelligence (AI) has plateaued, held back by obstacles including insufficient training, limited access to tools, and a lack of management support. Although 72% of workers use AI regularly, usage among frontline employees sits slightly below last year's 51%, highlighting problems such as inadequate training (only a third report receiving sufficient training), a shortage of proper tools, and insufficient encouragement from leadership. Employee concerns about job displacement are high, especially in regions where AI is more prevalent. In addition, despite optimism about the future importance of agentic AI technology, current adoption remains low at 13%, and many employees do not yet fully understand its capabilities.
AI Music Platform Suno Acquires WavTool, Entering the DAW Market – Music Business Worldwide
2025-06-26 15:50:05
AI music startup Suno has acquired WavTool, a browser-based digital audio workstation (DAW) company, aiming to integrate professional-grade features and AI capabilities into its existing platform. The move follows Suno's recent upgrades and comes as the DAW market, valued at $4.1 billion in 2024, is projected to reach $6.65 billion by 2032. WavTool's technology includes VST plugin compatibility and AI features such as stem separation and automated MIDI generation. The acquisition, whose financial terms were not disclosed, is intended to advance Suno's mission of empowering musicians through innovative tools.
Xiaomi AI Glasses Debut with Over 8 Hours of Battery Life, Alongside OpenWear Stereo Pro Earbuds
2025-06-26 15:30:01
Xiaomi unveiled several new devices at a major event in China, including the Xiaomi AI Glasses and OpenWear Stereo Pro earbuds. The AI Glasses offer 2K video recording with voice control, a 12MP camera, five microphones for clear audio capture, and a battery that lasts 8 hours and 36 minutes. They come in three colors and are priced at CNY 1,999. The OpenWear Stereo Pro earbuds feature an ergonomic design with titanium hooks, IP54 water and dust resistance, a five-driver system for high-quality sound, and AI translation capabilities. They are available in gold, black, and silver for CNY 999.
AI Gets Better at Getting Better at Writing Code
2025-06-26 15:21:12
Recent research reveals an advanced system called Darwin Gödel Machines (DGMs) that uses large language models and evolutionary algorithms to recursively improve coding agents, achieving significant performance gains on programming benchmarks. The development suggests the potential for major productivity gains, but it also raises concerns about the safety of AI self-improvement. The researchers added guardrails to keep the system interpretable and aligned with human directives, but the long-term implications and potential risks of the technology remain contested.
In April, Microsoft's CEO said that artificial intelligence now wrote close to a third of the company's code. Last October, Google's CEO put the number at around a quarter. Other tech companies can't be far off. Meanwhile, these firms create AI that will presumably be used to help programmers further. Researchers have long hoped to fully close the loop, creating coding agents that recursively improve themselves. New research reveals an impressive demonstration of such a system. Extrapolating, one might see a boon to productivity, or a much darker future for humanity.
"It's nice work," said Jürgen Schmidhuber, a computer scientist at the King Abdullah University of Science and Technology (KAUST), in Saudi Arabia, who was not involved in the new research. "I think for many people, the results are surprising. Since I've been working on that topic for almost 40 years now, it's maybe a little bit less surprising to me." But his work over that time was limited by the tech at hand. One new development is the availability of large language models (LLMs), the engines powering chatbots like ChatGPT.
In the 1980s and 1990s, Schmidhuber and others explored evolutionary algorithms for improving coding agents, creating programs that write programs. An evolutionary algorithm takes something (such as a program), creates variations, keeps the best ones, and iterates on those. But evolution is unpredictable. Modifications don't always improve performance.
So in 2003, Schmidhuber created problem solvers that rewrote their own code only if they could formally prove the updates to be useful. He called them Gödel machines, named after Kurt Gödel, a mathematician who'd done work on self-referencing systems. But for complex agents, provable utility doesn't come easily. Empirical evidence may have to suffice.
The Value of Open-Ended Exploration
The new systems, described in a recent preprint on arXiv, rely on such evidence. In a nod to Schmidhuber, they're called Darwin Gödel Machines (DGMs). A DGM starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement.
The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges. Some evolutionary algorithms keep only the best performers in the population, on the assumption that progress moves endlessly forward. DGMs, however, keep them all, in case an innovation that initially fails actually holds the key to a later breakthrough when further tweaked. It's a form of "open-ended exploration," not closing any paths to progress. (DGMs do prioritize higher scorers when selecting progenitors.)
The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent.
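The select-mutate-score loop described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: `propose_change` and `benchmark` are hypothetical stand-ins for the LLM rewrite step and the SWE-bench harness, and agents are reduced to integers so the skeleton of the algorithm is visible.

```python
import random

def propose_change(agent):
    """Stand-in for the LLM call that rewrites an agent's own code.
    In a real DGM this is a large language model editing source files;
    here a 'mutation' just derives a new agent ID."""
    return agent + 1

def benchmark(agent):
    """Stand-in for scoring an agent on a coding benchmark.
    Deterministic pseudo-score in [0, 1), keyed on the agent."""
    return random.Random(agent).random()

def dgm_loop(iterations=80):
    # Open-ended exploration: the archive keeps *every* agent ever
    # created, not just the current best, in case a low scorer holds
    # the key to a later breakthrough.
    archive = [0]
    scores = {0: benchmark(0)}
    for _ in range(iterations):
        # Select a progenitor, weighting higher scorers more heavily.
        weights = [scores[a] + 1e-6 for a in archive]
        parent = random.choices(archive, weights=weights, k=1)[0]
        # Ask the (simulated) LLM for one change, then score it.
        child = propose_change(parent)
        if child not in scores:
            scores[child] = benchmark(child)
            archive.append(child)  # kept even if it scores worse
    best = max(archive, key=scores.get)
    return best, scores[best]
```

Because the archive never discards agents, a lineage can pass through scoring dips, as the SWE-bench family tree in the study did, and still reach the best final agent.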
"We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."
[Figure: The first coding agent (numbered 0) created a generation of new and slightly different coding agents, some of which were selected to create new versions of themselves. The agents' performance is indicated by the color inside the circles, and the best-performing agent is marked with a star. Jenny Zhang, Shengran Hu, et al.]
Critically, the DGMs outperformed an alternate method that used a fixed external system for improving agents. With DGMs, agents' improvements compounded as they improved themselves at improving themselves. The DGMs also outperformed a version that didn't maintain a population of agents and just modified the latest agent. To illustrate the benefit of open-endedness, the researchers created a family tree of the SWE-bench agents. If you look at the best-performing agent and trace its evolution from beginning to end, it made two changes that temporarily reduced performance. So the lineage followed an indirect path to success. Bad ideas can become good ones.
[Figure: The black line on this graph shows the scores obtained by agents within the lineage of the final best-performing agent. The line includes two performance dips. Jenny Zhang, Shengran Hu, et al.]
The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a "big step forward" as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement.
Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. (Google DeepMind's AlphaEvolve designs better basic algorithms and chips, and found a way to accelerate the training of its underlying LLM by 1 percent.) DGMs can theoretically score agents simultaneously on coding benchmarks and also on specific applications, such as drug design, so they'd get better at getting better at designing drugs. Zhang said she'd like to combine a DGM with AlphaEvolve.
Could DGMs reduce employment for entry-level programmers? Jiang sees a bigger threat from everyday coding assistants like Cursor. "Evolutionary search is really about building really high-performance software that goes beyond the human expert," he said, as AlphaEvolve has done on certain tasks.
The Risks of Recursive Self-improvement
One concern with both evolutionary search and self-improving systems, and especially their combination, as in DGMs, is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.) In 2017, experts met in Asilomar, Calif., to discuss beneficial AI, and many signed an open letter called the Asilomar AI Principles.
In part, it called for restrictions on "AI systems designed to recursively self-improve." One frequently imagined outcome is the so-called singularity, in which AIs self-improve beyond our control and threaten human civilization. "I didn't sign that because it was the bread and butter that I've been working on," Schmidhuber told me. Since the 1970s, he's predicted that superhuman AI will come in time for him to retire, but he sees the singularity as the kind of science-fiction dystopia people love to fear. Jiang, likewise, isn't concerned, at least for the time being. He still places a premium on human creativity. Whether digital evolution defeats biological evolution is up for grabs. What's uncontested is that evolution in any guise has surprises in store.
AI May Finally See DPUs Take Off in Enterprise Networking
2025-06-26 15:19:00
In 2013, Amazon Web Services introduced enhanced networking through specialized network cards known as SmartNICs, or data processing units (DPUs). Other major players such as VMware, NVIDIA, Intel, and AMD followed, integrating DPUs to accelerate data processing and strengthen security. Despite early backing from these vendors, adoption has remained slow outside of service providers that rent out server capacity. The rise of AI, however, may drive broader DPU usage for optimizing database workloads, managing NVMe devices, and running inference tasks, all while freeing up host resources. Red Hat's recent support for DPUs in OpenShift 4.19 further signals potential growth in this area as AI demand increases.