
All News

In China, AI is no longer optional for some kids. It's part of the curriculum

2026-01-27 10:00:00

As part of a government push to sharpen the country's technological competitiveness, Chinese primary school students are adding artificial intelligence (AI) to their coursework. Fifth grader Li Zichen demonstrated a robot programmed with AI to lift and move building blocks, while Song Haoyue used AI for graphic design. The Ministry of Education requires schools to incorporate AI instruction starting in third grade, covering fundamentals, data, and coding, with more advanced topics such as intelligent agents in fifth grade. Parents support the initiative, believing it prepares their children for a technology-driven future, though debate continues over potential risks such as over-reliance on AI.


Leaked emails reveal Bank of America's struggle with Nvidia AI: "You have to help us when the local auto mechanic is driving a race car!"

2026-01-27 10:00:00

Nvidia faces a challenge as Bank of America struggles to adopt its enterprise AI software, highlighting the difficulties regulated industries such as banking encounter when implementing cutting-edge technology. Internal emails show that although Bank of America purchased Nvidia's "AI Factory," a comprehensive AI systems solution, deployment has been difficult because of operational and regulatory hurdles. Nvidia executives acknowledged the need for better support to help customers deploy their products effectively. The situation underscores a broader industry challenge: turning AI infrastructure purchases into real-world deployments across industries.


OpenAI VP Matt Knight resigns, shares "note" sent to the company on Twitter; CEO Sam Altman replies - Times of India

2026-01-27 08:10:00

Matt Knight, OpenAI's vice president and chief information security officer (CISO), has announced his resignation after more than five years with the company. In a social media post, Knight reflected on building OpenAI's security program from its early stages to protecting millions of users worldwide. He expressed pride in security that has helped define AI innovation and emphasized user trust. CEO Sam Altman praised Knight's contributions and wished him well in his future endeavors.


Micron to invest $24 billion in Singapore plants as AI boom strains global memory supply

2026-01-27 06:33:14

Micron Technology has committed roughly $24 billion to expand its wafer fabrication operations in Singapore, adding 700,000 square feet of cleanroom space at its existing NAND manufacturing site. Production is expected to begin by the end of 2028, driven by surging demand for NAND technology from AI and data-centric applications. Micron is also building a $7 billion advanced packaging plant in Singapore for high-bandwidth memory production. The expansion will create about 1,600 jobs and strengthen Singapore's position in the global semiconductor supply chain.


Geopolitics in the Age of Artificial Intelligence

2026-01-27 05:00:00

This piece offers an analytical framework for thinking about the development of artificial intelligence (AI) and its geopolitical implications. The author argues that the future of AI is uncertain, and that policymakers facing the unknown and the complex should plan strategy with probabilistic thinking. A summary of the main points:

1. **Build a matrix of possible futures**: Construct a matrix of eight possible futures from three key dimensions: the degree of AI progress (whether it approaches superintelligence), the speed at which the technology can be replicated, and China's posture toward frontier technology.
2. **Select a baseline scenario**: Policymakers first need to identify the scenario they consider most likely and treat it as the "base case."
3. **Strategic flexibility and risk management**: Evaluate the effectiveness of policies against the chosen base case, and consider how to prevent or mitigate harms that could arise in the riskiest futures (for example, World One).
4. **Dynamic replanning**: As conditions change, plans must adapt quickly. This includes a flexible response mechanism so that adjustments can be made promptly when signals suggest a different future is emerging.
5. **Greater transparency and better debate**: Making assumptions explicit and mapping arguments onto the matrix reduces misunderstanding between opposing sides and makes debate more productive.
6. **An adaptive policy framework**: Given the complexity and uncertainty of AI development, the policy system must be able to evolve as new information appears.

In sum, the article stresses that a systematic, probability-oriented approach is essential for making sound decisions in the face of an uncertain future. It encourages leaders to treat AI as a changeable domain and to learn to navigate it, the better to shape the geopolitical landscape ahead.

Everyone has a theory of artificial intelligence. Some believe the technology is progressing toward superintelligence—powerful AI that will bring epochal changes beyond any previous technology. Others expect that it will boost productivity and scientific discovery but will follow a more uneven and potentially less dramatic path. People also disagree about how easily breakthroughs can be replicated. Some argue that rivals will fast-follow (that is, quickly imitate), whereas others believe catching up will become slower and costlier, giving first movers lasting advantage. And whereas many are sure China is determined to beat the United States at the frontier, others insist it is focused on deployment of existing technology while seeking to distill and reproduce leading-edge American innovations once they appear.

Every confident policy argument rests on hidden assumptions about which of these stories is true. Those prioritizing frontier innovation assume breakthroughs will compound and be difficult to replicate, whereas those focused on spreading American systems abroad often assume the opposite. If those assumptions are wrong, the strategies built on them will waste resources and could cost the United States its lead. Betting everything on a single story is tempting but dangerous. Washington does not need another prediction about the AI age. It needs a way to make choices under uncertainty—one that secures the United States' advantage across multiple possible futures and adapts as the shape of the AI era comes into view.

EIGHT WORLDS

However the AI future ultimately unfolds, U.S. strategy should begin with a clear definition of success. Washington should use AI to strengthen national security, broad-based prosperity, and democratic values both at home and among allies. When aligned with the public good, AI can drive scientific and technological progress to improve lives; help address global challenges such as public health, development, and climate change; and sustain and extend American military, economic, technological, and diplomatic advantages vis-à-vis China. The United States can do all of this while responsibly managing the very real risks that AI creates. The challenge is how to get there.

To make hidden assumptions explicit and to test strategies against different futures, those thinking about AI strategy should consider a simple framework. It turns on three questions: Will AI progress accelerate toward superintelligence, or plateau for an extended period? Will breakthroughs be easy to copy, or will catching up become difficult and costly? And is China truly racing for the frontier, or is it putting its resources elsewhere on the assumption that it can imitate and commodify later? Each question has two plausible answers.
Considering every combination yields a three-dimensional matrix—a 2×2×2 diagram with eight possible worlds.

The first axis is the nature of AI progress. At one end lies superintelligence: an AI that far outpaces humans and is capable of recursive self-improvement, teaching itself to become ever smarter and inventing ever more new things. At the other end lies bounded and jagged intelligence: impressive scientific, economic, and military applications, but not a singular break with history. It is bounded because the progress it makes eventually hits limits, at least for a while. And it is jagged because it is uneven; systems may reach incredible performance in areas such as math or coding but struggle with judgment, creativity, or certain physical applications. If progress leads to superintelligence, even a narrow lead could prove decisive, justifying massive frontier investments. If it is bounded and jagged, channeling unlimited resources to moonshots is less compelling than prioritizing adoption and diffusion.

The second axis is the ease of catching up—the fast-follow problem. In one world, catching up is easy. Breakthroughs can be copied quickly through espionage; leaked weights, in which a trained model's internal parameters are stolen or released; innovative training on older hardware; or model distillation, in which a less capable system is trained to imitate a more advanced one. In the other, catching up is hard: frontier capability depends on the full technological stack—proprietary hardware, institutional expertise, vast and often unique datasets, a vibrant ecosystem of talent, and structural factors that cannot be foreseen. The model, or software layer, may be easy to copy, but the quality and scale of hardware, infrastructure, and human capital behind training and inference may be far more difficult to reproduce. When catching up is easy, the contest is more about diffusion, embedding American systems abroad before rivals can spread their own. When it is hard, diffusion still matters, but strategy places greater emphasis on defending the underlying foundations of frontier capability—that is, the inputs and know-how that allow advances to compound over time. Across the whole axis, the question is not whether AI spreads, but how quickly, to whom, and on what terms.

[Photo: AI processors on display at a tech conference in Tongxiang, China, November 2025. Tingshu Wang / Reuters]

The third axis is China's strategy. At one extreme, Beijing is racing aggressively to the frontier, funding massive training runs and competing labs. At the other extreme, Beijing is not racing but prioritizing adoption and diffusion and occasionally producing large models to signal progress and spur the United States into focusing on the frontier. China may not have a perfectly coherent national plan—indeed, different institutions within the country may act differently—but at the system level, China's behavior will still approximate either racing or not racing. This dimension of the framework focuses on China because, at present, it is the United States' dominant competitor at the frontier. If other actors emerge, the matrix would need to adjust to reflect their racing calculus, as well.

Reality is, of course, more complicated than any diagram. More axes could be added, and each axis could be treated as a spectrum. China may pursue a middle path in frontier R & D. Catching up may be only somewhat hard. AI may be truly powerful but still have certain limitations.
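To make the combinatorics concrete, the following minimal Python sketch enumerates the eight worlds. It is illustrative only: the axis labels are shorthand, and the ordering is chosen so that the numbering matches the essay's World One through World Eight.

```python
from itertools import product

# The three binary axes described above; labels are shorthand, ordered so
# that enumeration matches the essay's World One through World Eight.
AXES = [
    ("superintelligence", "bounded and jagged"),  # nature of AI progress
    ("hard to catch up", "easy to catch up"),     # the fast-follow problem
    ("China racing", "China not racing"),         # China's strategy
]

# Every combination of the three axes yields one of the eight possible worlds.
for i, world in enumerate(product(*AXES), start=1):
    print(f"World {i}: " + ", ".join(world))
```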
Although considering binary outcomes can make strategic planning easier, policymakers can still account for the intermediate possibilities by thinking probabilistically along each axis. A partial Chinese investment strategy, for instance, increases the odds that Beijing narrowly follows the United States or even unexpectedly closes the gap. Finally, policymakers' own decisions can shape which AI future emerges, at least on the margins. U.S. actions can make catching up harder or easier, particularly by tightening or loosening export controls. Whether China races or holds back will depend in part on how Beijing judges the pace of AI progress and the difficulty of catching up. Still, by making uncertainty part of the policy framework, policymakers will at least be forced to confront their own assumptions and plan for multiple futures rather than one.

SOURCES OF AI POWER

Before turning to that planning exercise, it is worth pausing to ask two questions: Who actually sets U.S. AI strategy? And what tools does Washington have to shape the trajectory of AI? After all, the government doesn't own the country's leading labs or decide what they build. It can't set production targets or direct investment flows the way Beijing can. Yet Washington's policy choices and signaling significantly influence the AI ecosystem, even if indirectly.

Many American policies amount to an implicit subsidy for the domestic AI industry. Export controls and investment restrictions have limited China's access to advanced chips and U.S. capital. They have raised the value of American and allied firms by constraining their strongest competitors and channeling private capital toward them. Expectations amplify that effect. When senior officials describe AI leadership as a national priority, companies and investors anticipate favorable rulemaking, administrative streamlining, and closer coordination with government. Those assumptions influence how much risk firms take on and where investors place their bets—perhaps even more than a slow-to-deploy congressional appropriation would.

Washington's direct support complements these signals. R & D tax credits, infrastructure investments, federal research grants, and a host of executive branch decisions—on permitting, immigration, and much else—collectively influence where and how AI capacity grows. Meanwhile, federal procurement and partnership is becoming a meaningful demand signal itself as agencies begin testing and adopting AI systems at scale. If diffusion becomes as strategically important as frontier breakthroughs, Washington may need to use more of the tools at its disposal, offering partners a trusted alternative to Beijing's AI stack and working through institutions such as the Development Finance Corporation to fund deployment abroad in places the market alone will not serve. This also includes thinking about how open or closed American AI systems should be. The United States must decide whether to rely on tightly controlled proprietary models or promote open-source alternatives as a way to shape global adoption.

Still, the private sector remains the engine of this race, and its incentives do not always align with the country's interests. Many leading labs in the United States are betting on superintelligence, pouring resources into massive training runs rather than safe deployment or broad diffusion. Some would prefer to build and operate the infrastructure for large-scale training runs overseas, drawn by looser rules, cheaper energy, and additional capital.
Managing that tension will remain one of Washington's most difficult tasks.

The United States' strength has never been central planning but deploying a mix of tools to direct a decentralized system toward shared goals. It creates policy incentives, shapes expectations, and coaxes capital toward national purpose. How to use these tools to maintain U.S. leadership in AI depends on which future ultimately emerges. Some policies that make sense in one scenario may be counterproductive in another. But a few priorities will hold across most of them—core elements of national power that most versions of the AI future are likely to require, even as their relative importance varies from one world to another.

Compute, or computing power, remains the foundation of AI capability. Control over chips, data centers, and the energy to run them determines who can train and deploy the systems that set the pace of progress. Robotics and advanced manufacturing extend that power into the physical world, turning digital intelligence into productive capacity. None of it endures without a strong industrial-scientific base. The United States needs basic research both to advance today's technologies and to explore new approaches to AI development; talent, both homegrown and attracted from around the world; the manufacturing capacity to build at scale; and energy that keeps it all running. If AI firms lack sufficient access to electric power, in particular, that bottleneck could limit overall progress.

Risk management, often regarded as a constraint because it can slow deployment and limit experimentation, can be a source of stability and legitimacy. It's what keeps competition from collapsing due to unintended escalation from accidents, deliberate misuse of AI systems, or loss of control resulting from the deployment of systems whose behavior humans can no longer reliably control. Just as important is ensuring that safety protocols and domestic political support develop fast enough to keep pace with capability gains. Some futures give Washington room to build that foundation; others compress the timeline.

Then there's the question of diffusion—the spread and adoption of AI systems abroad. The systems that take root will decide whose values and governance ideals define the digital order, and which country or countries draw the most economic and strategic gains. Beijing already treats AI governance itself as a strategic export, using its systems, standards, and regulatory templates to shape how other countries use and oversee the technology. Washington demonstrates conviction on diffusion in theory but has yet to prove it in practice.

U.S. allies and partners are the last critical piece of this puzzle. Working in concert with trusted partners multiplies American capacity and improves the chances that democratic systems—not authoritarian ones—define the shape of the AI age.

WORLD ONE

The three axes—superintelligence versus bounded and jagged intelligence, ease versus difficulty in catching up to another's breakthrough, and a China that races to the frontier versus a China that does not—create eight possible worlds. The task of policymakers is to fill in this matrix with a range of reasonable policy choices in each one. First, consider a world in which superintelligence is achievable, the technology is hard to imitate quickly, and China is racing at full speed.
This world looks and feels like something between an arms race and a space race: the contest would become a struggle to reach and secure the frontier first. The stakes would be immense. Whoever develops and controls the most advanced systems could gain enduring technological, economic, and military advantages. At the extreme end of this scenario, some argue that once recursive self-improvement begins, the lead may become self-reinforcing, making meaningful catch-up not merely difficult but effectively impossible. This framework treats that possibility as the limiting case of "hard to catch up" rather than assuming it as a baseline, and tests strategy accordingly.

The United States might have to consider a Manhattan Project 2.0, which would entail the mobilization of public resources, extraordinary coordination between government and industry, and a level of secrecy more typical of military programs, potentially requiring new authorities or expanded use of the 1950 Defense Production Act, which grants the president broad authority to regulate industry for purposes of national defense. Such an effort would force policymakers to choose between centralizing development in a single entity to ensure strict security oversight or maintaining competition among multiple frontier laboratories on the assumption that parallel experimentation would yield results faster.

[Photo: A robot on display at a tech event in Taipei, Taiwan, November 2025. Ann Wang / Reuters]

Under these conditions, Washington would tighten export controls to the limits of enforceability. Every layer of the semiconductor supply chain would fall under stricter regimes, and coordination with allies would be essential to prevent circumvention. Model weights (the numerical parameters that determine how a system behaves), training data, and data centers would need to be hardened against theft and sabotage.

Risk management with China, based on a shared interest in avoiding loss of human control of superintelligence, would move center stage. The faster systems advance, the greater the chance of accidents and unintended escalation as autonomous systems interact in ways neither side fully anticipates. One plausible move would be a mutual restraint agreement, limiting development while both Beijing and Washington build safety systems that can keep pace. But such an arrangement would be fragile and hard to sustain, given mutual distrust, verification challenges, and the potential gains from breaking the agreement and racing ahead.

Because catching up is difficult and China's success is not inevitable in this world, the United States might find itself with a narrow window in which it has reached superintelligence first. In that moment, Washington would face a decision: whether to take steps to prevent others from reaching the same capability. The opposite scenario is equally important: if Beijing reaches the frontier first, Washington would need to be ready to manage and mitigate the harms. And if both powers cross the threshold, they would need to reduce risk with clear guardrails, communication, and restraint while also working to prevent loss of control and the adoption of superintelligence by rogue states or nonstate actors.

WORLD TWO

In another world, superintelligence is still achievable and it is still hard to catch up to new technologies, but China is not racing toward the frontier. This scenario sees the United States achieve a unipolar AI moment.
Even if Beijing pursued a strategy of partial frontier investment, the difficulty of catching up would all but guarantee that the United States would stand alone at the technological peak, with a real chance to define the structure of the world that follows. The central question would no longer be how to win the race, but how to wield and manage a lead.

At the industrial level, AI development could progress at a more measured pace. Although R & D spending should remain elevated enough to reach superintelligence, no Manhattan Project–style mobilization would likely be needed. The United States would have to keep the frontier secure—protecting model weights, compute, and key talent—while allowing the innovation ecosystem to operate dynamically. Notably, as the market matures and some AI companies fail, China should not be allowed to buy up their intellectual property.

This future would make many other countries uneasy. Concentrating such transformational power in one country would raise doubts about whether Washington would lead responsibly or pursue a narrower national interest. The task for the United States would be to build and maintain a democratic AI order that generates trust in American leadership at the frontier—a similar undertaking to the one Washington faced in 1945, but far more difficult in today's political and geopolitical landscape.

With no immediate rival at the cusp of superintelligence, the United States could more comfortably exercise unilateral restraint, pacing frontier development efforts to ensure safety keeps up. Diffusion would be strategic and selective: extending secure access to allies and partners while preventing uncontrolled proliferation.

Domestically, the United States could focus on building a new social contract. If AI delivered enormous productivity and capability gains, the challenge would turn to channeling those gains into broad-based prosperity while reinforcing society's resilience to AI-driven disruptions. Sensible regulation would ensure safety and accountability without stifling progress.

Of course, this unipolar moment would not be guaranteed to be permanent. If the United States reached superintelligence, China would likely flip into racing mode overnight, and other powers would not stay idle for long. Washington would have to decide how to respond and how to use its position to shape how and where the technology spreads.

WORLD THREE

A third possibility is a world of all-out proliferation: superintelligence can be reached, it is easy to catch up, and China is racing ahead. Breakthroughs would compound quickly, but copying them would be quick, too. In this world, the task for the United States would be less about containment and more about resilience—that is, preparing the nation's cyber, biosecurity, infrastructure, and defense systems to withstand the full range of AI-enabled threats.

Whether to race or fast-follow would become a strategic choice. If breakthroughs proliferated quickly, the advantage gained from reaching the frontier first may be short-lived, but letting others get there first, even for a short period, would still create a meaningful window of vulnerability. And if progress continued to compound rapidly, arriving first would matter even more, because the early mover would begin climbing the curve first. The likely optimal path would be to race defensively, maintaining high R & D spending and frontier capability while matching advances with new layers of security and resilience.
The innovation ecosystem itself would face stress. A single national champion would provide little security value, since whatever it builds would quickly be copied, and sustaining many private firms that work on leading-edge technology would be difficult if investors see profits vanish as innovations are quickly copied. Many of these companies would fail as superintelligence becomes commoditized. The firms that innovate to build better business models to capture value would succeed, but the firms that innovate to build better AI models may not.

Risk management would rise in importance, and not only with regard to managing escalation and miscalculation. To mitigate the threat of uncontrolled proliferation to nonstate actors and rogue states, the United States would have to build new layers of global cooperation, with both allies and China, to slow or stop irresponsible players from gaining access to the technology. Although a joint U.S.-Chinese restraint agreement would still be difficult to enforce, the two countries' awareness of the heightened danger in this scenario could make a deal more viable.

Export controls could still be useful, but their effectiveness would depend on why catching up is easy. If China developed a viable alternative compute stack, then chip controls would become essentially useless and competition would shift to global deployment. If the ease of catching up stemmed from other factors (such as model distillation, theft, or the rapid spread of new algorithms and practical know-how), then chip controls would be less compelling than in other scenarios but still useful as a tool for buying time and slowing diffusion.

WORLD FOUR

If superintelligence could be achieved, catching up was easy, and China was not racing, the United States would find itself in a fleeting unipolar window. The United States could reach artificial superintelligence first, but others could follow quickly once they began to race. With China not trying to innovate too quickly, the logic of holding back on a major push to the frontier would be somewhat more compelling, especially if doing so could avoid the all-out proliferation scenario. Still, that path would be risky: China could secretly race or another actor could conceivably advance beyond American capabilities.

If the United States continued to race, it would have to decide how to use its lead. Washington could attempt to use the narrow window to block others from reaching the frontier. Alternatively, it could use even a brief period of uncontested superintelligence to strengthen its own and allied defenses and work to implement safeguards against loss of control and unbounded proliferation scenarios.

Since Beijing would not be racing, it would likely pursue a different strategy, positioning itself to commoditize American breakthroughs, embedding Chinese systems globally through low-cost AI exports, and linking AI to the physical world through robotics. That would make diffusion an important contest. The United States would need to invest in robotics and advanced manufacturing to translate digital breakthroughs into physical and industrial applications and move decisively to spread safe, democratic systems abroad before China filled the vacuum.

WORLD FIVE

Superintelligence is no longer on the table in the next set of possible worlds. In one of these scenarios, it is hard to catch up to breakthrough technologies, and China is racing to the frontier. The United States and China would enter a grinding innovation race.
Although the stakes would be high, they would be lower than in the superintelligence scenarios. It would remain important to invest in R & D, even if not at emergency levels, and to support that spending with long-term industrial policy that builds durable robotics and advanced manufacturing capabilities. Policymakers would have to be mindful that markets often misjudge turning points—investors may panic and declare a "bubble" before AI reaches its full potential, or they may keep spending long after the technology has matured. Risk management would have to focus less on loss of control and more on misuse in biological, cyber, or military applications.

The importance of diffusion and deployment would increase significantly. The United States would have to push aggressive adoption of AI across domestic industry and the military and move quickly to spread American and allied systems abroad. Even nonfrontier models—when well integrated, cheaply priced, or paired with robust infrastructure—could capture massive market share, as Beijing well knows from past experience. The security of models and data centers would still matter, since catching up would not be trivial, and frontier models would remain essential for securing U.S. and allied systems, but the overriding task would be to get capable systems into wide use early, building familiarity and dependence before Chinese alternatives took hold. Export controls would remain valuable to slow China's advance, but the United States would have to be mindful not to hinder deployment abroad.

WORLD SIX

In a world without superintelligence, where catching up is hard, and where China is not racing, the United States would hold a comfortable lead and have a meaningful window to entrench its advantage, using AI to develop new lifesaving medicines, expand education, and revitalize lagging American industries. China would not necessarily exit AI entirely, but Beijing would limit its investment in frontier model development so much that it would effectively be out of the race for cutting-edge capability. Instead, China would focus on applications and commoditizing U.S. breakthroughs. Meanwhile, Washington would be able to focus on safety, accountability, and ensuring that AI-driven gains translate into broad-based prosperity.

Internationally, the United States would have space to develop a positive vision for an AI-infused world, welcoming partners into its AI ecosystem and offering access to models, data, and infrastructure but keeping critical elements anchored at home. The aim would not be to diffuse American systems as widely and quickly as possible, but to ensure that the systems that spread are safe and aligned with democratic values.

WORLD SEVEN

The second-to-last scenario sees bounded and uneven AI, easy catch-up, and China racing to the frontier. In this world, the United States and China engage in a diffusion race. Because breakthroughs would be easy to imitate, no country could monopolize intelligence for long; advantage would come from developing and commercializing faster than one's rivals.

Private capital would be harder to corral. If the technology was easily copied, investors would likely underinvest, seeing little defensible return.
But the United States would still need to run the race; the systems that spread first would shape the global environment and should reflect U.S. values. And because China would be racing, the United States would need to innovate at the same pace or faster to prevent Beijing from compromising American cybersecurity, biosecurity, and military and intelligence advantages.

Diffusion would become not just a component of AI strategy but a core pillar of U.S. foreign policy. China already systematically pushes its technology into foreign markets, often bundling it with financing and large-scale development projects. The United States would rightly have serious concerns about allowing the world's digital infrastructure to be built on Chinese models that can exfiltrate data, monitor communications, and run far-reaching influence operations. Washington would need to embed AI diffusion into its statecraft, expanding the remit and deployable capital of institutions such as the Development Finance Corporation to help American and allied firms build data centers, networks, and regionally tailored systems around the world. That would require an American leadership focused not on short-term profit but on bringing about a world that runs to a much greater extent on American systems than on Chinese ones.

If copying was easy and proliferation inevitable, secrecy would offer little return. The better play may be to open-source or widely license safe versions of key systems, ensuring that they would be run on American or allied platforms rather than adversarial ones. In this world, export controls would offer less benefit and may in some extreme cases even undermine the diffusion race because China could reliably bypass them by quickly replicating American technologies.

WORLD EIGHT

In the final world, AI would resemble many past major technologies. The United States would lead in innovation, but advances would be easy to copy. This free-riding would make private investment for large frontier pushes harder to mobilize and, with China not racing, the national security rationale for public spending would become less all-encompassing. Instead, AI investment would follow projected revenue from diffusion. Open-source models would likely dominate.

The race for AI leadership would also be primarily a race for diffusion. It would resemble earlier contests, such as the one over 5G, which was driven by deployment and scale. Washington's task would be to ensure that trusted American and allied systems become the default infrastructure for global industry, leaving less space for Beijing to establish a low-cost viable alternative.

FROM SCENARIOS TO STRATEGY

Strategy in the AI age will be less about predicting a single outcome or one right policy and more about thinking in probabilities. To make use of this matrix, policymakers should start by selecting a base case—the world they believe is most likely. Each major policy proposal should be tested against that base case: Does the policy make sense in the world one thinks one is in? Policymakers must also determine what can be done to avoid or blunt the worst possible outcomes in the worlds where the United States is most exposed and the stakes are highest, such as in World One—even if they do not think those worlds are most likely.

From there, they should hedge, aligning strategy to the base case while also making it resilient across the most challenging worlds. That means identifying which policies work across multiple worlds, which can be reversed if the predicted future shifts, and which would be damaging if the base case proves false.
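As a toy illustration of that hedging arithmetic (my own construction, not the essay's): assign each axis a probability, derive the odds of each world as the product of the axis odds, and score a candidate policy by its expected payoff across all eight worlds. Every probability and payoff below is invented for the example, and the sketch assumes, for simplicity, that the three axes are independent.

```python
from itertools import product

# Invented axis probabilities (True = first branch of each axis).
P_SUPER = 0.3    # superintelligence vs. bounded and jagged
P_HARD = 0.6     # hard vs. easy to catch up
P_RACING = 0.5   # China racing vs. not racing

def world_probability(super_ai: bool, hard: bool, racing: bool) -> float:
    """Odds of one world as the product of the axis odds (independence assumed)."""
    return ((P_SUPER if super_ai else 1 - P_SUPER)
            * (P_HARD if hard else 1 - P_HARD)
            * (P_RACING if racing else 1 - P_RACING))

# Hypothetical payoff of one policy (say, tighter export controls): most
# valuable when catching up is hard, roughly neutral when it is easy.
def payoff(super_ai: bool, hard: bool, racing: bool) -> float:
    return 2.0 if hard else 0.5

worlds = list(product([True, False], repeat=3))
expected = sum(world_probability(*w) * payoff(*w) for w in worlds)
print(f"Expected payoff across all eight worlds: {expected:.2f}")  # 0.6*2.0 + 0.4*0.5 = 1.40
```

A policy that "works across multiple worlds," in the essay's terms, is one whose expected payoff stays high even as these axis probabilities are varied.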
For each of the eight worlds, the government should have a ready-to-execute plan that can be adapted as conditions shift. That requires institutions to think probabilistically. The National Security Council should use the matrix to stress-test U.S. policy against alternative futures. And the intelligence community should track signals of movement along the three axes (such as the pace of progress at the frontier, the speed with which new capabilities are replicated, or shifts in Chinese investment) and update the odds of each future accordingly. Senior national security officials should be prepared to recommend policy adjustments when it begins to look as though a different world is most likely. The task is not to make perfect predictions but to balance risk and reward, adjust priorities as probabilities change, redraw the matrix as circumstances demand, and establish the systems and processes to do these things.

This framework is not only for policymakers. It also offers a practical way for anyone to engage in debates about AI and geopolitics. These arguments too often end in two sides talking past each other; they could become more productive if the participants pin down which future is being assumed. Is AI expected to race toward something transformative or plateau? Will breakthroughs spread quickly or remain hard to replicate? And is China racing for the frontier or positioning itself to follow and commoditize? Asking these questions and mapping each side's argument onto the matrix often reveals whether disagreements really lie in policy recommendations or in assumed futures.

The point of this framework is not to forecast the final world but to discipline strategy in the face of uncertainty—to make assumptions explicit and test them against alternatives. The framework is also meant to evolve. There are more dimensions to the progression of AI than the three axes presented here; some of the questions that seem most pertinent today may eventually be resolved, and new ones will emerge. If it becomes apparent that superintelligence is within reach, for example, the possibility of more limited advancement will become irrelevant, and the matrix may feature a new axis that considers two new possibilities: beneficial superintelligence and dangerous superintelligence. Actors other than China could grow more important, too, as the technological landscape shifts. What matters is having a policy framework that can adapt as evidence accumulates.

Geopolitics in the age of AI will not be simple. But without a disciplined way of thinking, strategy will collapse under the weight of hidden assumptions and agendas. By mapping possible worlds and the choices they demand, this framework offers a way to see through the fog. The task for policymakers now is clear: treat AI not as a single story but as a shifting landscape. If American leaders learn to think this way, they will define whatever AI age emerges. If not, others will do it for them.

Thanks, but no thanks, Claude swarms - Kevin Roose

2026-01-27 02:36:06

New York Times technology columnist Kevin Roose highlights a wide gap between Silicon Valley insiders using advanced AI tools such as multi-agent Claude swarms and other knowledge workers who cannot access such technology because of restrictive IT policies. In a recent X post, he worried that this divide could create an insurmountable barrier for outsiders trying to catch up. Roose discussed his own experience rapidly building apps with Claude Code and the trend of non-coders using AI for tasks that once required programming skills. He acknowledged, however, that AI's impact remains largely confined to software engineering and has not noticeably changed daily life for people outside tech hubs such as San Francisco.


Google's vision is to make Gmail your personal AI agent command center

2026-01-27 02:11:00

Google is reimagining Gmail as a proactive assistant that helps users manage their lives more effectively, rather than merely serving as a container for email. Blake Barnes, Gmail's vice president of product, emphasized that while new AI features are in development, they are meant to assist decision-making and reduce the cognitive load of constant inbox notifications, without committing to specific timelines or product launches. The longer-term vision involves understanding context and relationships beyond simple categorization, potentially turning Gmail into a proactive assistant that can interpret user requests and act on them autonomously. Google is cautious, however, about changing established workflows used by billions of people, and is building new capabilities apart from the traditional inbox interface to respect existing habits and trust.


Google pays $68 million to settle claims its voice assistant spied on users | TechCrunch

2026-01-27 00:43:35

Google has agreed to pay $68 million to settle a class-action lawsuit alleging that its voice assistant illegally recorded users' conversations without consent and shared that information with third parties for targeted advertising. The settlement includes no admission of wrongdoing by Google, which has denied the allegations. The case specifically concerned instances in which Google Assistant activated through "false accepts" and recorded users' communications without an intentional prompt. Other tech giants have faced similar privacy lawsuits; Apple settled a comparable case for $95 million in 2021.


Introducing Helix 02: Whole-Body Autonomy

2026-01-27 00:00:00

Helix 02 is Figure's latest humanoid robot, capable of autonomous, long-horizon locomotion and manipulation tasks without human intervention. It features a unified neural system that controls the entire body directly from pixels, enabling complex behaviors such as loading and unloading a dishwasher across a full kitchen. Key innovations include System 0, which replaces traditional controllers with learned whole-body control based on human motion data; an "all sensor inputs, all actuator outputs" visuomotor policy for whole-body control; and advanced scene understanding and language processing. Helix 02 demonstrates dexterous manipulation using tactile sensing and palm cameras, marking a major step forward for general-purpose humanoid robots.


OpenAI reveals technical details of how its AI coding agent works

2026-01-26 23:05:17

OpenAI and Anthropic both offer open-source coding CLI clients on GitHub that developers can inspect directly, though neither provides the same access to the ChatGPT or Claude web interfaces. Bolin's post discusses the "agentic loop," the core of how an AI agent manages the interaction between the user, the AI model, and software tools. The agentic loop is a cycle in which user input is turned into a prompt for the model; the model's response either provides a final answer or triggers a tool call, which the agent then executes, repeating until a conclusion is reached. Bolin shows how Codex initializes this process with a structured prompt sent to the OpenAI Responses API, consisting of system instructions, defined tools, and contextual input.
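A minimal sketch of such an agentic loop is below. The message format, function names, and response fields are assumptions for illustration; this is not Codex's actual implementation.

```python
# Minimal sketch of an agentic loop as described above. `call_model` and the
# tool registry are stand-ins; the message and response formats are invented.

def run_agent(user_input, call_model, tools):
    """Prompt the model, execute any tool call it requests, feed the result
    back, and repeat until the model returns a final answer."""
    messages = [
        {"role": "system", "content": "You are a coding agent."},  # system instructions
        {"role": "user", "content": user_input},                   # contextual input
    ]
    while True:
        response = call_model(messages, tools)  # one structured request per turn
        if response["type"] == "final_answer":
            return response["content"]
        # Otherwise the model requested a tool call: run it and loop again.
        result = tools[response["tool"]](**response["arguments"])
        messages.append({"role": "tool", "name": response["tool"], "content": result})
```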
