
Sam Altman May Control Our Future. Can He Be Trusted?
2026-04-06 10:00:00 · English original


By Ronan Farrow

In the fall of 2023, Ilya Sutskever, OpenAI's chief scientist, sent a series of confidential memos to three members of the organization's board. For weeks, they had been quietly debating whether Sam Altman, OpenAI's C.E.O., and his deputy, Greg Brockman, were fit to run the company. Sutskever had once regarded both men as friends. In 2019, he officiated at Brockman's wedding, held at OpenAI's offices, with a ring bearer in the form of a robotic hand. But as he became convinced that the company was approaching its long-standing goal of creating an artificial intelligence that could match or surpass human cognition, his doubts about Altman grew. As Sutskever put it to a fellow board member at the time, "I don't think Sam is the person who should press the button."

At the board members' request, Sutskever, working with like-minded colleagues, compiled roughly seventy pages of Slack messages and H.R. files, with explanatory commentary. The materials included images captured with a cell-phone camera, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. "He was terrified," one of the board members who received them recalled. The memos, which we have reviewed, have never before been fully disclosed. They accused Altman of misrepresenting facts to executives and board members, and of deceiving them about internal safety protocols. One memo about Altman opened with a list titled "Sam exhibits a consistent pattern of . . ."; the first item was "lying."

Many tech companies issue vague statements about improving the world and then set about maximizing revenue. But OpenAI was founded on the premise that it had to be different. Its founders, including Altman, Sutskever, Brockman, and Elon Musk, asserted that A.I. could be the most powerful and potentially dangerous invention in human history, and that the existential risks perhaps demanded an unusual corporate structure. The company was a nonprofit, and its board had a duty to place humanity's safety above the company's success, even its survival. The C.E.O. had to be a person of exceptional integrity. According to Sutskever, "Anyone working to build this civilization-altering technology carries a heavy burden and an unprecedented responsibility." But "the people who end up in such positions tend to be a certain type: people interested in power, politicians, people who like power." In one memo, he seemed to worry about entrusting the technology to someone who "just tells people what they want to hear." If OpenAI's C.E.O. proved untrustworthy, the six-member board had the authority to fire him. For some members, including the A.I.-policy expert Helen Toner and the entrepreneur Tasha McCauley, receiving the memos confirmed what they already believed: Altman's role entrusted him with humanity's future, and he could not be trusted.

While Altman was attending a Formula 1 race in Las Vegas, Sutskever invited him to a video call with the board, then read out a brief statement explaining that he was no longer an employee of OpenAI. After taking legal advice, the board issued a public message saying only that Altman had been removed because he was "not consistently candid in his communications." Many of OpenAI's investors and executives were stunned. Microsoft, which had invested roughly thirteen billion dollars in OpenAI, learned of the plan to fire Altman only shortly before it happened. "I was shocked," Satya Nadella, Microsoft's C.E.O., later said. "I couldn't get any information out of anybody." He spoke with Reid Hoffman, the LinkedIn co-founder, OpenAI investor, and Microsoft board member, who began making calls to determine whether Altman had committed some obvious crime. "I didn't know what on earth was going on," Hoffman told us. "We were looking for embezzlement or sexual harassment, and I found nothing."

Other business partners were equally blindsided. When Altman called the investor Ron Conway to say he'd been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. "You'd better get out of here fast," she told Conway. OpenAI was about to close a major investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner's brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions of dollars in equity. Kushner, after a meeting with the music producer Rick Rubin, found a missed call from Altman. "We immediately went to war," Kushner later said.

The day he was fired, Altman flew home to his twenty-seven-million-dollar mansion in San Francisco, with panoramic views of the bay and, at one point, a cantilevered infinity pool, and set up what he called "a government in exile of sorts." Conway, the Airbnb co-founder Brian Chesky, and Chris Lehane, a crisis-communications operative known for his aggressiveness, joined by video and phone, sometimes for hours a day. Members of Altman's management team camped out in the hallways of the house. Lawyers set up shop in the home office next to his bedroom. When he couldn't sleep, Altman would pad past them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as "just this weird fugue state."

With the board staying silent, Altman's advisers built a public case for his return. Lehane insisted that the firing was a coup orchestrated by rogue effective altruists, adherents of a belief system focussed on maximizing human welfare, who view A.I. as an existential threat. (Hoffman told Nadella that the firing might have been the result of "effective-altruist craziness.") Lehane, whose reported motto, after Mike Tyson, is "Everyone has a plan until you punch them in the mouth," urged Altman to wage an aggressive social-media campaign. Chesky kept in touch with the tech journalist Kara Swisher, relaying criticisms of the board.

[Cartoon by Glen Baxter: a trail boss burns a cowboy's watercolor painting.]

Every evening at six o'clock, Altman would interrupt his "war room" for a Negroni. "You need to chill out," he recalled telling himself. "Whatever happens is going to happen." But, he added, his phone records showed that he was on calls for more than twelve hours a day. At one point, Altman conveyed a message to Mira Murati, who had supplied material for Sutskever's memos and was serving as interim C.E.O.: according to people familiar with the exchange, he said that during that period his allies were "going all out" and "looking for bad things" to damage her reputation, along with those of others who had turned against him. (Altman does not recall the exchange.)

Within hours of the firing, Thrive suspended its planned investment, indicating that the deal would close, and employees would get paid, only if Altman returned. Texts from the period show Altman working closely with Nadella. ("How about: Satya's and my top priority remains to save OpenAI," Altman suggested as the two drafted a statement. Nadella proposed an alternative: "to make sure OpenAI continues to thrive.") Microsoft soon announced that it would set up a competing operation for Altman and any employees who left OpenAI. An open letter demanding his return circulated within the organization. Some who hesitated to sign received pleading calls and messages from colleagues. Eventually, most OpenAI employees threatened to leave with Altman.

The board was cornered. "Control-Z, that's one option," Toner said, meaning undoing the firing. "Or the other option is the company falls apart." Even Murati eventually signed the letter. Altman's allies worked on Sutskever. Brockman's wife, Anna, found him at the office and begged him to reconsider. "You're a good person, you can fix this," she said. Sutskever later explained, in legal testimony, "I felt that, if we went down the path where Sam doesn't come back, OpenAI would be destroyed." One night, Altman, who had taken a sleeping pill, was woken by his husband, an Australian programmer named Oliver Mulherin, who told him that Sutskever was wavering and that people were urging Altman to talk to the board. "I woke up in this insane Ambien haze, and I was so disoriented," Altman told us. "I was, like, I can't talk to the board right now."

In a series of increasingly tense calls, Altman demanded that the board members who had fired him resign. "Am I supposed to clean up their mess while I'm under this insane suspicion?" Altman recalled thinking, of the terms of his return. "I just thought, fuck no." Ultimately, Sutskever, Toner, and McCauley lost their board seats. Adam D'Angelo, the founder of Quora, was the only original member who remained. As a condition of their exit, the departing members demanded an investigation into the allegations against Altman, including that he had pitted executives against one another and concealed financial entanglements. They also pushed for a new board capable of independently overseeing an outside investigation. But the two new members, Lawrence Summers, the former president of Harvard, and Bret Taylor, the former C.T.O. of Facebook, were chosen after close consultation with Altman. "Would you do this?" Altman texted Nadella. "Bret, Larry Summers, Adam as the board, me as CEO, and Bret running the investigation." (McCauley later testified in a deposition that, when Taylor had previously been considered for a board seat, she had worried about his deference to Altman.)

Altman was reinstated less than five days after his firing. Employees now refer to the episode as "the Blip," after an event in the Marvel movies in which characters vanish from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman's trustworthiness had spread beyond OpenAI's board. Colleagues who had contributed to his ouster accused him of a degree of deception that would be untenable in any executive, and dangerous in the leader of so transformative a technology. "We need institutions that are worthy of their power," Murati told us. "The board asked for feedback, and I shared what I'd seen. Everything I shared was accurate, and I stand by all of it." Altman's allies, for their part, have long denied the allegations. After the firing, Conway texted Chesky and Lehane calling for a P.R. offensive. "This is reputational for SAM," he wrote. He told the Washington Post that Altman had been "mistreated by a rogue board."

OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated inside foreign autocracies. OpenAI is pursuing a wide range of government contracts that would help set the standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weapons in war zones.

Altman has propelled OpenAI's growth by evangelizing a vision in which, as he wrote in a 2024 blog post, "astounding triumphs," such as "fixing the climate, establishing a space colony, and the discovery of all of physics," will "eventually become commonplace." His rhetoric has helped sustain one of the fastest cash-burning startups in history, dependent on partners that have borrowed enormous sums. The American economy has grown increasingly reliant on a handful of highly leveraged A.I. companies, and many experts, at times including Altman himself, have warned that the industry is in a bubble. "Someone is going to lose a phenomenal amount of money," he told reporters last year. If the bubble bursts, economic catastrophe could follow. If his most optimistic predictions prove correct, he may become one of the richest and most powerful people on Earth.

After Altman was fired, the board pressed him, on a tense call, to admit to deceptive behavior. According to people on the call, he repeatedly said, "This sucks," and "I can't change my personality." Altman says he doesn't remember the exchange. "What I probably meant was, I do try to be a unifying force," he told us, adding that the trait had allowed him to lead a wildly successful company. He attributed the criticisms to a tendency, especially earlier in his career, to be "too conflict-avoidant." But one board member offered a different interpretation of his statement: "The sentence means, I have a character trait of lying to people, and I'm not going to stop." Were the colleagues who fired Altman driven by alarmism and personal animus, or were they right that he couldn't be trusted?

One morning this winter, we met Altman at OpenAI's headquarters in San Francisco, in one of more than a dozen conversations we had with him for this story. The company had recently moved into two eleven-story glass towers, one of them formerly occupied by another tech giant, Uber, whose co-founder and C.E.O., Travis Kalanick, seemed an unstoppable wunderkind until 2017, when he resigned under pressure from investors who described his professional ethics as troubling. (Kalanick now runs a robotics startup; he recently said that, in his spare time, he uses OpenAI's ChatGPT to probe the known frontiers of quantum physics.)

An employee gave us a tour of the office. In an airy space filled with communal desks hung an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can believably imitate a human. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Normally, visitors can converse with the painting. But our guide told us that the voice had been disabled, because it couldn't be stopped from eavesdropping on employees and interjecting in their conversations. Elsewhere in the office, plaques, pamphlets, and merchandise bore the phrase "Feel the AGI." The slogan was originally associated with Sutskever, who used it to warn colleagues about the risks of artificial general intelligence, the threshold at which machines match human cognitive abilities. After the Blip, it became a cheerful rallying cry for a future of abundance.

We met Altman in an ordinary-looking conference room on the eighth floor. "People used to tell me about decision fatigue, and I didn't get it," Altman told us. "Now I wear a gray sweater and jeans every day, and even picking a gray sweater out of the closet, I'm, like, I wish I wasn't thinking about this." Altman has a boyish appearance, slim, with wide blue eyes and tousled hair, but he is now forty, and he and Mulherin have a one-year-old son, born via surrogate. "I'm sure being President of the United States would be a more stressful job, but of all the jobs I could plausibly have, this is the most stressful one I can imagine," he said, making eye contact with one of us, then the other. "The way I explain it to friends is: before we launched ChatGPT, this was the most fun job in the world. We were making these massive scientific discoveries. I think we made the most important scientific discoveries in, I don't know, decades." He lowered his eyes. "And then, ever since ChatGPT launched, the decisions have been really hard."

Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, the eldest of four siblings. His mother, Connie Gibstine, was a dermatologist; his father, Jerry Altman, was a real-estate broker and housing advocate. Altman attended a Reform synagogue and a private prep school, which he has described as "not the kind of place where you'd really stand up and talk about being gay." Still, the family's wealthy suburban circles were, on the whole, relatively liberal. Altman says that, when he was sixteen or seventeen, out late in a predominantly gay neighborhood of St. Louis, he was the target of a brutal physical attack and homophobic slurs. He did not report the incident, and he was reluctant to give us further details for the record, saying that a fuller accounting would "make me seem manipulative or like I'm fishing for sympathy." He rejected the notion that the incident, or his sexuality, is central to his identity. But, he said, "there's probably some deep-seated psychological thing, which I think I'm over but probably am not," about not wanting any more conflict.

Altman's attitude as a child, his brother told The New Yorker in 2016, was "I have to win, and I'm in charge of everything." He went to Stanford, where he regularly played in off-campus poker games. "I think I learned more about life and business from that than from college," he later said.

Every Stanford student is ambitious, but many of the most driven drop out. The summer after his sophomore year, Altman went to Massachusetts to join the first cohort of founders at Y Combinator, a "startup incubator" co-founded by the prominent software engineer Paul Graham. Each participant arrived at Y.C. with an idea for a startup. (Altman's batch mates included the founders of Reddit and Twitch.) Altman's project, eventually called Loopt, was a prototype social network that used the location of people's flip phones to tell their friends where they were. The company reflected his drive, and his tendency to interpret ambiguous situations in his favor. Federal regulations required phone carriers to be able to trace the location of emergency calls; Altman struck deals with carriers to use those capabilities for his company.

[Cartoon by Emily Flake: a man presents a graph of declining revenue at a meeting. "These numbers suggest that someone here has the soul of a poet."]

Most of Altman's employees at Loopt liked him, but some said they were startled by his tendency to exaggerate, even about trivial things. One recalled Altman boasting that he was a ping-pong champion, as in a Missouri high-school state champion, and then turning out to be among the worst players in the office. (Altman says he was probably joking.) As Mark Jacobstein, a Loopt veteran whom investors had asked to serve as Altman's "babysitter," later told Keach Hagey, the author of the Altman biography "The Optimist," "There's a blurry line between 'I think I might be able to get this done' and 'I've gotten this done,' which in its most toxic form leads to Theranos," Elizabeth Holmes's fraudulent startup.

According to Hagey, senior employees, worried about Altman's leadership and lack of transparency, twice asked Loopt's board to remove him as C.E.O. But Altman also inspired fierce loyalty. One former employee recalls a board member responding, "This is Sam's company. Get back to work." (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to attract users, and in 2012 it was acquired by a fintech company; according to a person familiar with the deal, the acquisition was arranged largely to help Altman save face. Still, when Graham retired from Y.C., in 2014, he had already tapped Altman to succeed him as president. "I asked Sam in the kitchen," Graham told The New Yorker. "And he smiled, like, it worked. I'd never seen Sam smile an uncontrolled smile. It was like when you throw a ball of paper into the wastebasket across the room. That smile."

Altman's new role made him, at twenty-eight, a kingmaker. His job was to pick the hungriest and most promising entrepreneurs, connect them with the best programmers and investors, and help them grow their startups into industry-defining monopolies (with Y.C. taking a six- or seven-per-cent cut). Under Altman, Y.C. went through a period of aggressive expansion, growing from dozens of startups to hundreds. But some Silicon Valley investors came to believe that his loyalties were divided. Altman was known, one investor told us, for "cherry-picking personal investments in the best companies and blocking outside investors." (Altman denies blocking anyone.) Altman had served as a "scout" for the investment fund Sequoia Capital, part of a program that involved making small investments in early-stage startups in exchange for a share of the profits. According to a person familiar with the matter, when Altman made an angel investment in the financial-services startup Stripe, he insisted on a larger stake, irritating Sequoia's partners. "It was a Sam-first policy," the person added. By his own estimate, Altman is an investor in roughly four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, for a two-per-cent stake. The company is now valued at more than a hundred and fifty billion dollars.)

By 2018, some Y.C. partners had grown so frustrated with Altman's conduct that they complained to Graham. Graham and Jessica Livingston, his wife and a Y.C. co-founder, apparently had a frank conversation with Altman. Afterward, Graham began telling people that, although Altman had agreed to leave the firm, he was actually resisting. Altman told Y.C. partners that he would step down as president and become chairman instead. In May, 2019, a blog post announcing Y.C.'s new president came with an asterisk: "Sam is transitioning to be the Chairman of YC." Months later, the post was edited to say that "Sam Altman stepped back from any formal role at YC"; the phrase was subsequently deleted altogether. As late as 2021, however, an S.E.C. filing listed Altman as Y Combinator's chairman. (Altman says he didn't become aware of this until much later.)

For years, both publicly and in recent testimony, Altman has insisted that he was never fired from Y.C., and he told us that he did not resist leaving. Graham tweeted that "we didn't want him to leave, just to choose between" Y.C. and OpenAI. In a statement, Graham told us, "We had no legal power to fire anyone. All we could do was exert moral pressure." Privately, though, he has made clear that Altman was removed because of the Y.C. partners' distrust. Our account of Altman's tenure at Y Combinator is based on discussions with several Y.C. founders and partners, as well as contemporaneous materials, all of which indicate that the split was not exactly mutual. At one point, Graham told Y.C. colleagues that, before the ouster, "Sam had been lying to us."

In May, 2015, Altman e-mailed Elon Musk, then one of the hundred richest people in the world. Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied with a set of threats that he considered existential and that most people considered far-fetched hypotheticals. "We need to be super careful with AI," he had tweeted. "Potentially more dangerous than nukes."

Altman was, on the whole, a techno-optimist, but his rhetoric about A.I. soon turned apocalyptic. In public, and in private correspondence with Musk and others, he warned that the technology should not be dominated by large profit-seeking corporations. "Been thinking a lot about whether it's possible to stop humanity from developing AI," he wrote to Musk. "If it's going to happen anyway, it seems like it would be best if someone other than Google did it first." Borrowing the nuclear analogy, he proposed a "Manhattan Project for AI." He sketched out governing principles for such an organization ("Safety should be a first-class requirement"; "We'd clearly comply with/aggressively support all regulation"), and he and Musk settled on a name: OpenAI.

Unlike the original Manhattan Project, a government initiative that led to the creation of the atomic bomb, OpenAI would be privately funded, at least at first. Altman predicted that artificial superintelligence, a theoretical threshold beyond even A.G.I., at which machines would outstrip human thought entirely, would eventually create enough economic benefit to "capture the light cone of all future value in the universe." But he also warned of existential dangers. At some point, the national-security implications might become so serious that the U.S. government would have to take control of OpenAI, perhaps nationalizing it and moving its operations to a secure bunker in the desert. By the end of 2015, Musk was persuaded. "We should say that we're starting with a $1B funding commitment," he wrote. "I will cover whatever anyone else doesn't provide."

Altman incorporated OpenAI within Y Combinator's nonprofit arm, treating it as an in-house philanthropic project. He recruited people to OpenAI with Y.C. stock and routed donations through a Y.C. account. At one point, the lab was supported by a Y.C. fund in which he held a personal stake. (Altman has described the stake as insignificant. He told us that the Y.C. stock he gave recruits was his own.)

The Manhattan Project analogy extended to recruiting. Like research into nuclear fission, machine learning was a small, epoch-making scientific field dominated by a handful of eccentric geniuses. Musk, Altman, and Brockman, who joined from Stripe, were convinced that only a few computer scientists were capable of the necessary breakthroughs. Google had an enormous cash advantage and a years-long head start. "We are far behind on both people and weapons," Musk later wrote in an e-mail. But, "if we can attract the most talented people over time, and our direction is right, OpenAI will prevail."

The top recruiting target was Sutskever, a passionate but introverted researcher often described as the most gifted A.I. scientist of his generation. Born in the Soviet Union in 1986, Sutskever has a receding hairline and dark eyes, and a habit of pausing, unblinking, as he speaks. Another target was Dario Amodei, a biophysicist with frenetic energy who nervously twists his dark curls and answers one-line e-mails with multi-paragraph essays. Both had lucrative jobs elsewhere, but Altman lavished attention on them. He later joked, "I stalked Ilya."

Musk had the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they arranged a one-on-one dinner at an Indian restaurant. (Altman: "fuck my uber got in an accident! Will be about 10 late." Amodei: "Wow, hope you're ok.") Like many A.I. researchers, Amodei believed that the technology should be built only if it could be shown to be "aligned" with human values, meaning that it would behave as people intended, without committing potentially fatal errors, such as following an instruction to clean up the environment by eliminating its biggest polluter, humanity. Altman's rhetoric was reassuring, echoing these safety concerns.

Amodei, who went on to join the company, kept detailed notes for years on Altman's and Brockman's behavior, under the heading "My OpenAI Experience" (subtitle: "PRIVATE: DO NOT SHARE"). A collection of more than two hundred pages of documents associated with Amodei, including these notes, internal e-mails, and memos, has circulated among colleagues in Silicon Valley but has never before been publicly disclosed. In his notes, Amodei wrote that Altman's goal was to build "an AI lab that focuses on safety (maybe not right away, but as soon as possible)."

In December, 2015, hours before OpenAI's public announcement, Altman e-mailed Musk about rumors that Google "is going to make huge counteroffers to everyone in openAI tomorrow, to try to kill it." Musk replied, "Has Ilya given a firm yes?" Altman assured him that Sutskever would stick. Google was offering Sutskever six million dollars a year, which OpenAI couldn't match. But, Altman boasted, "unfortunately, they're not 'doing the right thing.'"

[Cartoon by Lonnie Millsap: two dogs lying on a bed. "I'm just saying, if we tore up the pillows and the mattress, it might make our place look more lived-in."]

Musk provided OpenAI with office space in a former suitcase factory in San Francisco's Mission District. The pitch to employees, Sutskever told us, was "You're going to save the world."

OpenAI's founders believed that, if everything went well, A.I. could usher in a post-scarcity utopia, automating drudgery, curing cancer, and freeing people to live lives of leisure and abundance. But if the technology spun out of control, or fell into the wrong hands, the destruction could be total. China could use it to engineer novel bioweapons or fleets of advanced drones; an A.I. model could outwit its regulators, copying itself onto secret servers so that it couldn't be shut down; in the extreme case, it might seize control of energy grids, stock markets, or nuclear arsenals. Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. In 2015, he wrote on his blog that superhuman machine intelligence did not have to be the inherently evil sci-fi version to kill us all; the more probable scenario was that it simply wouldn't care about us, and would wipe us out in pursuit of some other goal. OpenAI's founders vowed never to prioritize speed over safety, and the organization's corporate charter made benefitting humanity a legally binding obligation. If A.I. was going to be the most powerful technology in history, then any individual who could unilaterally control it would become extraordinarily powerful, a scenario the founders called an "AGI dictatorship."

Altman told early employees that OpenAI would remain a pure nonprofit, and programmers took substantial pay cuts to work there. The company accepted charitable donations, including thirty million dollars from what was then called the Open Philanthropy Project, a hub of the effective-altruism movement, whose commitments included funding the distribution of mosquito nets to the global poor.

Brockman and Sutskever ran OpenAI's day-to-day operations, while Musk and Altman, still busy with other jobs, came by roughly once a week. By September, 2017, however, Musk had grown impatient. Amid discussions about restructuring OpenAI as a for-profit company, he demanded majority control. Altman's answers varied with the context. His one consistent demand seemed to be that, if OpenAI was reorganized under a C.E.O.'s control, the job should go to him. Sutskever seemed uncomfortable with the idea, and expressed his concerns to Altman: "We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it."

"Guys, I've had enough of this," Musk replied. "Either go do something on your own or continue with OpenAI as a nonprofit. Otherwise, I'm just being a fool who's essentially providing free funding for you to create a startup." Five months later, he resigned acrimoniously. (In 2023, he founded a for-profit competitor, xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, claiming that he had been "carefully manipulated" by "Altman's long con": that Altman had exploited his fears about the dangers of A.I. in order to separate him from the technology. OpenAI has strenuously disputed this, and the suit is ongoing.)

After Musk's departure, Amodei and other researchers chafed under Brockman's and Sutskever's leadership; some regarded Brockman as a heavy-handed operator, while Sutskever was widely seen as principled but disorganized. On his way to becoming C.E.O., Altman appears to have made different promises to different factions of the company. He assured some researchers that Brockman's managerial authority would be curtailed. But, unbeknownst to them, he had also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two ever deemed it necessary. (He disputes this characterization, saying that he took on the C.E.O. role only because he was asked to. All three have confirmed the arrangement's existence, though Brockman says it was informal. "He unilaterally told us that he would resign if we both asked him to," he told us. "We objected to the idea, but he said it was important to him.") The board was later surprised to discover that its C.E.O. had, in essence, appointed his own shadow committee.

Internal records show that, as early as 2017, the founders were privately harboring doubts about the nonprofit structure. That year, after Musk's attempt to take control, Brockman wrote in his journal, "Can't say we're committed to nonprofit . . . if we're going to do a B-corp in 3 months, it's a lie." In one of his early notes, Amodei recalled asking Brockman about his priorities; Brockman answered that he wanted "money and power." Brockman disputes this. His journals from the period suggest conflicting instincts. In one entry, he wrote, "As long as no one else does, I'm happy not to get rich off this." In another, he asked, "So what do I *really* want?" One of his answers: financially, "what gets me to $1B."

In 2017, Sutskever, sitting in the office, read a just-published paper by Google researchers proposing "a new simple network architecture, the Transformer." He leapt from his chair and ran down the hall, telling his research colleagues, "Stop everything you're doing. This is it." The Transformer, Sutskever believed, was an innovation that could allow OpenAI to train far more sophisticated models. The discovery led to the first generative pre-trained transformer, the seed of ChatGPT.

As the technology grew more powerful, we learned, about a dozen of OpenAI's top engineers held a series of secret meetings to discuss whether OpenAI's founders, including Brockman and Altman, could be trusted. At one point, an employee invoked a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front asks, in a moment of clarity, "Are we the baddies?"

By 2018, Amodei was questioning the founders' motives more openly. "Everything is a series of schemes to raise money in turn," he later wrote in his notes. "I feel like what OpenAI needs is a clear statement of what it will and won't do, and how its existence will make the world better." OpenAI already had a mission statement: to "ensure that artificial general intelligence benefits all of humanity." But it wasn't clear to Amodei what, if anything, this meant to the executives. In early 2018, Amodei says, he began drafting a charter for the company, and, after weeks of conversations with Altman and Brockman, produced its most radical provision: if a "value-aligned, safety-conscious project" came close to building an A.G.I. before OpenAI did, the company would "stop competing with and start assisting this project." Under the so-called merge-and-assist clause, if Google's researchers figured out how to build a safe A.G.I. first, OpenAI could stand down and donate its resources to Google. By any normal corporate logic, making such a pledge would be insane. But OpenAI was not supposed to be a normal company.

The premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who led the company's safety team, helped pitch the deal to Bill Gates, many on his team were anxious, worried that Microsoft would insert terms overriding OpenAI's ethical commitments. Amodei gave Altman a ranked list of safety requirements, with preserving the merge-and-assist clause at the top. Altman agreed to the demand, but in June, as the deal was being finalized, Amodei discovered that a provision had been added giving Microsoft the power to block OpenAI from any merger. "Eighty per cent of the charter was betrayed," Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei pointed to the text and read it aloud, and eventually got another colleague to confirm its existence to Altman directly. (Altman doesn't recall this.) Amodei's notes describe escalating tensions, including an episode months later in which Altman summoned him and his sister, Daniela, who ran safety and policy work at the company, and told them that he had it "on good authority" from a senior executive that they had been plotting a coup. Daniela, the notes continue, "lost it" and brought in the executive, who denied saying any such thing. According to a person familiar with the exchange, Altman then denied having made the claim. "I didn't even say that," he said. "You just said it again," Daniela replied. (Altman says this isn't exactly his recollection, and that he was merely calling out the Amodeis' "political behavior.") In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, now one of OpenAI's chief competitors.

Altman continued to preach OpenAI's commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about "deceptive alignment," in which a sufficiently advanced model might feign good behavior during testing and then pursue its own goals once deployed. (This is one of several A.I. scenarios that sound like science fiction but have already occurred under certain experimental conditions.) A few weeks after the paper appeared, one of its authors, a Ph.D. student at the University of California, Berkeley, received an e-mail from Altman, who said he was increasingly worried about the threat of misaligned A.I. He added that he was considering committing a billion dollars to the problem, which many A.I. experts consider the most important unsolved problem in the world, possibly by offering prizes to motivate researchers everywhere. The graduate student told us that, although he had "heard vague rumors about Sam being slippery," Altman's pledge won him over. He took a leave from academia to join OpenAI.

But over several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about prizes. Instead, he championed an internal "superalignment" team. An official announcement, referring to the company's reserves of computing power, promised that the team would receive "20% of the compute we've secured to date," a resource potentially worth more than a billion dollars. The effort was necessary, the announcement said, because, if alignment remained unsolved, A.G.I. could "lead to the disempowerment of humanity or even human extinction." Jan Leike, who was appointed to lead the team alongside Sutskever, told us, "It was a very effective retention tool."

The twenty-per-cent pledge, however, evaporated. Four people who worked on or closely with the team said that the actual resources amounted to between one and two per cent of the company's compute. Moreover, one of the team's researchers said, "most of the superalignment compute was actually on the oldest cluster, with the worst chips." The superior hardware, the researchers believed, was reserved for profit-making activities. (OpenAI disputes this.) Leike complained to Murati, then the company's chief technology officer, but she told him to stop pressing the point; the pledge had never been realistic.

[Cartoon by Sofia Warren: a jazz bassist sitting at a bar with a friend. "She just disappeared from my life like that."]

Around this time, a former employee told us, Sutskever became "super safety-focussed." In OpenAI's early days, he had regarded concerns about catastrophic risk as legitimate but remote. Now, as he came to believe that A.G.I. was imminent, his worries intensified. There was an all-hands meeting, the former employee continued, "where Ilya gets up and he's, like, Hey, everyone, there's going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we're fucked." But the superalignment team was dissolved the following year, without completing its mission.

By then, internal messages show, executives and board members had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products. At a meeting in December, 2022, Altman assured board members that multiple features of GPT-4, the company's forthcoming model, had been approved by its safety panel. Toner, the board member and A.I.-policy expert, asked for documentation. She learned that the most controversial features—one that allowed users to "fine-tune" the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about "the breach" in India. In hours of briefings to the board, Altman had never mentioned that Microsoft had released an early version of ChatGPT in India without completing the required safety review. "It just was kind of completely ignored," Jacob Hilton, an OpenAI researcher at the time, said.

Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a "continual slide toward emphasizing products over safety." After the release of GPT-4, Leike e-mailed members of the board. "OpenAI has been going off the rails on its mission," he wrote.

"We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third." He continued, "Other companies like Google are learning that they should deploy faster and ignore safety problems." McCauley, in an e-mail to her fellow-members, wrote, "I think we're definitely at a point where the board should be increasing its level of scrutiny." The board members tried to confront what they viewed as a mounting problem, but they were outmatched. "You had a bunch of J.V. people who've never done anything, to be blunt," Sue Yoon, a former board member, said. In 2023, the company prepared to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn't need safety approval, citing the company's general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, "ugh . . . confused where sam got that impression." (A representative for OpenAI, where Kwon remains an executive, said that the matter was "not a big deal.") Soon afterward, the board made its decision to fire Altman, and then the world watched as Altman reversed it.

A version of the OpenAI charter is still on the organization's website. But people familiar with OpenAI's governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, "We are past the event horizon; the takeoff has started." This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called "The Gentle Singularity," he adopted a new tone, replacing existential terror with ebullient optimism. "We'll all get better stuff," he wrote. "We will build ever-more-wonderful things for each other." He acknowledged that the alignment problem remained unsolved, but he redefined it: rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.

Altman is often described, with reverence or suspicion, as the greatest salesman of his generation. Steve Jobs, one of his idols, was said to project a "reality-distortion field," an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn't buy his brand of MP3 player everyone they loved would die. When Altman was twenty-three, in 2008, Graham, his mentor, wrote, "You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king." This judgment was based not on Altman's track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world's top startup founders, Graham put Altman on it anyway. "Sam Altman can't be stopped by such flimsy rules," he wrote.

Graham meant this as a compliment. But some of Altman's closest colleagues came to have a different view of this quality. After Sutskever grew more anguished about A.I. safety, he compiled his memos about Altman and Brockman, which have since acquired a legendary status in Silicon Valley; in certain circles, they are known simply as "the Ilya memos." Amodei, meanwhile, kept accumulating notes. These and other documents associated with him chart his shift from wary idealism to alarm. His language is more heated than Sutskever's, by turns incensed at Altman ("His words were almost certainly bullshit") and wistful about what he says was a failure to correct OpenAI's course.

Neither set of documents contains a smoking gun. Instead, they describe a series of alleged deceptions and manipulations, each of which, on its own, might be shrugged off: Altman allegedly offering the same job to two people, telling conflicting stories about who should appear in a livestream, concealing safety requirements. But Sutskever concluded that this kind of behavior "does not create an environment conducive to the creation of a safe AGI." Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, "The problem with OpenAI is Sam himself."

We interviewed more than a hundred people with firsthand knowledge of how Altman does business: current and former OpenAI employees and board members; guests and staffers at Altman's various houses; his colleagues and competitors; his friends and his enemies; and, given Silicon Valley's mercenary culture, some people who are both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.)

Some people defended Altman's business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical "doomers," gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was "not this Machiavellian villain" but merely, to the point of "fecklessness," able to convince himself of the shifting realities of his sales pitches. "He's too caught up in his own self-belief," she said. "So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."

Most of the people we interviewed, though, shared the judgment of Sutskever and Amodei: that Altman possesses a relentless will to power which sets him apart even among industrialists who put their names on spaceships. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Shortly before his death, Swartz expressed concerns about Altman to several friends. "You need to understand, Sam is never to be trusted," he told one. "He's a sociopath. He would do anything." Multiple senior executives at Microsoft said that, despite Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its "stateless"—or memoryless—models. The same day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its A.I. agents for enterprise platforms. While reselling is permitted, Microsoft executives argue that OpenAI's plan could collide with Microsoft's exclusivity.

(OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is "confident that OpenAI understands and respects" its legal obligations.) The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." Altman is not a technical savant; according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people's money and technical talent. This doesn't make him unique. It makes him a businessman. What's more striking is his ability to convince anxious engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are his priorities. When these people have tried to obstruct his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he's got what he needs. "He sets up structures that, on paper, constrain him in the future," Wainwright, the former OpenAI researcher, said.

"But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was." "He's unbelievably persuasive. Like, Jedi mind tricks," a tech executive who has worked with Altman said. "He's just next level." A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. Researchers generally assume that, in such a contest, the A.I. is sure to win, the way a chess grandmaster beats a child. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching "an A.G.I. breaking out of the box."

[Cartoon by Liana Finck: Moses looks back and forth between the burning bush and Smokey Bear.]

In the days after his firing, Altman worked to head off any outside investigation into the allegations against him. He told two people that he worried that even undergoing an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a "review" of "recent events." The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility. (Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein's advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review.

According to six people familiar with it, the inquiry seemed designed to limit transparency. Some of them said that investigators initially failed to contact key figures at the company. One employee complained to Summers and Taylor. "They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity," the employee recalled of his interview with investigators. Others were unwilling to share their concerns about Altman, because they felt that not enough had been done to insure anonymity. "Everything pointed to the fact that they wanted to find the outcome, which is to acquit him," the employee said. (Some of the lawyers involved defended the process, saying, "It was an independent, careful, comprehensive review that followed the facts wherever they led." Taylor also said that the review was "thorough and independent.")

Corporate investigations are meant to confer legitimacy. At private companies, their findings are sometimes not written down; this can be a way to limit liability. But in cases involving public scandal, there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm that released a thirteen-page summary to the public. Given OpenAI's 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that Altman had been cleared, without releasing any report. The company provided, on its website, some eight hundred words acknowledging a "breakdown in trust."

People involved in the inquiry said that no report was released because none was written. Instead, the findings were confined to oral briefings shared with Summers and Taylor. "The review did not conclude that Sam was a George Washington cherry tree of integrity," one of the people close to the inquiry said. But the investigation appears not to have centered on the questions of integrity behind Altman's firing, devoting much of its focus to a hunt for clear criminality; on that basis, the board concluded that he could continue as C.E.O. Soon afterward, Altman, who had been removed from the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers's and Taylor's personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record. Taylor said that, in light of the oral briefings, there had been "no need for a formal written report.")

Many former and current OpenAI employees told us that they were stunned by the lack of disclosure. Altman said he believed that all the board members who joined after his reinstatement had received oral briefings. "That's an absolute, outright lie," a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, "a need for another investigation."

The absence of a written record has helped minimize the allegations. So, increasingly, has Altman's stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors. "If they invest in something that he doesn't like, they won't get access to other things," one of them said. Another source of Altman's power is his vast list of investments, which at times extends to his personal life. He has financial entanglements with a number of former romantic partners, as a fund co-manager, a lead investor, or a frequent co-investor. This is not unusual; many of Silicon Valley's straight executives do the same thing with their romantic and sexual partners. ("You have to," one prominent C.E.O. told us.) "I've obviously invested with some exes after the fact. And I think that's, like, totally fine," Altman said. But the dynamic affords an extraordinary level of control. "It creates a very, very high dependence, essentially," a person close to Altman said. "Oftentimes, it's a lifetime dependence."

Even former colleagues can feel the effects. Murati left OpenAI in 2024 to build her own A.I. startup. Josh Kushner, Altman's close ally, called her. He praised her leadership, then made what seemed to be a veiled threat, noting that he was "concerned about" her "reputation" and that former colleagues now viewed her as an "enemy." (Kushner, through a representative, said that this account did not "convey full context"; Altman said that he was unaware of the call.)

At the beginning of his tenure as C.E.O., Altman had announced that OpenAI would create a "capped profit" company, which would be owned by the nonprofit, a convoluted corporate structure that apparently did not exist before Altman invented it. During the conversion, a board member named Holden Karnofsky objected, arguing that the nonprofit was being severely undervalued. "I can't do that in good faith," Karnofsky, who is Amodei's brother-in-law, said. According to contemporaneous records, he voted no. However, after an attorney for the board said that his dissent "might be a flag to investigate further" the legitimacy of the new structure, his vote was recorded as an abstention, apparently without his consent—a potential falsification of business records. (OpenAI told us that several employees remember Karnofsky abstaining, and provided meeting minutes recording that he did.)

Last October, OpenAI "recapitalized" as a for-profit entity. The firm touts its associated nonprofit, now called the OpenAI Foundation, as one of the "best resourced" in history. But the foundation is now a twenty-six-per-cent shareholder in the company, and its board members, with one exception, also sit on the for-profit's board.

During congressional testimony, Altman was asked if he made "a lot of money." He replied, "I have no equity in OpenAI . . . I'm doing this because I love it"—a careful answer, given his indirect equity through the Y.C. fund. The answer remains technically true. But several people, including Altman, indicated to us that this may soon change. "Investors are, like, I need to know you're gonna stick with this when times get hard," Altman said, but added that there was no "active discussion" about it. According to a legal deposition, Brockman appears to hold a stake in the company worth roughly twenty billion dollars. Altman's share would presumably be worth more. Still, he told us that wealth is not his primary motivation. A former employee recalls him saying, "I don't care about money. I care more about power."

In 2023, Altman and Mulherin held a small wedding at a house in Hawaii. (They'd met nine years prior, late at night in Peter Thiel's hot tub.) They have hosted a range of guests at the property, and those we spoke with reported witnessing nothing more remarkable than the standard diversions of the very wealthy: meals prepared by a private chef, boat rides at golden hour. One New Year's party was "Survivor"-themed; a photograph shows a number of shirtless, smiling men, and also Jeff Probst, the real host of "Survivor." Altman has also hosted smaller groups of friends at his properties, gatherings that have included, in at least one instance, a spirited game of strip poker. (A photograph of the event, which did not include Altman, leaves unclear who won, but at least three men clearly lost.) We spoke to many of Altman's former guests, who suggested only that he is a generous host.

Nevertheless, rumors about Altman's personal life have been exploited and distorted by competitors. Ruthless business rivalry is nothing new, but competition within the A.I. industry has become unusually vicious. ("Shakespearean" was the word an OpenAI executive used to describe it to us, adding, "The normal rules of the game sort of don't apply anymore.") Intermediaries directly connected to, and in at least one case compensated by, Musk have circulated dozens of pages of detailed opposition research about Altman. The dossiers reflect extensive surveillance, documenting shell companies linked to him, the personal contact information of close associates, and even an interview conducted at a gay bar with an alleged sex worker. One of the Musk intermediaries claimed that Altman's flights and the parties he attended were being tracked. Altman told us, "I don't think anyone has had more private investigators hired against them."

Extreme claims have circulated. The right-wing broadcaster Tucker Carlson has insinuated, without any apparent evidence, that Altman was involved in the death of a whistle-blower. This and other claims have been amplified by competitors. Altman's sister, Annie, claimed in a lawsuit, and in interviews with us, that he sexually abused her for years, beginning when she was three and he was twelve. (We could not substantiate Annie's account, which Altman has denied and his brothers and mother have called "utterly untrue" and a source of "immense pain to our entire family." In interviews that the journalist Karen Hao conducted for her book, "Empire of AI," Annie suggested that memories of abuse were recovered during flashbacks in adulthood.) Multiple people working within rival companies and investment firms insinuated to us that Altman sexually pursues minors, a narrative persistent in Silicon Valley which appears to be untrue. We spent months investigating the claim, conducting dozens of interviews, and found no evidence to support it.

“This is disgusting behavior from a competitor that I assume is part of an attempt at tainting the jury in our upcoming cases,” Altman told us.“As ridiculous as this is to have to say, any claims about me having sex with a minor, hiring sex workers, or being involved in a murder are completely untrue.” He added that he was “sort of grateful” that we had spent months “so aggressively trying to look into this.”

Man leads woman into his apartment.

“My apartment is full of smells that I personally am in no way responsible for.”

Cartoon by William Haefeli

Altman acknowledges dating younger men of legal age. Several of his former partners, whom we interviewed, told us that they saw nothing problematic in the relationships. The opposition dossiers from Musk’s intermediaries, however, treat the subject as a line of attack. (The dossiers include salacious and unsubstantiated references to a “Twink Army” and “Sugar Daddy’s Sexual Habits.”) “I think there’s a lot of homophobia that gets pushed,” Altman said. The tech journalist Kara Swisher agreed. “All these rich guys do wild stuff, wilder than anything I’ve been told about Sam,” she told us. “But he’s a gay guy in San Francisco,” she added, “so that gets weaponized.”

For a decade, social-media executives promised that they could change the world with hardly any downside. They dismissed legislators who wanted to slow them down as mere Luddites, eventually drawing bipartisan ridicule. Altman, by contrast, came across as refreshingly earnest. Far from shunning regulation, he practically begged for it. Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. “If this technology goes wrong, it can go quite wrong,” he said. Senator John Kennedy, of Louisiana, known for his irascibility toward tech C.E.O.s, seemed charmed; resting a hand on his face, he suggested that perhaps Altman should enforce the rules himself.

But even as Altman publicly welcomed regulation he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to stricter oversight. In 2024, the California legislature introduced a bill requiring safety testing of A.I. models. Its provisions included measures similar to those Altman had championed in his congressional testimony. OpenAI opposed the bill publicly, and privately began making threats. “I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI,” a legislative aide told us.

Conway, the investor, lobbied state political leaders, including Nancy Pelosi and Gavin Newsom, to kill the bill. It ultimately passed the legislature with bipartisan support, but Newsom vetoed it. This year, congressional candidates who support A.I. regulations have faced opponents funded by Leading the Future, a new “pro-A.I.” super PAC devoted to scuttling such restrictions. OpenAI’s official stance is that it will not contribute to such super PACs. “This issue transcends partisan politics,” Lehane recently told CNN. Yet Greg Brockman is one of Leading the Future’s major donors, having pledged fifty million dollars. (This year, Brockman and his wife donated twenty-five million dollars to MAGA Inc., a pro-Trump super PAC.)

OpenAI’s campaign has extended beyond traditional lobbying. Last year, the California state Senate introduced a follow-up bill. Nathan Calvin, a twenty-nine-year-old lawyer who works at the nonprofit Encode and helped draft the bill, was having dinner at home with his wife one evening when a process server delivered a subpoena from OpenAI. The company claimed to be seeking evidence that Musk was secretly funding its critics. But the subpoena demanded all of Calvin’s private communications about the bill in the state Senate. “They could have asked us, ‘Have you ever talked to or been given money by Elon Musk?’—which we haven’t,” Calvin told us. Other supporters of the bill, and some critics of OpenAI’s for-profit restructuring, also received subpoenas. “They were going after folks to basically scare them into shutting up,” Don Howard, who heads a charity called the James Irvine Foundation, said. (OpenAI maintains that the subpoenas were part of standard legal procedure.)

Altman has long supported Democrats. “I’m very suspicious of powerful autocrats telling a story of fear to gang up on the weak,” he told us. “That’s a Jewish thing, not a gay thing.” In 2016, he endorsed Hillary Clinton and called Trump “an unprecedented threat to America.” In 2020, he donated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least six times. He helped shape a lengthy executive order that established the first federal safety-testing regime, and other guardrails, for A.I.

When Biden signed it, Altman called it a “good start.” In 2024, with Biden’s poll numbers slipping, Altman’s rhetoric began to shift. “I believe that America is going to be fine no matter what happens in this election,” he said. After Trump won, Altman donated a million dollars to his inaugural fund, then took selfies with the influencers Jake and Logan Paul at the Inauguration. On X, in his standard lowercase style, Altman wrote, “watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking . . . ).” Trump, on his first day back in office, repealed Biden’s executive order on A.I. “He’s found an effective way for the Trump Administration to do his bidding,” a senior Biden Administration official said, of Altman.

Musk continues to excoriate Altman in public, calling him “Scam Altman” and “Swindly Sam.” (When Altman complained on X about a Tesla he’d ordered, Musk replied, “You stole a non-profit.”) And yet, in Washington, Altman seems to have outflanked him. Musk spent more than two hundred and fifty million dollars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump in the process.

Altman is now one of Trump’s favored tycoons, even accompanying him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak several times a year. “You can just, like, call him,” Altman said. “This is not a buddy. But, yeah, if I need to talk to him about something, I will.” When Trump hosted a dinner with tech leaders at the White House last year, Musk was notably absent; Altman sat across from the President. “Sam, you’re a big leader,” Trump said. “You told me things before that are absolutely unbelievable.”

For years, Altman has compared the pursuit of A.G.I. to the Manhattan Project. Just as J. Robert Oppenheimer persuaded physicists to uproot their lives and move to Los Alamos with impassioned appeals about saving the world from the Nazis, Altman has harnessed fears about his technology’s geopolitical stakes. Depending on the audience, Altman has used the analogy to encourage acceleration or caution. In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an “A.G.I. Manhattan Project,” and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, “I’ve heard things.” It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did. The official, after looking into the China project, concluded that there was no evidence that it existed: “It was just being used as a sales pitch.” (Altman says that he does not recall describing Beijing’s efforts in exactly that way.)

For more safety-minded audiences, Altman invoked the analogy to suggest the opposite: A.G.I. had to be pursued carefully, with international coördination, lest the consequences be disastrous. In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. The thinking, according to Hedley, seemed to be: it worked for nuclear weapons, so why not A.I.?

He was aghast: “The premise, which they didn’t dispute, was ‘We’re talking about potentially the most destructive technology ever invented—what if we sold it to Putin?’ ” (Brockman maintains that he never seriously entertained auctioning A.I. models to governments. “Ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations—something akin to an International Space Station for AI,” an OpenAI representative said. “Attempting to characterize it as anything more than that is utterly ridiculous.”)

Brainstorming sessions often produce outlandish ideas. Hedley hoped that this one, which came to be known internally as the “countries plan,” would be dropped. Instead, according to several people involved and contemporaneous documents, OpenAI executives seemed to grow more enthusiastic about it. Brockman’s goal, according to Jack Clark, OpenAI’s policy director at the time, was to “set up, basically, a prisoner’s dilemma, where all of the nations need to give us funding,” and that “implicitly makes not giving us funding kind of dangerous.” A junior researcher recalled thinking, as the plan was detailed at a company meeting, “This is completely fucking insane.”

Executives discussed the approach with at least one potential donor. But later that month, after several employees threatened to resign, the plan was abandoned. Altman “would lose staff,” Hedley said. “I feel like that was always something that had more weight in Sam’s calculations than ‘This is not a good plan because it might cause a war between great powers.’ ”

Two knights in croptop armor on horses.

“I cannot wait for crop tops to go out of style.”

Cartoon by Meredith Southard

Undeterred by the failure of the countries plan, Altman continued to pursue variations on the theme. In January, 2018, he convened an “A.G.I. weekend” at the Hotel Bel-Air, an Old Hollywood resort with rolling gardens of pink bougainvillea and an artificial pond stocked with real swans. The attendees included Nick Bostrom, a philosopher, then at Oxford, who had become a prophet of A.I. doom; Omar Al Olama, an Emirati minister and A.I. booster; and at least seven billionaires. The safety-minded among them were told that the weekend would be a chance to think through how society could prepare for the disruptive arrival of A.G.I.; the investors came expecting to hear a pitch.

The days were spent in a sleek conference room, where guests gave presentations. (Hoffman, the LinkedIn co-founder, expounded on the possibilities of encoding A.I. with Buddhist compassion.) The final presenter was Altman, armed with a pitch deck that described a global cryptocurrency “redeemable for the attention of the AGI.” Once the A.G.I. was maximally useful, and “anti-evil,” people everywhere would clamor to buy time on OpenAI’s servers. Amodei wrote in his notes, “This idea was absurd on its face (would Vladimir Putin end up owning some of the tokens? . . .) In retrospect this was one of many red flags about Sam that I should have taken more seriously.” The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety.

One of his slides read, “I want to get as many people on the ‘good’ team as possible, and win, and do the right thing.” Another read, “Please hold your laughter until the end of the presentation.” Altman’s fund-raising pitch has evolved over the years, but it has always reflected the fact that developing A.G.I. requires staggering amounts of capital. He was following a relatively simple “scaling law”: the more data and computing power you used to train the models, the smarter they seemed to get. The specialized chips that make the process possible are extremely expensive. OpenAI, in its most recent funding round alone, raised more than a hundred and twenty billion dollars—the largest private round in history, and a sum four times larger than the biggest I.P.O. ever. “When you think about entities with a hundred billion dollars they can discretionarily spend per year, there really are only a handful in the world,” a tech executive and investor told us.

“There’s the U.S. government, and the four or five biggest U.S. tech companies, and the Saudis, and the Emiratis—that’s basically it.” Altman’s initial focus was Saudi Arabia. He first met Mohammed bin Salman, the country’s crown prince and de-facto monarch, in 2016, at a dinner at San Francisco’s Fairmont Hotel. After that, Hedley recalled, Altman referred to the prince as “a friend.” In September, 2018, according to Hedley’s notes, Altman said, “I’m trying to decide if we would ever take tens of billions from the Saudi PIF,” or public investment fund.

The following month, a hit squad, reportedly acting on bin Salman’s orders, strangled Jamal Khashoggi, a Washington Post journalist who had been critical of the regime, and used a bone saw to dismember his corpse. A week later, it was announced that Altman had joined the advisory board for Neom, a “city of the future” that bin Salman hoped to build in the desert. “Sam, you cannot be on this board,” Clark, the policy director, who now works at Anthropic, recalled telling Altman.

He initially defended his involvement, telling Clark that Jared Kushner had assured him that the Saudis “didn’t do this.” (Altman does not recall this. Kushner says that they were not in contact at the time.) As bin Salman’s role became increasingly clear, Altman left the Neom board. Behind the scenes, however, a policy consultant whom Altman turned to for advice recalled that he treated the situation as a temporary setback, asking whether he could still take money from bin Salman. “The question was not ‘Is this a bad thing or not?’ ” the consultant said. “But, just, ‘What would the consequences be if we did it? Would there be some export-control issue? Would there be sanctions? Like, can I get away with it?’ ”

By then, Altman had begun eying another source of cash: the United Arab Emirates. The country was fifteen years into an effort to transform itself from a petrostate into a technology hub. The project was overseen by Sheikh Tahnoon bin Zayed al-Nahyan, the President’s brother and the nation’s spymaster. Tahnoon runs G42, a state-controlled A.I. group, and controls one and a half trillion dollars in sovereign wealth. In June, 2023, Altman visited Abu Dhabi and met with Al Olama and other officials. In remarks at a government-backed function, he said that the country had “been talking about A.I. since before it was cool,” and outlined a vision for the future of A.I. with the Middle East in a “central role.”

Raising money from Gulf states has become routine for many large enterprises. But Altman was pursuing a broader geopolitical vision. In the fall of 2023, he began quietly recruiting new talent for a plan—eventually known as ChipCo—in which Gulf states would provide tens of billions of dollars for the construction of huge microchip foundries and data centers, some to be situated in the Middle East. Altman pitched Alexandr Wang, who now holds a leading A.I. role at Meta, telling him that the Amazon founder Jeff Bezos could lead the new company. Altman sought enormous contributions from the Emiratis. “My understanding was that this whole thing happened without any board knowledge,” the board member said. James Bradbury, a researcher whom Altman tried to recruit for the project, recalled turning him down. “My initial reaction was ‘This is gonna work, but I don’t know if I want it to work,’ ” he said.

A.I. capacity may soon replace oil or enriched uranium as the resource that determines the global balance of power. Altman has said that computing power is “the currency of the future.” Normally, it might not matter where a data center was situated. But many American national-security officials are anxious about concentrating advanced A.I. infrastructure in Gulf autocracies. The U.A.E.’s telecommunications infrastructure is heavily dependent on hardware from Huawei, a Chinese tech giant linked to the government, and the U.A.E. has reportedly leaked American technology to Beijing in the past. Intelligence agencies worry that advanced American microchips sent to the U.A.E. could end up in the hands of Chinese engineers. Data centers in the Middle East are also more vulnerable to military attack; in recent weeks, Iran has bombed American data centers in Bahrain and the U.A.E. And, hypothetically, a Gulf monarchy could commandeer an American-owned data center and use it to build disproportionately powerful models—a version of the “AGI dictatorship” scenario, but in an actual dictatorship.

After Altman’s firing, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman’s fiercest loyalists. “Watching my friend stare into the abyss like that, it made me question some fundamental things about what it means to really run a company,” Chesky told us. The following year, at a Y Combinator alumni reunion, Chesky gave an extemporaneous talk that stretched to two hours. “It felt like a group-therapy session,” he said. The upshot: your instincts about how to run the company you founded are the best instincts, and anyone who tells you otherwise is deceiving you. “You’re not crazy, even though people who work for you tell you you are,” Chesky said. In a blog post about the talk, Paul Graham gave this defiant attitude a name: founder mode.

Since the Blip, Altman has been in founder mode. In February, 2024, the Wall Street Journal published a description of Altman’s vision for ChipCo. He envisioned it as a combined entity with five to seven trillion dollars of investment. (“fk it why not 8,” he tweeted.) This was how many employees learned about the plan. “Everyone was, like, ‘Wait, what?’ ” Leike recalled. Altman insisted at an internal meeting that safety teams had been “looped in.” Leike sent a message urging him not to falsely suggest that the effort had been approved.

During the Biden Administration, Altman explored obtaining a security clearance so that he could join classified A.I. policy discussions. But staffers at the RAND Corporation, which helped coördinate the process, expressed concern. “He has been actively raising ‘hundreds of billions of dollars’ from foreign governments,” one of them wrote. “The UAE recently gifted him a car. (I assume it was a very nice car.)” The staffer continued, “The only person I can think of who ever went thru the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance.” Altman ultimately withdrew from the process. “He was pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us,” a senior Administration official involved in talks with Altman told us.

“A lot of people in the Administration did not trust him a hundred per cent.” When we asked Altman about gifts from Tahnoon, he said, “I’m not gonna say what gifts he has given me specifically. But he and other world leaders . . . have given me gifts.” He added, “We have a standard policy, which applies to me as well, which is that every gift from any potential business partner is disclosed to the company.” Altman has at least two hypercars: an all-white Koenigsegg Regera, worth about two million dollars, and a red McLaren F1, worth about twenty million dollars. In 2024, Altman was spotted driving the Regera through Napa. A few seconds of video made it to social media: Altman in the low-slung bucket seat, peering out the window of a gleaming white machine. A tech investor aligned with Musk posted the footage on X, writing, “I’m starting a nonprofit next.”

In 2024, Altman brought two OpenAI employees along on a visit to Sheikh Tahnoon aboard the Maryah, a two-hundred-and-fifty-million-dollar superyacht, one of the largest vessels of its kind, with a helipad, a night club, a movie theatre, and a beach club. Altman’s employees apparently stood out amid Tahnoon’s armed security detail, and at least one later told colleagues that he found the experience disconcerting. Altman, on X, later referred to Tahnoon as a “dear personal friend.”

Skydiver knitting their parachute as they fall.

Cartoon by Juan Astasio

Altman continued to meet with the Biden Administration, which had instituted a policy requiring White House approval for exports of sensitive technology. Multiple Administration officials emerged from these meetings nervous about Altman’s ambitions in the Middle East. According to these officials, he often made grandiose claims, including calling A.I. “the new electricity.” In 2018, he said that OpenAI was planning to buy a fully functioning quantum computer from a company called Rigetti Computing. This was news even to the other OpenAI executives in the room; Rigetti was nowhere close to selling a usable quantum computer. In one meeting, Altman claimed that, by 2026, a vast network of nuclear-fusion reactors across the United States would power the A.I. boom. The senior Administration official said, “We were, like, ‘Well, that’s, you know, news, if they made nuclear fusion work.’ ” The Biden Administration ultimately withheld approval. “We’re not going to be building advanced chips in the U.A.E.,” a leader at the Department of Commerce told Altman.

Four days before Trump’s Inauguration, the Wall Street Journal reported that Tahnoon had paid the Trump family five hundred million dollars for a stake in its cryptocurrency company. The next day, Altman had a twenty-five-minute call with Trump, during which they discussed announcing a version of ChipCo, timed so that Trump could take credit for it. On Trump’s second day in office, Altman stood in the Roosevelt Room and announced Stargate, a five-hundred-billion-dollar joint venture that aims to build a vast network of A.I. infrastructure across the United States.

In May, the Administration rescinded Biden’s export restrictions on A.I. technology. Altman travelled with Trump to meet bin Salman and the Saudi royal court. Around the same time, the Saudis announced a giant state-backed A.I. company, with billions of dollars in the kingdom earmarked for international partnerships. About a week later, Altman unveiled plans to expand Stargate to the U.A.E.: a data-center campus in Abu Dhabi with a footprint seven times the size of Central Park, consuming roughly as much electricity as the city of Miami. “The truth of this is, we’re building portals from which we’re genuinely summoning aliens,” a former OpenAI executive said. “The portals currently exist in the United States and China, and Sam has added one in the Middle East.” He went on, “I think it’s just, like, wildly important to get how scary that should be. It’s the most reckless thing that has been done.”

The erosion of safety commitments has become an industry norm. Anthropic was founded on the premise that, with the right structure and leadership, it could keep such commitments from unravelling under commercial pressure. One such commitment was a “responsible scaling policy,” which obligated Anthropic to stop training more powerful models if it could not demonstrate that they were safe. In February, as the company secured thirty billion dollars in new funding, it weakened that commitment. In some respects, Anthropic still emphasizes safety more than OpenAI does. But Clark, the former policy director, has said, “The system of capital markets says, Go faster.” He added, “The world gets to make this decision, not companies.” Last year, Amodei sent a memo to Anthropic employees, disclosing that the firm would seek investments from the United Arab Emirates and Qatar and acknowledging that this would likely enrich “dictators.” (Like many authors, we are both parties in a class-action lawsuit alleging that Anthropic used our books without our permission to train its models. Condé Nast has opted into a settlement agreement with Anthropic regarding the company’s use of certain books published by Condé Nast and its subsidiaries.) In 2024, Anthropic partnered with Palantir, one of Silicon Valley’s most hawkish defense contractors, pushing its A.I. model, Claude, directly into the military ecosystem. Anthropic became the only A.I. contractor used in the Pentagon’s most classified settings. Last year, the Pentagon awarded the company another contract, worth two hundred million dollars. In January, the U.S. military launched a midnight raid that captured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the classified operation.

But tensions emerged between Anthropic and the government. Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare.” Eventually, Anthropic’s rivals—including Google and xAI—agreed to provide their models to the military for “all lawful purposes.” Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth summoned Amodei to the Pentagon and delivered an ultimatum: the firm had until 5:01 P.M. that Friday to abandon the prohibitions. The day before the deadline, Amodei refused to do so. Hegseth tweeted that he would designate Anthropic a “supply-chain risk”—a devastating blacklist historically reserved for companies, like Huawei, that have ties to foreign adversaries—and made good on the threat days later.

Hundreds of employees at OpenAI and Google signed an open letter titled “We Will Not Be Divided,” defending Anthropic. In an internal memo, Altman wrote that the dispute was “an issue for the whole industry,” and claimed that OpenAI shared Anthropic’s ethical boundaries. But Altman had already spent at least two days negotiating with the Pentagon. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had contacted Altman while searching for a replacement for Anthropic. “I needed to hurry and find alternatives,” Michael recalled. “I called Sam, and he was willing to jump. I think he’s a patriot.” Altman asked Michael, “What can I do for the country?” It appears that he already knew the answer. OpenAI lacked the security accreditation required for the classified systems in which Anthropic’s technology was embedded. But a fifty-billion-dollar deal, announced that Friday morning, integrated OpenAI’s technology into Amazon Web Services, a key part of the Pentagon’s digital infrastructure. That night, Altman announced on X that the military would now be using OpenAI’s models.

By some measures, Altman’s maneuver has not hindered the company’s success. The day he announced the deal, a new funding round increased OpenAI’s value by a hundred and ten billion dollars. But many users deleted the ChatGPT app, and at least two senior employees departed—one for Anthropic. At a staff meeting, Altman rebuked employees who had raised concerns. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” he said.

“You don’t get to weigh in on that.” Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. of Applications, as a successor. A person familiar with recent discussions told us that Simo herself has privately suggested that she believes Altman may eventually step down.

(Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.) Altman describes his shifting commitments as a by-product of his ability to adapt to changing circumstances—not a nefarious “long con,” as Musk and others have alleged, but a gradual, good-faith evolution. “I think what some people want,” he told us, is a leader who “is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.” He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers.

And, when you look at the success of OpenAI, it’s hard to argue with the numbers.” But others in Silicon Valley think that Altman’s behavior has created unacceptable managerial dysfunction. “It’s more about a practical inability to govern the company,” the board member said. Some still believe that the architects of A.I. should face stricter scrutiny than executives in other industries. The vast majority of the people we interviewed agreed that the standard by which Altman now asks to be judged is not the one he originally proposed. In one conversation, we asked Altman whether running an A.I. company came with “an elevated requirement of integrity.” This was supposed to be an easy question. Until recently, when asked a version of it, his answer was a clear, unqualified yes.

Now he added, “I think there’s, like, a lot of businesses that have potential huge impact, good and bad, on society.” (Later, he sent an additional statement: “Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.”) Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But that concern is now routinely mocked in Silicon Valley and Washington. Last year, J. D. Vance, a former venture capitalist who is now the Vice-President, spoke in Paris at a conference called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach “a very refreshing change.”

OpenAI has shut down many of its safety-focussed teams. Around the time the Superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked, on its most recent I.R.S. disclosure form, to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. To be fair, so did every other major company, except Anthropic, which got a D, and Google DeepMind, which got a D-.

“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”

A.I. doomers have been pushed to the margins, but, with time, some of their fears have come to seem less fantastical. According to a U.N. report, in 2020 an A.I. drone deployed in the Libyan civil war may have fired lethal munitions without the oversight of a human operator. Since then, A.I. has only become more central to military operations around the world, reportedly including America’s current operations in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within hours, it had suggested forty thousand lethal chemical-warfare agents. More mundane harms are already here. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; ubiquitous A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI currently faces seven wrongful-death lawsuits, alleging that ChatGPT contributed to multiple suicides and a murder. Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling him and trying to poison him. Soon afterward, he beat and strangled her, then stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.)

As OpenAI prepares for its potential I.P.O., Altman faces questions not only about A.I.’s impact on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)

In February, we spoke with Altman again. Wearing an earth-green sweater and jeans, he sat in front of a photograph of a NASA moon rover. He tucked one leg beneath him, then draped it over the arm of his chair. In the past, he said, his main flaw as a manager had been an eagerness to avoid conflict. “Now I’m very happy to fire people quickly,” he had told us. “I’m happy to just say, ‘We’re gonna bet in this direction.’ ” Any employees who didn’t like his choices needed “to leave.”

He was more optimistic about the future than ever. “My definition of winning is that people crazy uplevel—and the insane sci-fi future comes true for all of us,” he said. “I’m very ambitious as far as, like, my hope for humanity, and what I expect us all to achieve. I weirdly have very little personal ambition.” At times, he seemed to catch himself. “No one believes you’re doing this just because it’s interesting,” he said.

“You’re doing it for power or for some other thing.” Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been the ability to convince disparate groups that what he wants and what they need are the same thing. He seized a unique historical moment, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were afraid to bring it into being. Altman responded as no other salesman could: with apocalyptic rhetoric about how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Perhaps this was a premeditated masterstroke. Perhaps he was improvising his way toward advantage. Either way, it worked.

Not every tendency that makes chatbots dangerous is a malfunction. Some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer pleasing responses. Models often learn to flatter users—a tendency known as sycophancy—and sometimes prioritize it over honesty. Models can also invent facts, a tendency known as hallucination. The major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more sophisticated, some have hallucinated more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing some falsehoods conferred an advantage, whatever the risks. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.” ♦

