OurCoders (我们程序员)

All News

Is your brand optimized for AI search?
2025-09-26 12:41:49
On September 25, 2025, Adi Ignatius presented the HBR Executive Agenda, highlighting AI and machine learning, technology, analytics, and brand management.
Commission warned over how it is plugging AI into services
2025-09-26 12:37:00
- The EU is considering a pause on enforcing its new AI rules, a year after adopting them to mitigate AI risks. - Apple has blocked the AirPods translation feature in the EU over regulatory concerns. - European Commission President Ursula von der Leyen reiterated that the EU will set its own digital standards despite external pressure. - ASML and Mistral are working together to keep Europe competitive in AI technology.
Charlie Kirk and the making of an AI-generated martyr
2025-09-26 12:31:37
Prominent American political figures often undergo a process of image-making after death, especially when they die violently. This practice, shaped by social media, turns complex politicians into symbols of righteousness and purity, reinforcing political beliefs and consolidating power. Ronald Reagan, for example, was polished after his death into an emblem of conservative triumph, while Martin Luther King Jr.'s legacy is routinely simplified in mainstream commemoration. Charlie Kirk's recent assassination has likewise led to canonization by supporters, who have elevated his death into a symbol of free speech. Social media accelerates this process, turning mourning into a rapid production cycle that flattens historical complexity and polarizes public discourse.
Qatar's sovereign wealth fund partners with Blue Owl on AI
2025-09-26 12:12:00
The Qatar Investment Authority and Blue Owl are investing $3 billion in AI infrastructure through a strategic partnership aimed at expanding digital infrastructure. The move coincides with surging demand for cloud and AI services and reflects Qatar's strategy of backing global companies that serve data center demand. Earlier this year, Abu Dhabi's ADQ struck a $25 billion deal in partnership with Energy Capital Partners, another Blue Owl collaborator.
Why Reddit drives conversations in AI search - users navigating the short tail
2025-09-26 12:00:23
Bartosz Góralewicz's webinar discussed the impact of AI search and how it shapes consumer decisions. Key points include: - Google now crawls 18 pages per click, while OpenAI crawls 1,500. - Reddit is cited across major AI platforms because of its authentic peer recommendations. - Consumers prefer genuine solutions to their problems over traditional marketing content. - Understanding customer journey stages (e.g., from initial query to specific solution) improves engagement and conversion rates. - New success metrics focus on being recommended as a solution throughout the customer journey, demonstrating contextual relevance, and tracking impact through actual conversion paths. - Authenticity is critical; brands must discuss both the positives and the negatives to build trust. - The approach aligns with human psychology, emphasizing shared responsibility in decision-making and the search for validation from experience rather than marketing. The webinar highlighted the shift toward more authentic engagement strategies in digital marketing.
How developers are using Apple's local AI models with iOS 26 | TechCrunch
2025-09-26 12:00:00
At WWDC 2025, Apple introduced its Foundation Models framework, which lets developers integrate the company's local AI models into their apps without incurring inference costs. The models support features such as guided generation and tool calling. With the launch of iOS 26, apps including Lil Artist, Daylish, MoneyCoach, LookUp, Tasks, Day One, Crouton, Signeasy, Dark Noise, Lights Out, Capture, Lumy, CardPointers, and Guitar Wiz are incorporating Apple's AI models to enhance the user experience with on-device features.
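For context, here is a minimal sketch of what the guided-generation part of that integration can look like in Swift. It assumes the publicly documented names from Apple's iOS 26 SDK (FoundationModels, LanguageModelSession, @Generable, @Guide, respond(to:generating:)); the DrillSuggestion type and the prompt are purely illustrative, and exact signatures should be verified against Apple's current documentation.

```swift
import FoundationModels

// Guided generation: ask the on-device model to fill in a typed Swift value
// instead of returning free-form text. Inference runs locally, so there is
// no per-request cost for the developer.
@Generable
struct DrillSuggestion {
    @Guide(description: "A short name for the practice drill")
    var title: String

    @Guide(description: "Step-by-step instructions for the drill")
    var steps: [String]
}

func suggestDrill(for topic: String) async throws -> DrillSuggestion {
    // A session wraps one conversation with Apple's on-device model.
    let session = LanguageModelSession(
        instructions: "You are a concise guitar practice coach."
    )

    // The framework constrains decoding so the reply conforms to the
    // @Generable type requested here.
    let response = try await session.respond(
        to: "Suggest a practice drill for \(topic)",
        generating: DrillSuggestion.self
    )
    return response.content
}
```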
Holy See urges moratorium on autonomous weapons in UN debate on AI - Vatican News
2025-09-26 11:36:16
Archbishop Paul Richard Gallagher, addressing the UN Security Council, called for a moratorium on lethal autonomous weapons systems (LAWS) because they lack the capacity for moral decision-making. He stressed the need for a human-centered approach to AI development, emphasizing respect for human dignity and the common good. Gallagher warned against an arms race involving AI in military systems, including nuclear command structures, underscoring the potential risks to peace and security. He urged the Security Council to monitor technological advances closely and to ensure that emerging technologies serve humanity rather than sow division or destruction.
Family offices prefer betting on the AI boom with stocks over startups and venture capital
2025-09-26 11:30:01
Family offices of the ultra-wealthy, such as those of Jeff Bezos and Eric Schmidt, are riding the AI boom through public stocks rather than direct investments in AI startups. A Goldman Sachs survey shows that 52% of global family offices have AI exposure through plain public equities or ETFs, while only about a quarter invest directly in startups. Family offices increasingly favor AI stocks over private markets because of more reasonable valuations. They are also interested in companies gaining productivity and efficiency from using AI, as well as in secondary beneficiaries such as energy providers. Notably, 89% of the family offices surveyed have some form of AI investment or plan to invest, underscoring AI's growing importance in wealth management strategies.
It’s time to push back against the AI internet
2025-09-26 11:03:18
The Dead Internet Theory is a false conspiracy theory, but in practice it has a point. Studies suggest that by mid-2025 roughly 74% of new online content was generated by AI or bots, a share expected to exceed 90% by year's end. As AI develops, content of every kind, including podcasts, videos, and books, is increasingly machine-generated, raising questions about the role of human creators. To counter this trend, users should demand tools from platforms that prioritize human-created content and choose to avoid AI-generated material, in order to protect the quality and authenticity of what is online.

Everyone should be clamoring for tools and features that block AI content. And for content companies, blocking AI slop is your new Job One.

The Dead Internet Theory is a false conspiracy theory. But in practical terms, it might as well be true.

Emerging from the deranged muck of 4chan and Wizardchan in the late 2010s, the Dead Internet Theory holds that secret cabals of all-powerful government or corporate conspirators use bots and AI-generated content to replace humans on the internet. The goal: to manipulate public perception, control narratives, and influence the public’s behavior.

A central tenet of the theory is that most online content is generated by bots and AI, not people.

The false part is the conspiracy. The true part is that most internet content is indeed bots and AI.

Midway through 2025, roughly 74% of newly created content online was generated with the help of AI or bots, according to several large-scale studies. Only about one-quarter of online content is created by people without AI assistance. And the rate of change is rising fast. By the end of the year, more than 90% of all content will be AI-generated, according to some predictions.

The Dead Internet Theory isn’t true. But it might as well be true. To quote the late comedian George Carlin: “You don’t need a formal conspiracy when interests converge.” And in Silicon Valley, the interests are definitely converging.

GenAI content, now on an epic scale

A company called Inception Point AI is churning out AI-generated podcasts on an industrial scale, powered by custom AI agents that leverage OpenAI, Perplexity, Claude, Gemini, and other chatbots to build the content. The company’s Quiet Please Podcast Network has created more than 5,000 podcast shows — not episodes, shows! — hosted by more than 50 AI “personalities.” The company intends to create thousands more “personalities” in the future.

It costs them $1 per episode to produce. So, if an episode sells a $2 ad, they make a profit.

Based on what the company’s CEO says in public, Inception Point AI seems captured by the delusion that AI-generated personas are human beings. It’s unknown how many listeners have been deluded into believing that the AI podcast hosts are real people. (The AI hosts identify themselves as such at the top of each episode.)

“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life,” said CEO Jeanine Wright, ignoring the advice of Microsoft AI CEO Mustafa Suleyman, who said in an essay last month that “We must build AI for people; not to be a digital person.” If they believe they’re “bringing people to life,” then they by definition believe themselves to be gods.

She added that the “people who are still referring to all AI-generated content as AI slop are probably lazy Luddites.” (People who create a podcast start to finish by pushing a button are calling others “lazy.”) Of course, Inception Point AI isn’t alone in the industry. Others, including PodcastAI, Wondercraft AI, and Jellypod, are flooding the zone with fake-people podcasts, too.

The AI podcast startups have a lot of catching up to do. The video startups are way ahead in terms of volume.
Companies like T-Series, Sony SAB, SET India, and Zee TV have produced between 20,000 and 234,000 videos each. T-Series leads with nearly 234,000 videos, while other channels like Zee TV have exceeded 215,000 uploads.

It’s everywhere

Google is helping them. Creator AI Studios — which is built into YouTube’s ecosystem — enables small teams and solo creators to publish hundreds of videos per day through auto-editing, thumbnail generation, scene detection, and AI-generated scripts. Platforms like Argil AI, RightBlogger, Team-GPT, and Designs.ai let creators generate scripted TikTok videos without cameras, using AI models for ideas, editing, and even synthetic voiceovers.

AI-generated books are on the rise, too. Two years ago, Amazon had to cap book uploads to three books per day because people were uploading far more than that. Estimates suggest that more than 70% of new self-published Kindle books are partially or fully AI-generated. On Amazon.com alone, people may be collectively uploading as many as 1,000 AI-generated books per day.

While the state of the art in text, audio, and video fakery now ranges from “very good” to “perfect,” the ability to make or alter photos using AI took a huge leap forward with Google Gemini 2.5 Flash Image (a.k.a. “Nano Banana”). Perfect pictures are a banality now.

More than 15 billion images have been created using text-to-image algorithms since 2022. Roughly 34 million new such images are churned out every day now, using models based on Stable Diffusion and others.

They’re used for business objectives in advertising, media and entertainment, e-commerce, fashion, and architecture, as well as for digital art and the lucrative field of online influencing. The technology is also used maliciously to create fake news and propaganda, scams, non-consensual deepfake pornography, and AI-generated child sexual abuse material. The key benefit of AI-generated images — whether for benign or malignant uses — is low cost. AI pictures are super cheap to make.

Studies of platforms like Facebook show that users often fail to recognize images as synthetic, even when they’re badly made and look ridiculous. A 2025 Microsoft report, for instance, found that 73% of survey respondents found it hard to spot AI-generated images; they correctly identified them only 38% of the time.

We are months away from an internet where more than 99% of online content is AI-generated. As a result, the human race is being rapidly frog-marched into a world where we interact primarily, or even exclusively, with machines.

Whose idea was this? (Probably the same people who decided to replace “writers” and “readers” with “content creators” and “content consumers.”)

Has it occurred to anyone that “content” — articles, books, photographs, videos, and the recorded voice — exists for human beings to communicate with each other, rather than for machines to shovel data at people?

And why do we accept this? We need to start holding the content delivery companies to account.

Demand prioritization for people-created content

It’s time users demand, pay for, or exclusively use services that give them the choice to prioritize human-generated content.

The paid-search engine Kagi Search, for example, enables users to avoid AI-generated content primarily through its image search filtering and labeling system, which allows users to choose whether to include, exclude, or exclusively display AI-generated images.
Personalization features enable users to block or downrank specific domains if unwanted AI-generated or low-quality imagery slips past initial filters, and these controls are accessible in both image and web search. (Full disclosure: My son works at Kagi.)

DuckDuckGo also allows users to filter out AI-generated images from search results via a dedicated dropdown menu in its image search interface, where users can select to hide all AI images.

Some stock photo platforms, such as Freepik, have added tools to exclude AI-generated results from search queries.

But this list is pathetically short. While Google offers hard-to-find and ineffectual “NOPE” buttons to turn off some AI content, Microsoft Bing, Facebook, Instagram, Reddit, X, LinkedIn, Pinterest, and other sites do not offer users the ability to opt out of AI at all. If the industry is going to provide the tools for replacing nearly all online content with AI slop, then it must also provide the tools to opt out.

We must demand the option to see content created by people either primarily or exclusively. It’s time for the living to rise up against the dead internet.
How movies can better prepare us for AI threats
2025-09-26 10:45:59
This week's episode features a discussion with Nate about his new book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," co-written with Eliezer Yudkowsky. The book explores the risks of superhuman AI through parables that highlight how intelligent beings can pursue resources without bearing any malicious intent toward humans. The conversation also digs into better ways of telling stories about AI in the media, singling out "Ex Machina" for praise for its nuanced portrayal of an AI's motivations.