OC

Knowledge OS
鹦鹉螺口语

全部新闻

AI安全平台Noma获2500万美元A轮融资 |华泰科技

2024-10-31 13:37:00

Noma 是一家专注于人工智能安全的以色列初创公司,继去年获得 700 万美元种子轮融资后,在 Ballistic Ventures 领投的 A 轮融资中筹集了 2500 万美元。该投资包括来自领先科技公司和行业领导者的战略天使投资者。Noma 正在推出一个应用程序开发平台,提供端到端的人工智能发现、安全、保护和合规性,涵盖整个数据和人工智能生命周期的供应链风险和威胁。Noma 由首席执行官 Niv Braun 和首席技术官 Alon Tron 于 2023 年创立,旨在解决人工智能领域新兴的安全挑战。

AI安全平台Noma获2500万美元A轮融资 |华泰科技

如何专注于数据科学/机器学习

2024-10-31 13:21:27

本文讨论了在数据科学职业中选择成为通才还是专家的困境。它建议专业化,但建议不要急于求成,建议人们在最初几年(2-3年)学习基本概念,如基本统计、线性代数、微积分和各种机器学习模型,包括回归技术和神经网络,然后再进行研究。决定。

如何专注于数据科学/机器学习

重新定义 SEO:人工智能概述和未来之路

2024-10-31 13:00:00

由于谷歌人工智能概述等新发展正在改变传统搜索结果,SEO 格局正在迅速发展。人工智能影响用户行为和内容交互,因此需要改变 SEO 策略。关键见解包括优化可能出现在人工智能概述中的特色片段,了解概述中可以引用多个来源,创建超越顶级排名的高质量内容,强调品牌提及而不是反向链接,在 Reddit 和维基百科等平台上建立存在,增强点击率、防止品牌操纵以及了解检索增强生成 (RAG) 技术。这些变化为 2025 年及以后的 SEO 工作做好准备。这搜索引擎优化

重新定义 SEO:人工智能概述和未来之路

2024-10-31 13:00:00

Google Cloud’s security chief warns: Cyber defenses must evolve to counter AI abuses Google Cloud’s CISO Phil Venables highlights the need for new security tactics针对生成型AI的固有风险,如模型幻觉、敏感数据泄露及偏见问题。他强调传统的网络安全措施需扩展以监测和防御AI滥用,并提出采用AI进行防御性应用的重要性。Venables还分享了在Google Cloud中学习到的经验教训,包括实施特定的安全控制和工具,确保训练数据的清洁性和保护性,以及维护模型、代码和测试数据的强大访问控制等策略。 Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More While many existing risks and controls can apply to generative AI , the groundbreaking technology has many nuances that require new tactics, as well.  Models are susceptible to hallucinations, or the production of inaccurate content. Other risks include the leaking of sensitive data via a model’s output, tainting of models that can allow for prompt manipulation and biases as a consequence of poor training data selection or insufficiently well-controlled fine-tuning and training.  Ultimately, conventional cyber detection and response needs to be expanded to monitor for AI abuses — and AI should conversely be used for defensive advantage, said Phil Venables, CISO of Google Cloud . “The secure, safe and trusted use of AI encompasses a set of techniques that many teams have not historically brought together,” Venables noted in a virtual session at the recent Cloud Security Alliance Global AI Symposium . Lessons learned at Google Cloud Venables argued for the importance of delivering controls and common frameworks so that every AI instance or deployment does not start all over again from scratch.  “Remember that the problem is an end-to-end business process or mission objective, not just a technical problem in the environment,” he said.  Nearly everyone by now is familiar with many of the risks associated with the potential abuse of training data and fine-tuned data. “Mitigating the risks of data poisoning is vital, as is ensuring the appropriateness of the data for other risks,” said Venables.  Importantly, enterprises should ensure that data used for training and tuning is sanitized and protected and that the lineage or provenance of that data is maintained with “strong integrity.” “Now, obviously, you can’t just wish this were true,” Venables acknowledged. “You have to actually do the work to curate and track the use of data.” This requires implementing specific controls and tools with security built in that act together to deliver model training, fine-tuning and testing. This is particularly important to assure that models are not tampered with, either in the software, the weights or any of their other parameters, Venables noted.  “If we don’t take care of this, we expose ourselves to multiple different flavors of backdoor risks that can compromise the security and safety of the deployed business or mission process,” he said.  Filtering to fight against prompt injection Another big issue is model abuse from outsiders. Models may be tainted through training data or other parameters that get them to behave against broader controls, said Venables. This could include adversarial tactics such as prompt manipulation and subversion.  Venables pointed out that there are plenty of examples of people manipulating prompts both directly and indirectly to cause unintended outcomes in the face of “naively defended, or flat-out unprotected models.”  This could be text embedded in images or other inputs in single or multimodal models, with problematic prompts “perturbing the output.” “Much of the headline-grabbing attention is triggering on unsafe content generation, some of this can be quite amusing,” said Venables. It’s important to ensure that inputs are filtered for a range of trust, safety and security goals, he said. This should include “pervasive logging” and observability, as well as strong access control controls that are maintained on models, code, data and test data, as well.  “The test data can influence model behavior in interesting and potentially risky ways,” said Venables.  Controlling the output, as well Users getting models to misbehave is indicative of the need to manage not just the input, but the output, as well, Venables pointed out. Enterprises can create filters and outbound controls — or “circuit breakers” —around how a model can manipulate data, or actuate physical processes.  “It’s not just adversarial-driven behavior, but also accidental model behavior,” said Venables.  Organizations should monitor for and address software vulnerabilities in the supporting infrastructure itself, Venables advised. End-to-end platforms can control the data and the software lifecycle and help manage the operational risk of AI integration into business and mission-critical processes and applications.  “Ultimately here it’s about mitigating the operational risks of the actions of the model’s output, in essence, to control the agent behavior, to provide defensive depth of unintended actions,” said Venables.  He recommended sandboxing and enforcing the least privilege for all AI applications. Models should be governed and protected and tightly shielded through independent monitoring API filters or constructs to validate and regulate behavior. Applications should also be run in lockdown loads and enterprises need to focus on observability and logging actions.  In the end, “it’s all about sanitizing, protecting, governing your training, tuning and test data. It’s about enforcing strong access controls on the models, the data, the software and the deployed infrastructure. It’s about filtering inputs and outputs to and from those models, then finally making sure you’re sandboxing more use and applications in some risk and control framework that provides defense in depth.” VB Daily Stay in the know! Get the latest news in your inbox daily By subscribing, you agree to VentureBeat's Terms of Service. Thanks for subscribing. Check out more VB newsletters here . An error occured.

Noma 正在构建工具来发现人工智能应用程序的安全问题TechCrunch

2024-10-31 12:30:47

公司越来越多地采用人工智能,但由于人工智能应用的复杂性和安全团队的技能差距,面临着更高的网络安全风险。HiddenLayer、Protect AI 和 Cranium 等初创公司旨在保护 AI 系统免受各种威胁。Noma Security 摆脱了隐形模式,提供了用于识别人工智能开发中使用的数据管道和代码中的漏洞的工具,解决了人工智能生命周期中独特的技术挑战。Noma 成立于 2023 年,已经在财富 500 强公司中拥有付费客户,最近获得了 2500 万美元的 A 轮融资,以扩大其团队和市场覆盖范围。

Noma 正在构建工具来发现人工智能应用程序的安全问题TechCrunch

AI 对布法罗比尔队 2024 年 NFL 赛季前 8 场比赛的预测效果如何?

2024-10-31 12:00:04

布法罗比尔队目前在亚冠东部领先,远远领先于最接近的竞争对手迈阿密海豚队。AI (ChatGPT) 对比尔队前八场比赛的每场季前赛预测的分析显示出不同程度的准确性。例如,在第二周对阵海豚队的比赛中,人工智能错误地预测了迈阿密队的胜利;然而,它准确地预见了其他比赛中的高分结果和势均力敌的比赛,例如对阵乌鸦队和喷气机队的比赛。总体而言,ChatGPT 对比尔队的预测相当准确,所有分析的比赛的平均等级为 B。AI 的季中模拟预测比尔队的战绩为 5 胜 3 负,这与他们的实际表现仅相差一场比赛。这布法罗比尔

AI 对布法罗比尔队 2024 年 NFL 赛季前 8 场比赛的预测效果如何?

无人机、烟花和人工智能在洛杉矶纪念体育馆上演蔡国强竞技场大小的奇观

2024-10-31 11:30:31

- 蔡国强在洛杉矶纪念体育馆为 PST ART 举办的爆炸活动系列包括题为“盗火”和“神圣之怒”的表演。- Aziza Kadyri 利用人工智能对乌兹别克斯坦的苏扎尼刺绣进行了现代化改造,放大了女性的声音,并解构了威尼斯双年展后台的传统剧院。- 威廉·肯特里奇 (William Kentridge) 讨论他的九集电影系列《咖啡壶自画像》,该系列电影将于 2024 年 10 月 18 日在 MUBI 上上映。- DesignBoom 巴黎指南涵盖巴黎巴塞尔艺术展和迈阿密设计展的必看亮点。巴黎 2024 以及该市的其他著名展览。- Es Devlin 在伦敦揭幕前讨论了她的装置作品,其中包括 50 名流离失所者的肖像,探讨了集会和流离失所的主题。

无人机、烟花和人工智能在洛杉矶纪念体育馆上演蔡国强竞技场大小的奇观

2024-10-31 11:01:34

Geoffrey Hinton’s misguided views on AI Geoffrey Hinton, often referred to as the "godfather of AI," has been awarded a Nobel Prize in Physics, despite争议他关于AI潜在风险的观点。Hinton对AI未来持悲观态度,认为它可能威胁人类生存,但许多研究人员批评他的观点过于夸张,并指出当前的AI系统如ChatGPT只是通过算法生成看似智能的结果,实际上并未达到真正的理解水平。尽管Hinton在推动AI技术发展方面做出了重要贡献,但他关于AI风险的看法可能会分散人们对实际问题的关注,例如环境影响和对边缘群体的影响。最近,Hinton捐赠了部分诺贝尔奖金给Water First组织,该组织致力于培训原住民开发安全的水资源系统。 Artificial Intelligence He may have made important contributions to artificial intelligence, but that doesn’t mean he knows where it’s going Oct 31, 2024 — 4 min read Photo: Flickr/ Collision Conf It will probably come as no surprise to you that I’m no big fan of the so-called “godfather of AI” Geoffrey Hinton, and it’s fair to say I was stunned when he was given a Nobel Prize in Physics — as he seems to have been as well. Not long after that announcement was made, I was asked to write a quick piece about it for the Toronto Star , and they’ve allowed me to share it with you. I think the perspective on AI that Hinton shares — which is often charitably termed an “AI safety” perspective (or, less charitably, he’s a doomer) — is very unhelpful in actually dealing with the realities and potential near futures of AI — the harms to workers and the wider society that have nothing to do with the sci-fi dream of superintelligence. But I do want to say something positive about him. Hinton joined the University of Toronto in 1987 and is a Canadian citizen. He’s seen a lot that’s happened in Canada over the past several decades. Earlier this week, he revealed that he donated half of his share of the $1.45 million CAD in prize money from the Nobel Committee to Water First, an organization in Ontario training Indigenous peoples to develop safe water systems. In recent years, Canada has been facing a reckoning for the cultural genocide it inflicted on Indigenous peoples within its borders, from the lack of clean drinking water to the horrors of the residential schools . At a news conference, Hinton said, “I think it’s great that they’re recognizing (who lived on the land first), but it doesn’t stop Indigenous kids getting diarrhea.” He may be misguided on AI, but good on him for that. Now, here’s my piece on Hinton’s Nobel Prize, first published by the Toronto Star . In the mid-1960s, MIT computer scientist Joseph Weizenbaum developed a program called ELIZA. It was a more rudimentary form of a chatbot like ChatGPT, designed to simulate a psychotherapist. Upon seeing how people engaged with it, however, Wiezenbaum’s optimism toward the technology soured. The program had no understanding of what users were inputting. Even still, Weizenbaum found that people wanted to believe it did. His secretary even asked him to leave the room as she responded to the system’s questions. Today, researchers call this the ELIZA effect: projecting human traits onto computer programs and overestimating their capabilities as a result. That phenomenon came to mind recently when I heard the news that Geoffrey Hinton was being honoured with the 2024 Nobel Prize in Physics alongside John Hopfield. While Hinton certainly helped move his field forward, his assertions about the risks of artificial intelligence could distract us from the real consequences. You’ve likely heard Hinton referred to as the “godfather of AI.” His work has been key to the development of neural networks and the algorithms that form the basis of chatbots like ChatGPT. Hinton is a professor emeritus at the University of Toronto and would split his time between the university and Google until he resigned from the company in May 2023. Don’t listen to the ‘godfather of AI’ Geoffrey Hinton is using fantasies to distract us from the real harms of AI Disconnect Paris Marx There’s no doubting that Hinton has made important contributions to his field. But since the rise of generative AI at the end of 2023, Hinton has become known in tech circles for another reason: he promotes the idea that AI systems are nearing human levels of intelligence, and that they therefore pose a threat to human survival. He is not alone. But there are also a large number of researchers who push back on that idea and charge he’s guilty of falling prey to the ELIZA effect. Hinton asserts that since artificial neural networks were modelled on biological brains, they must then work similarly to them. That means a tool like ChatGPT isn’t just using complex algorithms to churn out believable results; it’s actually developed a level of understanding that will continue to grow until it exceeds the intelligence of human beings. He says this would mark an “existential threat” to humanity – despite acknowledging to the BBC that as recently as a few years ago most experts “thought it was just science fiction.” But that’s still the case today. After theories like Hinton’s started gaining more traction as the hype around ChatGPT grew, science fiction author Ted Chiang started criticizing the excitement, calling the technology “ autocomplete on steroids .” Emily M. Bender, a computational linguist at the University of Washington, has similarly called out people like Hinton for conflating a chatbot’s ability to churn out text with the notion there’s any meaning behind it. Put more plainly: things like ChatGPT only appear to be intelligent because they’ve been designed mimic human language to a plausible enough degree. Their creators want to believe they’re creating intelligent machines, so that’s what they choose to see. When I spoke to Bender last year, she told me that people like Hinton “would rather think about this imaginary sci-fi villain that they can be fighting against, rather than looking at their own role in what’s going on in harms right now.” AI models present plenty of concerns beyond the supposedly existential and science fictional ones Hinton is most preoccupied with, including everything from their environmental costs to how they’re already being deployed against marginalized populations today. But when CNN asked Hinton about those concerns in May 2023, he said they “weren’t as existentially serious” and thus not as worthy of his time. For his contributions to his field, Hinton deserves recognition, and he’s received plenty of it. But just because he’s excelled at advancing AI models doesn’t mean we also need to turn to him for answers to the questions about their broader societal consequences. Hinton may be an intelligent man, but we shouldn’t assume the same about the technology he helped create.

关注人工智能:人工智能估值永无止境的升级

2024-10-31 11:00:40

人工智能行业持续快速增长,OpenAI、xAI、Anthropic 和 Perplexity 等主要参与者的大量筹款活动是其标志。尽管人们对高估值持怀疑态度,但这些公司仍在以越来越高的倍数筹集大量资金。例如,据报道,xAI 正在就一轮 50 亿美元的融资进行谈判,在上次融资后的五个月内,该公司的估值将达到约 450 亿美元。同样,Perplexity 正在谈判将其估值比一个多月前提高近两倍,达到 80 亿美元。这一趋势反映了持续不断的人工智能投资热潮,并引发了对风险投资家在动荡的市场条件下未来盈利能力的质疑。

关注人工智能:人工智能估值永无止境的升级

Super Micro 价值 500 亿美元的股票暴跌凸显了人工智能炒作的风险

2024-10-31 11:00:01

Super Micro Computer Inc.于3月份被纳入标准普尔500指数,其股价在两年内飙升了2,000%以上,随后暴跌75%。审计师安永以管理层财务报表问题为由辞职后,该公司面临退市风险。Super Micro 的收入在最近几个季度翻了一番,但该公司自 5 月份以来一直没有提交正式披露信息,并且落后于 SEC 的备案,导致纳斯达克发出合规警告。卖空者对潜在的会计操纵表示担忧,据报道,美国司法部正在调查。这些事态发展后,该股进一步下跌,引发了人们对人工智能投资炒作的质疑。

Super Micro 价值 500 亿美元的股票暴跌凸显了人工智能炒作的风险