All News

Taxes, AI and health care: Everything you need to know about Colorado's special legislative session
2025-08-21 09:45:00
Colorado Governor Jared Polis has called a special legislative session to address several pressing issues created by the federal "One Big Beautiful Bill Act." Here is a summary of the key points:

### Key Issues and Proposals
1. **Budget shortfall:**
   - The new federal tax law leaves Colorado with a budget shortfall of roughly $750 million in fiscal year 2026.
   - Democrats plan to close the gap by ending corporate tax loopholes, raising taxes on the wealthy, and adjusting existing programs such as the Healthy School Meals program.
2. **Healthy School Meals program:**
   - A ballot measure would let the state keep all revenue collected for the Healthy School Meals program and use any excess funds for SNAP (Supplemental Nutrition Assistance Program) benefits.
   - The proposal aims to offset the administrative costs tied to the new federal work requirements imposed by the One Big Beautiful Bill Act.
3. **Closing corporate tax loopholes:**
   - Democrats plan to address the shortfall by ending corporate tax loopholes and raising taxes on high-income individuals rather than cutting essential programs or services.
4. **Overtime tax credit:**
   - Republicans have introduced a measure that would let Coloradans apply the federal overtime tax deduction to their state taxes starting in 2026, which would further reduce state revenue.
5. **Family Affordability Income Tax Credit (FAITC):**
   - Republican Sen. Barbara Kirkmeyer proposes eliminating the refundable portion of the FAITC to address the shortfall.
   - Democrats oppose the measure, arguing it places an undue burden on low-income families and children.

### Legislative Agenda
- **Revenue measures:**
  - Close corporate tax loopholes.
  - Raise taxes on high-income individuals.
  - Adjust existing programs, such as Healthy School Meals and SNAP, to manage costs effectively.
- **Spending cuts:**
  - Democrats are willing to cut spending but argue that revenue measures are more equitable.
  - Republicans have proposed specific cuts (for example, transgender care for prison inmates and the governor's office budget) that would not cover the full shortfall.

### Republican Efforts
- **Unrelated bills:** Some Republicans plan to introduce bills unrelated to Polis's call, focusing on issues such as business tax discounts and vendor fees.
- **Voter-approval requirement:** One proposal would require voter approval for any tax-law change that eliminates a tax deduction or loophole.

### Democratic Approach
- **Balancing the budget through revenue measures:**
  - Democrats emphasize closing corporate tax loopholes and adjusting existing programs rather than cutting essential services.
  - They argue their approach is more equitable, particularly in avoiding new burdens on low-income families.

### Conclusion
The special legislative session will pit Democrats' focus on revenue measures to address the budget shortfall against Republicans' preference for spending cuts. Given the complexity and contentiousness of these issues, negotiation and compromise are likely to play a significant role in determining the final outcome.
CoreWeave vs. Nvidia: Which Is the Better Artificial Intelligence (AI) Stock to Buy?
2025-08-21 09:30:00
Nvidia boasts impressive financials from its dominance in AI chip making, while CoreWeave, which rents out access to Nvidia chips, has posted strong revenue growth. Although CoreWeave benefits from high demand for AI computing capacity, it remains a much smaller company than Nvidia, which enjoys a wider competitive moat and stronger profitability. The takeaway for investors: while CoreWeave may be an attractive option for those seeking a smaller-cap opportunity in AI, Nvidia remains the safer, more compelling long-term investment given its market dominance and earnings potential.
2025-08-21 09:26:20
AI in science: is it useful?
At the Metascience 2025 conference in London, discussion centered on artificial intelligence's impact on scientific research. Although AI tools such as AlphaFold have produced breakthroughs, experts argued AI is unlikely to fundamentally transform scientific work, and some fear generative AI could flood the literature with low-quality papers, overwhelming researchers and burying genuinely new findings. While some believe AI can accelerate discovery in the life sciences, doubts remain about its real-world utility and returns. AI tools may help with tasks such as literature search and summarization, but scientists remain accountable for the results, and over-reliance on AI could mean missed opportunities for discovery.

While advances have produced breakthrough tools like AlphaFold, AI is unlikely to be revolutionary, and may drown scientists in low-quality papers, the conference heard.

Artificial intelligence might be useful to scientists in some circumstances, but it's unlikely to revolutionise the job, according to presentations at a conference held last month that discussed the technology's impact on research.

At the Metascience 2025 conference, a major get-together in London to discuss how research is conducted, AI in science was one of the hottest topics, but overall, expectations were tempered. Some even fear generative AI could create a wave of scientific spam, overwhelming researchers with even more papers, burying genuinely new findings.

The most recent wave of AI breakthroughs has brought some new tools, "but it's not a fundamental transformation of what it means to be a scientist, necessarily," said Matt Clancy, an expert in science and innovation at the not-for-profit funder Open Philanthropy.

"Science has long integrated new tools that open up new fields to study and new kinds of data," he told delegates.

The emergence of large language models (LLMs), the basis for popular tools such as ChatGPT, and more dedicated software like AlphaFold, which predicts protein structures from their amino acid sequences, has some hoping that AI could generate hypotheses, replicate findings in computational research, or summarise existing literature. On August 19, the EU's research Commissioner Ekaterina Zaharieva said that it was "impressive" how AI was "transforming research," and promised an AI in science strategy "soon." In 2023, the European Commission set up a dedicated AI in science unit and, earlier this year, published 15 case studies arguing AI was speeding up discovery in life sciences.

However, the debate over AI in science comes amid growing doubts over its usefulness in business, which caused a sell-off in technology shares on August 20. A survey of companies by the Massachusetts Institute of Technology in July found that "95% of organisations are getting zero return" from generative AI. Meanwhile, some AI company heads have made claims that critics see as absurdly overinflated. For example, Demis Hassabis, chief executive of Google DeepMind, the creator of AlphaFold, earlier this month suggested that "we can cure all disease with the help of AI [. . .] maybe within the next decade or so."

AI is not new
Despite the excitement over new tools like LLMs and AlphaFold, the history of AI in science actually goes back more than half a century, Iulia Georgescu, science and innovation manager at the UK's Institute of Physics, told the conference.

"Most people think it starts in the 2020s with AlphaGo and AlphaFold," she said. But the potted history of AI in science that she presented to the conference traces it back to a 1956 tool for proving theorems. Machine learning, synonymous with what is today defined as AI, was widely used in physics in the 1990s, she said, for tasks such as pattern recognition. Machine learning was also used to analyse the data that led to the discovery of the Higgs boson, which was experimentally confirmed in 2012, Georgescu said.
Accountability
Although Clancy doesn't expect big transformations, he told the conference that AI tools might chip away at the more automatable tasks scientists currently have to do, such as searching and summarising the existing literature, and free them up for other parts of the job, such as explaining their work to policymakers.

However, academics still might be reluctant to hand over tasks to AI tools, because they remain accountable for their work. "I have to put my name on this paper," said Clancy. "I have to really trust the results that come out of this machine."

Indeed, the jury is still out on AI tools claiming they can summarise existing scientific literature. The Columbia Journalism Review recently warned science journalists that the results from five literature review tools it tested were "underwhelming" and in some cases "alarming." The tools pulled completely different papers from the literature, disagreed on the scientific consensus, and returned different results when asked the same question days later.

AlphaFold
AlphaFold is arguably the biggest scientific gain from recent advances in AI, with Hassabis winning a share of the Nobel Prize in chemistry last year for its invention.

Until its release, researchers had deciphered around 200,000 protein structures, said Anna Koivuniemi, head of DeepMind's impact accelerator, at the conference. "It was a very time-consuming process," she said. But AlphaFold has managed to crack the structures of 200 million proteins, with more than three million researchers using these discoveries in their work so far, she said.

Koivuniemi acknowledged that AI was far from being able to help with all scientific problems. "I'm sure that you all have stories where an AI initiative didn't add so much," she said.

Researchers need "good data to train your models," she said, with AlphaFold reliant on the 200,000 protein structures previously deciphered by scientists. "The fact that we were able to develop AlphaFold was [due to] the work of all structural biologists over 50 plus years," she said.

Eliminate the routine?
Researchers should be wary about offloading what seems like "routine" scientific work to AI, said Sabina Leonelli, a historian and philosopher of science and technology at the Technical University of Munich.

"What is seen as a routine activity becomes, in fact, a source of discovery, and vice versa," she told the conference. Rosalind Franklin was the first person to image the structure of DNA while working on "supposedly very boring crystallography problems," Leonelli said.

In academia, there is also a "tendency to constantly underestimate the costs, the significance and the very high demands of validating and maintaining AI models," she added.

Not convinced
In India, only a small minority of scientists are using large language models, said Moumita Koley, a research analyst at the Indian Institute of Science in Bangalore, who presented a survey of researchers in the country.

"They're not really yet convinced that we are into an era where AI is driving the science," she said. One concern is cost. Although there are currently free versions of LLMs, "maybe these models will tomorrow become very expensive," she said. "The pro versions of all these models [. . .] we cannot really afford."

The one exception is that Indian researchers use LLMs extensively to help polish their writing, she said, and this could be a huge benefit to academics whose first language isn't English. But journal policies prohibiting the use of AI in writing meant this was a "lost opportunity" to level the playing field with native English speakers, she said.

However, earlier this year, Chinese researchers conducted an analysis of how LLMs had changed academic writing, and concluded the tools had caused a "significant decrease in cohesion and readability" in preprint abstracts.

Scientific spam?
Finally, the conference heard fears that LLMs will be used to generate ballooning numbers of academic papers, either by helping academics write more, or by fraudulently generating fake articles. The risk is that this could further overwhelm researchers already drowning in an exponentially growing number of articles. Some researchers have sounded the alarm that AI-generated fake papers could cause an "existential crisis" for research.

"There will be so much content that is of no value," said Koley. "Probably these will crowd the space, and the good ideas will not be visible enough."

"This week I found four papers on Google Scholar 'written' by me and my co-authors. Except we didn't write them. They were AI-generated fake citations," wrote Liudmila Zavolokina, a digital innovation professor at the University of Lausanne, on LinkedIn earlier this month.
3 Top Artificial Intelligence (AI) Stocks to Buy Right Now
2025-08-21 08:08:00
Key points:
- AMD could challenge Nvidia's dominance of the AI market.
- ASML's stock is undervalued due to recent challenges but has long-term potential.
- Despite trading near new highs, Alphabet still offers strong value, driven by AI growth in Google Cloud and other segments.
How AI Is Squeezing the Housing Market
2025-08-21 08:03:00
Jason Thomas, head of global research and investment strategy at Carlyle, says the depressed housing market can be blamed on artificial intelligence's (AI's) financing needs. Thomas's report is titled "Let Them Eat Compute."
Nvidia, Palantir, AMD, Meta Platforms, and Microsoft (the Stock Market's Artificial Intelligence (AI) Titans) Have a $23 Billion Warning for Wall Street | The Motley Fool
2025-08-21 07:51:00
Insider trading data from the past five years shows that executives at the leading AI stocks (Nvidia, Palantir Technologies, Advanced Micro Devices, Meta Platforms, and Microsoft) have collectively sold more than $23 billion of stock while making very few purchases. Despite these companies' strong financial results and triple-digit returns, the heavy insider selling raises concerns about their future growth prospects.
Apple's AI Supercycle Is Just Getting Started (NASDAQ:AAPL)
2025-08-21 07:36:16
Does Billionaire David Tepper Know Something Wall Street Doesn't? He Sold 100% of Appaloosa's Broadcom Stock and Is Piling Into This High-Profile Artificial Intelligence (AI) Chipmaker | The Motley Fool
2025-08-21 07:02:00
Billionaire investor David Tepper, known for his success in anticipating market trends, recently sold Appaloosa Management's entire stake in semiconductor company Broadcom and instead increased the fund's position in AI chipmaker Nvidia by 483%. The shift follows Broadcom's notable AI-driven run-up, which has left the stock richly valued. Meanwhile, Nvidia reported strong first-quarter revenue growth of 69% year over year, attributed to robust demand for its GPUs in data center AI processing. Tepper's move suggests that, despite recent price volatility, Nvidia offers the more attractive investment opportunity in the AI space.
Replacing junior staff with AI is "the dumbest idea," says AWS CEO
2025-08-21 06:06:00
Amazon Web Services CEO Matt Garman dismissed the idea of laying off junior workers because of AI capabilities as "the dumbest thing I've ever heard." In a conversation, he stressed the importance of hiring and training young talent, noting they are essential to the company's future growth and to its adoption of AI tools. Garman also criticized measuring AI's contribution by the volume of code it produces, advocating instead for teaching critical reasoning, creativity, and lifelong learning skills to adapt to rapid technological change.