
Last week, Grammarly, the popular writing-assistance tool, announced the arrival of nine new “agents” in its software. “Whether you need expert feedback, grade predictions, or help anticipating reader reactions, these next-level AI agents offer help at every stage of your writing so you start easier and finish stronger,” the company said. Agents sit somewhere between a buzzword and an AI term of art, and definitions vary. OpenAI, for example, calls agentic AI “systems that can pursue complex goals with limited direct supervision,” which leaves a lot of room for interpretation. Google defines them as “software systems that use AI to pursue goals and complete tasks on behalf of users.”
A few of Grammarly’s new agents might fit such a definition, if you squint a little. The “AI Grader” tool claims to be able to estimate an assignment’s grade by “looking up your instructor,” “reviewing public teaching info,” and “identifying key grading priorities.” Setting aside whether such a feature is ethical, useful, or even functional — Jane Rosenzweig, director of the Harvard College Writing Center, found nothing redeeming in testing — such a feature might fit a bare description of an agent. You give it a task and it does a couple of unpredictable things in its attempt to fulfill it. Same for the “Citation Finder,” a tool that will, after your work is done, backfill it with “legit sources to back your claims.” (Now is as good a time as any to ask: What the hell are we even doing with this stuff?)
A Grammarly feature that claims to offer “feedback inspired by real experts” seems to drift further from “agency.” Then there are the rest of the new agents: simulated “reader reactions,” a “humanizer,” a “paraphraser.” These sound an awful lot like features already contained in Grammarly, as well as things you could get by pasting your text into any major chatbot and asking for it. (Are the new included summarization features in iOS, Android, Windows, Office, and Google Docs agentic too?) Finally, Grammarly suggests, there’s a “proofreader” agent, which can “refine grammar, clarity, structure, and more as you write.” It’s hard to find the agent here or even something new. Proofreading has been the entire thing with Grammarly for years, well before the company was talking about generative AI at all.
Lots of companies are racing to capitalize on the AI boom, or at least scrambling not to miss it. A couple of years ago, this might have meant bolting a chatbot onto the side of existing software interfaces, whether users wanted it or not. More recently, it means incorporating AI features into existing workflows and testing out new ones (if you’ve updated your phone in the past year, work with productivity software, or even just use Gmail, you probably know what I mean). Now, evidently, it means rolling out agents — or rebranding anything that resembles an AI tool as “agentic.”
When the big AI firms talk about agents, they’re often making promises about the future. When Sam Altman says AI agents will “join the workforce,” he’s evoking an image of software that comprehensively fills the role of a human and can perform a wide range of complicated tasks with minimal supervision and the assistance of tools. In the present, AI tools that use LLMs to carry out multistep tasks, like OpenAI’s and Google’s “deep research” tools, are often described as agentic. If you ask them a question, they formulate a plan to answer it and then call on search engines to help compile a report (the way tools like this tend to show their work, articulating in great detail how many steps there are to Googling something, is itself an interesting form of agentic marketing).
Other uses of the term are credible-ish, if a little fuzzy: Automated customer-support tools that can cancel or modify orders, for example, have been around for a while, and now that they’re built on LLMs, they’re getting a rebrand. By industry standards, the clearest examples of AI agents — again, a contested and arguably misleading term to start with — are for developing software. Ask the “AI coding agent” Devin to add a software feature or even build a simple piece of software from scratch, and it will attempt to do so, calling on multiple tools and external resources in the process, sometimes over a long period of time. It’s all interesting stuff with a lot of possible futures, and it’s become one of the AI industry’s strongest business pitches: after mainstream chatbots, agentic coding tools and coding assistance are arguably the clearest monetizable use cases for LLMs today. This helps explain, in part, why so many companies are fudging.
A survey of press releases and announcements over the past month contains plenty of examples of possible agent inflation: The NFL says “staff will soon be able to use AI agents to support and improve various workstreams including use cases like player scouting and salary cap management”; a bundle of new QuickBooks features that “categorize transactions,” draft emails, and “collect employee time and attendance data” are described as agents; Walmart’s CTO says that the company “is all in on agents,” citing examples like Sparky, an “AI digital assistant that can answer product questions and make suggestions” that has been in the company’s app for a while. Google has extended the term to describe a new feature that enables users to “ask about getting a dinner reservation with friends that includes multiple constraints and preferences,” which is terminologically defensible enough — it does so in a novel way, using AI tools to search and compile results on the fly — but which also describes the ambitions and features of restaurant-booking apps for the past two decades.
Further from the heart of the AI industry, all sorts of companies are drafting behind big tech’s rebranding. Robot drive-throughs? Those are AI agents now. A June report from Gartner estimated that over 40 percent of announced agentic AI projects will be canceled by the end of 2027 and claimed to have found widespread “agent washing,” which it describes as the “rebranding of existing products, such as AI assistants, robotic process automation (RPA) and chatbots.” Of the thousands of agentic AI vendors found by the firm, it estimated that around 130 of them are “real.”
There’s no reason to expect that tech marketers will start adhering to strict definitions of buzzy terms, but there are signs that this one might soon expand to consume a lot of what we think of as software — if not technically, then at least rhetorically. After all, what sort of dreary company wants to get caught releasing mere apps or features or useful tools in 2025? Software is for losers. We’re building little guys in the computer now.