Comment As AI pilots within enterprises increasingly flame out, OpenAI is making a pivot to consumers, suggesting AI is more likely to sneak into the enterprise through users than walk in through the front door. But IT departments will still have to deal with it once it arrives.
New technology usually takes over an enterprise in one of two ways. Most traditional big B2B products come in through the front door. The CIO plays golf and eats steak dinners with salespeople, places a big-ticket purchase order, and rolls the new kit out to whoever the business decides needs it.
There are variations on this process, like the "land-and-expand" strategy, where a team or department leader uses their budget to make a small purchase of tech they think will help their employees do their jobs, and the product metastasizes throughout the org from there. Think about how Adobe conquered graphics departments, or how Salesforce and other CRMs attracted sales leaders who wanted to keep their salespeople from going rogue (and make sure their contact lists stuck with the org if they left). Lore has it that even Microsoft's SQL Server started off primarily as a departmental solution.
But the point is: Some tech comes in because business leaders buy it and force everybody to use it.
Then, there's the back-door approach. In this case, end-users take something they've found useful in their personal lives and demand that their employers let them use it for work too. Think smartphones pre-iOS 4, Dropbox, Slack, and the like. Once upon a time, this was called "the consumerization of IT in the enterprise" and there was even an entire publication and conference series devoted to it. (RIP CITEworld.)
Over the last month or so, it appears that OpenAI is betting on the second path.
First, the AI valuation leader hired Fidji Simo as its new CEO of applications. Simo ran the Facebook app at the corporation now known as Meta before leading Instacart. She's a competent consumer executive, but far from the kind of ex-Oracle sub-boss or lifelong IBMer you hire when you want to penetrate the enterprise, as Google did with Thomas Kurian in 2018.
Then, this week, the ChatGPT maker dropped $6.5 billion - not a paltry sum for a firm that's reportedly losing billions a year and dependent on investors for operations - on a company run by former Apple design guru Jony Ive and brought him aboard. The Wall Street Journal reported Ive and Altman - perhaps along with Airbnb CEO Brian Chesky, as a recent Wired profile hinted - are building some sort of AI-enabled gizmo that they believe could ship 100 million units and add $1 trillion to OpenAI's market value.
We frankly hope it's a delivery device for whatever they're smoking, but it's more likely something like the personal digital assistant featured in the 2014 movie Her and embodied in something that will no doubt look at least as beautiful and classic as the original iPod.
Sarcasm aside, this move makes a lot of sense. Who cares if consumers use AI for helping a friend plan a road trip, informal therapy sessions, or astrological readings? The stakes are low, and the providers of wrong information derived from the internet are generally not held legally liable.
Within enterprises, successful AI implementations are mostly coming from employees trying to save themselves time on mundane tasks. It's widely used in law firms to summarize legal docs (although legal eagles have to verify its output carefully, since it can hallucinate details of citations). We've heard about similar use cases in finance - it's not perfect for summarizing or pulling data from thousands of pages of documents, but it's a lot faster and a lot better than a team of 22-year-olds, which is how those kinds of tasks had been done.
Programmers are also increasingly using AI for very particular tasks, like writing repetitive code.
But so far, the top-down AI mandates don't seem to be working very well. A recent IBM survey of 2,000 CEOs found that only one-quarter of their companies' AI pilots drove the expected return on investment. Johnson & Johnson reportedly shut down most of its pilots recently and is focusing on the select few proven to drive value. Programmers at large corporations are increasingly reporting how commands to use AI are backfiring in hilarious fashion.
As for all those companies saying that AI is allowing them to lay off unneeded workers, some are having second thoughts and hiring those workers back. (A cynic might suggest the trope is just cover for having to cut costs after execs over-hired during the COVID-19-driven tech boomlet.)
Now, try to imagine entrusting AI with tasks that involve real sums of money or life-and-death scenarios, say in banking or the medical field. In those cases, an 80 percent or even a 99 percent success rate is not sufficient. If an assistant at a big bank blows a few numbers in a spreadsheet to the tune of $100 million, real heads will roll. If an AI does it, there's nobody to blame except the bozo at the head of the company who mandated that everybody use AI in their work without laying out strict guidelines for checking its results.
The point is that AI may indeed help people do their jobs. But exactly how is best left to workers, who will bring it in with them. Then it'll be up to IT departments to figure out all the ways it's being abused, and vendors to figure out all the necessary controls to help IT mitigate those problems.
That's how we see the rise of generative AI in business, walked in by staff who've allowed it to seep into their lives or actively sought it out. No golf games or steak dinners with the CIO required. ®