Author: Navin Chaddha, managing partner at Mayfield Fund
Will generative AI live up to its hype? On this episode of At the Edge, tech visionaries Navin Chaddha, managing partner at Mayfield Fund; Kiran Prasad, CEO and cofounder of Big Basin Labs; and Naba Banerjee, McKinsey senior adviser and former director of trust and operations at Airbnb, join guest host and McKinsey Senior Partner Brian Gregg. They talk about the inevitability of an AI-supported world and ways businesses can leverage AI's astonishing capabilities while managing its risks.
The following transcript has been edited for clarity and length. For more conversations on cutting-edge technology, follow the series on your preferred podcast platform.
Brian Gregg: Between the three of you, we have the investor, the product manager, and the trust and safety thought leader. I'm going to get us started with a question that's rumbling around all of Silicon Valley and beyond. A trillion dollars later, is this so-called AI revolution real?
Naba Banerjee: When I first heard about ChatGPT, I was one of the first people to get the app and pay for it. And then I used it like crazy. We were also going to replace all our customer support agents with AI. It was going to change the world in record time. And here we are: it hasn't happened yet. I am a little frustrated. It's taking too long. But I'm hopeful about the future.
Kiran Prasad: AI has a bunch of different parts. When I was at LinkedIn and then at Nextdoor, we used AI to rank a user's feed. So AI is already having a massive business impact.
The big, new thing is gen AI. It's only been a year and a half to two years. You've got to give it some time before it starts to get adopted. People must learn how to use it, and you have to build apps on top of it.
If you think about the iPhone when it first came out in 2007, it was almost six to eight years before you really got to a point where you could say, "Oh, iPhone is great."
Now the iPhone has made Apple into a trillion-dollar company, with up to 60 to 70 percent of its revenues historically coming primarily from that one device. I feel like we're on that same path. It's inevitable. Over the next eight years, AI and AI agents will be the future.
Naba Banerjee: But the first version of iPhone did not have this much hype. Whereas with AI, there was so much hype. And now, it feels like it's taking too long.
Navin Chaddha: The first thing is that AI has been around for 60, 70 years, since before any of us were born. I look at it as an evolution. There's hype with any new technology, especially for one we're all interested in. Gen AI is only two years old from the launch of ChatGPT.
For AI-based applications, there was more hype, because ten years back, the technology couldn't do things that were advertised. Ten years later, a lot of the things that were talked about are going to become a reality. With semiconductors, which Silicon Valley was built on, per Moore's law, as predicted by Gordon Moore, the cofounder of Intel, processor speeds double every two years. I expect the impact of gen AI is going to be four times every two years.
Brian Gregg: What does the investor mind and balanced view say about how fast this is coming?
Navin Chaddha: It depends on the use case. If you look at consumers and prosumers, where you don't have bureaucracy on the other end buying this stuff, it's going to happen very quickly. If you look at the adoption of ChatGPT, it had 100 million monthly active users in just two months.
For Instagram, it took two to three years for adoption. For Facebook [Meta], about five years. With each innovation, adoption took about half the time. If you're selling to enterprise IT and there's a human buyer, that's where friction will slow things down. It will also depend upon whether you are taking human jobs in the enterprise, or you're filling jobs that humans don't want to do or can't do, or if there's a shortage of talent.
I'm very bullish on fast adoption among consumers, prosumers, developers, and microbusinesses. In an enterprise, there's often friction. There's an IT buyer. There's a chief legal officer with concerns about data privacy, so training can't be done.
A big issue is that companies often want to host their data in-house. Then when you ask them what their top two problems are, they say, "I don't have a business case" and "I don't have talent to implement it."
Brian Gregg: Give us a different view from the front line, Kiran, and maybe Naba, too. Youâre starting your company right now, Kiran. How fast is this AI adoption happening?
Kiran Prasad: For a start-up like mine, it's happening now. If you look at all of the tools we're using, everything is AI. I probably use AI 300 times a day, easily, and not just for the coding side of it. It's to build our logo, build our website, build our marketing materials, and build our customer support site.
Everything is AI first. But I still think, like with any new technology, it's going to take a long time for adoption. When the Google search engine and AltaVista first came out, there were companies you could call to do searches for you. They would do a search on Google and then give you the answer. That was because people did not know how to use things like "and," "or," and the "site:" operator. It took years before people learned how to use prompts.
So I don't think the tech is far away. Users' ability to understand how to engage with an agent and use it to accomplish things will just take time. The solutions will be there, but the adoption rate will potentially be lower.
Brian Gregg: Naba, take us back to 2020, when you were coming into Airbnb in the trust and safety role. How did AI influence what you did and how you did it?
Naba Banerjee: I'll give the moral of that story before I give the story, which is that the biggest mistake we make is thinking about AI for the sake of AI. What will never go away is what humans do really well, which is articulate the problem clearly.
When I joined as the head of trust and safety at Airbnb, it was a really difficult problem. The world had gone into lockdown. Bars and hotels had shut down. And teenagers were throwing parties in Airbnb rentals. I remember just sitting down and not having a clue as to how to even start.
What saved me was that I had a group of cross-functional experts to go to for advice, including police chiefs, our communications partners, our designers, and our developers. We knew we needed intelligence that could keep up with the trends in the world.
That's when we built the first AI model, rolled it out in America, and then the whole world. Today, party incidents are down 55 percent compared with when we started. So that's something we should never forget: It's never AI for the sake of AI, but AI to solve problems.
Brian Gregg: Let's flash-forward a little bit. Navin, if you were to look ahead by two or three years, and let's say the adoption curve is what you say it is, what do the institutions of 2028 look like? Are they half machines, half people? What's your view of that?
Navin Chaddha: We believe every human is going to have a digital companion, and we call them AI teammates. Our strong belief is that these AI teammates and humans will work together so that humans can work at their exponential potential, what I call "human squared."
What it means is that AI will have to do more than automate tasks and accelerate productivity. Essentially, we have to start thinking about how AI can augment human capabilities. How does AI help me amplify my creativity? Then it's really a teammate.
It's not assisted intelligence with a copilot that you instruct, "Go do this task." How do we get better together, so a future organization can have digital workers alongside human workers? The organization of the future will be hybrid. The CEOs and executives who embrace it will get to the other side, and people who don't will end up becoming dinosaurs. This is what happened with the internet and e-business. If you don't have a mobile app, you know where you go. So that's what will happen.
Kiran Prasad: This is a critical thing. The agent or teammate approach versus the copilot approach. I'm a believer in the agent approach. Think about it like this: If you're going to write a book, most people would start by opening Google Docs or [Microsoft] Word. You'll probably get spell-check and grammar check to help you. There's a little AI that's kind of helping you write the book. Then you send it off to an editor, the editor would give you feedback, you'd update your document, and then you'd ship your book.
You can think about the agentic world like having a ghostwriter. If you're going to tell your story, you go to the ghostwriter. The ghostwriter writes the book. You then provide editorial feedback on whether the book is good and which parts to fix. So it's this idea that you're going to have a teammate, somebody who's going to do the work, and you're going to give it a bit of direction. That's the future.
Naba Banerjee: I love that collaboration, Navin and Kiran. As an operator, every year we would go to our CFO saying, "I don't have enough money. I want more resources. I want more engineers to do the work." They would say, "But, Naba, now you have AI assistants or copilots. Why do you need people anymore?"
But if we go back and say, "AI will help our engineers be even more productive," they'll say, "That means I have to pay the engineers and pay for AI? You're going to make me spend double. How much time will it take for you to be doubly productive?" I want to imagine this beautiful world [of exponential productivity with AI], but when?
Navin Chaddha: We start from the fringes and look at jobs that can't hire talent. Look at DevOps engineers, ITOps engineers, security engineers, and chip engineers. The same thing happened with IT outsourcing or manufacturing outsourcing. Don't take the high end of the knowledge work and replace those jobs. Go to the fringes.
Second, go after things humans are not good at. Sifting through case law, preparing for litigation: can you boil it down by removing the 90 percent that is irrelevant? The 10 percent is what you give to humans.
Kiran Prasad: The other thing is that if CFOs are not using AI, they can't understand what it means. I recently tried to raise funding and needed legal advice, so I set up five different AI lawyers with different personalities. I uploaded the contract, and they diagnosed it and argued with each other about the pros and cons.
This is what I mean. We're in those early Google days where people are, like, "That doesn't even sound real." I think your CFO just doesn't understand AI yet.
Navin Chaddha: One way to solve the budget problem is for people who are on agentic architectures not to charge for the number of hours or per seat. Instead, they charge for the work they do and the outcomes they create.
It's a complete change in the business model. The same happened with perpetual licenses. First it was, "Pay me up front for five years." Then the next company wanted to be paid monthly. And then cloud compute happened, where you pay as you use, like you do for electricity.
So, get these digital workers. They're off most of the time. They answer calls. They reconcile AR [accounts receivable]. They file tax returns. Then you pay for them [the digital workers]. The tech is getting there, but the workflow isn't there, because they need enough practice to get better.
In an enterprise setting, there's one more thing I don't like. It's the amount of training that is required on closed data. With open data on the internet, it's much easier to create a scalable service. But enterprises have custom data, which is complicated. That's why you have to go to the fringes, which requires business model innovation.
Kiran Prasad: The mapping shift is like Uber. If you originally wanted a driver, you had to make enough money to hire one and pay them full time. Then Uber made drivers easily accessible. That did not mean everybody got rid of their cars.
Brian Gregg: Naba, in a world where you have half machines and half humans, what does the leadership team of tomorrow look like? How does a CEO and her or his team operate in this hybrid world?
Naba Banerjee: I think it will take away a lot of the fear associated with leadership. People who want to start their own companies, or who want to lead companies or be a senior leader, think they have to be this person of exceptional talent with very creative vision who makes the best decisions all the time.
They will be able to use AI to say, "Simulate these five scenarios for me and give me all of my risk-versus-benefit numbers. Help me understand if I'm going to get sued or not." Exactly like what you are doing, Kiran. They can ask AI to come up with creative ideas and challenge each other.
No one can be exceptional at everything, but everyone is exceptional in at least one thing. And those other areas of your personality that may have held you back, now you can push forward with AI.
We will probably see many more leaders emerge. On the flip side, it'll get harder to distinguish yourself, because suddenly it's an equalizer. Everyone has the same resources available. So that's the conundrum that, though I'm not a fortune teller, I'm very excited to see play out.
Brian Gregg: Many of today's CEOs followed a certain track, such as an MBA or a graduate degree, then a job usually in a commercial function like marketing and sales, and then worked their way up. Kiran, what does this CEO of the future look like? Is it the same pathway with a few tweaks? You're playing the role right now.
Kiran Prasad: It's the same pathway, but with more than a few tweaks. Part of what you do as you get into larger and larger leadership roles is you get really effective at understanding strategically where you want to go and then delegating tasks.
In an agentic world, you will be able to choose which tasks to delegate to an employee versus delegating to an agent. But you still need somebody who's setting the strategy. What will continue to be an even more important skill is communication.
How effectively and concisely can you convey what you're trying to accomplish to a person versus an agent? What happens as a message permeates through an organization is that it typically dissolves. In an agentic world, you're going to be able to maintain fidelity going from agent to agent as they try to accomplish things. The more effective you are, the more precise the game of telephone will be.
You have to be able to predict where the future's going and guide strategy more effectively. So the whole "I'm going to just A/B test it" baloney is going to lessen.
Brian Gregg: Navin, do you agree with this version of the CEO?
Navin Chaddha: I look at it as: the CEO will always have to be raising money, because without money, you can't do anything. Second, they're in the business of mobilizing resources. This time, it won't just be human talent; it'll also be AI teammates. Then you have to make decisions. But smart CEOs, like athletes, surround themselves with coaches. And this time around, I'm going to have a lot of digital coaches who can improve my "serve." CEOs have a tough time giving feedback, so I'll have a candor coach. They might be afraid of speaking. The best ones demonstrate vulnerability. They have to maintain a persona.
But with a digital teammate, it's all confidential. My only input to AI-native CEOs is to get somebody with a fresh mind as their chief of staff. Now the question is: Will it be a digital teammate or a human teammate? Maybe it's a combination.
Kiran Prasad: My view is that itâs a digital teammate. If you look right now at what is the biggest adoption for AI beyond ChatGPT, two other ones are Character.AI and Replika. They are effectively psychiatrists.
Naba Banerjee: AI therapists.
Kiran Prasad: Weirdly enough, people keep saying, "I don't know if I trust AI." But the number-one use case that seems to be working is the one where they have to trust the AI, which is insane!
Brian Gregg: If weâre talking about 2028, when half the jobs are done by these digital teammates, what is the downside effect on humanity, on the employee base, and on society?
Navin Chaddha: I think humans are smart. I look at AI as yet another horse. It's yet another tool. Humans will figure out how to ride it the way they did PCs and mobile. We'll just get better. And when this productivity amplification and implementation comes, more revenues, more profitability, and more jobs get created. So essentially, when GDP growth happens, it just turns out that the population can't keep up, so some of the workers will be AI teammates. So I'm very bullish.
Every time a tech wave happens, humans win. Tech is the great equalizer. When offshoring happened, people thought India would take away all US jobs, but the US got richer and richer with globalization.
Naba Banerjee: I feel like I have to balance that view. The trust and safety world exposed me to another part of humanity that at times I wish I hadn't seen. I know different marketplaces, dating sites, are trying to create an environment where humans can meet each other.
After COVID, so many people are meeting for the first time digitally. And it is always scary for humans. Stranger danger is still considered to be one of the top fears that prospective hosts on Airbnb have. About 60 percent of prospective guests say they're scared of being scammed.
It's not true. Very few incidents actually happen, but this is a fear that humans fundamentally have. With synthetically generated humans through AI, it's so easy to re-create voices of people, digital twins, and fake IDs.
We are seeing that the way we have typically kept communities safe, through trust and safety and risk teams and the work they have done, is failing; all those defenses are failing. We are not ready for the world that is coming. There's also a lot of bias in the data that is being used to train these [synthetically generated] humans.
So, yes, it feels like red tape when the privacy team and the antidiscrimination team say, "You cannot launch this model. We have to watch the data." I used to push against these teams, but I realized that it is happening.
We should be solving for this. We need to go in with eyes wide open, that the same AI is in the hands of good and bad actors. We have to constantly think about the two sides of the coin.