In October, Box unveiled a new pricing approach for its generative AI features. Instead of charging a flat rate, the company designed a consumption-based model.
Each user gets 20 credits per month, with each AI task costing a single credit. Once those are used up, people can dip into a company-wide pool of 2,000 additional credits. If the customer exhausts that pool, it's time to have a conversation with a salesperson about buying additional credits.
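To make the mechanics concrete, here is a minimal sketch of how that kind of credit-based metering might work, assuming the figures described above (20 credits per user per month, a shared pool of 2,000, one credit per AI task). The class and method names are illustrative, not Box's actual implementation.

```python
# Illustrative sketch of a credit-based consumption model like the one
# described above. All names are hypothetical, not Box's code.

PER_USER_MONTHLY_CREDITS = 20   # each user's monthly allotment
COMPANY_POOL_CREDITS = 2_000    # shared overflow pool per company
CREDITS_PER_AI_TASK = 1         # every AI task costs one credit


class CreditMeter:
    def __init__(self):
        self.user_credits = {}  # user_id -> credits left this month
        self.company_pool = COMPANY_POOL_CREDITS

    def charge(self, user_id: str) -> str:
        """Deduct one credit for an AI task and report which bucket paid."""
        remaining = self.user_credits.setdefault(user_id, PER_USER_MONTHLY_CREDITS)
        if remaining >= CREDITS_PER_AI_TASK:
            self.user_credits[user_id] = remaining - CREDITS_PER_AI_TASK
            return "charged to user allotment"
        if self.company_pool >= CREDITS_PER_AI_TASK:
            self.company_pool -= CREDITS_PER_AI_TASK
            return "charged to company pool"
        # Both buckets exhausted: time to talk to sales about more credits.
        return "blocked: contact sales for additional credits"


meter = CreditMeter()
for _ in range(21):              # the 21st task spills into the company pool
    outcome = meter.charge("alice")
print(outcome)                   # -> "charged to company pool"
```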
Box CEO Aaron Levie explained that this approach lets the company charge based on usage, with the understanding that some users will take advantage of the AI features more than others, while also accounting for the cost of the OpenAI API, which Box uses for its underlying large language model.
Meanwhile, Microsoft has chosen a more traditional pricing model, announcing in November that it would charge $30 per user per month to use its Copilot features, over and above the cost of a normal monthly Office 365 subscription, which varies by customer.
It became clear throughout last year that enterprise software companies would be building generative AI features. At a panel on generative AI's impact on SaaS companies at Web Summit in November, Christine Spang, co-founder and CTO of communications API startup Nylas, and Manny Medina, CEO of sales enablement platform Outreach, spoke about the challenges SaaS companies face as they implement these features.
Spang says, for starters, that in spite of the hype, generative AI is clearly a big leap forward, and software companies need to look for ways to incorporate it into their products. “I’m not going to say it’s like 10 out of 10 where the hype meets the [current] reality, but I do think there is real value there and what’s really going to make the difference is how people take the technology and connect it to other systems, other apps and sort of drive real value in different use cases with it,” she said.
It’s also about finding a balance between providing the kind of features customers are suddenly demanding and pricing them in a way that delivers real customer value while still allowing the company to make money. “In reality, those of us who are bundling [generative AI features] need to repeatedly check back with our [large language model] provider, and that’s going to get expensive quickly. So until we create experiences that are 10x differentiated, and for which somebody wants to pay for it, it’s going to be challenging,” Medina said.
It’s worth noting that model makers like OpenAI are already announcing price cuts as they find ways to run models more efficiently, or cutting prices on older products as new ones are announced. For example, in June the company announced new features that increase processing power, providing more bang for the buck, while also lowering the cost of prior versions for developers who don’t require all the latest bells and whistles.
Spang says her company is already using a consumption model based on the number of connected email or calendar applications, and she expects to follow a similar approach as it adds generative AI features.
“We already have the case where some people send a lot more messages, or they receive a lot more messages and I think it’s important to map [to a similar pricing model] that people understand, and then hopefully we can find a price point that kind of works through the median,” she said.
But Medina says it’s more difficult for an application to use a consumption model than it is for an API provider like Nylas. “I just don’t know that that’s an acceptable model in applications. When you’re a provider of Legos [like Nylas], it’s a different story, but for application providers, [it’s more difficult],” he said.
But it’s also not clear that companies will be willing to pay a flat rate like Microsoft’s $30 per user per month for Copilot unless they can see real value from that additional cost. “The jury’s still out until somebody either lowers the cost and it makes it very accessible for the rest of us, or we figure out a way to monetize it,” Medina said.
Another big unknown is the compliance cost that could come with using this technology, a question that remains open for companies and their customers alike. “So if you start embedding some of these applications and the U.S. [or other government] passes a law where you have to disclose the list of ingredients of your AI, you’re not getting that from OpenAI, so that’s going to be difficult,” he said.
CIOs, who control company technology budgets, are taking a close look at this technology, but they are still trying to figure out whether the extra cost being passed on to them will pay for itself in higher employee productivity.
Sharon Mandell, CIO at Juniper Networks, says she is looking closely at the ROI on these features. “In 2024, we’re going to be testing the GenAI hype, because if those tools can produce the types of benefits that they say, then the ROI on those is high and may help us eliminate other things,” she said. So she and other CIOs are running pilots, moving cautiously and trying to find ways to measure whether there is truly a productivity increase to justify the increased cost.
Regardless, companies will continue to experiment with pricing models, while their customers are conducting pilots and proofs of concept. It seems like they both have something to gain here, but until we start to see more of these tools in production, it’s hard to know the real benefits to everyone involved.