For years, companies have been adopting financial operations (FinOps) practices to manage and optimise their cloud computing expenses. However, the growing adoption of generative artificial intelligence (GenAI) is rewriting the rules of technology budgeting. The sheer scale of AI workloads, combined with unpredictable usage patterns and the high cost of specialised hardware, is forcing organisations to rethink how they allocate and track IT spending.
According to the FinOps Foundation’s 2026 State of FinOps report, an overwhelming 98% of global FinOps practitioners are now tasked with managing AI spend, an increase from just 31% in 2024. Furthermore, AI cost management has become the single most sought-after skill set for technology finance teams this year. This shift reflects a broader transformation—AI is no longer an experimental expense but a core operational line item that demands rigorous oversight.
“It’s still fairly early days with AI adoption. Most organisations are in the proof-of-concept phase, figuring things out,” said Matt Pinter, Asia-Pacific field chief technology officer at Apptio, an IBM company specialising in software for technology cost management. The transition from experimentation to production introduces complexities that traditional cloud cost management tools were not designed to handle.
AI pricing can vary based on the types of services and deployment models. For off-the-shelf tools such as ChatGPT or Google’s Gemini, the primary billing metric is the token, a fundamental unit of data processed by the AI. “That seems to be what the industry has standardised on. Tokens are the main billing mechanism,” Pinter said. As a result, optimising queries to reduce token usage is becoming one of the most effective ways to control AI costs.
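Token-based billing makes per-query costs straightforward to estimate, which is why prompt optimisation pays off. The sketch below illustrates the arithmetic; the per-token prices are hypothetical placeholders, not real vendor rates.

```python
# Rough per-call cost estimator for token-billed GenAI APIs.
# The rates below are ILLUSTRATIVE assumptions, not vendor pricing.

PRICE_PER_1K_INPUT = 0.003   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed $ per 1,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single API call."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# Trimming a verbose prompt cuts input tokens, and therefore cost,
# without changing the length of the answer.
verbose = estimate_cost(input_tokens=2000, output_tokens=500)
concise = estimate_cost(input_tokens=400, output_tokens=500)
print(f"verbose prompt: ${verbose:.4f}, concise prompt: ${concise:.4f}")
```

At scale, the difference compounds: shaving 1,600 input tokens per call across millions of calls a month is exactly the kind of optimisation Pinter describes.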
Against this backdrop, companies are beginning to treat tokens like a corporate currency. Some organisations are exploring tokenomics, giving developers a monthly allowance of tokens for coding and code reviews. “You give somebody a budget of tokens and say, ‘Here’s what you have to do your job.’ They then figure out how to get their work done within the allocated budget,” Pinter said. “You can see that mindset shift starting to happen, where engineers are saying, ‘I want to make sure I’m using it responsibly.’”
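The allowance model Pinter describes can be reduced to a simple ledger per developer. This is a minimal sketch of the idea, with an invented `TokenBudget` class and an arbitrary monthly allowance; real implementations would hook into the provider's usage API.

```python
# Minimal sketch of a per-developer monthly token allowance ("tokenomics").
# Class name and allowance figure are hypothetical illustrations.

class TokenBudget:
    def __init__(self, monthly_allowance: int):
        self.allowance = monthly_allowance
        self.used = 0

    def spend(self, tokens: int) -> bool:
        """Record usage; refuse a request that would exceed the budget."""
        if self.used + tokens > self.allowance:
            return False
        self.used += tokens
        return True

    @property
    def remaining(self) -> int:
        return self.allowance - self.used

budget = TokenBudget(monthly_allowance=1_000_000)
budget.spend(250_000)    # e.g. a week of AI-assisted code reviews
print(budget.remaining)  # → 750000
```

The refusal path is what drives the mindset shift: once requests can bounce, engineers start asking whether a cheaper prompt would do.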
The focus on developers reflects the growing trend of shifting left in FinOps, where costs are optimised earlier in the software development lifecycle, before a workload reaches production, through mechanisms such as committed-usage discounts and right-sized instances. According to the FinOps Foundation, FinOps teams have also started to engage with platform engineering and enterprise architecture teams, building pricing calculators and offering pre-deployment guidance.
The hidden costs of homegrown AI
While off-the-shelf AI services offer convenience, building homegrown AI can be significantly more expensive. It requires securing highly coveted graphics processing units (GPUs) in the datacentre or the cloud, and addressing what Pinter calls “the hidden cost of AI.” “It gets a lot more complex because now you’re talking about the infrastructure to support homegrown AI solutions,” he said. “If they are in the datacentre, then you need to consider the electricity costs to power these systems.”
GPUs consume far more energy than traditional CPUs, and the cooling requirements for high-density AI racks add another layer of expense. Organisations that choose to build their own AI models must also factor in the cost of training data, storage, networking, and the expertise required to manage these systems. Many find that the total cost of ownership (TCO) of homegrown AI far exceeds the pay-per-use pricing of SaaS models, especially when usage is sporadic.
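The trade-off between fixed homegrown costs and pay-per-use pricing is essentially a break-even calculation on utilisation. The sketch below illustrates the shape of that comparison; every figure is an assumed placeholder, not a vendor quote.

```python
# Break-even sketch: homegrown GPU serving vs pay-per-use pricing.
# All figures are ILLUSTRATIVE assumptions, not real quotes.

GPU_MONTHLY_COST = 4000.0        # assumed amortised hardware + power + cooling, per GPU
SAAS_COST_PER_1M_TOKENS = 10.0   # assumed pay-per-use rate

def monthly_cost_saas(millions_of_tokens: float) -> float:
    """Variable cost: scales with actual usage."""
    return millions_of_tokens * SAAS_COST_PER_1M_TOKENS

def monthly_cost_homegrown(gpus: int) -> float:
    """Fixed cost: paid whether the hardware is busy or idle."""
    return gpus * GPU_MONTHLY_COST

# Sporadic usage (50M tokens/month): pay-per-use wins.
print(monthly_cost_saas(50), "vs", monthly_cost_homegrown(1))
# Heavy, steady usage (1,000M tokens/month): fixed infrastructure wins.
print(monthly_cost_saas(1000), "vs", monthly_cost_homegrown(1))
```

The fixed-cost line only beats the usage line once volume is high and steady, which is why sporadic workloads tend to favour pay-per-use, as the article notes.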
Increasingly, the environmental footprint of AI is tying FinOps to GreenOps, particularly in the Asia-Pacific region, where new climate laws mandate that companies measure and reduce their carbon emissions. By optimising cloud usage, organisations can simultaneously lower their bills and carbon footprints. Beyond public cloud services, nearly half of FinOps teams are actively managing physical datacentre costs to capture the full footprint of AI computing demands, according to the FinOps Foundation report. These teams are also working with environmental, social and governance (ESG) teams on sustainability initiatives.
The search for ROI
Despite significant investments in AI, many companies struggle to articulate its return on investment (ROI). “Many customers are missing that right now,” Pinter said. “They’ve been told, ‘Go do AI’, but they don’t have a clear end state in mind.” With just 7.5% of enterprises baking FinOps into AI projects, according to IDC, practitioners are encouraging more businesses to calculate the exact unit economics of AI.
For example, a bank that processes home loans could establish a baseline cost, say, $8 per loan for 1,000 loans a month, and measure the financial impact of AI implementation. “Ideally with AI, you should see the number of mortgages increase and the processing time decrease,” Pinter said. “You could say, ‘We’ve tripled that and lowered our unit cost by 10%.’” This kind of granular measurement requires a robust framework that connects technical consumption to business outcomes.
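Working through the mortgage example's own numbers shows how the unit-economics calculation runs. This is just the article's figures made explicit: a $8-per-loan baseline at 1,000 loans a month, then tripled volume and a 10% lower unit cost after AI.

```python
# Worked unit-economics example using the figures quoted in the article.

baseline_loans = 1000
baseline_unit_cost = 8.00                     # $ per loan processed
baseline_total = baseline_loans * baseline_unit_cost

# Post-AI outcome described in the article: triple the volume,
# unit cost down 10%.
ai_loans = baseline_loans * 3                 # 3000 loans/month
ai_unit_cost = baseline_unit_cost * 0.90      # $7.20 per loan
ai_total = ai_loans * ai_unit_cost            # ≈ $21,600/month

print(f"unit cost ${baseline_unit_cost:.2f} -> ${ai_unit_cost:.2f}, "
      f"volume {baseline_loans} -> {ai_loans}")
```

Total spend rises (more loans are processed), but each loan is cheaper, which is the distinction a unit-economics framework surfaces and a raw cloud bill hides.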
This is where the Technology Business Management (TBM) model can help. Pinter noted that the latest version of the model provides a way for enterprises to work out the cost structure of different AI services and deployment models, bringing together traditional IT financial management (ITFM) and FinOps. “It’s about being able to look at multiple different disciplines and provide that single pane of glass, where you can get into chargeback and look at SaaS and on-premise applications,” he said. “It’s bringing all that together and providing a vehicle to effectively charge back all the costs that the IT organisation is incurring.”
Ironically, the solution to managing AI costs involves more AI. Pinter expects AI-driven anomaly detection to become essential for preventing bill shocks from misconfigured cloud instances. Natural language chatbots could also replace business intelligence dashboards, allowing executives to query data for instant insights. But technology alone isn’t enough to drive cost-saving FinOps practices. The single biggest barrier to adopting FinOps, whether in mature cloud markets such as Australia or technology hubs like Taiwan and Singapore, is human resistance.
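Even before AI-driven tooling arrives, the core of spend anomaly detection can be sketched with a simple robust statistic. The example below flags outlier days in a daily-spend series using the median absolute deviation; a production system would use far richer models, and the spend figures are invented.

```python
# Toy spend-anomaly detector: flag days whose robust z-score
# (based on the median absolute deviation) exceeds a threshold.
# A real system would use richer models; data here is invented.
from statistics import median

def flag_anomalies(daily_spend, threshold=3.5):
    """Return indices of days whose spend is an outlier."""
    med = median(daily_spend)
    mad = median(abs(x - med) for x in daily_spend)
    if mad == 0:
        return []  # no spread at all: nothing to flag
    # 0.6745 rescales MAD to be comparable with a standard deviation.
    return [i for i, x in enumerate(daily_spend)
            if 0.6745 * abs(x - med) / mad > threshold]

spend = [120, 118, 125, 122, 119, 121, 960, 123]  # day 6: misconfigured instance
print(flag_anomalies(spend))  # → [6]
```

The median-based statistic matters here: a single runaway instance inflates an ordinary standard deviation so much that the spike can mask itself.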
“It’s the culture shift to get everybody bought into it,” he said. “You might not have executives fully on board, and engineers might be apprehensive. Getting organisational buy-in, where everyone says, ‘Yes, this is what we’re going to do’, is the biggest challenge.” To overcome this, some companies are implementing gamification strategies that reward cost-conscious behaviour, while others are creating cross-functional FinOps councils that include representatives from finance, engineering, and executive leadership.
As AI continues to permeate every aspect of business operations, the role of FinOps will only grow in importance. The ability to manage AI costs effectively will not only protect the bottom line but also enable faster innovation. Organisations that invest in FinOps skills today will be better positioned to scale their AI initiatives sustainably tomorrow. The key is to start small, measure relentlessly, and align cost management with business value from the outset.
Source: ComputerWeekly.com News