In our recent monthly webinar, a dynamic panel discussion unfolded as experienced Nicus customers shared real-world insights from their experiences implementing showback and chargeback, along with the lessons and best practices gleaned along the way.
The panel cemented key principles applicable to every organization and offered useful perspectives from both private and public sector environments. Nicus Marketing Director Amy Robertson led the discussion as moderator, and Nicus Chief Evangelist Rob Mischianti joined to offer additional insight.
The panel’s participants were as follows:
- Alonso Martinez – Senior IT Expense Analyst, American National Insurance
- Velincia Jones – Technology Business Operations, VISA
- Kendra Coates – Finance Director, Office of Information Technology, State of Maine
- Jennifer Garcia – IT Financial Advisor, American Family Insurance
In case you missed the live webinar, we’ve distilled the highlights down for you in this post…
Question #1: Tell us a little about your background. What was the biggest problem you were trying to solve by rolling out showback or chargeback?
I serve as the Senior CTS Data Analyst at American National. I’ve been with the company for about five years – and in my current role for about three of those years. For the most part, my job is to work with our IT financial management tool to perform the necessary reporting and analysis to give more visibility into our spend, as well as how that spend is impacting our different marketing areas and departments.
Like most organizations, a lot of our issues were related to overall visibility into the spend of our IT department. People were referring to IT spend as a “big black box.” Nobody could really tell how much was going to each area, or each department, or for what reason.
I’ve been at VISA for about two years. And my particular group is actually very unique in the sense that we’re hilariously called the ‘shadow finance’ or ‘shadow sourcing’ team.
We support the spend lifecycle of any infrastructure item. So, from a hardware and software perspective, we do a lot of the cost modeling for the teams. We do a lot of the data modeling to enable purchases and things like that. That's what makes our group a unique function. And as we've benchmarked against other industries, we've found that we're kind of a combination of other functional teams.
I’m the Finance Director for the State of Maine Office of Information Technology. The way our office is set up makes us a centralized service, and we have to fund ourselves through money we collect from the agencies we serve.
So we do budgeting, rate setting, billing, and then collection to fund ourselves through a two-year budget cycle. And we work to set our rates 18 months in advance.
It can be a little challenging to make sure we are adequately funded, and that’s what we’ve been trying to work on. But we have been on a chargeback model the whole time I’ve been here, which has been for the last three years.
Another area where we spend a lot of time and effort is in satisfying the various reporting requirements for government funding… trying to get as detailed and granular as possible.
I joined American Family Insurance in early 2016. And I was brought on specifically to guide the organization’s IT financial management effort. Prior to that, I was doing revenue analysis in the public relations industry.
When I arrived at AmFam, we didn’t have any ITFM processes. I was starting totally from scratch, so my role has been to build out our practice from the ground up, including finding a tool to help us on our journey.
The main problem we were trying to solve was our lack of understanding of who or what was driving our costs. Prior to our ITFM effort, we had three dimensions of cost analysis: our internal department IDs, our general ledger account IDs, and our project IDs. And since we didn’t understand our costs internally, nobody else could understand them either.
We also wanted to overcome the issue of excessive manual effort. Something like a cost-benefit analysis took far too long, and the consistency of the numbers was all over the board.
Question #2: How has an accurate cost of IT service and bill of IT improved things for you and the business units you support?
The real purpose of implementing our ITFM tool was to give a new transparency to business unit stakeholders – to be able to show them how much they’re consuming of our different services and why that consumption costs what it does… all the various drivers behind it.
Our ability to do that has helped with two things: providing a bit of insight into all the different applications and the large application portfolio we have to see if we can get rid of some redundancy, but also giving visibility into the levels of service we deliver to different areas of the business – really showing the value they get from us, so they don’t just see us as a big, ambiguous cost center.
It has definitely had an impact. We use a showback process internally and that’s been extremely helpful overall, especially as we look at our disaster recovery locations and expansion of data centers globally.
Even just two days ago, our technology leadership team had some requests and we were able to address them much faster than before we had this system in place.
It really helps build out the full story of our utilization across the organization. We have leadership taking a deeper look at the beginning of the year, when we do budget planning, and at the mid-year forecast cycle review against our utilization, and that helps drive better business behavior.
So as we get down into the renewal periods, we take a look back at these bill of IT monthlies, and then we do – not exactly a variance report – kind of like a baseline against how our utilization has grown or changed. And that lends itself to either better negotiation strategies or working with our sourcing partners to figure out what our alternatives are – even for something as simple as do we need a full prod environment, or can it be moved to non-prod? Even looking at use cases for certain types of technology.
We serve 26 major agencies out of our office, and it’s important that we provide them with detailed, traceable spend information to prevent them from losing appropriations and funding. Over time, we’ve learned to provide that kind of information in better ways. We’ve been on a chargeback model for over three years, and with each year we’ve refined things more and more.
So, the reporting and tracking requirements for agencies to stay in compliance with their funding sources is one of our biggest priorities – and something we’ve been able to continually improve using the chargeback model.
The chargeback model also allows us to help agencies become more responsible for their spend, which I think is important. It builds a sort of shared accountability between us and the agencies.
Previously, we had no way to communicate the value of our spend to our business partners or corporate finance. Our costs were increasing, but we couldn’t communicate why they were increasing or what we planned to do about it.
But through our new ITFM processes, we’re able to both report and forecast by any given service, talk about application total cost of ownership, and understand how our services are being consumed by our business areas and our operating companies. Now we have consistent processes and on-demand, agreed-upon numbers – which is key – that we’re able to utilize for our analysis and strategic discussions.
Question #3: How is your system structured in terms of rates, allocations, and so forth? Do you use showback or chargeback?
Well, with our bill of IT, the only service that we use a fixed rate for currently is our programming labor service. And that’s mainly because that’s the service we feel the most comfortable with in terms of the utilization information that we’re getting – at least at the start of our ITFM journey when we implemented Nicus.
Another big driver behind that decision was the fact that our finance reporting department has to make sure we have all our expenses, each month, at the same reporting level as our profit. So, we had to get everything each month down to the profit level, and in order to do that we have to allocate 100% of our expenses every month.
As we continue to grow and mature our system, we're looking to move toward fixed rates for the entire service catalog. That's mainly so we have better data to defend our services, the rates we're charging for those services, and the utilization behind them. And now, we've finally gotten to a point where we feel ready to start doing that.
We use a combination of fixed rates and allocations on a monthly basis, and that really goes back, more specifically, to our capacity and utilization. So when we have these conversations with leadership – and partner heavily with our finance teams – it really drives a more informed level of decision-making.
We’re still improving on the education side, teaching others to read these reports, especially as we’ve had leadership changes.
One of the questions that’s come up more recently is: How do we put this report information into a rolling schedule, or even in a dashboard, to make it easily accessible? So, we’re trying to work on a method to automate those pieces wherever possible.
We’re on a fully rate-based, fixed chargeback system. What we do is, 18 months in advance, we take the best information available from our budget cycle and use it to set our rates.
But of course, everybody knows things change over time, so we do have to perform the over-under recovery based on many different situations.
Our system is somewhat similar to what Velincia described at VISA. We use a chargeback methodology based on actual expenditures; and we rely on our cost model to set rates, measure usage, and ultimately bill customers.
Question #4: Rob, is there a preferred method for showback/chargeback from a best practice standpoint? What are the pros and cons we need to be aware of?
I think I want to answer that two ways. First, and this isn’t even about method, but just pointing out the fundamental difference between showback and chargeback, because each one really drives different outcomes.
With a showback, you’re more focused on demonstrating the value of IT and driving some basic decision-making. But when you move to something like a full chargeback, that adds another layer for driving accountability and influencing behavior.
So that’s the first thing I want to point out, just understanding the difference between showback and chargeback – thinking about your approach and which one makes sense for your organization.
The second part I want to address, dealing more with the methodology, is just the basic math. As in, what are the pros and cons of either doing a full allocation, aka a percent of use where you spread the cost of service across all consumers based on consumption, versus doing a full rate-based model?
And you’ve heard everyone talking about this. Some people are full rate, like Kendra. Other organizations are somewhere in-between. Each method can be challenging or enabling in its own way, but there are definitely benefits and disadvantages to both approaches.
That said, I think that cost allocation piece is the real crux of this pros and cons discussion on showback versus chargeback.
The primary benefit of the full allocation, percent-of-use model is that you get total recovery. Everything you spend gets allocated out. But one of the key negatives is that some of your consumers lose control over how much of the bill they shoulder.
So, if there are five users of a service one month then only four users the next month, and the service rate stays the same, those four users end up having to bear the burden of the fifth that departed. So that takes away a little bit of control for the customer.
On the flip side, with a rate-based system, the customer has a lot of control. What they use is what they pay for, nothing more and nothing less. But it puts finance organizations in a challenging position, because they have to effectively manage the over-under and make sure they’re recovering costs appropriately.
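The difference between the two models comes down to simple arithmetic. Here is a minimal sketch, with entirely hypothetical numbers and consumer names, of Rob's five-users-to-four example: under full allocation the remaining consumers absorb the departed consumer's share, while under a fixed rate each consumer's bill tracks only their own usage, leaving finance with an over/under balance to manage.

```python
# Illustrative sketch (hypothetical numbers): full-allocation (percent-of-use)
# vs. fixed rate-based chargeback for a single service.

def full_allocation(cost_pool, usage_by_consumer):
    """Spread the entire cost pool across consumers by their share of usage."""
    total_usage = sum(usage_by_consumer.values())
    return {c: cost_pool * u / total_usage for c, u in usage_by_consumer.items()}

def rate_based(rate, usage_by_consumer, cost_pool):
    """Bill each consumer at a fixed rate; return bills and over/under recovery."""
    bills = {c: rate * u for c, u in usage_by_consumer.items()}
    recovered = sum(bills.values())
    return bills, recovered - cost_pool  # positive = over-recovered

cost_pool = 10_000  # hypothetical monthly cost of the service
# Month 1: five consumers of equal usage; Month 2: one departs, same cost pool.
month1 = {"A": 100, "B": 100, "C": 100, "D": 100, "E": 100}
month2 = {"A": 100, "B": 100, "C": 100, "D": 100}

# Full allocation always recovers 100%, but consumer A's bill rises from
# 2,000 to 2,500 even though A's usage never changed.
print(full_allocation(cost_pool, month1)["A"])  # 2000.0
print(full_allocation(cost_pool, month2)["A"])  # 2500.0

# At a fixed rate, A keeps paying 2,000 for the same usage, but finance
# is left 2,000 under-recovered when the fifth consumer departs.
bills, over_under = rate_based(20, month2, cost_pool)
print(bills["A"], over_under)  # 2000 -2000
```

The trade-off the panel describes falls directly out of the math: full allocation shifts volatility onto consumers, while fixed rates shift it onto the finance team as over/under recovery.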
Overall, I think the rate-based model is more mature. But that gives you a high-level overview of the pros and cons of both. Just keep in mind, it’s normal and perfectly fine to start small and refine over time; a lot of people start with one and evolve to the other.
Question #5: How do you monitor over-under in your bill? What does your cost recovery process look like?
What we do is a monthly analysis by cost center. So, we know who is over or under-collected, but we don’t true anything up during the two-year budget. Since everybody has locked down their budgets and they’ve been approved, the rates have to stay as we set them.
But then when we go into the next budget development, we look cost center by cost center to see who’s over or under-collected, and then we just add or subtract that amount from the budget to rectify it going forward.
That’s how we do that piece of it in a nutshell. Also, we do have a 60-day window where we can keep 60 days of operating funds. But the true-up between budget cycles is when things really get reconciled.
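The true-up Kendra describes can be sketched in a few lines. This is a hypothetical illustration, not the State of Maine's actual model: track over/under collection by cost center through the cycle, then subtract over-collection from (or add under-collection back to) each cost center's next-cycle budget.

```python
# Illustrative sketch (hypothetical cost centers and figures): between-cycle
# true-up of over/under collection under a fixed-rate chargeback model.

def over_under_by_cost_center(collected, actual_cost):
    """Cycle-end analysis: positive means over-collected, negative under."""
    return {cc: collected[cc] - actual_cost[cc] for cc in collected}

def next_cycle_budget(base_budget, over_under):
    """Rectify going forward: reduce budget by over-collection, raise it by under."""
    return {cc: base_budget[cc] - over_under.get(cc, 0) for cc in base_budget}

collected = {"network": 1_200_000, "hosting": 800_000}
actual    = {"network": 1_150_000, "hosting": 850_000}

ou = over_under_by_cost_center(collected, actual)
# network over-collected 50k; hosting under-collected 50k
budget = next_cycle_budget({"network": 1_200_000, "hosting": 900_000}, ou)
print(budget)  # {'network': 1150000, 'hosting': 950000}
```

Because rates stay locked once budgets are approved, the correction happens entirely in the next budget development rather than as mid-cycle rate changes.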
Our process works very similarly. We have rates set within the budget as a sort of baseline. And we use comparisons against previous rates to inform what we’re doing.
We try to have market conversations to understand, for a specific type of technology, if a rate should be increasing by ‘X’ amount, and we’ll try to adjust for that in the following year’s budget.
If we’re over, looking at software as an example, we’ll try to see if there’s a true-up opportunity and pay the costs right then and there. And if it’s below a certain threshold, that’s a little easier to do, because anything above $150,000 requires D&A, and that impacts the budget differently. So, we’ll look and ask ourselves… “Okay, can that cost be deferred until the next fiscal year, so we have a better way of accounting for it?”
From an under perspective, it’s actually just a win for the business. That’s something we track as a kind of savings, if you will, and we try to look at it as a new baseline. At the same time, we try to spot any anomalies that could explain why we were under – so we can possibly leverage those moving forward, and also so we can explain to the leadership team whether the impact is positive or negative.
Question #6: What are the most important factors for success in terms of collaboration and communication with customers?
One of our main keys to success was engaging our IT service and application managers right from the beginning of the process. Being new to AmFam, and at the same time trying to stand this up, it was very beneficial for me to have those conversations and understand their costs, so that we were both comfortable with how we were categorizing and applying everything appropriately in the model.
We knew once we started publishing these numbers for consumption by business areas that the Information Services leaders would be go-to resources for those business partners to answer questions. So, we needed those leaders to understand and have buy-in from the start – to be sure they could have those informed, insightful conversations with their partners.
And then from the customer perspective, it’s a little different. Because of the way we’re structured, our IT finance group rolls up to our Chief Technology Officer, but our corporate finance department is responsible for the allocations of Information Services expenses on our revenue-generating P&L lines.
So, from the customer perspective, corporate finance has been leading that conversation. Because along with utilizing our model output for allocations, starting in 2020 corporate finance is also going to be making some other changes. As a result, we were letting corporate finance drive that, but also making sure we have a seat at the table to explain what would change from the Information Services allocation perspective. But having a mutually-beneficial relationship, a good relationship with our corporate finance department, was key to driving those conversations.
And then lastly, we started talking to our business partners in 2018 to get them comfortable with what was about to happen in 2020. So we gave them this very long grace period to ask questions and to understand what was happening before the changes were actually implemented. Obviously, we didn’t want to just change things from one month to another on them because that could cause all kinds of uproar and disagreements.
There are a couple things I’d like to add here. One is – and I’ve been involved in a lot of implementations – make sure you stay focused on your primary mission. You have to stay focused on value, because so many of these initiatives can have scope creep and end up going a lot of different directions – due to the fact so many parties are involved.
The second thing I want to mention, if you can get away with it, is to iterate. Try to limit your scope; start with a handful of applications; start with a handful of services. Whether you want to pilot services or pilot certain consumers. Take small steps and improve as you go.
Now, not everybody will have that luxury. But if you do, it really gives you a chance to refine your model and refine your process – a chance to manage complexity and deal with data challenges. Because data challenges and data quality issues are something most of you will encounter; several people on this panel have had that challenge.
The third thing is a big one, and that is to always be educating. You heard from Jen about the experience she had and how she got out in front of it – making sure everyone understands what’s coming ahead and what it means. That education component is extremely important, and it always has to be there.
If you’re really trying to enable decision-making with a showback, or drive accountability and change behavior with a chargeback, people have to understand everything they’re looking at. And that’s a big education process before and after implementation.
Question #7: How many services do you have in your catalog? How do you achieve the right level of detail while managing complexity?
Today, we have five main service categories comprising 18 different services in total. Different areas require different levels of detail, and we’re doing our best to be sensitive to that.
For example, in our programming labor service, we’re breaking that down into different categories based on the level of work and the level of expertise. But then there are other services grouped more broadly into one main bucket.
In some situations, breaking those out isn’t necessary; but in others, it could be beneficial. For example, take any area where we have costs related to an employee grouped together; that cost could be made up of purchasing a laptop, providing network access, or providing them a phone. If we’re able to break that out further, then maybe we could drive a bit better decision-making to understand what’s truly necessary and what’s not.
For the State of Maine, we have 43 different service categories. And under those categories are 1,100 charges or service rates you can pick from.
So, I think we’ve gone to the extreme. And when you ask whether there are things we could change, I’d say that’s a yes. Right now, we’re looking for ways to simplify the bill. Because, at this point, we have over 60,000 records on our monthly bill.
However, that’s down from half a million three years ago, because we used to report every three-cent phone call for every minute. But we’ve made changes over time, and we’ve tried to bundle and simplify wherever we can.
But what we struggle with is the amount of personnel time spent preparing, processing, reviewing, and paying the almighty bill. It’s still excessive. So, we’re really looking for ways to leverage a fair and reasonable cost allocation approach wherever possible. That’s one of our main focuses right now.
We knew up-front that we couldn’t have as many services as Kendra just described. We needed a bill that was going to be consumable by our non-technical business partners.
Initially, especially when it came to the labor side of things, we tried to mimic some services and processes we already had in-place for our project portfolio recording, because our business partners and our finance teams were used to seeing Information Services reporting in that way.
So, we use our projects to drive a lot of what’s in the service catalog, and we use roles for our labor also – so database administrators, developers and infrastructure engineers, testers, things of that nature.
But then we have three other large service categories.
One category is our application infrastructure. That’s where we have the different types of servers – because our servers have different costs – as well as mainframe and service desk services.
Another category would be our maintenance and licensing related to the applications that the business units own.
And then the last category is our end user services, and these are headcount-driven – our workstation hardware like desktops and laptops, our end user connectivity for internet, our collaboration, groupware, and sharing tools like SharePoint and Box, and then end user computing, which covers licenses everyone has, like Office.
We’ve tried to keep it very high level, so that the individuals reading and consuming the reports aren’t getting stuck in the weeds or having to talk to our technologists about things like middleware and DevOps.
Question #8: Ultimately, everyone is aiming to add value without introducing excess complexity. Rob, do you have any closing thoughts to keep processes simple and straightforward?
Well, it’s really challenging to find the perfect level of detail. Because you need enough detail to guide decisions and for your bill to make sense to consumers… but if you go too far, it becomes unsustainable.
You’ve heard examples today of how too much detail makes your bill too big to manage and too complicated for the consumer. It’s easy for it to get out of hand and start gumming things up.
Like Kendra’s example of going all the way down to every individual phone call. That detail wasn’t helping anybody, so they got rid of it. But it’s hard to figure these things out, and it takes time. It’s just a delicate line you have to dance on.
But I think the litmus test for finding the right level of detail is actually simple…
The test is: if extra detail won’t change decisions or result in a worthwhile cost difference, then it’s useless. In other words, if you increase detail and complexity, is it going to produce a material difference in the decision that gets made? It’s not a magic bullet, but it’s a helpful way to think about how to start refining these processes.
You’re always going to be driven to a deeper level of detail. It rarely, if ever, goes the other way. Every force in your universe wants more detail.
But again, just keep that litmus test in mind… if you provide more details and the decision isn’t any different, then what’s the point? If you tell me that extra detail is going to help you make a decision, then I provide it. If not, I don’t.