
7 Steps to Great ITFM Data

Jun 27, 2018 | By Amy Robertson

About a month ago, I attended a phenomenal IT Financial Management (ITFM) Week in Chicago. The weather was perfect, the presenters were excellent, and Chicago’s hospitality never disappoints.

One presentation that stood out was by David DuBois from BJ’s Wholesale Club. His main point was that while there is a lot of value to be gained from ITFM, basic blocking and tackling around controlling and maintaining your data is key to gaining that value. As I was riding to the airport, this thought resonated: DATA is king.

Data is the lifeblood of your ITFM program. An ITFM tool can do a lot of heavy lifting, with robust calculations, assignments, and allocations plus reports and graphs, but even the fanciest, most complex ITFM program will produce garbage results if it starts with bad data.

I’ve seen well-intentioned programs unravel and become unreliable, obsolete, and useless because of bad data.

On the airplane home, I spent some time thinking about how to maintain good data, and the key things every ITFM leader needs to do to ensure the data flowing through the system is in tip-top shape:

1. Understand your source systems.

Your source systems form the backbone of your ITFM practice. It’s critical to understand what data is available, and how to access it, in order to build and maintain proper interfaces between the source systems and your ITFM tool. Once you know what data your source systems offer, you just have to figure out how to “connect the dots” between that data and the end goals of your ITFM program.

Spend the time to understand your source systems – the better you understand the data found in them and its context, the better you’ll be able to meet your goals.

2. Show your data owners some love.

Don’t underestimate the value of building and maintaining good working relationships and open lines of communication with the individuals who own each source system. You’ll need to stay abreast of any changes in the data at the source so you can make the necessary changes in your ITFM tool, and you’ll want to know who to contact when the structure or content of a file exported from a source system needs to change, which happens often as your ITFM program matures.

Often, the only interaction data owners have with ITFM is at the beginning, or when problems arise. Spend a little extra time to develop those relationships – it will make life much easier down the road.

3. Master your data mapping.

Your various source systems contain a wide variety of data, which is translated into your ITFM structures via data mapping. For example, your GL might have fields called “Project ID”, “Investment ID”, and “Asset ID”, while in your ITFM tool these data elements all map to one field called “Internal ID”. Really understanding how data is mapped across systems will help when it comes time to build more maturity around your ITFM program.

Be sure to document your data map – how each source system’s fields match up with your ITFM tool’s. This will make future enhancements much easier, and it is critical for bringing new team members up to speed.
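As a rough sketch of what a documented data map might look like, here is the GL-to-ITFM example above expressed as a simple lookup, with a small helper that applies it to a record. The field names “Project ID”, “Investment ID”, “Asset ID”, and “Internal ID” come from the example; everything else (the extra fields, the function name) is a hypothetical illustration, not any particular tool’s format:

```python
# Hypothetical data map: several GL identifiers collapse into the single
# ITFM field "Internal ID", per the example above.
GL_TO_ITFM_MAP = {
    "Project ID": "Internal ID",
    "Investment ID": "Internal ID",
    "Asset ID": "Internal ID",
    "Cost Center": "Cost Center",   # illustrative extra fields
    "Vendor Name": "Vendor",
}

def map_record(source_record: dict) -> dict:
    """Translate one GL record into ITFM field names, dropping unmapped or empty fields."""
    mapped = {}
    for source_field, value in source_record.items():
        target_field = GL_TO_ITFM_MAP.get(source_field)
        if target_field is not None and value not in (None, ""):
            mapped[target_field] = value
    return mapped

# A GL row keyed by "Project ID" lands in the ITFM "Internal ID" field.
print(map_record({"Project ID": "PRJ-1042", "Cost Center": "CC-300", "Vendor Name": "IBM"}))
# -> {'Internal ID': 'PRJ-1042', 'Cost Center': 'CC-300', 'Vendor': 'IBM'}
```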

4. Optimize your timespan and refresh cadence for each data source.

Different data sources have different refresh rates based on the type of data and system, so optimizing the timespan of data for each data interface is critical to building and maintaining your import definitions and processes.

Let’s say you have a data interface for time tracking data. There are several options for the timespan of this data, and they affect how you maintain it (the refresh cadence) in your ITFM tool: the timespan could be year-to-date, the current month, the current week, or just the new records since the last interface file was sent.

Getting this right when you’re creating the interface is crucial. If time tracking data is imported into your Bill of IT weekly, for example, and only includes hours worked for a given week, then your import definition must append the records to the current month’s time tracking table. By the end of the month, the table will hold the entire month of labor hours, which allows you to calculate and charge back your labor costs. If, instead, the time tracking file is imported at the end of each month and includes all the hours worked for that month, then your import definition should truncate the table and replace it with the contents of the new file. Getting these wrong leads to lots of confusion and bad data.
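For a concrete (and deliberately simplified) picture of the two modes, here is a sketch of an append-style weekly import versus a truncate-and-replace monthly import against a hypothetical time_tracking table. The SQLite storage, column names, and function name are assumptions made for illustration, not a specific ITFM tool’s import API:

```python
# A minimal sketch of the two import styles described above, using SQLite and a
# hypothetical time_tracking table.
import sqlite3

def import_time_tracking(conn, rows, mode):
    """Load time tracking rows.

    mode="append"   -> weekly file with only that week's hours: add to the table.
    mode="truncate" -> monthly file with the full month's hours: replace the table.
    """
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS time_tracking "
        "(employee_id TEXT, project_id TEXT, week_ending TEXT, hours REAL)"
    )
    if mode == "truncate":
        cur.execute("DELETE FROM time_tracking")  # replace with the new file's contents
    cur.executemany("INSERT INTO time_tracking VALUES (?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
# Week 1 and week 2 files are appended; by month end the table holds the full month.
import_time_tracking(conn, [("E1", "PRJ-1042", "2018-06-08", 32.0)], mode="append")
import_time_tracking(conn, [("E1", "PRJ-1042", "2018-06-15", 40.0)], mode="append")
print(conn.execute("SELECT SUM(hours) FROM time_tracking").fetchone())  # (72.0,)
```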

When you are adding a new source to your system, be sure to understand how it works and how you’ll use the data, then choose – and document! – an optimal refresh cadence and timespan for importing new data.

5. Cleanse and maintain your data on a regular schedule.

The reality is that data trends towards messiness.

When your system is based on multiple data sources that you don’t control (as all systems are), some bad data will eventually creep in. For example, you may find multiple entries for a single vendor, like “International Business Machines,” “IBM,” “IBM, Inc.”, and “International Business Machines, Inc.” It is important to normalize your data by picking ONE name for the vendor and regularly reviewing the data in the tool to make sure these kinds of duplicates don’t accumulate, since they make the reporting that supports your analysis inefficient and ineffective.

Many tools allow you to build rules that look for these situations and systematically assign the desired value or, at a minimum, highlight a potential duplicate on a control report so an admin can manually resolve the issue.
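A minimal sketch of such a rule, using the IBM variants above: known aliases map to one canonical name, and anything unmatched is flagged for the control report. The alias table and function are hypothetical; real tools express this through their own rule engines:

```python
# Hypothetical alias table: every known spelling maps to ONE canonical vendor name.
CANONICAL_VENDORS = {
    "international business machines": "IBM",
    "ibm": "IBM",
    "ibm, inc.": "IBM",
    "international business machines, inc.": "IBM",
}

def normalize_vendor(raw_name: str, exceptions: list) -> str:
    """Return the canonical vendor name, or flag the record for manual review."""
    canonical = CANONICAL_VENDORS.get(raw_name.strip().lower())
    if canonical is None:
        # No rule matched: keep the raw value and add it to the control report.
        exceptions.append(raw_name)
        return raw_name
    return canonical

exceptions = []
print(normalize_vendor("IBM, Inc.", exceptions))               # IBM
print(normalize_vendor("Intl Business Machines", exceptions))  # unmatched, kept as-is
print(exceptions)                                              # ['Intl Business Machines']
```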

Regardless of the rules you build, make a practice of periodically reviewing key data tables, like your vendor, application, consumer, and services listings, to catch problems before they snowball.

6. Build data validations appropriate to the data’s risk level.

Robust data validation can be time consuming and is often skipped. That may be fine for some data sources, but not for others – knowing when to build robust data validation will help you maintain the integrity of your system without incurring unneeded costs or delays.

Develop a risk assessment based on two factors: how much you trust the data source, and how much transformation the data interface requires. Together, these place each interface on a spectrum from low control to high control.

Data that is imported with no transformation from a highly trusted source – data from a Chart of Accounts that becomes a lookup table, for example – typically requires no validation.

Data that is either imported from a low trust source, or that requires some transformation, requires some validation.

On the far end, “high control” data interfaces involve complex import rules related to timespan and cadence, perform data transformation, and often require robust data validation against other lookup tables.
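One way to picture that risk assessment is as a simple decision on the two factors. The tier names and the boolean simplification below are illustrative assumptions, not a formal standard:

```python
# Rough sketch of the risk assessment: source trust and amount of transformation
# together decide how much validation an interface needs.
def validation_level(trusted_source: bool, transformed: bool) -> str:
    if trusted_source and not transformed:
        return "none"          # e.g., Chart of Accounts loaded straight into a lookup table
    if trusted_source or not transformed:
        return "basic"         # low trust OR some transformation: validate key fields
    return "high control"      # low trust AND heavy transformation: full staging pipeline

print(validation_level(trusted_source=True,  transformed=False))  # none
print(validation_level(trusted_source=False, transformed=True))   # high control
```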

Time tracking data is an example of risky data. Plan to configure processes to include a preliminary staging table with rules for archiving, validation, transformation, version control, and balance verification before the data ultimately progresses to the final target table.

There are several ways such data might be validated in a tool: validating Cost Centers against a Cost Center lookup table, employee IDs against an employee table, Project IDs against the Project lookup table, and so on.

Consider also requesting a summary of hours for the same timespan as the imported file and matching the sum of hours in the time tracking file against that summary cross-check file. If any validation errors exist, create a record and report it on your exception/control reports.
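Putting those checks together, a validation pass over a time tracking file might look roughly like the sketch below: each row is checked against Cost Center, employee, and Project lookup tables, the file’s total hours are balanced against the summary cross-check, and every failure is written to an exception list for the control report. All table contents and field names are hypothetical:

```python
# Hypothetical lookup tables an ITFM tool might maintain.
VALID_COST_CENTERS = {"CC-300", "CC-310"}
VALID_EMPLOYEES = {"E1", "E2"}
VALID_PROJECTS = {"PRJ-1042"}

def validate_time_tracking(rows, summary_total_hours):
    """Return a list of exception records for the control report."""
    exceptions = []
    for row in rows:
        if row["cost_center"] not in VALID_COST_CENTERS:
            exceptions.append(("unknown cost center", row))
        if row["employee_id"] not in VALID_EMPLOYEES:
            exceptions.append(("unknown employee", row))
        if row["project_id"] not in VALID_PROJECTS:
            exceptions.append(("unknown project", row))
    # Balance verification: file total must match the summary cross-check.
    file_total = sum(row["hours"] for row in rows)
    if abs(file_total - summary_total_hours) > 0.01:
        exceptions.append(("hours do not balance",
                           {"file": file_total, "summary": summary_total_hours}))
    return exceptions

rows = [
    {"employee_id": "E1", "project_id": "PRJ-1042", "cost_center": "CC-300", "hours": 32.0},
    {"employee_id": "E9", "project_id": "PRJ-1042", "cost_center": "CC-300", "hours": 8.0},
]
# Hours balance against the 40.0-hour summary, but employee E9 fails the lookup.
print(validate_time_tracking(rows, summary_total_hours=40.0))
```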

7. Finally, understand the context around your data.

Clean data isn’t the end; it’s still important to guard against poor analysis by maintaining a solid understanding of your data’s context. When data is not well understood it can easily lead to unhelpful – or worse, misleading – analysis.

For example, one team developed a server rate using budget data and calculated a cost of $1,000 per server operating system. However, when it came time to charge back server costs based on utilization, the data they were receiving was actually the number of CPUs for each server. Without realizing it, the team was charging its partners incorrectly, because it had used one metric to develop the rate but was not receiving that same metric for use in its Bill of IT.

Making sure you understand the data you are using will ensure the results of your calculations are meaningful and helpful in driving effective decision-making.

There is a lot to consider when it comes to keeping the data in your ITFM tool clean, but these key steps will help give you the data quality you need. Developing a thorough understanding of your data yields accuracy and usability, which in turn produce credibility and better decision-making.

Happy analyzing!
