How do you plan a move to your next house or apartment? Do you keep everything you have and move it all? I’m guessing you don’t. Few of us would simply throw everything in the truck and send it off to the new house.
It saves time, money, effort, and space to purge first. Why move across town, across the country or (in my case) across the globe, with flotsam and jetsam we no longer need?
Working with business data is not appreciably different from moving to a new home. Data volumes are growing at a terrifying pace, more than 60% every month according to IT professionals, and most of that data will eventually shift from on-premises storage to the cloud. Yes, storage is growing, but it often grows because there is no proactive process to control it. Reactive planning and data handling accelerate data growth and inflate IT costs.
“Storage is cheap,” you hear. I call BS. The total cost of ownership of storage is high, and it carries huge risks.
In the past, enterprises dealt with this by decommissioning an older storage device and moving its contents to a newer, larger one. Three years or so later, they did it again.
Despite the falling cost of disk storage, this method can’t accommodate the flood of data enterprises experience today. Imagine if the clothes in your bureau grew 60% per month!
Fortunately, we have other options. They are not, however, a bigger cloud or cheaper storage. That’s what we hear, but it’s simply wrong.
The more storage you have, the more processing power you need. Processing power is expensive, and so are cloud egress fees: the more data you store, the more you must egress whenever you search for something. Unfortunately, enterprises treat the move to the cloud the same old way we moved to more capacious physical storage devices: move everything, haphazardly, the way we’d never move from one house to another.
How strange that when it comes to business data, most of us don’t even practice the basics. Most organizations are data hoarders with no active intelligence or automation, and the true cost of this old-fashioned practice is enormous.
Certainly, many organizations fear deleting anything. Many are subject to regulations that govern data retention. The U.S. Securities and Exchange Commission requires that brokerages keep their client account data for six years, which means they are free to delete that data at six years and one day, with no regulatory implications. These processes can be fully automated with the right data intelligence tools, keeping capacity growth in check and reducing costs and risk.
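To make the automation concrete, a retention sweep like that six-year rule boils down to an age check against file metadata. Here is a minimal, hypothetical Python sketch; the function name, the flat six-year window, and the use of filesystem last-modified time as the retention clock are all my own assumptions for illustration, not a compliance tool. Real retention policies would key off business metadata, legal holds, and record dates.

```python
import time
from pathlib import Path

SIX_YEARS_SECONDS = 6 * 365 * 24 * 3600  # illustrative retention window

def expired_files(root):
    """Return files whose last-modified time falls outside the retention window.

    Assumption: mtime stands in for the record date. A production tool
    would use business metadata, not filesystem timestamps.
    """
    cutoff = time.time() - SIX_YEARS_SECONDS
    return [
        p for p in Path(root).rglob("*")
        if p.is_file() and p.stat().st_mtime < cutoff
    ]
```

A policy engine would then route these candidates to deletion or legal review rather than deleting them outright.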
The European Union’s General Data Protection Regulation, GDPR, gives consumers the “right to be forgotten,” meaning companies must delete personally identifying information. That is practically impossible if you don’t know which repository contains the information in the first place. Is the Social Security number of an employee or a client sitting somewhere on your system, creating a multimillion-dollar risk? Even crazier is the fact that you might have to move that information from point A to point B to point C.
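Knowing which repository holds that Social Security number starts with scanning content. Below is a minimal, hypothetical Python sketch using a naive regex for US-style SSNs; the pattern, the function name, and the text-only scanning are assumptions for illustration. Production PII discovery handles binary formats, checksum validation, and many more identifier types.

```python
import re
from pathlib import Path

# Naive illustrative pattern: SSNs written as 123-45-6789.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def files_containing_ssn(root):
    """Return paths of readable text files that appear to contain an SSN."""
    hits = []
    for p in Path(root).rglob("*"):
        if not p.is_file():
            continue
        try:
            text = p.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; a real scanner would log this
        if SSN_PATTERN.search(text):
            hits.append(p)
    return hits
```

Once files are located, honoring a deletion request becomes a targeted operation instead of a fleet-wide guessing game.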
Keeping everything is a poor corporate policy. There is no need to keep data that has no organizational value. Even regulatory and legal scenarios are easily addressed by understanding files according to their content or attributes.
There’s good reason to fear deleting a file that may be useful someday, or one that a judge may order you to produce. The rule is simple: don’t delete files of potential importance.
But how do you know what files are potentially important? This isn’t particularly complicated, either. It requires searching and classifying content in an automated way, which is typically followed by applying policies so the files are managed and stored according to the business need.
For example, there’s no need to keep redundant copies of the same file. If there are 20 copies of the same slide deck, it’s safe to purge 19. Irrelevant files have no value and can simply be purged. Obsolete, valueless data such as the personnel files of former employees should be deleted, while “cold” files of questionable value that are rarely or never accessed can move to the cheapest of cloud tiers.
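The redundant-copy case above can be sketched with a content hash: files with identical bytes produce identical digests, so every copy after the first is a purge candidate. A minimal Python sketch follows; the function name and the SHA-256 choice are my own assumptions, and real tools typically compare file sizes first and hash large files in chunks.

```python
import hashlib
from pathlib import Path

def find_duplicates(root):
    """Group files by content hash; everything after the first in each group is redundant."""
    seen = {}        # content digest -> first path encountered
    redundant = []   # later byte-identical copies, safe to purge
    for p in sorted(Path(root).rglob("*")):
        if not p.is_file():
            continue
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        if digest in seen:
            redundant.append(p)
        else:
            seen[digest] = p
    return redundant
```

Applied to the 20-copy slide deck, this keeps the first copy and flags the other 19.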
In my line of work, it’s not uncommon to see that as much as 30% of business files are unnecessary; accordingly, reducing the amount of data before shipping it to cloud storage can cut costs by 30% or more. It also makes those migrations more efficient in terms of transfer and landing. It can even improve local application performance and overall productivity. And there’s no doubt it reduces unnecessary risk.
Do not allow IT to burn more money until they can explain the following to you:
Data storage is not cheap if you do the math, and keeping data regardless of its value never leads to good outcomes. It is a bad practice, pushed by firms that profit from your doing exactly this. Legacy storage, backup, and archive firms all have business models based on data usage. Be wary: these folks have a vested interest when they tell you how to handle your data.
Finally, aren’t we all envious of how some of the most valuable firms in the world make their money with data?
Ever wondered whether you’ve got some hidden treasure nobody can envision?
Data intelligence is the foundation of such treasure, and uncovering it is a wonderful side effect of investing in proactive data intelligence and automation tools.
This article, written by Aparavi CEO Adrian Knapp, was originally published on Forbes.