Unchecked growth in corporate data, combined with the volatility of data protection systems, creates a nightmare for corporations: millions of dollars are wasted on unnecessary infrastructure, overburdened operating budgets and failed recoveries.
Now more than ever, companies need to take aggressive action to improve the reliability and reduce the cost of their backup operations. The solution lies in getting visibility into the backup operating environment. Here are a few steps that companies can take to improve backup reliability.
1. Consolidate your information, not your platforms
Centralizing control of your backups is a good idea. It can reduce your total operational and capital costs. However, companies moving to centralized backup often err by consolidating backup systems, rather than consolidating administration of existing systems. This wastes invested capital and creates transition costs.
Using "storage intelligence" tools for backup visibility, companies can consolidate vital data across the enterprise from a mixed backup software environment to improve the reliability and reduce the costs of their backup infrastructure.
2. Get managers the information necessary to understand, plan and control
Without the necessary tools, many backup operations waste time on log parsing and cumbersome manual report creation in order to understand their environment. This is not only costly, but the results are usually incomplete. There is simply too much data and too little time to process it and then act on it. Storage intelligence software solves this problem by automating the aggregation and presentation of vital statistics from the backup environment allowing customers to:
- Track error rates and corrective response times for critical assets.
- Share results and resource-consumption data with data owners to build awareness of how resources are being used.
- Reduce the time-to-productivity for new hires.
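The aggregation described above can be sketched in a few lines. This is a minimal illustration, not any vendor's product: the record fields ("client", "status") are assumptions chosen for the example.

```python
from collections import defaultdict

def error_rates(job_records):
    """Aggregate per-client failure rates from backup job records.

    Each record is a dict like {"client": "db01", "status": "ok" | "failed"};
    the field names are illustrative, not tied to any backup product.
    """
    totals = defaultdict(lambda: {"ok": 0, "failed": 0})
    for rec in job_records:
        totals[rec["client"]][rec["status"]] += 1
    return {
        client: counts["failed"] / (counts["ok"] + counts["failed"])
        for client, counts in totals.items()
    }

jobs = [
    {"client": "db01", "status": "ok"},
    {"client": "db01", "status": "failed"},
    {"client": "web01", "status": "ok"},
]
rates = error_rates(jobs)
```

A real storage intelligence tool would pull these records from each backup product's logs or API; the point is that once the data is normalized, tracking error rates per critical asset becomes a trivial aggregation.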
3. Optimize the use of your existing infrastructure
Without visibility into the usage of the backup infrastructure, companies are prone to overbuy capacity for anticipated needs. Instead, using tools that provide backup visibility, the IT staff can bring efficiency to a new level:
- Balance the load between backup servers to reduce the burden on over-taxed resources and ensure full usage of invested resources.
- Measure backup server and media throughput.
- Identify and eliminate unintended redundancy.
- Identify backup jobs that run too slowly and adjust the system configuration accordingly.
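Measuring backup server throughput, as the bullets above suggest, reduces to dividing data moved by time spent per server. A minimal sketch, with illustrative field names ("server", "mb", "seconds") that are assumptions of the example:

```python
from collections import defaultdict

def server_throughput(jobs):
    """Compute average MB/s per backup server from completed job records.

    Each record carries the server name, megabytes transferred, and job
    duration in seconds -- illustrative fields, not a real product schema.
    """
    mb = defaultdict(float)
    secs = defaultdict(float)
    for j in jobs:
        mb[j["server"]] += j["mb"]
        secs[j["server"]] += j["seconds"]
    return {server: mb[server] / secs[server] for server in mb}

servers = server_throughput([
    {"server": "bk1", "mb": 1800.0, "seconds": 600.0},
    {"server": "bk1", "mb": 1800.0, "seconds": 600.0},
    {"server": "bk2", "mb": 600.0, "seconds": 600.0},
])
```

Comparing these per-server figures is what makes load balancing actionable: a server moving one-third the data per second of its peers is either over-taxed or misconfigured.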
4. Add capacity when it is needed -- not before and not after
Backup is typically managed either ahead of the curve -- buying excess capacity in advance of need, or behind the curve -- adding capacity after limits are reached. Both cost the organization in real dollars and increase the exposure to failure.
Cost management means understanding your actual usage trends, sources of growth and rates of increase. This data can then be applied to forecast future demand and target the "right" improvements. Optimization can be achieved by:
- Understanding backup trends and data sources -- are critical databases growing, or are end-users abusing storage policies?
- Tracking storage growth by DR tier and application to ensure that critical dollars are being expended on critical assets.
- Using data to take advantage of falling storage prices and get the best price at the right time, avoiding expensive purchases.
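Turning usage history into a demand forecast can be as simple as fitting a linear trend to monthly capacity figures. The sketch below is illustrative only; real forecasting would account for seasonality and per-tier growth:

```python
def forecast_usage(history, periods_ahead):
    """Least-squares linear trend over a series of monthly usage figures
    (e.g. GB protected), extrapolated periods_ahead months past the last
    observation. Illustrative sketch, not a production forecasting model.
    """
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Growing 10 GB/month; projected demand two months out.
projected = forecast_usage([100, 110, 120, 130], periods_ahead=2)
```

With a projection in hand, purchases can be timed to land just ahead of need -- capturing falling storage prices without running out of headroom.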
5. Fix reliability to avoid the costs of lost data and data replacement
Responsible cost management for your enterprise means ensuring the reliability of your backup operations enterprise-wide. Spot checks, log parsing and other manual workarounds only tell part of the story. The complexity of heterogeneous open-platform environments creates risk of data loss in many more areas than can be checked manually.
Instead, IT managers should use troubleshooting tools to gather data enterprise wide on backup success and failure, allowing them to:
- Make sure backups occur reliably and reduce the need for redundant or repeated backup attempts.
- Shorten the time to respond to failed backups by automating the process of sorting through logs to identify failures.
- Automate the prioritization of failure remediation on a daily basis.
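Automating daily failure prioritization, as described above, amounts to filtering the night's job records for failures and ordering them by the criticality of the affected asset. A minimal sketch; the DR-tier mapping and field names are assumptions of the example:

```python
def prioritize_failures(jobs, tier_of):
    """Return failed backup jobs ordered so the most critical assets are
    remediated first.

    tier_of maps a client name to its DR tier (1 = most critical); clients
    with no assigned tier sort last. Field names are illustrative.
    """
    failed = [j for j in jobs if j["status"] == "failed"]
    return sorted(failed, key=lambda j: (tier_of.get(j["client"], 99), j["client"]))

ordered = prioritize_failures(
    jobs=[
        {"client": "web01", "status": "failed"},
        {"client": "db01", "status": "failed"},
        {"client": "app01", "status": "ok"},
    ],
    tier_of={"db01": 1, "web01": 2},
)
```

Running this against the aggregated logs each morning gives administrators a ready-made remediation worklist instead of an unsorted pile of failure messages.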
6. Institute charge-back to end-users for backup resources consumed
Show end users the costs of supporting their backup requirements, and suddenly they pay more attention to the criticality of the data backup services they request. Implementing fixed and variable (volume-based) charges for backup allows your managers to:
- Show data owners their consumption of backup resources to reshape/reduce consumption.
- Include retention period in calculation of bill back rates.
- Enable financial accountability at the end user level for corporate resources consumed.
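A charge-back formula combining the elements above -- a fixed fee, a volume charge, and a retention multiplier -- can be sketched as follows. All rates are illustrative placeholders, not real pricing:

```python
def monthly_chargeback(gb_protected, retention_days,
                       fixed_fee=50.0, rate_per_gb=0.10, base_retention=30):
    """Monthly bill-back for one data owner: a fixed fee plus a volume
    charge scaled by how long the data is retained relative to a baseline
    retention period. All rates are placeholder assumptions.
    """
    retention_factor = retention_days / base_retention
    return fixed_fee + gb_protected * rate_per_gb * retention_factor

# 500 GB kept for 60 days costs twice the volume charge of 30-day retention.
bill = monthly_chargeback(gb_protected=500, retention_days=60)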
7. Take control of your backup windows
Backup windows are the spans of time during which backups are supposed to run reliably. Unfortunately, the volatility and complexity of backup environments often cause window overruns that impact the business.
With backup resource management tools to gather data on backup performance and system throughput, administrators can:
- Identify unused backup windows, particularly during low network traffic.
- Spread backup jobs evenly across the total weekly backup window.
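Spreading jobs evenly across the available windows is a load-balancing problem; a common greedy heuristic (longest job first, into the least-loaded window) gets close to even in practice. A sketch under that assumption -- the job names and durations are invented for the example:

```python
def balance_windows(job_durations, n_windows):
    """Assign jobs to backup windows with the longest-processing-time
    heuristic: take jobs largest-first and place each into the currently
    least-loaded window. Returns (assignments, per-window load in hours).
    """
    loads = [0.0] * n_windows
    assignment = [[] for _ in range(n_windows)]
    for name, hours in sorted(job_durations.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))  # least-loaded window so far
        loads[i] += hours
        assignment[i].append(name)
    return assignment, loads

assignment, loads = balance_windows(
    {"full_db": 4.0, "mail": 3.0, "files": 3.0, "home": 2.0},
    n_windows=2,
)
```

Here twelve hours of jobs split into two six-hour windows, instead of the lopsided schedule that tends to emerge when jobs are added ad hoc over time.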
- Eliminate network degradation caused by bandwidth deficiencies.
8. Reduce your costs through intelligent media management and utilization
Optimizing the usage and throughput of tape libraries and drives has a huge impact on your organization. Given the cost of libraries, drives, media and the management process, tracking efficient usage is a must.
Using an optimization tool to provide media visibility, backup administrators are able to:
- Identify tape usage trends for capacity planning and budgeting purposes.
- Identify tapes that can be recycled earlier than their planned rotation cycle.
- Reduce long-term costs by using tapes more efficiently.
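Identifying tapes eligible for early recycling, per the bullets above, is a matter of comparing each tape's write date plus its retention period against today. A minimal sketch with illustrative fields:

```python
from datetime import date, timedelta

def recyclable_tapes(tapes, today):
    """Return IDs of tapes whose retention period has expired -- candidates
    for recycling ahead of the planned rotation cycle. The record fields
    (id, written, retention_days) are illustrative, not a real catalog schema.
    """
    free = []
    for t in tapes:
        if t["written"] + timedelta(days=t["retention_days"]) <= today:
            free.append(t["id"])
    return free

free = recyclable_tapes(
    tapes=[
        {"id": "T001", "written": date(2024, 1, 1), "retention_days": 30},
        {"id": "T002", "written": date(2024, 2, 20), "retention_days": 30},
    ],
    today=date(2024, 3, 1),
)
```

Run daily against the media catalog, a report like this turns idle expired tapes back into usable capacity before new media has to be bought.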
9. Do your part to comply with regulations
The enactment of legislation like Sarbanes-Oxley and HIPAA has greatly increased the burden on companies to ensure the long-term, successful safekeeping of their data. Companies that cannot ensure the reliable, recoverable status of their backup data face the penalties and fines mandated by these measures.
Instead, a compliance plan should start with a baseline understanding of the operating environment, so that companies can identify areas where data protection is unsuccessful and the corporation faces potential penalty.
About the author: Drake Pruitt is Vice President of Marketing for Bocada. For more information, see Bocada's Web site.