The fundamental role of data infrastructure is to protect, preserve, secure and serve applications and data, transforming them into information. Data protection is an encompassing topic, spanning security (logical and physical); reliability, availability and serviceability (RAS); privacy and encryption; backup/restore; archiving; business continuance (BC); business resiliency (BR); and disaster recovery (DR).
Recently, we’ve seen news about data infrastructure and application outages, including Amazon Web Services (AWS) Simple Storage Service (S3), GitLab and the Australian Tax Office (ATO).
What’s concerning about these and other scenarios, many of which never make the headlines, is that they are preventable with proper application and data protection. When I say many, if not most, disasters or outages can be prevented or minimized, I mean that if you know something can fail, you should be able to take steps to prevent, isolate and contain the problem.
Keep in mind that accidents happen, and technology can and does fail, particularly when humans are involved in defining and configuring hardware, software, services and policies. Data loss can be complete (all gone) or partial (some data deleted or damaged). The scope, scale and focus of data protection range from geographic regions to sites and data centers, systems, clusters, stamps, racks, cabinets, shelves, and individual servers or other components (hardware and software).
Software-defined data infrastructure and software-defined threats
With many environments moving toward or already leveraging some aspect of software-defined data centers (SDDC) and software-defined data infrastructure (SDDI), now is a good time to talk about software-defined threats and software-defined data protection management.
Software-defined data protection is about enabling availability for data infrastructure and the applications (as well as data) they support.
Today, data infrastructure and the applications and data it is tasked with protecting, preserving, securing and serving are under attack from both legacy and software-defined threats. This means protecting against different threats, from acts of man or nature, accidental or intentional. Configuration errors, software bugs and human mistakes can also cause disruptions.
Software-defined threats include ransomware, spyware, bots, denial of service, phishing, viruses and other malware. Note that when I say the above are software defined, I am not talking about software-defined marketing; rather, those threats are software-based and designed to cause damage, destruction, disruption, theft or all of the above to your data infrastructure and the applications it supports.
Enabling application and data protection (be prepared)
If you know something can fail, why not protect data and applications with resiliency? A common theme I see in and around data infrastructure is that many outages could have been prevented by investing in resiliency instead of saving money by cutting costs.
For example, putting all your data and applications in a single cloud and a single region creates a single point of failure. On the other hand, you could spend a little more and have your data infrastructure span multiple clouds or regions (as well as on-premises), gaining improved resiliency.
Data infrastructure tips and recommendations:
- How much resiliency you need, the cost, and the business benefits will vary with different applications and environments. Look at it this way: What’s the cost of not doing something, particularly after an outage or disaster when your customers find out you could have done better?
- Anything in the cloud should have a copy elsewhere (same cloud different region, different cloud or on site). This includes data, metadata, applications, keys, certificates and other resources, as well as standby DNS capabilities for web access. Likewise, anything on site should have a copy elsewhere, either online or offline, in the cloud or some other venue.
- Only you can prevent data loss. That is a bit strong; however, you (or your management) can make the decision to invest in resiliency or to cut costs and reduce availability. Likewise, you can verify that your service providers can do what you need and expect from them. You also need to configure, and pay for, protection for when (not if) they fail. Instead of thinking of data protection as cost overhead, get your management to see data protection and resiliency as a business asset.
- In addition to making sure your data is protected, also make sure you can actually restore it and use it.
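The last point above, verifying that a backup can actually be restored, can be automated. Here is a minimal sketch using only Python's standard library; the `shutil.copy2` calls are stand-ins for whatever your real backup and restore tooling does, and comparing checksums confirms the restored copy is byte-for-byte identical to the original:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source: Path, backup: Path, restore_dir: Path) -> bool:
    """Restore `backup` into `restore_dir` and confirm it matches `source`.

    A backup only counts if the restored copy matches the original;
    comparing digests catches silent corruption along the way.
    """
    restored = restore_dir / source.name
    shutil.copy2(backup, restored)   # stand-in for the real restore step
    return sha256_of(restored) == sha256_of(source)

# Demo with throwaway files
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    original = tmp / "data.txt"
    original.write_bytes(b"critical application data")
    backup = tmp / "data.bak"
    shutil.copy2(original, backup)   # stand-in for the real backup step
    restore_dir = tmp / "restore"
    restore_dir.mkdir()
    ok = verify_restore(original, backup, restore_dir)
    print("restore verified:", ok)
```

Run something like this on a schedule, not just once: a backup that was restorable last year may not be today.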
By not treating all applications and data the same, and by leveraging data footprint reduction (DFR) techniques such as compression, de-duplication and rethinking backups, you can keep more copies and versions with less overhead and cost. What this all means is that your data infrastructure needs to be resilient and durable.
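To make the de-duplication idea concrete, here is a rough illustration of how it reduces footprint: split data into chunks, hash each chunk, and store only the unique ones. This is a simplification; production dedupe engines typically use variable-size chunking and persistent chunk stores, but the principle is the same:

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep only unique ones.

    Returns (store, recipe): `store` maps chunk hash -> chunk bytes,
    and `recipe` is the ordered list of hashes needed to rebuild.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original data from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly repetitive data (think nightly full backups) dedupes well
data = b"A" * 4096 * 100 + b"B" * 4096 * 5
store, recipe = dedupe(data)
stored_bytes = sum(len(c) for c in store.values())
print(f"original: {len(data)} bytes, stored: {stored_bytes} bytes")
```

Because repeated chunks are stored once, keeping many versions of largely similar data costs far less than keeping full copies, which is exactly why DFR makes extra copies and versions affordable.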
This article is published as part of the IDG Contributor Network.