Cloud-First Transformation: 4 Data Roadblocks to Prepare For

Staci Korske

Senior Manager, Marketing Campaigns

Staci has 14 years of marketing experience supporting clients across various industries. Most recently, she has spent over four years working in the data management and information governance market.


The cloud-first priority shift is one of the biggest transformative changes impacting businesses as a result of Covid-19. According to Gartner, by 2024 more than 45% of IT spending on system infrastructure, infrastructure software, application software, and business process outsourcing will shift from traditional solutions to the cloud, and by 2025, 85% of large enterprises will have a cloud-first principle. Pre-Covid, many organizations were just setting out on their cloud transformation journeys – some were only beginning to shift toward cloud-first thinking, while others were in the early stages of executing a cloud strategy – but the pandemic has dramatically accelerated the cloud-first shift across the board. According to Forbes, large volumes of data-heavy workloads stemming from changes like employees working from home and increased customer demand for online services have made modern, agile, scalable, secure, and resilient technology infrastructures the new imperative in 2021.

The benefits of a cloud-first model are extensive, but they're not the focus of this post. Instead, we'll dive into some of the data-related roadblocks that Congruity360 has uncovered from businesses that have recognized the need for a cloud-first strategy but have gotten stuck executing their cloud transformations. Below are the top four data-related roadblocks to be aware of on your cloud transformation journey:

1. Not having a full understanding of your data

Executing a cloud strategy without in-depth insight into the makeup of your data footprint is like flying blind. How can you effectively and securely migrate data and workflows to the cloud when you don’t have an accurate picture of what types of data you’re dealing with, where different data types are stored, and what risk might lie inside? Unstructured data adds even more of a challenge here, as this “dark” data is more difficult to analyze and understand.

Prior to a cloud migration, organizations should quantify and categorize file types and map data by location. They should aim to identify data with no business value – ROT (redundant, obsolete, trivial) data, duplicates, or data outside its retention period – that can be cleaned up or defensibly deleted, rather than simply picking up a current problem and moving it to a new location. Failing to address and clean up this data prior to a migration will clutter your pristine cloud environments and significantly reduce the potential savings the cloud could deliver.
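As a rough illustration of what that inventory step can look like in practice, here’s a minimal Python sketch (not a Congruity360 tool) that walks a file share, counts files by type, flags exact duplicates by content hash, and lists files older than an assumed retention cutoff. The share path and the seven-year retention window are placeholder assumptions.

```python
import hashlib
import os
from collections import defaultdict
from datetime import datetime, timedelta

SHARE_ROOT = "/mnt/corp-share"                                # placeholder path for the share being inventoried
RETENTION_CUTOFF = datetime.now() - timedelta(days=7 * 365)   # assumed seven-year retention window

by_extension = defaultdict(int)   # file counts per type
by_hash = defaultdict(list)       # content hash -> paths, for spotting exact duplicates
stale_files = []                  # files untouched since the retention cutoff

for dirpath, _, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            stat = os.stat(path)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
        except OSError:
            continue  # skip unreadable files rather than failing the whole scan

        by_extension[os.path.splitext(name)[1].lower()] += 1
        by_hash[digest].append(path)
        if datetime.fromtimestamp(stat.st_mtime) < RETENTION_CUTOFF:
            stale_files.append(path)

duplicates = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
print(f"File types found: {dict(by_extension)}")
print(f"Files outside the assumed retention window: {len(stale_files)}")
print(f"Groups of exact duplicates: {len(duplicates)}")
```

A real-world cleanup would route the stale and duplicate candidates through records management approval rather than deleting them from a script, but even a simple inventory like this makes the size of the ROT problem visible before migration planning starts.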

Another critical component is being able to identify which repositories, shares, or containers hold sensitive and at-risk data. Analyzing and classifying data at the content level, not just the metadata level, is crucial to gaining the deepest and most accurate understanding of your data so you can confidently remediate risk during and after your migration. (We’ll get into this more in #3.)
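To illustrate the difference, here’s a toy Python comparison of a metadata-only check versus a content-level check on the same file. The sample file, the SSN pattern, and the name-based heuristics are all invented for the example.

```python
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # illustrative pattern for US Social Security numbers

def metadata_flags_risk(path: Path) -> bool:
    """Metadata-only check: relies on the file name looking sensitive."""
    return any(hint in path.name.lower() for hint in ("ssn", "payroll", "confidential"))

def content_flags_risk(path: Path) -> bool:
    """Content-level check: inspects what is actually inside the file."""
    return bool(SSN_PATTERN.search(path.read_text(errors="ignore")))

# An innocuously named file hiding sensitive content.
sample = Path("meeting_notes.txt")
sample.write_text("Follow up with John, SSN 123-45-6789, about onboarding.")

print("Metadata-only check sees risk:", metadata_flags_risk(sample))   # False -> risk missed
print("Content-level check sees risk:", content_flags_risk(sample))    # True  -> risk caught
```

The innocuously named file sails past the metadata check but is caught the moment its contents are inspected, which is exactly the gap content-level classification closes.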

2. Maintaining regulatory compliance

Regulatory compliance is not getting any easier, with most organizations governed by both privacy regulations (such as GDPR and CCPA, with more being introduced constantly) and industry regulations (like HIPAA, NYDFS, PCI-DSS, FERPA, and many more). Being able to respond to audits and prove that your data practices comply with the regulations that govern your business is critical – and that ability must carry over to the cloud. Organizations shifting to a cloud strategy need to ensure that the policies and controls that keep them compliant in their current infrastructure model remain in place in the cloud.

To maintain compliance, data impacted by privacy and industry regulations must be confidently identified and classified before it is migrated, and controls and workflows for that data must be set up and maintained both during and after the migration. Utilizing a classification tool that can identify and classify both current and future incoming data at the content and metadata level into regulatory-based classes – and then execute appropriate automated actions on those classes (like tagging, injecting, encrypting, or moving data to appropriate cloud tiers) – ensures that compliance protocols are maintained.
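As a hedged sketch of the kind of class-to-action mapping such a tool maintains, the Python below pairs illustrative regulatory classes with detection patterns and automated actions. The class names, patterns, tier labels, and functions are assumptions for the example, not any particular product’s API or a real compliance rule set.

```python
import re

# Illustrative regulatory classes mapped to detection patterns and automated actions.
CLASS_RULES = {
    "GDPR_PERSONAL": {
        "patterns": [re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")],  # email addresses as a simple proxy
        "actions": ["tag:gdpr-personal", "encrypt", "tier:restricted"],
    },
    "PCI_CARDHOLDER": {
        "patterns": [re.compile(r"\b(?:\d[ -]?){13,16}\b")],       # card-number-like digit runs
        "actions": ["tag:pci", "encrypt", "tier:restricted"],
    },
}

def classify(text):
    """Return every regulatory class whose patterns match the document content."""
    return [name for name, rule in CLASS_RULES.items()
            if any(p.search(text) for p in rule["patterns"])]

def plan_actions(classes):
    """Collect the automated actions (tagging, encryption, tiering) the matched classes require."""
    actions = set()
    for cls in classes:
        actions.update(CLASS_RULES[cls]["actions"])
    # In a real pipeline these would call the storage or cloud platform's tagging/encryption APIs.
    return sorted(actions)

sample = "Contact jane.doe@example.com; card on file 4111 1111 1111 1111."
matched = classify(sample)
print(matched)                 # ['GDPR_PERSONAL', 'PCI_CARDHOLDER']
print(plan_actions(matched))   # ['encrypt', 'tag:gdpr-personal', 'tag:pci', 'tier:restricted']
```

The important design point is that the same rule set runs against both the data being migrated and future incoming data, so the compliance controls outlive the migration project itself.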

3. Security concerns with sensitive data

One of the major roadblocks we hear from clients during cloud transformation initiatives is concern – especially from legal and compliance teams – about moving data without knowing whether risk is hiding inside it or being able to quantify that risk. They want to know whether the impacted data includes any risk, where that risk lives, and whether risk controls will remain enforced during and after the transformation. Many also have concerns about the security of the cloud compared to legacy on-premises data centers. Moving sensitive data without classification or proper controls in place in the cloud could lead to catastrophic consequences costing organizations millions.

Before moving data, organizations should conduct content- and metadata-level analysis on the impacted data to identify and classify risk data according to their own business and industry rules – machine learning and guided and unguided modeling are extremely helpful here. Risk data classes could include Personally Identifiable Information (PII), Protected Health Information (PHI), Financial Documents, and Intellectual Property, among many others. Data within risk classes should be tagged or labeled at rest before it is moved, so that once the data lands in the cloud your Data Loss Prevention or Information Protection solution can protect it as effectively as possible. This pre-migration content-level analysis can also surface sensitive data that you may not want to move to the cloud at all, preventing it from traveling unnoticed with everything else.
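One way to picture that “label at rest” step, assuming your migration and DLP tooling can consume a sidecar manifest, is a small pass that records each flagged file’s risk classes and whether it should migrate at all. The manifest format, class names, sample paths, and do-not-migrate policy below are illustrative assumptions, not a specific vendor’s schema.

```python
import csv
from pathlib import Path

# Assumed output of the content-level risk scan: path -> detected risk classes.
scan_results = {
    "/shares/hr/benefits_2023.xlsx":    ["PII", "PHI"],
    "/shares/legal/patent_draft.docx":  ["IntellectualProperty"],
    "/shares/finance/wire_details.pdf": ["Financial", "PII"],
}

MANIFEST = Path("risk_manifest.csv")        # sidecar manifest the migration and DLP tooling can consume
DO_NOT_MIGRATE = {"IntellectualProperty"}   # assumed policy: keep these classes on-premises

with MANIFEST.open("w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["path", "risk_classes", "migrate"])
    for path, classes in scan_results.items():
        # Record the label at rest and whether this file should travel to the cloud at all.
        migrate = not DO_NOT_MIGRATE.intersection(classes)
        writer.writerow([path, ";".join(classes), migrate])

print(MANIFEST.read_text())
```

Because the labels exist before the data moves, the downstream protection tooling can enforce policy from the moment the files land rather than rediscovering the risk after the fact.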

4. “The Great Disconnect”

The last challenge we come across often is one we’ve coined “The Great Disconnect”. In many organizations, a project team is tasked with executing the transformation project itself and the actual movement of the data. The challenge, however, is that this team often does not include the data owners and stakeholders themselves, who typically have little to no connection to the project even though their data is impacted. We find this typically results in the project team uncovering crucial decisions and actions that must be taken before the data can move – decisions they have neither the bandwidth nor the authority to execute – usually leading to a stalled transformation project.

The solution we recommend to clients is to provide the content-level data insights mentioned above, through reporting and dashboards, to the stakeholders responsible for the data, along with the ability to approve workflows or take action on the data themselves instead of placing that burden on IT or the project team. An example would be showing a stakeholder all of their out-of-retention data and giving them the authority and the means to delete that unneeded data, whether it sits behind the firewall or in the cloud. Only with this level of alignment between project stakeholders and data owners can data truly begin to move to its future-state platform in a secure, compliant manner.
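A minimal sketch of that kind of owner-facing report, assuming the classification scan already captured per-file ownership and last-modified dates, might group out-of-retention items by data owner so each stakeholder reviews only what they’re accountable for. The field names, sample records, and pending-approval flag are placeholders.

```python
from collections import defaultdict
from datetime import date

# Assumed output of the classification scan: one record per out-of-retention file.
out_of_retention = [
    {"path": "/shares/sales/2012_quotes.xlsx", "owner": "sales-lead@corp.example", "last_modified": date(2012, 3, 1)},
    {"path": "/shares/hr/old_reviews.docx",    "owner": "hr-lead@corp.example",    "last_modified": date(2014, 6, 9)},
    {"path": "/shares/sales/2013_quotes.xlsx", "owner": "sales-lead@corp.example", "last_modified": date(2013, 1, 15)},
]

# Group findings per data owner so each stakeholder reviews only the data they are accountable for.
by_owner = defaultdict(list)
for record in out_of_retention:
    by_owner[record["owner"]].append(record)

for owner, records in by_owner.items():
    print(f"Review queue for {owner}: {len(records)} out-of-retention file(s)")
    for r in sorted(records, key=lambda item: item["last_modified"]):
        # In a real workflow the owner would approve deletion through a dashboard;
        # here the approval is simply a pending flag on the report line.
        print(f"  {r['path']} (last modified {r['last_modified']}) -> approve_delete: pending")
```

The mechanics matter less than the ownership model: the project team supplies the evidence, and the data owner makes and approves the call, so the migration never stalls waiting on decisions the movers aren’t authorized to make.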

To see how a Congruity360 client leveraged Classify360, our Data Classification & Governance platform, to solve all of the challenges highlighted above and unblock their cloud transformation project, read this client success story.
