One of the biggest developments for data managers in the last decade has been the creation of the enterprise data lake—a collection and collation of all enterprise data from legacy systems and sources, data warehouses and analytics systems, and anything else the organization might consider useful information.
Is it actually possible, though? Do we have the infrastructure and skills to make this work? Will we end up with better insights and opportunities than we have today, or will they just wind up submerged in the data lake? These concerns are greatest at organizations with extensive legacy data sources—such as mainframes and data warehouses—that can store decades' worth of critical information.
This checklist will help you and your team plan and launch successful data lake projects with your legacy data sources. It reviews the critical success factors for these projects as well as the risks and issues to mitigate.