Hadoop is typically deployed as a separate storage tier for ingest, analysis, and results. From a workflow perspective, this means data must move between systems, into and out of Hadoop clusters, which increases time to insight. It also requires extra effort to manage additional data workflows: data lifecycle management becomes more complicated, and more time is spent on data governance, curation, backup, and archival.
John Sing, a Big Data Evangelist, will discuss these challenges, propose an architectural approach to address them, and review several use cases that can eliminate unnecessary data transfers, lower Hadoop storage and server costs, and accelerate time to insight.
Join us for pizza and beer.