Data Lake and Data Refinery | @ThingsExpo #BigData #IoT #M2M #API #InternetOfThings

The data lake strategy is part of a greater movement toward data liberalization

Data Lake and Data Refinery – Gartner Controversy!

Much discussion has been going on about the new phrase ‘Data Lake’. Gartner wrote a report on the ‘data lake’ fallacy, warning that a ‘data lake’ can easily turn into a ‘data swamp’. Then Andrew Oliver opened his InfoWorld piece with these words: “For $200, Gartner tells you ‘data lakes’ are bad and advises you to try real hard, plan far in advance, and get governance correct”. Wow, what an insight!

During my days at IBM and Oracle, Gartner analysts wanted time on my calendar to talk about database futures. Afterwards, I realized that I had paid a significant fee to attend the Gartner conference only to hear back what I had told them. A good business: gather information, then sell it back. Without meaning any disrespect, many analysts like to make controversial statements to stay relevant. Here is such a case with Gartner.

The concept of a ‘data lake’ was coined by James Dixon of Pentaho Corp., and this is what he said: “If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” Think of a data lake as an unstructured data warehouse, a place where you pull all of your different sources into one large “pool” of data. In contrast to a data mart, a data lake won’t “wash” the data or try to structure it or limit the use cases. Sure, you should have some use cases in mind, but the architecture of a data lake is simple: the Hadoop Distributed File System (HDFS) with lots of directories and files on it.
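
To make that concrete, here is a minimal sketch in Python of what “lots of directories and files” can look like in practice: landing a raw file into the lake untransformed, partitioned by source and ingestion date. It assumes a Hadoop client with the standard `hdfs dfs` CLI on the path; the `/datalake/raw/...` layout and the `land_raw_file` helper are illustrative conventions, not a standard.

```python
import subprocess
from datetime import date

def land_raw_file(local_path: str, source: str) -> str:
    """Copy a raw file into the data lake as-is, partitioned by source and date."""
    # Hypothetical layout: /datalake/raw/<source>/ingest_date=YYYY-MM-DD/
    target_dir = f"/datalake/raw/{source}/ingest_date={date.today():%Y-%m-%d}"
    # `hdfs dfs -mkdir -p` and `-put -f` are standard Hadoop CLI commands
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", target_dir], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, target_dir], check=True)
    return target_dir

# Example: land a sensor dump exactly as it arrived -- no cleansing, no schema
# land_raw_file("sensor_dump.json", source="iot_sensors")
```

The point of Dixon’s metaphor survives in the code: nothing is “bottled” on the way in, so the structure (if any) is applied later, by whoever comes to the lake.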

The data lake strategy is part of a greater movement toward data liberalization. Given the exponential growth of data (especially with IoT and its myriad sensors), there is a need to store data in its native format for later analysis. Of course you can drown in a data lake! But that’s why you build safety nets: security procedures (for example, access is allowed only via Apache Knox), documentation (what goes where in which directory and what roles you need to find it), and governance.
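
As an illustration of those safety nets, here is a small sketch of the documentation piece: a catalog recording what lives in which directory and which roles may read it. Everything here (the paths, team names, and `may_read` helper) is hypothetical; a real deployment would enforce access through Knox/Ranger policies rather than application code.

```python
# Illustrative catalog: what goes where, owned by whom, readable by which roles
CATALOG = {
    "/datalake/raw/iot_sensors": {
        "owner": "iot-platform-team",
        "description": "Unmodified sensor readings, one file per device dump",
        "read_roles": {"data_engineer", "data_scientist"},
    },
    "/datalake/raw/clickstream": {
        "owner": "web-analytics-team",
        "description": "Raw web logs as delivered by the collectors",
        "read_roles": {"data_engineer"},
    },
}

def may_read(role: str, path: str) -> bool:
    """Consult the catalog before handing a lake path to a user."""
    entry = CATALOG.get(path)
    return entry is not None and role in entry["read_roles"]

# Example: a data scientist asking for the raw sensor directory
# may_read("data_scientist", "/datalake/raw/iot_sensors")  # -> True
```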

Without offering any concrete alternative, Gartner seems to say that a new layer (call it a data refinery if you like) is needed to make sense of this ‘raw’ data, thus heading back to the ETL days of data warehousing. Gartner loves to scare clients about new technology (so that they seek help for a fee) and would prefer that everyone stay with the classic data warehousing business. This is not in line with the Big Data movement, which involves some risk, as always with any new technology.
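
For what it’s worth, a ‘refinery’ pass over a lake need not mean a return to up-front ETL. Here is a minimal sketch, assuming PySpark and the raw JSON layout from the earlier example; the column names (`device_id`, `ts`, `temperature`) are hypothetical. The schema is applied on read, after the raw data has already landed, which is the opposite of structuring everything before it enters the warehouse.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("refinery").getOrCreate()

# Schema-on-read: the raw files were landed untouched; structure is imposed now
raw = spark.read.json("/datalake/raw/iot_sensors/")

refined = (raw
           .filter(col("device_id").isNotNull())      # drop malformed records
           .select("device_id", "ts", "temperature")) # keep the fields we trust

# Write a cleansed, columnar copy alongside (not instead of) the raw data
refined.write.mode("overwrite").parquet("/datalake/refined/iot_sensors/")
```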


More Stories By Jnan Dash

Jnan Dash is Senior Advisor at EZShield Inc., Advisor at ScaleDB, and a Board Member at Compassites Software Solutions. He has lived in Silicon Valley since 1979. Formerly he was Chief Strategy Officer (Consulting) at Curl Inc.; before that he spent ten years at Oracle Corporation, where he was Group Vice President, Systems Architecture and Technology, until 2002, responsible for setting Oracle's core database and application server product directions and working with customers worldwide to translate future needs into product plans. Before that he spent 16 years at IBM. He blogs at http://jnandash.ulitzer.com.
