GridGain and Hadoop: Differences and Synergies

Now data can be analyzed and processed at any point of its lifecycle

GridGain is Java-based middleware for in-memory processing of big data in a distributed environment. It is built on a high-performance in-memory data platform that integrates a fast in-memory MapReduce implementation with In-Memory Data Grid technology, delivering software that is easy to use and easy to scale. Using GridGain you can process terabytes of data on thousands of nodes in under a second.

GridGain typically resides between business, analytics, transactional, or BI applications and long-term data storage such as an RDBMS, ERP system, or Hadoop HDFS, and provides an in-memory data platform for high-performance, low-latency data storage and processing.

Both GridGain and Hadoop are designed for parallel processing of distributed data. However, the two products serve very different goals and in most cases are very complementary to each other. Hadoop is geared toward batch-oriented, offline processing of historical and analytics payloads where latencies and transactions don't really matter, while GridGain is meant for real-time, in-memory processing of both transactional and non-transactional live data with very low latencies. To better understand where each product fits, let us compare some of the main concepts of each.

GridGain In-Memory Compute Grid vs Hadoop MapReduce
MapReduce is a programming model developed by Google for processing large data sets stored on disk; Hadoop MapReduce is an implementation of that model. The model is based on the fact that data in a single file can be distributed across multiple nodes, so processing of those files has to be co-located on the same nodes to avoid moving data around. Processing works by scanning files record by record in parallel on multiple nodes and then reducing the results, also in parallel across nodes. Because of this, standard disk-based MapReduce works well for problem sets that require analyzing every single record in a file, and is a poor fit for cases where direct access to a particular data record is required. Furthermore, due to Hadoop's offline batch orientation, it is not suited for low-latency applications.
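
For contrast with the GridGain example shown later in this section, here is a minimal sketch of the same letter-count task written against Hadoop's MapReduce API. The class names and input/output paths are illustrative; note that both input and output live on disk (HDFS), which is exactly the batch orientation described above.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LetterCount {
        // Map step: runs where the file blocks are; emits one length per word.
        public static class LengthMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final Text KEY = new Text("letters");

            @Override protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                for (String word : line.toString().split("\\s+"))
                    ctx.write(KEY, new IntWritable(word.length()));
            }
        }

        // Reduce step: sums the lengths emitted by all mappers.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override protected void reduce(Text key, Iterable<IntWritable> vals,
                    Context ctx) throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : vals)
                    sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "letter-count");
            job.setJarByClass(LetterCount.class);
            job.setMapperClass(LengthMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input on HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output on HDFS
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Even this trivial job reads its input from disk and writes its result back to disk, which is what makes the model batch-oriented rather than low-latency.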

GridGain In-Memory Compute Grid (IMCG), on the other hand, is geared toward in-memory computations and very low latencies. GridGain IMCG has its own MapReduce implementation, designed specifically for real-time in-memory processing use cases and very different from Hadoop's. Its main goal is to split a task into multiple sub-tasks, load-balance those sub-tasks among the available cluster nodes, execute them in parallel, then aggregate the results from those sub-tasks and return them to the user.



Splitting a task into multiple sub-tasks and assigning them to nodes is the *mapping* step, and aggregating the results is the *reducing* step. However, there is no concept of mandatory data built into this design, and it can work in the absence of any data at all, which makes it a good fit for both stateless and stateful computations, such as traditional HPC. In cases where data is present, GridGain IMCG will also automatically colocate computations with the nodes where the data resides to avoid redundant data movement.

It is also worth mentioning that, unlike Hadoop, GridGain IMCG is very well suited for computations that are very short-lived in nature, e.g. under 100 milliseconds, and that may not require any mapping or reducing at all.

Here is a simple Java example of GridGain IMCG that counts the number of letters in a phrase by splitting it into words, assigning each word to a sub-task for parallel remote execution in the map step, and then adding up the lengths received from the remote jobs in the reduce step.

    int letterCount = g.reduce(
        BALANCE,
        // Mapper: applied to each word, executed remotely in parallel.
        new GridClosure<String, Integer>() {
            @Override public Integer apply(String s) {
                return s.length();
            }
        },
        // One argument (and hence one sub-task) per word.
        Arrays.asList("GridGain Letter Count".split(" ")),
        // Reducer: sums the lengths returned by the sub-tasks.
        F.sumIntReducer()
    );
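
For context: g here is a reference to the GridGain grid obtained from a started node, BALANCE selects GridGain's load-balanced execution mode, and F.sumIntReducer() is a built-in reducer that sums the integer results returned by the remote jobs.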

GridGain In-Memory Data Grid vs Hadoop Distributed File System
The Hadoop Distributed File System (HDFS) is designed for storing large amounts of data in files on disk. Just like in any file system, the data is mostly stored in textual or binary formats, and finding a single record inside an HDFS file requires a file scan. Also, since an HDFS file can only be appended to, updating a single record within a file requires rewriting the whole file. This makes HDFS well suited for cases where data is appended at the end of a file, but not well suited for cases where data needs to be located and/or updated in the middle of a file. With indexing technologies like HBase or Impala, data access becomes somewhat easier because keys can be indexed, but the inability to index into values (secondary indexes) allows only primitive query execution.
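
To make the "locating a record requires a scan" point concrete, here is a minimal sketch using Hadoop's FileSystem API; the file path and matching predicate are illustrative:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class HdfsScan {
        // There is no index into the file: finding one record means
        // reading the file from the beginning until the record appears.
        static String findRecord(String needle) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    fs.open(new Path("/data/records.txt"))))) {
                String line;
                while ((line = in.readLine()) != null)
                    if (line.contains(needle))
                        return line;
            }
            return null;
        }
    }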

GridGain In-Memory Data Grid (IMDG), on the other hand, is an in-memory key-value data store. IMDGs have their roots in distributed caching; however, GridGain IMDG also adds transactions, data partitioning, and SQL querying over the cached data. The main difference from HDFS (or the Hadoop ecosystem overall) is the ability to transact on and update any data directly in real time. This makes GridGain IMDG well suited for working on operational data sets, i.e. the data sets that are currently being updated and queried, while HDFS is suited for working on historical data that is constant and will never change.

Unlike a file system, GridGain IMDG works with the user's domain model by directly caching user application objects. Objects are accessed and updated by key, which allows the IMDG to work with volatile data that requires direct key-based access.



GridGain IMDG allows indexing into both keys and values (i.e. primary and secondary indexes) and supports native SQL for data querying and processing. One of the unique features of GridGain IMDG is its support for distributed joins, which allows complex SQL queries to be executed on in-memory data without limitations.
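
To make the contrast with HDFS concrete, here is a self-contained sketch of the key-value and SQL access styles just described. The DataGrid interface and Person class below are purely illustrative stand-ins, not GridGain's actual API:

    import java.util.Collection;

    // Illustrative stand-in for an IMDG cache API; GridGain's real
    // interfaces differ, but the access pattern is the same.
    interface DataGrid<K, V> {
        void put(K key, V val);      // transactional in-place update by key
        V get(K key);                // direct key-based read, no scan
        Collection<V> sql(String where, Object... args); // indexed SQL query
    }

    class Person {
        final long id;
        final String name;
        final int age;

        Person(long id, String name, int age) {
            this.id = id;
            this.name = name;
            this.age = age;
        }
    }

    class ImdgExample {
        static void demo(DataGrid<Long, Person> people) {
            // Update operational data directly, in real time.
            people.put(1L, new Person(1L, "Alice", 34));

            // Locate a single record by key: no file scan required.
            Person alice = people.get(1L);

            // SQL over values via a secondary index on 'age'.
            Collection<Person> adults = people.sql("age > ?", 30);
        }
    }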

GridGain and Hadoop Working Together
To summarize:

Hadoop is essentially a Big Data warehouse that is good for batch processing of historical data that never changes; GridGain, on the other hand, is an In-Memory Data Platform that works with your current operational data set in a transactional fashion with very low latencies. Because they focus on very different use cases, GridGain and Hadoop complement each other very well.



Up-Stream Integration
In the up-stream integration scenario, the GridGain In-Memory Compute Grid and Data Grid work directly in real time with the user application, partitioning and caching data within the data grid and executing in-memory computations and SQL queries against it. Every so often, as data becomes historical, it is snapshotted into HDFS, where it can be analyzed using Hadoop MapReduce and the analytical tools of the Hadoop ecosystem.
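
As a rough sketch of the snapshot step, reusing the illustrative Person type from the sketch above and Hadoop's FileSystem API for the HDFS write (the path and CSV layout are assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class UpstreamSnapshot {
        // Copy records that have become historical out of the in-memory
        // grid into an append-only file on HDFS for offline analysis.
        static void snapshot(Iterable<Person> historical) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out =
                     fs.create(new Path("/warehouse/people.csv"))) {
                for (Person p : historical)
                    out.writeBytes(p.id + "," + p.name + "," + p.age + "\n");
            }
        }
    }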

Down-Stream Integration
Another way to integrate covers cases where data is already stored in HDFS but needs to be loaded into the IMDG for faster in-memory processing. For such cases GridGain provides fast loading mechanisms from HDFS into GridGain IMDG, where the data can be further analyzed using GridGain's in-memory MapReduce and indexed SQL queries.
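
A matching sketch of the loading direction, again using the illustrative DataGrid and Person types from above (GridGain's actual HDFS loaders are not shown here; the CSV layout matches the snapshot sketch):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class DownstreamLoad {
        // Stream a historical CSV file out of HDFS into the in-memory grid
        // so it can be queried and updated with low latency.
        static void load(DataGrid<Long, Person> people) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    fs.open(new Path("/warehouse/people.csv"))))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] f = line.split(",");
                    Person p = new Person(Long.parseLong(f[0]), f[1],
                        Integer.parseInt(f[2]));
                    people.put(p.id, p);
                }
            }
        }
    }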

Conclusion
Integration between an in-memory data platform like GridGain and a disk-based data platform like Hadoop allows businesses to get valuable insights into the whole data set at once: the volatile operational data set cached in memory as well as the historical data set stored in Hadoop. This essentially eliminates the processing-time gaps caused by the Extract-Transform-Load (ETL) process of copying data from operational systems of record, such as standard databases, into historical data warehouses like Hadoop. Now data can be analyzed and processed at any point of its lifecycle, from the moment it enters the system until it is put away into a warehouse.
