The Link Between In-Memory Computing and Big Data Success

Terracotta offers novel solutions to speed enterprise software

By Bob Gourley

Note: excerpted from Terracotta’s blog (here).

In the last few posts, we have outlined how in-memory solutions are being adopted in industries such as financial services, travel & logistics, and big pharma to overcome many kinds of Big Data challenges. Here I'd like to share an independent report from a leading analyst, Aberdeen Group, that reinforces the view that in-memory computing infrastructure is essential for successful Big Data initiatives, especially those related to volume and velocity.

In its report “In-memory Computing: Lifting the Burden of Big Data,” Aberdeen arrived at some pretty exciting conclusions about the link between in-memory computing and Big Data success. The report examined 196 organizations worldwide that are currently dealing with Big Data, of which 33 reported implementing in-memory computing solutions. These companies represent a large cross-section of industries and sizes, and deal with business data ranging from a handful of terabytes to multiple petabytes. In aggregate, the organizations that implemented in-memory computing were able to process over 3x the data at over 100x the speed versus organizations that had not done so. Interestingly, overall employee satisfaction at these organizations also increased, contributing to better business growth and results.

At Terracotta, we have observed similar results first-hand at organizations that have implemented our own in-memory solutions. The key differences between the report's data set and ours are that our deployments have been even more varied (in terms of enterprise size and breadth of verticals) and have typically been completed in 60 to 90 days.
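
For readers who have not worked with an in-memory store, the minimal sketch below shows what the application side of such a solution can look like. It uses the open-source Ehcache library, which Terracotta maintains; the cache name, capacity, and expiry settings are illustrative assumptions, not details drawn from the report or from any customer deployment mentioned above.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class InMemoryDemo {
    public static void main(String[] args) {
        // Stand up a cache manager with default settings.
        CacheManager manager = CacheManager.create();

        // A heap-only cache (illustrative sizing): up to 100,000 entries,
        // no disk overflow, not eternal; entries expire 300 seconds after
        // they are written.
        Cache quotes = new Cache("quotes", 100000, false, false, 300, 0);
        manager.addCache(quotes);

        // Reads and writes are served from local memory instead of a round
        // trip to a database, which is where the speedups come from.
        quotes.put(new Element("ACME", 42.17));
        Element hit = quotes.get("ACME");
        System.out.println("ACME = " + (hit == null ? "miss" : hit.getObjectValue()));

        manager.shutdown();
    }
}
```

Production deployments typically swap the heap-only configuration for off-heap or clustered storage while keeping the same put/get API in application code.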

With data growth accelerating, in-memory computing is attracting increased attention as a key strategy for making the most of Big Data. To learn more, download Aberdeen’s report.

More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder of Crucial Point and publisher of CTOvision.com.
