
Real-World Application Performance with MongoDB

Choosing a data mapping technology

Recently, FireScope Inc. introduced the general availability of its Stratis product. Stratis brings all of the FireScope Unify capabilities to the cloud, with the added advantage of a new architecture that delivers near-infinite scalability. Moreover, the new Stratis architecture provides scalability at all application layers, including its back-end operations, which were newly designed to leverage the benefits of MongoDB. In this article we will discuss several of the architecture choices that were made as part of this effort, with the hope that others might benefit from the research and analysis that was performed to bring this product to market.

As background, a functioning FireScope deployment can gather metrics from all forms of existing IT assets, normalize the gathered metrics, provide historical analysis of those metrics, and, most importantly, provide service views for worldwide operations that are unparalleled in the IT industry. In the early phases of designing the Stratis product, FireScope undertook significant research into the scalable persistence architectures that were production-ready at the time of this effort. FireScope ultimately chose MongoDB for its ability to scale and its flexibility in supporting an easy transition from a relational persistence model to a NoSQL model. While researching MongoDB, FireScope took the time to understand the application impact of the following architecture facets:

  1. Data mapping technologies
  2. Minimal field retrieval vs full document retrieval
  3. Data aggregation
  4. Early space allocation

In this article we detail each of the above-mentioned research efforts and discuss the impact that our subsequent choices had on the FireScope Stratis product.

Application performance was a key driver in all research activities. Even though we were deploying these new application elements to the cloud, ignoring the importance of performance would simply mean that more resources would be needed to get the job done. It's also worth noting that not all applications have the same considerations, so what may be an appropriate technology or architecture choice for FireScope Stratis might not be the appropriate choice for your application. With that said, let's address these research efforts in more detail.

Data Mapping Technologies
The FireScope Stratis application accesses persistent storage via Java and PHP. As a result, we needed to make persistence access choices that would be compatible with both languages. While Java and PHP were both requirements, the main performance-driven consideration was access via Java. In considering how to get information into and out of the database with Java, FireScope researched access using the following two approaches:

  1. Java Mongo driver with an in-house developed DAO layer
  2. Spring Data

We built narrowly focused prototype access solutions using both of these options. We saved and retrieved the same large graph of objects and compared the relative performance for each approach. One of the key findings in this analysis was the performance impact of "single binding" versus "double binding" of retrieved data.

When data is returned via the MongoDB Java driver, each document is returned in the form of a HashMap, where the fields of the persisted document form the keys and the corresponding values of each field are stored as the HashMap values. FireScope designed its domain model to use getters and setters that simply accessed the appropriate field in the HashMap and ensured that each field had the correct Java type. In this model there is no additional overhead to bind each field to a corresponding Java field; we simply reference the data in the HashMap. We refer to this model as "single binding" because the only binding performed is that of the Mongo Java driver.
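
The sketch below illustrates the "single binding" idea under some assumptions: it uses the classic 2.x Java driver's DBObject type (the HashMap-like structure described above), and the class and field names are hypothetical rather than taken from FireScope's actual domain model. The point is that each getter and setter simply reads or writes the driver's map and applies the correct Java type, so no second binding step is needed.

    import com.mongodb.DBObject;

    // Hypothetical domain object that wraps the HashMap-like DBObject
    // returned by the Mongo Java driver ("single binding").
    public class HistoryRecord {

        private final DBObject doc;

        public HistoryRecord(DBObject doc) {
            this.doc = doc;
        }

        // Accessors reach directly into the driver's map and apply the
        // correct Java type; no per-field copy into separate member fields.
        public String getRefId() {
            return (String) doc.get("ref_id");
        }

        public long getTime() {
            return ((Number) doc.get("time")).longValue();
        }

        public Double getValue() {
            Object v = doc.get("value");
            return v == null ? null : ((Number) v).doubleValue();
        }

        public void setValue(double value) {
            doc.put("value", value);
        }

        // The wrapped DBObject can be handed straight back to the driver on save.
        public DBObject raw() {
            return doc;
        }
    }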

By contrast, when Spring Data is used to render a document from MongoDB, all fields in the HashMap returned by the Mongo Java driver are subsequently bound to member fields of the appropriate Java object. This binding is performed using reflection during the object retrieval process. We refer to this model as "double binding" because the initial HashMap rendering is reflectively bound to the appropriate Java object fields and the HashMap is then discarded.
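
For contrast, here is a minimal sketch of what the "double binding" path looks like with Spring Data MongoDB. The entity and field names are hypothetical; the relevant point is that Spring Data reflectively copies each entry of the driver's HashMap rendering into the annotated fields below, and the intermediate map is then discarded.

    import org.springframework.data.annotation.Id;
    import org.springframework.data.mongodb.core.mapping.Document;
    import org.springframework.data.mongodb.core.mapping.Field;

    // Hypothetical Spring Data entity: each field is populated reflectively
    // from the driver's map during retrieval ("double binding").
    @Document(collection = "history")
    public class HistoryEntity {

        @Id
        private String id;

        @Field("ref_id")
        private String refId;

        private long time;
        private Double value;

        public String getRefId() { return refId; }
        public long getTime() { return time; }
        public Double getValue() { return value; }
        public void setValue(Double value) { this.value = value; }
    }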

In our comparative analysis we found that the "double binding" process used by Spring Data carried with it a performance overhead of greater than 2X but less than 4X. These comparative results were derived from multiple runs using each technology retrieving and saving the same large data graph on the same hardware. Furthermore, we alternated between technology choices in order to prevent differences in class loading, network, CPU, disk, and garbage collection from obscuring the analysis results.

Please do not take from the above that I have some issue with Spring Data. I absolutely love Spring, and nearly everything they do is 100% top notch! It just so happens that in this instance our performance-centric considerations directed us away from the use of Spring Data for FireScope's Stratis back-end operations. We do however use Spring in nearly every other area of the FireScope Stratis product. As a final thought, we also briefly considered the use of Morphia, but due to time constraints we never completed a comparative analysis using Morphia.

Minimal Field Retrieval
One of the key performance-impacting areas of the FireScope Stratis product is the data normalization engine. Every metric retrieved by FireScope passes through this engine, and as a result the ability to do more with less is critically important. In an effort to verify our architecture choices, FireScope performed another analysis comparing the relative performance of retrieving all fields of a queried document against an alternative scenario where only one-fourth of the fields were retrieved. The intent here is that many use cases do not need all of the data for a given object. Of course we knew that reducing the bandwidth between the database servers and the application servers would be a good thing, but being new to Mongo we weren't sure whether the overhead of filtering some fields from the document would outweigh the benefits of the reduced bandwidth between the servers.

In this analysis we set up long-running retrieve/save operations. Once again, we alternated between retrieve/save operations where the full document was passed and retrieve/save operations where the one-fourth-populated document was passed. Alternation was used to prevent the impact of class loading, network, CPU, disk, and garbage collection from obscuring the analysis results. When the one-fourth-populated document was used, we specified a set of fields for Mongo to retrieve. For the full document no field specification was provided, and as a result the full document was retrieved.
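
As an illustration of limited field retrieval, the sketch below shows how a field specification can be passed to the classic 2.x Java driver so that only the listed fields are returned. The collection and field names are hypothetical, not FireScope's actual schema.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.DBObject;

    public class MinimalFieldQuery {

        // Returns a cursor over matching documents containing only the
        // fields named in the projection; all other fields stay on the server.
        public static DBCursor findForNormalization(DBCollection history, String refId) {
            DBObject query = new BasicDBObject("ref_id", refId);

            DBObject fields = new BasicDBObject("ref_id", 1)
                    .append("time", 1)
                    .append("value", 1);

            return history.find(query, fields);
        }
    }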

The analysis results indicated an overwhelming 9X performance benefit to using limited field retrieval. But be aware that limited field retrieval also has its downside: if other developers on your team are not keenly aware that the object they just queried for might not have all of its fields populated, application defects can easily result from this approach. FireScope leverages an extensive unit testing, functional testing, and peer review/test process to ensure that such defects do not arise.

Data Aggregation
A portion of this section is based on ideas from this blog.

We acknowledge and thank Foursquare Labs Inc. for its contributions.

The suggestion offered in the blog is to aggregate a series of historical entries into a single document, rather than creating a separate document for each historical record. The motivation for aggregation is to improve the locality of associated information and, as a result, improve its future access time. While FireScope's system performance is not driven by user access, it does rely extensively on aggregated historical metrics collected throughout a day, and we leveraged aggregation to achieve improved locality.

What was not discussed in the Foursquare Labs blog was a second and equally significant benefit of aggregation: a huge reduction in the size of the index for FireScope's historical records. For those not familiar with Mongo, it is important to understand that Mongo attempts to keep all indexes in memory for fast access. As a result, any reduction in the size of an index allows Mongo to keep more data in memory, which improves overall system performance.

For a better understanding, consider the following two data storage scenarios, in which the reference id, time stamp, and value of several collected metrics are stored using two alternative approaches:

  1. Collected metrics are simply added to a collection that is indexed on the ref_id + time fields.
    { ref_id : ABC123, time : 1336780800, value : XXX }
    { ref_id : ABC123, time : 1336781100, value : ZZZ }
  2. All collected metrics for one day are added to an array. The document for the day is indexed on the ref_id and midnight fields.
    { ref_id : ABC123, midnight : 1336780800, values : [ { time : 1336780805, value : XXX }, ... ] }

Note that for option 1, both the ref_id and the time are elements of the index. If the system collects this metric once every 5 minutes, then it would collect 288 ref_id, time, value entries in one day. If each entry is added to an index, then the corresponding index will be significantly larger for option 1 than for option 2, because option 2 does not index the actual collection time but only midnight of the current day. As a result, the index size is reduced by nearly 300 to one (288 index entries versus one) through the aggregation of data, with no loss of information.
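
To make the index-size difference concrete, here is a small sketch, again assuming the classic 2.x Java driver and hypothetical collection names, of how the compound index would be declared for each option. With option 1 every collected sample adds an index entry; with option 2 only one aggregated document per ref_id per day does.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;

    public class HistoryIndexes {

        // Option 1: one index entry per collected metric (ref_id + time).
        public static void indexPerSample(DBCollection samples) {
            samples.createIndex(new BasicDBObject("ref_id", 1).append("time", 1));
        }

        // Option 2: one index entry per ref_id per day (ref_id + midnight).
        public static void indexPerDay(DBCollection dailyHistory) {
            dailyHistory.createIndex(new BasicDBObject("ref_id", 1).append("midnight", 1));
        }
    }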

Early Space Allocation
If documents are created from metrics collected throughout the day, then both space allocation and index updates are required throughout the day as part of normal business operations. As discussed above, if documents are nested, the locality of accessed information is improved. But if normal operations append to an existing document, then in most instances the document must be moved and all associated indexes must be updated in order to accomplish the append.

With FireScope Stratis, optimal update operations are achieved by allocating a full day's worth of history records for each expected metric. Each history record contains default values for the expected collection interval. The space for one day's worth of data is created in a scheduled operation that runs once per day. Then, as metrics are collected throughout the day, the appropriate bucket (array entry) is simply updated. Since the update does not change the size of the document, no document movements or index updates are needed throughout the day. The end result is a system that achieves optimal performance. While I am unable to share actual performance metrics for this approach, I can share that the relative performance difference is significant. It is also worth noting that you would need to take great care in measuring the performance impact of this architecture choice, because MongoDB has the inherent ability to queue update operations, thus masking the real performance benefit of this enhancement.
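
The sketch below illustrates the early-allocation pattern under some assumptions: the classic 2.x Java driver, a 5-minute collection interval (288 buckets per day), and hypothetical collection and field names. The full-size document is written once in the daily scheduled job; each later update only overwrites an existing array slot with $set, so the document never grows and never needs to be moved.

    import com.mongodb.BasicDBList;
    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;

    public class DailyHistoryAllocator {

        private static final int BUCKETS_PER_DAY = 288;     // 24 hours at 5-minute intervals
        private static final long INTERVAL_SECONDS = 300;

        // Scheduled once per day: pre-allocate a default-valued bucket for
        // every expected sample so the document is created at its full size.
        public static void allocateDay(DBCollection dailyHistory, String refId, long midnight) {
            BasicDBList values = new BasicDBList();
            for (int i = 0; i < BUCKETS_PER_DAY; i++) {
                values.add(new BasicDBObject("time", midnight + i * INTERVAL_SECONDS)
                        .append("value", null));
            }
            dailyHistory.insert(new BasicDBObject("ref_id", refId)
                    .append("midnight", midnight)
                    .append("values", values));
        }

        // Called as each metric arrives: overwrite the matching bucket in place.
        public static void recordSample(DBCollection dailyHistory, String refId,
                                        long midnight, long sampleTime, double sampleValue) {
            int slot = (int) ((sampleTime - midnight) / INTERVAL_SECONDS);
            DBObject query = new BasicDBObject("ref_id", refId).append("midnight", midnight);
            DBObject update = new BasicDBObject("$set",
                    new BasicDBObject("values." + slot + ".value", sampleValue));
            dailyHistory.update(query, update);
        }
    }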

If you are undertaking a transition to MongoDB, or new development on MongoDB, then choosing a data mapping technology wisely can have a significant impact on your application's performance. Consider also the performance benefits of Minimal Field Retrieval, Data Aggregation, and Early Space Allocation as vehicles to optimize your application's performance. You may also realize additional benefits, such as the reduced network bandwidth that comes with minimal field retrieval and the reduction in index size that can result from data aggregation. We sincerely hope that you have benefited from the time invested in reading this article and wish you the best in all of your Mongo development endeavors.


More Stories By Pete Whitney

Pete Whitney is a Solutions Architect for Cloudera. His primary role at Cloudera is guiding and assisting Cloudera's clients through successful adoption of Cloudera's Enterprise Data Hub and surrounding technologies.

Previously, Pete served as VP of Cloud Development for FireScope Inc. In the advertising industry, Pete designed and delivered DG Fastchannel’s internet-based advertising distribution architecture. Pete also excelled in other areas, including design enhancements to robotic machine vision systems for FSI International Inc. These enhancements included mathematical changes for improved accuracy, improved speed, and automated calibration. He also designed a narrow-spectrum light source and a narrow-spectrum band-pass camera filter for controlled machine vision imaging.

Pete graduated cum laude from the University of Texas at Dallas and holds a BS in Computer Science. He can be contacted via email at [email protected]
