Real-World Application Performance with MongoDB

Choosing a data mapping technology

Recently FireScope Inc. introduced the general availability of its Stratis product. Stratis brings all of the FireScope Unify capabilities to the cloud, with the added advantage of a new architecture that delivers near-infinite scalability. Moreover, the new Stratis architecture provides scalability at all application layers, including its back-end operations, which were newly designed to leverage the benefits of MongoDB. In this article we will discuss several of the architecture choices that were made as part of this effort, with the hope that others might benefit from the research and analysis that was performed to bring this product to market.

As background, a functioning FireScope deployment has the ability to gather metrics from all forms of existing IT assets, normalize the gathered metrics, provide historical analysis of the metrics, and, most importantly, provide service views for worldwide operations that are unparalleled in the IT industry. In the early phases of designing the Stratis product, FireScope undertook significant research into the scalable persistence architectures that were production ready at the time of this effort. FireScope ultimately chose MongoDB for its ability to scale and its flexibility in supporting an easy transition from a relational persistence model to a NoSQL model. While researching MongoDB, FireScope took the time to understand the application impact of the following architecture facets:

  1. Data mapping technologies
  2. Minimal field retrieval vs full document retrieval
  3. Data aggregation
  4. Early space allocation

In this article we detail each of these research efforts and discuss the impact that our subsequent choices had on the FireScope Stratis product.

Application performance was a key driver in all research activities. Even though we were deploying these new application elements to the cloud, ignoring performance would simply mean that more resources would be needed to get the same job done. It's also worth noting that not all applications have the same considerations, so what may be an appropriate technology or architecture choice for FireScope Stratis might not be the appropriate choice for your application. With that said, let's address these research efforts in more detail.

Data Mapping Technologies
The FireScope Stratis application accesses persistent storage via Java and PHP. As a result, we needed to make persistence access choices that would be compatible between the two languages. While Java and PHP were both requirements, the main performance-driven consideration was access via Java. In considering how to get information into and out of the database with Java, FireScope researched access using the following two approaches:

  1. Java Mongo driver with an in-house developed DAO layer
  2. Spring Data

We built narrowly focused prototype access solutions using both of these options. We saved and retrieved the same large graph of objects and compared the relative performance for each approach. One of the key findings in this analysis was the performance impact of "single binding" versus "double binding" of retrieved data.

When data is returned via the MongoDB Java driver, each document is returned in the form of a HashMap, where the fields of the persisted document form the keys of the HashMap and the corresponding values associated with each field are stored as HashMap values. FireScope designed its domain model to use getters and setters that simply access the appropriate field in the HashMap and ensure that each corresponding field has the correct Java type. In this model there is no additional overhead to bind each field to a corresponding Java field; we simply reference the data in the HashMap. We refer to this model as "single binding" because the only binding performed is that of the Mongo Java driver.
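
To make the single-binding pattern concrete, the sketch below shows what such a domain object might look like. It is only an illustration; the Asset class and its field names are hypothetical and not FireScope's actual model. The getters and setters read and write the driver's DBObject (a HashMap-like structure) directly rather than copying values into separate Java member fields:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBObject;

    // Hypothetical "single binding" domain object: accessors delegate straight
    // to the DBObject returned by the Mongo Java driver, so no second,
    // reflective binding step is performed.
    public class Asset {
        private final DBObject doc;

        public Asset() {
            this(new BasicDBObject());
        }

        public Asset(DBObject doc) {
            this.doc = doc;
        }

        public String getName() {
            return (String) doc.get("name");
        }

        public void setName(String name) {
            doc.put("name", name);
        }

        public long getLastCheck() {
            Object value = doc.get("last_check");
            return value == null ? 0L : ((Number) value).longValue();
        }

        public void setLastCheck(long lastCheck) {
            doc.put("last_check", lastCheck);
        }

        // The wrapped DBObject can be handed back to the driver when saving.
        public DBObject toDBObject() {
            return doc;
        }
    }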

By contrast, when Spring Data is used to render a document from MongoDB, all fields in the HashMap returned by the Mongo Java driver are subsequently bound to member fields in the appropriate Java object. This binding is performed using reflection during the object retrieval process. We refer to this model as "double binding" because the initial HashMap rendering is reflectively bound to the appropriate Java object fields and the initial HashMap is subsequently discarded.
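
For contrast, the equivalent mapping in the double-binding style would declare ordinary Java member fields and let Spring Data populate them reflectively. Again, this is only a hedged sketch with a hypothetical Asset class, not FireScope's model or the exact configuration we tested:

    import org.springframework.data.annotation.Id;
    import org.springframework.data.mongodb.core.mapping.Document;

    // Hypothetical Spring Data mapping: each member field below is populated
    // by reflection from the HashMap the Mongo Java driver returns, which is
    // the "double binding" step described above.
    @Document(collection = "assets")
    public class Asset {
        @Id
        private String id;
        private String name;
        private long lastCheck;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public long getLastCheck() { return lastCheck; }
        public void setLastCheck(long lastCheck) { this.lastCheck = lastCheck; }
    }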

In our comparative analysis we found that the "double binding" process used by Spring Data carried a performance overhead of greater than 2X but less than 4X. These comparative results were derived from multiple runs using each technology to retrieve and save the same large data graph on the same hardware. Furthermore, we alternated between the technology choices in order to prevent differences in class loading, network, CPU, disk, and garbage collection from obscuring the analysis results.
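
In outline, the alternation looked something like the following sketch. This is a simplified illustration only; the actual harness, data graph, and measurement details are not reproduced here, and the two Runnable arguments stand in for the driver/DAO and Spring Data save-and-retrieve paths:

    // Simplified alternating benchmark loop: running the two approaches back to
    // back in the same process keeps class loading, GC, network, and disk
    // effects roughly comparable between them.
    public final class AlternatingBenchmark {
        public static void run(int rounds, Runnable driverDao, Runnable springData) {
            long driverNanos = 0L;
            long springNanos = 0L;
            for (int i = 0; i < rounds; i++) {
                long start = System.nanoTime();
                driverDao.run();              // save and retrieve via the driver/DAO layer
                driverNanos += System.nanoTime() - start;

                start = System.nanoTime();
                springData.run();             // the same data graph via Spring Data
                springNanos += System.nanoTime() - start;
            }
            System.out.printf("driver/DAO: %d ms, Spring Data: %d ms%n",
                    driverNanos / 1_000_000, springNanos / 1_000_000);
        }
    }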

Please do not take from the above that I have some issue with Spring Data. I absolutely love Spring, and nearly everything they do is 100% top-notch! It just so happens that in this instance our performance-centric considerations directed us away from the use of Spring Data for FireScope's Stratis back-end operations. We do, however, use Spring in nearly every other area of the FireScope Stratis product. As a final thought, we also briefly considered the use of Morphia, but due to time constraints we never completed a comparative analysis using it.

Minimal Field Retrieval
One of the key performance-impacting areas of the FireScope Stratis product is the data normalization engine. Every metric retrieved by FireScope passes through this engine, and as a result the ability to do more with less is critically important. In an effort to verify our architecture choices, FireScope performed another analysis comparing the relative performance of retrieving all fields of a queried document against an alternative scenario where only one-fourth of the document's fields were retrieved. The intent here is that many use cases do not need all of the data for a given object. Of course we knew that reducing the bandwidth between the database servers and the application servers would be a good thing, but being new to Mongo we weren't sure whether the overhead of filtering some fields from the document would outweigh the benefits of the reduced bandwidth between the servers.
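
As a rough illustration of the two retrieval modes, the sketch below uses the legacy Mongo Java driver to fetch a document with and without a field specification. The collection and field names are hypothetical and chosen only to mirror the examples later in this article:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.MongoClient;

    public class FieldRetrievalExample {
        public static void main(String[] args) {
            DBCollection metrics = new MongoClient("localhost")
                    .getDB("firescope").getCollection("metrics");

            DBObject query = new BasicDBObject("ref_id", "ABC123");

            // Full document retrieval: no field specification, so every field
            // of the matching document crosses the wire.
            DBObject full = metrics.findOne(query);

            // Limited field retrieval: only the listed fields are returned,
            // reducing bandwidth between the database and application servers.
            DBObject fields = new BasicDBObject("time", 1).append("value", 1);
            DBObject partial = metrics.findOne(query, fields);
        }
    }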

In this analysis we set up long-running retrieve/save operations. Once again, we alternated between retrieve/save operations where the full document was passed and retrieve/save operations where the one-fourth populated document was passed. Alternation was used to prevent the impact of class loading, network, CPU, disk, and garbage collection from obscuring the analysis results. When the one-fourth populated document was used, we specified a set of fields for Mongo to retrieve. For the full document no field specification was provided, and as a result the full document was retrieved.

The analysis results indicated an overwhelming 9X performance benefit to using limited field retrieval. Be aware, however, that limited field retrieval also has a downside. If other developers on your team are not keenly aware that the object they just queried for might not have all of its fields populated, then application defects can easily result from this approach. FireScope leverages extensive unit testing, functional testing, and a peer review / test process to ensure that such defects do not arise.

Data Aggregation
A portion of this section is based on ideas from this blog.

We acknowledge and thank Foursquare Labs Inc. for its contributions.

The suggestion offered in the blog is to aggregate a series of historical entries into a single document, rather than creating a separate document for each historical record. The motivation for aggregation is to improve the locality of associated information and, as a result, improve its future access time. While FireScope's system performance is not driven by user access, it does rely extensively on aggregated historical metrics collected throughout a day, and we leveraged aggregation to achieve improved locality.

What was not discussed in the Foursquare Labs blog is a second and equally significant benefit of aggregation: a huge reduction in the size of the index for FireScope's historical records. For those not familiar with Mongo, it is important to understand that Mongo attempts to keep all indexes in memory for fast access. As a result, any reduction in the size of an index allows Mongo to keep more data in memory, which improves overall system performance.

For a better understanding, consider the following two scenarios in which the reference id, timestamp, and value of several collected metrics are stored using alternative approaches:

  1. Collected metrics are simply added to a collection that is indexed on the ref_id + time fields.
    { ref_id : ABC123, time : 1336780800, value : XXX }
    { ref_id : ABC123, time : 1336781100, value : ZZZ }
  2. All collected metrics for one day are added to an array within a single document. The document for the day is indexed on the ref_id and midnight fields.
    { ref_id : ABC123, midnight : 1336780800, values : [ { time : 1336780805, value : XXX }, ... ] }

Note that for option 1 both the ref_id and the time are elements of the index. If the system collects this metric once every 5 minutes, then it would collect 288 ref_id, time, value entries in one day. If each entry is added to the index, then the corresponding index will be significantly larger for option 1 than for option 2, because option 2 does not index the actual collection time but only midnight of the current day. As a result, the index size is reduced by nearly 300 to 1 due to the aggregation of data, with no loss of information.
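
In Java driver terms, the difference amounts to which fields the history collection is indexed on, as in this hedged sketch (collection and field names mirror the examples above and are otherwise hypothetical):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.MongoClient;

    public class HistoryIndexes {
        public static void main(String[] args) {
            DBCollection history = new MongoClient("localhost")
                    .getDB("firescope").getCollection("history");

            // Option 1: one document, and therefore one index entry, per
            // collected sample -- 288 entries per metric per day at a
            // 5-minute interval.
            // history.createIndex(new BasicDBObject("ref_id", 1).append("time", 1));

            // Option 2: one aggregated document, and one index entry, per
            // metric per day, indexed on the day's midnight timestamp rather
            // than the individual collection times.
            history.createIndex(new BasicDBObject("ref_id", 1).append("midnight", 1));
        }
    }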

Early Space Allocation
If documents are created from metrics collected throughout the day, then both space allocation and index updates are required throughout the day as part of normal business operations. As discussed above, if historical entries are aggregated into a single document, then the locality of accessed information is improved. But if normal operations append to an existing document, then in most instances the document must be moved and all associated indexes must be updated in order to accomplish the append.

With FireScope Stratis, optimal update operations are achieved by allocating a full day's worth of history records for each expected metric. Each history record contains default values for the expected collection interval. The space for one day's worth of data is created in a scheduled operation that is run once per day. Then, as metrics are collected throughout the day, the appropriate bucket (array entry) is simply updated. Since the update does not change the size of the document, no document movements are needed throughout the day, nor are index updates needed. The end result is a system that achieves optimal performance. While I am unable to share actual performance metrics for this approach, I can share that the relative performance difference is significant. It is also worth noting that you would need to take great care in measuring the performance impact of this architecture choice, because MongoDB has the inherent ability to queue update operations, thus masking the real performance benefit of this enhancement.
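
A hedged sketch of this pre-allocation pattern is shown below. The collection and field names mirror the earlier examples; the class, method names, and default placeholder value are hypothetical and only illustrate the idea of inserting a fully sized document once per day and updating its buckets in place:

    import com.mongodb.BasicDBList;
    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;

    // Hypothetical sketch: a scheduled job inserts one fully sized document per
    // metric per day, and collections during the day update buckets in place so
    // the document never grows, moves, or forces index updates.
    public class HistoryPreallocation {
        private static final int INTERVAL_SECONDS = 300;                     // 5-minute samples
        private static final int BUCKETS_PER_DAY = 86400 / INTERVAL_SECONDS; // 288 buckets

        // Run once per day by a scheduled operation.
        public static void preallocateDay(DBCollection history, String refId, long midnight) {
            BasicDBList values = new BasicDBList();
            for (int slot = 0; slot < BUCKETS_PER_DAY; slot++) {
                values.add(new BasicDBObject("time", midnight + (long) slot * INTERVAL_SECONDS)
                        .append("value", 0.0)); // default of the same BSON type as real samples
            }
            history.insert(new BasicDBObject("ref_id", refId)
                    .append("midnight", midnight)
                    .append("values", values));
        }

        // Run as each metric arrives during the day.
        public static void recordMetric(DBCollection history, String refId, long midnight,
                                        long time, double value) {
            int slot = (int) ((time - midnight) / INTERVAL_SECONDS);
            // In-place update of one pre-allocated bucket: the document size does
            // not change, so no document move or index update is required.
            history.update(
                    new BasicDBObject("ref_id", refId).append("midnight", midnight),
                    new BasicDBObject("$set",
                            new BasicDBObject("values." + slot + ".value", value)));
        }
    }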

Conclusion
If you are undertaking a transition to MongoDB, or new development on MongoDB, then choosing a data mapping technology wisely can have a significant impact on your application's performance. Consider also the benefits of Minimal Field Retrieval, Data Aggregation, and Early Space Allocation as vehicles to optimize your application. You may also realize additional benefits, such as the reduced network bandwidth that comes with minimal field retrieval and the reduction in index size that can result from data aggregation. We sincerely hope that you have benefited from the time invested in reading this article and wish you the best in all of your Mongo development endeavors.

More Stories By Pete Whitney

Pete Whitney is a Solutions Architect for Cloudera. His primary role is guiding and assisting Cloudera's clients through successful adoption of the Enterprise Data Hub and surrounding technologies.

Previously, Pete served as VP of Cloud Development for FireScope Inc. In the advertising industry, Pete designed and delivered DG Fastchannel's internet-based advertising distribution architecture. Pete also excelled in other areas, including design enhancements in robotic machine vision systems for FSI International Inc. These enhancements included mathematical changes for improved accuracy, improved speed, and automated calibration. He also designed a narrow-spectrum light source and a narrow-spectrum band-pass camera filter for controlled machine vision imaging.

Pete graduated cum laude from the University of Texas at Dallas and holds a BS in Computer Science. Pete can be contacted via email at [email protected]
