Microservices Expo: Article

Welcome to the Show of CDN Monitoring: Act 2

How and how not to monitor CDNs

In my first blog, Act 1: The What and Why, I talked about the benefits and risks of using a Content Delivery Network (CDN). Today I will clear up some common misunderstandings about how to monitor CDNs and explain the right monitoring strategy.

Which monitoring options do you have?
To manage any complex system you need quality data as quickly as possible. Every enterprise CDN solution offers some insight into the performance it delivers. That insight is based on a high-level aggregation of log file data and tells you, for example, how many requests were received, how much data was sent out, which status codes were returned, and how fast the servers responded.
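To make this concrete, here is a minimal sketch of the kind of log aggregation such vendor reports are built on. The log format is an assumption (a simplified line of method, path, status code, bytes sent, and response time in milliseconds); real CDN logs are richer and vendor-specific.

```python
import re
from collections import Counter

# Assumed simplified log line: "<method> <path> <status> <bytes> <ms>"
LOG_LINE = re.compile(r"(\S+) (\S+) (\d{3}) (\d+) (\d+)")

def aggregate(log_lines):
    """Summarize request count, bytes sent, status codes, and mean response time."""
    statuses = Counter()
    total_bytes = 0
    total_ms = 0
    requests = 0
    for line in log_lines:
        match = LOG_LINE.match(line)
        if not match:
            continue  # skip lines that do not fit the assumed format
        _method, _path, status, nbytes, ms = match.groups()
        statuses[status] += 1
        total_bytes += int(nbytes)
        total_ms += int(ms)
        requests += 1
    return {
        "requests": requests,
        "bytes_sent": total_bytes,
        "status_codes": dict(statuses),
        "avg_response_ms": total_ms / requests if requests else 0.0,
    }

sample = [
    "GET /img/logo.png 200 5120 12",
    "GET /js/app.js 200 20480 8",
    "GET /missing.gif 404 512 4",
]
print(aggregate(sample))
```

Note how much detail is lost in the aggregation: you can see averages and totals, but not which individual objects were slow or which edge server served them.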

However, this data has two major problems:

  • It is not detailed enough
  • It is provided by the very vendor you are trying to monitor

So instead of relying only on this information, you need to add your own monitoring strategy, and such solutions come in two flavors: synthetic monitoring and real user monitoring (RUM) of your CDN.

In summary, the five CDN-specific monitoring rules are:

  • Get object-level visibility.
  • A distributed system needs to be monitored by a distributed system.
  • Make sure you know what your CDN is doing everywhere your end users are.
  • Use as many synthetic locations as possible and check how many CDN PoPs you are actually hitting.
  • Use RUM on your CDN so you don't have to guess its total impact on your business.
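A synthetic check that follows these rules can fetch a single object and read the response headers many CDNs use to reveal the serving PoP and cache status. The sketch below is hedged: header names vary by vendor (X-Cache, X-Served-By, CF-Ray, Via, and X-Amz-Cf-Pop are common examples, not a definitive list), so adapt the lists to your own CDN.

```python
from urllib.request import Request, urlopen

# Assumed header names; adjust for your CDN vendor.
POP_HEADERS = ("x-served-by", "cf-ray", "x-amz-cf-pop", "via")
CACHE_HEADERS = ("x-cache", "cf-cache-status", "age")

def summarize_cdn_headers(headers):
    """Pick the PoP- and cache-related entries out of a response header map."""
    lower = {k.lower(): v for k, v in headers.items()}
    return {
        "pop": {h: lower[h] for h in POP_HEADERS if h in lower},
        "cache": {h: lower[h] for h in CACHE_HEADERS if h in lower},
    }

def check_object(url, timeout=10):
    """Synthetically fetch one object and report which PoP served it and whether it was cached.

    Run this from many locations to see how many PoPs you actually hit.
    """
    with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
        return summarize_cdn_headers(dict(resp.headers))

# Example with a canned header map (no network required):
print(summarize_cdn_headers({"X-Cache": "HIT", "X-Served-By": "cache-fra1234"}))
```

Running `check_object` against the same URL from several vantage points, and comparing the PoP headers it returns, tells you directly how many edge locations are serving your users.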

In my next blog (Act 3: Things Going Wrong) I will share some key findings we see very often and what you need to be aware of.

More Stories By Kristian Skoeld

Kristian Skoeld is a Performance Analyst at the Compuware APM Center of Excellence. He coaches and supports teams across Europe as a Performance Analyst and Product Specialist in Web Performance Management. He is an expert in optimizing IT processes and in developing web strategies and putting them into action, and a subject matter expert on Web Performance and Web Monitoring within the Compuware APM business unit.

