.NET and SharePoint Performance

Don’t let default settings ruin your end user experience

SharePoint is a popular choice for intranet applications, so it is important that it performs well to keep employees productive. Waiting ten seconds just to load the initial dashboard doesn't support that. At a recent customer engagement we identified an interesting source of a potential performance problem that impacts all SharePoint and .NET-based installations that use the ServicePointManager to access web services. It turns out that ServicePointManager ships with a default setting that allows only two concurrent connections per host. If you have a SharePoint dashboard that queries data from more than two data sources, your end users will suffer from very long page load times. In this blog I explain the steps one of our customers took to identify and solve this particular problem. The issue has a broader impact beyond SharePoint, so I suggest all .NET developers look into it.
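You can check the effective default for yourself. The following is a minimal console sketch (assuming classic .NET Framework, where ServicePointManager lives in System.Net):

```csharp
using System;
using System.Net;

class ConnectionLimitCheck
{
    static void Main()
    {
        // ServicePointManager.DefaultConnectionLimit controls how many
        // concurrent outgoing HTTP connections are allowed per host.
        // For a plain (non-ASP.NET) process this defaults to 2.
        Console.WriteLine(ServicePointManager.DefaultConnectionLimit);
    }
}
```

Note that ASP.NET with `<processModel autoConfig="true"/>` raises the limit automatically (12 per CPU core), but console applications, Windows services, and installations without autoConfig keep the default of two.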

Step 1: Identify Problematic Pages
The first question is: do we have a problem or not? You can either wait for your end users to start complaining to the help desk, or be proactive and look at end-user response times. The following screenshot shows data from each individual visitor who accessed a SharePoint application. Focusing on one visit that was flagged as a frustrating experience shows the individual pages visited. Starting with an extremely costly first page (43 seconds to load), the visitor also suffered extremely slow response times for the following two actions (clicking Home and reloading Default.aspx):

More than six seconds were spent on server-side processing alone. There was also a very long wait on the client (JS time) for the initial page.

The long client-side wait (high JS time) can be explained by inefficient JavaScript, which causes problems especially on older browsers such as Internet Explorer 7. The server-side problem, however, impacts every user - regardless of the browser used.

Step 2: Identify Problematic Web Parts
Drilling into the actual details of the Default.aspx page for that particular visitor shows which Web Parts are involved in that page, as well as where these Web Parts spend most of their time:

Web Parts such as the DataFormWebPart or ContentEditorWebPart spend most of their time waiting on resources

Looking at the details shows that these Web Parts actually spawn multiple background threads to retrieve data from different back-end web services, then block (sync) on those threads until they are done with their work. A closer look also reveals that each of these threads spends a significant amount of time in I/O:

The WebParts spawn multiple background threads. These threads are all performing I/O operations that take up a very long time (up to 5.8s)
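The pattern the Web Parts follow can be sketched roughly like this. This is a reconstruction, not the actual SharePoint code - the service URLs and class names are made up for illustration:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Threading;

class WebPartFanOut
{
    // Hypothetical back-end services a dashboard Web Part might query.
    static readonly string[] ServiceUrls =
    {
        "http://backend/listdata.svc",
        "http://backend/search.svc",
        "http://backend/profile.svc",
        "http://backend/news.svc",
    };

    static void RenderDashboard()
    {
        var threads = new List<Thread>();
        foreach (string url in ServiceUrls)
        {
            var t = new Thread(() =>
            {
                // Each request must acquire one of the ServicePoint's
                // outgoing connections. With the default limit of 2,
                // the third and fourth threads queue here and wait.
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = request.GetResponse()) { /* read data */ }
            });
            t.Start();
            threads.Add(t);
        }
        // The Web Part blocks (sync) until all background threads finish,
        // so its render time is gated by the slowest queued request.
        foreach (var t in threads) t.Join();
    }
}
```

With only two connections available per host, the fan-out above degrades into batches of two sequential round trips - exactly the wait pattern visible in the thread screenshots.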

Step 3: Identify Root Cause of Problem
Why are all of these background threads taking so much time? Expanding the internals of the first call shows that it takes 5.8 seconds until the web service actually sends a response back on the socket. So - that explains why the first asynchronous thread takes 5.8 seconds:

First web service call takes 5.8s until we receive data on the socket that is used internally by the ServicePoint implementation.

The web service calls executed by the remaining background threads go down essentially the same execution path, spending most of their time waiting for an available outgoing connection:

We have a total of 10 background threads that try to execute a web service call. Most of them spend their time waiting in the ServicePoint instead of sending the actual request.

Step 4: Fix the Problem
A little research (aka your favorite search engine) brings up the following post on stackoverflow.com: Max Number of concurrent connections. It ultimately leads to the MSDN documentation for ServicePointManager, where we learn that the default setting for concurrent connections is two.

The default of two concurrent connections causes parallel executing web service requests to wait until there is a free connection available.

So - the solution to this problem is to change the default value. In this case it should be set to at least the number of Web Parts allowed on a single dashboard, because most of them will execute asynchronous web service calls.
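Concretely, the limit can be raised either in code at application startup or via configuration. A sketch follows - the value 12 is only an example, so size it to the number of parallel calls your dashboards actually make:

```csharp
using System.Net;

static class Startup
{
    public static void Configure()
    {
        // Raise the per-host connection limit from the default of 2.
        // 12 is an example; match it to the number of Web Parts that
        // issue parallel web service calls on one dashboard.
        ServicePointManager.DefaultConnectionLimit = 12;
    }
}
```

Alternatively, the same setting can be applied declaratively in web.config or app.config:

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- address="*" applies the limit to all hosts -->
      <add address="*" maxconnection="12" />
    </connectionManagement>
  </system.net>
</configuration>
```

The configuration route has the advantage that it can be tuned per environment without recompiling, which matters when - as in this case - you only deploy out-of-the-box SharePoint components.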

Next Steps
This problem is not only relevant for custom SharePoint development - it can impact any .NET application that uses ServicePointManager. This case was particularly interesting because the customer used only out-of-the-box components provided by SharePoint and still ran into the problem. If you are implementing custom SharePoint solutions, I also recommend checking out our blog series about the top SharePoint performance problems, starting with: How to Avoid the Top 5 SharePoint Performance Problems.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi


