By Deborah Strickland
February 12, 2013 10:26 AM EST
Heterogeneous networks (HetNets) consist of large (macro) cells with high transmit power (typically 5 W – 40 W) and small cells with low transmit power (typically 100 mW – 2 W). The small cells are distributed beneath the large cells and can run on the same frequency as the large cell (co-channel) or on a different frequency. As an evolution of the cellular architecture, HetNets and small cells have gained much attention as a technique to increase mobile network capacity and are today one of the hot topics in the wireless industry. Many of the initial deployments of small cells are of the co-channel type. Standards such as LTE incorporated techniques to improve the performance of co-channel deployments in earlier releases, leaving the handling of multi-frequency deployments to later releases. In all, operators today have multiple small cell deployment scenarios, operational techniques and technology roadmaps to choose from.
Figure 1 Simplified Heterogeneous Network Architecture.
To illustrate some of the deployment issues related to small cells, in this article I provide a qualitative review of small cell performance and explore its impact on the operator's small cell deployment strategy. The focus is on co-channel deployments which, aside from being common at this early stage of HetNet evolution, present a particularly complex radio frequency environment.
Throughput Performance: The overall throughput experienced by users on both downlink (base station to the mobile subscriber) and uplink (mobile to base station) paths will generally increase as small cells are deployed. This applies to both users camped on the macro cell and those on the small cells, but for different reasons:
- The users on the macro cell will benefit as more small cells are added because fewer users will share the common capacity resources. Therefore, the more small cells are added, the higher the likelihood that a user on the macro cell will experience higher throughput; meanwhile,
- Users on the small cell will experience better throughput than those on the macro cell because of the higher probability of a line-of-sight connection to the serving base station.
If the mobile subscribers are uniformly distributed over the coverage area, then the likelihood a user will experience a certain level of throughput remains approximately the same as the number of small cells increases. But in reality, the distribution of users is not uniform: they tend to concentrate in certain "traffic hotspots." In this case, a small cell in a traffic hotspot is expected to provide lower throughput than a small cell serving a uniformly distributed user population. Meanwhile, a user on the macrocell will experience a more pronounced increase in throughput because a higher proportion of users are offloaded from the macro cell. As even more small cells are added, interference will increase, leading to a successively diminishing marginal increase in throughput.
This last note is an important one: small cells are beneficial up to a point. The user experience will be affected by the density of small cells, with a diminishing marginal return followed by actual degradation of service as the number of small cells exceeds a certain threshold. Where this threshold lies depends on a number of factors that include the type of technology, morphology, and cell density and distribution. Inter-small cell interference is one factor that limits small cell performance. Another is that as we add more small cells, we create more 'cell-edge' regions within the coverage area of macrocells, which can also limit performance, as I will expand upon below.
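The diminishing marginal return can be illustrated with a toy model. The sketch below is purely illustrative, not a calibrated network simulation: it assumes each added cell contributes capacity proportional to log2(1 + SINR), while every cell raises the interference seen by all the others. The `leak` parameter and the numeric values are my own assumptions, not figures from this article.

```python
import math

def area_capacity(n_cells, snr=10.0, leak=0.05):
    """Toy model of aggregate area capacity (arbitrary units).

    Each cell adds a Shannon-like term log2(1 + SINR), but every
    additional cell leaks a fraction of its power as interference,
    degrading the SINR seen by all cells. Illustrative only.
    """
    sinr = snr / (1.0 + leak * snr * (n_cells - 1))
    return n_cells * math.log2(1.0 + sinr)

# Per-cell marginal gain shrinks as density grows
prev = 0.0
for n in (1, 2, 4, 8, 16, 32):
    cap = area_capacity(n)
    print(f"{n:2d} cells: capacity {cap:5.1f}, gain since last {cap - prev:4.1f}")
    prev = cap
```

In this toy setup the aggregate capacity keeps rising but flattens toward an interference-limited ceiling, which is the qualitative "diminishing marginal return" behavior described above; a real network would additionally show the service degradation mentioned in the text once overhead and mobility effects are accounted for.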
The throughput performance will depend on the location of the small cells and their proximity to macrocells. A small cell close to a macrocell is more likely to be affected by interference than one located at the cell edge, resulting in lower throughput performance. Correspondingly, the performance will depend on the size of the macrocell, or rather, the macrocell density. Small cells deployed close to the cell edge of a large macrocell will provide better performance than those deployed in a high-density macrocell area where the average cell radius is relatively small.
Throughput performance will also depend on the output power of the small cell. Simulations show that for a certain macrocell radius, higher power small cells provide better throughput performance than lower power ones given the same small cell base station density.
Nevertheless, the key takeaway here is this: it pays to find out where the traffic hotspots are; otherwise, the gain achieved from small cells will be small. Small cell deployment has to be 'surgical', targeting select areas to achieve the maximum return on investment.
Interference and Coverage Performance: While small cells improve performance in general, there are certain situations where they cause interference or even a coverage hole. One decisive factor is the large power imbalance between the small cell and the macrocell. The power imbalance is larger than the difference in rated transmit power alone because macrocells implement high-gain sectored antennas (13-16 dBi) while small cells typically implement a much lower gain omni-directional antenna (3-6 dBi). The power imbalance results in asymmetric downlink and uplink coverage areas. Because the macrocell has much higher power than the small cell, the downlink coverage area of the small cell is smaller than its uplink coverage area. This shifts the handover boundary closer to the small cell, increasing the possibility of uplink interference to the small cell from a mobile that might have a line-of-sight path to it. This type of interference is potentially very damaging since it affects all the users in a cell and forces the mobiles served by the small cell to transmit at higher power. The power imbalance also increases the risk of downlink interference, although this type of interference is more limited because it affects a single user. The uplink-downlink imbalance is a leading reason why small cell gain in LTE Release 8 is limited: cell selection is decided by downlink signal strength, and the options for interference mitigation are limited.
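To put a rough number on the power imbalance, here is a back-of-the-envelope EIRP calculation using figures picked from the ranges quoted above: a 20 W macrocell with a 15 dBi sector antenna versus a 1 W small cell with a 5 dBi omni antenna. The specific values are illustrative assumptions, not measurements.

```python
import math

def eirp_dbm(tx_power_w, antenna_gain_dbi):
    """EIRP in dBm: transmit power converted to dBm plus antenna gain."""
    return 10 * math.log10(tx_power_w * 1000) + antenna_gain_dbi

macro = eirp_dbm(20.0, 15)  # 20 W macro, 15 dBi sector antenna
small = eirp_dbm(1.0, 5)    # 1 W small cell, 5 dBi omni antenna

print(f"Macro EIRP: {macro:.1f} dBm")                     # 58.0 dBm
print(f"Small cell EIRP: {small:.1f} dBm")                # 35.0 dBm
print(f"Downlink power imbalance: {macro - small:.1f} dB")  # 23.0 dB
```

A downlink imbalance on the order of 20 dB, while the mobile's uplink power is the same toward either cell, is what produces the asymmetric coverage areas described above.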
Figure 2 Co-channel interference scenarios in small cell deployments.
To address the uplink-downlink coverage imbalance, the coverage area of the small cell base station is extended to allow the small cell to capture more traffic. This is accomplished by adding a bias to the small cell's received signal during the cell selection process. But extending the small cell coverage also increases the chances of downlink interference to a mobile subscriber operating at the edge of the small cell.
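The biased cell selection can be sketched in a few lines. The function name and the signal levels below are hypothetical; the point is simply that the bias is added to the small cell's measured downlink signal before the comparison, which moves the selection boundary outward:

```python
def serving_cell(rsrp_macro_dbm, rsrp_small_dbm, bias_db=0.0):
    """Pick the serving cell from downlink signal strength, with the
    small cell's measurement inflated by a selection bias."""
    return "small" if rsrp_small_dbm + bias_db >= rsrp_macro_dbm else "macro"

# A user hears the macro at -85 dBm and the small cell at -90 dBm
print(serving_cell(-85, -90))             # "macro": unbiased selection
print(serving_cell(-85, -90, bias_db=6))  # "small": a 6 dB bias expands the small cell
```

Note that the biased user is now served by the weaker downlink signal, which is exactly why the extended-coverage region is exposed to downlink interference from the macrocell.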
Aside from co-channel interference, there is also a risk of adjacent channel interference in multicarrier networks where macrocells implement two or more frequency carriers. Consider, for example, a mobile attached to a macrocell on frequency A while it is very close to a small cell operating on adjacent frequency B. The mobile is susceptible to adjacent channel interference from the small cell, which would likely have a line-of-sight path to the mobile, in contrast to a non-line-of-sight connection with the macrocell. Another example concerns the uplink: a mobile attached to a macrocell and operating at the edge of a small cell on an adjacent frequency could cause interference to the small cell.
There are other potential interference scenarios in addition to those described here. But the basic fact is that the actual performance and benefit of small cells will vary, and will do so more widely in the absence of interference mitigation and performance enhancing techniques. This is one reason why some requirements for small cell deployments have been hotly debated without a firm resolution. For example, a basic requirement is that of small cell backhaul capacity: what should it be? Should the backhaul link be designed to handle the peak throughput rate, which is a function of the technology, or the average throughput rate, which is much harder to ascertain and put a value on because it depends on many factors related to the deployment scenario?
Based on the above description, we know that the throughput of small cells will depend largely on the load. The more clustered the subscribers, the lower the overall small cell throughput. On the other hand, if the load is light (few users), the available capacity per user will be high. If you are an operator, you certainly need to think carefully about the required backhaul capacity! And while we're on the backhaul topic, let's not forget that the backhaul of the macrocell must also be dimensioned properly to support the higher traffic load that will certainly come as more small cells are deployed.
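The two dimensioning policies mentioned above (size for the peak rate versus size for the aggregate average rate) can be contrasted with a rough sizing sketch. The rates, user count and overhead factor here are assumptions for illustration only, not recommended values:

```python
def backhaul_capacity_mbps(peak_rate_mbps, avg_user_rate_mbps, n_users,
                           dimension_for_peak=True, overhead=1.2):
    """Rough backhaul sizing under two hypothetical policies:
    either carry the cell's peak air-interface rate, or carry the
    aggregate busy-hour average demand. A 20% transport overhead
    factor is assumed in both cases."""
    if dimension_for_peak:
        demand = peak_rate_mbps
    else:
        demand = avg_user_rate_mbps * n_users
    return demand * overhead

# Peak-rate dimensioning for a cell with a 150 Mbit/s peak rate
print(f"{backhaul_capacity_mbps(150, 2, 20):.1f} Mbit/s")  # 180.0 Mbit/s
# Average-rate dimensioning: 20 busy-hour users at ~2 Mbit/s each
print(f"{backhaul_capacity_mbps(150, 2, 20, dimension_for_peak=False):.1f} Mbit/s")  # 48.0 Mbit/s
```

The wide gap between the two answers is the crux of the debate: peak-rate dimensioning is simple but may overprovision by several times, while average-rate dimensioning depends on load assumptions that, as argued above, vary strongly with the deployment scenario.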
In this post, I went through some aspects of small cell performance. These problems are well recognized, and certain techniques are being developed and integrated into the standards to address them. This raises other important questions for the operator's strategic network planning process, such as: what interference management and performance enhancement features should be considered? And what is the technology roadmap for these features? I will expand on some of these techniques in a future blog post.
Follow Frank Rayal on Twitter @FrankRayal