Bare Metal Blog: Quality Is Systemic, or It Is Not

In all critical systems the failure of even one piece can have catastrophic results for the user

February 5, 2013

The Bare Metal Blog series talks about quality testing of hardware, in all its forms. F5 does a great job in this space.

For those of you new to the Bare Metal Blog series, find them all right here.

In all critical systems – from home heating units to military firearms – the failure of even one piece can have catastrophic results for the user. While it is unlikely that the failure of an ADC will be quite so catastrophic, it can certainly make IT staff’s day(s) terrible and cost the organization a fortune in lost revenue. That’s to say nothing of the longer-term damage downtime can do to an organization’s brand. It is actually pretty scary to ponder the loss of any core system, but one that acts as a gateway and scaling factor for remote employee workloads and/or customer access is even higher on the list of Things To Be Avoided™.

In general, if you think about it, the number of hardware failures out there is relatively small. There is a ton of network gear doing its thing every day, and yes, there is the occasional outage, but if you consider the number of devices NOT going down on a given day, the failure rate is tiny.
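
To put "tiny" in perspective, here is a back-of-the-envelope annualized failure rate calculation. A minimal sketch in Python; the fleet size and failure count are made-up illustrative numbers, not F5 data.

    # Illustrative only: fleet size and failure count are hypothetical.
    fleet_size = 100_000        # devices deployed in the field
    failures_per_year = 150     # observed hardware failures in a year

    afr = failures_per_year / fleet_size            # annualized failure rate
    surviving_today = 1 - (failures_per_year / 365) / fleet_size

    print(f"Annualized failure rate: {afr:.3%}")                  # 0.150%
    print(f"Share of fleet NOT failing today: {surviving_today:.5%}")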

Still, no one wants to be in that tiny percentage any more than they absolutely must. Hardware breaks, and always will; it is the nature of electronic and mechanical things. But we should ask more questions of our vendors to make certain they’re doing all they can to keep the chances of a device breaking during its useful lifetime to a minimum.

For an example of doing it right, we’ll talk a bit about the lengths F5 goes to in an attempt to make devices as reliable as possible from an electro-mechanical perspective. While I am an F5 employee, there is no doubt that F5 gear is highly reliable. It was known for quality before I came to F5, and I have heard nothing since joining that would change that impression. So I use F5 because (a) I am aware of the steps we take as an organization and (b) our hardware testing is an example of doing it right.

And of course, there are things I can’t tell you, and things we just won’t have room to delve into very deeply in this overview blog. I am considering extending the Bare Metal Blog series to include (among other things) more detail about the parts I would want to know more about if I were a reader, but for this blog we’re going to skim, so there is space to cover everything without making the post so long you don’t read to the end.

I admit it, I’ve talked to a lot of companies about testing over the years, and can’t recall a vendor that did a more thorough job – though I can think of a few whose record in the field says they probably have a similar program. So let’s look at some of the quality testing done on hardware.

Parts are not just parts.
An ADC, like any computerized system, is a complex beast. There is a lot going on, and the weakest link sets the life expectancy and out-of-the-box quality standards for the overall product. As such, there are detailed parts and subassembly tests that gear must go through.

For F5, these tests include:

  • Signal Integrity Tests to check for signal degradation between parts/subsystems.
  • BIOS Test Suites to validate that the BIOS performs as expected and handles exception cases reliably.
  • Software Design Verification Testing to detect and eliminate software quality issues early in the development process.
  • Sub-Assembly Tests to verify correct subsystem performance and quality.
  • FPGA System Validation Tests to verify that the FPGA design and hardware perform as expected.
  • Automated Optical Inspection, used on the PCB production line to prevent and detect defects.
  • Automated X-Ray Inspection, which takes 3D slices of an assembled circuit board to prevent and detect defects.
  • In-Circuit Testing, which uses a series of probes to test the populated circuit board with power applied and detect defects.
  • Flying Probe testing, which compares a newly produced board against a “golden board” (a known-good sample) to verify there are no defects – the sketch after this list shows the basic idea.
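
The last two items boil down to the same idea: measure the board under test and compare against known-good values within a tolerance. A minimal sketch of that comparison in Python; the net names, nominal resistances, and 5% tolerance are hypothetical, and real fixtures measure far more than resistance.

    # Golden-board comparison sketch. Net names, nominal ohm values,
    # and the 5% tolerance are hypothetical illustrations.
    GOLDEN = {                      # measurements from the known-good board
        "VCC_3V3_to_GND": 1200.0,
        "CPU_RESET_net": 47.0,
        "PCIe_TX0_pair": 100.0,
    }
    TOLERANCE = 0.05                # allow +/-5% deviation

    def check_board(measured: dict) -> list:
        """Return the nets on the board under test that are out of spec."""
        defects = []
        for net, golden in GOLDEN.items():
            value = measured.get(net)
            if value is None or abs(value - golden) > TOLERANCE * golden:
                defects.append(f"{net}: expected ~{golden}, got {value}")
        return defects

    # A board with one bad net (perhaps a wrong part or a solder bridge):
    print(check_board({"VCC_3V3_to_GND": 1195.0,
                       "CPU_RESET_net": 460.0,
                       "PCIe_TX0_pair": 99.2}))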

Now that’s a lot of testing, though I have to admit I’m still learning about the testing process; there may well be more. But you’ll note that some things aren’t immediately called out here – like parts picked from suppliers, which could be caught by some of these tests but might not be. That is because supplier quality standards are separate from actual testing, and require that suppliers whose parts make it into F5 gear are up to standard.

Supply demands
So what do we, as an organization, require from a quality perspective of those who wish to be our suppliers? Here’s a list – one I KNOW isn’t complete, because I pared it down for the purposes of this blog. I think you’ll get the idea from what’s here, though.

  • All assembly suppliers are ISO 9000 and ISO 14001 certified.
  • Suppliers assemble and test their products to F5 specifications.
  • Suppliers are monitored with closed-loop performance metrics, including delivery and quality.
  • A formal Supplier Corrective Action Response program – when a fault in supplier quality is found, a formal system quickly addresses the issue.
  • Quarterly reviews with senior management, utilizing a formal supplier scorecard (sketched below) to evaluate supplier quality, stability, and more.
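
As for what a "formal supplier scorecard" might look like in practice, here is a toy weighted roll-up in Python. The metric names, weights, and scores are hypothetical illustrations, not F5's actual scorecard.

    # Toy supplier scorecard: weighted roll-up of closed-loop metrics.
    # Metric names, weights, and scores are hypothetical.
    WEIGHTS = {"quality": 0.4, "on_time_delivery": 0.3,
               "corrective_action_response": 0.2, "stability": 0.1}

    def score_supplier(metrics: dict) -> float:
        """Each metric is scored 0-100; returns the weighted overall score."""
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    quarterly = {"quality": 92, "on_time_delivery": 85,
                 "corrective_action_response": 70, "stability": 95}
    print(f"Overall: {score_supplier(quarterly):.1f} / 100")    # 85.8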

The biggest one in the list, IMO, is that suppliers assemble and test product to F5 specifications. Their part is going in our box, but our name is going on it. F5 has a vested interest in protecting that name, so setting the standards by which the suppliers put together and test the product they are supplying is huge. After all, many suppliers are building tiny little subsystems for inside an F5 device, so holding them to F5 standards makes the whole stronger.

By way of example, we require the more reliable (but more expensive) class of capacitors from our suppliers. For background on the problem, there is an excellent article on hardwaresecrets.com (and a pretty good overview on wikipedia.com) about capacitors. By demanding that our suppliers use better-quality components, the overall life expectancy of our hardware is higher, meaning you get fewer calls in the middle of the night.
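
Why does capacitor grade matter so much? A common manufacturer rule of thumb for aluminum electrolytic capacitors is that expected life roughly doubles for every 10 °C the part runs below its rated temperature. A quick sketch; the rated-life figures are typical datasheet-style values, not any specific part.

    # Rule-of-thumb life estimate for aluminum electrolytic capacitors:
    # life roughly doubles for every 10 C below the rated temperature.
    def estimated_life_hours(rated_life_h, rated_temp_c, operating_temp_c):
        return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

    # A bargain 2,000-hour/85C part vs. a 5,000-hour/105C part, both at 55C:
    cheap = estimated_life_hours(2000, 85, 55)     # ~16,000 h (~1.8 years)
    better = estimated_life_hours(5000, 105, 55)   # ~160,000 h (~18 years)
    print(f"cheap: {cheap:,.0f} h, better: {better:,.0f} h")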

The whole is different than the sum of the parts
While an organization can test parts until the sun rises in the west, that will not guarantee the quality of the overall product. And in the end, it is the overall product that a vendor sells. As such, manufacturers generally (and F5 specifically) keep an entire suite of whole-product tests on hand for product quality assessment. Here are some of those used at F5.

  • Mechanical Testing – Test the construction of the system by applying shock, drop, vibration, repetitive insertion/extraction, and more.
  • Highly Accelerated Life Testing (HALT) – Heat and vibration are used to determine the quality and operational limits of the device. The goal is to simulate years of use in a manageable timeframe.
  • Environmental Stress Screening – Expose the device to environmental extremes, from temperature to voltage.
  • Manufacturing Test Suite System Stress Testing – Turn everything on, reboot, power cycle, et cetera. By way of example, we cycle power up to 10,000 times during this testing; a sketch of such a harness follows this list.
  • Ongoing Reliability Testing – Products are randomly pulled from the manufacturing line and put into a burn-in chamber, which tests the device at elevated temperature.
  • Post-Pack-Out Audit – Pull random samples from finished-goods inventory to verify quality.
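
The power-cycle item above lends itself to a simple harness. A minimal sketch in Python; pdu_power() and device_is_healthy() are hypothetical stand-ins for whatever PDU control and health-check interfaces a real lab rig uses.

    # Power-cycle stress loop sketch. The two helpers below are
    # hypothetical stubs, not a real PDU or device API.
    import time

    def pdu_power(on: bool) -> None:
        """Hypothetical: switch the outlet feeding the device under test."""

    def device_is_healthy(timeout_s: int = 300) -> bool:
        """Hypothetical: wait for boot, then verify the device responds."""
        return True

    def power_cycle_test(cycles: int = 10_000) -> list:
        failed_cycles = []
        for cycle in range(1, cycles + 1):
            pdu_power(False)
            time.sleep(5)                    # let the supply fully discharge
            pdu_power(True)
            if not device_is_healthy():
                failed_cycles.append(cycle)  # a real rig would alert here
        return failed_cycles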

That’s a lot of testing, and it is nowhere near all that F5 does to validate a box. For example, while software testing got a hat-tip at the component level, our Traffic Management Operating System (TMOS) has a completely separate set of testing, validation, and QA processes that are not listed here, because this is the Bare Metal Blog. Maybe at some point in the future I’ll do a Bare Metal Blog-style series on our software. That would be interesting for me, and hopefully for you also.

It’s not over when it’s over
The entire time that Lori and I were application developers, there was a party to celebrate every time we finished a major piece of software. From an evening out with the team when our tax prep software shipped to a bottle of champagne on the roof of an AutoDesk office building when AutoCAD Map shipped, we always got to relax and enjoy it a bit.

While our hardware dev teams get something similar, our hardware test teams don’t pack up the gear and call it a product. For the entire lifecycle of an F5 box – from first prototype to End of Life – our test team does continuous testing to monitor and improve the quality of the product. Unlike most of what you will find in this blog, that is still rare. Other companies do it, but unlike ISO certification or HALT testing, continuous testing is not accepted as a mandatory part of product engineering in the computing space. F5 does this because it makes the most sense. From variations in chip quality to suppliers changing their suppliers, things change over the production run of a product, and F5 feels it is important to overall quality to stay on top of that fact. This system also allows for continuous improvement of the product over its lifecycle.
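
For the curious, the arithmetic behind burn-in and ongoing reliability testing is refreshingly simple: under the usual constant-failure-rate assumption, demonstrated MTBF is total accumulated device-hours divided by observed failures. The numbers below are hypothetical.

    # Back-of-the-envelope MTBF from a burn-in/ORT run (hypothetical numbers).
    units_on_test = 40          # devices pulled into the burn-in chamber
    hours_each = 1000           # elevated-temperature hours per device
    failures = 1                # failures observed during the run

    device_hours = units_on_test * hours_each
    mtbf = device_hours / failures      # point estimate, in hours
    print(f"Demonstrated MTBF ~ {mtbf:,.0f} hours "
          f"({mtbf / 8760:.1f} device-years)")   # 40,000 h, ~4.6 years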

This is one of the many reasons I think F5 is a great company. I have twice run into scenarios involving a vendor that did not do this type of testing, and it cost me. Once was as a reviewer, which means it was worse for the vendor than for me, and once as an IT manager, which means it was worse for me than for the vendor. I would suggest you start asking your vendors about lifetime testing, because a manufacturing or supplier change can impact the reliability of the gear. And if it does, either they catch it, or you could be walking into a nightmare. The perfect example (because so many of us had to deal with it) was a huge multinational selling systems with “DeskStar” disks that we all now lovingly call “Death Star” disks.

You can rely on it
This process is a proactive investment by F5 in your satisfaction. You might think, “doesn’t all that testing – particularly when continuous testing occurs over the breadth of devices you sell – cost a lot of money?” The answer is: nowhere near as much as having to visit and repair every device of model X, and nowhere near as much as the loss of business that persistent quality issues generate. And it is true. We truly care about your satisfaction and the reliability of your network, but when it comes down to it, that caring is based upon enlightened self-interest. The net result, though, is devices you can trust to just keep going.

I know; we have one in our basement from before we came to F5. It’s old and looks funny next to our shiny newer one, but it still works. It’s EOL’d, so it isn’t getting any better, and when it breaks it’s done, but the device is nearly a decade old and still operates as originally advertised.

If only our laptops could do that.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
