
Intel Fields Atom for Microservers

Intel forecasts that microservers could account for 10% of the server market by 2015

Intel is going after the data center with a brand new Atom System-on-a-Chip (SoC) that can be built into relatively cheap, high-density microservers for cloud providers.

It would really rather not - it wants to sell its high-end chips - but it has no choice. Intel has forecast that microservers could account for 10% of the server market by 2015, and it will have to fight for a piece of that after losing a head start earlier this year, when AMD plopped down $334 million in cash and stock for SeaMicro, a microserver start-up that already had Intel chips designed in.

But, judging by its tone this week, Intel is now serious about a sector it previously dismissed for defensive reasons.

Intel says the new 22nm dingus, code-named Centerton and seemingly in development since 2007, is the first low-power 64-bit dual-core SoC for these data center systems that's in production and shipping to customers.

Intel makes the production-and-shipping point because it's looking over its shoulder at ARM, which is promising to deliver a four-core 64-bit version of its widget for microservers by 2014. As Intel says, there's currently no enterprise-class ARM-based server chip in production - but just wait: the ARM contingent is already in major test sites.

ARM vendors have trouble buying the Centerton as a real server chip since it lacks on-chip management, I/O, networking and fabric.

Intel's part sips an un-Intel-like 6W of power - which sounds low to Intel camp followers but is still hot, and therefore expensive, by ARM standards. With Intel Hyper-Threading, its two cores deliver four threads.

It's also got familiar server features like Error-Correcting Code (ECC) memory support for higher reliability and Intel Virtualization Technology for enhanced workload management. (It's suspected that Atom always had ECC and virtualization, but Intel turned the features off in earlier generations.)

Microservers, which could be sold in droves, are supposed to be good at light compute chores like serving up web pages, content delivery, large distributed memory caching, simple Big Data search systems and MapReduce apps. Within reason, Centerton is supposed to run the x86 server-class software data centers are used to, which ARM can't do at all.

It's unclear how many nodes Centerton can support; it pretty much depends on how the OEMs finagle the networking. Rival Calxeda, which has ARM-based microservers out for test at major accounts, says it can theoretically support 4,000 nodes and practically support 500-1,000.

The point is that it takes a lot of systems to process huge numbers of small workloads while keeping power consumption down, and such workloads break into many small but highly parallel chunks of code.

Officially designated the S1200, the Intel widget is also expected to be used in storage and networking systems. Intel says - without indicating who's doing what - that the part has more than 20 low-power server, storage and networking design wins at Dell, HP, Huawei, Inspur, Quanta, Wiwynn, CETC, Supermicro, Accusys, Microsan, Qsan and Qnap.

In fact, an unnamed storage vendor reportedly swapped out an ARM design for the Intel SoC - and ARM parts are supposed to be pretty darn good in storage applications.

HP, which is already in bed with Calxeda and its ARM-based boxes as part of its processor-agnostic Project Moonshot, means to try the Intel part in a hush-hush server dubbed Gemini.

This summer HP said the first Moonshot servers would be based on Centerton, with initial systems shipping by the end of this year. It's now more likely to be in the first quarter.

Dell's been partnering with Marvell to create so-called Copper servers using Marvell's ARM-based Armada XP chip, but - since Marvell has gone dark about its development - Dell may be closer to selling Calxeda boxes.

SeaMicro - the microserver pioneer that AMD had the temerity to buy, considering that all of SeaMicro's gear is based on Intel parts, some made especially for it, and will be until it switches over to ARM - has a so-called supercompute fabric that connects thousands of processor cores, memory, storage and I/O traffic, and supports multiple processor instruction sets.

Calxeda, by contrast, has fabric, I/O and management built into its chip. It's hobbled by the fact that it's neither x86 nor 64-bit - useful propaganda points for Intel, though in the final analysis that may not matter.

Apparently OEMs will have to wait until later this year or early next, when Intel's supposed to deliver a next-generation Avoton Atom that could make the ARM boys sweat.

It'll be built using Intel's fancy new 22nm 3D Tri-Gate transistors and should support 16GB-32GB of memory and come with four or eight cores.

By then Intel might have a fabric too.

Karl Freund, Calxeda's VP of marketing, sent around a message about Centerton saying, "Intel didn't specify the additional chips required to deliver a real ‘server-class' solution like Calxeda's, but our analysis indicates this could add at least 10 additional watts plus the cost. That would imply the real comparison between ECX and S1200 is 3.8 vs 16 watts, so roughly 3-4 times more power for Intel's new S1200. And again comparing two cores to four, internal Calxeda benchmarks indicate that Calxeda's four cores and larger cache deliver 50% more performance compared to the two hyper-threaded Atom cores. This translates to a Calxeda advantage of 4.5 to six times better performance per watt, depending on the nature of the application."
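Freund's arithmetic can be sanity-checked with a quick back-of-the-envelope calculation. This sketch simply plugs in the figures from his message - the 1.5x performance factor and the wattages are his claims, not independently measured numbers:

```python
# Back-of-the-envelope check of Freund's performance-per-watt claim.
# Assumed inputs (from the quote, not independently verified):
#   Calxeda ECX: 3.8 W; Intel S1200: 6.1 W on-chip, ~16 W as a full
#   "server-class" node once support chips are added.
#   Calxeda's four cores deliver ~1.5x the performance of the two
#   hyper-threaded Atom cores (internal Calxeda benchmark).

calxeda_perf, calxeda_watts = 1.5, 3.8
intel_perf = 1.0

def perf_per_watt_ratio(intel_watts):
    """Calxeda perf/W divided by Intel perf/W at the given Intel wattage."""
    return (calxeda_perf / calxeda_watts) / (intel_perf / intel_watts)

print(round(perf_per_watt_ratio(6.1), 1))   # chip vs chip -> 2.4
print(round(perf_per_watt_ratio(16.0), 1))  # node vs node -> 6.3
```

The chip-vs-chip and node-vs-node figures bracket the 4.5-6x range Freund quotes, which presumably rests on assumptions between those two endpoints.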

He provided this chart to make the comparison plain:

 

                 ECX1000    Intel S1200
Watts            3.8        6.1
Cores            4          2
Cache (MB)       4          1
PCI-E            8 lanes    8 lanes
ECC              Yes        Yes
SATA             Yes        No
Ethernet         Yes        No
Management       Yes        No
Fabric Switch    80 Gb      NA
Fabric ports     5          NA

The new Intel S1200 product family consists of three processors with frequencies ranging from 1.6GHz to 2GHz. They start at $54 in quantities of 1,000.

Despite the design-win parade, Intel didn't show off any boxes, so competitors figure it won't really have the chip in volume for a while. Microsoft and Facebook are supposed to fancy the widget, but it's unclear whether they're using it.

A highly dense rack of Atom SoCs will reportedly net Intel more revenue than a rack of far fewer, more powerful Xeon processors.

In 2014 Intel will move to a 14nm process first for low-power Xeons and then Atoms.

More Stories By Maureen O'Gara

Maureen O'Gara, the most read technology reporter for the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara


