Even though strides are being made to define standards for extending Ethernet to handle data center applications, these advances will not be a panacea, according to vendors. Indeed, proprietary extensions to those standards, which are being defined by the IEEE and Technical Committee T11 of the InterNational Committee for Information Technology Standards, will still be required to address customer requirements for data center-optimized Ethernet. Vendor marketing may confuse the issue further, as some companies have adopted different brand names for what is essentially the same technology.
A group of vendors is driving standards for Converged Enhanced Ethernet (CEE), an extended version of Ethernet for data center applications. Cisco participates in the CEE standards efforts, though it refers to the technology as Data Center Ethernet (DCE).
A new kind of Ethernet
CEE and DCE describe an enhanced Ethernet that will enable convergence of LAN, storage-area network and high-performance computing applications in data centers onto a single Ethernet interconnect fabric. Currently, these applications run over separate interconnect technologies, including Fibre Channel, InfiniBand and Myrinet. This forces users and server vendors to support multiple interconnects to attach servers to the various networks, a situation that is costly, inefficient in both energy and operations, and difficult to manage.
So, many in the industry — including Brocade, EMC, NetApp, Emulex, Fujitsu, IBM, Intel, Sun Microsystems and Woven Systems, in addition to Cisco and Force10 — are proposing Ethernet as a single, unified interconnect fabric for the data center. These vendors point to Ethernet's ubiquity, familiarity, low cost and speed advances, now at 10Gbit/sec. But in its current state, Ethernet is not optimized to provide the level of service required for storage and high-performance computing traffic — speed alone won't cut it, vendors said.
Ethernet, which drops packets when traffic congestion occurs, needs to evolve into a low-latency, “lossless” transport technology with congestion management and flow control features, according to backers. “You need to make sure Ethernet will behave in the same way as Fibre Channel itself,” said Claudio DeSanti, a technical leader in Cisco’s storage technology group. DeSanti is vice chair of T11 and technical editor of the IEEE’s 802.1Qbb priority-based flow control project within the Data Center Bridging (DCB) task group.
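To give a sense of how 802.1Qbb achieves that “lossless” behavior, the sketch below packs a priority-based flow control (PFC) frame in Python, which pauses individual traffic classes rather than the whole link the way classic PAUSE does. The layout follows the published PFC frame format; the source address and the choice of priority 3 are illustrative assumptions, not details from this article.

import struct

# Sketch of an 802.1Qbb priority-based flow control (PFC) frame.
PFC_MCAST_DA = bytes.fromhex("0180c2000001")   # MAC Control multicast address
SRC_MAC      = bytes.fromhex("020000000001")   # locally administered example address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE            = 0x0101                 # PFC, vs. 0x0001 for classic PAUSE

def build_pfc_frame(pause_quanta_per_priority):
    """Pause only the listed priorities; quanta are in units of 512 bit times."""
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_quanta_per_priority.items():
        enable_vector |= 1 << prio
        timers[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)
    return PFC_MCAST_DA + SRC_MAC + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

# Pause priority 3 (a common choice for storage traffic) and leave the others running.
frame = build_pfc_frame({3: 0xFFFF})
print(frame.hex())

The per-priority timers are the point: a switch can throttle the storage class while LAN traffic on other priorities keeps flowing, which is what lets a drop-prone Ethernet link mimic Fibre Channel's buffer-to-buffer credit behavior.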
T11’s FCoE standard defines the mapping of Fibre Channel frames over Ethernet so storage traffic can be converged onto a 10Gbit/sec. Ethernet fabric. The IEEE’s DCB task group is defining three standards: 802.1Qau for congestion notification, 802.1Qaz for enhanced transmission selection and 802.1Qbb for priority-based flow control.
Where Ethernet standards fall short
Vendors said these standards should be solid enough to implement in products and deploy in data centers by late 2009 or early 2010. The DCB standards will be final in March 2010, four months later than initially planned because of some outstanding but not insurmountable issues, according to Pat Thaler, chair of the DCB task group in the IEEE.
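The FCoE mapping mentioned above is simple in principle: the Fibre Channel frame rides intact inside an Ethernet frame with its own EtherType, bracketed by encoded start- and end-of-frame delimiters, and relies on PFC to keep the link lossless. A rough Python sketch of that encapsulation, assuming the FC-BB-5 frame layout (the delimiter codes and MAC addresses here are illustrative, not taken from the article):

import struct

FCOE_ETHERTYPE = 0x8906        # EtherType assigned to FCoE
SOF_I3, EOF_T  = 0x2E, 0x42    # example start/end-of-frame delimiter codes

def encapsulate_fc_frame(dst_mac, src_mac, fc_frame):
    """Wrap a raw Fibre Channel frame (header + payload + CRC) in an FCoE Ethernet frame.
    Sketch of the layout: 14-byte FCoE header (version + reserved + SOF),
    the untouched FC frame, then a 4-byte trailer (EOF + reserved)."""
    eth_hdr  = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version/reserved bytes left at zero
    trailer  = bytes([EOF_T]) + bytes(3)
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# Example: wrap a dummy 28-byte FC frame; the destination MAC is a made-up fabric address.
wire_frame = encapsulate_fc_frame(bytes.fromhex("0efc00010203"), bytes(6), bytes(28))
print(len(wire_frame), "bytes before the Ethernet FCS")

Because the FC frame is carried unmodified, existing Fibre Channel tools and zoning carry over; what changes is the transport underneath, which is why the lossless Ethernet behavior the DCB standards define is a prerequisite.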
But some leading-edge customers need a pre-standard lossless Ethernet implementation now, vendors said; and even when these standards are complete, they will not cover every requirement, others pointed out. “A particular area where we feel these standards don’t really address is the avoidance of congestion — primarily with respect to load-balancing traffic first before we rate limit traffic at the source,” said Bert Tanaka, vice president of engineering at Woven Systems. “They are really targeted for a fairly small fabric — maybe hundreds of nodes,” he said. “But if you’re trying to scale to multiple hops and larger fabrics, it’s not clear it would scale to something like that.”
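The mechanism Tanaka is contrasting with fabric-level load balancing works roughly like this: an 802.1Qau congestion point watches its queue and sends feedback that makes the source rate-limit itself. A toy Python sketch of that feedback loop, loosely modeled on the quantized congestion notification approach behind 802.1Qau (the constants and names are illustrative, not drawn from the standard or any vendor implementation):

# Congestion point computes feedback from queue depth and growth;
# the reaction point (the sending NIC) cuts its rate in response.
Q_EQ = 26          # target queue occupancy at the congestion point (frames)
W    = 2.0         # weight on the queue-growth term
GD   = 1.0 / 128   # multiplicative-decrease gain at the reaction point

def congestion_feedback(q_now, q_prev):
    """Negative feedback when the queue is over target or growing."""
    q_off   = q_now - Q_EQ
    q_delta = q_now - q_prev
    return -(q_off + W * q_delta)

def react(rate_bps, feedback):
    """Reduce the sending rate in proportion to the (clamped) feedback."""
    if feedback < 0:
        rate_bps *= max(0.0, 1 - GD * min(-feedback, 64))
    return rate_bps

rate = 10e9
rate = react(rate, congestion_feedback(q_now=40, q_prev=30))
print(f"{rate / 1e9:.2f} Gbit/sec after one notification")

Woven's objection is that this only throttles sources after a hot spot forms at one switch; in a large multi-hop fabric it does nothing to steer traffic onto less-loaded paths in the first place.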
Apart from the standards efforts, CEE and DCE may raise some operational challenges, according to Chuck Hollis, EMC’s global marketing chief technology officer.