
CloudScale Rack

OVERVIEW
Today’s data centers consist of rows and rows of racks. Each rack holds servers (CPU and memory), storage, offload NIC adapters, uninterruptible power supplies, and a fixed-function top-of-rack (ToR) Ethernet switch that lets the servers talk to each other and lets the rack communicate with other racks or with the outside world. The server architecture itself, however, evolved additively: each generation was built on top of the previous one, driven by varying server workloads and by innovations in CPUs, storage, memory, and networking protocols. The end result is a sub-optimal, one-size-fits-all solution that costs more than it should both upfront (acquisition) and on an ongoing (op-ex) basis, and that limits the overall performance and scalability of the rack for modern cloud workloads.

As the cloud industry moves toward accelerated computing models that rely on faster emerging networking protocols and architectures, storage, and offload co-processors, a new approach is needed: a software defined rack solution spanning network, storage and compute that delivers the scalability, flexibility and efficiency required to compete in a future where hyperscale is mandatory. This new approach is a scalable rack solution with a range of workload optimized servers for compute, storage, networking and management modules that work together to build a wide range of logical, virtual systems which adapt more readily to emerging networking protocol needs.

Cavium delivers silicon and software building blocks engineered from the ground up for flexible, scalable software defined Data Centers. These include:

  • Leaf and Spine, software programmable (SDN) switches enabled through XPliant® Ethernet switching technology.
  • Virtualized Data Center ready, elastic security, authentication and key management enabled by Cavium’s LiquidSecurity™.
  • Distributed Server load balancing, firewall and VM acceleration enabled through LiquidIO™ adapters.
  • A variety of Cloud applications that can be deployed using ThunderX® based workload optimized servers.





CloudScale Rack™ is a flexible and scalable solution built from Cavium’s data center centric product offerings. It allows individual rack elements to scale out while the entire rack cabinet behaves as a scalable unit of the Data Center (DC) with shared cooling and power.

Servers: ThunderX® processors combine strong compute and memory capabilities with integrated network and storage capabilities and hardware accelerators, eliminating the need for additional adapter or offload cards. A ThunderX based server can be configured to run any Data Center workload, and the number of servers servicing a particular workload can be changed on demand using OpenStack. Seamless deployment of virtual machines is enabled through hardware support for dedicated IO for each VM, VM to VM isolation, VM security and VM to VM hardware based switching. Encrypted client traffic is load balanced in hardware using a ThunderX based server with an integrated Nitrox accelerator.
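As a rough illustration of changing workload capacity on demand, the sketch below uses the openstacksdk Python client to boot additional compute instances. The cloud profile, flavor, image and network names are hypothetical placeholders for a ThunderX based deployment, not values taken from this document.

    import openstack

    # Connect using a cloud profile defined in clouds.yaml (profile name is a placeholder).
    conn = openstack.connect(cloud="cloudscale-rack")

    # Hypothetical flavor/image/network names standing in for a ThunderX server profile.
    flavor = conn.compute.find_flavor("thunderx.large")
    image = conn.compute.find_image("ubuntu-arm64")
    network = conn.network.find_network("tenant-net")

    # Scale the workload out by launching more servers on demand.
    for i in range(4):
        server = conn.compute.create_server(
            name=f"workload-node-{i}",
            flavor_id=flavor.id,
            image_id=image.id,
            networks=[{"uuid": network.id}],
        )
        conn.compute.wait_for_server(server)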

Rack Fabric Connectivity: Cisco’s Global Cloud Index tells us that, unlike in campus networks, the dominant volume of traffic in the DC traverses in an “East-West” direction (76%), followed by “North-South” traffic (17%). The primary driver for this increase in east-west traffic is virtual machine (VM) migration, which is becoming increasingly common because of the widespread adoption of virtualization in cloud computing. East-West traffic between ThunderX based clusters in the rack can be switched directly using the fabric integrated in ThunderX, lowering overall latency and reducing the number of ToR ports. Within the rack, several ThunderX servers can be configured as self-contained clusters connected in a 2D or 3D Torus topology, forming a mesh with optimal connectivity to the ToR/Leaf switch. By leveraging the flexible parser implemented across Cavium’s data center product offerings, the fabric connectivity can also support emerging tunneling or proprietary protocols to provide efficient, low latency connectivity.
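To make the torus wiring concrete, the short sketch below computes the neighbors of a node in a 2D torus mesh; the grid dimensions are illustrative only and do not describe an actual ThunderX cluster size.

    def torus_2d_neighbors(x, y, width, height):
        """Return the four neighbors of node (x, y) in a 2D torus.

        Each node links to its left/right and up/down neighbors, with the
        edges wrapping around, so every node has exactly four fabric links.
        """
        return [
            ((x - 1) % width, y),   # left (wraps around)
            ((x + 1) % width, y),   # right
            (x, (y - 1) % height),  # up
            (x, (y + 1) % height),  # down
        ]

    # Example: a hypothetical 4x4 cluster of nodes.
    print(torus_2d_neighbors(0, 0, 4, 4))  # [(3, 0), (1, 0), (0, 3), (0, 1)]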

XPliant® Software Defined (SDN) Ethernet switches: XPliant switches can be tailored to the requirements of new protocols with unprecedented flexibility, including changes to parsing, lookups, traffic scheduling, packet modification and traffic monitoring. XPliant switches work in conjunction with the ThunderX SoC to provide efficient VM level access control and packet monitoring capabilities. Overall end to end fabric management is supported using OpenFlow APIs.
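As a hedged sketch of what OpenFlow based fabric management can look like, the fragment below uses the Ryu controller framework (an assumption; the document does not name a specific controller) to install a default table-miss rule on any OpenFlow 1.3 switch that connects.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissInstaller(app_manager.RyuApp):
        """Install a table-miss flow that forwards unmatched packets to the controller."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match everything at the lowest priority so real flow rules take precedence.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                                match=match, instructions=inst))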

LiquidIO® adapters: LiquidIO server adapters enable data centers to rapidly deploy high performance SDN applications on both installed and new infrastructure while improving server utilization, response times and network agility. They are designed for deployment by cloud service providers, enterprises and private data centers. Plugged into ThunderX servers, they enable elastic storage, NVMe over fabrics and SLA based traffic management.

LiquidSecurity™ HSM family: Provides a FIPS 140-2 Level 2 and Level 3 partitioned, centralized and elastic key management solution with the highest transactions/sec performance. It addresses the high performance security requirements for private key management and administration while also delivering elastic performance per virtual/network domain in virtualized cloud environments. It provides seamless security and key management to VMs and scales to support tens of thousands of VMs.
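HSMs of this class are typically driven through a PKCS#11 interface; as an assumption-laden sketch (the document does not specify a client API), the snippet below uses the python-pkcs11 library to generate an AES key inside a token partition. The module path, token label and PIN are placeholders.

    import pkcs11

    # Path to the vendor's PKCS#11 module is a placeholder, not taken from this document.
    lib = pkcs11.lib("/usr/lib/pkcs11/vendor-hsm.so")
    token = lib.get_token(token_label="partition-1")

    # Open an authenticated session and generate a key that never leaves the HSM.
    with token.open(user_pin="1234", rw=True) as session:
        key = session.generate_key(pkcs11.KeyType.AES, 256,
                                   label="vm-disk-key",
                                   store=True)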

CloudScale Rack for Flexible and Scalable Software Defined Data Centers

 


