Information Technology in Transition - Training
Data Center Operations and Virtualization


Information Technology in Transition

Data Center Operations

Data centers contain large numbers of linked and clustered computers; geographically dispersed data centers are linked through high-speed private networks. Applications and services are supported and accessed over the public Internet and utilize computing power on demand. The Open Compute Project's mission is to improve data center energy efficiency. Specifications are placed online and vendors are invited to participate. The premise is that standardizing data center environments will simplify maintenance and control costs. Founded by Facebook and Rackspace, the Open Compute Project is being evaluated and could become an international standard.

Data from market research firms indicate that a significant change is occurring in data center server hardware and in the processing requirements needed to accommodate the growth of cloud computing environments. This is occurring at both ends of the market: (1) new and expanding web server data center operations, and (2) existing proprietary hardware and UNIX operating system environments. Gartner Inc. reports that the global market for x86 servers has been steadily increasing, while sales of RISC/Itanium UNIX server units have declined significantly.

As workloads move off internal data centers, a higher percentage of spending on servers, storage, networking, and infrastructure software is coming from public cloud providers and hyperscale operators. In 2018, nearly 30% of total first quarter revenue in the public cloud infrastructure market was generated by ODMs: original design manufacturers providing server and storage hardware for large cloud vendors. The market leaders are Dell, Inc., Hewlett Packard Enterprise Co., and Cisco Systems, Inc. Since 2015, the public cloud has increased from 25% to 31% of all enterprise cloud computing.

Wintel hardware/operating system configurations, x86-compatible Intel server processors running Microsoft Windows or the Linux operating system, have been replacing proprietary scale-up servers. The primary reasons underlying this movement are lower initial and fewer hidden costs, improvements in the ability of x86-based platforms to scale and handle complex workloads, and the perceived risks of loss of control and lock-in associated with proprietary platforms. Migration from proprietary hardware and UNIX operating systems to Wintel servers built on Intel Xeon processors has also become easier. Data centers with significant investments in proprietary servers and associated application tools have been re-platforming onto x86-based servers and migrating workloads to lower cost servers with Intel processors.

There have been competitive responses by IBM, Hewlett-Packard, and Oracle. IBM has been supplanting UNIX with Linux as the primary operating system for its Power Systems. Hewlett-Packard decided to discontinue its Itanium-based servers and is porting most of the NonStop platform to x86. Oracle Corporation is balancing support of its UNIX-based Solaris operating system with increased investment in the Linux operating system, and it has announced joint ventures with Microsoft in server virtualization and cloud-based computing. In addition, third-party companies have emerged that specialize in modernizing and re-hosting legacy products and services onto open industry-standard platforms.

The software-defined data center is a concept in which workloads are shared globally, independent of resource location. Intelligent workload management can respond to changes in operational requirements and shift critical workloads away from sites threatened by a disaster. SDN: Software-defined Networking is a way to configure network operation from a single location. The concept is to enhance workloads and to optimize and balance network traffic. SDN creates a control plane that manages hardware changes through a series of rules which can be dynamically configured. It shortens the provisioning of software and devices and mitigates issues associated with workloads crossing data center boundaries.

Software-defined Storage is being developed as a resource for addressing the growth of data. According to leading information technology research firms, enterprises will on average experience an eightfold growth in data capacity over the next five years. Software-defined storage technology is being used to achieve low-cost, mixed storage and will be able to accommodate multiple-vendor solutions.
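
To make the idea of dynamically configured rules concrete, the following minimal sketch shows how an operator script might push a flow rule to an SDN controller's northbound REST interface. The controller address, /flows endpoint, token, and rule fields are hypothetical placeholders for illustration, not the API of any particular controller.

    # Minimal sketch: pushing a flow rule to an SDN controller's northbound API.
    # The controller URL, /flows endpoint, token, and rule fields are hypothetical
    # placeholders; real controllers define their own APIs.
    import json
    import urllib.request

    CONTROLLER = "https://sdn-controller.example.com"   # hypothetical controller

    flow_rule = {
        "name": "shift-backup-traffic",
        "match": {"dst_subnet": "10.20.0.0/16", "protocol": "tcp", "dst_port": 873},
        "action": {"forward_to": "dc-west-uplink", "priority": 200},
    }

    request = urllib.request.Request(
        url=CONTROLLER + "/flows",
        data=json.dumps(flow_rule).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
        method="POST",
    )

    with urllib.request.urlopen(request) as response:
        print("Controller response:", response.status)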

The OpenDaylight Project is a collaborative open source project hosted by The Linux Foundation to accelerate the adoption of software-defined networking and to create a standardized foundation for NFV: Network Functions Virtualization. Commercial hardware and software companies have committed software and engineering resources to help define an open source SDN platform. The OpenDaylight Project's goal is to deliver common code and industry standards, including an open controller, a virtual overlay network, protocol plug-ins, and switch device enhancements for customers, partners, and developers.

The IoT: Internet of Things will change how data centers are designed and managed as massive volumes of devices stream data around the world. According to a report by Gartner, Inc., the number of connected IoT devices and micro-sized processing units in use worldwide will exceed 20 billion by 2022. Total spending on endpoints and services in 2017 was approaching $2 trillion. China, North America, and Western Europe are driving the use of connected things, and together the three regions accounted for 67 percent of the overall IoT installed base in 2017.


Virtualization

Virtualization allows enterprises to reduce the number of physical machines in their data centers without reducing the number of underlying applications; these efficiencies reduce costs for hardware, power, rack space, and cabling. Operating systems leverage virtualization to provide flexible, scalable, and cost effective cloud computing infrastructures. Virtualization will continue to evolve as more technologies become less dependent on rigid operating environments. 1

There are eight core areas of data center virtualization: operating systems, application servers, applications, management, networks, hardware, storage, and services. 2

Virtualization Explanation

Operating System

Virtual operating systems, also known as virtual machines, are becoming a core component of the IT infrastructure and are the most prevalent form of virtualization in use. Virtual machines are typically full implementations of standard operating systems, such as Windows 7 through 10 or Red Hat Enterprise Linux, running simultaneously on the same physical hardware.

A VMM: Virtual Machine Manager, also called a hypervisor, manages each virtual machine individually; each operating system instance is unaware that other virtual operating systems may be running simultaneously.
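
As a minimal sketch, assuming the libvirt Python bindings and a local QEMU/KVM hypervisor (the connection URI differs for other hypervisors), an administrator can ask the VMM which virtual machines it is currently hosting:

    # Minimal sketch: querying a VMM (hypervisor) for its virtual machines.
    # Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor.
    import libvirt

    conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
    try:
        for domain in conn.listAllDomains():     # each domain is one virtual machine
            state = "running" if domain.isActive() else "stopped"
            print(domain.name(), state)
    finally:
        conn.close()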

Application Server

Application server virtualization is essentially a synonym for advanced load balancing. It is a one-to-many virtualization representation: one server is presented as a virtual interface that hides and balances multiple web servers or applications behind a single instance. This provides a more secure and efficient topology than allowing direct access to individual web servers.

Application server virtualization can be applied to many application deployments and architectures, from fronting application logic servers to distributing load across multiple web server platforms. It also can be used within a data center across data and storage tiers as part of database virtualization.
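
The one-to-many idea can be sketched in a few lines: a single virtual front end rotates incoming requests across a pool of back-end servers. The server names and the simple round-robin policy below are illustrative only; production load balancers add health checks, session persistence, and SSL termination.

    # Minimal sketch: one virtual endpoint distributing requests across several
    # back-end web servers using round-robin selection. Server names are
    # hypothetical placeholders.
    import itertools

    BACKENDS = ["web-01.internal", "web-02.internal", "web-03.internal"]
    rotation = itertools.cycle(BACKENDS)

    def route_request(path):
        """Pick the back-end server that should handle this request."""
        backend = next(rotation)
        print("routing", path, "->", backend)
        return backend

    # Clients only ever see the single virtual address; the pool behind it can
    # grow or shrink without clients noticing.
    for path in ["/login", "/catalog", "/checkout", "/status"]:
        route_request(path)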

Application

Application virtualization is equivalent to the longstanding use of thin clients. The local workstation provides the CPU and RAM required to run the software; however, nothing is installed locally on the machine.

Browser-based applications are implementations of application virtualization; the application's interface runs locally on the workstation while the management and application logic execute remotely.

Management

Management virtualization is an integral component in data center management and the segmentation of administration roles. Network administrator roles can be defined with full access to the infrastructure routers and switches, but without administrative-level access to servers.

Network

Network virtualization is implemented in the form of virtual IP management and segmentation. A VLAN is a single Ethernet port that supports multiple virtual connections from multiple IP addresses and networks, virtually segmented using VLAN tags. Each virtual IP connection over this single physical port is independent and unaware of the existence of other connections; however, the switch is aware of each unique connection and manages each one independently.
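
At the frame level, the segmentation relies on the 12-bit VLAN ID carried in the 802.1Q tag. The following minimal sketch extracts that ID from a tagged Ethernet frame; the sample frame bytes are fabricated for illustration.

    # Minimal sketch: extracting the 802.1Q VLAN ID from a tagged Ethernet frame.
    # The sample frame bytes are fabricated for illustration.
    import struct

    def vlan_id(frame):
        """Return the VLAN ID of an 802.1Q-tagged frame, or None if untagged."""
        tpid = struct.unpack("!H", frame[12:14])[0]
        if tpid != 0x8100:                  # 0x8100 marks an 802.1Q tag
            return None
        tci = struct.unpack("!H", frame[14:16])[0]
        return tci & 0x0FFF                 # low 12 bits of the TCI are the VLAN ID

    # 12 bytes of destination/source MAC, an 802.1Q tag for VLAN 100, then IPv4.
    sample = bytes(12) + struct.pack("!HH", 0x8100, (3 << 13) | 100) + b"\x08\x00"
    print(vlan_id(sample))                  # -> 100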

Virtual routing tables are also a form of network virtualization. Virtual routing tables provide a one-to-many relationship, where any single physical interface can maintain multiple routing tables, each with multiple entries. This gives the interface the ability to dynamically bring up and discard routing services for one network without interrupting other services and routing tables on that same interface.
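
A minimal sketch of that one-to-many relationship (table names, prefixes, and next hops are illustrative placeholders): one interface holds several independent routing tables, and dropping one does not disturb the others.

    # Minimal sketch: multiple independent routing tables on one physical interface.
    # Table names, prefixes, and next hops are illustrative placeholders.
    import ipaddress

    class VirtualInterface:
        def __init__(self, name):
            self.name = name
            self.tables = {}                     # table name -> {prefix: next hop}

        def add_table(self, table):
            self.tables[table] = {}

        def add_route(self, table, prefix, next_hop):
            self.tables[table][prefix] = next_hop

        def drop_table(self, table):
            del self.tables[table]               # other tables keep routing undisturbed

        def lookup(self, table, address):
            for prefix, next_hop in self.tables[table].items():
                if ipaddress.ip_address(address) in ipaddress.ip_network(prefix):
                    return next_hop
            return None

    eth0 = VirtualInterface("eth0")
    eth0.add_table("tenant-a")
    eth0.add_table("tenant-b")
    eth0.add_route("tenant-a", "10.1.0.0/16", "192.0.2.1")
    eth0.add_route("tenant-b", "10.1.0.0/16", "198.51.100.1")   # same prefix, separate table
    print(eth0.lookup("tenant-a", "10.1.5.9"))   # -> 192.0.2.1
    eth0.drop_table("tenant-b")                  # tenant-a routing is unaffected
    print(eth0.lookup("tenant-a", "10.1.5.9"))   # still 192.0.2.1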

Hardware

Hardware virtualization subdivides components and locations of physical hardware into independent segments and manages those segments separately. Asymmetric multiprocessing is a form of pre-allocation virtualization where certain tasks are only run on certain CPUs. In contrast, symmetric multiprocessing is a form of dynamic allocation, where CPUs are interchangeable and used as needed by any part of the management system.

Each classification of hardware virtualization is unique and, depending on the implementation, has value. Both symmetric and asymmetric multiprocessing are forms of hardware virtualization. The process requesting CPU time is not aware of which processor it is going to run on; it requests CPU time from the OS scheduler, and the scheduler takes responsibility for allocating processor time. From the perspective of the process, processor time could be spread across any number of CPUs and any part of RAM.

Pre-allocation virtualization is well suited for specific hardware tasks, such as offloading functions to a highly optimized, single-purpose chip. However, pre-allocation of commodity hardware can cause artificial resource shortages if the allocated chunk is underutilized.

Dynamic allocation virtualization is a more standard approach and typically offers greater benefit when compared to pre-allocation. For true virtual service provisioning, dynamic resource allocation is important because it allows complete hardware management and control for resources as needed; virtual resources can be allocated as long as hardware resources are still available. The downside to dynamic allocation implementations is that they typically do not provide full control, leading to processes which can consume all available resources.
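
The difference between the two models can be sketched with CPU affinity on Linux (os.sched_setaffinity is Linux-specific and the CPU numbers are illustrative): pinning a process to fixed CPUs mirrors pre-allocation, while returning it to the full CPU set lets the scheduler place work dynamically.

    # Minimal sketch: pre-allocation vs. dynamic allocation of CPUs on Linux.
    # os.sched_setaffinity / os.sched_getaffinity are Linux-specific calls, and
    # the CPU numbers below are illustrative.
    import os

    pid = 0  # 0 means "the current process"

    print("initial CPU set:", os.sched_getaffinity(pid))

    # Pre-allocation: restrict this process to CPUs 0 and 1 only. If those CPUs
    # sit idle while others are busy, the reserved capacity is wasted.
    os.sched_setaffinity(pid, {0, 1})
    print("pinned CPU set: ", os.sched_getaffinity(pid))

    # Dynamic allocation: hand the process back to the scheduler, which may run
    # it on any available CPU as demand requires.
    os.sched_setaffinity(pid, set(range(os.cpu_count())))
    print("dynamic CPU set:", os.sched_getaffinity(pid))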

Storage

There are two general classes of storage virtualization: block virtualization and file virtualization.

Block virtualization is most often delivered through SAN: Storage Area Network technology, which presents networked storage to a host as if it were a single physical device; NAS: Network Attached Storage, by contrast, presents storage at the file level. SAN devices typically make use of RAID: Redundant Array of Independent Disks, which is itself another form of storage virtualization.

iSCSI: Internet Small Computer System Interface is a common implementation of block virtualization; it allows an operating system or application to map a virtual block device, such as a mounted drive, to a local hardware or software network adapter instead of a physical drive controller. The iSCSI adapter bi-directionally translates block calls from the application into network packets recognized by the SAN, providing, in effect, a virtual hard drive.
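
To make the block abstraction concrete, here is a minimal sketch of a virtual block device backed by an ordinary local file. It is not an iSCSI implementation; the point is that the caller only ever sees numbered, fixed-size blocks, and the backing store could just as well be a remote SAN target.

    # Minimal sketch of the block-device abstraction behind block virtualization.
    # Not an iSCSI implementation: the "device" is backed by a local file, but the
    # numbered fixed-size blocks the caller sees would look the same if the blocks
    # physically lived on a remote SAN target.
    BLOCK_SIZE = 512

    class VirtualBlockDevice:
        def __init__(self, path, num_blocks):
            self._file = open(path, "w+b")
            self._file.truncate(num_blocks * BLOCK_SIZE)

        def write_block(self, block_no, data):
            self._file.seek(block_no * BLOCK_SIZE)
            self._file.write(data.ljust(BLOCK_SIZE, b"\x00")[:BLOCK_SIZE])

        def read_block(self, block_no):
            self._file.seek(block_no * BLOCK_SIZE)
            return self._file.read(BLOCK_SIZE)

    disk = VirtualBlockDevice("virtual_disk.img", num_blocks=1024)
    disk.write_block(7, b"hello block storage")
    print(disk.read_block(7).rstrip(b"\x00"))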

File virtualization moves the virtual layer up to the file and directory structure level. Most file virtualization technologies serve as an interface to storage networks: they keep track of the files and directories that reside on storage devices and maintain global mappings of file locations. When a request is made to read a file, the user works with the file as if it were statically located on a personal remote drive. However, the file virtualization appliance recognizes that the file is physically located on a server in a data center at a different geographic location. File-level virtualization decouples a file's static virtual location from its physical location, allowing the back-end network to remain dynamic. If the IP address of the server changes or the connection needs to be re-routed to another data center, only the virtual appliance's location map needs to be updated.
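
A minimal sketch of the global location map such an appliance maintains (hostnames and paths are hypothetical placeholders): clients resolve the same virtual path before and after the back end moves.

    # Minimal sketch: the global mapping a file virtualization layer keeps between
    # virtual paths and physical locations. Hostnames and paths are hypothetical.
    location_map = {
        "/projects/q3-report.docx": ("fileserver-nyc.example.com", "/vol2/reports/q3.docx"),
        "/projects/budget.xlsx":    ("fileserver-lon.example.com", "/vol7/fin/budget.xlsx"),
    }

    def resolve(virtual_path):
        """Return the (server, physical path) currently backing a virtual path."""
        return location_map[virtual_path]

    print(resolve("/projects/q3-report.docx"))

    # If the file is migrated to another data center, only the map changes; clients
    # keep using the same virtual path.
    location_map["/projects/q3-report.docx"] = ("fileserver-fra.example.com",
                                                "/vol1/reports/q3.docx")
    print(resolve("/projects/q3-report.docx"))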

Service

Service virtualization consolidates operating system, application server, application, management, network, hardware, and storage virtualization. Service virtualization connects all of the components utilized in delivering an application over the network; it includes the process of making all pieces of an application work together regardless of where those pieces physically reside. Service virtualization is typically used as an enabler for application availability.




Sources

Forrester Research, International Data Corporation, and Synergy Research Group statistics reported in the CIO Journal of The Wall Street Journal by Angus Loten.


Footnote 1

Leading players in open projects working to separate the physical relationship between an operating system and its native hardware include:

AMD Amazon Web Services Arista Networks Big Switch Networks Brocade
Cisco Dell EMC Ericsson Facebook
Google Hewlett-Packard IBM Intel Juniper Networks
Microsoft NEC NetApp Nuage Networks Oracle
PLUMgrid Red Hat Rackspace SAP VMware


Footnote 2

Sources - The assumptions and terminology described in the page were aggregated and validated from:

Fortune Small Business Gartner Google Corporation website IBM Corporation white papers International Data Group
Microsoft Corporation authorized white papers New York Times - News hard copy and online articles Oracle Corporation - white papers Red Hat, Inc. website and white papers SD Times - www.sdtimes.com
Stratecast, division of Frost & Sullivan TechTarget Wall Street Journal hard copy and online articles Web Buyers Guide Technology Product Update Yankee Group Global Server Operating System Reliability Study