An architectural overview
 

Liberating users from the constraints of physical hardware has long been considered one of the most revolutionary ideas in computer science. This projection of physical freedom into the virtual domain seems closer than ever given the recent surge in cloud computing's popularity and the drastic changes it brings to users' perception of software delivery and installation, infrastructure architectures and development models. As an extremely sophisticated and innovative information system architecture, cloud computing is often considered the future of computing, a progression that will shape the way people use their devices, since it has already reduced overall client-side complexity along with hardware requirements.

Cloud computing is not so much a single technology as a combination of many existing software solutions and architectures. Elements of cloud computing can be traced to the very beginning of modern telecommunications, but it was the great advances in processing power, connectivity, storage and virtualization that actually created this technical ecosystem. Shared services are becoming dominant over isolated products, an approach that allows corporations to focus on their primary business goals under a far more efficient usage-based cost model.

Alongside this evolution in computing, the telecommunications industry is also being transformed by consumer service expectations together with technological leaps. Service Providers (SPs), driven by the need to reduce costs and by their main objective of expanding their service offerings, regard cloud computing as a brave new field of business opportunities. The proliferation of users adopting cloud architectures and services is taken into serious consideration in their ultimate goal of establishing themselves as trusted partners in a growing market. Telecommunication companies hold significant advantages when capitalizing on cloud services: since they own the network, they are able to provide secure and scalable solutions, including guaranteed Quality of Service (QoS) and sophisticated management capabilities through dedicated portals that monitor service performance.

Despite the notion that cloud computing is virtually automatic, elastic and effortless, SPs are aware of significant issues that make this platform one of the most demanding in terms of technological background and skills. Beyond security, scalability, billing and monitoring barriers, it takes considerable effort to generate revenue from such an intense capital investment. Cloud computing infrastructure is expensive, an investment not all players can afford. For this reason, only corporations with significant cash flow stand an actual chance in this field, and even then, a thorough understanding of as many characteristics as possible, of the key elements that might make the difference, and a careful assessment are considered essential.


Software as a Service (SaaS): This service model provides the client with the capability to use, over the web, the provider's applications running on a cloud infrastructure. The applications are accessible from various interfaces such as a web browser or even a terminal command line. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. The provider may host the application on its own server infrastructure or use another vendor's hardware, and the application may be licensed directly to a user, a group of users or an organization. Unlike the traditional method of purchasing and installing software, the SaaS customer leases the usage of software operating on the provider's equipment under an operational expense model. This pay-per-use licensing model is also known as on-demand licensing, since some applications are billed on a metered usage and time-period basis rather than through the common upfront costs of traditional licensing.
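
The pay-per-use model described above can be illustrated with a short sketch: the customer is charged for metered consumption rather than an upfront license fee. The rate names and prices below are purely hypothetical.

```python
# Illustrative sketch of on-demand (pay-per-use) SaaS licensing.
# Hourly and per-call rates are invented for the example.

def metered_charge(hours_used: float, api_calls: int,
                   hourly_rate: float = 0.12,
                   per_call_rate: float = 0.0004) -> float:
    """Compute a usage-based charge from metered consumption."""
    return round(hours_used * hourly_rate + api_calls * per_call_rate, 2)

# A tenant that used the hosted application for 300 hours and issued
# 50,000 API calls in the billing period pays only for that usage:
print(metered_charge(300, 50_000))  # 300*0.12 + 50000*0.0004 = 56.0
```

The same consumption-driven arithmetic, applied per billing period, replaces the fixed upfront cost of a traditional license.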

Platform as a Service (PaaS) is similar to SaaS, but the service is an entire application development environment, not just the use of an application. Providers deliver a cloud-hosted virtual development environment together with the solution stack, all accessible via a web browser, an approach that greatly accelerates the development and deployment of software applications. PaaS essentially encapsulates a layer of software and provides it as a service that can serve as a solid foundation on which higher-level services are implemented. PaaS gives the consumer the capability to deploy onto the cloud infrastructure consumer-created or acquired applications, built using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems or storage, but has control over the deployed applications and possibly over application hosting environment configurations. Creating a ready channel for sales and distribution is another benefit of this model. Small or start-up software developers are keen on using a PaaS provider to access development resources that would otherwise be unavailable to them due to significant cost.

Infrastructure as a Service (IaaS) is the cloud model that most clearly demonstrates the difference between the traditional IT approach and cloud-based infrastructure services. The consumer has the capability to provision processing, storage, networks and other fundamental computing resources, and is able to deploy and run arbitrary software, including operating systems and applications. Although the client has no access to the underlying cloud infrastructure, they have total control over storage and deployed applications, and possibly limited control over selected networking components such as host firewalls and QoS monitoring platforms. IaaS benefits are similar to those of other XaaS models: smaller businesses gain access to a much higher level of IT talent and technology solutions, while dynamic infrastructure scalability enables consumers to tailor their requirements at a more granular level. It can also deliver basic or complex capabilities as a service over the Internet. This enables the pooling and sharing of hardware resources, such as servers and storage, as well as perimeter devices like firewalls and routers.
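
The resource pooling that underpins IaaS can be pictured as a shared capacity pool from which tenants provision and release resources on demand. The following is a minimal Python model of that idea; the class and method names are illustrative, not any provider's actual API.

```python
# Toy model of an IaaS resource pool: capacity is shared, provisioned
# on demand, and reclaimed when a tenant releases it (elasticity).

class ResourcePool:
    def __init__(self, vcpus: int, ram_gb: int):
        self.free = {"vcpus": vcpus, "ram_gb": ram_gb}
        self.allocations = {}

    def provision(self, tenant: str, vcpus: int, ram_gb: int) -> bool:
        """Grant resources only if the shared pool can satisfy the request."""
        if vcpus <= self.free["vcpus"] and ram_gb <= self.free["ram_gb"]:
            self.free["vcpus"] -= vcpus
            self.free["ram_gb"] -= ram_gb
            self.allocations[tenant] = (vcpus, ram_gb)
            return True
        return False

    def release(self, tenant: str) -> None:
        """Return a tenant's resources to the pool (scale-down)."""
        vcpus, ram_gb = self.allocations.pop(tenant)
        self.free["vcpus"] += vcpus
        self.free["ram_gb"] += ram_gb

pool = ResourcePool(vcpus=64, ram_gb=256)
assert pool.provision("tenant-a", 16, 64)
assert not pool.provision("tenant-b", 60, 256)  # pool cannot satisfy this yet
pool.release("tenant-a")
assert pool.provision("tenant-b", 60, 256)      # capacity reclaimed and reused
```

Real IaaS platforms add scheduling, quotas and overcommit policies on top, but the provision/release cycle against shared capacity is the core contract.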


Describing the engine of Cloud Computing

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction [1]. Such a sophisticated and innovative computational system has revolutionized the way users interact with their personal devices by reducing overall client-side complexity and hardware requirements.

A Hypervisor is a dedicated software or firmware component that is able to virtualize system resources by utilizing highly efficient and sophisticated algorithms, thus allowing multiple operating systems running on different Virtual Machines (VMs) to share a single hardware host [4]. The hypervisor is actually in charge of all available resources, which are allocated accordingly, making sure that all VMs operate independently without disrupting each other.

Container-based Virtualization is a server virtualization method in which the virtualization layer runs as an application within the operating system, allowing the kernel to support several completely functional yet totally isolated user-space instances called guests. In this approach, guests share hardware resources in a more direct way without the overhead of installing an operating system in each one. Performance is significantly improved, since hardware calls are handled by a single operating system and guests are not subjected to any sort of software emulation. In addition, container-based virtualization implementations capable of live migration can also be used for dynamic load balancing inside a cluster. The major drawback of this approach is diminished flexibility, since all guests need to have kernels identical to the host's. Three of the most significant hypervisor examples are Xen, VMware ESXi and Kernel-based Virtual Machine (KVM), while the most popular container-based virtualization schemes and management software are Linux Containers, Docker, and Kubernetes.

Xen is an open-source hypervisor consisting of a small software layer on top of the physical hardware that provides all necessary services for allowing multiple operating systems to execute concurrently on the same underlying hardware. It introduces the notion of separate domains, which are VMs built on top of the hypervisor itself. The most privileged of those VMs, with direct access to hardware, called dom0 (domain zero), is created first and is used to initiate management tasks (i.e. create, discard, migrate, save, restore) and to allow access to I/O devices for all other VMs. One of the main advantages of Xen-based virtual machines is live migration between physical hosts without any availability loss or service interruption. During live migration, Xen copies VM memory to the destination node and executes a certain synchronization process, thus providing the illusion of seamless migration. Such an attribute strongly benefits multimedia clouds, which demand constant transformation and high availability under stress.

VMware ESXi is an enterprise-class hypervisor developed by VMware for deploying and serving virtual computers. It is categorized as a Type-1 hypervisor, meaning that it operates on bare-metal infrastructure and includes all necessary components to do so, such as a modified microkernel, known as vmkernel, which directly handles CPU and memory utilization. Access to other hardware resources, such as network and storage devices, is enabled through specific modules, most of which are derived from modified versions of the same pieces of code used in the official Linux kernel. To facilitate the overall interface connection to all modules, ESXi uses the vmklinux module, an intermediate emulation layer with direct access to the vmkernel itself. One of the main features of the ESXi bare-metal hypervisor is its significantly small footprint, which exposes only a very small attack surface for malware and over-the-network threats, thus improving reliability and security.

Kernel-based Virtual Machine (KVM) is a free, open-source virtualization solution which enables advanced hypervisor attributes on the Linux kernel. It consists of a loadable kernel module, kvm.ko, which facilitates the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko, for Intel and AMD processors respectively. Upon loading the aforementioned kernel modules, KVM converts the Linux kernel into a bare-metal hypervisor and leverages the advanced features of modern hardware, thus delivering unsurpassed performance levels.

Linux Containers (LXC) is an operating-system-level virtualization environment which allows a single Linux host to deploy and control multiple isolated Linux containers. It is based on native kernel support for isolated namespaces along with cgroups, a kernel feature that handles resource usage for a collection of processes, enabling resource limitation, prioritization, accounting and control. In particular, cgroups is designed to cooperate with the kernel in order to handle the CPU, memory, block I/O and networking demands of each process separately, thus ensuring the aforementioned isolation of an overall application, including aspects such as process trees (through a separate process identifier allocation scheme), networking parameters, user IDs and even mounted file systems. In this way the container is able to execute native instructions against the whole spectrum of hardware resources without any special interpretation mechanism.
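
A rough way to picture the accounting role cgroups play is a per-group ledger with a hard limit: every allocation by a group's processes is charged against the group, and requests beyond the limit are refused. The Python sketch below models only that idea; it is not the kernel interface, which is exposed through the cgroup filesystem under /sys/fs/cgroup.

```python
# Simplified model of cgroup-style resource accounting: usage is charged
# per group and a hard limit is enforced. (The real kernel would trigger
# reclaim or the OOM killer instead of simply refusing.)

class CGroup:
    def __init__(self, name: str, memory_limit_mb: int):
        self.name = name
        self.memory_limit_mb = memory_limit_mb
        self.usage_mb = 0

    def charge(self, mb: int) -> bool:
        """Account a memory request against the group's limit."""
        if self.usage_mb + mb > self.memory_limit_mb:
            return False
        self.usage_mb += mb
        return True

container = CGroup("web-container", memory_limit_mb=512)
assert container.charge(300)       # within the limit
assert not container.charge(300)   # 300 + 300 would exceed 512 MB
assert container.charge(200)       # 300 + 200 = 500 MB still fits
```

The kernel applies the same bookkeeping per controller (CPU, memory, block I/O, networking), which is what gives each container its isolated resource envelope.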

Docker is an open-source project that uses a custom container type to automate application deployment. Once heavily dependent not only on kernel virtualization capabilities but also on cgroups and namespaces, Docker recently evolved into a more independent solution after introducing the libcontainer library. Using Docker, a hardware- and platform-agnostic element, as a universal method of container creation and management makes building highly distributed and lightweight systems that can operate both locally and in the cloud significantly simpler, allowing scaling to be fast and precise, exactly the type of service a multimedia cloud's success depends upon.
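
As an illustration of the image-based deployment workflow Docker automates, a minimal, purely hypothetical Dockerfile for a Python application might look as follows; the base image, file names and port are placeholders, not part of any particular project:

```dockerfile
# Hypothetical image definition: every instruction adds a layer that
# Docker caches and reuses across builds and hosts.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Because the resulting image bundles the application with its dependencies, the same artifact can run unchanged on a developer's laptop or on cloud hosts, which is what makes scaling out fast and reproducible.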

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster, which includes mechanisms for facilitating application deployment, scaling, scheduling and maintenance. One of its main features is the introduction of Pods, defined as a collocated group of applications connected by a common context. This element, which also defines the smallest deployable unit that can be created, scheduled and managed, is a conjunction of several namespaces, all of them having access to shared resources. Once Pods are created, the system continuously monitors their health as well as the state of the machine they are operating on. If a failure is detected, the system utilizes an API object called Replication Controller, which automatically creates new Pods on a healthy machine. The replicated set of Pods might constitute an entire application, a micro-service or one layer of a multi-tier application. Such a level of granularity is ideal for multimedia clouds, where a high level of end-user Quality of Experience is obtained by introducing all necessary services for network monitoring, transmission control and error correction that ensure seamless media delivery under complex network conditions.
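
The Pod abstraction described above is declared to the cluster as a manifest. A minimal, illustrative example follows; the names and image are placeholders, not a specific deployment:

```yaml
# Minimal Pod manifest: one container sharing the Pod's common context
# (network namespace, storage volumes). Names and image are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: media-frontend
  labels:
    app: media-frontend
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

In practice such a Pod template would be embedded in a replication controller (or its modern successor) so that the system can recreate failed Pods on healthy machines automatically.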

References

[1] P. Mell and T. Grance, The NIST Definition of Cloud Computing, National Institute of Standards and Technology, Special Publication 800-145, September 2011

[2] C. Tselios, I. Politis, V. Tselios, S. Kotsopoulos and T. Dagiuklas, "Cloud Computing: A Great Revenue Opportunity for Telecommunication Industry", in 51st FITCE Congress (FITCE), Poznan, Poland

[3] C. Tselios and G. Tsolis, "A survey on software tools and architectures for deploying multimedia-aware cloud applications", Algorithmic Aspects of Cloud Computing, LNCS 9511, 168-180, Springer International Publishing

[4] Xen Hypervisor [Online]. Available: http://www.xenproject.org/

[5] Kernel Virtual Machine [Online]. Available: http://www.linux-kvm.org/page/Main_Page

[6] Docker [Online]. Available: https://www.docker.com/

[7] VMware Inc., "vCenter Server" [Online]. Available: http://www.vmware.com/products/vcenter-server/

[8] Kubernetes [Online]. Available: http://kubernetes.io/

[9] IBM, "LXC: Linux Container Tools" [Online]. Available: http://www.ibm.com/developerworks/linux/library/l-lxc-containers/l-lxc-containers-pdf.pdf