
In view of obtaining the degree of DOCTOR OF THE UNIVERSITY OF TOULOUSE
Awarded by: Institut National Polytechnique de Toulouse (INP Toulouse)
Discipline or specialty: Networks, Telecommunications, Systems and Architecture
Presented and defended by: M. GIANG SON TRAN, on Wednesday 4 June 2014
Title: COOPERATIVE RESOURCE MANAGEMENT IN THE CLOUD
Doctoral school: Mathématiques, Informatique, Télécommunications de Toulouse (MITT)
Research unit: Institut de Recherche en Informatique de Toulouse (I.R.I.T.)
Thesis supervisor: M. DANIEL HAGIMONT
Reviewers: M. DIDIER DONSEZ, Université Grenoble 1; M. JEAN-MARC MENAUD, École des Mines de Nantes
Jury members: M. NOËL DE PALMA, Université Grenoble 1, President; M. ALAIN TCHANA, INP Toulouse, Member; M. DANIEL HAGIMONT, INP Toulouse, Member

To my family...

"No one's born a computer scientist, but with a little hard work, and some math and science, just about anyone can become one."
Barack Obama

Acknowledgement

Firstly, I would like to express my gratitude to all the members of the jury, who devoted their invaluable time to evaluating my work. I would especially like to thank the reviewers, Mr. Jean-Marc Menaud and Mr. Didier Donsez, for their constructive criticism. I also would like to thank Mr. Daniel Hagimont, Professor at Institut National Polytechnique de Toulouse, for his supervision throughout my work. His advice was important and essential for my research during these years.

I was very happy to work within the SEPIA research team at IRIT-ENSEEIHT. I deeply felt the working spirit of our team in each weekly meeting. Thank you, Alain Tchana, for your warm welcome and help in my personal life when I first joined the group, and for our joint work, which forms the background of my thesis. Thank you, Larissa Mayap, my office mate, for our discussions and work during our theses. I also would like to thank Laurent Broto, Suzy Temate, Aeiman Gadafi and Ekane Brice for all of your help.

I would like to thank the Vietnamese Government, the Institute of Information Technology (IOIT) and the University of Science and Technology of Hanoi (USTH) for the grant that allowed me to work in France. Many thanks to the IRIT laboratory for funding all of my trips to conferences.

Finally, I would like to express my deep gratitude to my parents, who have raised me, cared for me and educated me for such a long time. Thank you, my little sister, for all the fun we have had and shared together. Last but not least, I could not have finished my work without the everyday help and encouragement of my family members, my wife and my little princess. They are my source of love and strength to continue my pursuit of science.

Abstract

Recent advances in computer infrastructures encourage the separation of hardware and software management tasks. Following this direction, virtualized cloud infrastructures are becoming very popular. Among the various cloud models, Infrastructure as a Service (IaaS) provides many advantages to both the provider and the customer. In this service model, the provider offers his virtualized resources and is responsible for managing his infrastructure, while the customer manages his application deployed in the allocated virtual machines. These two actors typically use autonomic resource management systems to automate these tasks at runtime. Minimizing the amount of resources (and the power consumption) in use is one of the main services that such a cloud model must ensure.
This objective can be pursued at runtime either by the customer at the application level (by scaling the application) or by the provider at the virtualization level (by migrating virtual machines according to the infrastructure's utilization rate). In traditional cloud infrastructures, these resource management policies are uncoordinated: knowledge about the application is not shared with the provider. This leads to application performance overheads and resource waste, both of which can be reduced with a cooperative resource management policy.

In this research work, we discuss the problem of separate resource management in the cloud. Based on this analysis, we propose a direction relying on elastic virtual machines with cooperative resource management. This policy combines the knowledge of the application and of the infrastructure in order to reduce application performance overhead and power consumption. We evaluate the benefit of our cooperative resource management policy with a set of experiments in a private IaaS. The evaluation shows that our policy outperforms uncoordinated resource management in a traditional IaaS, with lower performance overhead and better use of virtualized and physical resources.

Résumé

The evolution of computing infrastructures encourages the separate management of the hardware infrastructure and of the software running on it. In this direction, virtualized cloud infrastructures have become very popular. Among the various cloud models, Infrastructure as a Service (IaaS) offers many advantages to both the provider and the customer. In this cloud model, the provider supplies his virtualized resources and is responsible for managing his infrastructure. On his side, the customer manages his application, which is deployed in the allocated virtual machines. These two actors generally rely on autonomic administration systems to automate administration tasks. Reducing the amount of resources used (and the energy consumption) is one of the main objectives of this cloud model. This reduction can be obtained at runtime either at the application level by the customer (by resizing the application) or at the virtualized-system level by the provider (by consolidating the virtual machines on the hardware infrastructure according to their load). In traditional cloud infrastructures, the resource management policies are not cooperative: the provider has no detailed information about the applications. This lack of coordination causes overheads and resource waste that can be reduced with a cooperative resource management policy. In this thesis we address the problem of separate resource management in a virtualized cloud environment. We propose a model of elastic virtual machines with a cooperative resource management policy. This policy combines the knowledge of the two cloud actors in order to reduce costs and energy consumption. We evaluate the benefits of this approach with several experiments in a private IaaS. The evaluation shows that our policy outperforms uncoordinated resource management in a traditional IaaS, since its performance impact is low and it allows a better use of hardware and software resources.
Contents

1 Introduction
2 Cloud and Resource Management
  2.1 Cloud Computing
    2.1.1 Introduction and Definition
    2.1.2 Classifications
    2.1.3 Advantages and Disadvantages
    2.1.4 Synthesis
  2.2 Virtualization
    2.2.1 Definitions
    2.2.2 Classifications
    2.2.3 Advantages and Disadvantages
    2.2.4 Synthesis
  2.3 Resource Management in an IaaS
    2.3.1 Resource Management by the Provider
    2.3.2 Resource Management by the Customer
    2.3.3 Synthesis
3 Motivation and Orientation
  3.1 Fixed Size Virtual Machine
    3.1.1 Resource Holes
    3.1.2 Performance Overhead
  3.2 Elastic Virtual Machine
  3.3 Cooperative IaaS
    3.3.1 Approach Overview
    3.3.2 Characteristics
    3.3.3 Comparison with a Conventional IaaS or a PaaS
  3.4 Synthesis
4 State of the Art
  4.1 Resource Management with Static Virtual Machines
    4.1.1 Traditional Resource Management
      The Customer Side
      The Provider Side
    4.1.2 Multi-Level Resource Management
      Uncoordinated Policies
      Cooperative Policies
  4.2 Elastic Virtual Machines
  4.3 Synthesis
5 Contribution: Cooperation Design
  5.1 Cooperation Protocol
    5.1.1 Tier Subscription
    5.1.2 Changing Tier Resource
    5.1.3 Splitting or Merging Tier Instances
    5.1.4 Quota Management
  5.2 Cooperation Calls
    5.2.1 Upcall
      Proposals of Virtual Machine Allocation
      Notifications of Virtual Machine Allocation
      Elasticity Proposals
    5.2.2 Downcall
      Requests
      Confirmations
  5.3 Synthesis
6 Contribution: Cooperative Resource Management System
  6.1 jtune Framework
    6.1.1 System Representation
    6.1.2 Application Deployment
    6.1.3 Control Loop
    6.1.4 Communication between Components
    6.1.5 Runtime Management
    6.1.6 Usages
  6.2 jcoop: Cooperative Resource Manager
    6.2.1 XenManager
    6.2.2 AmpManager
  6.3 Synthesis
7 Evaluations
  7.1 Experimental Setup
    7.1.1 Hardware Testbed
    7.1.2 Customer Application: RUBiS
    7.1.3 Metrics
    7.1.4 Workload Profile
  7.2 Evaluations
    7.2.1 Scalability and Elasticity
    7.2.2 Performance Overhead
    7.2.3 Virtual Machine Occupation
    7.2.4 Physical Server Utilization
  7.3 Synthesis
8 Conclusion and Perspectives
  8.1 Conclusion
  8.2 Perspectives
    8.2.1 Short Term Perspectives
    8.2.2 Long Term Perspectives
Bibliography

Chapter 1  Introduction

Computing systems are continuously becoming more and more complex. These structures have evolved from a single machine (personal computer or large mainframe) to clusters, to grids, and more recently to hosting centers running complex distributed systems. The rapid increase in the number of machines makes administration increasingly complex. Administration is often considered an error-prone and costly task: the administrator not only deploys the applications on the system, but also maintains their state and repairs them when failures occur. Maintaining such large clusters or grids needs to be automated in order to be cost- and time-effective.

Autonomic administration was proposed as a potential option to solve the complex problem of managing clusters and grids [40]. In an autonomic administration system, the system manages itself according to high-level objectives given by the administrator, such as application deployment or reactions to failures. As a result, administrator intervention is greatly reduced. TUNe [22] was developed as an autonomic administration system with a high-level formalism for the specification of deployment and management policies. TUNe has been experimented with in a variety of application domains: web applications, grid computing systems, and cloud computing systems.

Cloud computing is becoming a global trend: companies externalize their hardware resources instead of managing them themselves. The companies managing the hardware, the so-called providers, are expected to ensure quality of service for their customers while minimizing costs. Power consumption in data centers in 2011 was predicted to reach 100 billion kWh, with a peak load of 12 GW, equivalent to 25 baseload power plants.
Additionally, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) estimated that by 2014, infrastructure and energy would account for 75% of the total IT cost of companies [16]. Many solutions have been proposed and applied to address this power issue. Virtualization, one of these answers, allows resource mutualization: several users share the same resource pool. With virtualization, hardware resources are divided and encapsulated inside virtual machines. The use of virtual machines in cloud computing increases the utilization rate of data centers and speeds up application deployment.

To effectively manage resources in cloud infrastructures, both the provider and the customer are interested in using autonomic administration systems to handle their resource management tasks. Using such a system, the customer dynamically allocates and deallocates his virtual machines to fulfill his resource needs at runtime, to deal with the different load situations of his application and to minimize resource cost. To do this, the customer defines his resource management policy before deployment and instructs his autonomic administration system to follow this predefined plan. On the other hand, the provider specifies his resource management policy so that a minimum amount of physical resources is used. This strategy aims at reducing the provider's energy cost. These resource management policies are complementary and should be coordinated. Very few works in the literature have focused on exploiting the benefit of a cooperative management policy between these two actors. From this point of view, the objectives of this research work are: (1) pointing out the potential cooperation missing between the customer and the provider, and (2) contributing to the exploration and confirmation of its benefit.

This document presents the work done in the domain of autonomic administration applied to a cooperative resource management policy. This dissertation is organized as follows.

Part 1: Thesis context. This part consists of chapter 2, describing the context that motivates this work. This chapter gives an overview of cloud computing in section 2.1. Section 2.2 reviews the foundation of cloud computing: the virtualization technology. Section 2.3 outlines resource management policies in cloud infrastructures.

Part 2: Thesis position. This part covers the problem analysis, our approach and the state of the art regarding the above management policies. It includes chapters 3 and 4. Chapter 3 motivates our work by discussing the problems of fixed-size virtual machines (section 3.1) and analyzing elastic virtual machines as a straightforward solution (section 3.2). It then presents the general orientation of our research in section 3.3. Finally, chapter 4 presents the related work with respect to this orientation.

Part 3: Contributions. The main contributions of this research work are presented in this part, including an in-depth explanation of the specification of a cooperative resource management policy in chapter 5. Chapter 6 details the design and implementation of the jtune autonomic administration framework, and of jcoop as the implementation of the cooperation specification. This part also covers the evaluation of our policy with jtune and jcoop in chapter 7. Finally, we conclude our work and discuss future work in chapter 8.
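To make the separation between the two management levels concrete, the following is a minimal Java sketch of the two autonomic managers, one per actor. The interface and method names (CustomerManager, ProviderManager, scaleOut, consolidate, and so on) are illustrative assumptions, not the jtune or jcoop API; the point is only that neither level exposes its knowledge to the other.

```java
// Hypothetical interfaces illustrating the two uncoordinated management levels.
// The names below are illustrative assumptions, not the jtune/jcoop API.

/** Customer-side autonomic manager: acts at the application level. */
interface CustomerManager {
    /** Add one application instance (in a newly allocated VM) when the load is high. */
    void scaleOut(String tier);

    /** Remove one application instance (and release its VM) when the load is low. */
    void scaleIn(String tier);
}

/** Provider-side autonomic manager: acts at the virtualization level. */
interface ProviderManager {
    /** Migrate virtual machines so that they are packed on fewer physical servers. */
    void consolidate();

    /** Switch off the physical servers left empty after consolidation. */
    void switchOffIdleServers();
}
```

A cooperative policy, as developed in the contribution chapters, adds calls between these two levels (upcalls from the provider, downcalls from the customer) so that decisions can be taken with the knowledge of both actors.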
Chapter 2  Cloud and Resource Management

Contents
  2.1 Cloud Computing
  2.2 Virtualization
  2.3 Resource Management in an IaaS

2.1 Cloud Computing

2.1.1 Introduction and Definition

The difficulties of self-managing infrastructures. Computer infrastructure has been evolving very quickly, from a single machine to clusters and grids. Large companies usually require a large number of machines to host their business applications (for example web servers, application servers, database servers, file servers, authentication services, load balancers, etc.). These servers must also be secured against intrusions. On the other hand, small companies need to reduce their initial IT investment in order to focus on their business. These requirements lead to the following difficulties in building and maintaining a company's computer infrastructure [15]:

Increasing human power. More complicated infrastructures also mean more requirements on deployment, configuration, launch, and reconfiguration at runtime. These tasks are either performed manually or managed automatically (but still need to be supervised) by the IT department of the company. However, the first deployment of the whole infrastructure (setting up servers, networks, cooling systems) must still be done manually. The human-power investment required by the IT department is usually underestimated.

Waste of resources. After deployment, the infrastructure must be well utilized (i.e. it must be loaded and not sit idle). Idle machines not only add to the initial investment but also waste power at runtime. Electricity for operating the whole computer infrastructure is always one of the largest contributors to the Total Cost of Ownership (TCO): on average, powering the servers and their cooling systems accounts for 20% of the total cost [48]. The company must ensure that it provides enough power to keep these servers running. As a result, the company usually needs to improve the usage of its infrastructure in order to reduce resource and energy waste.

Difficulty in dimensioning the IT infrastructure. The infrastructure's runtime workload cannot be perfectly predicted and provisioned for at deployment time. There are idle and peak load phases (e.g. during the night and during business hours, respectively). If over-dimensioned to deal with peak loads, the infrastructure will be under-utilized during idle periods and contribute to resource waste. Therefore, it must be able to self-adapt to the current load by increasing or decreasing the number of active servers to handle the applications' needs (a minimal sketch of such a rule is given below). To do this, the design of the infrastructure must be flexible enough to deal with these varying situations.

What is cloud computing. Cloud computing is now a major trend: companies externalize their computing infrastructure to another type of company. The first actor is called the customer, and the latter the provider. This movement lets each actor concentrate on its core activity: the customers focus only on their business and leave infrastructure management to the providers. By improving the dedication of each actor, the cloud computing model connects the customer's needs with the provider's services.
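As an illustration of the self-adaptation mentioned above, a simple sizing policy can be expressed as a threshold rule on the observed load. The following Java sketch is a minimal example under assumed names and thresholds (ThresholdSizer, the 80%/30% watermarks); it is not taken from this thesis and ignores many practical concerns such as stabilization delays and per-tier constraints.

```java
/**
 * Minimal threshold-based sizing rule: keep the average utilization of the
 * active servers between a low and a high watermark. The class name and the
 * thresholds are illustrative assumptions, not values from this thesis.
 */
public class ThresholdSizer {

    private static final double HIGH = 0.80; // scale out above 80% average load
    private static final double LOW  = 0.30; // scale in below 30% average load

    /** Returns the new number of active servers for the observed average load. */
    public static int resize(int activeServers, double averageLoad) {
        if (averageLoad > HIGH) {
            return activeServers + 1;   // peak phase: add capacity
        }
        if (averageLoad < LOW && activeServers > 1) {
            return activeServers - 1;   // idle phase: release capacity
        }
        return activeServers;           // within bounds: keep the current size
    }

    public static void main(String[] args) {
        System.out.println(resize(4, 0.92)); // business-hours peak -> 5 servers
        System.out.println(resize(4, 0.15)); // night-time idle phase -> 3 servers
    }
}
```

In practice, such a rule is applied by the autonomic manager of each actor: the customer resizes his application tiers, while the provider consolidates virtual machines and powers physical servers on or off.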
Since there is no exact formal definition of cloud computing, we can quote a proposed definition, rather widely accepted in the research community, from the National Institute of Standards and Technology (NIST): "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [42].

From this definition, we can summarize the following characteristics of cloud computing:

- On-demand Self-service: the customer can request more or fewer computing resources at runtime. These requests are served automatically by the service provider, without any human intervention.

- Remote Access: the resources are available and can be accessed remotely over the network. With the vast improvement of network infrastructures, accessing resources over the network is no longer a challenge for customers.

- Resource Pooling: the provider's resources are shared by multiple customers. These resources are dynamically assigned and reassigned to different customers at runtime, as requested. The customer generally has no knowledge of the physical location of his allocated resources.

- Rapid Elasticity: the provided resources can be rapidly and elastically expanded and shrunk at runtime, according to the customer's requests. From the customer's point of view, this resource pool often appears to be unlimited and can be requested at any time.

- Monitored and Measured: the cloud manager monitors its resource usage with various metrics and tries to optimize the resource pool in an efficient way. These activities are often transparent to the customer.

Cloud computing is the combination and adoption of many existing technologies. It is similar to grid computing in terms of hardware deployment and management, but differs in that it mostly provides its resources through virtualization [33]. This is the main technology behind the manageability and resource portability of cloud infrastructures. Additionally, utility computing concepts are widely used in cloud computing [34]: the resources provided to the customers (such as CPU, memory, storage and bandwidth) are metered and billed. The pay-as-you-go billing model is very popular in cloud services nowadays. In cloud computing, resource sharing is inherent and is the main source of benefit: the provider switches off unused resources while sharing his resource pool among the customers to satisfy their resource needs. Therefore, Service Level Agreements (SLAs) are offered by the provider to