Cloud Computing, Data Centres, Green Computing

1. Fine-grain Resource Allocation Exploiting Application Diversity in Clouds

Cloud computing is best characterized by its essentially elastic and scalable resource allocation for various applications under a pay-as-you-go pricing model. These characteristics are realized primarily by virtualization technologies running on (massively) multi-core processors. As the adoption of clouds prevails, their use is becoming increasingly diverse, ranging from hosting web services and performing data analytics to running high-performance scientific and engineering applications. To this end, many cloud service providers (e.g., Amazon EC2) provision a number of different service offerings, such as High-CPU, High-Memory and Cluster GPU. This project investigates virtual machine composition and placement that exploit the heterogeneity in both resources and applications for efficient resource utilization while respecting applications' QoS requirements. The student is expected to have an understanding of operating systems and good algorithm skills; prior experience with cloud computing is not required.
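To make the placement idea concrete, the following is a minimal sketch, not the project's actual method: a greedy best-fit that matches each VM's resource mix (CPU vs. memory) to the host whose remaining capacity has the most similar shape. All capacities, demands and function names are invented for illustration.

```python
def place_vms(vms, hosts):
    """vms: list of (cpu, mem) demands; hosts: list of [cpu, mem] capacities.
    Returns a list mapping each VM index to a host index (or None if unplaceable)."""
    placement = []
    for cpu, mem in vms:
        best, best_score = None, None
        for i, (hcpu, hmem) in enumerate(hosts):
            if hcpu < cpu or hmem < mem:
                continue  # host cannot fit this VM at all
            # Score = mismatch between the VM's demand shape and the host's
            # remaining capacity shape; lower means a CPU-heavy VM is going
            # to a CPU-rich host (and likewise for memory).
            score = abs(cpu / hcpu - mem / hmem)
            if best_score is None or score < best_score:
                best, best_score = i, score
        if best is not None:
            hosts[best][0] -= cpu
            hosts[best][1] -= mem
        placement.append(best)
    return placement

hosts = [[16, 32], [4, 64]]   # a High-CPU-style host and a High-Memory-style host
vms = [(8, 8), (2, 48)]       # a CPU-heavy VM and a memory-heavy VM
print(place_vms(vms, hosts))  # -> [0, 1]: each VM lands on the matching host type
```

The scoring heuristic is deliberately simple; the project would explore richer composition and placement strategies under QoS constraints.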


2. Exploiting Heterogeneous Computing Systems for Energy Efficiency

Large-scale distributed computing systems like clouds are best described by their essentially heterogeneous and dynamic characteristics, in both resources and applications. However, it is often the case that resources in a particular system are homogeneous. Recently, there have been a number of efforts to build such systems with heterogeneous resources to better manage a diverse set of workloads (or applications). For example, a computer system may be built with both typical high-performance processors and low-power processors like Intel Atom processors. This project investigates energy-aware scheduling and resource allocation strategies for different workloads, exploiting the power consumption characteristics of heterogeneous processors. Students with a good programming background, particularly in a Linux environment, are encouraged to apply.
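As a first intuition for the scheduling problem, here is a toy sketch (all speeds, powers and deadlines are invented) that maps each task to whichever core, fast "big" or low-power "little", minimizes energy while still meeting the task's deadline:

```python
# Hypothetical core parameters: work units per second, and watts while busy.
CORES = {
    "big":    {"speed": 2.0, "power": 40.0},
    "little": {"speed": 1.0, "power": 10.0},
}

def assign(task_work, deadline):
    """Return (core_name, energy_joules) minimizing energy = power * runtime
    subject to runtime <= deadline, or None if no core can finish in time."""
    feasible = []
    for name, c in CORES.items():
        runtime = task_work / c["speed"]
        if runtime <= deadline:
            feasible.append((c["power"] * runtime, name))
    if not feasible:
        return None
    energy, name = min(feasible)
    return name, energy

print(assign(10, 20))  # slack deadline: little core wins (100 J vs 200 J)
print(assign(10, 6))   # tight deadline: only the big core meets it (200 J)
```

The example shows the core trade-off the project would study with real power measurements: low-power cores save energy only when deadlines leave enough slack.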


3. Fairness and Efficiency Analysis of Cloud Resource Management Algorithms

Resource allocation in cloud computing plays an important role in improving data centre efficiency and end-user satisfaction. In this project, we consider fairness an important factor of user satisfaction. Achieving efficiency and fairness simultaneously is sometimes difficult: recent research shows that the Hadoop Fair Scheduler is not efficient for applications with heterogeneous resource demands. The project will review the performance of existing resource allocation algorithms on these two aspects. It will further work towards a fair and efficient resource allocation algorithm in the context of the Pay-As-You-Go (PAYG) cloud pricing model. The student is expected to be familiar with distributed computing concepts and comfortable with Java programming.
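One well-known answer from the literature to fair allocation under heterogeneous demands is Dominant Resource Fairness (DRF); a minimal sketch of its greedy allocation loop follows (capacities and demands are the standard illustrative numbers, not project data):

```python
def drf(capacity, demands, rounds):
    """capacity: dict resource->total; demands: user->dict resource->per-task.
    Each step gives the next task to the user with the smallest dominant
    share, i.e. their largest fractional share of any single resource."""
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    for _ in range(rounds):
        def dominant_share(u):
            return max(tasks[u] * demands[u][r] / capacity[r] for r in capacity)
        u = min(demands, key=dominant_share)  # user furthest behind
        if any(used[r] + demands[u][r] > capacity[r] for r in capacity):
            break  # cluster saturated
        for r in capacity:
            used[r] += demands[u][r]
        tasks[u] += 1
    return tasks

cap = {"cpu": 9, "mem": 18}
dem = {"A": {"cpu": 1, "mem": 4},   # memory-heavy user
       "B": {"cpu": 3, "mem": 1}}   # CPU-heavy user
print(drf(cap, dem, 100))           # -> {'A': 3, 'B': 2}: equal dominant shares
```

Both users end with a dominant share of 2/3 (A of memory, B of CPU), which is exactly the fairness property a per-slot scheduler misses when demands are heterogeneous.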


4. Problem Diagnostic Mechanisms for Resource Provisioning in Cloud Computing

There are many ways things may go wrong when running applications in the cloud. Software bugs and misconfigurations are common causes of system failures. When problems like these happen, the resources these applications run on are not effectively used and, in the worst case, the applications may jeopardize the resource use of other, normally behaving applications. This project intends to analyze the patterns of system behaviour when these problems occur and investigate their impact on resource provisioning mechanisms. It also intends to develop automatic methods to detect these problems and enhance cloud resource provisioning algorithms to deal with them. The student is expected to have a good understanding of distributed computing principles and be comfortable with Java programming.
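As a toy illustration of the "learn normal behaviour, flag abnormal" pattern such diagnostics build on, here is a z-score detector over a resource metric (e.g. CPU%); real diagnostics would combine many signals, and all numbers here are invented:

```python
from statistics import mean, stdev

def anomalies(samples, window=10, threshold=3.0):
    """Return indices of samples deviating more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        sd = stdev(hist)
        if sd > 0 and abs(samples[i] - mean(hist)) / sd > threshold:
            flagged.append(i)
    return flagged

cpu = [20, 22, 19, 21, 20, 23, 21, 20, 22, 21, 95, 21]  # spike at index 10
print(anomalies(cpu))  # -> [10]
```

A provisioning algorithm could then, for instance, exclude flagged instances from capacity estimates instead of scaling out to mask a misbehaving application.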


5. Modelling Consolidation Behaviour of Collocated Virtual Machines in Private Clouds

Clouds are changing the way we think, the way we program and even the way we solve our problems. Besides the many advantages public clouds provide to users, they also introduce serious concerns (e.g., security) that prevent many large organisations from migrating their computation to public clouds; private clouds seem to be their solution. In private clouds, each organisation is in charge of managing its Virtual Machines (VMs) to better provision its available resources. One of the most effective ways to provision resources is to collocate several VMs on a Physical Machine (PM), a process that may also degrade the performance of the services hosted by those VMs. This project aims to investigate how the performance degradation of VMs relates to their own loads as well as those of their neighbours, which compete for the shared resources on the same PMs. Students who are interested need moderate knowledge of programming. Tools to monitor, benchmarks to test, and a private cloud to experiment with are all already developed and provided; students only need to collect raw data and model the not-so-easy ecology of such complicated systems.
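A deliberately simple sketch of the kind of model the project would fit from the collected raw data: predict a VM's slowdown from its own load plus a weighted sum of its neighbours' loads on the same PM. The linear form and coefficients below are invented placeholders, not a fitted model.

```python
def predicted_slowdown(own_load, neighbour_loads, a=0.5, b=0.3):
    """Fractional slowdown of a VM: a * own_load + b * sum(neighbour loads).
    Loads are utilizations in [0, 1]; a and b would be learned from data."""
    return a * own_load + b * sum(neighbour_loads)

# A VM at 60% load collocated with two neighbours at 80% and 40% load:
print(round(predicted_slowdown(0.6, [0.8, 0.4]), 2))  # -> 0.66
```

The real "ecology" is unlikely to be linear, which is exactly what the collected benchmark data would reveal.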

6. Application Isolation Techniques in Cloud Computing Platforms

The cloud computing model allows people to use CPU, storage and even network bandwidth from remote resource providers. These resource providers often host a large number of third-party applications on tens of thousands of machines in their data centres. As many third-party applications share physical CPUs, storage and networks, how to isolate these applications becomes an issue. The project will review the technologies, such as virtual machines, used by existing cloud computing infrastructure providers to achieve application-level isolation and examine the effectiveness of these technologies. It will also investigate how to make a cloud computing platform trustworthy.

7. Application-Specific Service Level Agreement and Energy-Efficiency Improvement in Cloud Computing Platforms

Cloud computing environments are gaining popularity as the de facto platforms for many applications. These systems bring together a range of heterogeneous resources that should be able to function continuously and autonomously. However, they also expend a lot of energy. This project therefore aims to develop new algorithms and tools for energy-aware resource allocation in large-scale distributed systems, enabling these systems to become environmentally friendly. The proposed framework will be 'holistic' in nature, seamlessly integrating a set of site-level and system-/service-level energy-aware resource allocation schemes that address a range of complex scenarios and different operating conditions.

8. Interruption Prediction Models for Business Applications in Clouds

Although performance interruption of cloud-based applications when migrating virtual machines (VMs) is a well-known fact, no actual model has been proposed to predict the magnitude of such interruptions prior to reorganizing VMs in virtualized environments. This project starts by collecting data to model the transition behaviour of VMs (cloud applications) during the live-migration process. The collected data is then analyzed to design mathematical, empirical and/or analytical models to predict application transient behaviour during a given migration. The last step is to evaluate the proposed model on business applications already running in the cloud.
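A first-cut analytical sketch of pre-copy live migration, the standard mechanism behind these interruptions: each round re-sends pages dirtied during the previous round, and the VM pauses (stop-and-copy) once the remaining dirty set is small. The memory size, dirty rate, bandwidth and stop threshold below are all invented.

```python
def precopy_rounds(mem_mb, dirty_rate_mbps, bw_mbps, stop_mb=50, max_rounds=30):
    """Return (rounds, downtime_seconds) for an idealised pre-copy migration:
    rounds until the dirtied-per-round set fits under stop_mb, then the
    stop-and-copy pause needed to send that final set."""
    to_send = mem_mb
    for r in range(1, max_rounds + 1):
        t = to_send / bw_mbps          # time to transmit this round
        dirtied = dirty_rate_mbps * t  # memory dirtied while transmitting
        if dirtied <= stop_mb or r == max_rounds:
            return r, dirtied / bw_mbps
        to_send = dirtied

# 4 GB VM, dirtying 100 MB/s, over a 1000 MB/s link:
rounds, pause = precopy_rounds(4096, 100, 1000)
print(rounds, round(pause, 3))  # -> 2 0.041 (two rounds, ~41 ms pause)
```

Even this toy model shows the key prediction inputs, dirty rate versus bandwidth, that an empirical model would estimate from the collected traces.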

9. Designing a QoS-Aware Controller for Dynamic Scheduling in Processing of Cloud-Based Data Streaming Platforms

More and more companies have to deal with huge amounts of streaming data that need to be processed quickly, in real time, to extract meaningful information. Stream data processing differs from well-studied batch data processing, which does not necessarily need to be done in real time (the issue of velocity). In such environments, data must be analyzed/transformed continuously in main memory before it is stored on disk; this is especially true where the value of the analysis decreases with time. Normally, this is done by a cluster of server (worker) nodes that continuously process the incoming stream of data. One of the major issues posed by streaming data processing is keeping the QoS level steady under fluctuations of request rates. Past research has shown that high arrival rates of streaming data within short periods cause serious degradation to the overall performance of the underlying system. In this project, we aim to create advanced controller techniques that effectively allocate available resources to handle big data streams with complex arrival patterns, in order to preserve the QoS required by end users.
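As a baseline such advanced controllers would improve on, here is a minimal sketch of a feedback-style elasticity rule: each control period, size the worker pool so measured arrival rate stays near a target utilization of per-worker capacity. All rates and bounds are illustrative assumptions.

```python
import math

def desired_workers(arrival_rate, per_worker_rate=1000,
                    target_util=0.7, min_workers=1, max_workers=100):
    """Workers needed to keep utilization near target_util.
    Rates are in events/sec; bounds clamp the scaling decision."""
    needed = math.ceil(arrival_rate / (per_worker_rate * target_util))
    return max(min_workers, min(max_workers, needed))

for rate in [2000, 9000, 9000, 1500]:  # a sudden burst, then calm
    print(rate, "->", desired_workers(rate), "workers")
```

A QoS-aware controller would go further, e.g. reacting to observed latency rather than raw rate and damping oscillation under the complex arrival patterns the project targets.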




Last changed: April 22, 2018