UF Projects

CAC projects at the University of Florida:

Demand-driven Service and Power Management in Data Centers

Power consumption is an increasingly significant percentage of the cost of operating large data centers. These centers are used by banks, investment firms, IT service providers, and other large enterprises. One possible approach to reduce power consumption is to keep machines in standby or off modes except when the data center workload requires them to be fully on. This approach depends on being able to monitor performance, workload or resource demands and to anticipate the need for resources in order to meet service-level agreements of the users who generate the workload.

This project seeks to devise mechanisms to monitor, model and predict the workload associated with individual services; to model and predict global resource demand; and to dynamically allocate and de-allocate virtual machines on physical machines. It also aims to devise methods, based on control theory and/or market-based approaches, that use these mechanisms to minimize the cost of providing individual services while globally minimizing power consumption and delivering contracted service levels, and to develop and evaluate software that implements these methods.
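As a rough, hedged illustration of the kind of control loop these mechanisms imply (not the project's actual software), the Python sketch below combines a simple moving-average demand predictor with a rule that keeps only enough servers powered on to cover predicted demand plus a headroom margin for service-level agreements. The `Server` class, the headroom factor and the prediction window are illustrative assumptions.

```python
from collections import deque

class Server:
    """Illustrative physical machine with a fixed capacity (e.g., CPU shares)."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.on = name, capacity, False

def predict_demand(history, window=5):
    """Moving-average forecast of aggregate workload demand (assumption: recent
    history is a reasonable proxy for near-term demand)."""
    recent = list(history)[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def plan_power_states(servers, predicted_demand, headroom=1.2):
    """Keep just enough servers on to cover predicted demand plus headroom;
    put the rest in standby. Returns the planned on/off assignment."""
    target = predicted_demand * headroom
    powered, plan = 0.0, {}
    for s in sorted(servers, key=lambda s: s.capacity, reverse=True):
        plan[s.name] = powered < target
        if plan[s.name]:
            powered += s.capacity
    return plan

# Example control-loop step
history = deque([40, 55, 60, 58, 62], maxlen=100)   # observed demand samples
servers = [Server(f"host{i}", capacity=32) for i in range(4)]
print(plan_power_states(servers, predict_demand(history)))
```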

Self-organizing IP-over-P2P Overlays for Virtual Networking

This project is relevant to industries interested in provisioning virtualized environments in data centers and cloud computing infrastructures, as it enables self-configuring, seamless IP-layer connectivity in wide-area environments. It is relevant to CAC because it applies various autonomic techniques to overlay networking.

The research and development focuses on self-organizing, peer-to-peer virtual IP overlays with the objective of enabling seamless deployment and use of virtual networks that support existing, unmodified operating systems and TCP/IP applications. It builds upon and extends the self-configuring IP-over-P2P (IPOP) overlay system developed at the University of Florida, which enables scalable, robust, self-configuring virtual network overlays interconnecting physical or virtual resources within a LAN or across a WAN (even in the presence of NATs and firewalls), and supports IPsec-based virtual private networking. This project has the following focus activities:

  • Cybersecurity: self-organizing VPN (virtual private network) links and name resolution by integrating infrastructures such as online social networks (to establish trust and store public-key cryptographic credentials) and decentralized overlays (for resource discovery and routing of IP packets);
  • Resource discovery in distributed systems: techniques to efficiently support self-configuring multicast trees and unstructured queries for resource discovery in overlay networks and IP-over-P2P virtual networks;
  • Applications in cloud computing: Integration with virtual machines and performance enhancements

To learn more about applications of the IPOP overlay software, visit our project websites: SocialVPN and Grid Appliance.

Also see the video demonstration below, which shows one of the products of the IPOP project in action:
SocialVPN demo

 

Real-time Scheduling of Ensemble Systems with Limited Resources

Autonomic management can typically be mapped to the well-known MAPE-K loop [1], whose components are monitor, analyze, plan, execute and knowledge. Each of these components can become more complex as the size or complexity of the system to be managed grows. Instead of having one model responsible for each component, a collection or ensemble of computational models is often used as a divide-and-conquer approach to simplify the implementation and improve the overall performance of the entire system. For example, to enable a self-caring IT system, multiple models for different kinds of faults can be used for the analyze and plan components. As a result, complex, ensemble-based systems generally require a larger number of resources to support their operations. This project aims to derive scheduling mechanisms that allow such systems to operate with limited resources while still achieving acceptable and predictable performance.

A generic Mixed Real-Time Task (MixRTT) scheduling framework [5] is proposed as a solution to this problem. Each computational model in an ensemble system, typically referred to as an expert, is a real-time task. Based on the ensemble's aggregation policy, a subset of experts is selected as winners whose outputs contribute to the generation of the final system output. MixRTT treats winner experts as hard real-time tasks whose deadlines cannot be missed, while non-winners are soft real-time tasks executed based on available resources and application policies. To schedule experts with limited resources, MixRTT utilizes three main components: a task utilization adaptor (TUA), a real-time scheduler (RTS) and a task priority predictor (TPP). The TUA adjusts the allocation of resources to tasks according to their priorities, determined by an application policy such as a learning objective, so that the entire ensemble is schedulable by the RTS using the available resources. The RTS creates schedules for the experts so that they achieve the allocated resource utilization in each cycle. Since the expert priorities used by the TUA generally become available only at the beginning of each cycle, the TPP estimates expert priorities ahead of time to reduce scheduling delay and, consequently, task deadline misses.

As a proof of concept, MixRTT was implemented using an earliest-deadline-first (EDF) scheduler as the RTS, the task compression algorithm [2, 3] with a sensitivity-analysis-based heuristic [4] as the TUA, and a responsibility predictor as the TPP. Experimental results from scheduling an ensemble system for agent position prediction in 2D space show ensemble performance comparable to that of a system with unlimited resources (output accuracy of 92 to 100 percent). Another potential application is 3D simulation with a large number of objects, where resources might be insufficient for updating all objects. In our preliminary experiments, a simulation of ten 3D objects using 60 percent of the typically required resources achieved noticeably better performance with the schedules generated by MixRTT than with those created without it.
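As a simplified, hedged sketch of the underlying idea (not the MixRTT code, and using a uniform scaling rule in place of the elastic task compression algorithm of [2, 3]), the example below keeps winner experts at their full utilization and compresses the non-winners so that the total stays within the EDF schedulability bound; the `Expert` fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    wcet: float      # worst-case execution time per cycle
    period: float    # cycle length (also the relative deadline)
    winner: bool     # selected by the ensemble's aggregation policy

def utilization(e): return e.wcet / e.period

def compress_soft_tasks(experts, bound=1.0):
    """Scale down the utilization of non-winner (soft) experts so the whole
    ensemble fits under the EDF schedulability bound. Winners keep full demand."""
    hard_u = sum(utilization(e) for e in experts if e.winner)
    soft_u = sum(utilization(e) for e in experts if not e.winner)
    slack = max(bound - hard_u, 0.0)
    scale = min(1.0, slack / soft_u) if soft_u > 0 else 1.0
    return {e.name: utilization(e) * (1.0 if e.winner else scale) for e in experts}

experts = [Expert("e1", 2, 10, True), Expert("e2", 3, 10, True),
           Expert("e3", 4, 10, False), Expert("e4", 5, 10, False)]
print(compress_soft_tasks(experts))   # soft experts share the remaining 0.5 utilization
```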

In conclusion, MixRTT shows strong potential for enabling the operation of ensemble-based autonomic systems with limited resources. Future work includes the analysis of task-level and ensemble-level timeliness, the mapping from application policies to the TUA's adaptation policies, and approximation algorithms that can further reduce the scheduling overhead incurred in the TUA.

References
1. J. O. Kephart and D. M. Chess, The Vision of Autonomic Computing, IEEE Computer, 36(1), 2003.
2. G. Buttazzo, G. Lipari, M. Caccamo and L. Abeni, Elastic Scheduling for Flexible Workload Management, IEEE Transactions on Computers, 51(3), 2002.
3. T. Chantem, X. S. Hu and M. D. Lemmon, Generalized Elastic Scheduling for Real-Time Tasks, IEEE Transactions on Computers, 58(4), 2009.
4. P. Rattanatamrong and J. A. B. Fortes, Real-Time Scheduling of Mixture-of-Experts Systems with Limited Resources, ACM International Conference on Hybrid Systems: Computation and Control, 2010.
5. P. Rattanatamrong and J. A. B. Fortes, Mixed Hard and Soft Real-Time Scheduling of Embedded Ensemble Systems, in submission.


Also see the following two demos of the real-time scheduler in action:

Demo #1: Movement in 2D space
Demo #2: 3D physics simulation

 

Improving Timer Accuracy in Virtualized Systems for Real-time Computing

Real-time systems are frequently used in defense, transportation, financial, medical and other applications. They require precise timing information which is hard to obtain when using virtual machines. One possible approach to addressing this problem is to automatically adjust how processors are allocated to time-critical processes at runtime in order to increase the timing accuracy of those processes as needed. This approach depends on being able to build autonomic capabilities into processor affinity management middleware.

The goals of this project are to devise processor allocation mechanisms for automatically adjusting processor affinity to time-critical processes that require accurate timing signals; to characterize timing accuracy achievable by these mechanisms in virtualized environments; and to study the robustness of these mechanisms in the presence of varying workloads and job mixes.
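The hedged sketch below illustrates one possible form of such a mechanism on a Linux guest: measure the worst-case timer error of a process and, if it exceeds a tolerance, pin the process to a dedicated core using the standard `os.sched_setaffinity` call. The jitter threshold, the choice of core, and the measurement method are illustrative assumptions, not the project's middleware.

```python
import os, time

JITTER_THRESHOLD_US = 200      # illustrative tolerance for timer error
DEDICATED_CPU = {1}            # assumption: core 1 is kept lightly loaded

def measure_timer_jitter_us(interval_s=0.01, samples=50):
    """Estimate timer inaccuracy by sleeping for a fixed interval and
    measuring how far the wake-up time drifts from the request."""
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval_s)
        error = abs(time.perf_counter() - start - interval_s) * 1e6
        worst = max(worst, error)
    return worst

def adjust_affinity(pid=0):
    """Pin the calling process to a dedicated core if its observed jitter
    exceeds the tolerance (Linux only; os.sched_setaffinity is unavailable elsewhere)."""
    jitter = measure_timer_jitter_us()
    if jitter > JITTER_THRESHOLD_US:
        os.sched_setaffinity(pid, DEDICATED_CPU)
    return jitter

if __name__ == "__main__":
    print(f"worst-case timer error: {adjust_affinity():.1f} us")
```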

 

Health Management of IT Infrastructures

Larger and more powerful computational infrastructures are becoming prevalent due to technological advances and increasing scientific and business needs. Ensuring healthy operation of these facilities despite their scale and complexity is necessary to successfully utilize these resources. 

This research focuses on the use of a modeling framework that will enable the systematic design, operation and health management of IT facilities.

 
UA Projects

Brief descriptions of ongoing projects at the CAC University of Arizona site are listed below. Additional information about these projects is available at the CAC at University of Arizona Web site.

Autonomic Power and Performance Management of Large-scale Data Centers

The goal of this project is to design an innovative autonomic framework and architecture to optimize performance/Watt in traditional server platforms and enclosures. In CAC's first year, we modeled and simulated the operation of an interleaved memory system and developed a runtime algorithm to automatically allocate memory blocks to applications based on their current requirements. Our evaluation shows that we can reduce power consumption by more than 48% without compromising the performance of the applications.
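Purely as a hedged illustration of the general idea (the project's interleaving-aware algorithm is not shown here), the sketch below packs per-application memory demands onto as few memory ranks as possible with a first-fit-decreasing heuristic, so that unused ranks could be placed in a low-power state; the rank size and application names are made up.

```python
def pack_onto_fewest_ranks(demands_mb, rank_size_mb=4096):
    """First-fit-decreasing packing of per-application memory demands onto
    memory ranks; ranks left unused can be placed in a low-power state.
    Purely illustrative -- not the project's interleaving-aware algorithm."""
    ranks = []                                   # remaining free space per active rank
    placement = {}
    for app, need in sorted(demands_mb.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(ranks):
            if free >= need:
                ranks[i] -= need
                placement[app] = i
                break
        else:
            ranks.append(rank_size_mb - need)    # activate a new rank
            placement[app] = len(ranks) - 1
    return placement, len(ranks)

demands = {"db": 3000, "web": 1200, "cache": 2500, "batch": 800}
print(pack_onto_fewest_ranks(demands))   # -> placement and number of active ranks
```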

Autonomic Network Defense (AND) System

The complexity, multiplicity, and impact of cyber attacks have been increasing at an alarming rate in spite of the significant research and development investment in cyber security products and tools. The current techniques to detect and protect cyber infrastructures from these smart and sophisticated attacks are mainly characterized as being ad hoc, labor-intensive, and too slow. We are developing an innovative intrusion detection system based on an alternative approach inspired by biological systems, which can efficiently handle complexity, dynamism and uncertainty. In the past year, we implemented a proof-of-concept prototype that demonstrates the ability of the AND system to detect and protect against any type of worm, denial-of-service, or scanning attack. Our detection rate is more than 99% with a very low rate of false alarms. This accurate detection rate and the infrequent false alarms are due to the use of a multi-level intrusion detection algorithm that integrates the behavior analysis results from several layers (application, transport, network, and link layer).

Autonomic High-productivity Computing

In recent years, a holistic view of high productivity computing (HPC) has been adopted to address raw productivity and specific system requirements such as high performance or high availability, rather than just traditional supercomputers or large clusters. HPC factors in all physical (e.g., computing resources, tools, and physical space), runtime-environment (e.g., workload changes, faults and automation) and human (e.g., programmers, skills and proficiency with tools) aspects in quantifying its merit. Although the majority of performance-targeted HPC deployments are expensive custom-built solutions, they still suffer from several pain points including, but not limited to, a minimal degree of autonomic capabilities, the efficiency and speed of data transfer, the lack of benchmarks to evaluate productivity, and the difficulty of migration to off-the-shelf commodity components.

In this research, we look at establishing a high productivity computing (HPC) testbed environment, defining HPC metrics and targeted benchmarks, and integrating autonomic middleware to enable third-party intelligence add-ons. The autonomic HPC-enabling technologies developed at the proposed HPC lab will enable us to efficiently: 1) optimize resource allocation in response to runtime workload changes for the best performance and/or throughput via support for manageability, virtualization, and dynamic runtime adaptation; 2) handle recovery from hardware and/or software failures; 3) autonomize this highly dynamic environment; and 4) support application workflows consisting of heterogeneous and coupled tasks/jobs through programming and runtime support for a range of computing patterns.

The ultimate goals of this project are: first, to make high productivity the overarching goal of high performance computing and to promote awareness of HPC utility; and second, to incorporate and expose autonomic middleware and tools in HPC environments to mitigate deployment and user constraints and enable automation through third-party intelligence. From the user's perspective, HPC utility refers to the value the user places on getting results through the faithful execution of their tasks and the guarantee of agreed-upon service-level agreements (SLAs); from the provider's perspective, it refers to dependably executing users' tasks while minimizing the provider's total cost of ownership (TCO). We will research, develop, test, evaluate and integrate specific autonomic capabilities and establish metrics and benchmarks for HPC. We will then measure and predict HPC efficiencies given the above constraints.

Autonomic Protection System for DNS Protocol (APS-DNS)

Nowadays the Internet is almost meaningless without the Domain Name System (DNS) protocol, which is used whenever one browses a website, sends an email or connects to a remote PC. Users trust the DNS protocol and assume it is secure, but it is not as secure as they think. The problem is that most DNS systems in use today are based on RFC 1034 and RFC 1035, which were written in 1987, when performance was the most challenging problem. Consequently, the DNS protocol is insecure and extremely vulnerable to exploitation by attackers who misuse security holes in network protocols. The importance of DNS security has led researchers to redesign the DNS protocol with security in mind, resulting in the DNSSEC protocol. However, since DNS is a distributed system, deploying any change to the protocol is very expensive; in addition, DNSSEC itself can be targeted by new attacks.

We propose an alternative approach based on autonomic computing: continuously monitoring and analyzing the behavior of the DNS protocol to detect any anomalous behavior that might be triggered by DNS attacks. In this project we are designing a DNS anomaly detection system that employs behavior analysis of the DNS protocol to define a model of normal protocol activity. Since most attacks deviate from the normal behavior of the protocol, any deviation from this model can be flagged as a potential threat. During the training phase, a pattern generator produces n-grams from a wide range of normal DNS traffic observed over a window of interval T; the frequencies of these n-grams define the normal usage profile. During the testing phase, the behavior analysis module analyzes protocol transition sequences and matches them against the normal behavior profile. When an attack exploits vulnerabilities in the DNS protocol, it typically generates illegal or abnormal transitions that can be detected by the DNS behavior analysis module.
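A minimal sketch of the n-gram profiling idea is shown below, assuming symbolic DNS protocol events as input; the event names, the n-gram length, and the unseen-fraction score are illustrative assumptions rather than the project's actual detector.

```python
from collections import Counter

def ngrams(events, n=3):
    """Sliding n-grams over a sequence of DNS protocol events
    (e.g., query/response types observed within a window of interval T)."""
    return [tuple(events[i:i + n]) for i in range(len(events) - n + 1)]

def train_profile(normal_traces, n=3):
    """Frequency profile of n-grams seen in normal DNS traffic."""
    profile = Counter()
    for trace in normal_traces:
        profile.update(ngrams(trace, n))
    return profile

def anomaly_score(trace, profile, n=3):
    """Fraction of n-grams in the observed trace never seen during training;
    a high score flags a potentially abnormal protocol transition sequence."""
    grams = ngrams(trace, n)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if profile[g] == 0)
    return unseen / len(grams)

# Illustrative event names -- the real system analyzes actual DNS transitions.
normal = [["query_A", "resp_A", "query_AAAA", "resp_AAAA"]] * 10
profile = train_profile(normal)
print(anomaly_score(["query_A", "resp_NXDOMAIN", "resp_NXDOMAIN", "query_A"], profile))
```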

Scale-Right Provisioning Architecture for Next Generation Data Centers (SRPA for NGDC)

The dramatic and unpredictable fluctuation in resource demand for real-time web applications calls for an elastic delivery of computing services. Current datacenter deployments have a strong tie between servers and running applications, which results in inefficiencies in terms of provisioning for multiple peak loads, optimal average resource utilization, autonomic features for accommodating varying runtime workloads, datacenter manageability, and control of the overhead on the datacenter Total Cost of Ownership (TCO). Novel research approaches in parallel and distributed computing and web services (e.g., Cloud Computing, Virtualization) have emerged as paradigms for utility maximization and cost minimization.

Inspired by these emerging technologies and motivated by datacenter inefficiencies, this research addresses the following: (i) understand workloads' resource requirements, including compute, memory, network and storage, as well as their constraints such as Service-Level Agreements (SLAs) and workload signatures; (ii) map each workload to the optimal set of physical or virtual resources, where "optimal" refers not only to performance and guaranteed SLAs, but also to minimal power envelopes and the absence of thermal hotspots during the operational period; and (iii) continuously monitor runtime workloads' resource requirements and scale resources up or down accordingly, keeping the above goals in mind.
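As a hedged, deliberately simplified example of step (iii) only, the sketch below keeps average utilization within a target band by adding or releasing instances; the thresholds and instance limits are assumptions, and the project's approach additionally considers SLAs, power envelopes and thermal hotspots.

```python
def scale_decision(observed_util, instances, target_low=0.4, target_high=0.75,
                   min_instances=1, max_instances=32):
    """Elementary scale-up/scale-down rule: keep average utilization inside a
    target band by adding or releasing instances. Thresholds are illustrative."""
    if observed_util > target_high and instances < max_instances:
        return instances + 1
    if observed_util < target_low and instances > min_instances:
        return instances - 1
    return instances

# One step of the monitoring loop: 85% average utilization on 4 instances.
print(scale_decision(0.85, 4))   # -> 5
```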

Anomaly Based HTTP Attack Detection System

HTTP has become a universal transport protocol, used for file sharing, web services, media streaming, payment processing, and even for protocols such as SSH. With the advent of Web Services and Cloud Computing technologies, more and more businesses are being hosted on the Internet, meaning that we can expect increased use of HTTP in the future. There have been many application-level attacks using HTTP in the past, and new attacks are emerging continually. This work aims at developing a robust anomaly-based HTTP attack detection system. Our current prototype collects data, trains the system on the collected data and then observes the network for any deviation from normal traffic behavior. Currently we consider only HTTP headers and use multiple features to capture the behavior. The RIPPER association rule generation technique is employed to build normal and abnormal profiles. We have performed small-scale testing with an attack library consisting of 15 attacks. The initial results are very encouraging, with over a 90% detection rate and very few false negatives. We are currently refining our detection approach to further improve the performance of the system by including temporal behavior analysis and using a richer attack library.

 
RU Projects

The Rutgers University CAC site houses the following ongoing projects. Additional information about these projects is available at the CAC at Rutgers Web site.

 

Adaptive policy application for autonomic system management using decentralized online clustering

Autonomic techniques based on dynamic policy application provide a powerful and promising approach for the effective management of distributed computational infrastructures, by reducing management complexity and allowing human administrators to focus primarily on the definition of these policies at a high level. However, these high-level policies (which we refer to as meta-policies) are typically defined with static constraint thresholds and are either associated with specific system goals or with known states of the managed entities, obtained through feedback from events or actions. This limits their applicability in situations where the appropriate management actions depend on dynamic system properties, which require adapting application thresholds and parameters without modifying absolute policy definition constraints.

This project addresses the gap that exists between goal-driven meta-policies expressed in terms of these absolute constraints and the actual thresholds on operational parameters of low-level policies (simply, policies) that must be applied so that these constraints are met. The main contributions of this research project are: 1) a conceptual framework for meta-policy definition in terms of event-based descriptions of system state (clustering profiles), and 2) a mechanism for dynamic policy generation based on a mapping of system states to an agglomeration of patterns in run-time events.
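The toy sketch below is only meant to make the second contribution concrete: a low-level policy threshold is derived from the distribution of recent run-time events in a clustering profile instead of being hard-coded. The percentile mapping, the profile fields and the action name are our illustrative assumptions, not the project's mechanism.

```python
def derive_threshold(event_values, meta_constraint_fraction=0.9):
    """Toy mapping from a goal-driven meta-policy ("keep the top ~10% of load
    events in check") to a concrete low-level policy threshold, derived from
    recent run-time events rather than a static constant."""
    ordered = sorted(event_values)
    index = int(meta_constraint_fraction * (len(ordered) - 1))
    return ordered[index]

def generate_policy(cluster_profile):
    """Produce a low-level policy (metric, operator, threshold, action) for one
    clustering profile of system state. Structure is illustrative only."""
    return {"metric": cluster_profile["metric"],
            "operator": ">",
            "threshold": derive_threshold(cluster_profile["samples"]),
            "action": cluster_profile["action"]}

profile = {"metric": "cpu_load", "action": "migrate_vm",
           "samples": [0.35, 0.42, 0.51, 0.48, 0.77, 0.81, 0.6, 0.9, 0.55, 0.62]}
print(generate_policy(profile))
```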

 

Autonomic Data Streaming and In-transit Processing

Emerging enterprise and grid applications deploy complex, end-to-end application workflows, which connect interacting components and integrate services that are distributed in space and time, on widely distributed environments. Couplings and interactions, e.g., data or parameter exchanges, between components and services in these applications are varied, data intensive and time critical. As a result, high-throughput, low-latency data acquisition, data streaming and in-transit data manipulation become critical requirements. This project studies the problem of autonomic data streaming and addresses these requirements at three levels: (1) at the data acquisition level through support for data extraction from running applications using advanced network capabilities to minimize the overheads, as well as the impact of I/O operations on application execution, (2) at the data sharing level through a virtual shared data space that supports associative accesses from different components/services and flexible data querying and data processing (e.g., reduction, min, max, data redistribution, range querying, etc.), and (3) at the data transport level using efficient data streaming over wide-area networks with in-transit data processing, to satisfy strict end-to-end data coupling constraints.

DART: Data extraction from running applications is implemented using DART. DART is a high-throughput, low-latency asynchronous communication framework built on the Remote Direct Memory Access (RDMA) communication paradigm for advanced network interconnections such as HyperTransport-2 and InfiniBand. It provides a flexible API for truly asynchronous communications that allows an application to overlap computations with communications and thus reduce the CPU overhead spent in I/O operations. DART provides higher application throughputs, minimizes the overhead of I/O operations, overlaps computation with communication, and increases the CPU availability for an application.

ADAPT: ADAPT provides services for high-throughput data streaming and in-transit data manipulation, and provides the mechanisms as well as the management strategies for large-scale data-intensive scientific and engineering workflows. The ADAPT architecture for autonomic data streaming and in-transit processing focuses on scheduling the in-transit processing of data using available resources while ensuring that the end-to-end QoS constraints are satisfied and the data arrives at the sink “in-time”. The specific research was driven by the requirements of the data-intensive workflows associated with coupled fusion simulations and focused on the definition of a “slack metric” that estimates the time between when the data was produced and when it is required at the sink, and determines the amount of processing that can be performed in transit. The approach is a two-level strategy: slack is estimated in an end-to-end manner between the source and sink based on prior interactions, and this slack is then used by the in-transit nodes to make provisioning and processing/forwarding decisions. The slack, along with the application data generation rates, is also used to determine the size of the in-transit node overlay.
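A hedged sketch of the slack idea follows: the available in-transit budget is the time between data production and when the sink requires it, minus the expected remaining transfer time, and optional processing stages are performed only while that budget lasts. The function names, stage costs and greedy rule are illustrative assumptions, not ADAPT's scheduler.

```python
def in_transit_budget(produced_at, required_at_sink, expected_transfer_s):
    """Slack available to an in-transit node: the time between data production
    and when the sink needs it, minus the expected remaining transfer time.
    The actual ADAPT slack metric is estimated end-to-end from prior interactions."""
    return (required_at_sink - produced_at) - expected_transfer_s

def processing_decision(slack_s, per_stage_cost_s, stages):
    """Greedily perform as many optional processing stages as the slack allows;
    forward the data onward once the budget is exhausted."""
    done = []
    for stage in stages:
        cost = per_stage_cost_s[stage]
        if slack_s >= cost:
            slack_s -= cost
            done.append(stage)
    return done

slack = in_transit_budget(produced_at=0.0, required_at_sink=2.0, expected_transfer_s=0.8)
print(processing_decision(slack, {"filter": 0.3, "reduce": 0.5, "reindex": 0.6},
                          ["filter", "reduce", "reindex"]))   # -> ['filter', 'reduce']
```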

DataSpaces: DataSpaces is a data sharing framework that enables dynamic and asynchronous application interactions. It provides the abstraction of a virtual, semantically specialized shared space that can be associatively and asynchronously accessed using simple yet powerful and flexible operators (e.g., put() and get()) with appropriate data selectors or filters. These operators are agnostic of the location, e.g., source/destination, as well as the data distribution and decomposition of the interacting application components. It also provides a runtime system for “in-the-space” data manipulation and/or reduction, using predefined or customized, user-defined functions, which can be dynamically downloaded and executed at runtime while the data is in transit through the space. DataSpaces has an extensible architecture and can provide new data services, e.g., data subscription and notification.

The DataSpaces framework provides flexible, decoupled and asynchronous data sharing semantics that enables interactions between multiple distributed application services. It easily integrates into the data pipeline of data workflow engines to complement or replace the more traditional file-based approaches. It alleviates the performance penalties associated with these approaches, e.g., latency, variability, by providing transparent and memory-to-memory data sharing.
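To make the put()/get() abstraction concrete, the following toy, in-memory stand-in (explicitly not the DataSpaces API) shows producers inserting tuples with descriptive keys and a consumer retrieving them associatively with a selector, without either side knowing the other's location or data decomposition.

```python
class ToySharedSpace:
    """In-memory stand-in for an associatively accessed shared space: producers
    put() tuples with descriptive keys, consumers get() by selector, without
    knowing each other's location or data decomposition. Not the DataSpaces API."""
    def __init__(self):
        self._items = []

    def put(self, key, value):
        self._items.append((key, value))

    def get(self, selector):
        """Return (and remove) all items whose key matches the selector predicate."""
        matched = [(k, v) for k, v in self._items if selector(k)]
        self._items = [(k, v) for k, v in self._items if not selector(k)]
        return matched

space = ToySharedSpace()
space.put({"var": "pressure", "step": 10, "region": (0, 64)}, [1.2, 1.3])
space.put({"var": "pressure", "step": 11, "region": (0, 64)}, [1.4, 1.5])
# Consumer asks for all pressure data from step 11 onward, regardless of producer.
print(space.get(lambda k: k["var"] == "pressure" and k["step"] >= 11))
```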

 

Autonomic Computing Engines

Consolidated and virtualized cluster-based computing centers have become dominant computing platforms in industry and research for enabling complex and compute-intensive applications. However, as scales, operating costs, and energy requirements increase, maximizing the efficiency, cost-effectiveness, and utilization of these systems becomes paramount. Furthermore, the complexity, dynamism, and often time-critical nature of application workloads make on-demand scalability, integration of geographically distributed resources, and incorporation of utility computing services extremely important. Finally, the heterogeneity and dynamics of the system, application, and computing environment require context-aware dynamic scheduling and runtime management.

 

Autonomic workflow management in dynamically federated, hybrid cloud infrastructures using CometCloud

Public clouds have emerged as an important resource class, enabling the renting of resources on demand and supporting a pay-as-you-go pricing policy. Furthermore, private clouds and data centers are exploring the possibility of scaling out to public clouds to respond to unanticipated resource requirements. As a result, dynamically federated, hybrid cloud infrastructures that integrate private clouds, enterprise datacenters and grids, and public clouds are becoming increasingly important. Such federated cloud infrastructures also provide opportunities to improve application quality of service by allowing application tasks to be mapped to appropriate resource classes. For example, typical application workflows consist of multiple application stages, which in turn can be composed of different application components with heterogeneous computing requirements in terms of the complexity of the tasks, their execution time, and their data requirements. Managing and optimizing these workflows on dynamically federated hybrid clouds can be challenging, especially since it requires simultaneously addressing resource provisioning, scheduling and mapping while balancing QoS with costs.

We explore autonomic approaches to address these challenges, and describe an autonomic framework that provides programming abstractions and runtime services to support complex application workflows. This framework is implemented on top of the CometCloud autonomic cloud engine, which supports dynamic federation, autonomic cloud bursting to scale out to public clouds, and cloud bridging to integrate multiple datacenters, grids and clouds. We also use real-world enterprise application workflows executing on a real federated hybrid cloud composed of three public and private clouds (ACS, Amazon EC2, Rutgers cluster), to demonstrate the operation of the framework and to evaluate its performance. 

The key components of the framework are the following:

  • Web server: All services for workflow submission, result retrieval, and resource and workflow status monitoring are provided as web services. The user submits requests to the web server, which forwards them to the workflow manager.
  • Workflow manager: Handles most requests coming from the web server. When a workflow is submitted, it calls the autonomic scheduler to schedule resources based on user objectives. It inserts workflow stages into CometCloud, collects results from the agents, and monitors and manages workflow status.
  • Autonomic scheduler: Schedules cloud types and computes the number of nodes required to achieve the user's objective. It maintains a resource view in terms of node capabilities and makes scheduling decisions considering node availability. It provisions resources for each stage based on the resource requests from agents.
  • Agent: Each agent manages an application; the number of agents changes based on the application load. An agent pulls the tasks (stages) it is responsible for from CometCloud and sends results to the workflow manager when a task is completed. Each agent manages its local resource view and sends resource provisioning or release requests to the autonomic scheduler based on the state of the local resources it manages.
  

Accelerating Hadoop/MapReduce for Heterogeneous Moderate-Sized Datasets using CometCloud - Deploying real-world applications 

The objectives of this research are to (1) deploy three real-world applications from BMS (Protein Data Bank, MapDistances and ScorePose) and evaluate the performance of the applications as well as of MapReduce-CometCloud, and (2) develop an interface to support multi-threaded workers on multi-processor nodes. In this research we use CometCloud and its services to build a MapReduce infrastructure that addresses the above requirements. CometCloud is a decentralized (peer-to-peer) computational infrastructure that supports distributed applications with asynchronous coordination and communication requirements. Specifically, we use CometCloud to enable pull-based scheduling of Map tasks as well as stream-based coordination and data exchange. Also, since many nodes have multiple processors, we developed a multi-threaded worker interface to maximize the utilization of multi-processor nodes. A representative worker takes responsibility for communicating with the Comet space to pull tasks and with the master to send results, while the other worker threads concentrate on computation.

We deployed the real-world applications using the CometCloud-based MapReduce/Hadoop framework on the BMS cluster as well as the Rutgers campus cluster. The applications ran with multi-threaded workers and demonstrated the resulting performance improvement. Overall, the CometCloud-based MapReduce solution can accelerate computations on heterogeneous, medium-sized datasets by delaying or avoiding the use of distributed file reads and writes, and it can accelerate them further by enabling multi-threaded workers. In this research we have shown preliminary results; ongoing efforts focus on an extended evaluation of application performance in various aspects and of the MapReduce/Hadoop-CometCloud overhead. We are also working on an event-notification-based task-pull consumption model and on load balancing in Hadoop-CometCloud.
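The sketch below illustrates the representative-worker pattern in generic Python threading terms: one thread pulls tasks and hands them to computation-only threads through a local queue, which here stands in for the Comet space. The task list, the squaring "computation" and the poison-pill shutdown are illustrative assumptions, not the CometCloud implementation.

```python
import queue, threading

def representative(task_source, local_queue, n_compute_threads):
    """Single thread that talks to the coordination space: pulls the tasks the
    node is responsible for and hands them to local computation threads.
    Here a plain list stands in for the Comet space."""
    for task in task_source:
        local_queue.put(task)
    for _ in range(n_compute_threads):
        local_queue.put(None)            # poison pills to stop the workers

def compute_worker(local_queue, results):
    """Computation-only thread: processes tasks and records results."""
    while (task := local_queue.get()) is not None:
        results.append((task, task * task))   # placeholder map computation

tasks, results, local_q = list(range(8)), [], queue.Queue()
threads = [threading.Thread(target=compute_worker, args=(local_q, results)) for _ in range(3)]
for t in threads: t.start()
representative(tasks, local_q, n_compute_threads=3)
for t in threads: t.join()
print(sorted(results))
```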

 

Sensor-based Autonomic Monitoring of Datacenters

Due to the increasing demands for computing and storage, energy consumption, heat generation, and cooling requirements have become critical concerns in datacenters, both in terms of the growing operating costs (power and cooling) and their environmental and societal impacts. Many current datacenters do not follow a sustainable model of energy consumption growth, as the rate at which computing resources are added exceeds the available and planned power capacities. One of the fundamental problems in datacenters is uneven heat generation and heat imbalance, which may lead to thermal hotspots, CPU temperature increases, and thermal runaway.

In synergy with CAC researchers, the CPS Lab is developing and validating information fusion algorithms on a wireless infrastructure for temperature and heat profiling in datacenters (Figure 3). This sensing infrastructure is composed of low-power sensors (collecting temperature, humidity and airflow velocity) and infrared cameras (collecting thermal images). The intelligent system, which will leverage measurements collected and shared across the three CAC sites, is expected i) to collect heterogeneous measurable data, ii) to process it and generate information (e.g., the heat produced by CPUs and conducted/radiated in the datacenter, as well as the location of hotspots), and iii) to acquire knowledge (e.g., the estimated heat to be extracted by the AC system and, hence, the heat imbalance). With this knowledge, which allows us to ‘predict’ temperature increases, we will design smart thermal-aware, energy-efficient workload distribution strategies and closed-loop proactive AC controllers to optimize i) the AC compressor duty cycles (which control the temperature of the cold air) and ii) the fan speeds (which control the air circulation).
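As a hedged toy example of what a thermal-aware workload distribution strategy could look like (not the strategies under development, which use fused sensor and infrared-camera data), the sketch below assigns the heaviest jobs to the coolest racks while skipping racks above a temperature limit; the temperatures, the limit and the crude heating estimate are assumptions.

```python
def place_jobs(job_loads, rack_temps_c, temp_limit_c=27.0):
    """Toy thermal-aware placement: assign the heaviest jobs to the coolest
    racks, skipping racks already above the temperature limit."""
    placement = {}
    candidates = sorted((t, r) for r, t in rack_temps_c.items() if t < temp_limit_c)
    for job, load in sorted(job_loads.items(), key=lambda kv: -kv[1]):
        if not candidates:
            break
        temp, rack = candidates.pop(0)                 # coolest remaining rack
        placement[job] = rack
        candidates.append((temp + 0.5 * load, rack))   # crude heating estimate
        candidates.sort()
    return placement

print(place_jobs({"j1": 8, "j2": 5, "j3": 2},
                 {"rackA": 22.5, "rackB": 24.0, "rackC": 27.5}))
```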

 

Real-time Activity Recognition and Monitoring of Vital Signs using Wireless Body Area Network (WBAN)
 
Researchers at the CPS lab are investigating the communication, data fusion and information processing challenges associated with Wireless Body Area Networks (WBANs) for bio-medical applications. A topic currently being studied is the real-time online correlation of human physical activities and vital signs (e.g., ECG) remotely collected using a WBAN consisting of inertial sensors (accelerometers and gyroscopes) and bio-medical sensors (ECG sensors). Online correlation of physical activities with vital signs will have a tremendous impact on inexpensive solutions for remote diagnosis and health monitoring in the developing world. Human physical activities will be recognized on the fly using a novel window-based algorithm that employs machine learning (in particular, Support Vector Machines) on data collected from the inertial sensors. Simultaneously, information (e.g., heart rate variability) will be extracted from vital signs (e.g., ECG) collected using the bio-medical sensors. The recognized physical activity will be correlated with metrics derived from the vital signs for a preliminary diagnosis of a person's health status. For example, switching from a state of rest to exertion should be accompanied by a noticeable variation in heart rate; otherwise, the person is flagged as potentially unhealthy.
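A minimal sketch of window-based activity recognition with a Support Vector Machine is shown below, using synthetic accelerometer data and scikit-learn; the window length, the mean/standard-deviation features and the two activity classes are illustrative assumptions, not the project's algorithm or dataset.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(samples, window=50, step=25):
    """Per-window mean and standard deviation of each accelerometer axis --
    a simple stand-in for the project's feature set."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        w = samples[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Synthetic 3-axis accelerometer data: "rest" (low variance) vs "walk" (high variance).
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.05, size=(500, 3))
walk = rng.normal(0.0, 0.6, size=(500, 3)) + np.sin(np.arange(500))[:, None]
X = np.vstack([window_features(rest), window_features(walk)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

clf = SVC(kernel="rbf").fit(X, y)                 # window-based SVM classifier
print(clf.predict(window_features(walk[:100])))   # expected mostly class 1 ("walk")
```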