Research projects
I am currently involved in the RePhrase H2020 European project (website: http://rephrase-ict.eu/).
Editorial Activity
- Local Chair at the 24th International European Conference on Parallel and Distributed Computing (Euro-Par 2018), “Theory and Algorithms for Parallel Computation and Networking” track.
- Program Chair of the Special Session on “Smart Home and E-Health to improve the Quality of Life” at the 3rd EAI International Conference on Smart Objects and Technologies for Social Good (GoodTechs 2017).
- PC Member and Publicity Chair of the 1st International Workshop on Autonomic Solutions for Parallel and Distributed Data Stream Processing (Auto-DaSP 2017), held in conjunction with the Euro-Par 2017 conference.
Research interests
My principal research interests are related to Parallel and Distributed Computing, with a particular focus on Parallel Data Stream Processing, High-Level Parallel Programming and Energy Awareness.
Parallel and Adaptive Data Stream Processing
Nowadays we are living through an information revolution. The amount of data generated by automatic sources such as sensors, infrastructures and stock markets, or by human interactions via social media, is constantly growing. The numbers of this data deluge are impressive: every day 2.5 exabytes of data are created, so much that 90% of the data in the world today has been created in the last two years alone 1) . Furthermore, these numbers are expected to keep growing, driven by an ever-increasing adoption of sensors, towards tens of billions of Internet-connected devices over the next few years.
The possibility of gathering and analyzing all this data to extract insights and detect trends is clearly a valuable opportunity for scientific and business applications. Data Stream Processing (DaSP) is a recent and highly active research area dealing with the processing of such streaming data. Several important on-line and real-time applications can be modelled as DaSP applications, including network traffic analysis, financial trading, data mining and many others.
The development of DaSP applications poses several challenges, ranging from efficient algorithms for the computation to programming and runtime systems that support their execution. In this field, my research interests focus on two main problems that these applications must face:
- need for high performance: high throughput and low latency are critical requirements for DaSP problems. Applications need to exploit parallel hardware and distributed systems, such as multi-/many-cores or clusters of multicores, in an effective way;
- dynamicity: due to their long-running nature (24hr/7d), DaSP applications are affected by highly variable arrival rates and changes in their workload characteristics. Adaptivity is a fundamental feature in this context: applications must be able to autonomously scale the resources they use to accommodate dynamic requirements and workloads, while maintaining the desired Quality of Service (QoS) in a cost-effective manner.
Starting with my PhD research, I have been focusing on the study and proposal of efficient solutions to these problems. Common and recurrent problems (for example, windowed operators) have been (and are currently being) studied, and re-usable solutions can be proposed for inter-operator parallelism exploitation, considering multi-/many-core CPUs and GPUs as target platforms. Furthermore, the knowledge of the communication/computation pattern implied by well-known parallelization schemes allows the development of reconfiguration mechanisms that can be encapsulated in the runtime support, and of adaptation strategies that may benefit from performance models to achieve the desired QoS requirements. In my Ph.D. thesis I investigated Model Predictive Control approaches to devise scaling strategies with well-known properties of stability, QoS assurance and cost awareness (e.g., energy-aware strategies).
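As a concrete, deliberately simplified illustration of this kind of strategy, the sketch below selects, at each control step, the number of operator replicas that minimizes a cost combining predicted QoS violations and resource usage over a short prediction horizon. The latency model, the cost weights and all constants are illustrative assumptions, not the actual models and strategies developed in the thesis.

```cpp
// Minimal sketch of a model-predictive scaling decision for a parallel
// stream operator. The latency model (an M/M/1-like approximation with
// aggregated service capacity) and all constants are assumptions.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

// Predicted latency with n replicas, arrival rate lambda (tuples/s)
// and per-replica service rate mu (tuples/s). Infinity if unstable.
double predicted_latency(int n, double lambda, double mu) {
    double capacity = n * mu;
    if (capacity <= lambda) return std::numeric_limits<double>::infinity();
    return 1.0 / (capacity - lambda);   // crude queueing estimate
}

// Pick the replica count minimizing a cost that trades off QoS violations
// against used resources, over a horizon of predicted arrival rates.
int choose_replicas(const std::vector<double>& predicted_lambda,
                    double mu, double latency_target,
                    int max_replicas, double w_qos, double w_res) {
    int best_n = 1;
    double best_cost = std::numeric_limits<double>::infinity();
    for (int n = 1; n <= max_replicas; ++n) {
        double cost = 0.0;
        for (double lambda : predicted_lambda) {
            double lat = predicted_latency(n, lambda, mu);
            double violation = std::isinf(lat) ? 1e6
                               : std::max(0.0, lat - latency_target);
            cost += w_qos * violation + w_res * n;
        }
        if (cost < best_cost) { best_cost = cost; best_n = n; }
    }
    return best_n;
}

int main() {
    // Two-step prediction horizon: the arrival rate is expected to grow.
    std::vector<double> horizon = {8000.0, 12000.0};   // tuples per second
    double mu = 2500.0;                                // tuples/s per replica
    int n = choose_replicas(horizon, mu, 0.01 /* 10 ms target */, 16, 1.0, 0.001);
    std::cout << "replicas to use: " << n << "\n";
    return 0;
}
```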
High Level Parallel Programming
In recent years parallelism has become pervasive in our lives. Everyday devices, such as smartphones and laptops, are equipped with multi-/many-core CPUs. Therefore, the ability to efficiently exploit this computational power is mandatory in real-life applications. On the other hand, this clearly raises problems from a programmability point of view.
Historically, parallel programmers have resorted to hand-made parallelization and low-level libraries that, by giving complete control over the parallel application, allow them to manually optimize the code and exploit the architecture at its best. Besides hampering software productivity and increasing development time, such low-level approaches prevent code and performance portability. In this scenario, high-level approaches to expressing parallel programs have the advantage of making the programming effort less costly and less time-consuming. The programming environment offers high-level parallel constructs directly to programmers, who can use them to compose their parallel applications. This simplifies programming by raising the level of abstraction: the programmer concentrates on computational aspects, having only an abstract high-level view of the parallel program, while the most critical implementation choices are left to the programming tool and runtime support.
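As a minimal example of the abstraction level meant here, the following sketch implements a generic map pattern on top of plain C++ threads: the programmer supplies only the sequential function, while data partitioning and thread management stay inside the construct. It is an illustrative sketch under these assumptions, not the interface of any specific framework.

```cpp
// Minimal sketch of a high-level "map" parallel pattern: the user supplies
// only the sequential function; partitioning and thread management are
// handled by the construct. Illustrative only, not a real framework's API.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

template <typename T, typename F>
void parallel_map(std::vector<T>& data, F func, unsigned workers) {
    std::vector<std::thread> pool;
    std::size_t chunk = (data.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, data.size());
        if (begin >= end) break;
        // Each worker applies the user function to its own partition.
        pool.emplace_back([&data, func, begin, end] {
            for (std::size_t i = begin; i < end; ++i) data[i] = func(data[i]);
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<double> v(1'000'000, 1.5);
    // The business logic is purely sequential; parallelism is delegated
    // to the pattern and its runtime.
    parallel_map(v, [](double x) { return x * x + 1.0; },
                 std::max(1u, std::thread::hardware_concurrency()));
    std::cout << v[0] << "\n";
    return 0;
}
```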
Along these lines, one of my current research topics concerns the study and definition of parallel patterns to be exploited in recurrent real-life problems, as well as the development of runtime-system mechanisms enabling fine-grained parallelism on modern CPUs.
Energy Awareness in Parallel Computing
Power-efficient computing systems have drawn much attention in recent years, for both environmental and economic reasons. Clearly, this also has implications on how parallel programs exploit their execution platform. One of my research interests involves the study and design of power-aware management policies and runtime systems that allow the execution of parallel programs able to satisfy user requirements in terms of performance and power consumption. In a multi-core CPU, thanks to power management policies, the operating frequency of single cores or of groups of cores in a socket may be dynamically varied (frequency scaling), or single cores or groups of cores may be dynamically switched on and off. In this scenario, the choice of which parallelism degree to use (i.e. how many cores to switch or keep on) and of which frequency setting to use (i.e. what speed each of the powered cores should run at) is critical. On the other hand, a power-aware runtime system must be able to dynamically find and adopt the best application configuration while the application is running, in order to adapt the computation to changing execution conditions.
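To make this trade-off concrete, the sketch below enumerates (active cores, frequency) configurations under a simple assumed model, in which throughput grows with cores × frequency and dynamic power grows roughly with the cube of the frequency, and selects the least power-hungry configuration that still meets a required throughput. The model and its constants are illustrative assumptions only; a real runtime would rely on measured profiles and on-line monitoring.

```cpp
// Minimal sketch of a power-aware configuration choice: pick the pair
// (active cores, frequency) that meets a throughput requirement at minimum
// estimated power. Models and constants are assumptions, not a validated
// power model.
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

struct Config { int cores; double freq_ghz; };

// Assumed models: throughput scales (ideally) with cores * frequency;
// dynamic power scales roughly with cores * frequency^3.
double est_throughput(const Config& c, double tuples_per_ghz_core) {
    return c.cores * c.freq_ghz * tuples_per_ghz_core;
}
double est_power(const Config& c) {
    const double idle_per_core = 1.0;   // watts, assumed
    const double dyn_coeff = 2.0;       // watts per GHz^3, assumed
    return c.cores * (idle_per_core + dyn_coeff * std::pow(c.freq_ghz, 3));
}

Config choose_config(double required_throughput, int max_cores,
                     const std::vector<double>& freqs,
                     double tuples_per_ghz_core) {
    Config best{max_cores, freqs.back()};
    double best_power = std::numeric_limits<double>::infinity();
    for (int n = 1; n <= max_cores; ++n) {
        for (double f : freqs) {
            Config c{n, f};
            if (est_throughput(c, tuples_per_ghz_core) < required_throughput)
                continue;   // configuration cannot sustain the input rate
            double p = est_power(c);
            if (p < best_power) { best_power = p; best = c; }
        }
    }
    return best;
}

int main() {
    std::vector<double> freqs = {1.2, 1.6, 2.0, 2.4};   // available P-states (GHz)
    Config c = choose_config(50000.0 /* tuples/s */, 16, freqs, 2000.0);
    std::cout << "cores: " << c.cores << ", frequency: " << c.freq_ghz << " GHz\n";
    return 0;
}
```

Under this assumed model the cheapest feasible configuration tends to use many cores at a low frequency, reflecting the roughly cubic dependence of dynamic power on frequency.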