Research Interests

My main research interests are in the area of parallel computing, with particular focus on high-level parallel programming models. I am mainly interested in:

  1. exploring mechanisms that enable software systems to take advantage of concurrency and parallelism to improve performance;
  2. studying and developing new algorithms and methodologies that simplify the development of complex parallel programs.

Motivations

In almost all computing domains, the explosion in the availability of parallel computing resources signals a significant change: improving performance is no longer the responsibility of the system designer alone; instead, it has increasingly become the responsibility of the application programmer. Now, more than ever, it is critical that the research community make significant progress toward making the development of parallel code accessible to all programmers, rather than allowing parallel programming to remain the domain of specialised expert programmers.

While the advent of multi-core processors has alleviated several problems related to single-core processors (i.e. the memory wall, the power wall, and the instruction-level parallelism wall), it has (re-)raised the issue of the programmability wall. On the one hand, parallel program development for multi-core processors, and particularly for heterogeneous ones, is significantly more complex than developing applications for a single-processor system. On the other hand, programmers have traditionally been trained to develop sequential programs, and only a few of them have experience with parallel programming.

Current approaches to exploiting multi-core parallelism mainly rely on sequential flows of control (threads), with the sharing of data coordinated through locks. Locks serialise access to critical regions, and this abstraction is not a good fit for reasoning about parallelism. I believe that high-level parallel abstractions are the key. The urgent task now is to replace low-level parallel programming abstractions with higher-level concepts that are a more natural fit for parallelism. Achieving this goal requires rethinking the technology stack, from the Operating System level up to the language level.
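To make the contrast concrete, the sketch below sums a vector first with explicit threads and a lock, then with a single high-level parallel reduction (C++17's std::reduce with a parallel execution policy). The helper names are mine and purely illustrative; the point is how much coordination detail the low-level version forces on the programmer.

    #include <algorithm>
    #include <execution>
    #include <mutex>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Low-level style: explicit threads and a lock guarding shared state.
    long sum_with_locks(const std::vector<long>& v, unsigned nthreads) {
        long total = 0;
        std::mutex m;                              // serialises access to 'total'
        std::vector<std::thread> workers;
        const std::size_t chunk = v.size() / nthreads + 1;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t lo = t * chunk;
                const std::size_t hi = std::min(v.size(), lo + chunk);
                long local = 0;
                for (std::size_t i = lo; i < hi; ++i) local += v[i];
                std::lock_guard<std::mutex> g(m);  // critical region
                total += local;
            });
        }
        for (auto& w : workers) w.join();
        return total;
    }

    // High-level style: the same computation as one declarative reduction.
    long sum_with_reduce(const std::vector<long>& v) {
        return std::reduce(std::execution::par, v.begin(), v.end(), 0L);
    }

The second version leaves partitioning, scheduling and synchronisation to the run-time system, which is exactly the kind of separation of concerns a higher-level programming model should provide.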

Several authors have recognised that structured parallel programming represents a viable means of improving the efficiency of the entire process of designing, implementing and tuning parallel applications. Algorithmic skeletons and parallel design patterns were proposed in completely disjoint research communities, but with almost the same objective: providing the programmer of parallel applications with an effective programming environment. The two approaches have many similarities, addressed at different levels of abstraction. They aim to simplify the application programmer's task and to make the whole application development process more efficient by providing composable building blocks, so that complex applications may be implemented by composing certified abstractions rather than by designing ad-hoc solutions, and by supporting functional and performance portability across different target architectures.

These features raise the level of abstraction of the mechanisms exposed to the application programmer and distinguish structured parallel programming frameworks and models from more traditional, low-level approaches, where the separation-of-concerns design principle is not satisfied at all. Furthermore, with such low-level approaches, the solutions identified when implementing one parallel application are not readily reusable when parallelising another application, or when moving the application to a different target architecture.

Current and Future Research Directions

I am currently involved in the development of FastFlow, a parallel programming framework based on algorithmic skeletons/parallel design patterns. FastFlow is currently used as a run-time framework in the FP7 projects ParaPhrase and REPARA, as well as in the H2020 project RePhrase.
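As a concrete illustration of the composable building blocks discussed above, the sketch below assembles a three-stage FastFlow pipeline (stream generation, squaring, accumulation). It is written from memory in the style of FastFlow's classic ff_pipeline/ff_node interface, so the exact API details should be checked against the current release.

    #include <iostream>
    #include <ff/pipeline.hpp>
    using namespace ff;

    // First stage: generates a stream of 100 tasks, then signals end-of-stream.
    struct Generate: ff_node {
        void* svc(void*) {
            for (long i = 1; i <= 100; ++i) ff_send_out(new long(i));
            return EOS;                      // end-of-stream marker
        }
    };

    // Second stage: squares each task it receives and forwards it downstream.
    struct Square: ff_node {
        void* svc(void* t) {
            long* x = static_cast<long*>(t);
            *x = *x * *x;
            return x;
        }
    };

    // Last stage: accumulates results; GO_ON keeps the stage alive.
    struct Accumulate: ff_node {
        long sum = 0;
        void* svc(void* t) {
            long* x = static_cast<long*>(t);
            sum += *x;
            delete x;
            return GO_ON;
        }
        void svc_end() { std::cout << "sum = " << sum << "\n"; }
    };

    int main() {
        Generate g; Square s; Accumulate a;
        ff_pipeline pipe;                    // skeletons compose: any stage could
        pipe.add_stage(&g);                  // itself be a farm or a pipeline
        pipe.add_stage(&s);
        pipe.add_stage(&a);
        return pipe.run_and_wait_end() < 0 ? 1 : 0;
    }

Because a pipeline stage may itself be a farm or another pipeline, the same few constructors suffice to express much richer compositions.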

My long-term research goal is to contribute to the definition of new methods, algorithms and software systems that can harness the full potential of current and forthcoming heterogeneous computing platforms, enabling the easy development of high-performance, large-scale parallel applications.


Ongoing Projects

The focus of the RePhrase project is on producing new software engineering tools, techniques and methodologies for developing data-intensive applications in C++, targeting heterogeneous multicore/manycore systems that combine CPUs and GPUs into a coherent parallel platform.

Completed Projects

* REPARA (EU STREP FP7): Reengineering and Enabling Performance And poweR of Applications. REPARA Project Home (started 1-09-2013, 36 months)
* ParaPhrase (EU STREP FP7): Parallel Patterns for Adaptive Heterogeneous Multicore Systems. ParaPhrase's Project Home (started 1-10-2011, 42 months)