Research

Research Interests

My main research interests are in the area of parallel computing, with particular focus on high-level parallel programming models. I am mainly interested in:

  1. exploring mechanisms that enable software systems to take advantage of concurrency and parallelism to improve performance and reduce power consumption;
  2. studying and developing new algorithms and methodologies that simplify the development of complex parallel programs.

Motivations

In almost all computing domains, the explosion in the availability of parallel computing resources signals a significant change: improving performance is no longer the sole responsibility of the system designer; it has increasingly become the responsibility of the application programmer. Now, more than ever, it is critical that the research community make significant progress toward making the development of parallel code accessible to all programmers, rather than allowing parallel programming to remain the domain of specialised expert programmers.

While the advent of multi-core processors has alleviated several problems related to single-core processors (i.e. the memory wall, the power wall, and the instruction-level parallelism wall), it has (re-)raised the issue of the programmability wall. On the one hand, parallel program development for multi-core processors, and particularly for heterogeneous ones, is significantly more complex than developing applications for a single-processor system. On the other hand, programmers have traditionally been trained for the development of sequential programs, and only a few of them have experience with parallel programming.

Current approaches to exploiting multi-core parallelism mainly rely on sequential flows of control (threads), with data sharing coordinated through locks. Locks serialize access to critical regions, and this abstraction is not a good fit for reasoning about parallelism. I believe that high-level parallel abstractions are the key: the urgent task is to replace low-level parallel programming abstractions with higher-level concepts that are a more natural fit for parallelism. Achieving this goal requires rethinking the technology stack, from the operating system level to the language level.
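
As a concrete illustration, the sketch below contrasts the two styles on a trivial reduction: the first version manages threads, data partitioning and a lock explicitly, while the second expresses the same computation as a single high-level call. The high-level variant uses the C++17 parallel algorithms purely as an example of a higher-level abstraction, and it assumes a toolchain with parallel-algorithm support (e.g. a TBB back-end when compiling with GCC).

    // Illustration only: summing a vector with explicit threads and a lock,
    // versus a single high-level parallel reduction (C++17 parallel algorithms).
    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <mutex>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<long> v(1'000'000, 1);

        // Low-level version: the programmer manages partitioning, thread
        // lifetime and mutual exclusion explicitly.
        long sum = 0;
        std::mutex m;
        unsigned nw = std::max(1u, std::thread::hardware_concurrency());
        std::size_t chunk = (v.size() + nw - 1) / nw;
        std::vector<std::thread> threads;
        for (unsigned w = 0; w < nw; ++w) {
            threads.emplace_back([&, w] {
                long local = 0;
                std::size_t begin = w * chunk;
                std::size_t end   = std::min(v.size(), begin + chunk);
                for (std::size_t i = begin; i < end; ++i) local += v[i];
                std::lock_guard<std::mutex> lk(m);  // the lock serialises the update
                sum += local;
            });
        }
        for (auto& t : threads) t.join();

        // High-level version: one declarative call; no threads or locks are
        // visible to the application programmer.
        long sum2 = std::reduce(std::execution::par, v.begin(), v.end(), 0L);

        std::printf("%ld %ld\n", sum, sum2);
        return 0;
    }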

Several authors have recognized that structured parallel programming represents a viable means of improving the efficiency of the entire process of designing, implementing and tuning parallel applications. Algorithmic skeletons and parallel design patterns were proposed in completely disjoint research communities, but with almost the same main objective: providing the programmer of parallel applications with an effective programming environment. The two approaches have many similarities, addressed at different levels of abstraction. Both aim to simplify the application programmer’s task and to make the whole application development process more efficient by providing composable building blocks, so that complex applications can be implemented by composing certified abstractions rather than designing ad-hoc solutions, and by supporting functional and performance portability across different target architectures.

These features raise the level of abstraction of the mechanisms exposed to the application programmer and distinguish structured parallel programming frameworks and models from more traditional, low-level approaches, in which the separation-of-concerns design principle is not satisfied at all. Moreover, with such low-level approaches, the solutions identified when implementing one parallel application are not readily reusable when parallelising another application or when porting the application to a different target architecture.
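
To make the idea of a reusable building block concrete, here is a minimal sketch (illustrative only, not code taken from any of the frameworks discussed here) of a "map" pattern: the parallel structure is implemented and verified once, and application programmers reuse it by supplying only their sequential business logic.

    // Illustrative "map" building block: data partitioning and thread
    // management are implemented (and debugged) once, then reused everywhere.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    template <typename T, typename F>
    void parallel_map(std::vector<T>& data, F f,
                      unsigned nworkers = std::thread::hardware_concurrency()) {
        if (nworkers == 0) nworkers = 1;
        const std::size_t chunk = (data.size() + nworkers - 1) / nworkers;
        std::vector<std::thread> workers;
        for (unsigned w = 0; w < nworkers; ++w) {
            const std::size_t begin = w * chunk;
            const std::size_t end   = std::min(data.size(), begin + chunk);
            if (begin >= end) break;
            workers.emplace_back([&data, f, begin, end] {
                for (std::size_t i = begin; i < end; ++i) data[i] = f(data[i]);
            });
        }
        for (auto& t : workers) t.join();  // wait for all workers to finish
    }

    // Application code: only the business logic, no threads or synchronisation.
    int main() {
        std::vector<double> v(1'000'000, 2.0);
        parallel_map(v, [](double x) { return x * x + 1.0; });
        return 0;
    }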

Current and Future Research Directions

I am currently involved in the development of the FastFlow parallel programming framework, which is based on algorithmic skeletons/parallel design patterns. FastFlow has been used as the run-time system in the FP7 projects ParaPhrase and REPARA and in the H2020 project RePhrase.
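
To give a flavour of programming with the framework, the sketch below composes a three-stage pipeline whose middle stage is a task farm. It follows the style of the FastFlow tutorial examples, but the exact headers, class names and build flags are my assumptions and may differ between FastFlow versions.

    // Sketch in the style of the FastFlow tutorial (details may vary by version).
    // Build (roughly): g++ -std=c++17 -I<fastflow-root> -pthread pipe_of_farm.cpp
    #include <iostream>
    #include <memory>
    #include <vector>
    #include <ff/ff.hpp>   // assumed umbrella header (FastFlow 3)
    using namespace ff;

    // First stage: generates a stream of tasks.
    struct Source : ff_node_t<long> {
        long* svc(long*) {
            for (long i = 1; i <= 100; ++i) ff_send_out(new long(i));
            return EOS;                      // end of stream
        }
    };
    // Farm worker: processes one task at a time.
    struct Worker : ff_node_t<long> {
        long* svc(long* t) { *t = (*t) * (*t); return t; }
    };
    // Last stage: consumes results.
    struct Sink : ff_node_t<long> {
        long* svc(long* t) { sum += *t; delete t; return GO_ON; }
        long sum = 0;
    };

    int main() {
        Source source;
        Sink   sink;
        std::vector<std::unique_ptr<ff_node>> workers;
        for (int i = 0; i < 4; ++i) workers.push_back(std::make_unique<Worker>());
        ff_Farm<long> farm(std::move(workers));  // farm pattern as a building block
        ff_Pipe<>     pipe(source, farm, sink);  // composed into a pipeline
        if (pipe.run_and_wait_end() < 0) return 1;
        std::cout << "sum = " << sink.sum << "\n";
        return 0;
    }

The point is that the parallel structure is captured entirely by the composition pipe(source, farm(workers), sink): changing the number of workers or the nesting of patterns does not require touching the business logic inside the svc() methods.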

My long-term research goal is to contribute to the definition of new methods, algorithms and software systems that can harness the full potential of current and forthcoming heterogeneous computing platforms, enabling the easy development of high-performance, large-scale parallel applications.


Projects (completed):

  • RePhrase (EC-RIA, H2020, ICT-2014-1): Refactoring Parallel Heterogeneous Resource-Aware Applications – a Software Engineering Approach. RePhrase’s Project Home (2015, 36 months).
  • REPARA (EU STREP FP7): Reengineering and Enabling Performance And poweR of Applications. REPARA Project Home (1-09-2013, 36 months).
  • PARAPHRASE (EU STREP FP7): Parallel Patterns for Adaptive Heterogeneous Multicore Systems. ParaPhrase’s Project Home (1-10-2011, 42 months).
  • SMECY (European Project): Smart Multicore Embedded Systems. SMECY’s Project Home (2010, 36 months).
  • INSYEME (Italian MIUR, FIRB): Integrated System for Emergency (2007, 36 months).
  • FRINP (funded by Fondazione Cassa di Risparmio di Pisa): Reconfigurable Firewall for Network Processors (2007, 36 months).
  • SFIDA (Italian MIUR, FAR-ICT): Information Science Solutions for Supply-Chains, Districts and Associations of Small and Medium Enterprises. Partners: TXT e-solutions S.p.A., Consorzio Milano Ricerche, Università LUISS (2006, 30 months).
  • VirtuaLinux (funded by Eurotech SpA): Virtualized high-density clusters with no single point of failure (2006, 6 months).
  • Grid.it (Italian MIUR, FIRB): Enabling Platforms for High-Performance Computational Grids Oriented to Scalable Virtual Organizations. Partners: CNR (ISTI, ISTM, ICAR), INFN, CNIT, ASI (2003, 4 years).
  • SAIB (Italian MIUR, Industrial Research): Internet-based Banking. Partner: Atos Origin (2001, 3.5 years).
  • Progetti strategici legge 449/97 anno 2000 (Italian MIUR): High-performance Components for Data/Web Mining and Search Engines (2003, 2 years).
  • ASI-PQE 2000 (Italian MIUR): ASSIST design and development (2002, 2 years).
  • PQE2000 (Italian CNR, ENEA, INFN, Alenia Spazio): SkIE compiler design and development (1996, 4 years).
  • CNR Agenzia2000 (Italian MIUR-CNR): Design of an environment for parallel programming based on the structured parallelism paradigm.