• We work across several layers of the computing stack, from the hardware up to system software and applications. Our targets include, among others, the design and implementation of high-performance and cost-effective systems, the development and co-design of custom and application-specific architectures and accelerators, as well as optimizations for virtual memory, storage and networking.

  • We work on several aspects of HPC, including algorithmic engineering (with a special focus on irregular applications), concurrent data structures, parallel programming, performance modeling, runtime systems and resource management. Contributions in this area include CSX, a storage format that optimizes Sparse Matrix-Vector Multiplication on multicore platforms and is implemented in the SparseX library; RCU-HTM, a concurrency control mechanism that increases the performance of concurrent search trees and graph algorithms [ISC18]; and prediction approaches for large-scale communication [CC17] and multi-GPU BLAS execution [ISPASS21].
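To make the Sparse Matrix-Vector Multiplication kernel concrete, the sketch below shows it over the baseline CSR (Compressed Sparse Row) format. This is a generic illustration of the operation that formats like CSX accelerate, not the CSX or SparseX implementation itself; the function and variable names are ours.

```python
# Sparse Matrix-Vector Multiplication (y = A @ x) over the CSR format.
# Index-compressing formats such as CSX target exactly this kernel,
# whose performance is typically bound by memory traffic on the
# index and value arrays.

def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR-encoded sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Non-zeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 3x3 matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]] in CSR form:
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```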

  • We work on effective virtualization technologies (including accelerator virtualization), cloud storage, elastic execution and interference-aware resource management.

  • We design and implement systems, algorithms and methodologies that exploit modern big-data middleware stacks running over heterogeneous scalable infrastructure to solve problems in workloads spanning from AI to traditional data-intensive domains.

  • We cover a wide spectrum of aspects of distributed computing, ranging from theoretical algorithmic approaches to applied technologies that scale to large numbers of peers and accommodate high volumes of concurrent user requests. We also cover the area of decentralized computing, focusing on system aspects of blockchains (e.g., concurrency, consensus, storage) as well as their application to other system-related subjects.
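As a toy illustration of the consensus side of this work: quorum-based replication protocols rely on the fact that any two majority quorums over a fixed node set intersect, so two conflicting decisions can never both gather a full quorum. The sketch below checks this property exhaustively for a hypothetical 5-node cluster; it is a generic illustration, not tied to any particular system studied here.

```python
from itertools import combinations

# Majority quorums over a 5-node cluster: every pair of quorums shares
# at least one node. This intersection property is what lets a quorum
# protocol detect (via the shared node) any previously accepted decision.

nodes = {"n1", "n2", "n3", "n4", "n5"}
quorum_size = len(nodes) // 2 + 1  # majority: 3 of 5

quorums = [set(q) for q in combinations(sorted(nodes), quorum_size)]

# Exhaustively verify pairwise intersection.
all_intersect = all(a & b for a in quorums for b in quorums)
print(len(quorums), all_intersect)
```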