Since October 2019, I have been a Research Fellow at Trinity College Dublin (Ireland), working in the ADAPT Centre under the lead of Prof. Declan O'Sullivan. In practice, I contribute to research efforts in Semantic Web technologies, focusing mainly on analyzing large distributed knowledge graphs and on designing complex transformation pipelines for heterogeneous Big Data. Since July 2020, I have been able to focus on these research topics thanks to a Marie Skłodowska-Curie ELITE-S fellowship.
From January 2018 to September 2019, I was a Senior Researcher at Fraunhofer IAIS in Sankt Augustin (Germany, close to Bonn), focusing on the Semantic Web and Linked Data in the context of large-scale datasets. My research topics included ontology management and engineering, the Semantic Web, Linked Data, clustering, and machine learning methods. I also applied the results of my research in various European and industry-funded projects. In parallel, I was an associated postdoctoral researcher in the Smart Data Analytics group at the University of Bonn, under the lead of Prof. Jens Lehmann.
In 2017, as a postdoc still with the Tyrex group at Inria (France), I pushed further what I had developed during my PhD thesis by integrating SPARQL evaluators into larger systems involving various kinds of data structures, where several query results must be evaluated and aggregated to build a complex answer. More specifically, I worked on designing efficient languages to facilitate the development of optimized ETL pipelines in a semantic context.
From 2013 to 2016, during my PhD thesis at Inria with the Tyrex group in Grenoble, I focused on Semantic Web standards, especially the Resource Description Framework (RDF) and its dedicated query language, SPARQL. My main goal was to design efficient tools to evaluate SPARQL queries on very large RDF datasets (i.e. ≥ 100 GB). To that end, I first provided a new reading grid for ranking SPARQL evaluators, and then designed several efficient ones.
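As a toy illustration of the kind of task these evaluators perform, here is a minimal sketch using Python's rdflib (the actual systems were distributed evaluators for ≥ 100 GB datasets, not rdflib; the file name and query are hypothetical):

```python
# Toy illustration of SPARQL evaluation; rdflib is used here for brevity
# only (the evaluators described above target distributed, large-scale settings).
from rdflib import Graph

g = Graph()
g.parse("dataset.nt", format="nt")  # hypothetical N-Triples dump

# A basic graph pattern: retrieve resources together with their rdfs:label.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label
WHERE { ?s rdfs:label ?label . }
LIMIT 10
"""

for row in g.query(query):
    print(row.s, row.label)
```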
As a side project during my PhD, I also designed a semantic pipeline for trip planning that aggregates heterogeneous datasets (e.g. GTFS, RDF, CSV) in order to offer users touristic alternatives during plane stopovers.
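As a rough illustration of one lifting step such a pipeline performs, the sketch below maps a GTFS stops file (plain CSV) into RDF triples; the `stop_id`/`stop_name` columns are standard GTFS, while the namespace and file contents are assumed for the example:

```python
# Sketch of one pipeline step: lifting GTFS stop data (CSV) into RDF.
# The example.org namespace is hypothetical; stops.txt is the standard
# GTFS file listing stops with stop_id and stop_name columns.
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

STOP = Namespace("http://example.org/stop/")  # hypothetical namespace

g = Graph()
with open("stops.txt", newline="") as f:
    for row in csv.DictReader(f):
        stop = STOP[row["stop_id"]]           # mint one URI per stop
        g.add((stop, RDFS.label, Literal(row["stop_name"])))

print(g.serialize(format="turtle"))
```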
Previously, before 2013, I worked on designing and implementing broadcast algorithms with special properties such as UTO (uniform and totally ordered delivery). This work, mainly developed in C, is openly available on GitHub.
| Project | Description | Period |
|---------|-------------|--------|
| LAMBDA | Defines a scientific strategy for stepping up and stimulating scientific excellence and innovation capacity, increasing research capacities and unlocking the research potential of the biggest and oldest R&D institute in the ICT area in the West Balkan region, turning the Institute Mihajlo Pupin into a regional point of reference for multidisciplinary ICT competence related to Big Data analytics. | Since 2019 |
| SemanGit | Provides a resource at the crossroads of the Semantic Web and git-based version control: the first collection of Linked Data extracted from GitHub, based on a git ontology we designed and extended to cover GitHub-specific features. | Since 2018 |
| QualiChain | Targets the creation, piloting and evaluation of a decentralised platform for storing, sharing and verifying education and employment qualifications, and assesses the potential of blockchain technology, algorithmic techniques and computational intelligence for disrupting public education, as well as its interfaces with private education, the labour market, public-sector administrative procedures and wider socio-economic developments. | 2019 |
| BETTER (Work Package Leader) | Implements a Big Data intermediate service layer focused on creating user-centric services and tools, while addressing the full data lifecycle associated with Earth Observation (EO) data, to bring more downstream users to the EO market and maximise the exploitation of Copernicus data and information services. | 2018-2019 |
| SLIPO | Develops software, models and processes for transforming conventional POI formats and schemas into RDF data; interlinking POI entities from different datasets; enriching POI entities with additional metadata, including temporal, thematic and semantic properties; fusing Linked POI data to produce more complete and accurate POI profiles; assessing the quality of the integrated POI data; and offering value-added services based on spatial aggregation, association extraction and spatiotemporal prediction. | 2018-2019 |
| Clear | Addresses one fundamental challenge of our time: the construction of effective programming models and compilation techniques for the correct, efficient and scalable exploitation of large amounts of data. | 2017 |
| Datalyse | Built a smart data-processing demonstrator dedicated to Big Data, focusing on collecting, certifying, integrating, categorizing, securing, enriching and sharing data. | 2013-2016 |
BSc & MSc Theses Supervision
Software Engineer Supervision