
Cloud Meets Innovation Hackathon

June 6 - 7, 2025

co-hosted at the University of Innsbruck and Universitat Rovira i Virgili


Organization

Universität Innsbruck (UIBK), Austria

Asst. Prof. Sashko Ristov

Universitat Rovira i Virgili (URV), Spain

Prof. Pedro Garcia Lopez

Mentoring and Jury

Daniel Barcelona Pons (URV)
Marc Sánchez Artigas (URV)
Marco Cotrotzo (UIBK)
Philipp Gritsch (UIBK)
Klaus Kaserer (UIBK)

4 EU Projects

Four EU-funded initiatives, MATISSE, CloudSkin, NearData, and Extract, bring together state-of-the-art research ranging from foundational cloud-computing frameworks to advanced Digital Twins, and provide the technologies and inspiration for our two-day event. The event is co-hosted on June 6 and 7 at the University of Innsbruck (UIBK) and Universitat Rovira i Virgili (URV): each campus tackles its own challenge, then joins hybrid sessions so teams at both sites can watch demos and share results in real time. Put your skills to the test, harness live cloud services and data streams, enjoy hands-on innovation powered by Europe's top research projects, and compete for exclusive rewards.

Agenda

June 6 (Friday)

15:30 - 16:00 Kickoff and Problem Introduction

Welcome & tools overview by Sashko and Pedro
Presentation of the EU projects and their challenges

16:00 - 18:30 Strategy Session

Teams work on approach and design
Mentoring support throughout the session

June 7 (Saturday)

09:00 - 15:00 Hack Time

Teams develop and test their solutions
Mentoring support throughout the day

15:00 - 17:00 Final Demos and Presentations

Showcase your working prototype

17:30 - 18:00 Awards and Closing Ceremony

Prizes will be awarded to the winning teams

Refreshments will be provided!

Late Registration now open!

Participate in teams of up to four people

Register

MATISSE challenge (@UIBK, Campus Technik, RR15)

MATISSE (Model-based engineering of Digital Twins for early verification and validation of Industrial Systems) is a European research project developing a cloud-native framework to automate the engineering, federation and continuous validation of Digital Twins for industrial systems. MATISSE comprises a consortium of over 30 partners from 7 countries and combines model-based and data-driven techniques with cloud services.

Find out more at matisse-kdt.eu.

Hackathon Challenge: Digital Twin EV Charging

In this hands-on challenge, you'll take a VM-based EV charger simulator and turn it into a fully functional Digital Twin using AWS IoT TwinMaker, SiteWise and IoT Core. Your tasks will include:

Automating the provisioning of charger assets in the cloud,
Wiring up the virtual-to-physical data flow so your twin sends charging commands,
Embedding smart logic to coordinate multi-charger setups and cap aggregate power, and
Enhancing the twin with live energy-mix data so charging ramps up when renewables are plentiful.
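The coordination and energy-mix tasks above can be sketched in plain Python. Everything here is an illustrative assumption, not part of the MATISSE framework or the AWS services: the function name, the proportional-scaling policy, and the 20% renewable boost are all placeholders for the logic your twin would embed.

```python
# Hypothetical smart-charging logic for the multi-charger task.
# All names and constants are illustrative assumptions.

def allocate_power(requests_kw, site_cap_kw, renewable_share):
    """Scale per-charger power requests so the aggregate stays under
    the site cap; raise the effective cap by up to 20% when the live
    energy mix reports a high renewable share (a toy policy)."""
    effective_cap = site_cap_kw * (1.0 + 0.2 * renewable_share)
    total = sum(requests_kw)
    if total <= effective_cap:
        # Enough headroom: every charger gets what it asked for.
        return list(requests_kw)
    # Over the cap: scale all requests proportionally (fair sharing).
    scale = effective_cap / total
    return [r * scale for r in requests_kw]
```

With three chargers requesting 22, 22, and 11 kW against a 40 kW site cap and a 50% renewable share, the effective cap becomes 44 kW and each request is scaled by 44/55. A real twin would publish the resulting setpoints back to the chargers, e.g. over IoT Core.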

PyRun.cloud challenge (@URV)

The goal of this challenge is to demonstrate what can be done in the pyrun.cloud environment. We will assess both the technical quality and the social impact of the proposed solution. We outline three main goals:

1. Serverless Data processing pipeline

Implement a Serverless Data processing pipeline with Lithops inside pyrun.cloud. Lithops is ideally suited for large-scale parallel data manipulation using serverless functions (such as ETL steps prior to AI training or inference). We recommend processing data in parallel from object storage. The code should work with different data volumes, and you should demonstrate your pipeline with a minimum of 1 GB of data. In general, we recommend large unstructured formats such as text, image, or video, but you can also leverage any scientific format supported by the Dataplug library. Use the Data Cockpit tool inside pyrun to select the data source from open data repositories such as the AWS Registry of Open Data.
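The heart of such a pipeline is splitting a large object into byte ranges that independent serverless workers can fetch and process in parallel. The sketch below shows only that partitioning step in plain Python; in pyrun.cloud the actual partitioning and dispatch would be handled by Lithops (or Dataplug) over object storage, so treat the function name and sizes as illustrative.

```python
# Illustrative static partitioning for a parallel pipeline: each
# (start, end) pair would become one ranged GET against object storage,
# processed by one serverless worker.

def byte_ranges(object_size, chunk_size):
    """Split an object of `object_size` bytes into half-open
    (start, end) byte ranges of at most `chunk_size` bytes."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + chunk_size, object_size)
        ranges.append((start, end))
        start = end
    return ranges
```

For a 1 GB object and 64 MB chunks this yields 16 ranges, i.e. 16 parallel function invocations.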

2. Data-driven resource optimization

An important goal of Serverless Data Processing is to use resources efficiently and to achieve high resource utilization. Such high utilization implies reduced economic cost thanks to the pay-as-you-go serverless model. Try to optimize your serverless pipeline by provisioning the right amount of resources. In many cases, this can be achieved in Lithops by identifying the right chunk size for dynamic data partitioning. Here, you can leverage the Data Cockpit benchmark tool, which uses probing to find the ideal chunk size. You can also use pyrun monitoring services to optimize resource utilization in your pipeline.
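To see why chunk size matters, a toy cost model helps: many small chunks pay the per-invocation startup overhead many times, while very large chunks stretch each worker's runtime. All constants below (2 s startup, 50 MB/s throughput, the per-worker-second price) are illustrative assumptions, not measured pyrun or Lithops figures; Data Cockpit's probing plays the role of `best_chunk` here with real measurements instead of a model.

```python
import math

# Toy cost model for chunk-size selection; all constants are
# illustrative assumptions, not measured figures.

def worker_seconds(chunk_mb, startup_s=2.0, mb_per_s=50.0):
    """Runtime of one worker: fixed startup plus processing time."""
    return startup_s + chunk_mb / mb_per_s

def pipeline_cost(data_gb, chunk_mb, price_per_worker_s=0.00001667):
    """Total cost with one worker per chunk, billed per worker-second."""
    workers = math.ceil(data_gb * 1024 / chunk_mb)
    return workers * worker_seconds(chunk_mb) * price_per_worker_s

def best_chunk(data_gb, candidates, deadline_s=10.0):
    """Among candidate chunk sizes whose per-worker runtime meets the
    deadline (assuming full parallelism), pick the cheapest."""
    feasible = [c for c in candidates if worker_seconds(c) <= deadline_s]
    return min(feasible, key=lambda c: pipeline_cost(data_gb, c))
```

Under this model, tightening the deadline pushes the choice toward smaller chunks (more parallelism), while a loose deadline favors fewer, larger chunks that amortize startup overhead.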

3. Serverless AI

Finally, you must run a Serverless AI pipeline, involving training or inference, that benefits from the previously developed serverless data processing pipeline. Here, you can run AI code inside Lithops, run your machine learning code in a Dask cluster in pyRun, or run your AI code (PyTorch, vLLM, TensorFlow) inside a VM selected in pyrun. In this phase, it is interesting to demonstrate the synergies with the previous achievements (the serverless data pipeline) and with pyrun tools and resources. Finally, it is also important to present a visual output that can be understood by the general public and that has a certain social impact.
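The shape of the inference stage can be sketched locally with a thread pool standing in for serverless workers; in pyrun.cloud the same map would run in Lithops or on a Dask cluster. The two-weight linear "model", the function names, and the data are all illustrative placeholders for whatever model your team trains.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative "trained" model: two fixed weights for a linear scorer.
WEIGHTS = [0.4, 0.6]

def predict_batch(batch):
    """Score one batch of feature vectors with the linear model.
    Each worker would receive one batch from the data pipeline."""
    return [sum(w * x for w, x in zip(WEIGHTS, row)) for row in batch]

def serverless_style_inference(batches):
    """Map predict_batch over the batches in parallel (a local stand-in
    for a Lithops/Dask map) and flatten the per-batch predictions."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(predict_batch, batches)
        return [p for batch in results for p in batch]
```

The flattened predictions are what you would feed into the final visual output, e.g. a map or chart the general public can read at the demo session.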