Bachelor and Master Theses

To apply for this thesis, please contact the thesis supervisor(s).
Title: Do LLMs improve in modelling tasks with their evolution? An empirical study
Subject: Computer science, Software engineering
Level: Advanced
Description:

LLMs have gained popularity thanks to their powerful "comprehension" of tasks described in natural-language queries (prompts). In the system and software modelling field, the use of LLMs has raised considerable interest because of the barrier typically faced by non-experts in using modelling languages and performing related tasks (e.g. language engineering, model manipulation, etc.). As a consequence, a number of recent research works test and measure the performance of LLMs in completing modelling-related tasks. Looking at the current state of the art, many stakeholders seem to rely on the hypothesis that the progressive development of LLMs will eventually lead to error-free solutions. On the other hand, other studies on LLMs claim that some AI-related limitations cannot inherently be overcome. Therefore, this thesis aims to empirically investigate whether the release of updated LLMs yields increasingly better performance in terms of the precision of the generated results.

This thesis will pursue the following goals:

- search for publicly available replication packages of modelling experiments with LLMs;

- replicate the experiments with several versions of the same LLMs, and possibly with different ones;

- analyse the collected data to compare performance across versions.
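As a rough illustration of the last step, the version-to-version comparison could be sketched as follows. This is a minimal sketch only: the model names, task names, and correctness labels are all hypothetical placeholders, not data from any actual experiment, and the real analysis would use the metrics defined in the replicated studies.

```python
# Hypothetical sketch: compare the precision of generated results across
# successive versions of one LLM on a replicated modelling experiment.
# All model/task names and outcomes below are illustrative placeholders.

def precision(results):
    """Fraction of generated artefacts judged correct."""
    if not results:
        return 0.0
    return sum(1 for r in results if r["correct"]) / len(results)

# Illustrative per-task outcomes for two versions of one model family.
replicated_runs = {
    "model-v1": [
        {"task": "metamodel-generation", "correct": True},
        {"task": "model-transformation", "correct": False},
        {"task": "constraint-generation", "correct": False},
    ],
    "model-v2": [
        {"task": "metamodel-generation", "correct": True},
        {"task": "model-transformation", "correct": True},
        {"task": "constraint-generation", "correct": False},
    ],
}

scores = {version: precision(runs) for version, runs in replicated_runs.items()}
for version, score in sorted(scores.items()):
    print(f"{version}: precision = {score:.2f}")

# Does the newer version improve on the older one?
improved = scores["model-v2"] > scores["model-v1"]
print("newer version improves:", improved)
```

In the actual thesis, the same comparison would be repeated over many tasks and model releases, and differences would be checked with appropriate statistical tests rather than a single inequality.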

Start date: 2026-01-01
End date: 2026-06-30
Prerequisites:

- good knowledge of LLMs;

- advanced programming skills in Python, Java, or similar;

- basic knowledge of modelling terminology.

IDT supervisors: Riccardo Rubei
Examiner: Antonio Cicchetti
Comments:

This thesis is best suited for 2 students, but it can also be carried out by 1 student.

Company contact: