

Imagine asking a computer to build an entire app from scratch, and it does, instantly, just by understanding your words. That is not science fiction anymore. Large Language Models (LLMs) are reshaping how we write software, helping developers and non-developers alike craft code with unprecedented ease.

But there is a catch. These models are fluent in the world's most popular programming languages, like Python or Java, but struggle when they face the dialects of industry: the in-house programming languages companies build for their own unique problems. LLMs often see little to none of these languages during training, which makes it difficult for them to assist in these specialized domains. In this sense, in-house programming languages are exotic to LLMs.

In this talk, I will share my experience in teaching new and exotic programming languages to LLMs. I will present the challenges we faced along the way and how current techniques such as retrieval-augmented generation, few-shot learning, and fine-tuning open-weight models performed in this context. Finally, I will share the "secret recipe" we used to successfully teach these languages to LLMs, enabling them to assist in coding tasks that were previously out of reach.
Alessandro Giagnorio
Università della Svizzera italiana
Alessandro Giagnorio is a Ph.D. student in the Faculty of Informatics at Università della Svizzera italiana (USI), Switzerland, where he is part of the SEART research group. His research focuses on the intersection of software engineering and artificial intelligence, particularly on enhancing Large Language Models for software development tasks.

Although research is already part of his daily routine, Alessandro is passionate about keeping up to date with the latest AI and industry trends and experimenting with new technologies. His enthusiasm for AI and software development drives him to explore innovative solutions that can bridge the gap between academia and industry.