T-LADIES Publications

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Full BibTeX

2022

[1] Luca Ciccone, Francesco Dagnino, and Luca Padovani, “Fair Termination of Multiparty Sessions”, in 36th European Conference on Object-Oriented Programming (ECOOP 2022), Karim Ali and Jan Vitek, Eds., Dagstuhl, Germany, 2022, vol. 222 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 26:1–26:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik. [ DOI | http ]
There exists a broad family of multiparty sessions in which the progress of one session participant is not unconditional, but depends on the choices performed by other participants. These sessions fall outside the scope of currently available session type systems that guarantee progress. In this work we propose the first type system ensuring that well-typed multiparty sessions, including those exhibiting the aforementioned dependencies, fairly terminate. Fair termination is termination under a fairness assumption that disregards those interactions deemed unfair and therefore unrealistic. Fair termination, combined with the usual safety properties ensured within sessions, is not only desirable per se, but also entails progress and enables a compositional form of static analysis such that the well-typed composition of fairly terminating sessions results in a fairly terminating program.
[2] Francesco Dagnino and Francesco Gavazzo, “A Fibrational Tale of Operational Logical Relations”, in 7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022), Amy P. Felty, Ed., Dagstuhl, Germany, 2022, vol. 228 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 3:1–3:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik. [ DOI | http ]
Logical relations built on top of an operational semantics are one of the most successful proof methods in programming language semantics. In recent years, more and more expressive notions of operationally-based logical relations have been designed and applied to specific families of languages. However, a unifying abstract framework for operationally-based logical relations is still missing. We show how fibrations can provide a uniform treatment of operational logical relations, using as reference example a λ-calculus with generic effects endowed with a novel, abstract operational semantics defined on a large class of categories. Moreover, this abstract perspective allows us to give a solid mathematical ground also to differential logical relations - a recently introduced notion of higher-order distance between programs - both pure and effectful, bringing them back to a common picture with traditional ones.
[3] Francesco Dagnino and Fabio Pasquali, “Logical foundations of quantitative equality”, in Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science, New York, NY, USA, 2022, LICS '22, Association for Computing Machinery. [ DOI | http ]
In quantitative reasoning one compares objects by distances, instead of equivalence relations, so that one can measure how much they are similar, rather than just saying whether they are equivalent or not. In this paper we aim at providing a logical ground to quantitative reasoning with distances in Linear Logic, using the categorical language of Lawvere's doctrines. The key idea is to see distances as equality predicates in Linear Logic. We use graded modalities to write a resource sensitive substitution rule for equality, which allows us to give it a quantitative meaning by distances. We introduce a deductive calculus for (Graded) Linear Logic with quantitative equality and the notion of Lipschitz doctrine to give it a sound and complete categorical semantics. We also describe a universal construction of Lipschitz doctrines, which generates examples based for instance on metric spaces and quantitative realisability.
[4] Davide Ancona, Pietro Barbieri, and Elena Zucca, “Enhancing expressivity of checked corecursive streams”, in Functional and Logic Programming - 16th International Symposium, FLOPS 2022, Kyoto, Japan, May 10-12, 2022, Proceedings, Michael Hanus and Atsushi Igarashi, Eds. 2022, vol. 13215 of Lecture Notes in Computer Science, pp. 1–18, Springer. [ DOI ]
We propose a novel approach to stream definition and manipulation. Our solution is based on two key ideas. Regular corecursion, which avoids non-termination by detecting cyclic calls, is enhanced by allowing, in the equations defining streams, other operators besides the stream constructor. In this way, some non-regular streams are definable. Furthermore, execution includes a runtime check to ensure that the stream generated by a function call is well-defined, in the sense that access to an arbitrary index always succeeds. We extend the technique beyond the simple stream operators considered in previous work, notably by adding an interleaving combinator which has a non-trivial recursion scheme.
[5] Davide Ancona, Pietro Barbieri, and Elena Zucca, “Equality of corecursive streams defined by finitary equational systems”, in Proceedings of the 23rd Italian Conference on Theoretical Computer Science, ICTCS 2022, Rome, Italy, September 7-9, 2022. 2022, pp. 86–98, CEUR. [ .pdf ]
In recent work, non-regular streams have been defined corecursively, by representing them with finitary equational systems built on top of various operators, besides the standard constructor.

With finitary equational systems based only on the stream constructor, one can use the free theory of regular (a.k.a. rational) trees to get a sound and complete procedure to decide whether two streams are equal. However, this is not the case if one allows other operators in equations: the underlying equational theory becomes non-trivial, and equality of regular trees is too strong to guarantee termination of corecursive functions defined even with only the constructor and tail operators.

To overcome this problem, we provide a weaker definition of equality between streams denoted by finitary equational systems built on different stream operators, including tail operator and constructor, and prove its soundness.

[6] Lavinia Egidi, Paola Giannini, and Lorenzo Ventura, “Multiparty-session-types coordination for Core Erlang”, in Proceedings of the 17th International Conference on Software Technologies, ICSOFT 2022, Hans-Georg Fill, Marten van Sinderen, and Leszek A. Maciaszek, Eds. 2022, pp. 532–541, SCITEPRESS. [ DOI | http ]
In this paper, we present a formalization of multiparty-session-type coordination for a core subset of Erlang and provide a tool for checking the correctness of a system against the specification of its protocol. In Erlang "actors" are primitive entities, which communicate only through explicit "asynchronous message passing". Our tool ensures that if an Erlang system is well typed, then it does not incur deadlocks or have actors getting stuck waiting for messages that never arrive; moreover, any message that is sent will eventually be read. The tool is based on "multiparty session types", a formalism introduced to specify the structure of interactions and to ensure safety properties.
[7] Maurice H. ter Beek, Ferruccio Damiani, Michael Lienhardt, Franco Mazzanti, Luca Paolini, and Giordano Scarso, “FTS4VMC: a Front-End Tool for Static Analysis and Family-Based Model Checking of FTSs with VMC”, Science of Computer Programming, vol. 224, 2022. [ DOI ]
FTS4VMC is a publicly available front-end tool for the static analysis and family-based model checking of a Featured Transition System (FTS). It can detect ambiguities in an FTS, disambiguate an ambiguous FTS, transform an FTS into a Modal Transition System (MTS), and interact with the VMC model checker for family-based verification.
[8] Maurice H. ter Beek and Alessio Ferrari, “Empirical Formal Methods: Guidelines for Performing Empirical Studies on Formal Methods”, Software, vol. 1, no. 4, pp. 381–416, 2022. [ DOI ]
Empirical studies on formal methods and tools are rare. In this paper, we provide guidelines for such studies. We mention their main ingredients and then define nine different study strategies (usability testing, laboratory experiments with software and human subjects, case studies, qualitative studies, surveys, judgement studies, systematic literature reviews, and systematic mapping studies) and discuss for each of them their crucial characteristics, the difficulties of applying them to formal methods and tools, typical threats to validity, their maturity in formal methods, pointers to external guidelines, and pointers to studies in other fields. We conclude with a number of challenges for empirical formal methods.
[9] Davide Basile, Maurice H. ter Beek, Sami Lazreg, Maxime Cordy, and Axel Legay, “Static Detection of Equivalent Mutants in Real-Time Model-Based Mutation Testing: An Empirical Evaluation”, Empirical Software Engineering, vol. 27, no. 7, pp. 160:1–160:55, 2022. [ DOI ]
Model-based mutation testing has the potential to effectively drive test generation to reveal faults in software systems. However, it faces a typical efficiency issue since it could produce many mutants that are equivalent to the original system model, making it impossible to generate test cases from them. We consider this problem when model-based mutation testing is applied to real-time system product lines, represented as timed automata. We define novel, time-specific mutation operators and formulate the equivalent mutant problem in the frame of timed refinement relations. Further, we study in which cases a mutation yields an equivalent mutant. Our theoretical results provide guidance to system engineers, allowing them to eliminate mutations from which no test case can be produced. Our empirical evaluation, based on a proof-of-concept implementation and a set of benchmarks from the literature, confirms the validity of our theory and demonstrates that in general our approach can avoid the generation of a significant number of equivalent mutants.
[10] Davide Basile, Maurice H. ter Beek, Giovanna Broccia, and Alessio Ferrari, “Empirical Software Engineering and Formal Methods for IoT Systems”, ERCIM News, vol. 131, pp. 34–35, 2022. [ http ]
Researchers from the Formal Methods and Tools (FMT) lab of ISTI–CNR are working on the application of formal methods to devise interaction protocols for safe-by-construction IoT Systems of Systems. They are also working on the empirical investigation and evaluation of the effectiveness of techniques and methodologies proposed for IoT application scenarios. The research is being conducted in the context of the national project T-LADIES, funded by the Italian Ministry of Education, University and Research (MIUR) under the program for Projects of National Interest (PRIN).
[11] Giancarlo Fortino, Lidia Fotia, Fabrizio Messina, Domenico Rosaci, and Giuseppe M. L. Sarnè, “A social edge-based IoT framework using reputation-based clustering for enhancing competitiveness”, IEEE Transactions on Computational Social Systems, 2022.
[12] Nick Bezhanishvili, Vincenzo Ciancia, David Gabelaia, Gianluca Grilletti, Diego Latella, and Mieke Massink, “Geometric model checking of continuous space”, Logical Methods in Computer Science, vol. 18, no. 4, 2022. [ DOI | http ]
[13] Stefania Monica, Federico Bergenti, and Franco Zambonelli, “Toward a kinetic framework to model the collective dynamics of multi-agent systems”, in Proceedings of the 11th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation (ISoLA 2022). 2022, vol. 13703 of Lecture Notes in Computer Science, pp. 165–180, Springer. [ DOI ]
The investigation of the collective dynamics of multi-agent systems in terms of the study of the properties of single agents is not feasible when the number of interacting agents is large. In this case, the collective dynamics can be better examined by adopting a statistical approach that studies the long-time asymptotic properties of the system as a whole. The kinetic framework discussed in this paper can be used to study collective and emergent properties of large and decentralized multi-agent systems once single interactions among agents are properly described. Moreover, the discussed framework can be used to design how agents should interact to ensure that the resulting multi-agent system would exhibit the required collective and emergent characteristics. The discussed framework restricts the interactions among agents to message exchanges, and it assumes that the investigated properties emerge from interactions. As an example of the use of the framework, and to outline a concrete application of it, the properties of a system in which agents implement the symmetric gossip algorithm are analyzed. Analytic results obtained using the discussed framework are compared with independent simulations, showing the effectiveness of the approach.
[14] Giuseppe Petrosino, Stefania Monica, and Federico Bergenti, “Robust software agents with the Jadescript programming language”, in WOA 2022: 23rd Workshop “From Objects to Agents”. 2022, vol. 3261 of CEUR Workshop Proceedings, pp. 194–208, RWTH Aachen. [ .pdf ]
This paper discusses several recent additions to the Jadescript agent-oriented programming language that regard the effective detection and handling of exceptional and erroneous situations at runtime. These new features were introduced to better support the mission-critical level of robustness that software agents are normally demanded to exhibit. The description of these new features is supported by an analysis of the state of the art of exception handling in programming languages, and it is complemented by a discussion on planned future developments. First, the novel exception handling mechanism introduced in Jadescript is presented, and the conceptual similarities and differences with the exception handling mechanisms normally provided by mainstream programming languages are emphasized. Second, the recent additions to Jadescript designed to support failures in behaviours are described, and these additions are related to the novel exception handling mechanism. Finally, the recent language support to manage stale messages using dedicated message handlers is presented and discussed.
[15] Giuseppe Petrosino, Stefania Monica, and Federico Bergenti, “Delayed and periodic execution of tasks in Jadescript programming language”, in DCAI 2022: Distributed Computing and Artificial Intelligence, 19th International Conference. 2022, vol. 583 of Lecture Notes in Networks and Systems, pp. 50–59, Springer. [ DOI ]
Software agents are expected to act in a timely manner and to dynamically plan their activities, for example, to solve the complex collaboration problems of many real-world applications. Collaboration among agents requires the ability to reason about time in order to dynamically coordinate and to effectively adjust the frequency of periodic actions and reactions. For these and related reasons, an agent-oriented programming language is expected to provide the programmer with effective means to schedule the execution of delayed and periodic tasks. This paper describes the new datatypes, and the related changes to some language constructs, that have recently been added to the Jadescript programming language to allow agents to effectively manage the dynamic scheduling of delayed and periodic tasks.
[16] Davide Ancona, Angelo Ferrando, and Viviana Mascardi, “Exploiting probabilistic trace expressions for decentralized runtime verification with gaps”, in Proceedings of the 37th Italian Conference on Computational Logic, Bologna, Italy, June 29 - July 1, 2022, 2022, pp. 154–170. [ .pdf ]
Multiagent Systems (MASs) are distributed systems composed of autonomous, reactive, proactive, heterogeneous communicating entities. In order to dynamically verify the behavior of such complex systems, a decentralized solution able to scale with the number of agents is necessary. When, for physical, infrastructural, or legal reasons, the monitor is not able to observe all the events emitted by the MAS, gaps are generated. In this paper we present a decentralized runtime verification approach to handle observation gaps in a MAS.
[17] Davide Ancona, Angelo Ferrando, and Viviana Mascardi, “Mind the gap! Runtime verification of partially observable MASs with probabilistic trace expressions”, in Multi-Agent Systems - 19th European Conference, EUMAS 2022, Düsseldorf, Germany, September 14-16, 2022, Proceedings, 2022, pp. 22–40. [ DOI | http ]
In this paper we present the theory behind Probabilistic Trace Expressions (PTEs), an extension of Trace Expressions where the types of events that can be observed by a monitor are associated with an observation probability. PTEs can be exploited to monitor that agents in a MAS interact in compliance with an Agent Interaction Protocol (AIP) modeled as a PTE, even when the monitor realizes that an interaction took place in the MAS but was not correctly observed (an “observation gap”). To this aim, we adapt an existing approach for runtime verification with state estimation; we present a semantics for PTEs that allows for the estimation of the probability of reaching a given state, given a sequence of observations which may include observation gaps; we present a centralized, implemented algorithm to dynamically verify the behavior of the MAS under monitoring; and we discuss its potential and limitations.
[18] Débora C. Engelmann, Angelo Ferrando, Alison R. Panisson, Davide Ancona, Rafael H. Bordini, and Viviana Mascardi, “Rv4jaca - runtime verification for multi-agent systems”, in Proceedings of the Second Workshop on Agents and Robots for reliable Engineered Autonomy, AREA@IJCAI-ECAI 2022, Vienna, Austria, 24th July 2022, 2022, pp. 23–36. [ DOI | http ]
This paper presents a Runtime Verification (RV) approach for Multi-Agent Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of security to the MAS. This layer is capable of controlling events during the execution of the system without needing a specific implementation in the behaviour of each agent to recognise the events. MAS have been used in the context of hybrid intelligence. This use requires communication between software agents and human beings. In some cases, communication takes place via natural language dialogues. However, this kind of communication brings us to a concern related to controlling the flow of dialogue so that agents can prevent any change in the topic of discussion that could impair their reasoning. We demonstrate the implementation of a monitor that aims to control this dialogue flow in a MAS that communicates with the user through natural language to aid decision-making in hospital bed allocation.
[19] Walter Cazzola and Luca Favalli, “Towards a Recipe for Language Decomposition: Quality Assessment of Language Product Lines”, Empirical Software Engineering, vol. 27, April 2022. [ DOI | http ]
Programming languages are complex systems that are usually implemented as monolithic interpreters and compilers. In recent years, researchers and practitioners have gained interest in product line engineering to improve the reusability of language assets and the management of variability-rich systems, introducing the notions of language workbenches and language product lines (LPLs). Nonetheless, language development remains a complex activity and design or implementation flaws can easily waste the efforts of decomposing a language specification into language features. Poorly designed language decompositions result in highly interdependent components, reducing the variability space of the LPL system and its maintainability. One should detect and fix the design flaws posthaste to prevent these risks while minimizing the development overhead. Therefore, various aspects of the quality of a language decomposition should be quantitatively measurable through adequate metrics. The evaluation, analysis and feedback of these measures should be a primary part of the engineering process of an LPL. In this paper, we present an exploratory study trying to capture these aspects by introducing a design methodology for LPLs; we define the properties of a good language decomposition and adapt a set of metrics from the literature to the framework of language workbenches. Moreover, we leverage the AiDE 2 LPL engineering environment to perform an empirical evaluation of 26 Neverlang-based LPLs based on this design methodology. Our contributions form the foundations of a design methodology for Neverlang-based LPLs. This methodology comprises four elements: i) an engineering process that defines the order in which decisions are made, ii) an integrated development environment for LPL designers and iii) some best practices in the design of well-structured language decomposition when using Neverlang, supported by iv) a variety of LPL metrics that can be used to detect errors in design decisions.
[20] Walter Cazzola, Francesco Cesarini, and Luca Tansini, “PerformERL: A Performance Testing Framework for Erlang”, Distributed Computing, vol. 35, pp. 439–454, May 2022. [ DOI | .pdf ]
The Erlang programming language is used to build concurrent, distributed, scalable and resilient systems. Every component of these systems has to be thoroughly tested not only for correctness, but also for performance. Performance analysis tools in the Erlang ecosystem, however, do not provide the level of automation and insight needed to be integrated in modern tool chains. In this paper, we present PerformERL: an extendable performance testing framework that combines the repeatability of load testing tools with the detailed insight into internal resource usage that is typical of performance monitoring tools. These features allow PerformERL to be integrated in the early stages of testing pipelines, providing users with a systematic approach to identifying performance issues. This paper introduces the PerformERL framework, focusing on its features, design and imposed monitoring overhead measured through both theoretical estimates and trial runs on systems in production. The uniqueness of the features offered by PerformERL, together with its usability and contained overhead, proves that the framework can be a valuable resource in the development and maintenance of Erlang applications.
[21] Francesco Bertolotti, Walter Cazzola, and Luca Favalli, “Features, Believe It or Not! A Design Pattern for First-Class Citizen Features on Stock JVM”, in Proceedings of the 26th International Software Product Line Conference (SPLC'22), Jane Cleland-Huang and Wesley K. G. Assunção, Eds., Graz, Austria, September 2022, pp. 32–42, ACM. [ DOI | http ]
Modern software systems must fulfill the needs of an ever-growing customer base. Due to the innate diversity of human needs, software should be highly customizable and reconfigurable. Researchers and practitioners have gained interest in software product lines (SPLs), mimicking aspects of product lines in industrial production for the engineering of highly-variable systems. There are two main approaches towards the engineering of SPLs. The first uses macros—such as the #ifdef macro in C. The second—called feature-oriented programming (FOP)—uses variability-aware preprocessors called composers to generate a program variant from a set of features and a configuration. Both approaches have disadvantages. Most notably, these approaches are usually not supported by the base language; for instance, Java is one of the most commonly used FOP languages among researchers, but it does not support macros; rather, it relies on the C preprocessor or a custom one to translate macros into actual Java code. As a result, developers must struggle to keep up with the evolution of the base language, hindering the general applicability of SPL engineering. Moreover, to effectively evolve a software configuration and its features, their location must be known. The problem of recording and maintaining traceability information is considered expensive and error-prone, and it is once again handled externally through dedicated modeling languages and tools. Instead, to properly convey the FOP paradigm, software features should be treated as first-class citizens using concepts that are proper to the host language, so that the variability can be expressed and analyzed with the same tools used to develop any other software in the same language. In this paper, we present a simple and flexible design pattern for JVM-based languages—dubbed devise pattern—that can be used to express feature dependencies and behaviors with a lightweight syntax both at domain analysis and at domain implementation level. To showcase the qualities and feasibility of our approach, we present several variability-aware implementations of a MNIST-encoder—including one using the devise pattern—and compare the strengths and weaknesses of each approach.
[22] Riccardo Bianchini, Francesco Dagnino, Paola Giannini, Elena Zucca, and Marco Servetto, “Coeffects for sharing and mutation”, Proceedings of the ACM on Programming Languages, vol. 6, no. OOPSLA2, October 2022. [ DOI ]
In type-and-coeffect systems, contexts are enriched by coeffects modeling how they are actually used, typically through annotations on single variables. Coeffects are computed bottom-up, combining, for each term, the coeffects of its subterms, through a fixed set of algebraic operators. We show that this principled approach can be adopted to track sharing in the imperative paradigm, that is, links among variables possibly introduced by the execution. This provides a significant example of non-structural coeffects, which cannot be computed by-variable, since the way a given variable is used can affect the coeffects of other variables. To illustrate the effectiveness of the approach, we enhance the type system tracking sharing to model a sophisticated set of features related to uniqueness and immutability. Thanks to the coeffect-based approach, we can express such features in a simple way and prove related properties with standard techniques.

2023

[1] Ilaria Castellani, Mariangiola Dezani-Ciancaglini, and Paola Giannini, “Event structure semantics for multiparty sessions”, Journal of Logical and Algebraic Methods in Programming, vol. 131, pp. 100844, 2023. [ DOI | http ]
We propose an interpretation of multiparty sessions as Flow Event Structures, which allows concurrency within sessions to be explicitly represented. We show that this interpretation is equivalent, when the multiparty sessions can be described by global types, to an interpretation of such global types as Prime Event Structures.
[2] Francesco Dagnino, Paola Giannini, and Mariangiola Dezani-Ciancaglini, “Deconfined global types for asynchronous sessions”, Logical Methods in Computer Science, vol. 19, no. 1, 2023. [ DOI ]
[3] Riccardo Bianchini and Francesco Dagnino, “QueryAGT: Asynchronous global types in co-logic programming”, Science of Computer Programming, vol. 225, pp. 102895, 2023. [ DOI ]
[4] Giovanna Broccia, Alessio Ferrari, Maurice H. ter Beek, Walter Cazzola, Luca Favalli, and Francesco Bertolotti, “Evaluating a Language Workbench: from Working Memory Capacity to Comprehension to Acceptance”, in Proceedings of the 31st IEEE/ACM International Conference on Program Comprehension (ICPC'23). 2023, pp. 54–58, ACM. [ DOI | http ]
Language workbenches are tools that enable the definition, reuse and composition of programming languages and their ecosystem. This breed of frameworks aims to make the development of new languages easier and more affordable. Consequently, the understandability and learnability of the language used in a language workbench (i.e., the meta-language) should be an important aspect to consider and evaluate. To the best of our knowledge, although the quantitative aspects of language workbenches are often discussed in the literature, the evaluation of understandability and learnability are typically neglected. Neverlang is a language workbench that enables the definition of languages with a modular approach. This paper presents a preliminary study that intends to assess the comprehensibility of Neverlang programs, evaluated in terms of users' effectiveness and efficiency in a code comprehension task. The study also investigates the relationship between Neverlang comprehensibility and the users' working memory capacity. Furthermore, we intend to capture the relationship between Neverlang comprehensibility and users' acceptance, in terms of perceived ease of use, perceived usefulness, and intention to use. Our preliminary results on 10 subjects suggest that the users' working memory capacity may be related to the ability to comprehend Neverlang programs. On the other hand, effectiveness and efficiency do not appear to be associated with an increase in users' acceptance variables.
[5] Davide Basile, Maurice H. ter Beek, Hendrik Göttmann, and Malte Lochau, “Mutant Equivalence as Monotonicity in Parametric Timed Games”, in Proceedings of the 11th International Conference on Formal Methods in Software Engineering (FormaliSE'23). 2023, IEEE.
The detection of faults in software systems can be enhanced effectively by model-based mutation testing. The efficiency of this technique is hindered when mutants are equivalent to the original system model, since this makes them useless. Recently, the application of model-based mutation testing to real-time systems modelled as timed games has been investigated, which has resulted in guidelines for statically avoiding equivalent mutants. In this paper, we recast this problem into the framework of parametric timed games. We then prove a correspondence between theoretical results for the detection of equivalent mutants in timed games and the property of monotonicity that is known to hold for a sub-class of parametric timed games called L/U parametric timed games. The presented results not only simplify the theory underlying the detection of equivalent mutants in timed games, but at the same time they improve the expressiveness of a known decidable fragment of parametric timed games for which monotonicity holds.
[6] Maurice H. ter Beek, Guillermina Cledou, Rolf Hennicker, and José Proença, “Can We Communicate? Using Dynamic Logic to Verify Team Automata”, in Proceedings of the 25th International Symposium on Formal Methods (FM'23), Marsha Chechik, Joost-Pieter Katoen, and Martin Leucker, Eds. 2023, vol. 14000 of Lecture Notes in Computer Science, pp. 122–141, Springer. [ DOI ]
Team automata describe networks of automata with input and output actions, extended with synchronisation policies guiding how many interacting components can synchronise on a shared input/output action. Given such a team automaton, we can reason over communication properties such as receptiveness (sent messages must be received) and responsiveness (pending receives must be satisfied). Previous work focused on how to identify these communication properties. However, automatically verifying these properties is non-trivial, as it may involve traversing networks of interacting automata with large state spaces. This paper investigates (1) how to characterise communication properties for team automata (and subsumed models) using test-free propositional dynamic logic, and (2) how to use this characterisation to verify communication properties by model checking. A prototype tool supports the theory, using a transformation to interact with the mCRL2 tool for model checking.
[7] Davide Basile and Maurice H. ter Beek, “Research Challenges in Orchestration Synthesis”, in Proceedings of the 16th Interaction and Concurrency Experience (ICE'23), Clément Aubert, Cinzia Di Giusto, Simon Fowler, and Larisa Safina, Eds., 2023, vol. 383 of EPTCS, pp. 73–90. [ DOI ]
Contract automata make it possible to formally define the behaviour of service contracts in terms of service offers and requests, some of which are optional and some of which are necessary. A composition of contracts is said to be in agreement if all service requests are matched by corresponding offers. Whenever a composition of contracts is not in agreement, it can be refined to reach an agreement using the orchestration synthesis algorithm. This algorithm is a variant of the synthesis algorithm used in supervisory control theory and it is based on the fact that optional transitions are controllable, whereas necessary transitions are at most semi-controllable and cannot always be controlled. In fact, the resulting orchestration is such that as much as possible of the behaviour in agreement is maintained. In this paper, we discuss recent developments of the orchestration synthesis algorithm for contract automata. Notably, we present a refined notion of semi-controllability and compare it with the original notion by means of examples. We then discuss the current limits of the orchestration synthesis algorithm and identify a number of research challenges together with a research roadmap.
[8] Davide Basile and Maurice H. ter Beek, “A Runtime Environment for Contract Automata”, in Proceedings of the 25th International Symposium on Formal Methods (FM'23), Marsha Chechik, Joost-Pieter Katoen, and Martin Leucker, Eds. 2023, vol. 14000 of Lecture Notes in Computer Science, pp. 550–567, Springer. [ DOI ]
Contract automata have been introduced for specifying applications through behavioural contracts and for synthesising their orchestrations as finite state automata. This paper addresses the realisation of applications from contract automata specifications. We present CARE, a new runtime environment to coordinate services implementing contracts that guarantees the adherence of the implementation to its contract. We discuss how CARE can be adopted to realise contract-based applications, its formal guarantees, and we identify the responsibilities of the involved business actors. Experiments show the benefits of adopting CARE with respect to manual implementations.
[9] Riccardo Bianchini, Francesco Dagnino, Paola Giannini, and Elena Zucca, “A Java-like calculus with heterogeneous coeffects”, Theoretical Computer Science, vol. 971, pp. 114063, 2023. [ DOI ]
[10] Riccardo Bianchini, Francesco Dagnino, Paola Giannini, and Elena Zucca, “Multi-graded Featherweight Java”, in 37th European Conference on Object-Oriented Programming, ECOOP 2023, July 17-21, 2023, Seattle, Washington, United States, Karim Ali and Guido Salvaneschi, Eds. 2023, vol. 263 of LIPIcs, pp. 3:1–3:27, Schloss Dagstuhl - Leibniz-Zentrum für Informatik. [ DOI ]
[11] Giancarlo Fortino, Fabrizio Messina, Domenico Rosaci, and Giuseppe M. L. Sarnè, “Using trust measures to optimize neighbor selection for smart blockchain networks in the IoT”, IEEE Internet of Things Journal, 2023.
[12] Maurice H. ter Beek, Rolf Hennicker, and José Proença, “Realisability of Global Models of Interaction”, in Proceedings of the 20th International Colloquium on Theoretical Aspects of Computing (ICTAC'23), Erika Ábrahám, Clemens Dubslaff, and Silvia Lizeth Tapia Tarifa, Eds. 2023, vol. 14446 of Lecture Notes in Computer Science, pp. 236–255, Springer. [ DOI ]
We consider global models of communicating agents specified as transition systems labelled by interactions in which multiple senders and receivers can participate. A realisation of such a model is a set of local transition systems—one per agent—which are executed concurrently using synchronous communication. Our core challenge is how to check whether a global model is realisable and, if it is, how to synthesise a realisation. We identify and compare two variants to realise global interaction models, both relying on bisimulation equivalence. Then we investigate, for both variants, realisability conditions to be checked on global models. We propose a synthesis method for the construction of realisations by grouping locally indistinguishable states. The paper is accompanied by a tool that implements realisability checks and synthesises realisations.
[13] Juliane Päßler, Maurice H. ter Beek, Ferruccio Damiani, S. Lizeth Tapia Tarifa, and Einar Broch Johnsen, “Formal Modelling and Analysis of a Self-Adaptive Robotic System”, in Proceedings of the 18th International Conference on Integrated Formal Methods (IFM'23), Paula Herber and Anton Wijs, Eds. 2023, vol. 14300 of Lecture Notes in Computer Science, pp. 342–363, Springer. [ DOI ]
Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending both on environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
[14] Vincenzo Ciancia, Jan Friso Groote, Diego Latella, Mieke Massink, and Erik P. de Vink, “Minimisation of spatial models using branching bisimilarity”, in Proceedings of Formal Methods (FM'23), Marsha Chechik, Joost-Pieter Katoen, and Martin Leucker, Eds. 2023, vol. 14000 of Lecture Notes in Computer Science, pp. 263–281, Springer. [ DOI | http ]
[15] Vincenzo Ciancia, David Gabelaia, Diego Latella, Mieke Massink, and Erik P. de Vink, “On Bisimilarity for Polyhedral Models and SLCS”, in Proceedings of FORTE'23, Marieke Huisman and António Ravara, Eds. 2023, vol. 14300 of Lecture Notes in Computer Science, pp. 343–363, Springer.
[16] Stefania Monica, Federico Bergenti, and Franco Zambonelli, “A kinetic approach to investigate the collective dynamics of multi-agent systems”, International Journal on Software Tools for Technology Transfer, vol. 25, no. 5–6, pp. 693–705, 2023. [ DOI ]
When the number of interacting agents in a multi-agent system is large, the detailed study of the dynamics of each agent tends to obfuscate the collective and possibly emergent dynamics of the multi-agent system as a whole. When the interest is on the collective properties of the multi-agent system, a statistical study of the dynamics of the states of the agents can provide a more effective perspective on the system. In particular, a statistical approach can better focus on the long-term asymptotic properties of the studied multi-agent system. The initial part of this paper outlines a framework to approach the study of the collective properties of multi-agent systems. The framework targets large and decentralized multi-agent systems in which the relevant collective properties emerge from interactions. Then, the paper exemplifies the use of the framework to study the long-term asymptotic properties of multi-agent systems in which agents interact using the symmetric gossip algorithm. The state of each agent is represented as a real number, and the use of the framework shows that all agents exponentially converge to the average of their initial states. The analytic results provided by the framework are confirmed by independent multi-agent simulations. Finally, the paper is concluded with a brief discussion of related work and an overview of future extensions.
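As a quick illustration of the convergence claim above (the notation below is introduced here for illustration only and need not match the kinetic formulation used in the paper), the standard symmetric gossip interaction between agents i and j replaces both of their states with the mean of the two:

\[
x_i' \;=\; x_j' \;=\; \frac{x_i + x_j}{2},
\qquad
\sum_{k=1}^{N} x_k' \;=\; \sum_{k=1}^{N} x_k .
\]

Each interaction therefore preserves the global sum, and hence the average \(\bar{x} = \tfrac{1}{N}\sum_{k=1}^{N} x_k(0)\), while eliminating the difference between the two interacting states; repeated pairwise averaging thus drives every agent's state towards the average of the initial states, consistent with the exponential convergence reported above.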
[17] Simone Dallospedale, Mattia Alex Leoni, Stefania Monica, and Federico Bergenti, “Experiments on accurate indoor positioning using the whale optimization algorithm”, in Proceedings of the Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE 2023). 2023, IEEE. [ DOI ]
The scenarios considered in this paper are examples of indoor positioning problems in industrial premises like warehouses and storage buildings. As usual in these cases, some anchor nodes are deployed at fixed and known locations near the ceiling of the considered environment to support maximum coverage and minimum interference positioning. These scenarios are critical for ordinary positioning algorithms because of the symmetries introduced by the deployment of the anchor nodes, and therefore alternative algorithms based on optimization are required. The whale optimization algorithm is considered in this paper as a viable alternative to support accurate positioning in the considered scenarios. The reported experimental campaign is designed to assess the performance of this algorithm for 2D positioning using two options to reduce the positioning problem from 3D to 2D under the assumption that ultra-wide band ranging is used to estimate distances. The first option treats the problem as genuinely 3D. The second option projects the estimated distances on the target plane to work in 2D. The results of the reported experimental campaign provide empirical evidence that the whale optimization algorithm can be effectively used for accurate 2D positioning using both options, but the second option is preferable because it requires a lower computational cost.
[18] Giuseppe Petrosino, Stefania Monica, and Federico Bergenti, “Imperative and event-driven programming of interoperable software agents”, in Proceedings of the 11th International Workshop on Engineering Multi-Agent Systems (EMAS 2023). 2023, vol. 14378 of Lecture Notes in Computer Science, pp. 23–40, Springer. [ DOI ]
Jadescript is a recent agent-oriented programming language conceived to support the effective development of agents and multi-agent systems based on JADE. Jadescript is designed to ease the development of agents by means of a tailored syntax matched with a programmer-friendly development environment. This paper presents a brief overview of Jadescript by describing its main features and abstractions and by comparing them with the corresponding features and abstractions advocated by other agent-oriented programming languages. Moreover, to show the applicability of Jadescript to the construction of interesting multi-agent systems, this paper concisely summarizes a case study in which Jadescript is used to implement agents that participate in English auctions. Finally, this paper is concluded with a brief overview of planned future developments of the language.
[19] Federico Bergenti, Stefania Monica, and Giuseppe Petrosino, “A comprehensive presentation of the Jadescript agent-oriented programming language”, in EUMAS 2023: Multi-Agent Systems. 2023, vol. 14282 of Lecture Notes in Computer Science, pp. 100–115, Springer. [ DOI ]
Jadescript is an agent-oriented programming language based on JADE that aims to become a dependable tool for the construction of industrial-strength multi-agent systems. This paper contributes to this objective by providing researchers and practitioners with a comprehensive description of Jadescript that discusses the most relevant features attained in several years of continuous development. In particular, this paper focuses on how Jadescript promotes the adoption of some ideas taken from agent-oriented programming by providing direct support for agent-oriented abstractions, like messages and ontologies, by encouraging the use of event-driven programming to govern interactions, and by allowing fine-grained task management using behaviours. Finally, to illustrate the practical applicability of Jadescript, this paper presents in detail the implementation of a well-known election algorithm traditionally used to coordinate distributed systems.
[20] Giuseppe Petrosino, Stefania Monica, and Federico Bergenti, “Cross-paradigm interoperability between Jadescript and Java”, in Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART. 2023, pp. 165–172, SciTePress. [ DOI ]
Jadescript is a recent language for practical agent-oriented programming that aims at easing the development of multi-agent systems for real-world applications. Normally, these applications require extensive and structured reuse of existing software modules, which are commonly developed using object-oriented or legacy technologies. Jadescript was originally designed to ease the translation to Java and, as such, it natively eases the interoperability with Java, and therefore with mainstream and legacy technologies. This paper overviews the features that have been recently added to Jadescript to support effective two-way interoperability with Java. Moreover, this paper thoroughly discusses the main ideas behind such features by framing them in a comparison with related work, and by outlining possible directions for further developments.
[21] Giuseppe Petrosino, Stefania Monica, and Federico Bergenti, “Effective handling of exceptional situations in robust software agents”, Intelligenza Artificiale, vol. 17, no. 1, pp. 37–49, 2023. [ DOI ]
Software agents are normally expected to operate in open and dynamic environments, and therefore they are often supposed to face situations that significantly deviate from the nominal course of events. The effective management of exceptional situations is of paramount importance to provide agents with the needed means to operate in their environments, mostly because these situations should be considered as the norm in open and dynamic environments. This paper presents some recent additions to the Jadescript agent-oriented programming language that were specifically designed to provide agents with the needed capabilities to effectively detect and manage exceptional situations. The first part of this paper motivates the need for sophisticated exception handling capabilities, also by relating the proposed language features with the state of the art documented in the literature. Then, the second part of this paper discusses the proposed language features, also considering the conceptual similarities and differences with the related features normally available in mainstream programming languages. In particular, the proposed language features are presented in terms of three language improvements: the general-purpose support to handle exceptions, the specific support to handle behaviour failures, and the specific support to handle stale messages. Finally, before concluding with some indications on future research activities, the third part of this paper describes a concrete example intended to practically present the actual use of the new language features.
[22] Débora C. Engelmann, Angelo Ferrando, Alison R. Panisson, Davide Ancona, Rafael H. Bordini, and Viviana Mascardi, “Rv4jaca - towards runtime verification of multi-agent systems and robotic applications”, Robotics, vol. 12, no. 2, pp. 49, 2023. [ DOI | http ]
This paper presents a Runtime Verification (RV) approach for Multi-Agent Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of security to the MAS. This is achieved by keeping in mind possible safety-critical uses of the MAS, such as robotic applications. This layer is capable of controlling events during the execution of the system without needing a specific implementation in the behaviour of each agent to recognise the events. In this paper, we mainly focus on MAS when used in the context of hybrid intelligence. This use requires communication between software agents and human beings. In some cases, communication takes place via natural language dialogues. However, this kind of communication brings us to a concern related to controlling the flow of dialogue so that agents can prevent any change in the topic of discussion that could impair their reasoning. The latter may be a problem and undermine the development of the software agents. In this paper, we tackle this problem by proposing and demonstrating the implementation of a framework that aims to control the dialogue flow in a MAS; especially when the MAS communicates with the user through natural language to aid decision-making in a hospital bed allocation scenario.
[23] Davide Ancona, Pietro Barbieri, and Elena Zucca, “Checked corecursive streams: Expressivity and completeness”, Theoretical Computer Science, vol. 974, pp. 114081, 2023. [ DOI | http ]
Checked corecursive streams are a novel approach to stream definitions relying on a semantics of function application detecting cyclic calls, and a well-definedness check ensuring that access to an arbitrary index will always return a result. We extend the technique beyond the simple stream operators considered in previous work, notably by adding an interleaving combinator which has a non-trivial recursion scheme. We show that this actually enhances expressive power, since the interleaving operator cannot be expressed by the others, and that it is still possible to perform a sound and complete well-definedness check, through a symbolic computation which mimics access to an arbitrary index.
[24] Davide Ancona, Angelo Ferrando, and Viviana Mascardi, “Runtime verification of hash code in mutable classes”, in Proceedings of the 25th ACM International Workshop on Formal Techniques for Java-like Programs, FTfJP 2023, Seattle, WA, USA, 18 July 2023, 2023, pp. 25–31. [ DOI | http ]
Most mainstream object-oriented languages provide a notion of equality between objects which can be customized to be weaker than reference equality, and which is coupled with the customizable notion of object hash code. This feature is so pervasive in object-oriented code that incorrect redefinition or use of equality and hash code may have a serious impact on software reliability and safety.

Although redefining equality and hash code in mutable classes is unsafe, many widely used API libraries do so in Java and other similar languages. When objects of such classes are used as keys in hash tables, programs may exhibit unexpected and unpredictable behavior. In this paper we propose a runtime verification solution to avoid or at least mitigate this issue.

Our proposal uses RML, a rewriting-based domain-specific language for runtime verification which is independent of code instrumentation and of the programming language used to develop the software to be verified.

[25] Davide Ancona, Angelo Ferrando, and Viviana Mascardi, “Exploiting logic programming for runtime verification: Current and future perspectives”, in Prolog: The Next 50 Years, pp. 300–317. 2023. [ DOI | http ]
In this paper we discuss how Logic Programming can be exploited for Runtime Verification, an activity where a monitor is in charge of checking whether an observed event is allowed in the current state. If this is the case, the monitor moves to the next state, observes another event, and so on, until either a violation is detected or the stream of events ends. If the system emitting events is expected to run forever, so is the monitor.

Being a semi-formal method, Runtime Verification must rely on a formal specification of the states of the observed system, and on a precise, formal description of the monitor’s behavior. These requirements, and the rising need to deal with partial observability of events, make the adoption of Logic Programming in the Runtime Verification domain extremely suitable, flexible and powerful.

[26] Lorenzo Bettini, “A Java Testing Framework without Reflection”, in Proceedings of the 18th International Conference on Software Technologies (ICSOFT'23), Hans-Georg Fill, Francisco José Domínguez, Marten van Sinderen, and Leszek A. Maciaszek, Eds. 2023, pp. 369–376, SCITEPRESS. [ DOI ]
Java reflection allows a program to inspect the structure of objects at run-time and provides a powerful mechanism to achieve many interesting dynamic features in several Java frameworks. However, reflection breaks the static type safety properties of Java programs and introduces a run-time overhead; thus, it might be better to avoid reflection when possible. In this paper, we present a novel Java testing framework where reflection is never used: we implement the framework only with the Object-Oriented and functional programming mechanisms provided by Java. We will show that implementing and using such a framework is easy, and we avoid the run-time overhead of reflection. Our framework can be used with existing testing libraries and is meant to be extendable.
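To give a flavour of how a Java testing framework can do without reflection (a minimal sketch under my own assumptions, with hypothetical names; this is not the API of the framework presented in the paper), test cases can be registered explicitly as lambdas or method references instead of being discovered by inspecting classes at run-time:

import java.util.ArrayList;
import java.util.List;

// Minimal reflection-free test runner: tests are registered explicitly as
// lambdas or method references, so no run-time inspection of classes is needed.
public final class TinyTest {

    @FunctionalInterface
    public interface TestCase {
        void run() throws Exception;
    }

    private record NamedTest(String name, TestCase body) {}

    private final List<NamedTest> tests = new ArrayList<>();

    // Explicit registration replaces the reflective discovery of annotated methods.
    public TinyTest add(String name, TestCase body) {
        tests.add(new NamedTest(name, body));
        return this;
    }

    public void runAll() {
        int failures = 0;
        for (NamedTest t : tests) {
            try {
                t.body().run();
                System.out.println("PASS " + t.name());
            } catch (Throwable e) {
                failures++;
                System.out.println("FAIL " + t.name() + ": " + e);
            }
        }
        System.out.println(failures == 0 ? "All tests passed." : failures + " test(s) failed.");
    }

    public static void main(String[] args) {
        new TinyTest()
            .add("addition works", () -> {
                if (2 + 2 != 4) throw new AssertionError("2 + 2 should be 4");
            })
            .add("string length", () -> {
                if ("abc".length() != 3) throw new AssertionError("unexpected length");
            })
            .runAll();
    }
}

Explicit registration trades the convenience of reflective test discovery for static type safety and the absence of reflection overhead, in line with the motivation given in the abstract above.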
[27] Francesco Bertolotti and Walter Cazzola, “Fold2Vec: Towards a Statement Based Representation of Code for Code Comprehension”, ACM Transactions on Software Engineering and Methodology, vol. 32, no. 1, pp. 6:1–6:31, February 2023. [ DOI | http ]
We introduce a novel approach to source code representation to be used in combination with neural networks. Such a representation is designed to permit the production of a continuous vector for each code statement. In particular, we present how the representation is produced in the case of Java source code. We test our representation for three tasks: code summarization, statement separation, and code search. We compare with the state-of-the-art non-autoregressive and end-to-end models for these tasks. We conclude that all tasks benefit from the proposed representation, which boosts their performance in terms of F1-score, accuracy, and MRR, respectively. Moreover, we show how models trained on code summarization and models trained on statement separation can be combined to address methods with tangled responsibilities, meaning that these models can be used to detect code misconduct.
[28] Francesco Bertolotti, Walter Cazzola, and Luca Favalli, “On the Granularity of Linguistic Reuse”, Journal of Systems and Software, vol. 202, August 2023. [ DOI | http ]
Programming languages are complex software systems integrated across an ecosystem of different applications, such as language compilers or interpreters, but also an integrated development environment comprising syntax highlighting, code completion, error recovery, and a debugger. The complexity of language ecosystems can be managed using language workbenches—i.e., tools that tackle the development of programming languages, domain-specific languages and their ecosystems in a modular way.

As with any other software system, one of the priorities that developers struggle to achieve when developing programming languages is reusability. After all, the capacity to easily reuse and adapt existing components to new scenarios can dramatically improve development times. Therefore, as programming languages offer features to reuse existing code, language workbenches should offer tools to reuse existing language assets. However, reusability can be achieved in many different ways.

In this work, we identify six forms of linguistic reusability, ordered by level of granularity: (i) sub-languages composition, (ii) language features composition, (iii) syntax and semantics assets composition, (iv) semantic assets composition, (v) actions composition, and (vi) action extension. We use these mechanisms to extend the taxonomy of language composition proposed by Erdweg et al. To show a concrete application of this taxonomy, we evaluate the capabilities provided by the Neverlang language workbench with regard to our taxonomy and extend it by adding explicit support for any granularity level that was not originally supported. This is done by instantiating two levels of reusability as actual operators—desugaring and delegation. We evaluate these operators against the clone-and-own approach, which was the only form of reuse at that level of granularity prior to the introduction of explicit operators. We show that with the clone-and-own approach the design quality of the source code is negatively affected. We conclude that language workbenches can benefit from the introduction of mechanisms to explicitly support reuse at all granularity levels.

[29] Francesco Bertolotti and Walter Cazzola, “CombTransformers: Statement-Wise Transformers for Statement-Wise Representations”, IEEE Transactions on Software Engineering, vol. 49, no. 10, August 2023. [ DOI | http ]
This study presents a novel category of Transformer architectures known as comb transformers, which effectively reduce the space complexity of the self-attention layer from a quadratic to a sub-quadratic level. This is achieved by processing sequence segments independently and incorporating X-word embeddings to merge cross-segment information. The reduction in attention memory requirements enables the deployment of deeper architectures, potentially leading to more competitive outcomes. Furthermore, we design an abstract syntax tree (AST)-based code representation to effectively exploit comb transformer properties. To explore the potential of our approach, we develop nine specific instances based on three popular architectural concepts: funnel, hourglass, and encoder-decoder. These architectures are subsequently trained on three code-related tasks: method name generation, code search, and code summarization. These tasks encompass a range of capabilities: short/long sequence generation and classification. In addition to the proposed comb transformers, we also evaluate several baseline architectures for comparative analysis. Our findings demonstrate that the comb transformers match the performance of the baselines and frequently perform better.
[30] Walter Cazzola and Luca Favalli, “The Language Mutation Problem: Leveraging Language Product Lines for Mutation Testing of Interpreters”, Journal of Systems and Software, vol. 195, October 2023. [ DOI ]
Compilers translate programs from a high level of abstraction into a low level representation that can be understood and executed by the computer; interpreters directly execute instructions from source code to convey their semantics. Undoubtedly, the correctness of both compilers and interpreters is fundamental to reliably execute the semantics of any software developed by means of high-level languages. Testing is one of the most important methods to detect errors in any software, including compilers and interpreters. Among testing methods, mutation testing is an empirically effective technique often used to evaluate and improve the quality of test suites. However, mutation testing imposes severe demands on computing resources due to the large number of mutants that need to be generated, compiled and executed. In this work, we introduce a mutation approach for programming languages that mitigates this problem by leveraging the properties of language product lines, language workbenches and separate compilation. In this approach, the base language is taken as a black-box and mutated by means of mutation operators performed at language feature level to create a family of mutants of the base language. Each variant of the mutant family is created at runtime, without any access to the source code and without performing any additional compilation. We report results from a preliminary case study in which mutants of an ECMAScript interpreter are tested against the Sputnik conformance test suite for the ECMA-262 specification. The experimental data indicates that this approach can be used to create generally non-trivial mutants.
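To convey the idea of feature-level mutation without recompilation, here is a small illustrative Python sketch (hypothetical names; not the tool described in the paper): an interpreter is assembled from per-feature semantic modules, and a mutant variant is derived at runtime by swapping one module.

```python
# Illustrative sketch (names are hypothetical): an interpreter assembled from
# per-feature semantic modules; a mutant variant is obtained at runtime by
# swapping one feature-level module, without touching or recompiling the rest.
import operator

BASE_FEATURES = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
}

def make_interpreter(features):
    def interp(expr):
        # expr is a nested tuple: (op, lhs, rhs) or a number
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            return features[op](interp(lhs), interp(rhs))
        return expr
    return interp

def mutate(features, feature_name, replacement):
    """Feature-level mutation operator: replace one language feature."""
    variant = dict(features)
    variant[feature_name] = replacement
    return variant

base = make_interpreter(BASE_FEATURES)
mutant = make_interpreter(mutate(BASE_FEATURES, "+", operator.sub))  # '+' behaves like '-'

program = ("+", ("*", 2, 3), 4)
print(base(program), mutant(program))   # 10 vs 2: a good test suite should kill this mutant
```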
[31] Walter Cazzola and Luca Favalli, “Exceptions All Over the Shop: Modular, Customizable, Language-Independent Exception Handling Layer”, in Proceedings of the 16th International Conference on Software Language Engineering (SLE'23), Thomas Degueule and Elizabeth Scott, Eds., Cascais, Portugal, October 2023, pp. 15–28, ACM. [ DOI | http ]
The introduction of better abstractions is at the forefront of research and practice. Among many approaches, domain-specific languages are subject to an increase in popularity due to the need for easier, faster and more reliable application development that involves programmers and domain experts alike. To smooth the adoption of such a language-driven development process, researchers must create new engineering techniques for the development of programming languages and their ecosystems. Traditionally, programming languages are implemented from scratch and in a monolithic way. Conversely, modular and reusable language development solutions would improve maintainability, reusability and extensibility. Many programming languages share similarities that can be leveraged to reuse the same language feature implementations across several programming languages; recent language workbenches strive to achieve this goal by solving the language composition and language extension problems. Yet, some features are inherently complex and affect the behavior of several language features. Most notably, the exception handling mechanism involves varied aspects, ranging from the memory layout, variables, and their scope, up to the execution of each statement that may cause an exceptional event, e.g., a division by zero. In this paper, we propose an approach to untangle the exception handling mechanism dubbed the exception handling layer: its components are modular and fully independent from one another, as well as from other language features. The exception handling layer is language-independent, customizable with regard to the memory layout, and supports unconventional exception handling language features. To avoid any assumptions about the host language, the exception handling layer is a stand-alone framework, decoupled from the exception handling mechanism offered by the back-end. Then, we present a full-fledged, generic Java implementation of the exception handling layer. The applicability of this approach is presented through a language evolution scenario based on Neverlang implementations of JavaScript and LogLang, which we extend with conventional and unconventional exception handling language features using the exception handling layer, with limited impact on their original implementation.
[32] Walter Cazzola and Luca Favalli, “Scrambled Features for Breakfast: Concept and Practice of Agile Language Development”, Communications of the ACM, vol. 66, no. 11, pp. 30–40, November 2023. [ DOI | http ]
Modular software development is taken for granted thanks to the abstractions of modern programming languages. However, the programming languages themselves are still often developed as a monolithic whole, despite them being the very foundation our software is based on. Mainstream programming languages are rarely developed incrementally and their evolution and maintenance are a mere afterthought. The growing research field of language workbenches tries to improve language design and development by employing modularization techniques that maximize code reuse up to the language construct granularity. In this paper, we investigate how such an approach fits an agile style of development: agile engineering concepts such as product backlog items and sprint goals can be directly mapped onto language workbench concepts such as language features and language variants. This direct mapping allows for an easier development of modular, extensible and maintainable programming languages. We conceptualized an agile language development style that acknowledges such a mapping and then turned concepts into theory by outlining an agile language development methodology. Then, thanks to our partnership with Tyl, we put theory into practice in an industrial production environment. Finally, we share our experience with the agile creation process of a domain-specific language for enterprise resource planning (ERP).
[33] Francesco Bertolotti, Walter Cazzola, and Luca Favalli, “SP⅃LꟼƧ: Software Product Lines Extraction Driven by Language Server Protocol”, Journal of Systems and Software, vol. 205, November 2023. [ DOI | http ]
Software product lines (SPL) describe highly-variable software systems as a family of similar products that differ in terms of the features they provide. The promise of SPL engineering is to enable massive software reuse by allowing software features to be reused across a variety of different products made for several customers. However, there are some disadvantages in the extraction of SPLs from standard applications. Most notably, approaches to the development of SPLs are not supported by the base language and use syntax and composition techniques that require a deep understanding of the tools being used. Therefore, the same features cannot be used in a different application and developers must face a steep learning curve when developing SPLs for the first time or when switching from one approach to a different one. Ultimately, this problem is due to a lack of standards in the area of SPL engineering and in the way SPLs are extracted from variability-unaware applications. In this work, we present a framework based on LSP and dubbed SP⅃LꟼƧ that aims at standardizing such a process by decoupling the refactoring operations made by the user from the effect they have on the source code. This way, the server for a specific SPL development approach can be used across several development environments that provide clients with customized refactoring options. Conversely, the developers can use the same client to refactor SPLs made according to different approaches without needing to learn the syntax of each approach. To showcase the applicability of the approach, we present an evaluation performed by refactoring four SPLs according to two different approaches: the results show that a minimal implementation of the SP⅃LꟼƧ client and server applications can be used to reduce the effort of extracting an SPL by up to 93% and that it can greatly reduce or even completely hide the implementation details from the developer, depending on the chosen approach.
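The decoupling idea can be pictured with the following hypothetical Python sketch (the request type and back-ends are invented for illustration; they are not the LSP or the SP⅃LꟼƧ API): the client issues an abstract refactoring request, and interchangeable server back-ends translate it into concrete edits for different SPL styles.

```python
# Hypothetical sketch: an abstract "extract feature" request sent by a client
# is realised by interchangeable server back-ends for different SPL styles.
# The tiny protocol below is made up for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class ExtractFeature:          # abstract refactoring request sent by the client
    file: str
    start_line: int
    end_line: int
    feature: str

class AnnotationBackend:
    """Realises the request with #ifdef-style annotations."""
    def apply(self, req: ExtractFeature) -> List[str]:
        return [f"{req.file}:{req.start_line}: insert '#ifdef {req.feature}'",
                f"{req.file}:{req.end_line}: insert '#endif'"]

class CompositionBackend:
    """Realises the request by moving the code into a feature module."""
    def apply(self, req: ExtractFeature) -> List[str]:
        return [f"move {req.file}:{req.start_line}-{req.end_line} "
                f"to features/{req.feature}/{req.file}"]

def server(backend, request):
    return backend.apply(request)

req = ExtractFeature("Cart.java", 10, 25, "Discounts")
print(server(AnnotationBackend(), req))
print(server(CompositionBackend(), req))
```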

2024

[1] Maurice H. ter Beek, Rolf Hennicker, and José Proença, “Team Automata: Overview and Roadmap”, in Proceedings of the 26th IFIP WG 6.1 International Conference on Coordination Models and Languages (COORDINATION'24), Ilaria Castellani and Francesco Tiezzi, Eds. 2024, vol. 14676 of Lecture Notes in Computer Science, pp. 161–198, Springer. [ DOI ]
Team Automata is a formalism for interacting component-based systems proposed in 1997, whereby multiple sending and receiving actions from concurrent automata can synchronise. During the past 25+ years, team automata have been studied and applied in many different contexts, involving 25+ researchers and resulting in 25+ publications. In this paper, we first revisit the specific notion of synchronisation and composition of team automata, relating it to other relevant coordination models, such as Reo, BIP, Contract Automata, Choreography Automata, and Multi-Party Session Types. We then identify several aspects that have recently been investigated for team automata and related models. These include communication properties (which are the properties of interest?), realisability (how to decompose a global model into local components?) and tool support (what has been automatised or implemented?). Our presentation of these aspects provides a snapshot of the most recent trends in research on team automata, and delineates a roadmap for future research, both for team automata and for related formalisms.
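As a rough intuition of the kind of synchronisation involved (a much-simplified product, not the general team automata definition with its configurable synchronisation policies), consider the following Python sketch in which all components sharing an action must move together while the others stay put.

```python
# Much-simplified sketch (not the full team automata definition): a product of
# component automata in which all components sharing an action move together,
# while the others stay put. State and action names are illustrative.
from itertools import product

def compose(components):
    """components: list of (states, init, alphabet, delta) with
    delta: dict[(state, action)] -> state."""
    alphabet = set().union(*(a for _, _, a, _ in components))
    init = tuple(i for _, i, _, _ in components)
    transitions = {}
    for joint in product(*(s for s, _, _, _ in components)):
        for act in alphabet:
            nxt = []
            for (_, _, alpha, delta), local in zip(components, joint):
                if act in alpha:
                    if (local, act) not in delta:   # a sharing component cannot move: blocked
                        break
                    nxt.append(delta[(local, act)])
                else:
                    nxt.append(local)               # not involved: stays put
            else:
                transitions[(joint, act)] = tuple(nxt)
    return init, transitions

sender = ({"s0", "s1"}, "s0", {"msg"}, {("s0", "msg"): "s1"})
receiver = ({"r0", "r1"}, "r0", {"msg"}, {("r0", "msg"): "r1"})
init, trans = compose([sender, receiver])
print(init, trans)   # ('s0', 'r0') {(('s0', 'r0'), 'msg'): ('s1', 'r1')}
```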
[2] Fabrizio Messina, Corrado Santoro, and Federico Fausto Santoro, “Enhancing Security and Trust in Internet of Things through Meshtastic Protocol Utilising Low-Range Technology”, Electronics, vol. 13, no. 6, pp. 1055, 2024.
[3] Fabrizio Messina, Domenico Rosaci, and Giuseppe M. L. Sarnè, “Applying Trust Patterns to Model Complex Trustworthiness in the Internet of Things”, Electronics, vol. 13, no. 11, 2024. [ DOI | http ]
[4] Alessio Ferrari, Thaide Huichapa, Paola Spoletini, Nicole Novielli, Davide Fucci, and Daniela Girardi, “Using Voice and Biofeedback to Predict User Engagement during Product Feedback Interviews”, ACM Transactions on Software Engineering and Methodology, vol. 33, no. 4, pp. 87:1–87:36, 2024. [ DOI ]
Capturing users' engagement is crucial for gathering feedback about the features of a software product. In a market-driven context, current approaches to collecting and analyzing users' feedback are based on techniques leveraging information extracted from product reviews and social media. These approaches are hardly applicable in contexts where online feedback is limited, as for the majority of apps, and software in general. In such cases, companies need to resort to face-to-face interviews to get feedback on their products. In this article, we propose to utilize biometric data, in terms of physiological and voice features, to complement product feedback interviews with information about the engagement of the user on product-relevant topics. We evaluate our approach by interviewing users while gathering their physiological data (i.e., biofeedback) using an Empatica E4 wristband, and capturing their voice through the default audio-recorder of a common laptop. Our results show that we can predict users' engagement by training supervised machine learning algorithms on biofeedback and voice data, and that voice features alone can be sufficiently effective. The best configurations evaluated achieve an average F1 ∼ 70% in terms of classification performance, and use voice features only. This work is one of the first studies in requirements engineering in which biometrics are used to identify emotions. Furthermore, this is one of the first studies in software engineering that considers voice analysis. The usage of voice features can be particularly helpful for emotion-aware feedback collection in remote communication, either performed by human analysts or voice-based chatbots, and can also be exploited to support the analysis of meetings in software engineering research.
[5] Giovanna Broccia, Maurice H. ter Beek, Alberto Lluch Lafuente, Paola Spoletini, and Alessio Ferrari, “Assessing the Understandability and Acceptance of Attack-Defense Trees for Modelling Security Requirements”, in Proceedings of the 30th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ'24), Daniel Mendez and Ana Moreira, Eds. March 2024, vol. 14588 of Lecture Notes in Computer Science, pp. 39–56, Springer. [ DOI ]
Context and Motivation: Attack-Defense Trees (ADTs) are a graphical notation used to model and assess security requirements. ADTs are widely popular, as they can facilitate communication between different stakeholders involved in system security evaluation, and they are formal enough to be verified, e.g., with model checkers. Question/Problem: While the quality of this notation has been primarily assessed quantitatively, its understandability has never been evaluated despite being mentioned as a key factor for its success. Principal idea/Results: In this paper, we conduct an experiment with 25 human subjects to assess the understandability and user acceptance of the ADT notation. The study focuses on performance-based variables and perception-based variables, with the aim of evaluating the relationship between these measures and how they might impact the practical use of the notation. The results confirm the good level of understandability of ADTs. Participants consider them useful, and they show intention to use them. Contribution: This is the first study empirically supporting the understandability of ADTs, thereby contributing to the theory of security requirements engineering.
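For readers unfamiliar with the notation, the following tiny Python sketch (simplified semantics and invented node names; not the formalism or tooling used in the study) encodes an attack tree with AND/OR nodes and an attached counter-defense, and checks whether an attack succeeds.

```python
# Tiny illustrative encoding of an attack-defense tree (simplified semantics,
# not the full ADTree formalism): attack nodes combine children with AND/OR and
# may be countered by a defense subtree. Node names are made up.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    gate: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)
    counter: Optional["Node"] = None   # defense attached to this attack node

def attack_succeeds(node: Node, achieved: set, deployed: set) -> bool:
    """achieved: basic attack actions the attacker can perform;
    deployed: basic defense actions the defender has in place."""
    if node.gate == "LEAF":
        ok = node.name in achieved
    elif node.gate == "AND":
        ok = all(attack_succeeds(c, achieved, deployed) for c in node.children)
    else:  # OR
        ok = any(attack_succeeds(c, achieved, deployed) for c in node.children)
    if ok and node.counter is not None:
        # the attack is neutralised if the counter-defense is in place
        ok = not defense_holds(node.counter, deployed)
    return ok

def defense_holds(node: Node, deployed: set) -> bool:
    if node.gate == "LEAF":
        return node.name in deployed
    combine = all if node.gate == "AND" else any
    return combine(defense_holds(c, deployed) for c in node.children)

steal_data = Node("steal data", "OR", [
    Node("phish credentials", counter=Node("2FA")),
    Node("exploit server", "AND", [Node("find CVE"), Node("deploy exploit")]),
])
print(attack_succeeds(steal_data, {"phish credentials"}, {"2FA"}))            # False
print(attack_succeeds(steal_data, {"find CVE", "deploy exploit"}, {"2FA"}))   # True
```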
[6] Michael Lienhardt, Maurice H. ter Beek, and Ferruccio Damiani, “Product Lines of Dataflows”, The Journal of Systems and Software, vol. 210, April 2024. [ DOI ]
Data-centric parallel programming models such as dataflows are well established to implement complex concurrent software. However, in the context of configurable software, the dataflow used in its computation might vary with respect to the selected options: this happens in particular in fields such as Computational Fluid Dynamics (CFD), where the shape of the domain in which the fluid flows and the equations used to simulate the flow are all options configuring the dataflow to execute.

In this paper, we present an approach to implement product lines of dataflows, based on Delta-Oriented Programming (DOP) and term rewriting. This approach includes several analyses to check that all dataflows of a product line can be generated. Moreover, we discuss a prototype implementation of the approach and demonstrate its feasibility in practice.
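A minimal illustrative sketch of the delta-oriented idea, assuming a dataflow is just a dependency graph (the features and deltas below are invented, not taken from the paper):

```python
# Illustrative sketch (not the paper's implementation) of delta-oriented
# configuration of a dataflow: a core dataflow graph is rewritten by deltas
# that are activated by the selected features. Names are made up.
BASE_DATAFLOW = {
    "read_mesh":    [],
    "solve_flow":   ["read_mesh"],
    "write_output": ["solve_flow"],
}

def delta_turbulence(dag):
    """Activated by feature 'turbulence': adds a turbulence-model stage."""
    dag = {k: list(v) for k, v in dag.items()}
    dag["turbulence_model"] = ["read_mesh"]
    dag["solve_flow"] = ["read_mesh", "turbulence_model"]
    return dag

def delta_2d(dag):
    """Activated by feature '2d': swaps the mesh reader for a 2D one."""
    return {("read_mesh_2d" if k == "read_mesh" else k):
            ["read_mesh_2d" if d == "read_mesh" else d for d in v]
            for k, v in dag.items()}

DELTAS = [("turbulence", delta_turbulence), ("2d", delta_2d)]

def derive(features):
    dag = BASE_DATAFLOW
    for feature, delta in DELTAS:          # application order matters, as in DOP
        if feature in features:
            dag = delta(dag)
    return dag

print(derive({"turbulence"}))
print(derive({"turbulence", "2d"}))
```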

[7] Davide Basile, Maurice H. ter Beek, Alessandro Fantechi, and Stefania Gnesi, “Coherent Modal Transition Systems Refinement”, Journal of Logical and Algebraic Methods in Programming, vol. 138, April 2024. [ DOI ]
Modal Transition Systems (MTS) are a well-known formalism that extends Labelled Transition Systems (LTS) with the possibility of specifying necessary and permitted behaviour. Coherent MTS (CMTS) have been introduced to model Software Product Lines (SPL) based on a correspondence between the necessary and permitted modalities of MTS transitions and their associated actions, and the core and optional features of SPL. In this paper, we address open problems of the coherent fragment of MTS and introduce the notions of refinement and thorough refinement of CMTS. Most notably, we prove that refinement and thorough refinement coincide for CMTS, while it is known that this is not the case for MTS. We also define (thorough) equivalence and strong bisimilarity of both MTS and CMTS. We show their relations and, in particular, we prove that also strong bisimilarity and equivalence coincide for CMTS, whereas they do not for MTS. Finally, we extend our investigation to CMTS equipped with Constraints (MTSC), originally introduced to express alternative behaviour, and we prove that novel notions of refinement and strong thorough refinement coincide for MTSC, and so do their extensions to strong (thorough) equivalence and strong bisimilarity.
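For intuition, modal refinement between two (unconstrained) MTS can be checked naively as a greatest fixpoint over state pairs; the sketch below is illustrative only and does not reflect the paper's constructions for the coherent fragment.

```python
# Naive sketch of modal refinement between two MTS, computed as a greatest
# fixpoint over state pairs (illustrative only). must-transitions are assumed
# to also be may-transitions.
def states(init, may):
    st = {init}
    for src, _, dst in may:
        st.update((src, dst))
    return st

def refines(impl, spec):
    """impl, spec: (initial state, may-transitions, must-transitions); the
    transition sets contain (state, action, state) triples, with must ⊆ may."""
    (i0, imay, imust), (s0, smay, smust) = impl, spec
    rel = {(p, q) for p in states(i0, imay) for q in states(s0, smay)}
    changed = True
    while changed:
        changed = False
        for (p, q) in set(rel):
            # every permitted step of the refinement must be permitted by the spec
            ok_may = all(any(qs == q and b == a and (p2, q2) in rel
                             for qs, b, q2 in smay)
                         for ps, a, p2 in imay if ps == p)
            # every required step of the spec must be required by the refinement
            ok_must = all(any(ps == p and b == a and (p2, q2) in rel
                              for ps, b, p2 in imust)
                          for qs, a, q2 in smust if qs == q)
            if not (ok_may and ok_must):
                rel.discard((p, q))
                changed = True
    return (i0, s0) in rel

spec = ("t0", {("t0", "log", "t1")}, set())                  # "log" is permitted, not required
good = ("s0", {("s0", "log", "s1")}, {("s0", "log", "s1")})  # implements "log" as necessary
bad  = ("u0", {("u0", "erase", "u1")}, set())                # adds behaviour the spec never permits
print(refines(good, spec), refines(bad, spec))               # True False
```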
[8] Sallam Abualhaija, Fatma Basak Aydemir, Fabiano Dalpiaz, Davide Dell'Anna, Alessio Ferrari, Xavier Franch, and Davide Fucci, “Replication in Requirements Engineering: the NLP for RE Case”, ACM Transactions on Software Engineering and Methodology, April 2024, Just Accepted. [ DOI ]
[Context] Natural language processing (NLP) techniques have been widely applied in the requirements engineering (RE) field to support tasks such as classification and ambiguity detection. Despite its empirical vocation, RE research has given limited attention to replication of NLP for RE studies. Replication is hampered by several factors, including the context specificity of the studies, the heterogeneity of the tasks involving NLP, the tasks' inherent hairiness, and, in turn, the heterogeneous reporting structure. [Solution] To address these issues, we propose a new artifact, referred to as ID-Card, whose goal is to provide a structured summary of research papers emphasizing replication-relevant information. We construct the ID-Card through a structured, iterative process based on design science. [Results] In this paper: (i) we report on hands-on experiences of replication, (ii) we review the state-of-the-art and extract replication-relevant information, (iii) we identify, through focus groups, challenges across two typical dimensions of replication: data annotation and tool reconstruction, and (iv) we present the concept and structure of the ID-Card to mitigate the identified challenges. [Contribution] This study aims to create awareness of replication in NLP for RE. We propose an ID-Card that is intended to foster study replication, but can also be used in other contexts, e.g., for educational purposes.
[9] Francesco Bertolotti, Walter Cazzola, and Luca Favalli, “★piler: Compilers in Search of Compilations”, Journal of Systems and Software, vol. 212, June 2024. [ DOI ]
Compilers pose significant challenges in their development as software products. Language developers face the complexities of ensuring efficiency, adhering to good design practices, and maintaining the overall codebase. These factors make it difficult to predict the unexpected impact of updates on existing software built on the current compiler stack. Furthermore, software created for a specific compiler often lacks reusability for other compiler environments. In this study, we propose a comprehensive framework for the uniform development of compilers that addresses these issues. Our approach involves developing compilers as a collection of small transpilation units, referred to as deltas. The transpilation infrastructure takes source code written in a particular source language and searches for a path of deltas to generate equivalent source code in the target language. By adopting this methodology, language developers can easily update their languages by introducing new deltas into the system. Existing code remains unaffected as old transpilation paths remain available. To support this framework, we have devised a metric space for efficient delta search. This metric space enables us to define a non-overestimating heuristic function, which proves valuable in solving the search problem. Leveraging the A* search algorithm, we can efficiently transpile programs from a source language to the target language. To evaluate the effectiveness of our approach, we conducted a benchmark comparison between the A* search algorithm and the simpler breadth-first search (BFS) algorithm. The benchmark consisted of over 100 transpilation searches, providing valuable insights into the performance and capabilities of this framework.
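The search idea can be illustrated with a toy A* over a delta graph (the languages, deltas, costs and heuristic below are all invented; this is not the ★piler code base): deltas are weighted edges between languages, and a non-overestimating heuristic guides the search for a transpilation path.

```python
# Toy sketch (illustrative only): languages are nodes, deltas are weighted
# edges, and A* searches a transpilation path from a source to a target
# language using a non-overestimating (admissible) heuristic.
import heapq

# delta name -> (from_language, to_language, cost)
DELTAS = {
    "py_to_ir": ("Python", "IR", 2),
    "ir_to_c":  ("IR", "C", 3),
    "ir_to_js": ("IR", "JavaScript", 2),
    "c_to_js":  ("C", "JavaScript", 4),
}

# Hypothetical admissible heuristic: a lower bound on the remaining cost to JavaScript.
H = {"Python": 4, "IR": 2, "C": 4, "JavaScript": 0}

def astar(source, target):
    frontier = [(H[source], 0, source, [])]       # (f, g, language, path of deltas)
    seen = {}
    while frontier:
        f, g, lang, path = heapq.heappop(frontier)
        if lang == target:
            return path, g
        if seen.get(lang, float("inf")) <= g:
            continue
        seen[lang] = g
        for name, (src, dst, cost) in DELTAS.items():
            if src == lang:
                heapq.heappush(frontier, (g + cost + H[dst], g + cost, dst, path + [name]))
    return None, float("inf")

print(astar("Python", "JavaScript"))   # (['py_to_ir', 'ir_to_js'], 4)
```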
[10] Francesco Bertolotti and Walter Cazzola, “By Tying Embeddings You Are Assuming the Distributional Hypothesis”, in Proceedings of the 41st International Conference on Machine Learning (ICML'24), Vienna, Austria, July 2024. [ DOI ]
In this work, we analyze both theoretically and empirically the effect of tied input-output embeddings—a popular technique that reduces the model size while often improving training. Interestingly, we found that this technique is connected to Harris (1954)'s distributional hypothesis—often portrayed by the famous Firth (1957)'s quote “a word is characterized by the company it keeps”. Specifically, our findings indicate that words (or, more broadly, symbols) with similar semantics tend to be encoded in similar input embeddings, while words that appear in similar contexts are encoded in similar output embeddings (thus explaining the semantic space arising in the input and output embeddings of foundational language models). As a consequence of these findings, the tying of the input and output embeddings is encouraged only when the distributional hypothesis holds for the underlying data. These results also provide insight into the embeddings of foundation language models (which are known to be semantically organized). Further, we complement the theoretical findings with several experiments supporting the claims.
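A minimal NumPy sketch of what weight tying means in practice (illustrative only; not the paper's experimental setup):

```python
# Minimal sketch of tied input-output embeddings: one matrix E serves both as
# the input lookup table and, transposed, as the output projection producing
# the logits. Illustrative only; shapes and values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64
E = rng.standard_normal((vocab, dim)) * 0.02     # shared embedding matrix

def forward(token_ids, E):
    x = E[token_ids]                  # input embedding lookup: (seq, dim)
    h = np.tanh(x)                    # stand-in for the transformer body
    logits = h @ E.T                  # output projection reuses E (tying)
    return logits                     # (seq, vocab)

logits = forward(np.array([3, 17, 42]), E)
print(logits.shape)                   # (3, 1000)
# An untied model would instead learn a separate output matrix U and compute h @ U.T;
# tying assumes input and output spaces should coincide, as under the distributional hypothesis.
```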
[11] Francesco Bertolotti, Walter Cazzola, Dario Ostuni, and Carlo Castoldi, “When the Dragons Defeat the Knight: Basilisk an Architectural Pattern for Platform and Language Independent Development”, Journal of Systems and Software, vol. 215, September 2024. [ DOI ]
In this work, we introduce Basilisk, a high-level architectural pattern designed to facilitate interoperability among various languages, platforms, and ecosystems. The pursuit of language-independent software development is highly desirable, enabling developers to utilize existing software products with most programming languages. Achieving platform independence is equally advantageous, allowing code deployment on different platforms effortlessly. While the development community has often aimed for either language or platform independence, Basilisk aims to combine both into a single product. To realize this dual objective, Basilisk employs two fundamental components. The first is a transpilation infrastructure used to render software products language-independent. The second is an abstraction layer over platforms, enabling the creation of platform-independent software products. To illustrate Basilisk’s potential, we introduce Hydra, a one-to-many, declarative transpilation infrastructure. Hydra has been utilized to develop transpilers from HydraKernel (source language) to various target languages, including D, C++, C#, Scala, Ruby, Hy, and Python. Additionally, we instantiate the abstraction layer in Wyvern, a low-level embedded domain-specific language for GPU programming, supporting any Vulkan-compatible GPU. With the Hydra transpilation infrastructure, Wyvern becomes available for D, C++, C#, Scala, Ruby, Hy, and Python. We evaluate Basilisk through the instantiation of Hydra and Wyvern, writing five algorithms from the Rodinia suite for the seven available languages, totaling 35 benchmarks. These benchmarks are executed on four different hardware platforms.

2025