Walter Cazzola's

Bibliography by Topic


The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Full BibTeX
[1]
Walter Cazzola, “Evolution as «Reflections on the Design»”, in MoDELS@Run-Time, Nelly Bencomo, Betty H. C. Cheng, Robert B. France, and Uwe Aßmann, Eds., Lecture Notes in Computer Science 8378, pp. 259–278. Springer, August 2014. [ www: ]

[2]
Walter Cazzola and Edoardo Vacchi, “@Java: Bringing a Richer Annotation Model to Java”, Computer Languages, Systems & Structures, vol. 40, no. 1, pp. 2–18, April 2014. [ DOI | www: ]
The ability to annotate code and, in general, the capability to attach arbitrary meta-data to portions of a program are features that have become more and more common in programming languages.

Annotations in Java make it possible to attach custom, structured meta-data to declarations of classes, fields and methods. However, the mechanism has some limits: annotations can only decorate declarations and their instantiation can only be resolved statically.

With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations, as well as dynamically evaluated members. In other words, in our model, the granularity of annotations extends to the statement and expression level and annotations may hold the result of runtime-evaluated expressions.

Our extension to the Java annotation model is twofold: (i) we introduce block and expression annotations and (ii) we allow every annotation to hold dynamically evaluated values. Our implementation also provides an extended reflection API to support inspection and retrieval of our enhanced annotations.
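For contrast, a minimal sketch in standard Java (the model that @Java extends) shows what the baseline permits: annotations may decorate declarations only and hold statically resolved values. The annotation name `Traced` and the helper `tagOf` are illustrative, not part of @Java.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// A custom annotation retained at run time, as standard Java allows:
// it may decorate declarations only, never statements or expressions,
// and its members must be compile-time constants.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Traced {
    String tag();
}

public class AnnotationDemo {
    @Traced(tag = "core")
    static int compute() { return 42; }

    // Retrieve the annotation through the standard reflection API.
    static String tagOf(String methodName) throws NoSuchMethodException {
        Method m = AnnotationDemo.class.getDeclaredMethod(methodName);
        Traced t = m.getAnnotation(Traced.class);
        return t == null ? null : t.tag();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tagOf("compute")); // prints "core"
    }
}
```

@Java relaxes exactly these two restrictions: the annotation could also mark a single statement or expression inside `compute`, and `tag` could hold a run-time-evaluated value.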

[3]
Walter Cazzola and Edoardo Vacchi, “@Java: Annotations in Freedom”, in Proceedings of the 28th Annual ACM Symposium on Applied Computing (SAC'13), Coimbra, Portugal, March 2013, pp. 1691–1696, ACM Press. [ http ]
The ability to annotate code and, in general, the capability to attach arbitrary metadata to portions of a program are features that have become more and more common in programming languages. In fact, various programming techniques and tools exploit their explicit availability for a number of purposes, such as extracting documentation, guiding code profiling, enhancing the description of a data type, marking code for instrumentation (for instance, in aspect-oriented frameworks), and the list could go on.

While support for attaching metadata to code is not a new concept (programming platforms such as CLOS and Smalltalk pioneered this field), consistent, pervasive APIs to define and manage code annotations are something comparatively recent on modern platforms like .NET and Java.

Annotations in Java make it possible to attach custom, structured metadata to declarations of classes, fields and methods. With this work, we propose an extension to Java (named @Java) that has a richer annotation model, supporting code block and expression annotations. In other words, the granularity of annotations extends to the statement and expression level and is not limited to class, method and field declarations.

[4]
Ying Liu, Walter Cazzola, and Bin Zhang, “Towards a Colored Reflective Petri-Net Approach to Model Self-Evolving Service-Oriented Architectures”, in Proceedings of the 17th Annual ACM Symposium on Applied Computing (SAC'12), Riva del Garda, Trento, Italy, March 2012, pp. 1858–1865, ACM. [ http ]
Service-based software systems may need to evolve during their execution. To support this, system evolution must be considered from the design phase onward. Reflective Petri nets separate the system from its evolution by describing both the system and how it can evolve. However, reflective Petri nets have some expressivity limits and overcomplicate the consistency checking necessary during service evolution. In this paper, we extend the reflective Petri nets approach to overcome such limits, and we show the extension on a case study.

[5]
Lorenzo Capra and Walter Cazzola, “(Symbolic) State-Space Inspection of a Class of Dynamic Petri Nets”, in Proceedings of the Summer Computer Simulation Conference (SCSC'10), Ottawa, Canada, July 2010, pp. 522–530, ACM. [ www: ]

[6]
Lorenzo Capra and Walter Cazzola, “An Introduction to Reflective Petri Nets”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 9, pp. 191–217. IGI Global, November 2009. [ .pdf ]
Most discrete-event systems are subject to evolution during their lifecycle. Evolution often implies the development of new features and their integration into deployed systems. Taking evolution into account from the design phase is therefore mandatory. A common approach consists of hard-coding the foreseeable evolutions at the design level. Besides the obvious difficulties of this approach, the system's design also gets polluted by details that do not concern functionality, which hamper analysis, reuse and maintenance. Petri Nets, as a central formalism for discrete-event systems, are not exempt from pollution when facing evolution. Embedding evolution in Petri nets requires expertise, as well as early knowledge of the evolution. The complexity of the resulting models is likely to affect the consolidated analysis algorithms for Petri nets. We introduce Reflective Petri nets, a formalism for dynamic discrete-event systems. Based on a reflective layout in which functional aspects are separated from evolution, this model preserves the descriptive effectiveness and the analysis capabilities of Petri nets. Reflective Petri nets are provided with a timed state-transition semantics.

[7]
Lorenzo Capra and Walter Cazzola, “Trying out Reflective Petri Nets on a Dynamic Workflow Case”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 10, pp. 218–233. IGI Global, November 2009. [ .pdf ]
Industrial/business processes are an evident example of discrete-event systems that are subject to evolution during their life-cycle. The design and management of dynamic workflows need adequate formal models and support tools to soundly handle possible changes occurring during workflow operation. The known, well-established workflow models, among which Petri nets play a central role, lack features for representing evolution. We propose a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template. The problem is efficiently, but rather empirically, addressed in a workflow management system. Our approach is formal, can be generalized, and is based on the preservation of classical Petri net structural properties, which permits an efficient characterization of workflow soundness.

[8]
Walter Cazzola, “Cogito, Ergo Muto!”, in Proceedings of the Workshop on Self-Organizing Architecture (SOAR'09), Danny Weyns, Sam Malek, Rogério de Lemos, and Jesper Andersson, Eds., Cambridge, United Kingdom, September 2009, pp. 1–7, Invited Paper. [ .pdf ]
No system escapes the need to evolve, whether to fix bugs, to be reconfigured, or to add new features. Evolution becomes particularly problematic when the system to evolve cannot be stopped.

Traditionally the evolution of a continuously running system is tackled by calculating all the possible evolutions in advance and hardwiring them into the application itself. This approach gives rise to the code pollution phenomenon, where the code of the application is polluted by code that might never be applied. The approach has the following defects: i) code bloat; ii) it is impossible to forecast every possible change; and iii) the code becomes hard to read and maintain.

Computational reflection by definition allows an application to introspect and intercede on its own structure and behavior, therefore endowing a reflective application with the (potential) ability of self-evolving. Furthermore, dealing with evolution as a nonfunctional concern, i.e., one that can be separated from the current implementation of the application, can limit the code pollution phenomenon.

Bringing the design information (model and/or architecture) to run-time provides the application with basic knowledge about itself, on which it can reflect when a change is necessary and decide how to deploy it. The availability of such knowledge at run-time frees the designer from forecasting and coding all the possible evolutions, in favor of a sort of evolutionary engine that, to some extent, can evaluate which countermove to apply.

In this contribution, the author will explore the role of reflection and of design information in the development of self-evolving applications. Moreover, he will sketch a basic reflective architecture to support dynamic self-evolution and analyze how well the existing frameworks adhere to such an architecture.

[9]
Lorenzo Capra and Walter Cazzola, “Evolving System's Modeling and Simulation through Reflective Petri Nets”, in Proceedings of the 4th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE'09), Stefan Jablonski and Leszek Maciaszek, Eds., Milan, Italy, May 2009, INSTICC, pp. 59–70, INSTICC Press. [ .pdf ]
The design of dynamic discrete-event systems calls for adequate modeling formalisms and tools to manage possible changes occurring during the system's lifecycle. A common approach is to pollute the design with details that do not regard the current system behavior but rather its evolution. That hampers analysis, reuse and maintenance in general. A reflective Petri net model (based on classical Petri nets) was recently proposed to support dynamic discrete-event system design, and was applied to dynamic workflow management. The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying the latter to the (current) system only when necessary, results in a simple formal model on which the ability to verify properties typical of Petri nets is preserved. In this paper we provide reflective Petri nets with a (labeled) state-transition graph semantics.

[10]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Getting Farther on Software Evolution via AOP and Reflection”, in ECOOP'08 Workshop Reader, Patrick Eugster, Ed., Lecture Notes in Computer Science 5475, pp. 63–69. Springer-Verlag, March 2009. [ .pdf ]
[11]
Walter Cazzola, Shigeru Chiba, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 5th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'08), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, December 2008. [ .pdf ]
[12]
Walter Cazzola and Sonia Pini, “Jigsaw: Information System Composition through a Self-Adaptable Interface”, Technical Report RT 26-08, Department of Informatics and Communication, University of Milan, Milan, Italy, April 2008. [ www: ]

[13]
Lorenzo Capra and Walter Cazzola, “Evolutionary Design through Reflective Petri Nets: an Application to Workflow”, in Proceedings of the 26th IASTED International Conference on Software Engineering (SE'08), Innsbruck, Austria, February 2008, pp. 200–207, ACTA Press. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes during workflow operation. A common approach is to pollute the workflow design with details that do not regard the current behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general. We propose and discuss the adoption of a recent Petri net-based reflective model as a support to dynamic workflow design. Keeping functional aspects separated from evolution results in a dynamic workflow model that merges flexibility with the ability to formally verify basic workflow properties. A structural on-the-fly characterization of sound dynamic workflows is adopted, based on the preservation of Petri net free-choiceness. An application is presented to a localized open problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template.

[14]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, Gunter Saake, Yvonne Coady, Stéphane Ducasse, and Günter Kniesel, “Enabling Software Evolution via AOP and Reflection”, in ECOOP'07 Workshop Reader, Michael Cebulla, Ed., Lecture Notes in Computer Science 4906, pp. 91–98. Springer-Verlag, February 2008. [ .pdf ]
[15]
Lorenzo Capra and Walter Cazzola, “Self-Evolving Petri Nets”, Journal of Universal Computer Science, vol. 13, no. 13, pp. 2002–2034, December 2007. [ .pdf ]
Nowadays, software evolution is a very hot topic. It is particularly complex when it concerns critical and non-stop systems. Usually, these situations are tackled by hard-coding all the foreseeable evolutions into the application design and code.

Neglecting the obvious difficulties in pursuing this approach, we also get the application code and design polluted with details that do not regard the current system functionality, and that hamper design analysis, code reuse and application maintenance in general. Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue.

The goal of this work is to propose a PN-based reflective framework for modeling a system able to evolve, keeping functional aspects separated from evolutionary ones and applying evolution to the model only if necessary. Such an approach keeps the system's model as simple as possible, preserving (and exploiting) the ability to formally verify the system properties typical of PNs, while granting adaptability at the same time.

[16]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2007. [ .pdf ]
[17]
Lorenzo Capra and Walter Cazzola, “A Reflective PN-based Approach to Dynamic Workflow Change”, in Proceedings of the 9th International Symposium in Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'07), Timisoara, Romania, September 2007, IEEE, pp. 533–540. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes occurring during workflow operation. A common approach is to pollute design with details that do not regard the current workflow behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general.

We propose and discuss the adoption of a recent reflective model based on classical Petri nets as a support to dynamic workflow design, by addressing a localized problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template. The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying evolution to the (current) workflow template only when necessary, results in a simple reference model on which the ability to formally verify typical workflow properties is preserved, thus favoring dependable adaptability.

[18]
Walter Cazzola, Sonia Pini, Ahmed Ghoneim, and Gunter Saake, “Co-Evolving Application Code and Design Models by Exploiting Meta-Data”, in Proceedings of the 12th Annual ACM Symposium on Applied Computing (SAC'07), Seoul, South Korea, March 2007, pp. 1275–1279, ACM Press. [ http ]
Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information.

Refactoring, evolution and, in general, every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in the successive maintenance steps. Evolution should affect not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is a hard job to carry out automatically, since there is an evident and notorious gap between these two representations.

We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. Then, these meta-data are retrieved through reflection and used to automatically and coherently update the application design models.
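As a hedged illustration of this kind of meta-data (the annotation name and its members are hypothetical, not the paper's actual vocabulary), a developer-supplied refactoring mark in plain Java might look like:

```java
import java.lang.annotation.*;

// Hypothetical meta-data in the spirit of the approach: the developer
// tags refactored code so that a tool can reflectively retrieve the
// mark and update the design models accordingly.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Refactored {
    String kind();     // e.g. "rename-method"
    String formerly(); // the old name, needed to update the diagrams
}

public class CoEvolutionDemo {
    @Refactored(kind = "rename-method", formerly = "calcTotal")
    int computeTotal() { return 0; }

    // A model-updating tool would read the mark through reflection.
    static String formerNameOf(String method) throws NoSuchMethodException {
        Refactored r = CoEvolutionDemo.class
                .getDeclaredMethod(method)
                .getAnnotation(Refactored.class);
        return r == null ? null : r.formerly();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(formerNameOf("computeTotal")); // prints "calcTotal"
    }
}
```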

[19]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2006. [ .pdf ]
[20]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Viewpoint for Maintaining UML Models against Application Changes”, in Proceedings of International Conference on Software and Data Technologies (ICSOFT 2006), Joaquim Filipe, Markus Helfert, and Boris Shishkov, Eds., Setúbal, Portugal, September 2006, pp. 263–268, Springer. [ .pdf ]
The urgency that characterizes many requests for evolution forces system administrators/developers to directly adapt the system without passing through the adaptation of its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that does not reflect the system's changes. Software developers spend a lot of time evolving the system and then updating the design information according to the system's evolution. In this respect, we present an approach to automatically keep the design information (diagrams, in our case) updated when the system evolves. The diagrams are bound to the application, and all changes to it are reflected in the diagrams as well.

[21]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, “AOSD and Reflection: Benefits and Drawbacks to Software Evolution”, in ECOOP'06 Workshop Reader, Charles Consel and Mario Südholt, Eds., Lecture Notes in Computer Science 4379, pp. 40–52. Springer-Verlag, July 2006. [ .pdf ]
[22]
Lorenzo Capra and Walter Cazzola, “A Petri-Net Based Reflective Framework for the Evolution of Dynamic Systems”, Electronic Notes on Theoretical Computer Science, vol. 159, pp. 41–59, 2006. [ .pdf ]
Nowadays, software evolution is a very hot topic. Many applications need to be updated or extended with new characteristics during their lifecycle. Software evolution is characterized by its huge cost and slow speed of implementation. Often, software evolution implies a redesign of the whole system, the development of new features and their integration into the existing and/or running systems (this last step often implies a complete rebuilding of the system). A good evolution is carried out by evolving the system design information and then propagating the evolution to the implementation.

Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue. Often a system modeled through Petri nets has to be updated, and consequently the model should be updated as well. Some kinds of evolution are foreseeable and could be hard-coded in the code or in the model, respectively.

Embedding evolutionary steps in the model or in the code, however, requires early and full knowledge of the evolution. The model itself must be augmented with details that do not regard the current system functionality, and that jeopardize, or make very hard, the analysis and verification of system properties.

In this work, we propose a PN-based reflective framework for modeling a system able to evolve, keeping functional aspects separated from evolutionary ones and applying evolution to the model only when necessary. Such an approach keeps the model as simple as possible, preserving (and exploiting) the ability to formally verify the system properties typical of PNs, while granting model adaptability at the same time.

[23]
Walter Cazzola, Antonio Cisternino, and Diego Colombo, “Freely Annotating C#”, Journal of Object Technology, vol. 4, no. 10, pp. 31–48, December 2005. [ .pdf ]
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like the JVM and CLR. With custom attributes, Microsoft introduced an extensible model of reflection for the CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The annotation model, both in Java and in C#, limits annotations to classes and class members. In this paper we describe [a]C#, an extension of the C# programming language that allows programmers to annotate statements and code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. We also discuss how to use [a]C# to annotate programs, giving hints on how to parallelize a sequential method, and how it can be implemented by means of the abstractions provided by the run-time of the language. Finally, we show how our model for custom attributes has been realized.

[24]
Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Preprint No. 9/2005 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2005. [ .pdf ]
[25]
Antonio Cisternino, Walter Cazzola, and Diego Colombo, “Metadata-Driven Library Design”, in Proceedings of Library-Centric Software Design Workshop (LCSD'05), San Diego, CA, USA, October 2005. [ .pdf ]
Library development has greatly benefited from the wide adoption of virtual machines like the JVM and Microsoft .NET. Reflection services and first-class dynamic loading have contributed to this trend. Microsoft introduced the notion of custom annotations, a way for the programmer to define custom meta-data stored alongside reflection meta-data within the executable file. Recently, Java has also introduced an equivalent notion into its virtual machine. Custom annotations allow programmers to give hints to libraries about their intentions without having to introduce semantic dependencies within the program; on the other hand, these annotations are read at run-time, introducing a certain amount of overhead. The aim of this paper is to investigate the impact of this new feature on library design, focusing on both expressivity and performance issues.

[26]
Walter Cazzola, Antonio Cisternino, and Diego Colombo, “[a]C#: C# with a Customizable Code Annotation Mechanism”, in Proceedings of the 10th Annual ACM Symposium on Applied Computing (SAC'05), Santa Fe, New Mexico, USA, March 2005, pp. 1274–1278, ACM Press. [ http ]
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like the JVM and CLR. With custom attributes, Microsoft introduced an extensible model of reflection for the CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The extensible model proposed in both platforms limits annotations to class members. In this paper we describe [a]C#, an extension of the C# programming language that allows programmers to annotate statements or code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. Finally, we discuss how to use [a]C# to annotate programs, giving hints on how to parallelize a sequential method, and how it can be implemented by means of the abstractions provided by the run-time of the language.

[27]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Software Evolution: a Trip through Reflective, Aspect, and Meta-Data Oriented Techniques”, in ECOOP'04 Workshop Reader, Jacques Malenfant and Bjarte M. Østvold, Eds., Lecture Notes in Computer Science 3344, pp. 116–130. Springer-Verlag, December 2004. [ .pdf ]
[28]
Walter Cazzola, “SmartReflection: Efficient Introspection in Java”, Journal of Object Technology, vol. 3, no. 11, pp. 117–132, December 2004. [ .pdf ]
In the last few years the interest in reflection has grown, and many modern programming languages/environments (e.g., Java and .NET) have provided the programmer with reflective mechanisms, i.e., with the ability of dynamically looking into (introspecting) the structure of the code from the code itself. In spite of its evident usefulness, reflection has many detractors, who claim that it is too inefficient to be used with real profit. In this work, we have investigated the performance issue in the context of the Java reflection library and present a different approach to introspection in Java that improves its performance. The basic idea of the proposed approach consists of moving most of the overhead due to dynamic introspection from run-time to compile-time. The efficiency improvement has been proved by providing a new reflection library, based on the proposed approach, that is compliant with the standard Java reflection library (that is, it provides exactly the same services). This paper focuses on speeding up the reification and invocation of methods, i.e., on the class SmartMethod that replaces the class Method of the standard reflection library.
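As context for the overhead being attacked, a minimal sketch of a standard reflective invocation in Java (the class and method names here are illustrative; SmartMethod itself is not shown):

```java
import java.lang.reflect.Method;

// Baseline use of the standard reflection API whose cost the paper
// targets: each reflective call pays for method lookup, access
// checks, and boxing of the arguments and of the return value.
public class ReflectDemo {
    public int twice(int x) { return 2 * x; }

    // Reflectively invoke twice(x): first look the Method up (the
    // kind of work the paper proposes moving from run-time to
    // compile-time), then invoke it with the argument boxed.
    static int reflectiveTwice(int x) throws Exception {
        Method m = ReflectDemo.class.getMethod("twice", int.class);
        return (Integer) m.invoke(new ReflectDemo(), x);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reflectiveTwice(21)); // prints 42
    }
}
```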

[29]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Software Evolution through Dynamic Adaptation of Its OO Design”, in Objects, Agents and Features: Structuring Mechanisms for Contemporary Software, Hans-Dieter Ehrich, John-Jules Meyer, and Mark D. Ryan, Eds., Lecture Notes in Computer Science 2975, pp. 69–84. Springer-Verlag, July 2004. [ .pdf ]
In this paper we present a proposal for safely evolving a software system against run-time changes. This proposal is based on a reflective architecture which provides objects with the ability of dynamically changing their behavior by using their design information. The meta-level system of the proposed architecture supervises the evolution of the software system to be adapted, which runs as the base-level system of the reflective architecture. The meta-level system is composed of cooperating components; these components carry out the evolution against sudden and unexpected environmental changes on a reification of the design information (e.g., object models, scenarios and statecharts) of the system to be adapted. The evolution takes place in two steps: first a meta-object, called the evolutionary meta-object, plans a possible evolution against the detected event; then another meta-object, called the consistency checker meta-object, validates the feasibility of the proposed plan before really evolving the system. Meta-objects use the system design information to govern the evolution of the base-level system. Moreover, we show our architecture at work on a case study.

[30]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “System Evolution through Design Information Evolution: a Case Study”, in Proceedings of the 13th International Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE 2004), Walter Dosch and Narayan Debnath, Eds., Nice, France, July 2004, pp. 145–150, ISCA. [ .pdf ]
This paper describes how design information, in our case specifications, can be used to evolve a software system and validate the consistency of such an evolution. This work complements our previous work on reflective architectures for software evolution, describing the role played by meta-data in the evolution of software systems. The whole paper focuses on a case study; we show how the urban traffic control system (UTCS), or part of it, must evolve when unscheduled road maintenance, a car crash or a traffic jam blocks normal vehicular flow in a specific road. The UTCS case study perfectly shows how requirements can dynamically change and how the design of the system should adapt to such changes. Both system consistency and adaptation are governed by rules based on meta-data representing the system design information. As we show by an example, such rules represent the core of our evolutionary approach, driving the evolutionary and consistency checker meta-objects and interfacing the meta-level system (the evolutionary system) with the system that has to be adapted.

[31]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, Eds., Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Research Report C-196 of the Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology. Preprint No. 10/2004 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, July 2004. [ .pdf ]
[32]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “RAMSES: a Reflective Middleware for Software Evolution”, in Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Oslo, Norway, June 2004, pp. 21–26. [ .pdf ]
Software systems today need to dynamically self-adapt in response to dynamic requirement changes. In this paper we describe a reflective middleware whose aim consists of consistently evolving software systems against run-time changes. This middleware provides the ability to change both the structure and the behavior of the base-level system at run-time by using its design information. The meta-level is composed of cooperating objects and has been specified by using a design pattern language. The base objects are controlled by meta-objects that drive their evolution. The essence of RAMSES is the ability of extracting the design data from the base application, and of constraining the dynamic evolution to stable and consistent systems.

[33]
Walter Cazzola, “SmartMethod: an Efficient Replacement for Method”, in Proceedings of the 9th Annual ACM Symposium on Applied Computing (SAC'04), Nicosia, Cyprus, March 2004, pp. 1305–1309, ACM Press. [ http ]
In the last few years the interest in reflection has grown, and many modern programming languages/architectures have provided the programmer with reflective mechanisms. Like any other novelty, reflection has detractors, who rightly or wrongly accuse it of being too inefficient to be used with real profit. In this work, we have investigated the performance of the Java reflection library (especially of the class Method and of its method invoke) and realized a mechanism which improves its performance. Our mechanism consists of a class, named SmartMethod, and of a parser that helps transform reflective invocations into direct calls carried out by the standard invocation mechanism of Java. The SmartMethod class is compliant with the class Method of the standard Java core reflection library (that is, it provides exactly the same services) but offers a more efficient reflective method invocation.

[34]
Massimo Ancona and Walter Cazzola, “Implementing the Essence of Reflection: a Reflective Run-Time Environment”, in Proceedings of the 19th Annual ACM Symposium on Applied Computing (SAC'04), Nicosia, Cyprus, March 2004, pp. 1503–1507, ACM Press. [ http ]
Computational reflection provides developers with a programming mechanism that favors code extensibility, reuse and maintenance. Notwithstanding this, it has not yet achieved developers' unanimous acceptance, nor realized its full potential. In our opinion, this depends on the intrinsic complexity of most reflective approaches, which hinders their efficient implementation. The aim of this paper is to define the essence of reflection, that is, to identify the minimal set of characteristics that a software system must have to be considered reflective. The consequence is the realization of a run-time environment supporting the essence of reflection without affecting the programming language and with a minimal impact on the design of the programming system. This achievement will improve the performance of reflective systems, reducing the impact of one of the most widespread criticisms of reflection: low performance.

[35]
Walter Cazzola and Dario Maggiorini, “Seamless Nomadic System-Aware Servants”, in Proceedings of the 37th Hawai'i International Conference on System Sciences (HICSS'04), Ralph H. Sprague, Jr, Ed., Big Island, Hawaii, January 2004, IEEE Computer Society Press. [ .pdf ]
The growing diffusion of wireless technologies is leading to the deployment of small-scale and location-dependent information services (LDISs). These new services call for provisioning schemes that are able to operate in a distributed environment and do not require network infrastructure. This paper describes an approach to a service-oriented middleware which enables a mobile device to be aware of the surrounding environment and to transparently exploit every LDIS discovered in the coverage area of the hosting wireless network. The paper introduces seamless nomadic system-aware (SNA) servants. SNA servants run on mobile devices, discover LDISs and are not associated with any specific service. The paper also describes the key features for implementing SNA servants and for rendering them interoperable and cross-platform on, at least, the .NET and JVM frameworks.

[36]
Walter Cazzola, “Remote Method Invocation as a First-Class Citizen”, Distributed Computing, vol. 16, no. 4, pp. 287–306, December 2003. [ DOI | .pdf ]
The classical remote method invocation (RMI) mechanism adopted by several object-based middleware is `black box' in nature, and the RMI functionality, i.e., the RMI interaction policy and its configuration, is hard-coded into the application. This RMI nature hinders software development and reuse, forcing the programmer to focus on communication details often marginal to the application he is developing. Extending the RMI behavior with extra functionality is also a very difficult job, because added code must be scattered among the entities involved in communications.

This situation could be improved by developing the system in several separate layers, confining communications and related matters to specific layers. As demonstrated by recent work on reflective middleware, reflection represents a powerful tool for realizing such a separation and therefore overcoming the problems referred to above. Such an approach improves the separation of concerns between the communication-related algorithms and the functional aspects of an application. However, communications and all related concerns are not managed as a single unit separate from the rest of the application, which makes their reuse, extension and management difficult. As a consequence, communications concerns continue to be scattered across the meta-program, communication mechanisms continue to be black-box in nature, and there is only limited opportunity to adjust communication policies through configuration interfaces.

In this paper we examine the issues raised above, and propose a reflective approach especially designed to open up the Java RMI mechanism. Our proposal consists of a new reflective model, called multi-channel reification, that reflects on and reifies communication channels, i.e., it renders communication channels first-class citizens. This model is designed both for developing new communication mechanisms and for extending the behavior of communication mechanisms provided by the underlying system. Our approach is embodied in a framework called mChaRM which is described in detail in this paper.

[37]
Dario Maggiorini, Walter Cazzola, B.S. Prabhu, and Rajit Gadh, “A Service-Oriented Middleware for Seamless Nomadic System-Aware (SNA) Servants”, White paper, WINMEC: Wireless INternet for the Mobile Enterprise Consortium, March 2003. [ .pdf ]
In the last few years there has been a considerable penetration of wireless technology in everyday life. This penetration has also increased the availability of Location-Dependent Information Services (LDIS), such as local information access (e.g. traffic reports, news, etc.), nearest-neighbor queries (such as finding the nearest restaurant, gas station, medical facility, ATM, etc.) and others.

New wireless environments and paradigms are continuously evolving and novel LDISs are continuously being deployed. Such growth entails the need to deal with:

- services without standard interfaces: the same or similar LDISs offered by different vendors through different APIs but with the same standard functional interfaces;
- services deployed dynamically: LDISs made available on a need basis or when the scenario dynamically mutates, additionally providing dynamic roaming between services and dynamic service interchangeability; and
- non-classified services (i.e., novel services).

[38]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Reflective Analysis and Design for Adapting Object Run-time Behavior”, in Proceedings of the 8th International Conference on Object-Oriented Information Systems (OOIS'02), Zohra Bellahsène, Dilip Patel, and Colette Rolland, Eds., Montpellier, France, September 2002, Lecture Notes in Computer Science 2425, pp. 242–254, Springer-Verlag. [ .pdf ]
Today, complex information systems need a simple way of changing an object's behavior according to changes that occur in its running environment. We present a reflective architecture which provides the ability to change object behavior at run-time by using design-time information. By integrating reflection with design patterns we get a flexible and easily adaptable architecture. A reflective approach that describes the object model, scenarios and statecharts helps to dynamically adapt the software system to environmental changes. The object model, system scenarios and much other design information are reified by special meta-objects, named evolutionary meta-objects. Evolutionary meta-objects deal with two types of run-time evolution. Structural evolution is carried out through the causal connection between evolutionary meta-objects and their referents, changing the structure of these referents by adding or removing objects or relations. Behavioral evolution allows the system to dynamically adapt its behavior to environment changes by itself. Evolutionary meta-objects react to environment changes by adapting the information they have reified and steering the system's evolution. They provide a natural liaison between design information and the system based on such information. This paper describes how this liaison can be built and how it can be used for adapting a running system to environment changes.

[39]
Walter Cazzola, James O. Coplien, Ahmed Ghoneim, and Gunter Saake, “Framework Patterns for the Evolution of Nonstoppable Software Systems”, in Proceedings of the 1st Nordic Conference on Pattern Languages of Programs (VikingPLoP'02), Pavel Hruby and Kristian Elof Sørensen, Eds., Højstrupgård, Helsingør, Denmark, September 2002, pp. 35–54, Microsoft Business Solutions. [ .pdf ]
The fragment of pattern language proposed in this paper shows how to adapt a nonstoppable software system to reflect changes in its running environment. These framework patterns depend on well-known techniques for programs to dynamically analyze and modify their own structure, commonly called computational reflection. Our patterns go together with common reflective software architectures.

[40]
Walter Cazzola, “mChaRM: Reflective Middleware with a Global View of Communications”, IEEE Distributed System On-Line, vol. 3, no. 2, February 2002. [ http ]
The main objective of remote-method-invocation- and object-based middleware is to provide a convenient environment for the realization of distributed computations. In most cases, unfortunately, interaction policies in these middleware platforms are hardwired into the platform itself. Some platforms, e.g., CORBA's interceptors, offer means to redefine such details but their flexibility is limited to the possibilities that the designer has foreseen.

In this way, distributed algorithms must be embedded exclusively in the application code, breaking any separation of concerns between functional and nonfunctional code. Some programming languages, like Java, disguise remote interactions as local calls, thus rendering their presence transparent to the programmer. However, their management is not so transparent and not so easily masked from the programmer.

We can summarize these kinds of problems with current middleware platforms as follows:

1. interaction policies are hidden from the programmer who cannot customize them (lack of adaptability);

2. communication, synchronization, and tuning code is intertwined with application code (lack of separation of concerns);

3. algorithms are scattered among several objects, thus forcing the programmer to explicitly coordinate their work (lack of global view).

[41]
Walter Cazzola, Communication-Oriented Reflection: a Way to Open Up the RMI Mechanism, PhD thesis, Università degli Studi di Milano, Milano, Italy, February 2001. [ .pdf ]
The Problem
From our experience, RMI-based frameworks, and in general all frameworks supplying distributed computation, seem to suffer from some problems. We detected at least three, related to their flexibility and applicability.

Most of them lack flexibility. Their main duty consists in providing a friendly environment suitable for simply realizing distributed computations. Unfortunately, interaction policies are hardwired into the framework. Unless otherwise foreseen, it is a hard job to change, for example, how messages are marshaled/unmarshaled, or the dispatching algorithm the framework adopts. Some frameworks provide limited mechanisms to redefine such details, but their flexibility is limited to the possibilities that the designer has foreseen.

Distributed algorithms are embedded in the application code, breaking the well-known software engineering principle of separation of concerns. Some programming languages, like Java, mask remote interactions (i.e., remote method or procedure calls) as local calls, rendering their presence transparent to the programmer. However, their management (i.e., tuning the environment needed to correctly carry out remote computations, and synchronizing the objects involved) is not so transparent and not so easily masked from the programmer. Such behavior hinders the reuse of distributed algorithms.

Object-oriented distributed programming is not distributed object-oriented programming. It is a hard job to write object-oriented distributed applications based on information managed by several separate entities. Algorithms originally designed as a whole have to be scattered among several entities, none of which directly knows the whole algorithm. This increases the complexity of the code the programmer has to write, because (s)he has to extend the original algorithm with statements for synchronizing, and putting in touch, all the remote objects involved in the computation. Moreover, scattering the algorithm among several objects contrasts with the object-oriented philosophy, which states that data and the algorithms managing them are encapsulated in the same entity; since no object can have a global view of the data it manages, we can say that this approach lacks a global view. This lack of global view forces the programmer to tightly couple two or more distributed objects.

A reflective approach, as stated in [Briot98], can be considered as the glue sticking together distributed and object-oriented programming and filling the gaps in their integration. Reflection improves flexibility, allows developers to provide their own solutions to communication problems, and keeps communication code separate from the application code and completely encapsulated in the meta-level.

Hence reflection could help to solve most of the problems we detected. Reflection permits exposing the implementation details of a system, i.e., in our case, the interaction policies, and permits easily manipulating them. A reflective approach also permits easily separating the interaction management from the application code. Using reflection and some syntactic sugar to mask remote calls, we can achieve a good separation of concerns in distributed environments as well. Based on such considerations, many reflective distributed middleware platforms have been developed. Their main goal consists both in overcoming the lack of flexibility and in decoupling the interaction code from the application code.

However, reflective distributed middleware exhibits the same problems detected in distributed middleware. It still fails by considering each remote invocation in terms of the entities involved in the communication (i.e., the client, the server, the message, and so on) and not as a single entity. Hence the global view requirement is not achieved. This is because most of the meta-models presented so far and used to design the existing reflective middleware are object-based models. In these models, every object is associated with a meta-object, which traps the messages sent to the object and implements the behavior of that invocation. Such meta-models inherit the lack of global view from the object-oriented methodology, which encapsulates computation orthogonally to communication.

Hence, these approaches are not appropriate for handling all the aspects of distributed computing. In particular, when adopting an object-based model to monitor distributed communications, the meta-programmer often has to duplicate the base-level communication graph at the meta-level, increasing the meta-program's complexity. Thus, object-based approaches to reflection on communications move the well-known problem [Videira-Lopez95] of nonfunctional code intertwined with functional code from the base level to the meta-level. Simulating a base-level communication at the meta-level allows meta-computations related either to the sending or to the receiving action, but not, without dirty tricks, meta-computations related to the whole communication or involving information owned both by the sender and by the receiver. This problem goes under the name of lack of global view.

Besides, object-based reflective approaches, and the reflective middleware based on them, allow only global changes to the mechanisms responsible for message dispatching, neglecting the management of each single message. Hence they fail to differentiate the meta-behavior related to each single exchanged message. In order to apply a different meta-behavior to each message, or to each group of messages, the meta-programmer has to write the meta-program planning a specific meta-behavior for each kind of incoming message. Unfortunately, in this way the size of the meta-program grows to the detriment of its readability and maintainability.

Due to such considerations, a crucial issue in opening up an RMI-based framework consists in choosing a good meta-model, one which permits working around the lack of global view and differentiating the meta-behavior for each exchanged message.

Our Solution
From the problem analysis briefly presented above, we learned that in order to solve the drawbacks of RMI-based frameworks we have to provide an open RMI mechanism, i.e., a reflective RMI mechanism, which exposes its details for manipulation by the meta-program and allows the meta-program to manage each communication separately and as a single entity. The main goal of this work consists in designing such a mechanism using a reflective approach.

To render the impact of reflection on object-oriented distributed frameworks effective, and to obtain a complete separation of concerns, we need new models and frameworks especially designed for communication-oriented reflection, i.e., a reflective approach suitable for RMI-based communication which allows the meta-programmer to enrich, manipulate and replace each remote method invocation and its semantics. That is, we need to encapsulate message exchange in a single logical meta-object, instead of scattering the relevant information among several meta-objects and mimicking the real communication with one defined by the meta-programmer among such meta-objects, as is done in traditional approaches.

To fulfill this commitment we designed a new model, called the multi-channel reification model. The multi-channel reification model is based on the idea of considering a method call as a message sent through a logical channel established between a set of objects requiring a service and a set of objects providing that service. This logical channel is reified into a logical object called a multi-channel, which monitors message exchange and enriches the underlying communication semantics with new features for the performed communication. Each multi-channel can be viewed as an interface established between the senders and the receivers of the messages. Each multi-channel is characterized by its behavior, termed its kind, and by the receivers it is connected to.

multi-channel ≡ (kind, receiver₁, ..., receiverₙ)

Thanks to this characterization of multi-channels, it is possible to connect several multi-channels to the same group of objects. In such a case, each multi-channel will be characterized by a different kind and will filter a different pattern of messages.
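The characterization above can be sketched as a tiny data type. This is our own minimal illustration of the (kind, receivers) tuple and of kind-based message filtering; the names MultiChannel and accepts are hypothetical and not part of the mChaRM framework's actual API.

```java
import java.util.List;

// Illustrative sketch of the characterization
// multi-channel ≡ (kind, receiver_1, ..., receiver_n).
// Type and method names are ours, not the mChaRM API.
public class MultiChannelDemo {
    record MultiChannel(String kind, List<String> receivers) {
        // Each multi-channel filters only the pattern of messages
        // matching its kind.
        boolean accepts(String messageKind) {
            return kind.equals(messageKind);
        }
    }

    public static void main(String[] args) {
        List<String> group = List.of("serverA", "serverB");
        // Two multi-channels connected to the same group of receivers,
        // differentiated only by their kind.
        MultiChannel verbose = new MultiChannel("logging", group);
        MultiChannel secure  = new MultiChannel("encryption", group);

        System.out.println(verbose.accepts("logging")); // prints "true"
        System.out.println(secure.accepts("logging"));  // prints "false"
    }
}
```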

This model permits the design of an open RMI-based mechanism which potentially overcomes the problems exposed above.

In this way, each communication channel is reified into a meta-entity. Such a meta-entity has complete access to all the details of the communications it filters, i.e., the policies related to both the sender and the receiver sides and, of course, the messages themselves. A channel realizes a closed meta-system with respect to the communications. It encapsulates all base-level aspects related to the communication, providing the global view feature.

Of course, this model keeps all the properties covered by the other reflective models, such as transparency and separation of concerns. Hence the approach also guarantees to work around the problems already solved using reflection. Protocols and other implementation details are exposed to the meta-programmer's manipulation, and the management of remote method invocations is completely separated from the application code.

Moreover, through the kind mechanism we can differentiate the behavior applied to a specified pattern of messages. Thus a set of multi-channels (each with a different kind) can be associated with the same communication channel. Each channel will operate on a different set of messages. In this way, each channel's code is related to a unique behavior that it indiscriminately applies to all the messages it filters.

mChaRM is a framework developed by the authors which opens up the RMI mechanism supplied by Java. This framework supplies a development and run-time environment based on the multi-channel reification model. Multi-channels are developed in Java, and the underlying mChaRM framework dynamically realizes the context switching and the causal connection link. A beta version of mChaRM, documentation and examples are available from:

http://cazzola.di.unimi.it/mChaRM_webpage.html

Such a system provides an RMI-based programming environment. The supplied RMI mechanism is multi-cast (i.e., it supports remotely invoking a method on several servers), open (the RMI mechanism is fully customizable through reflection), and globally aware of its aspects. Some example applications are also provided.

[42]
Walter Cazzola, “Communication Oriented Reflection”, in ECOOP'00 Workshop Reader, Jacques Malenfant, Sabine Moisan, and Ana Moreira, Eds., Lecture Notes in Computer Science 1964, pp. 287–288. Springer-Verlag, December 2000. [ www: ]

[43]
Walter Cazzola, Shigeru Chiba, and Thomas Ledoux, “Reflection and Meta-Level Architectures: State of the Art, and Future Trends”, in ECOOP'00 Workshop Reader, Jacques Malenfant, Sabine Moisan, and Ana Moreira, Eds., Lecture Notes in Computer Science 1964, pp. 1–15. Springer-Verlag, December 2000. [ .pdf ]
[44]
Francesco Tisato, Andrea Savigni, Walter Cazzola, and Andrea Sosio, “Architectural Reflection: Realising Software Architectures via Reflective Activities”, in Proceedings of the 2nd International Workshop on Engineering Distributed Objects (EDO 2000), Wolfgang Emmerich and Stefan Tai, Eds., University of California, Davis, USA, November 2000, Lecture Notes in Computer Science 1999, pp. 102–115, Springer-Verlag. [ .pdf ]
Architectural reflection is the computation performed by a software system about its own software architecture. Building on previous research and on practical experience in industrial projects, in this paper we expand the approach and show a practical (albeit very simple) example of application of architectural reflection. The example shows how one can express, thanks to reflection, both functional and non-functional requirements in terms of object-oriented concepts, and how a clean separation of concerns between application domain level and architectural level activities can be enforced.

[45]
Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Eds., Reflection and Software Engineering, vol. 1826 of Lecture Notes in Computer Science, Springer-Verlag, Heidelberg, Germany, June 2000. [ http ]
[46]
Walter Cazzola, Andrea Sosio, and Francesco Tisato, “Shifting Up Reflection from the Implementation to the Analysis Level”, in Reflection and Software Engineering, Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Eds., Lecture Notes in Computer Science 1826, pp. 1–20. Springer-Verlag, Heidelberg, Germany, June 2000. [ .pdf ]
Traditional methods for object-oriented analysis and modeling focus on the functional specification of software systems, i.e., application domain modeling. Non-functional requirements such as fault-tolerance, distribution, integration with legacy systems, and so on, have no clear collocation within the analysis process, since they are related to the architecture and workings of the system itself rather than the application domain. They are thus addressed in the system's design, based on the partitioning of the system's functionality into classes resulting from analysis. As a consequence, the smooth transition from analysis to design that is usually celebrated as one of the main advantages of the object-oriented paradigm does not actually hold for what concerns non-functional issues. A side effect is that functional and non-functional concerns tend to be mixed at the implementation level. We argue that the reflective approach whereby non-functional properties are ascribed to a meta-level of the software system may be extended “back to” analysis. Adopting a reflective approach in object-oriented analysis may support the precise specification of non-functional requirements in analysis and, if used in conjunction with a reflective approach to design, recover the smooth transition from analysis to design in the case of non-functional system's properties.

[47]
Walter Cazzola and Massimo Ancona, “mChaRM: a Reflective Middleware for Communication-Based Reflection”, Technical Report DISI-TR-00-09, DISI, Università degli Studi di Genova, May 2000. [ www: ]

[48]
Walter Cazzola, Andrea Sosio, and Francesco Tisato, “Reflection and Object-Oriented Analysis”, in Proceedings of the 1st Workshop on Object-Oriented Reflection and Software Engineering (OORaSE'99), Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Eds. November 1999, pp. 95–106, University of Milano Bicocca. [ .pdf ]
Traditional methods for object-oriented analysis and modeling focus on the functional specification of software systems. Non-functional requirements such as fault-tolerance, distribution, integration with legacy systems, and the like, do not have a clear collocation within the analysis process, as they are related to the architecture and workings of the system itself rather than the application domain. They are thus addressed in the system's design, based on the partitioning of the system's functionality into classes as resulting from the analysis. As a consequence of this, the “smooth transition from analysis to design” that is usually celebrated as one of the main advantages of the object-oriented paradigm does not actually hold for what concerns non-functional issues. Moreover, functional and non-functional concerns tend to be mixed at the implementation level. We argue that the reflective design approach whereby non-functional properties are ascribed to a meta-level of the software system may be extended “back to” analysis. Reflective Object Oriented Analysis may support the precise specification of non-functional requirements in analysis and, if used in conjunction with a reflective approach to design, recover the smooth transition from analysis to design in the case of non-functional system's properties.

[49]
Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Eds., Proceedings of the 1st Workshop on Object-Oriented Reflection and Software Engineering (OORaSE'99), University of Milano Bicocca, Denver, Colorado, USA, November 1999.
[50]
Walter Cazzola, Andrea Savigni, Andrea Sosio, and Francesco Tisato, “Rule-Based Strategic Reflection: Observing and Modifying Behaviour at the Architectural Level”, in Proceedings of 14th IEEE International Conference on Automated Software Engineering (ASE'99), Cocoa Beach, Florida, USA, October 1999, pp. 263–266. [ .pdf ]
As software systems become larger and more complex, a relevant part of code shifts from the application domain to the management of the system's run-time architecture (e.g., substituting components and connectors for run-time automated tuning). We propose a novel design approach for component-based systems supporting architectural management in a systematic and conceptually clean way and allowing for the transparent addition of architectural management functionality to existing systems. The approach builds on the concept of reflection, extending it to the programming-in-the-large level, thus yielding architectural reflection (AR). This paper focuses on one aspect of AR, namely the monitoring and dynamic modification of the system's overall control structure (strategic reflection), which allows the behaviour of a system to be monitored and adjusted without modifying the system itself.

[51]
Massimo Ancona, Walter Cazzola, and Eduardo B. Fernandez, “Reflective Authorization Systems: Possibilities, Benefits and Drawbacks”, in Secure Internet Programming: Security Issues for Mobile and Distributed Objects, Jan Vitek and Christian Jensen, Eds., Lecture Notes in Computer Science 1603, pp. 35–49. Springer-Verlag, July 1999. [ .pdf ]
We analyze how to use the reflective approach to integrate an authorization system into a distributed object-oriented framework. The expected benefits from the reflective approach are: more stability of the security layer (i.e., with a more limited number of hidden bugs), better software and development modularity, more reusability, and the possibility to adapt the security module with at most a few changes to other applications. Our analysis is supported by simple and illustrative examples written in Java.

[52]
Massimo Ancona, Walter Cazzola, and Eduardo B. Fernandez, “A History-Dependent Access Control Mechanism Using Reflection”, in Proceedings of 5th ECOOP Workshop on Mobile Object Systems (EWMOS'99), Peter Sewell and Jan Vitek, Eds., Lisbon, Portugal, June 1999. [ .pdf ]
We propose here a mechanism for history-dependent access control for a distributed object-oriented system, implemented using reflection. In a history-dependent access control system, access is decided based not only on the current request, but also on the previous history of accesses to some entity or service. We consider timing constraints expressed using temporal logic, and we describe a possible implementation for our mechanism. The expected benefits from the reflective approach are: more stability of the security layer (i.e., with a more limited number of hidden bugs), better software modularity, more reusability, and the possibility to adapt the security module with relatively few changes to other applications and other authorisation policies.
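The idea of deciding access based on past accesses rather than on the current request alone can be sketched as follows. This is our own minimal illustration under a simple quota constraint, not the paper's mechanism, which is reflective and expresses timing constraints in temporal logic; all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of history-dependent access control: a subject is
// denied once it exceeds a quota of past accesses. Our own sketch, not the
// paper's reflective, temporal-logic-based mechanism.
public class HistoryAccessControl {
    private final Map<String, Integer> history = new HashMap<>();
    private final int maxAccesses;

    public HistoryAccessControl(int maxAccesses) {
        this.maxAccesses = maxAccesses;
    }

    // The decision depends on the recorded history of accesses,
    // not only on the current request.
    public boolean request(String subject) {
        int past = history.getOrDefault(subject, 0);
        if (past >= maxAccesses) {
            return false; // history forbids further access
        }
        history.put(subject, past + 1);
        return true;
    }

    public static void main(String[] args) {
        HistoryAccessControl acl = new HistoryAccessControl(2);
        System.out.println(acl.request("alice")); // prints "true"
        System.out.println(acl.request("alice")); // prints "true"
        System.out.println(acl.request("alice")); // prints "false": quota used up
    }
}
```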

[53]
Walter Cazzola, Andrea Savigni, Andrea Sosio, and Francesco Tisato, “Architectural Reflection: Concepts, Design, and Evaluation”, Technical Report RI-DSI 234-99, DSI, Università degli Studi di Milano, May 1999. [ .pdf ]
This paper proposes a novel reflective approach, orthogonal to the classic computational one, whereby a system performs computations on its software architecture instead of on individual components. The approach allows system self-management activities, such as dynamic reconfiguration, to be realized in a systematic and conceptually clean way and to be added to existing systems without modifying the systems themselves. The parallelism between such architectural reflection and classic reflection is discussed, as well as the transposition of classic reflective concepts into the architectural domain.

[54]
Walter Cazzola, Andrea Savigni, Andrea Sosio, and Francesco Tisato, “A Fresh Look at Programming-in-the-Large”, in Proceedings of 22nd Annual International Computer Software and Application Conference (COMPSAC'98), Wien, Austria, August 1998, IEEE, pp. 502–506. [ .pdf ]
Realizing a shift of software engineering towards a component-based approach to software development requires higher-level programming systems supporting the construction of systems from components. The paper presents a novel approach to the design of large software systems in which a program-in-the-large describing the system's architecture is executed at run time to rule over the assembly and dynamic cooperation of components. This approach has several advantages following from a clean separation of concerns between programming-in-the-small and programming-in-the-large issues in instantiated systems.

[55]
Massimo Ancona, Walter Cazzola, and Eduardo B. Fernandez, “Reflective Authorization Systems”, in Proceedings of ECOOP Workshop on Distributed Object Security (EWDOS'98), Brussels, Belgium, July 1998, in 12th European Conference on Object-Oriented Programming (ECOOP'98), pp. 35–39, Unité de Recherche INRIA Rhone-Alpes. [ .pdf ]
A reflective approach to modeling and implementing authorization systems is presented. The advantages of the combined use of computational reflection and authorization mechanisms are discussed, and three reflective architectures are examined to point out their respective merits and defects.

[56]
Walter Cazzola, “Evaluation of Object-Oriented Reflective Models”, in Proceedings of the ECOOP Workshop on Reflective Object-Oriented Programming and Systems (EWROOPS'98), Brussels, Belgium, July 1998, in 12th European Conference on Object-Oriented Programming (ECOOP'98). Extended abstract also published in the ECOOP'98 Workshop Reader, S. Demeyer and J. Bosch, Eds., LNCS 1543, ISBN 3-540-65460-7, pp. 386–387. [ .pdf ]
In this paper we explore the object-oriented reflective world, surveying the existing models and presenting a set of features suitable for evaluating the quality of each reflective model. The purpose of the paper is to determine the applicability context of each examined reflective model.

[57]
Walter Cazzola, Andrea Savigni, Andrea Sosio, and Francesco Tisato, “Architectural Reflection: Bridging the Gap Between a Running System and its Architectural Specification”, in Proceedings of the 6th Reengineering Forum (REF'98), Florence, Italy, March 1998, IEEE, pp. 12–1–12–6. [ .pdf ]
As the size and complexity of software systems increase, a relevant part of the system's overall functionality shifts from the application domain to run-time system management activities, i.e., management activities that cannot be performed off-line. These range from monitoring to dynamic reconfiguration and, for non-stopping systems, also include evolution, i.e., the addition or replacement of components or entire subsystems. In current practice, run-time system management is impeded by the fact that the knowledge of the overall structure and functioning of the system (i.e., its software architecture) is confined to design specification documents, while it is only implicit in running systems. In this paper we introduce, provide rationale for, and briefly demonstrate an approach to system management where the system maintains, and operates on, an architectural description of itself. This description is causally connected to the system's concrete structure and state, i.e., any change of the system architecture affects the description, and vice versa. This model can be said to extend the principles of computational reflection from the realm of programming-in-the-small to that of programming-in-the-large.

[58]
Massimo Ancona, Walter Cazzola, Gabriella Dodero, and Vittoria Gianuzzi, “Channel Reification: A Reflective Model for Distributed Computation”, in Proceedings of IEEE International Performance Computing, and Communication Conference (IPCCC'98), Roy Jenevein and Mohammad S. Obaidat, Eds., Phoenix, Arizona, USA, February 1998, IEEE, pp. 32–36. [ .pdf ]
The paper presents a new reflective model, called Channel Reification, which can be used in distributed computations to overcome difficulties experienced by other models in the literature when monitoring communication among objects.

The channel is an extension of the message reification model. A channel is a communication manager embodying the successive message exchanges between two objects; its application range lies between that of message reification and that of the meta-object model.

After a brief review of existing reflective models and of how reflection can be used in distributed systems, channel reification is presented and compared to the widely used meta-object model. Applications of channel reification to protocol implementation and to fault-tolerant object systems are shown. Future extensions to this model are also summarized.

[59]
Massimo Ancona, Walter Cazzola, Gabriella Dodero, and Vittoria Gianuzzi, “Communication Modeling by Channel Reification”, in Proceedings of the Workshop “Advances in Languages for User Modeling”, Chia Laguna, Sardinia, Italy, June 1997, pp. 1–9. [ .pdf ]
The paper presents a new reflective model, called Channel Reification, which can be used to implement communication abstractions. After a brief review of existing reflective models and of how reflection can be used in distributed systems, channel reification is presented and compared to the widely used meta-object model. An application to protocol implementation, and hints on other channel applications, are also given.

[1]
Walter Cazzola and Alessandro Marchetto, “A Concern-Oriented Framework for Dynamic Measurements”, Information and Software Technology, vol. 57, pp. 32–51, January 2015. [ DOI | .pdf ]
Evolving software programs requires that software developers reason quantitatively about the modularity impact of several concerns, which are often scattered over the system. In this respect, concern-oriented software analysis is rising to a dominant position in software development. Hence, measurement techniques play a fundamental role in assessing the concern modularity of a software system. Unfortunately, existing measurements are still fundamentally module-oriented rather than concern-oriented. Moreover, the few available concern-oriented metrics are defined in a non-systematic, non-shared way and mainly focus on static properties of a concern, even though many properties can only be accurately quantified at run time. Hence, novel concern-oriented measurements and, in particular, shared and systematic ways to define them are still welcome. This paper lays the basis for a unified framework for concern-driven measurement. The framework provides a basic terminology and criteria for defining novel concern metrics. To evaluate the framework's feasibility and effectiveness, we show how it can be used to adapt some classic metrics to quantify concerns and, in particular, to instantiate new dynamic concern metrics from their static counterparts.

[2]
Walter Cazzola and Edoardo Vacchi, “@Java: Bringing a Richer Annotation Model to Java”, Computer Languages, Systems & Structures, vol. 40, no. 1, pp. 2–18, April 2014. [ DOI | www: ]
The ability to annotate code and, in general, the capability to attach arbitrary meta-data to portions of a program are features that have become more and more common in programming languages.

Annotations in Java make it possible to attach custom, structured meta-data to declarations of classes, fields and methods. However, the mechanism has some limits: annotations can only decorate declarations and their instantiation can only be resolved statically.

With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations, as well as dynamically evaluated members. In other words, in our model, the granularity of annotations extends to the statement and expression level and annotations may hold the result of runtime-evaluated expressions.

Our extension to the Java annotation model is twofold: (i) we introduce block and expression annotations and (ii) we allow every annotation to hold dynamically evaluated values. Our implementation also provides an extended reflection API to support inspection and retrieval of our enhanced annotations.
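As a point of reference for the model described above, the following sketch shows the standard Java baseline that @Java extends: annotations attach only to declarations and their members must be compile-time constants. The `Traced` annotation is a made-up example, and the commented block-annotation syntax is only an illustration of the idea, not the paper's actual notation.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class AnnotationDemo {
    // Standard Java: runtime-retained annotations decorate declarations only,
    // and member values (like tag) must be resolvable at compile time.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Traced { String tag(); }

    @Traced(tag = "demo")
    static void work() { }

    public static void main(String[] args) throws Exception {
        // Annotations are retrieved through the reflection API.
        Method m = AnnotationDemo.class.getDeclaredMethod("work");
        Traced t = m.getAnnotation(Traced.class);
        System.out.println(t.tag()); // prints "demo"
        // In the richer model of the paper, one could additionally annotate a
        // statement block or an expression, e.g. (hypothetical syntax):
        //   @Traced(tag = computeTag()) { ... statements ... }
        // with the member value evaluated at run time.
    }
}
```

The sketch only delimits what plain Java can do; the paper's contribution is precisely the part that appears here as comments.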

[3]
Walter Cazzola and Edoardo Vacchi, “Fine-Grained Annotations for Pointcuts with a Finer Granularity”, in Proceedings of the 28th Annual ACM Symposium on Applied Computing (SAC'13), Coimbra, Portugal, March 2013, pp. 1709–1714, ACM Press. [ http ]
A number of authors have suggested that AspectJ-like pointcut languages are too limited and cannot select every possible join point in a program. Many enhanced pointcut languages have been proposed; they require virtually no change to the original code, but their improved expressive power often comes at the cost of making the pointcut expression too tightly connected with the structure of the programs being advised. Other solutions consist in simple extensions to the base language; they require only small changes to the original code, but they frequently serve no other immediate purpose than exposing pieces of code to the weaver. Annotations are a form of metadata introduced in Java 5. Annotations have a number of uses: they may provide hints to the compiler, supply information to code-processing tools, and can be retained at run time. At the time of writing, runtime-accessible annotations in the Java programming language can only be applied to classes, fields and methods. Support for annotating expressions and blocks is a natural extension of Java's annotation model, one that can also be exploited to expose join points at a finer granularity. In this paper we present an extension to the @AspectJ language to select block and expression annotations in the @Java language extension.

[4]
Walter Cazzola and Edoardo Vacchi, “DEXTER and Neverlang: A Union Towards Dynamicity”, in Proceedings of the 7th Workshop on the Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS'12), Eric Jul, Ian Rogers, and Olivier Zendra, Eds., Beijing, China, June 2012, ACM.

[5]
Walter Cazzola, “Domain-Specific Languages in Few Steps: The Neverlang Approach”, in Proceedings of the 11th International Conference on Software Composition (SC'12), Thomas Gschwind, Flavio De Paoli, Volker Gruhn, and Matthias Book, Eds., Prague, Czech Republic, May-June 2012, Lecture Notes in Computer Science 7306, pp. 162–177, Springer. [ .pdf ]
Often an ad hoc programming language integrating features from different programming languages and paradigms represents the best choice for expressing a concise and clean solution to a problem. However, developing a programming language is not an easy task, and this often discourages developers from creating their own problem-oriented or domain-specific languages. To foster DSL development and to favor clean, concise, problem-oriented solutions, we developed Neverlang.

The Neverlang framework provides a mechanism to build custom programming languages from features coming from different languages. The composability and flexibility provided by Neverlang make it possible to develop a new programming language by simply composing features from previously developed languages and reusing the corresponding support code (parsers, code generators, ...).

In this work, we explore the Neverlang framework and try out its benefits in a case study that merges functional programming à la Python with coordination for distributed programming as in Linda.

[6]
Jeff Gray, Dominik Stein, Jörg Kienzle, and Walter Cazzola, “Report of the 15th International Workshop on Aspect-Oriented Modeling”, in MoDELS 2010 Workshops, Oslo, Norway, February 2011, Lecture Notes in Computer Science 6627, pp. 105–109, Springer. [ www: ]
[7]
Walter Cazzola and Davide Poletti, “DSL Evolution through Composition”, in Proceedings of the 7th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'10), Maribor, Slovenia, June 2010, ACM. [ http ]
The use of domain-specific languages (DSLs) instead of general-purpose languages introduces a number of advantages in software development, even if it can be problematic to keep the DSL consistent with the evolution of the domain. Traditionally, developing a compiler/interpreter from scratch, but also modifying an existing compiler to support a novel DSL, is a long and difficult task. We have developed Neverlang to simplify and speed up the development and maintenance of DSLs. The framework presented in this article not only allows developing the syntax and the semantics of a new language from scratch, but is particularly focused on the reusability of the language definition. The interpreters/compilers produced with the framework are modular, and it is easy to add, remove or modify their sections. This makes it possible to modify the DSL definition in order to follow the evolution of the underlying domain. In this work, we explore the Neverlang framework and try out the adaptability of its language definitions.

[8]
Jörg Kienzle, Jeff Gray, Dominik Stein, Thomas Cottenier, Walter Cazzola, and Omar Aldawud, “Report of the 14th International Workshop on Aspect-Oriented Modeling”, in MODELS 2009 Workshops, Sudipto Ghosh, Ed., Denver, Colorado, USA, February 2010, vol. Lecture Notes in Computer Science 6002, pp. 98–103, Springer. [ .pdf ]
[9]
Walter Cazzola, Diego Colombo, and Duncan Harrison, “Aspect-Oriented Procedural Content Engineering for Game Design”, in Proceedings of the 14th Annual ACM Symposium on Applied Computing (SAC'09), Honolulu, Hawai'i, USA, March 2009, ACM, pp. 1957–1962. [ http ]
Generally, progressive procedural content in the context of 3D scene rendering is expressed as recursive functions in which a finer level of detail is computed on demand. Typical examples of procedurally generated content are fractal images and noise textures. Unfortunately, the content cannot always be expressed in this way: developers and content creators need the data to have some peculiarities (like windows on the wall of a house's 3D model) and a method to drive data simplification without losing relevant details. In this paper we discuss how aspect-oriented (AO) techniques can be used to drive the content creation process by mapping each data peculiarity to the code that generates it. Using aspects lets us partially evaluate the code of the procedure, improving performance without losing the flow of the generation logic. We also discuss how the use of AO can provide techniques to build simplified versions of the data through code transformations.

[10]
Walter Cazzola and Stefano Salvotelli, “Recognizing Join Points from their Context through Graph Grammars”, in Proceedings of the 13th Aspect-Oriented Modeling Workshop (AOM'09), Charlottesville, Virginia, USA, March 2009, pp. 37–42, ACM. [ http ]
Aspect-oriented software development has been proposed with the intent of better modularizing object-oriented programs by confining crosscutting concerns in aspects. Unfortunately, aspects do not completely keep their promises. Most of the current approaches turn out to be tightly coupled with the base program's code, compromising modularity. Moreover, the feasible modularization is coarse-grained, since aspects can only be woven at the public-interface level and not at a generic statement. We have designed the Blueprint framework to overcome these limits. Join points are located through a description of the context where they can be found. This work is about the framework's realization and the role that graph grammars play in locating join points in the base program from the context description.

[11]
Walter Cazzola and Ivan Speziale, “Sectional Domain Specific Languages”, in Proceedings of the 4th Domain Specific Aspect-Oriented Languages (DSAL'09), Charlottesville, Virginia, USA, March 2009, pp. 11–14, ACM. [ http ]
Nowadays, many problems are solved by using a domain-specific language (DSL), i.e., a programming language tailored to a particular application domain. Normally, a new DSL is designed and implemented from scratch, requiring a long time-to-market due to implementation and testing issues. When the DSL simply extends another language, it is instead realized as a source-to-source transformation or as an external library, with limited flexibility.

The Hive framework is developed with the intent of overcoming these issues by providing a mechanism to compose different programming features together, forming a new DSL: what we call a sectional DSL. The support (at both compiler and interpreter level) for each feature is described separately and easily composed with the others. This approach is quite flexible and permits building a new DSL from scratch, or simplifying an existing language, without penalties. Moreover, it has the desirable side effect that each DSL can be extended at any time, potentially also at run time.

[12]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Getting Farther on Software Evolution via AOP and Reflection”, in ECOOP'08 Workshop Reader, Patrick Eugster, Ed., Lecture Notes in Computer Science 5475, pp. 63–69. Springer-Verlag, March 2009. [ .pdf ]
[13]
Walter Cazzola, Shigeru Chiba, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 5th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'08), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, December 2008. [ .pdf ]
[14]
Eduardo Figueiredo, Cláudio Sant'Anna, Alessandro Garcia, Thiago T. Bartolomei, Walter Cazzola, and Alessandro Marchetto, “On the Maintainability of Aspect-Oriented Software: A Concern-Oriented Measurement Framework”, in Proceedings of the 12th European Conference on Software Maintenance and Reengineering (CSMR 2008), Christos Tjortjis and Andreas Winter, Eds., Athens, Greece, April 2008, pp. 183–192, IEEE Press. [ .pdf ]
Aspect-oriented design needs to be systematically assessed with respect to modularity flaws caused by the realization of driving system concerns, such as tangling, scattering, and excessive concern dependencies. As a result, innovative concern metrics have been defined to support quantitative analyses of concern properties. However, the vast majority of these measures have not yet been theoretically validated, nor have they managed to gain acceptance in academic or industrial settings. The core reason for this problem is that they have not been built using clearly defined terminology and criteria. This paper defines a concern-oriented framework that supports the instantiation and comparison of concern measures. The framework subsumes the definition of a core terminology and criteria in order to lay down a rigorous process fostering the definition of meaningful and well-founded concern measures. To evaluate the framework's generality, we demonstrate its instantiation and extension to a number of concern measure suites previously used in empirical studies of aspect-oriented software maintenance.

[15]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, Gunter Saake, Yvonne Coady, Stéphane Ducasse, and Günter Kniesel, “Enabling Software Evolution via AOP and Reflection”, in ECOOP'07 Workshop Reader, Michael Cebulla, Ed., Lecture Notes in Computer Science 4906, pp. 91–98. Springer-Verlag, February 2008. [ .pdf ]
[16]
Walter Cazzola and Alessandro Marchetto, “AOPHiddenMetrics: Separation, Extensibility and Adaptability in SW Measurement”, Journal of Object Technology, vol. 7, no. 2, pp. 53–68, February 2008. [ .pdf ]
Traditional approaches to dynamic system analysis and metrics measurement are based on the instrumentation of system code (source, intermediate or executable code) or need ad hoc support from the run-time environment. In these contexts, the measurement process is tricky and invasive, and the results can be affected by the process itself, making the data not germane.

Moreover, the tools based on these approaches are difficult to customize, extend and often even to use, since their properties are rooted in specific system details (e.g., special tools such as bytecode analyzers, or virtual machine goodies such as the debugger interface) and require considerable effort, skills and knowledge to be adapted.

Notwithstanding its importance, software measurement is clearly a nonfunctional concern and should not impact software development or efficiency. Aspect-oriented programming provides the mechanisms to deal with this kind of concern and to overcome these software measurement limitations.

In this paper, we present a different approach to dynamic software measurement based on aspect-oriented programming, and the corresponding support framework, named AOPHiddenMetrics. The proposed approach makes the measurement process highly customizable and easy to use, reducing its invasiveness and its dependency on code knowledge.

[17]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2007. [ .pdf ]
[18]
Walter Cazzola, Shigeru Chiba, and Gunter Saake (Eds), “Special Issue on Software Evolution”, Transactions on Aspect-Oriented Software Development, vol. 1, no. 4, October 2007, printed in Lecture Notes in Computer Science 4640.
[19]
Jörg Kienzle, Jeff Gray, Dominik Stein, Walter Cazzola, Omar Aldawud, and Tzilla Elrad, “11th International Workshop on Aspect-Oriented Modeling (Report)”, in MoDELS 2007 Workshops, Holger Giese, Ed., Nashville, TN, USA, September 2007, Lecture Notes in Computer Science 5002, pp. 1–6, Springer. [ .pdf ]
[20]
Walter Cazzola and Sonia Pini, “On the Footprints of Join Points: The Blueprint Approach”, Journal of Object Technology, vol. 6, no. 7, pp. 167–192, August 2007. [ .pdf ]
Aspect-oriented techniques are widely used to better modularize object-oriented programs by introducing crosscutting concerns in a safe and non-invasive way, i.e., aspect-oriented mechanisms better address the modularization of functionality that orthogonally crosscuts the implementation of the application.

Unfortunately, as noted by several researchers, most of the current aspect-oriented approaches are too coupled with the application code, and this fact hinders the separability of concerns and consequently their reusability, since each aspect is strictly tailored to the base application. Moreover, join points (i.e., the locations affected by a crosscutting concern) are currently defined at the operation level. This implies that the possible set of join points includes every operation (e.g., method invocation) that the system performs, whereas in many contexts we wish to define aspects that work at the statement level, i.e., that consider as a join point every point between two generic statements (i.e., lines of code).

In this paper, we present our approach, called Blueprint, to overcoming the abovementioned limitations of current aspect-oriented approaches. Blueprint is a new aspect-oriented programming language based on modeling the join point selection mechanism at a high level of abstraction to decouple aspects from the application code. To this end, we adopt a high-level, pattern-based join point model, in which join points are described by join point blueprints, i.e., behavioral patterns describing where the join points should be found.

[21]
Walter Cazzola, Jeff Gray, Dominik Stein, Jörg Kienzle, Tzilla Elrad, and Omar Aldawud (Eds), “Special Issue on Aspect-Oriented Modeling”, Journal of Object Technology, vol. 6, no. 7, August 2007. [ http ]
[22]
Walter Cazzola and Sonia Pini, “AOP vs Software Evolution: a Score in Favor of the Blueprint”, in Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Berlin, Germany, July 2007, pp. 81–91. [ .pdf ]
All software systems are subject to evolution, independently of the technique used to develop them. Aspect-oriented software, in addition to separating the different concerns during development, must not be fragile against software evolution. Otherwise, the benefit of disentangling the code will be buried by the extra complication of maintaining it.

To achieve this goal, aspect-oriented languages and tools must evolve: they have to be less coupled to the base program. In recent years, a few attempts have been proposed; Blueprint is our proposal, based on behavioral patterns.

In this paper we test the robustness of the Blueprint aspect-oriented language against software evolution.

[23]
Walter Cazzola and Alessandro Marchetto, “AOPHiddenMetrics”, Technical Report TR 19-07, Università degli Studi di Milano, Milano, Italy, June 2007. [ www: ]

[24]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2006. [ .pdf ]
[25]
Walter Cazzola and Sonia Pini, “Join Point Patterns: a High-Level Join Point Selection Mechanism”, in MoDELS'06 Satellite Events Proceedings, Thomas Kühne, Ed., Genova, Italy, October 2006, Lecture Notes in Computer Science 4364, pp. 17–26, Springer. Best Paper Award at the 9th Aspect-Oriented Modeling Workshop. [ .pdf ]
Aspect-Oriented Programming is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact hinders the separability and reusability of concerns, since each aspect is strictly tailored to the base application.

This work proposes a possible solution to this problem based on modeling the join point selection mechanism at a higher level of abstraction. In our view, the aspect designer does not need to know the inner details of the application, such as a specific implementation or the naming conventions used; rather, he/she only needs to know the application behavior to apply his/her aspects.

In the paper, we present a novel join point model with a join point selection mechanism based on a high-level program representation. This high-level view of the application decouples the aspect definitions from the base program's structure and syntax. The separation between aspects and base program renders the aspects more reusable and independent of the manipulated application.

[26]
Jörg Kienzle, Dominik Stein, Walter Cazzola, Jeff Gray, Omar Aldawud, and Tzilla Elrad, “9th International Workshop on Aspect-Oriented Modeling (Report)”, in MoDELS'06 Satellite Events Proceedings, Thomas Kühne, Ed., Genova, Italy, October 2006, Lecture Notes in Computer Science 4364, pp. 1–5, Springer. [ .pdf ]
[27]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “Design-Based Pointcuts Robustness Against Software Evolution”, in Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Nantes, France, July 2006, pp. 35–45. [ .pdf ]
Aspect-Oriented Programming (AOP) is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact harms the evolvability of the program, hinders concern selection and reduces aspect reusability. Overcoming this problem is a hot topic.

This work proposes a possible solution to the limits of current aspect-oriented techniques, based on modeling the join point selection mechanism at a higher level of abstraction to decouple the base program and the aspects.

In this paper, we present by example a novel join point model based on design models (e.g., expressed through UML diagrams). Design models provide a high-level view of the application structure and behavior, decoupled from the base program. A design-oriented join point model renders aspect definitions more robust against base program evolution, reusable, and independent of the base program.

[28]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, “AOSD and Reflection: Benefits and Drawbacks to Software Evolution”, in ECOOP'06 Workshop Reader, Charles Consel and Mario Südholt, Eds., Lecture Notes in Computer Science 4379, pp. 40–52. Springer-Verlag, July 2006. [ .pdf ]
[29]
Walter Cazzola, Antonio Cicchetti, and Alfonso Pierantonio, “Towards a Model-Driven Join Point Model”, in Proceedings of the 11th Annual ACM Symposium on Applied Computing (SAC'06), Dijon, France, April 2006, pp. 1306–1307, ACM Press. [ .pdf | http ]
Aspect-Oriented Programming (AOP) is increasingly being adopted by developers to better modularize object-oriented design by introducing crosscutting concerns. However, due to the tight coupling of existing approaches with the implementing code and to the poor expressiveness of their pointcut languages, a number of problems have become evident. Model-Driven Architecture (MDA) is an emerging technology that aims at shifting the focus of software development from a programming-language-specific implementation to application design, using appropriate representations by means of models that can be transformed toward several development platforms. Therefore, this work presents a possible solution based on modeling aspects at a higher level of abstraction; these models are, in turn, transformed to specific targets.

[30]
Walter Cazzola, Jean-Marc Jézéquel, and Awais Rashid, “Semantic Join Point Models: Motivations, Notions and Requirements”, in Proceedings of the Software Engineering Properties of Languages and Aspect Technologies Workshop (SPLAT'06), Bonn, Germany, March 2006. [ .pdf ]
Aspect-oriented programming (AOP) has been designed to provide a better separation of concerns at the development level by modularizing concerns that would otherwise be tangled and scattered across other concerns. Current mainstream AOP techniques separate crosscutting concerns on a syntactic basis, whereas a concern is more a semantic matter. Therefore, a different, more semantics-oriented approach to AOP is needed. In this position paper, we investigate the limitations of mainstream AOP techniques in this regard and highlight the issues that need to be addressed to design semantics-based join point models.

[31]
Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Preprint No. 9/2005 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2005. [ .pdf ]
[32]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “The Role of Design Information in Software Evolution”, in Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Glasgow, Scotland, July 2005, pp. 59–70. [ .pdf ]
Software modeling has received a lot of attention in the last decade and is now an important support for the design process.

Indeed, the design process is very important to the usability and understandability of the system: for example, functional requirements present a complete description of how the system will function from the user's perspective, while non-functional requirements dictate properties and impose constraints on the project or system.

The design models and the implementation code must be strictly connected, i.e., we must have correlation and consistency between these two views, and this correlation must hold throughout the whole software life cycle. Often the early stages of development, the specification and design of the system, are ignored once the code has been developed. This practice causes many problems, in particular when the system must evolve. Nowadays, maintaining software is a difficult task, since there is a high degree of coupling between the software itself and its environment. Often, changes in the environment cause changes in the software; in other words, the system must evolve to follow the evolution of its environment.

Typically, a design is created initially, but as the code gets written and modified, the design is not updated to reflect such changes.

This paper describes and discusses how design information can be used to drive software evolution and, consequently, to maintain consistency between design and code.

[33]
Walter Cazzola, Antonio Cicchetti, and Alfonso Pierantonio, “On the Problems of the JPMs”, in Proceedings of the 1st ECOOP Workshop on Models and Aspects (MAW'05), Glasgow, Scotland, July 2005. [ .pdf ]
[34]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “AOP for Software Evolution: A Design Oriented Approach”, in Proceedings of the 10th Annual ACM Symposium on Applied Computing (SAC'05), Santa Fe, New Mexico, USA, March 2005, pp. 1356–1360, ACM Press. [ http ]
In this paper, we briefly explore the aspect-oriented approach as a tool for supporting software evolution. The aim of this analysis is to highlight the potential and the limits of aspect-oriented development for software evolution. From our analysis it follows that, in general (and in particular for AspectJ), the approaches to defining join points, pointcuts and advice are not intuitive, abstract and expressive enough to support all the requirements for carrying out software evolution. We also examine how a mechanism for specifying pointcuts and advice based on design information, in particular on the use of UML diagrams, can better support software evolution through aspect-oriented programming. Our analysis and proposal are presented through an example.

[35]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Software Evolution: a Trip through Reflective, Aspect, and Meta-Data Oriented Techniques”, in ECOOP'04 Workshop Reader, Jacques Malenfant and Bjarte M. Østvold, Eds., Lecture Notes in Computer Science 3344, pp. 116–130. Springer-Verlag, December 2004. [ .pdf ]
[36]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, Eds., Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Research Report C-196 of the Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology. Preprint No. 10/2004 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, July 2004. [ .pdf ]
[37]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “Evolving Pointcut Definition to Get Software Evolution”, in Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Oslo, Norway, June 2004, pp. 83–88. [ .pdf ]
In this paper, we briefly analyze the aspect-oriented approach with respect to software evolution. The aim of this analysis is to highlight the potential of aspect orientation for software evolution and its limits. From our analysis, we can state that current pointcut definition mechanisms are not expressive enough to pick out, from design information, where software evolution should be applied. We also give some suggestions on how to improve the pointcut definition mechanism.

[1]
Walter Cazzola and Mehdi Jalili, “Dodging Unsafe Update Points in Java Dynamic Updating Systems”, in Proceedings of the 27th International Symposium on Software Reliability Engineering (ISSRE'16), Alexander Romanovsky and Elena Troubitsyna, Eds., Ottawa, Canada, October 2016, IEEE, pp. 332–341. [ .pdf ]
Dynamic Software Updating (DSU) provides mechanisms to update a program without stopping its execution. An indiscriminate update, one that does not consider the current state of the computation, can undermine the stability of the running application. Automatically determining a safe moment at which to update the running system is still an open problem, often neglected by existing DSU systems. This paper proposes a mechanism that supports the choice of a safe update point by marking which points must be considered unsafe and therefore dodged during the update. The method is based on decorating the code with specific meta-data that can be used to find the right moment to perform the update. The proposed approach has been implemented as an external component that can be plugged into any DSU system. The approach is demonstrated on the evolution of the HSQLDB system from two distinct versions to their next update.
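The marking idea described in the abstract can be sketched with standard Java runtime annotations. Everything below is a hypothetical illustration: the `@UnsafeUpdatePoint` name, its `reason` member, and the `safeToUpdateAt` check are not the paper's actual meta-data vocabulary, only a minimal stand-in for the "decorate code, then inspect the markers" pattern.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class UpdatePointCheck {
    // Hypothetical marker: name and shape are illustrative, not the
    // paper's actual meta-data vocabulary.
    @Retention(RetentionPolicy.RUNTIME)
    @interface UnsafeUpdatePoint { String reason(); }

    @UnsafeUpdatePoint(reason = "holds a transaction lock")
    static void commit() { }

    static void idle() { }

    // An updating system could consult the marker to dodge unsafe points.
    static boolean safeToUpdateAt(Method m) {
        return m.getAnnotation(UnsafeUpdatePoint.class) == null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(safeToUpdateAt(UpdatePointCheck.class.getDeclaredMethod("commit"))); // false
        System.out.println(safeToUpdateAt(UpdatePointCheck.class.getDeclaredMethod("idle")));   // true
    }
}
```

A real DSU component would of course check reachability of such points on the running stack rather than a static method list; the sketch only shows the marker-and-query shape.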

[2]
Walter Cazzola and Edoardo Vacchi, “@Java: Bringing a Richer Annotation Model to Java”, Computer Languages, Systems & Structures, vol. 40, no. 1, pp. 2–18, April 2014. [ DOI | www: ]
The ability to annotate code and, in general, the capability to attach arbitrary meta-data to portions of a program are features that have become more and more common in programming languages.

Annotations in Java make it possible to attach custom, structured meta-data to declarations of classes, fields and methods. However, the mechanism has some limits: annotations can only decorate declarations and their instantiation can only be resolved statically.

With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations, as well as dynamically evaluated members. In other words, in our model, the granularity of annotations extends to the statement and expression level and annotations may hold the result of runtime-evaluated expressions.

Our extension to the Java annotation model is twofold: (i) we introduced block and expression annotations and (ii) we allow every annotation to hold dynamically evaluated values. Our implementation also provides an extended reflection API to support inspection and retrieval of our enhanced annotations.
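For contrast with the extension described above, the following sketch uses only stock Java reflection to show the baseline model that @Java extends: annotations attach to declarations only (here, a method), and their members are compile-time constants. @Java's own block/expression syntax and extended reflection API are not reproduced here.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationBaseline {
    // A plain Java 5-style annotation: it can only decorate declarations,
    // and its members must be resolved statically.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Tag {
        String value();
    }

    @Tag("checkpoint")
    static void step() { /* only the whole declaration can carry the tag */ }

    public static void main(String[] args) throws Exception {
        Method m = AnnotationBaseline.class.getDeclaredMethod("step");
        Tag t = m.getAnnotation(Tag.class);
        System.out.println(t.value()); // prints "checkpoint"
    }
}
```

In the standard model there is no way to move `@Tag` onto a single statement or expression inside `step()`, nor to make `value()` the result of a runtime-evaluated expression; those are exactly the two restrictions the paper lifts.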

[3]
Walter Cazzola and Edoardo Vacchi, “Fine-Grained Annotations for Pointcuts with a Finer Granularity”, in Proceedings of the 28th Annual ACM Symposium on Applied Computing (SAC'13), Coimbra, Portugal, March 2013, pp. 1709–1714, ACM Press. [ http ]
A number of authors have suggested that AspectJ-like pointcut languages are too limited and cannot select every possible join point in a program. Many enhanced pointcut languages have been proposed; they require virtually no change to the original code, but their improved expressive power often comes at the cost of making the pointcut expression too tightly connected with the structure of the programs being advised. Other solutions consist of simple extensions to the base language; they require only small changes to the original code, but they frequently serve no other immediate purpose than exposing pieces of code to the weaver. Annotations are a form of metadata introduced in Java 5. They have a number of uses: they may provide hints to the compiler, supply information to code processing tools, and be retained at runtime. At the moment of writing, runtime-accessible annotations in the Java programming language can only be applied to classes, fields, and methods. Support for annotating expressions and blocks feels like a natural extension to Java's annotation model, which can also be exploited to expose join points at a finer granularity. In this paper we present an extension to the @AspectJ language to select block and expression annotations in the @Java language extension.

[4]
Walter Cazzola and Edoardo Vacchi, “@Java: Annotations in Freedom”, in Proceedings of the 28th Annual ACM Symposium on Applied Computing (SAC'13), Coimbra, Portugal, March 2013, pp. 1691–1696, ACM Press. [ http ]
The ability to annotate code and, in general, the capability to attach arbitrary metadata to portions of a program are features that have become more and more common in programming languages. In fact, various programming techniques and tools exploit their explicit availability for a number of purposes, such as extracting documentation, guiding code profiling, enhancing the description of a data type, marking code for instrumentation (for instance, in aspect-oriented frameworks), and the list could go on.

While support for attaching metadata to code is not a new concept (programming platforms such as CLOS and Smalltalk pioneered this field), consistent, pervasive APIs to define and manage code annotations are comparatively recent on modern platforms such as .NET and Java.

Annotations in Java make it possible to attach custom, structured metadata to declarations of classes, fields, and methods. With this work, we propose an extension to Java (named @Java) with a richer annotation model, supporting code block and expression annotations. In other words, the granularity of annotations extends to the statement and expression level and is not limited to class, method, and field declarations.

[5]
Walter Cazzola, Diego Colombo, and Duncan Harrison, “Aspect-Oriented Procedural Content Engineering for Game Design”, in Proceedings of the 14th Annual ACM Symposium on Applied Computing (SAC'09), Honolulu, Hawai'i, USA, March 2009, ACM, pp. 1957–1962. [ http ]
Progressive procedural content in the context of 3D scene rendering is generally expressed as recursive functions where a finer level of detail gets computed on demand. Typical examples of procedurally generated content are fractal images and noise textures. Unfortunately, the content cannot always be expressed in this way: developers and content creators need the data to have some peculiarities (like windows on a wall of a house 3D model) and a method to drive data simplification without losing relevant details. In this paper we discuss how aspect-oriented (AO) techniques can be used to drive the content creation process by mapping each data peculiarity to the code that generates it. Using aspects lets us partially evaluate the code of the procedure, improving performance without losing the flow of the generation logic. We also discuss how AO can provide techniques to build simplified versions of the data through code transformations.
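The "finer level of detail computed on demand" pattern the abstract opens with can be sketched as a recursive refinement function. This midpoint refinement over a 1D sample strip is an illustrative stand-in for such a generator, not the paper's content-creation machinery.

```java
import java.util.ArrayList;
import java.util.List;

public class ProgressiveDetail {
    // Recursive refinement: each level inserts midpoints between samples,
    // so a finer level of detail is computed only when requested.
    static List<Double> refine(List<Double> samples, int levels) {
        if (levels == 0) return samples;
        List<Double> finer = new ArrayList<>();
        for (int i = 0; i < samples.size() - 1; i++) {
            finer.add(samples.get(i));
            finer.add((samples.get(i) + samples.get(i + 1)) / 2.0); // midpoint
        }
        finer.add(samples.get(samples.size() - 1));
        return refine(finer, levels - 1);
    }

    public static void main(String[] args) {
        List<Double> coarse = List.of(0.0, 1.0);
        System.out.println(refine(coarse, 1)); // [0.0, 0.5, 1.0]
        System.out.println(refine(coarse, 2).size()); // 5
    }
}
```

A fractal generator would perturb each midpoint (e.g., with scaled noise) instead of averaging; the paper's point is that data peculiarities such as "windows on a wall" do not fit this purely recursive scheme and call for aspect-driven generation instead.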

[6]
Walter Cazzola, Shigeru Chiba, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 5th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'08), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, December 2008. [ .pdf ]
[7]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, Gunter Saake, Yvonne Coady, Stéphane Ducasse, and Günter Kniesel, “Enabling Software Evolution via AOP and Reflection”, in ECOOP'07 Workshop Reader, Michael Cebulla, Ed., Lecture Notes in Computer Science 4906, pp. 91–98. Springer-Verlag, February 2008. [ .pdf ]
[8]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2007. [ .pdf ]
[9]
Walter Cazzola, Sonia Pini, Ahmed Ghoneim, and Gunter Saake, “Co-Evolving Application Code and Design Models by Exploiting Meta-Data”, in Proceedings of the 12th Annual ACM Symposium on Applied Computing (SAC'07), Seoul, South Korea, March 2007, pp. 1275–1279, ACM Press. [ http ]
Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information.

Refactoring, evolution, and in general every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in subsequent maintenance steps. Code evolution should evolve not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is hard to carry out automatically, since there is an evident and notorious gap between the two representations.

We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer uses to annotate the application code and to highlight the refactorings performed on the code. These meta-data are then retrieved through reflection and used to automatically and coherently update the application's design models.
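The annotate-then-retrieve loop can be sketched with a standard Java runtime annotation. The `@Renamed` marker below is hypothetical; the paper's predefined meta-data set is not spelled out in the abstract, so only the general shape is shown.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RefactoringTrace {
    // Hypothetical meta-data recording a "rename class" refactoring.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Renamed { String from(); }

    @Renamed(from = "Account")
    static class CustomerAccount { }

    public static void main(String[] args) {
        // A model synchronizer would retrieve the marker via reflection
        // and replay the rename on the design model.
        Renamed r = CustomerAccount.class.getAnnotation(Renamed.class);
        System.out.println(r.from() + " -> " + CustomerAccount.class.getSimpleName());
    }
}
```

The design-model side (updating the UML class diagram accordingly) is where the paper's actual contribution lies; this sketch only covers the code-side annotation and its reflective retrieval.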

[10]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2006. [ .pdf ]
[11]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, “AOSD and Reflection: Benefits and Drawbacks to Software Evolution”, in ECOOP'06 Workshop Reader, Charles Consel and Mario Südholt, Eds., Lecture Notes in Computer Science 4379, pp. 40–52. Springer-Verlag, July 2006. [ .pdf ]
[12]
Walter Cazzola, Antonio Cisternino, and Diego Colombo, “Freely Annotating C#”, Journal of Object Technology, vol. 4, no. 10, pp. 31–48, December 2005. [ .pdf ]
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like the JVM and CLR. With custom attributes Microsoft introduced an extensible model of reflection for the CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The annotation model, both in Java and in C#, limits annotations to classes and class members. In this paper we describe [a]C#, an extension of the C# programming language that allows programmers to annotate statements and code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. We also discuss how to use [a]C# to annotate programs, giving hints on how to parallelize a sequential method, and how it can be implemented by means of the abstractions provided by the run-time of the language. Finally, we show how our model for custom attributes has been realized.

[13]
Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Preprint No. 9/2005 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2005. [ .pdf ]
[14]
Antonio Cisternino, Walter Cazzola, and Diego Colombo, “Metadata-Driven Library Design”, in Proceedings of Library-Centric Software Design Workshop (LCSD'05), San Diego, CA, USA, October 2005. [ .pdf ]
Library development has greatly benefited from the wide adoption of virtual machines like the JVM and Microsoft .NET. Reflection services and first-class dynamic loading have contributed to this trend. Microsoft introduced the notion of custom annotations, a way for the programmer to define custom meta-data stored alongside reflection meta-data within the executable file. Recently, Java has also introduced an equivalent notion into the virtual machine. Custom annotations allow programmers to give hints to libraries about their intentions without having to introduce semantic dependencies within the program; on the other hand, these annotations are read at run-time, introducing a certain amount of overhead. The aim of this paper is to investigate the impact of this new feature on library design, focusing on both expressivity and performance issues.

[15]
Walter Cazzola, Antonio Cisternino, and Diego Colombo, “[a]C#: C# with a Customizable Code Annotation Mechanism”, in Proceedings of the 10th Annual ACM Symposium on Applied Computing (SAC'05), Santa Fe, New Mexico, USA, March 2005, pp. 1274–1278, ACM Press. [ http ]
Reflective programming is becoming popular due to the increasing set of dynamic services provided by execution environments like the JVM and CLR. With custom attributes Microsoft introduced an extensible model of reflection for the CLR: they can be used as additional decorations on element declarations. The same notion has been introduced in Java 1.5. The extensible model proposed in both platforms limits annotations to class members. In this paper we describe [a]C#, an extension of the C# programming language that allows programmers to annotate statements or code blocks and retrieve these annotations at run-time. We show how this extension can be reduced to the existing model. A set of operations on annotated code blocks to retrieve annotations and manipulate bytecode is introduced. Finally, we discuss how to use [a]C# to annotate programs, giving hints on how to parallelize a sequential method, and how it can be implemented by means of the abstractions provided by the run-time of the language.

[16]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Software Evolution: a Trip through Reflective, Aspect, and Meta-Data Oriented Techniques”, in ECOOP'04 Workshop Reader, Jacques Malenfant and Bjarte M. Østvold, Eds., Lecture Notes in Computer Science 3344, pp. 116–130. Springer-Verlag, December 2004. [ .pdf ]
[17]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, Eds., Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Research Report C-196 of the Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology. Preprint No. 10/2004 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, July 2004. [ .pdf ]
[1]
Walter Cazzola and Albert Shaqiri, “Open Programming Language Interpreters”, The Art, Science, and Engineering of Programming Journal, vol. 1, no. 2, pp. 5:1–5:34, April 2017. [ DOI | NEW! | http ]
Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them. Inquiry: We address the problem of dynamic interpreter adaptation, to tailor the interpreter's behavior to the task at hand and to introduce new features that fulfill unforeseen requirements. Many languages provide a MOP that to some degree supports reflection. However, MOPs are typically language-specific, their reflective functionality is often restricted, and the adaptation and application logic are often mixed, which hampers the understanding and maintenance of the source code. Our system overcomes these limitations. Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated into the Neverlang framework. The system exposes the structure, behavior, and runtime state of any Neverlang-based interpreter, with the ability to modify them. Knowledge: Our system provides complete control over an interpreter's structure, behavior, and runtime state. The approach is applicable to every Neverlang-based interpreter, and adaptation code can potentially be reused across different language implementations. Grounding: With a prototype implementation, we focused on a feasibility evaluation. The paper shows that our approach addresses problems commonly found in the research literature; a demonstrative video and examples illustrate our approach on dynamic software adaptation, aspect-oriented programming, debugging, and context-aware interpreters. Importance: To our knowledge, this paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support, for free, to any Neverlang-based interpreter; we are not aware of any prior application of open implementations to programming language interpreters in the sense defined here. Rather than substituting for other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.
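The kind of adaptation the abstract describes, redefining an interpreter's behavior at runtime without touching its evaluation logic, can be sketched with a toy interpreter whose operations live in an exposed table. Neverlang's actual MOP is far richer; every name below is illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntBinaryOperator;

public class AdaptableInterpreter {
    // A toy expression interpreter whose operations are exposed in a table,
    // so adaptation code can inspect and replace them while it runs.
    private final Map<String, IntBinaryOperator> ops = new HashMap<>();

    AdaptableInterpreter() {
        ops.put("+", (a, b) -> a + b);
    }

    int eval(int a, String op, int b) {
        return ops.get(op).applyAsInt(a, b);
    }

    // The "open" part: adaptation logic redefines behavior without
    // modifying eval() or restarting the interpreter.
    void redefine(String op, IntBinaryOperator impl) {
        ops.put(op, impl);
    }

    public static void main(String[] args) {
        AdaptableInterpreter i = new AdaptableInterpreter();
        System.out.println(i.eval(2, "+", 3)); // 5
        i.redefine("+", (a, b) -> a + b + 100); // adapted semantics
        System.out.println(i.eval(2, "+", 3)); // 105
    }
}
```

Keeping the adaptation entry point (`redefine`) separate from the evaluation path mirrors the paper's complaint about MOPs that mix adaptation and application logic.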

[2]
Walter Cazzola, Ruzanna Chitchyan, Awais Rashid, and Albert Shaqiri, “μ-DSU: A Micro-Language Based Approach to Dynamic Software Updating”, Computer Languages, Systems & Structures, 2017. [ DOI | NEW! | www: ]

[3]
Walter Cazzola and Albert Shaqiri, “Context-Aware Software Variability through Adaptable Interpreters”, IEEE Software, 2017, Special Issue on Context Variability Modeling. [ NEW! | www: ]

[4]
Walter Cazzola and Mehdi Jalili, “Dodging Unsafe Update Points in Java Dynamic Updating Systems”, in Proceedings of the 27th International Symposium on Software Reliability Engineering (ISSRE'16), Alexander Romanovsky and Elena Troubitsyna, Eds., Ottawa, Canada, October 2016, IEEE, pp. 332–341. [ .pdf ]
Dynamic Software Updating (DSU) provides mechanisms to update a program without stopping its execution. An indiscriminate update, one that does not consider the current state of the computation, can undermine the stability of the running application. Automatically determining a safe moment at which to update the running system is still an open problem, often neglected by existing DSU systems. This paper proposes a mechanism that supports the choice of a safe update point by marking which points must be considered unsafe and therefore dodged during the update. The method is based on decorating the code with specific meta-data that can be used to find the right moment to perform the update. The proposed approach has been implemented as an external component that can be plugged into any DSU system. The approach is demonstrated on the evolution of the HSQLDB system from two distinct versions to their next update.

[5]
Mohammed Al-Refai, Sudipto Ghosh, and Walter Cazzola, “Model-based Regression Test Selection for Validating Runtime Adaptation of Software Systems”, in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16), Lionel Briand and Sarfraz Khurshid, Eds., Chicago, IL, USA, April 2016, pp. 288–298, IEEE. [ .pdf ]
An increasing number of modern software systems need to be adapted at runtime without stopping their execution. Runtime adaptations can introduce faults in existing functionality, and thus, regression testing must be conducted after an adaptation is performed but before the adaptation is deployed to the running system. Regression testing must be completed subject to time and resource constraints. Thus, test selection techniques are needed to reduce the cost of regression testing.

The FiGA framework provides a complete loop from code to models and back that allows fine-grained model-based adaptation and validation of running Java systems without stopping their execution. In this paper we present a model-based test selection approach for regression testing during the validation activity to be used with the FiGA framework. The evaluation results show that our approach was able to reduce the number of selected test cases, and that the model-level fault detection ability of the selected test cases was never lower than that of the original test cases.

[6]
Walter Cazzola and Albert Shaqiri, “Dynamic Software Evolution through Interpreter Adaptation”, in Proceedings of the 15th International Conference on Modularity (Modularity'16), Málaga, Spain, March 2016, pp. 16–19, ACM. [ http ]
Significant research has been dedicated to dynamic software evolution and adaptation, leading to different approaches that can mainly be categorized as either architecture-based or language-based. However, little or no attention has been paid to dynamic evolution achieved through language interpreter adaptation. In this paper we present a model for such adaptations and illustrate their applicability and usefulness with practical examples developed in Neverlang, a framework for modular language development with features for dynamic adaptation of language interpreters.

[7]
Mohammed Al-Refai, Walter Cazzola, Sudipto Ghosh, and Robert France, “Using Models to Validate Unanticipated, Fine-Grained Adaptations at Runtime”, in Proceedings of the 17th IEEE International Symposium on High Assurance Systems Engineering (HASE'16), Helene Waeselynck and Radu Babiceanu, Eds., Orlando, FL, USA, January 2016, pp. 23–30, IEEE. [ .pdf ]
An increasing number of modern software systems need to be adapted at runtime while they are still executing. It becomes crucial to validate each adaptation before it is deployed to the running system. Models are used to ease software maintenance and can, therefore, be used to manage dynamic software adaptations. For example, models are used to manage coarse-grained anticipated adaptations for self-adaptive systems. However, the need for both fine-grained and unanticipated adaptations is becoming increasingly common, and their validation is also becoming more crucial.

This paper proposes an approach to validate unanticipated, fine-grained adaptations performed on models before the adaptations are deployed into the running system. The proposed approach exploits model execution where model representations of the test suites of a software system are executed. The proposed approach is demonstrated and evaluated within the Fine Grained Adaptation (FiGA) framework.

[8]
Ruzanna Chitchyan, Walter Cazzola, and Awais Rashid, “Engineering Sustainability through Language”, in Proceedings of the 37th International Conference on Software Engineering (ICSE'15), Firenze, Italy, May 2015, pp. 501–504, IEEE, Track on Software Engineering in Society. [ .pdf ]
As our understanding and care for sustainability concerns increases, so does the demand for incorporating these concerns into software. Yet, existing programming language constructs are not well-aligned with concepts of the sustainability domain. This undermines what we term technical sustainability of the software due to (i) increased complexity in programming of such concerns and (ii) continuous code changes to keep up with changes in (environmental, social, legal and other) sustainability-related requirements. In this paper we present a proof-of-concept approach on how technical sustainability support for new and existing concerns can be provided through flexible language-level programming. We propose to incorporate sustainability-related behaviour into programs through micro-languages enabling such behaviour to be updated and/or redefined as and when required.

[9]
Walter Cazzola, Nicole Alicia Rossini, Phillipa Bennett, Sai Pradeep Mandalaparty, and Robert B. France, “Fine-Grained Semi-Automated Runtime Evolution”, in MoDELS@Run-Time, Nelly Bencomo, Betty Chang, Robert B. France, and Uwe Aßmann, Eds., Lecture Notes in Computer Science 8378, pp. 237–258. Springer, August 2014. [ www: ]

[10]
Walter Cazzola, “Evolution as «Reflections on the Design»”, in MoDELS@Run-Time, Nelly Bencomo, Betty Chang, Robert B. France, and Uwe Aßmann, Eds., Lecture Notes in Computer Science 8378, pp. 259–278. Springer, August 2014. [ www: ]

[11]
Mohammed Al-Refai, Walter Cazzola, and Robert B. France, “Using Models to Dynamically Refactor Runtime Code”, in Proceedings of the 29th Annual ACM Symposium on Applied Computing (SAC'14), Gyeongju, South Korea, March 2014, ACM, pp. 1108–1113. [ http ]
Modern software systems that play critical roles in society's infrastructures are often required to change at runtime so that they can continuously provide essential services in the dynamic environments they operate in. Updating open, distributed software systems at runtime is very challenging. Using runtime models as an interface for updating software at runtime can help developers manage the complexity of updating software while it is executing. To support this idea, we developed the FiGA framework that permits developers to update running software through changes made to UML models of the running software. In this paper, we address the following question: can the UML models be used to express any type of code change a developer desires? Specifically, we report our experience on applying Fowler's code refactoring catalog through model refactoring in the FiGA framework. The goal of this work is to show that the set of FiGA change operators is complete by showing that the refactorings at the source code level can be expressed as model changes in the FiGA approach.

[12]
Walter Cazzola, Nicole Alicia Rossini, Mohammed Al-Refai, and Robert B. France, “Fine-Grained Software Evolution using UML Activity and Class Models”, in Proceedings of the 16th International Conference on Model Driven Engineering Languages and Systems (MoDELS'13), Ana Moreira and Bernhard Schätz, Eds., Miami, FL, USA, September-October 2013, Lecture Notes in Computer Science 8107, pp. 271–286, Springer. [ www: ]

[13]
Mario Pukall, Christian Kästner, Walter Cazzola, Sebastian Götz, Alexander Grebhahn, Reimar Schöter, and Gunter Saake, “JavAdaptor — Flexible Runtime Updates of Java Applications”, Software—Practice and Experience, vol. 43, no. 2, pp. 153–185, February 2013. [ DOI | .pdf ]
Software is changed frequently during its life cycle. New requirements come, and bugs must be fixed. To update an application, it usually must be stopped, patched, and restarted. This causes periods of unavailability, which is always a problem for highly available applications. Even during the development of complex applications, restarts to test new program parts can be time-consuming and annoying. Thus, we aim at dynamic software updates, which update programs at runtime. There is a large body of research on dynamic software updates, but so far, existing approaches have shortcomings either in terms of flexibility or performance. In addition, some of them depend on specific runtime environments and dictate the program's architecture. We present JavAdaptor, the first runtime update approach based on Java that (a) offers flexible dynamic software updates, (b) is platform independent, (c) introduces only minimal performance overhead, and (d) does not dictate the program architecture. JavAdaptor combines schema-changing class replacements (by class renaming and caller updates) with Java HotSwap, using containers and proxies. It runs on top of all major standard Java virtual machines. We evaluate our approach's applicability and performance in non-trivial case studies and compare it with existing dynamic software update approaches.
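One ingredient mentioned in the abstract, routing callers through proxies so the class behind them can be replaced, can be shown in miniature. This is only the generic proxy-indirection idea, not JavAdaptor's actual mechanism (which additionally relies on HotSwap, class renaming, and bytecode-level caller updates); all names are illustrative.

```java
public class SwapProxy {
    interface Greeter { String greet(); }

    // Callers hold the proxy; the delegate behind it can be swapped at
    // runtime, so callers keep working across an "update".
    static class GreeterProxy implements Greeter {
        private volatile Greeter delegate; // volatile: swap is visible to all threads
        GreeterProxy(Greeter d) { delegate = d; }
        void swap(Greeter d) { delegate = d; }
        public String greet() { return delegate.greet(); }
    }

    public static void main(String[] args) {
        GreeterProxy p = new GreeterProxy(() -> "v1");
        System.out.println(p.greet()); // v1
        p.swap(() -> "v2");            // replace the implementation without restarting callers
        System.out.println(p.greet()); // v2
    }
}
```

The hard part JavAdaptor solves, and this sketch does not, is introducing such indirection into an application that was not written with it, and doing so while the JVM is running.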

[14]
Ying Liu, Walter Cazzola, and Bin Zhang, “Towards a Colored Reflective Petri-Net Approach to Model Self-Evolving Service-Oriented Architectures”, in Proceedings of the 17th Annual ACM Symposium on Applied Computing (SAC'12), Riva del Garda, Trento, Italy, March 2012, pp. 1858–1865, ACM. [ http ]
Service-based software systems may need to evolve during their execution. To support this, system evolution must be considered from the design phase. Reflective Petri nets separate the system from its evolution by describing both the system and how it can evolve. However, reflective Petri nets have some expressivity limits and overcomplicate the consistency checking necessary during service evolution. In this paper, we extend the reflective Petri nets approach to overcome such limits, and we demonstrate the extension on a case study.

[15]
Mario Pukall, Alexander Grebhahn, Reimar Schröter, Christian Kästner, Walter Cazzola, and Sebastian Götz, “JavAdaptor: Unrestricted Dynamic Software Updates for Java”, in Proceedings of the 33rd International Conference on Software Engineering (ICSE'11), Waikiki, Honolulu, Hawaii, May 2011, pp. 989–991, IEEE. [ http ]
Dynamic software updates (DSU) are one of the top-most features requested by developers and users. As a result, DSU is already standard in many dynamic programming languages, but it is not standard in statically typed languages such as Java. Even though it sits at place number three on Oracle's current request-for-enhancement (RFE) list, DSU support in Java is very limited. Therefore, over the years, many different DSU approaches for Java have been proposed. Nevertheless, DSU for Java is still an active field of research, because most of the existing approaches are too restrictive: some have shortcomings either in terms of flexibility or performance, whereas others are platform dependent or dictate the program's architecture. With JavAdaptor, we present the first DSU approach that comes without those restrictions. We demonstrate JavAdaptor on the well-known arcade game Snake, which we update stepwise at runtime.

[16]
Lorenzo Capra and Walter Cazzola, “An Introduction to Reflective Petri Nets”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 9, pp. 191–217. IGI Global, November 2009. [ .pdf ]
Most discrete-event systems are subject to evolution during their lifecycle. Evolution often implies the development of new features and their integration into deployed systems. Taking evolution into account from the design phase is therefore mandatory. A common approach consists of hard-coding the foreseeable evolutions at the design level. Besides the obvious difficulties of this approach, the system's design also gets polluted by details not concerning functionality, which hamper analysis, reuse, and maintenance. Petri nets, as a central formalism for discrete-event systems, are not exempt from pollution when facing evolution. Embedding evolution in Petri nets requires expertise, as well as early knowledge of the evolution. The complexity of the resulting models is likely to affect the consolidated analysis algorithms for Petri nets. We introduce Reflective Petri nets, a formalism for dynamic discrete-event systems. Based on a reflective layout in which functional aspects are separated from evolution, this model preserves the description effectiveness and the analysis capabilities of Petri nets. Reflective Petri nets are provided with a timed state-transition semantics.

[17]
Lorenzo Capra and Walter Cazzola, “Trying out Reflective Petri Nets on a Dynamic Workflow Case”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 10, pp. 218–233. IGI Global, November 2009. [ .pdf ]
Industrial/business processes are an evident example of discrete-event systems that are subject to evolution during their life-cycle. The design and management of dynamic workflows need adequate formal models and support tools to soundly handle possible changes occurring during workflow operation. The well-established workflow models, among which Petri nets play a central role, lack features for representing evolution. We propose a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template. The problem is addressed efficiently but rather empirically in a workflow management system. Our approach is formal, may be generalized, and is based on the preservation of classical Petri net structural properties, which permit an efficient characterization of workflow soundness.

[18]
Mario Pukall, Christian Kästner, Sebastian Götz, Walter Cazzola, and Gunter Saake, “Flexible Runtime Program Adaptations in Java – A Comparison”, Technical Report 14, Fakultät für Informatik, Universität Magdeburg, Magdeburg, Germany, November 2009. [ www: ]

[19]
Walter Cazzola, “Cogito, Ergo Muto!”, in Proceedings of the Workshop on Self-Organizing Architecture (SOAR'09), Danny Weyns, Sam Malek, Rogério de Lemos, and Jesper Andersson, Eds., Cambridge, United Kingdom, September 2009, pp. 1–7, Invited Paper. [ .pdf ]
No system escapes the need to evolve, whether to fix bugs, to be reconfigured or to add new features. Evolving becomes particularly problematic when the system to evolve cannot be stopped.

Traditionally, the evolution of a continuously running system is tackled by calculating all the possible evolutions in advance and hardwiring them into the application itself. This approach gives rise to the code pollution phenomenon, where the application code is polluted by code that may never be applied. The approach has the following defects: i) code bloat, ii) the impossibility of forecasting every possible change, and iii) code that becomes hard to read and maintain.

Computational reflection by definition allows an application to introspect and intercede on its own structure and behavior, therefore endowing a reflective application with (potentially) the ability to self-evolve. Furthermore, dealing with evolution as a nonfunctional concern, i.e., one that can be separated from the current implementation of the application, can limit the code pollution phenomenon.

Bringing the design information (model and/or architecture) to run-time provides the application with basic knowledge about itself to reflect on when a change is necessary and on how to deploy it. The availability of such knowledge at run-time frees the designer from forecasting and coding all the possible evolutions, in favor of a sort of evolutionary engine that, to some extent, can evaluate which countermove to apply.

In this contribution, the author explores the role of reflection and of design information in the development of self-evolving applications. Moreover, the author sketches a basic reflective architecture to support dynamic self-evolution and analyzes the adherence of the existing frameworks to such an architecture.

[20]
Mario Pukall, Norbert Siegmund, and Walter Cazzola, “Feature-Oriented Runtime Adaptation”, in Proceedings of ESEC/FSE Workshop on Software INTegration and Evolution @ Runtime (SINTER'09), Amsterdam, The Netherlands, August 2009, pp. 33–36, ACM. [ http ]
Creating tailor-made programs based on the concept of software product lines (SPLs) gains more and more momentum. This is because SPLs significantly decrease development costs and time to market while increasing product quality. Highly available programs especially benefit from the quality improvements brought by an SPL. However, after a program variant is created from an SPL and started, the program is completely decoupled from its SPL. Changes within the SPL, i.e., to the source code of its features, do not affect the running program. To apply the changes, the program has to be stopped, recreated, and restarted. This causes at least short periods of program unavailability, which is not acceptable for highly available programs. Therefore, we present a novel approach based on class replacements and Java HotSwap that allows features to be applied to running programs.
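The class-replacement idea behind such approaches can be illustrated at a very small scale. The sketch below is a generic illustration with invented names, not the paper's actual mechanism (which relies on bytecode replacement via Java HotSwap): behavior lives behind an interface whose implementation is swapped while the program keeps running.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative feature interface; real DSU tools replace the bytecode of
// loaded classes rather than using a hand-written indirection like this.
interface GameLogic {
    String step();
}

public class HotSwapSketch {
    // The currently active "version" of the feature.
    private static volatile GameLogic logic = () -> "v1";

    // Applying an update means installing a new implementation at runtime.
    static void update(GameLogic newLogic) { logic = newLogic; }

    static List<String> run() {
        List<String> trace = new ArrayList<>();
        trace.add(logic.step());   // behavior before the update
        update(() -> "v2");        // "dynamic update" while the program runs
        trace.add(logic.step());   // behavior after the update, no restart
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints [v1, v2]
    }
}
```

The observable effect is the same as with class replacement: earlier calls see the old behavior, later calls see the new one, and the program never stops.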

[21]
Lorenzo Capra and Walter Cazzola, “Evolving System's Modeling and Simulation through Reflective Petri Nets”, in Proceedings of the 4th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE'09), Stefan Jablonski and Leszek Maciaszek, Eds., Milan, Italy, May 2009, INSTICC, pp. 59–70, INSTICC Press. [ .pdf ]
The design of dynamic discrete-event systems calls for adequate modeling formalisms and tools to manage possible changes occurring during the system's lifecycle. A common approach is to pollute the design with details that do not regard the current system behavior but rather its evolution. That hampers analysis, reuse and maintenance in general. A reflective Petri net model (based on classical Petri nets) was recently proposed to support the design of dynamic discrete-event systems, and was applied to dynamic workflow management. The underlying idea is that keeping functional aspects separate from evolutionary ones, and applying the latter to the (current) system only when necessary, results in a simple formal model on which the ability to verify properties typical of Petri nets is preserved. In this paper we provide Reflective Petri nets with a (labeled) state-transition graph semantics.

[22]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Getting Farther on Software Evolution via AOP and Reflection”, in ECOOP'08 Workshop Reader, Patrick Eugster, Ed., Lecture Notes in Computer Science 5475, pp. 63–69. Springer-Verlag, March 2009. [ .pdf ]
[23]
Walter Cazzola, Shigeru Chiba, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 5th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'08), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, December 2008. [ .pdf ]
[24]
Walter Cazzola and Sonia Pini, “Jigsaw: Information System Composition through a Self-Adaptable Interface”, Technical Report RT 26-08, Department of Informatics and Communication, University of Milan, Milan, Italy, April 2008. [ www: ]

[25]
Lorenzo Capra and Walter Cazzola, “Evolutionary Design through Reflective Petri Nets: an Application to Workflow”, in Proceedings of the 26th IASTED International Conference on Software Engineering (SE'08), Innsbruck, Austria, February 2008, pp. 200–207, ACTA Press. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes during workflow operation. A common approach is to pollute the workflow design with details that do not regard the current behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general. We propose and discuss the adoption of a recent Petri net-based reflective model as a support to dynamic workflow design. Keeping functional aspects separate from evolution results in a dynamic workflow model that combines flexibility with the ability to formally verify basic workflow properties. A structural on-the-fly characterization of sound dynamic workflows is adopted, based on the preservation of Petri net free-choiceness. An application is presented to a localized open problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template.

[26]
Manuel Oriol, Walter Cazzola, Shigeru Chiba, Gunter Saake, Yvonne Coady, Stéphane Ducasse, and Günter Kniesel, “Enabling Software Evolution via AOP and Reflection”, in ECOOP'07 Workshop Reader, Michael Cebulla, Ed., Lecture Notes in Computer Science 4906, pp. 91–98. Springer-Verlag, February 2008. [ .pdf ]
[27]
Lorenzo Capra and Walter Cazzola, “Self-Evolving Petri Nets”, Journal of Universal Computer Science, vol. 13, no. 13, pp. 2002–2034, December 2007. [ .pdf ]
Nowadays, software evolution is a very hot topic. It is particularly complex when it concerns critical and non-stopping systems. Usually, these situations are tackled by hard-coding all the foreseeable evolutions in the application design and code.

Besides the obvious difficulties in pursuing this approach, the application code and design also get polluted with details that do not concern the current system functionality, and that hamper design analysis, code reuse and application maintenance in general. Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue.

The goal of this work is to propose a PN-based reflective framework for modeling systems able to evolve, keeping functional aspects separate from evolutionary ones and applying evolution to the model only when necessary. Such an approach keeps the system's model as simple as possible, preserving (and exploiting) the ability to formally verify the system properties typical of PN, while granting adaptability.

[28]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2007. [ .pdf ]
[29]
Walter Cazzola, Shigeru Chiba, and Gunter Saake (Eds), “Special Issue on Software Evolution”, Transactions on Aspect-Oriented Software Development, vol. 1, no. 4, October 2007, Printed as Lecture Notes in Computer Science 4640.
[30]
Lorenzo Capra and Walter Cazzola, “A Reflective PN-based Approach to Dynamic Workflow Change”, in Proceedings of the 9th International Symposium in Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'07), Timisoara, Romania, September 2007, IEEE, pp. 533–540. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes occurring during workflow operation. A common approach is to pollute the design with details that do not regard the current workflow behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general.

We propose and discuss the adoption of a recent reflective model (based on classical Petri nets) as a support to dynamic workflow design, by addressing a localized problem: how to determine which tasks should be redone and which should not when transferring a workflow instance from an old to a new template. The underlying idea is that keeping functional aspects separate from evolutionary ones, and applying evolution to the (current) workflow template only when necessary, results in a simple reference model on which the ability to formally verify typical workflow properties is preserved, thus favoring dependable adaptability.

[31]
Walter Cazzola and Sonia Pini, “On the Footprints of Join Points: The Blueprint Approach”, Journal of Object Technology, vol. 6, no. 7, pp. 167–192, August 2007. [ .pdf ]
Aspect-oriented techniques are widely used to better modularize object-oriented programs by introducing crosscutting concerns in a safe and non-invasive way, i.e., aspect-oriented mechanisms better address the modularization of functionality that orthogonally crosscuts the implementation of the application.

Unfortunately, as noted by several researchers, most of the current aspect-oriented approaches are too coupled with the application code, and this fact hinders the separability of concerns and consequently their reusability, since each aspect is strictly tailored to the base application. Moreover, the join points (i.e., the locations affected by a crosscutting concern) are actually defined at the operation level. This implies that the possible set of join points includes every operation (e.g., method invocations) that the system performs. However, in many contexts we wish to define aspects that are expected to work at the statement level, i.e., by considering as a join point every point between two generic statements (i.e., lines of code).

In this paper, we present our approach, called Blueprint, to overcome the above-mentioned limitations of the current aspect-oriented approaches. Blueprint consists of a new aspect-oriented programming language based on modeling the join point selection mechanism at a high level of abstraction to decouple aspects from the application code. In this regard, we adopt a high-level pattern-based join point model, where join points are described by join point blueprints, i.e., behavioral patterns describing where the join points should be found.

[32]
Walter Cazzola and Sonia Pini, “AOP vs Software Evolution: a Score in Favor of the Blueprint”, in Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Berlin, Germany, July 2007, pp. 81–91. [ .pdf ]
All software systems are subject to evolution, independently of the development technique. Aspect-oriented software, in addition to separating the different concerns during software development, must not be fragile against software evolution. Otherwise, the benefit of disentangling the code will be buried by the extra complication of maintaining it.

To achieve this goal, aspect-oriented languages/tools must evolve: they have to be less coupled to the base program. In recent years a few attempts have been proposed; the Blueprint is our proposal based on behavioral patterns.

In this paper we test the robustness of the Blueprint aspect-oriented language against software evolution.

[33]
Walter Cazzola, Sonia Pini, Ahmed Ghoneim, and Gunter Saake, “Co-Evolving Application Code and Design Models by Exploiting Meta-Data”, in Proceedings of the 12th Annual ACM Symposium on Applied Computing (SAC'07), Seoul, South Korea, March 2007, pp. 1275–1279, ACM Press. [ http ]
Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information.

Refactoring, evolution, and in general every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in the subsequent maintenance steps. Code evolution should involve not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is a hard job to carry out automatically, since there is an evident and notorious gap between these two representations.

We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. Then, these meta-data are retrieved through reflection and used to automatically and coherently update the application design models.
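As a small sketch of how such meta-data could look, the hypothetical annotation below marks a renamed method and is read back through the standard Java reflection API. All names (`@Refactored`, `kind`, `oldName`) are invented for illustration; the paper defines its own set of predefined meta-data.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical meta-data recording a refactoring performed on the code.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Refactored {
    String kind();     // e.g. "rename"
    String oldName();  // the member's name before the refactoring
}

public class CoEvolutionSketch {

    @Refactored(kind = "rename", oldName = "computeTotal")
    int computeGrandTotal() { return 42; }

    // A model updater would retrieve the meta-data via reflection and
    // patch the corresponding design diagrams accordingly.
    static String describe() {
        try {
            Method m = CoEvolutionSketch.class.getDeclaredMethod("computeGrandTotal");
            Refactored r = m.getAnnotation(Refactored.class);
            return r.kind() + ": " + r.oldName() + " -> " + m.getName();
        } catch (NoSuchMethodException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(describe()); // prints rename: computeTotal -> computeGrandTotal
    }
}
```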

[34]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2006. [ .pdf ]
[35]
Walter Cazzola and Sonia Pini, “Join Point Patterns: a High-Level Join Point Selection Mechanism”, in MoDELS'06 Satellite Events Proceedings, Thomas Kühne, Ed., Genova, Italy, October 2006, Lecture Notes in Computer Science 4364, pp. 17–26, Springer, Best Paper Award at the 9th Aspect-Oriented Modeling Workshop. [ .pdf ]
Aspect-Oriented Programming is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact hinders the concerns separability and reusability since each aspect is strictly tailored on the base application.

This work proposes a possible solution to this problem based on modeling the join point selection mechanism at a higher level of abstraction. In our view, the aspect designer does not need to know the inner details of the application, such as a specific implementation or the naming conventions used; rather, he/she exclusively needs to know the application behavior to apply his/her aspects.

In the paper, we present a novel join point model with a join point selection mechanism based on a high-level program representation. This high-level view of the application decouples the aspects definition from the base program structure and syntax. The separation between aspects and base program will render the aspects more reusable and independent of the manipulated application.

[36]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Viewpoint for Maintaining UML Models against Application Changes”, in Proceedings of International Conference on Software and Data Technologies (ICSOFT 2006), Joaquim Filipe, Markus Helfert, and Boris Shishkov, Eds., Setúbal, Portugal, September 2006, pp. 263–268, Springer. [ .pdf ]
The urgency that characterizes many requests for evolution forces system administrators/developers to directly adapt the system without passing through the adaptation of its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that ignores the system's changes. Software developers spend a lot of time evolving the system and then updating the design information according to the system's evolution. In this respect, we present an approach to automatically keep the design information (diagrams, in our case) updated when the system evolves. The diagrams are bound to the application, and all changes to it are reflected in the diagrams as well.

[37]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “Design-Based Pointcuts Robustness Against Software Evolution”, in Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Nantes, France, July 2006, pp. 35–45. [ .pdf ]
Aspect-Oriented Programming (AOP) is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact harms the evolvability of the program, hinders the selection of concerns and reduces aspect reusability. Overcoming this problem is a hot topic.

This work proposes a possible solution to the limits of the current aspect-oriented techniques, based on modeling the join point selection mechanism at a higher level of abstraction to decouple the base program from the aspects.

In this paper, we will present by example a novel join point model based on design models (e.g., expressed through UML diagrams). Design models provide a high-level view of the application structure and behavior, decoupled from the base program. A design-oriented join point model renders aspect definitions more robust against base program evolution, reusable, and independent of the base program.

[38]
Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, “AOSD and Reflection: Benefits and Drawbacks to Software Evolution”, in ECOOP'06 Workshop Reader, Charles Consel and Mario Südholt, Eds., Lecture Notes in Computer Science 4379, pp. 40–52. Springer-Verlag, July 2006. [ .pdf ]
[39]
Lorenzo Capra and Walter Cazzola, “A Petri-Net Based Reflective Framework for the Evolution of Dynamic Systems”, Electronic Notes on Theoretical Computer Science, vol. 159, pp. 41–59, 2006. [ .pdf ]
Nowadays, software evolution is a very hot topic. Many applications need to be updated or extended with new characteristics during their lifecycle. Software evolution is characterized by its huge cost and slow speed of implementation. Often, software evolution implies a redesign of the whole system, the development of new features and their integration in the existing and/or running systems (this last step often implies a complete rebuild of the system). A good evolution is carried out by evolving the system design information and then propagating the evolution to the implementation.

Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue. Often a system modeled through Petri nets has to be updated, and consequently the model should be updated as well. Some kinds of evolution are foreseeable and could be hard-coded in the code or in the model, respectively.

Embedding evolutionary steps in the model or in the code, however, requires early and full knowledge of the evolution. The model itself would have to be augmented with details that do not regard the current system functionality, and that jeopardize, or make very hard, the analysis and verification of system properties.

In this work, we propose a PN-based reflective framework for modeling systems able to evolve, keeping functional aspects separate from evolutionary ones and applying evolution to the model only when necessary. Such an approach keeps the model as simple as possible, preserving (and exploiting) the ability to formally verify the system properties typical of PN, while granting model adaptability.

[40]
Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Preprint No. 9/2005 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, November 2005. [ .pdf ]
[41]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “The Role of Design Information in Software Evolution”, in Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Glasgow, Scotland, July 2005, pp. 59–70. [ .pdf ]
Software modeling has received a lot of attention in the last decade and is now an important support for the design process.

Indeed, the design process is very important to the usability and understandability of the system: for example, functional requirements present a complete description of how the system will function from the user's perspective, while non-functional requirements dictate properties and impose constraints on the project or system.

The design models and the implementation code must be strictly connected, i.e., we must have correlation and consistency between these two views, and this correlation must exist throughout the whole software lifecycle. Often the early stages of development, the specifications and the design of the system, are ignored once the code has been developed. This practice causes a lot of problems, in particular when the system must evolve. Nowadays, maintaining software is a difficult task, since there is a high degree of coupling between the software itself and its environment. Often, changes in the environment cause changes in the software; in other words, the system must evolve to follow the evolution of its environment.

Typically, a design is created initially, but as the code gets written and modified, the design is not updated to reflect such changes.

This paper describes and discusses how the design information can be used to drive the software evolution and consequently to maintain consistency among design and code.

[42]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “AOP for Software Evolution: A Design Oriented Approach”, in Proceedings of the 10th Annual ACM Symposium on Applied Computing (SAC'05), Santa Fe, New Mexico, USA, March 2005, pp. 1356–1360, ACM Press. [ http ]
In this paper, we have briefly explored the aspect-oriented approach as a tool for supporting software evolution. The aim of this analysis is to highlight the potential and the limits of aspect-oriented development for software evolution. From our analysis it follows that, in general (and in particular for AspectJ), the approach to the definition of join points, pointcuts and advice is not intuitive, abstract and expressive enough to support all the requirements for carrying out software evolution. We have also examined how a mechanism for specifying pointcuts and advice based on design information, in particular on the use of UML diagrams, can better support software evolution through aspect-oriented programming. Our analysis and proposal are presented through an example.

[43]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, “Software Evolution: a Trip through Reflective, Aspect, and Meta-Data Oriented Techniques”, in ECOOP'04 Workshop Reader, Jacques Malenfant and Bjarte M. Østvold, Eds., Lecture Notes in Computer Science 3344, pp. 116–130. Springer-Verlag, December 2004. [ .pdf ]
[44]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Software Evolution through Dynamic Adaptation of Its OO Design”, in Objects, Agents and Features: Structuring Mechanisms for Contemporary Software, Hans-Dieter Ehrich, John-Jules Meyer, and Mark D. Ryan, Eds., Lecture Notes in Computer Science 2975, pp. 69–84. Springer-Verlag, July 2004. [ .pdf ]
In this paper we present a proposal for safely evolving a software system against run-time changes. This proposal is based on a reflective architecture which provides objects with the ability to dynamically change their behavior by using their design information. The meta-level system of the proposed architecture supervises the evolution of the software system to be adapted, which runs as the base-level system of the reflective architecture. The meta-level system is composed of cooperating components; these components carry out the evolution against sudden and unexpected environmental changes on a reification of the design information (e.g., object models, scenarios and statecharts) of the system to be adapted. The evolution takes place in two steps: first a meta-object, called the evolutionary meta-object, plans a possible evolution against the detected event; then another meta-object, called the consistency checker meta-object, validates the feasibility of the proposed plan before really evolving the system. Meta-objects use the system design information to govern the evolution of the base-level system. Moreover, we show our architecture at work on a case study.

[45]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “System Evolution through Design Information Evolution: a Case Study”, in Proceedings of the 13th International Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE 2004), Walter Dosch and Narayan Debnath, Eds., Nice, France, July 2004, pp. 145–150, ISCA. [ .pdf ]
This paper describes how design information, in our case specifications, can be used to evolve a software system and validate the consistency of such an evolution. This work complements our previous work on reflective architectures for software evolution, describing the role played by meta-data in the evolution of software systems. The whole paper focuses on a case study; we show how the urban traffic control system (UTCS), or part of it, must evolve when unscheduled road maintenance, a car crash or a traffic jam blocks normal vehicular flow in a specific road. The UTCS case study perfectly shows how requirements can dynamically change and how the design of the system should adapt to such changes. Both system consistency and adaptation are governed by rules based on meta-data representing the system design information. As we show by an example, such rules represent the core of our evolutionary approach, driving the evolutionary and consistency checker meta-objects and interfacing the meta-level system (the evolutionary system) with the system that has to be adapted.

[46]
Walter Cazzola, Shigeru Chiba, and Gunter Saake, Eds., Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Research Report C-196 of the Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology. Preprint No. 10/2004 of Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, July 2004. [ .pdf ]
[47]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “Evolving Pointcut Definition to Get Software Evolution”, in Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Oslo, Norway, June 2004, pp. 83–88. [ .pdf ]
In this paper, we have briefly analyzed the aspect-oriented approach with respect to software evolution. The aim of this analysis is to highlight the aspect-oriented potential for software evolution and its limits. From our analysis, we can state that the actual pointcut definition mechanisms are not expressive enough to pick out from design information where software evolution should be applied. We also give some suggestions about how to improve the pointcut definition mechanism.

[48]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “RAMSES: a Reflective Middleware for Software Evolution”, in Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Oslo, Norway, June 2004, pp. 21–26. [ .pdf ]
Software systems today need to dynamically self-adapt to dynamic requirement changes. In this paper we describe a reflective middleware whose aim consists of consistently evolving software systems against runtime changes. This middleware provides the ability to change both the structure and the behavior of the base-level system at run-time by using its design information. The meta-level is composed of cooperating objects and has been specified by using a design pattern language. The base objects are controlled by meta-objects that drive their evolution. The essence of RAMSES is the ability to extract the design data from the base application, and to constrain the dynamic evolution to stable and consistent systems.

[49]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Reflective Analysis and Design for Adapting Object Run-time Behavior”, in Proceedings of the 8th International Conference on Object-Oriented Information Systems (OOIS'02), Zohra Bellahsène, Dilip Patel, and Colette Rolland, Eds., Montpellier, France, September 2002, Lecture Notes in Computer Science 2425, pp. 242–254, Springer-Verlag. [ .pdf ]
Today, complex information systems need a simple way of changing object behavior according to changes that occur in the running environment. We present a reflective architecture which provides the ability to change object behavior at run-time by using design-time information. By integrating reflection with design patterns we get a flexible and easily adaptable architecture. A reflective approach that describes object models, scenarios and statecharts helps to dynamically adapt the software system to environmental changes. The object model, system scenarios and much other design information are reified by special meta-objects, named evolutionary meta-objects. Evolutionary meta-objects deal with two types of run-time evolution. Structural evolution is carried out through the causal connection between evolutionary meta-objects and their referents, by changing the structure of these referents, adding or removing objects or relations. Behavioral evolution allows the system to dynamically adapt its behavior to environment changes by itself. Evolutionary meta-objects react to environment changes by adapting the information they have reified and steering the system evolution. They provide a natural liaison between design information and the system based on such information. This paper describes how this liaison can be built and how it can be used for adapting a running system to environment changes.

[50]
Walter Cazzola, James O. Coplien, Ahmed Ghoneim, and Gunter Saake, “Framework Patterns for the Evolution of Nonstoppable Software Systems”, in Proceedings of the 1st Nordic Conference on Pattern Languages of Programs (VikingPLoP'02), Pavel Hruby and Kristian Elof Søresen, Eds., Højstrupgard, Helsingør, Denmark, September 2002, pp. 35–54, Microsoft Business Solutions. [ .pdf ]
The fragment of pattern language proposed in this paper shows how to adapt a nonstoppable software system to reflect changes in its running environment. These framework patterns rely on well-known techniques that allow programs to dynamically analyze and modify their own structure, commonly called computational reflection. Our patterns go together with common reflective software architectures.

[1]
Walter Cazzola and Albert Shaqiri, “Open Programming Language Interpreters”, The Art, Science, and Engineering of Programming Journal, vol. 1, no. 2, pp. 5:1–5:34, April 2017. [ DOI | NEW! | http ]
Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them. Inquiry: We address the problem of dynamic interpreter adaptation to tailor the interpreter's behavior to the task to be solved and to introduce new features to fulfill unforeseen requirements. Many languages provide a MOP that to some degree supports reflection. However, MOPs are typically language-specific, their reflective functionality is often restricted, and the adaptation and application logic are often mixed, which hinders the understanding and maintenance of the source code. Our system overcomes these limitations. Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated in the Neverlang framework. The system exposes the structure, behavior and runtime state of any Neverlang-based interpreter with the ability to modify it. Knowledge: Our system provides complete control over the interpreter's structure, behavior and runtime state. The approach is applicable to every Neverlang-based interpreter. Adaptation code can potentially be reused across different language implementations. Grounding: Given the prototype implementation, we focused on feasibility evaluation. The paper shows that our approach effectively addresses problems commonly found in the research literature. We have a demonstrative video and examples that illustrate our approach on dynamic software adaptation, aspect-oriented programming, debugging and context-aware interpreters. Importance: To our knowledge, this paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support for free to any Neverlang-based interpreter. We are not aware of any prior application of open implementations to programming language interpreters in the sense defined in this paper. Rather than substituting for other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.
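The idea of an "open" interpreter, whose structure and behavior are first-class and adaptable, can be sketched as follows. This is a language-agnostic toy, not Neverlang's actual API; the `OpenInterpreter` class and its `actions` table are invented for illustration.

```python
# Sketch of an open interpreter: its semantic actions live in a mutable,
# inspectable table (a MOP-like surface), so adaptation code can rewrite
# them at run-time without touching the interpreter's source.

class OpenInterpreter:
    def __init__(self):
        # The action table is first-class: structure and behavior are exposed.
        self.actions = {
            "add": lambda a, b: a + b,
            "mul": lambda a, b: a * b,
        }

    def eval(self, expr):
        # expr is a nested tuple, e.g. ("add", 1, ("mul", 2, 3)); atoms
        # evaluate to themselves.
        if not isinstance(expr, tuple):
            return expr
        op, *args = expr
        return self.actions[op](*(self.eval(a) for a in args))

interp = OpenInterpreter()
assert interp.eval(("add", 1, ("mul", 2, 3))) == 7

# Adaptation: wrap an action with a tracing aspect a posteriori, the kind of
# aspect-oriented use case mentioned in the abstract.
trace = []
base_add = interp.actions["add"]
def traced_add(a, b):
    trace.append(("add", a, b))
    return base_add(a, b)
interp.actions["add"] = traced_add

assert interp.eval(("add", 1, 2)) == 3
assert trace == [("add", 1, 2)]
```

Because the adaptation logic lives outside the interpreter's definition, application and adaptation code stay separated, which is the separation the paper argues most language-specific MOPs lack.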

[2]
Walter Cazzola, Ruzanna Chitchyan, Awais Rashid, and Albert Shaqiri, “μ-DSU: A Micro-Language Based Approach to Dynamic Software Updating”, Computer Languages, Systems & Structures, 2017. [ DOI | NEW! | www: ]

[3]
Walter Cazzola and Albert Shaqiri, “Context-Aware Software Variability through Adaptable Interpreters”, IEEE Software, 2017, Special Issue on Context Variability Modeling. [ NEW! | www: ]

[4]
Walter Cazzola and Diego Mathias Olivares, “Gradually Learning Programming Supported by a Growable Programming Language”, IEEE Transactions on Emerging Topics in Computing, vol. 4, no. 3, pp. 404–415, September 2016, Special Issue on Emerging Trends in Education. [ DOI | .pdf ]
Learning programming is a difficult task. The learning process is particularly disorienting for those approaching programming for the first time. Students are exposed to several new concepts (control flow, variables, etc., but also coding, compiling, etc.) and new ways of thinking (algorithms). Teachers try to expose students to the new concepts gradually, presenting them one by one, but the tools at the students' disposal do not help: they provide support, suggestions and documentation for the full programming language of choice, hampering the teacher's efforts. On the other hand, students need to learn real languages, not didactic languages. In this work we propose an approach to gradually teaching programming, supported by a programming language that grows—together with its implementation—along with the number of concepts presented to the students. The proposed approach can be applied to the teaching of any programming language; some experiments with JavaScript are reported.
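The "growable language" idea in the abstract above can be illustrated with a toy interpreter that only accepts constructs the teacher has already introduced. The construct names and the `teach`/`run` API are hypothetical, invented for this sketch.

```python
# Sketch of a growable language: the interpreter rejects any construct that
# has not been taught yet, and grows as the course introduces new concepts.

class GrowableInterpreter:
    def __init__(self):
        self.enabled = set()  # constructs introduced so far

    def teach(self, construct):
        self.enabled.add(construct)

    def run(self, program):
        env = {}
        for stmt in program:
            kind = stmt[0]
            if kind not in self.enabled:
                raise SyntaxError(f"'{kind}' has not been taught yet")
            if kind == "assign":
                _, name, value = stmt
                env[name] = value
        return env

interp = GrowableInterpreter()
interp.teach("assign")
assert interp.run([("assign", "x", 3)]) == {"x": 3}

# A construct the class has not reached yet is rejected outright, so the
# tooling cannot run ahead of the teacher.
try:
    interp.run([("while", None)])
    raised = False
except SyntaxError:
    raised = True
assert raised
```

The point mirrored here is that error messages and tool support stay within the students' current vocabulary, instead of referring to the full language.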

[5]
Thomas Kühn and Walter Cazzola, “Apples and Oranges: Comparing Top-Down and Bottom-Up Language Product Lines”, in Proceedings of the 20th International Software Product Line Conference (SPLC'16), Rick Rabiser and Bing Xie, Eds., Beijing, China, September 2016, pp. 50–59, ACM. [ http ]
Over the past decade language development tools have been significantly improved. This has permitted both practitioners and researchers to design a wide variety of domain-specific languages (DSLs) and extensions to programming languages. Moreover, multiple researchers have combined different language variants to form families of DSLs as well as of programming languages. Unfortunately, current language development tools cannot directly support the development of these families. To overcome this limitation, researchers have recently applied ideas from software product lines (SPLs) to create product lines of compilers/interpreters for language families, denoted language product lines (LPLs). As with SPLs, however, these product lines can be created using either a top-down or a bottom-up approach. Yet, there exists no case study comparing the suitability of both approaches for the development of LPLs, making it unclear how language development tools should evolve. Accordingly, this paper compares both feature modeling approaches by applying them to the development of an LPL for the family of role-based programming languages and by discussing their applicability, feasibility and overall suitability for the development of LPLs. Although one might argue that this compares apples and oranges, we believe that this case still provides crucial insights into the requirements, assumptions, and challenges of each approach.

[6]
Walter Cazzola and Edoardo Vacchi, “Language Components for Modular DSLs using Traits”, Computer Languages, Systems & Structures, vol. 45, pp. 16–34, April 2016. [ DOI | .pdf ]
Recent advances in tooling and modern programming languages have progressively brought back the practice of developing domain-specific languages as a means to improve software development. Consequently, the problem of making composition between languages easier by emphasizing code reuse and componentized programming is a topic of increasing interest in research. In fact, it is not uncommon for different languages to share common features, and, because in the same project different DSLs may coexist to model concepts from different problem areas, it is interesting to study ways to develop modular, extensible languages. Earlier work has shown that traits can be used to modularize the semantics of a language implementation; a lot of attention is often spent on embedded DSLs; even when external DSLs are discussed, the main focus is on modularizing the semantics. In this paper we will show a complete trait-based approach to modularize not only the semantics but also the syntax of external DSLs, thereby simplifying extension and therefore evolution of a language implementation. We show the benefits of implementing these techniques using the Scala programming language.

[7]
Walter Cazzola and Albert Shaqiri, “Modularity and Optimization in Synergy”, in Proceedings of the 15th International Conference on Modularity (Modularity'16), Don Batory, Ed., Málaga, Spain, March 2016, pp. 70–81, ACM. [ http ]
As with traditional software, the complexity of a programming language implementation is tackled through modularization, which favors separation of concerns, independent development, maintainability and reuse. However, modularity interferes with language optimization, since the latter requires context information that crosses single-module boundaries and involves other modules. This makes it hard to provide the optimization for a single language concept in a form that is reusable together with the concept itself; therefore, optimization is in general postponed until all language concepts are available. We defined a model for modular language development with a dispatcher over multiple semantic actions, based on condition guards that are evaluated at runtime. Optimizations can be implemented as context-dependent extensions applied a posteriori to the composed language interpreter, without modifying any single component implementation. This makes the defined optimization effective within the language concept boundaries, according to the context provided by other language concepts when available, and eases its reuse with the language concept's implementation independently of its usage context. The presented model is integrated into the Neverlang development framework and is demonstrated on the optimization of a JavaScript interpreter written in Neverlang. We also discuss the applicability of our model to other frameworks for modular language development.
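The guard-based dispatcher over multiple semantic actions described above can be sketched concisely. This is an invented illustration, not the paper's implementation: `GuardedDispatcher` and the context-dictionary convention are assumptions made for the example.

```python
# Sketch of a semantic-action dispatcher with run-time condition guards: a
# component ships a base action, and an optimization can be registered a
# posteriori without modifying the component.

class GuardedDispatcher:
    def __init__(self):
        self.alternatives = []  # (guard, action) pairs, most recent first

    def add(self, guard, action):
        self.alternatives.insert(0, (guard, action))

    def dispatch(self, ctx, *args):
        # The first alternative whose guard holds in this context wins.
        for guard, action in self.alternatives:
            if guard(ctx):
                return action(*args)
        raise RuntimeError("no applicable semantic action")

power = GuardedDispatcher()
# Base, always-applicable implementation shipped with the component.
power.add(lambda ctx: True, lambda base, exp: base ** exp)
assert power.dispatch({}, 2, 10) == 1024

# Context-dependent optimization added later: when other components report a
# constant exponent of 2, a cheaper multiplication is used instead.
power.add(lambda ctx: ctx.get("const_exp") == 2, lambda base, exp: base * base)
assert power.dispatch({"const_exp": 2}, 7, 2) == 49
assert power.dispatch({}, 3, 3) == 27
```

The base component never changes: the optimized action only fires when the composed context supplies the information it needs, which is the "optimization within concept boundaries" idea of the abstract.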

[8]
Walter Cazzola and Albert Shaqiri, “Dynamic Software Evolution through Interpreter Adaptation”, in Proceedings of the 15th International Conference on Modularity (Modularity'16), Málaga, Spain, March 2016, pp. 16–19, ACM. [ http ]
Significant research has been dedicated to dynamic software evolution and adaptation, leading to different approaches that can mainly be categorized as either architecture-based or language-based. However, little or no attention has been paid to dynamic evolution achieved through language interpreter adaptation. In this paper we present a model for such adaptations and illustrate their applicability and usefulness with practical examples developed in Neverlang, a framework for modular language development with features for dynamic adaptation of language interpreters.

[9]
Rosa Gini, Martijn Schuemie, Jeffrey Brown, Patrick Ryan, Edoardo Vacchi, Massimo Coppola, Walter Cazzola, Preciosa Coloma, Roberto Berni, Gayo Diallo, José Luis Oliveira, Paul Avillach, Gianluca Trifirò, Peter Rijnbeek, Mariadonata Bellentani, Johan van Der Lei, Niek Klazinga, and Miriam Sturkenboom, “Data Extraction and Management in Networks of Observational Health Care Databases for Scientific Research: A Comparison among EU-ADR, OMOP, Mini-Sentinel and MATRICE Strategies”, Generating Evidence & Methods to improve patient outcomes (eGEMs), vol. 4, no. 1, pp. 1189–1212, February 2016. [ DOI | http ]
Introduction. To achieve fast and transparent production of empirical evidence in healthcare research, we see increased use of existing observational data. Multiple databases are often used to increase power, to assess rare exposures or outcomes, or to study diverse populations. For privacy and sociological reasons, original data on individual subjects cannot be shared, requiring a distributed network approach in which data processing is performed prior to data sharing.

Case Descriptions and Variation Among Sites. We created a conceptual framework distinguishing three steps in local data processing: (1) data reorganization into a data structure common across the network; (2) derivation of study variables not present in original data; (3) application of study design to transform longitudinal data into aggregated datasets for statistical analysis. We applied this framework to four case studies to identify similarities and differences in the United States and Europe: EU-ADR, OMOP, Mini-Sentinel and MATRICE.

Findings. National networks (OMOP, Mini-Sentinel, MATRICE) all adopted shared procedures for local data reorganization. The multinational EU-ADR network needed locally defined procedures to reorganize its heterogeneous data into a common structure. Derivation of new data elements was centrally defined in all networks, but the procedure was not shared in EU-ADR. Application of study design was automated and shared in all the case studies. Computer procedures were implemented in different programming languages, including SAS, R, SQL, Java and C++.

Conclusion. Using our conceptual framework we identified several areas that would benefit from research to identify optimal standards for production of empirical knowledge from existing databases.

[10]
Edoardo Vacchi and Walter Cazzola, “Neverlang: A Framework for Feature-Oriented Language Development”, Computer Languages, Systems & Structures, vol. 43, no. 3, pp. 1–40, October 2015. [ DOI | .pdf ]
Reuse in programming language development is an open research problem. Many authors have proposed frameworks for modular language development. These frameworks focus on maximizing code reuse, providing primitives for componentizing language implementations. There is also an open debate on combining feature-orientation with modular language development. Feature-oriented programming is a vision of computer programming in which features can be implemented separately, and then combined to build a variety of software products. However, even though feature-orientation and modular programming are strongly connected, modular language development frameworks are not usually meant primarily for feature-oriented language definition. In this paper we present a model of language development that puts feature implementation at the center, and describe its implementation in the Neverlang framework. The model has been evaluated through several language implementations: in this paper, a state machine language is used as a means of comparison with other frameworks, and a JavaScript interpreter implementation is used to further illustrate the benefits that our model provides.
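The feature-centric model described above can be rendered as a toy: each "slice" bundles a bit of concrete syntax with its semantic action, and a language is just a composition of slices. The `Slice`/`compose` names and the line-oriented syntax are invented for this sketch, not Neverlang's real notation.

```python
# Toy feature-oriented composition: reusable slices pair a keyword with a
# semantic action; composing different slice sets yields different languages.

class Slice:
    def __init__(self, keyword, action):
        self.keyword, self.action = keyword, action

def compose(*slices):
    table = {s.keyword: s.action for s in slices}
    def run(program):
        env = {}
        for line in program:
            word, *args = line.split()
            table[word](env, *args)  # a keyword outside the composition fails
        return env
    return run

# Two independently developed, separately distributable features.
assign = Slice("let", lambda env, name, value: env.__setitem__(name, int(value)))
incr   = Slice("inc", lambda env, name: env.__setitem__(name, env[name] + 1))

# Two different languages built from the same pool of slices.
tiny  = compose(assign)
wider = compose(assign, incr)

assert tiny(["let x 1"]) == {"x": 1}
assert wider(["let x 1", "inc x"]) == {"x": 2}
```

Each slice is conceptually isolated, so a feature implemented once ("let") is reused unchanged across both language variants.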

[11]
Thomas Kühn, Walter Cazzola, and Diego Mathias Olivares, “Choosy and Picky: Configuration of Language Product Lines”, in Proceedings of the 19th International Software Product Line Conference (SPLC'15), Goetz Botterweck and Jules White, Eds., Nashville, TN, USA, July 2015, pp. 71–80, ACM. [ http ]
Although most programming languages naturally share several language features, they are typically implemented as monolithic products. Language features cannot be plugged into and unplugged from a language and reused in another language. Some modular approaches to language construction do exist, but composing language features requires a deep understanding of their implementation, which hampers their use. The choose-and-pick approach from software product lines provides an easy way to compose a language out of a set of language features. However, current approaches to language product lines are not sufficient to cope with the complexity and evolution of real-world programming languages. In this work, we propose a general, light-weight, bottom-up approach to automatically extract a feature model from a set of tagged language components. We applied this approach to the Neverlang language development framework and developed the AiDE tool to guide language developers towards a valid language composition. The approach has been evaluated on a decomposed version of JavaScript to highlight the benefits of such a language product line.
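A flavor of the bottom-up extraction described above: components carry tags, and a simple analysis determines which compositions are valid. The component names, the provides/requires tag vocabulary, and the validity rule are all hypothetical, not AiDE's actual model.

```python
# Sketch of deriving valid language configurations from tagged components: a
# selection is valid when every tag a component requires is provided by some
# selected component.
from itertools import combinations

components = {
    "expressions": {"provides": {"expr"}, "requires": set()},
    "statements":  {"provides": {"stmt"}, "requires": {"expr"}},
    "loops":       {"provides": {"loop"}, "requires": {"stmt", "expr"}},
}

def is_valid(selection):
    provided = set().union(*(components[c]["provides"] for c in selection))
    required = set().union(*(components[c]["requires"] for c in selection))
    return required <= provided

def valid_languages():
    names = sorted(components)
    return [set(sel)
            for r in range(1, len(names) + 1)
            for sel in combinations(names, r)
            if is_valid(sel)]

# "loops" alone is not a language, but the full selection is, and so is the
# minimal "expressions" language.
assert not is_valid({"loops"})
assert {"expressions", "statements", "loops"} in valid_languages()
assert {"expressions"} in valid_languages()
```

Enumerating the valid selections is, in miniature, the feature model the tool would guide a developer through.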

[12]
Ruzanna Chitchyan, Walter Cazzola, and Awais Rashid, “Engineering Sustainability through Language”, in Proceedings of the 37th International Conference on Software Engineering (ICSE'15), Firenze, Italy, May 2015, pp. 501–504, IEEE, Track on Software Engineering in Society. [ .pdf ]
As our understanding of and care for sustainability concerns increase, so does the demand for incorporating these concerns into software. Yet, existing programming language constructs are not well aligned with concepts of the sustainability domain. This undermines what we term technical sustainability of the software, due to (i) increased complexity in programming such concerns and (ii) continuous code changes to keep up with changes in (environmental, social, legal and other) sustainability-related requirements. In this paper we present a proof-of-concept approach showing how technical sustainability support for new and existing concerns can be provided through flexible language-level programming. We propose to incorporate sustainability-related behaviour into programs through micro-languages, enabling such behaviour to be updated and/or redefined as and when required.

[13]
Walter Cazzola and Edoardo Vacchi, “On the Incremental Growth and Shrinkage of LR Goto-Graphs”, ACTA Informatica, vol. 51, no. 7, pp. 419–447, October 2014. [ DOI | http ]
The LR(0) goto-graph is the basis for the construction of parsers for several interesting grammar classes such as LALR and GLR. Early work has shown that even when a grammar is an extension of another, the goto-graph of the one is not necessarily a subgraph of the goto-graph of the other. Some authors have presented algorithms to grow and shrink these graphs incrementally, but a formal proof of the existence of a particular relation between a given goto-graph and its grown or shrunk counterpart still seems to be missing from the literature. In this paper we use the recursive projection of paths of limited length to prove the existence of one such relation when the sets of productions are in a subset relation. We also use this relation to present two algorithms (Grow and Shrink) that transform the goto-graph of a given grammar into the goto-graph of an extension or a restriction of that grammar. We implemented these algorithms in a dynamically updatable LALR parser generator called DEXTER (the Dynamically EXTEnsible Recognizer) that we are now shipping with our current implementation of the Neverlang framework for programming language development.
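To make the object of study concrete, here is a small from-scratch LR(0) goto-graph construction on a toy grammar. It does not reproduce the paper's incremental Grow/Shrink algorithms; grammar, encoding and names are invented, and the example only shows that extending a grammar changes the graph.

```python
# Minimal LR(0) item-set construction. Items are (lhs, rhs-tuple, dot);
# nonterminals are exactly the grammar's keys.

def closure(items, grammar):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in grammar:  # dot before nonterminal
                for prod in grammar[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto_graph(grammar, start):
    start_state = closure({(start, grammar[start][0], 0)}, grammar)
    states, edges, todo = {start_state}, {}, [start_state]
    while todo:
        state = todo.pop()
        symbols = {rhs[dot] for (_, rhs, dot) in state if dot < len(rhs)}
        for sym in symbols:
            moved = {(l, r, d + 1) for (l, r, d) in state
                     if d < len(r) and r[d] == sym}
            target = closure(moved, grammar)
            edges[(state, sym)] = target
            if target not in states:
                states.add(target)
                todo.append(target)
    return states, edges

g1 = {"S": [("E",)], "E": [("E", "+", "n"), ("n",)]}
states1, _ = goto_graph(g1, "S")

# One extra production yields a different goto-graph; in general the smaller
# graph need not even be a subgraph of the larger one.
g2 = {"S": [("E",)], "E": [("E", "+", "n"), ("E", "*", "n"), ("n",)]}
states2, _ = goto_graph(g2, "S")
assert len(states2) > len(states1)
```

Recomputing the whole graph like this on every grammar change is exactly what DEXTER's incremental algorithms are designed to avoid.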

[14]
Edoardo Vacchi, Walter Cazzola, Benoît Combemale, and Mathieu Acher, “Automating Variability Model Inference for Component-Based Language Implementations”, in Proceedings of the 18th International Software Product Line Conference (SPLC'14), Patrick Heymans and Julia Rubin, Eds., Florence, Italy, September 2014, pp. 167–176, ACM. [ http ]
Recently, domain-specific language development has again become a topic of interest, as a means to help design solutions to domain-specific problems. Componentized language frameworks, coupled with variability modeling, have the potential to bring language development to the masses, by simplifying the configuration of a new language from an existing set of reusable components. However, designing variability models for this purpose requires not only a good understanding of these frameworks and the way components interact, but also an adequate familiarity with the problem domain.

In this paper we propose an approach to automatically infer a relevant variability model from a collection of already implemented language components, given a structured, but general representation of the domain. We describe techniques to assist users in achieving a better understanding of the relationships between language components, and find out which languages can be derived from them with respect to the given domain.

[15]
Edoardo Vacchi, Diego Mathias Olivares, Albert Shaqiri, and Walter Cazzola, “Neverlang 2: A Framework for Modular Language Implementation”, in Proceedings of the 13th International Conference on Modularity (Modularity'14), Lugano, Switzerland, April 2014, pp. 23–26, ACM. [ http ]
Neverlang 2 is a JVM-based framework for language development that emphasizes code reuse through composition of language features. This paper is aimed at showing how to develop extensible, custom languages using Neverlang's component-based model of implementation. Using this model, each feature of the language can be implemented as a separate, conceptually isolated unit that can be compiled and distributed separately from the others. A live tutorial of the framework can be found at http://youtu.be/Szxvg7XLbXc

[16]
Edoardo Vacchi, Walter Cazzola, Suresh Pillay, and Benoît Combemale, “Variability Support in Domain-Specific Language Development”, in Proceedings of 6th International Conference on Software Language Engineering (SLE'13), Martin Erwig, Richard F. Paige, and Eric Van Wyk, Eds., Indianapolis, USA, October 2013, Lecture Notes on Computer Science 8225, pp. 76–95, Springer. [ www: ]

[17]
Walter Cazzola and Edoardo Vacchi, “Neverlang 2: Componentised Language Development for the JVM”, in Proceedings of the 12th International Conference on Software Composition (SC'13), Walter Binder, Eric Bodden, and Welf Löwe, Eds., Budapest, Hungary, June 2013, Lecture Notes in Computer Science 8088, pp. 17–32, Springer. [ www: ]

[18]
Walter Cazzola and Edoardo Vacchi, “DEXTER and Neverlang: A Union Towards Dynamicity”, in Proceedings of the 7th Workshop on the Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS'12), Eric Jul, Ian Rogers, and Olivier Zendra, Eds., Beijing, China, June 2012, ACM.

[19]
Walter Cazzola, “Domain-Specific Languages in Few Steps: The Neverlang Approach”, in Proceedings of the 11th International Conference on Software Composition (SC'12), Thomas Gschwind, Flavio De Paoli, Volker Gruhn, and Matthias Book, Eds., Prague, Czech Republic, May-June 2012, Lecture Notes in Computer Science 7306, pp. 162–177, Springer. [ .pdf ]
Often an ad hoc programming language that integrates features from different programming languages and paradigms is the best choice to express a concise and clean solution to a problem. However, developing a programming language is not an easy task, and this often discourages the development of problem-oriented or domain-specific languages. To foster DSL development and to favor clean and concise problem-oriented solutions, we developed Neverlang.

The Neverlang framework provides a mechanism to build custom programming languages from features coming from different languages. The composability and flexibility provided by Neverlang make it possible to develop a new programming language by simply composing features from previously developed languages and reusing the corresponding support code (parsers, code generators, ...).

In this work, we explore the Neverlang framework and try out its benefits in a case study that merges functional programming à la Python with coordination for distributed programming as in Linda.

[20]
Walter Cazzola and Davide Poletti, “DSL Evolution through Composition”, in Proceedings of the 7th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'10), Maribor, Slovenia, June 2010, ACM. [ http ]
The use of domain-specific languages (DSLs) instead of general-purpose languages introduces a number of advantages in software development, even if it can be problematic to keep the DSL consistent with the evolution of the domain. Traditionally, developing a compiler/interpreter from scratch, but also modifying an existing compiler to support a novel DSL, is a long and difficult task. We have developed Neverlang to simplify and speed up the development and maintenance of DSLs. The framework presented in this article not only allows the syntax and the semantics of a new language to be developed from scratch, but is particularly focused on the reusability of the language definition. The interpreters/compilers produced with the framework are modular, and it is easy to add, remove or modify their sections. This makes it possible to modify the DSL definition in order to follow the evolution of the underlying domain. In this work, we explore the Neverlang framework and try out the adaptability of its language definition.

[21]
Walter Cazzola and Ivan Speziale, “Sectional Domain Specific Languages”, in Proceedings of the 4th Domain Specific Aspect-Oriented Languages (DSAL'09), Charlottesville, Virginia, USA, March 2009, pp. 11–14, ACM. [ http ]
Nowadays, many problems are solved by using a domain-specific language (DSL), i.e., a programming language tailored to a particular application domain. Normally, a new DSL is designed and implemented from scratch, requiring a long time-to-market due to implementation and testing issues. When the DSL simply extends another language, it is instead realized as a source-to-source transformation or as an external library, with limited flexibility.

The Hive framework was developed with the intent of overcoming these issues by providing a mechanism to compose different programming features together to form a new DSL, what we call a sectional DSL. The support (both at compiler and at interpreter level) for each feature is described separately and easily composed with the others. This approach is quite flexible and permits a new DSL to be built from scratch, or an existing language to be simplified, without penalties. Moreover, it has the desirable side effect that each DSL can be extended at any time, potentially also at run-time.

[1]
Walter Cazzola and Albert Shaqiri, “Open Programming Language Interpreters”, The Art, Science, and Engineering of Programming Journal, vol. 1, no. 2, pp. 5:1–5:34, April 2017. [ DOI | NEW! | http ]
Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them. Inquiry: We address the problem of dynamic interpreter adaptation to tailor the interpreter's behavior to the task to be solved and to introduce new features to fulfill unforeseen requirements. Many languages provide a MOP that to some degree supports reflection. However, MOPs are typically language-specific, their reflective functionality is often restricted, and the adaptation and application logic are often mixed, which hinders the understanding and maintenance of the source code. Our system overcomes these limitations. Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated in the Neverlang framework. The system exposes the structure, behavior and runtime state of any Neverlang-based interpreter with the ability to modify it. Knowledge: Our system provides complete control over the interpreter's structure, behavior and runtime state. The approach is applicable to every Neverlang-based interpreter. Adaptation code can potentially be reused across different language implementations. Grounding: Given the prototype implementation, we focused on feasibility evaluation. The paper shows that our approach effectively addresses problems commonly found in the research literature. We have a demonstrative video and examples that illustrate our approach on dynamic software adaptation, aspect-oriented programming, debugging and context-aware interpreters. Importance: To our knowledge, this paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support for free to any Neverlang-based interpreter. We are not aware of any prior application of open implementations to programming language interpreters in the sense defined in this paper. Rather than substituting for other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.

[2]
Walter Cazzola, Ruzanna Chitchyan, Awais Rashid, and Albert Shaqiri, “μ-DSU: A Micro-Language Based Approach to Dynamic Software Updating”, Computer Languages, Systems & Structures, 2017. [ DOI | NEW! | www: ]

[3]
Walter Cazzola and Albert Shaqiri, “Context-Aware Software Variability through Adaptable Interpreters”, IEEE Software, 2017, Special Issue on Context Variability Modeling. [ NEW! | www: ]

[4]
Walter Cazzola and Diego Mathias Olivares, “Gradually Learning Programming Supported by a Growable Programming Language”, IEEE Transactions on Emerging Topics in Computing, vol. 4, no. 3, pp. 404–415, September 2016, Special Issue on Emerging Trends in Education. [ DOI | .pdf ]
Learning programming is a difficult task. The learning process is particularly disorienting for those approaching programming for the first time. Students are exposed to several new concepts (control flow, variables, etc., but also coding, compiling, etc.) and new ways of thinking (algorithms). Teachers try to expose students to the new concepts gradually, presenting them one by one, but the tools at the students' disposal do not help: they provide support, suggestions and documentation for the full programming language of choice, hampering the teacher's efforts. On the other hand, students need to learn real languages, not didactic languages. In this work we propose an approach to gradually teaching programming, supported by a programming language that grows—together with its implementation—along with the number of concepts presented to the students. The proposed approach can be applied to the teaching of any programming language; some experiments with JavaScript are reported.

[5]
Thomas Kühn and Walter Cazzola, “Apples and Oranges: Comparing Top-Down and Bottom-Up Language Product Lines”, in Proceedings of the 20th International Software Product Line Conference (SPLC'16), Rick Rabiser and Bing Xie, Eds., Beijing, China, September 2016, pp. 50–59, ACM. [ http ]
Over the past decade language development tools have been significantly improved. This has permitted both practitioners and researchers to design a wide variety of domain-specific languages (DSLs) and extensions to programming languages. Moreover, multiple researchers have combined different language variants to form families of DSLs as well as of programming languages. Unfortunately, current language development tools cannot directly support the development of these families. To overcome this limitation, researchers have recently applied ideas from software product lines (SPLs) to create product lines of compilers/interpreters for language families, denoted language product lines (LPLs). As with SPLs, however, these product lines can be created using either a top-down or a bottom-up approach. Yet, there exists no case study comparing the suitability of both approaches for the development of LPLs, making it unclear how language development tools should evolve. Accordingly, this paper compares both feature modeling approaches by applying them to the development of an LPL for the family of role-based programming languages and by discussing their applicability, feasibility and overall suitability for the development of LPLs. Although one might argue that this compares apples and oranges, we believe that this case still provides crucial insights into the requirements, assumptions, and challenges of each approach.

[6]
Walter Cazzola and Edoardo Vacchi, “Language Components for Modular DSLs using Traits”, Computer Languages, Systems & Structures, vol. 45, pp. 16–34, April 2016. [ DOI | .pdf ]
Recent advances in tooling and modern programming languages have progressively brought back the practice of developing domain-specific languages as a means to improve software development. Consequently, the problem of making composition between languages easier by emphasizing code reuse and componentized programming is a topic of increasing interest in research. In fact, it is not uncommon for different languages to share common features, and, because in the same project different DSLs may coexist to model concepts from different problem areas, it is interesting to study ways to develop modular, extensible languages. Earlier work has shown that traits can be used to modularize the semantics of a language implementation; a lot of attention is often spent on embedded DSLs; even when external DSLs are discussed, the main focus is on modularizing the semantics. In this paper we will show a complete trait-based approach to modularize not only the semantics but also the syntax of external DSLs, thereby simplifying extension and therefore evolution of a language implementation. We show the benefits of implementing these techniques using the Scala programming language.

[7]
Walter Cazzola, Paola Giannini, and Albert Shaqiri, “Formal Attributes Traceability in Modular Language Development Frameworks”, Electronic Notes in Theoretical Computer Science, vol. 322, pp. 119–134, April 2016. [ DOI | .pdf ]
Modularization and component reuse are concepts that can speed up the design and implementation of domain-specific languages. Several modular development frameworks have been developed that rely on attributes to share information among components. Unfortunately, modularization also fosters development in isolation, and attributes could be left undefined or used inconsistently due to a lack of coordination. This work presents 1) a type system that permits tracing attributes and statically validating a composition against missing or misused attributes and 2) a correct and complete type inference algorithm for this type system. The type system and the inference algorithm are based on the Neverlang development framework, but we also discuss how they can be used with different frameworks.

[8]
Walter Cazzola and Albert Shaqiri, “Modularity and Optimization in Synergy”, in Proceedings of the 15th International Conference on Modularity (Modularity'16), Don Batory, Ed., Málaga, Spain, March 2016, pp. 70–81, ACM. [ http ]
As with traditional software, the complexity of a programming language implementation is tamed through modularization, which favors separation of concerns, independent development, maintainability, and reuse. However, modularity interferes with language optimization, as the latter requires context information that crosses the boundaries of a single module and involves other modules. This makes it hard for the optimization of a single language concept to be reused together with the concept itself; therefore, optimization is in general postponed until all language concepts are available. We defined a model for modular language development with a multiple-semantic-action dispatcher based on condition guards that are evaluated at run time. Optimizations can be implemented as context-dependent extensions applied a posteriori to the composed language interpreter, without modifying a single component implementation. This makes an optimization defined within the boundaries of a language concept effective according to the context provided by the other language concepts, when available, and eases its reuse together with the language concept implementation, independently of its usage context. The presented model is integrated into the Neverlang development framework and is demonstrated on the optimization of a JavaScript interpreter written in Neverlang. We also discuss the applicability of our model to other frameworks for modular language development.
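A guarded multi-action dispatcher of the kind this abstract describes can be sketched as follows (a toy illustration under our own naming, not Neverlang's actual API): each construct maps to a list of (guard, action) pairs; guards are checked at run time and later registrations are tried first, so an optimization can be layered onto a composed interpreter without touching the component that defined the default semantics.

```javascript
// Registry of semantic actions per construct; newest registration wins.
const actions = { add: [] };

function register(tag, guard, action) {
  actions[tag].unshift({ guard, action }); // most recently added action is tried first
}

function dispatch(node, ev) {
  for (const { guard, action } of actions[node.tag]) {
    if (guard(node)) return action(node, ev); // first action whose guard holds
  }
  throw new Error(`no applicable action for ${node.tag}`);
}

const ev = (node) => node.tag === 'num' ? node.value : dispatch(node, ev);

// Default semantics shipped with the 'add' component: guard always true.
register('add', () => true, (n, ev) => ev(n.left) + ev(n.right));

// Optimization added a posteriori: constant folding, applicable only in the
// context where both operands are literals.
register('add',
  (n) => n.left.tag === 'num' && n.right.tag === 'num',
  (n) => n.left.value + n.right.value);

console.log(ev({ tag: 'add',
                 left: { tag: 'num', value: 1 },
                 right: { tag: 'num', value: 2 } })); // prints 3
```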

[9]
Walter Cazzola and Albert Shaqiri, “Dynamic Software Evolution through Interpreter Adaptation”, in Proceedings of the 15th International Conference on Modularity (Modularity'16), Málaga, Spain, March 2016, pp. 16–19, ACM. [ http ]
Significant research has been dedicated to dynamic software evolution and adaptation, leading to different approaches that can mainly be categorized as either architecture-based or language-based. However, little or no attention has been paid to dynamic evolution achieved through language interpreter adaptation. In this paper we present a model for such adaptations and illustrate their applicability and usefulness on practical examples developed in Neverlang, a framework for modular language development with features for the dynamic adaptation of language interpreters.

[10]
Edoardo Vacchi and Walter Cazzola, “Neverlang: A Framework for Feature-Oriented Language Development”, Computer Languages, Systems & Structures, vol. 43, no. 3, pp. 1–40, October 2015. [ DOI | .pdf ]
Reuse in programming language development is an open research problem. Many authors have proposed frameworks for modular language development. These frameworks focus on maximizing code reuse, providing primitives for componentizing language implementations. There is also an open debate on combining feature-orientation with modular language development. Feature-oriented programming is a vision of computer programming in which features can be implemented separately, and then combined to build a variety of software products. However, even though feature-orientation and modular programming are strongly connected, modular language development frameworks are not usually meant primarily for feature-oriented language definition. In this paper we present a model of language development that puts feature implementation at the center, and describe its implementation in the Neverlang framework. The model has been evaluated through several language implementations: in this paper, a state machine language is used as a means of comparison with other frameworks, and a JavaScript interpreter implementation is used to further illustrate the benefits that our model provides.

[11]
Thomas Kühn, Walter Cazzola, and Diego Mathias Olivares, “Choosy and Picky: Configuration of Language Product Lines”, in Proceedings of the 19th International Software Product Line Conference (SPLC'15), Goetz Botterweck and Jules White, Eds., Nashville, TN, USA, July 2015, pp. 71–80, ACM. [ http ]
Although most programming languages naturally share several language features, they are typically implemented as monolithic products. Language features cannot be unplugged from one language and reused in another. Some modular approaches to language construction do exist, but composing language features requires a deep understanding of their implementation, which hampers their use. The choose-and-pick approach from software product lines provides an easy way to compose a language out of a set of language features. However, current approaches to language product lines are not sufficient to cope with the complexity and evolution of real-world programming languages. In this work, we propose a general light-weight bottom-up approach to automatically extract a feature model from a set of tagged language components. We applied this approach to the Neverlang language development framework and developed the AiDE tool to guide language developers towards a valid language composition. The approach has been evaluated on a decomposed version of JavaScript to highlight the benefits of such a language product line.
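The tagging idea at the core of this approach can be sketched in a few lines (a toy illustration with invented component names and tags, not the AiDE implementation): each language component declares the features it provides and those it requires, and a selection forms a valid language only when every requirement is satisfied by some selected component.

```javascript
// Hypothetical catalog of tagged language components.
const components = {
  Expressions: { provides: ['expr'], requires: [] },
  Statements:  { provides: ['stmt'], requires: ['expr'] },
  Functions:   { provides: ['fun'],  requires: ['expr', 'stmt'] },
};

// A composition is valid when every 'requires' tag is provided by the selection.
function isValidComposition(names) {
  const selected = names.map((n) => components[n]);
  const provided = new Set(selected.flatMap((c) => c.provides));
  return selected.every((c) => c.requires.every((r) => provided.has(r)));
}

console.log(isValidComposition(['Expressions', 'Statements', 'Functions'])); // prints true
console.log(isValidComposition(['Functions'])); // prints false: 'expr' and 'stmt' missing
```

A feature model inferred from such tags would encode exactly these requires-relations as cross-tree constraints, guiding the developer toward valid selections.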

[12]
Ruzanna Chitchyan, Walter Cazzola, and Awais Rashid, “Engineering Sustainability through Language”, in Proceedings of the 37th International Conference on Software Engineering (ICSE'15), Firenze, Italy, May 2015, pp. 501–504, IEEE, Track on Software Engineering in Society. [ .pdf ]
As our understanding and care for sustainability concerns increases, so does the demand for incorporating these concerns into software. Yet, existing programming language constructs are not well-aligned with concepts of the sustainability domain. This undermines what we term technical sustainability of the software due to (i) increased complexity in programming of such concerns and (ii) continuous code changes to keep up with changes in (environmental, social, legal and other) sustainability-related requirements. In this paper we present a proof-of-concept approach on how technical sustainability support for new and existing concerns can be provided through flexible language-level programming. We propose to incorporate sustainability-related behaviour into programs through micro-languages enabling such behaviour to be updated and/or redefined as and when required.

[13]
Edoardo Vacchi, Walter Cazzola, Benoît Combemale, and Mathieu Acher, “Automating Variability Model Inference for Component-Based Language Implementations”, in Proceedings of the 18th International Software Product Line Conference (SPLC'14), Patrick Heymans and Julia Rubin, Eds., Florence, Italy, September 2014, pp. 167–176, ACM. [ http ]
Recently, domain-specific language development has again become a topic of interest, as a means to help design solutions to domain-specific problems. Componentized language frameworks, coupled with variability modeling, have the potential to bring language development to the masses, by simplifying the configuration of a new language from an existing set of reusable components. However, designing variability models for this purpose requires not only a good understanding of these frameworks and the way components interact, but also an adequate familiarity with the problem domain.

In this paper we propose an approach to automatically infer a relevant variability model from a collection of already implemented language components, given a structured, but general representation of the domain. We describe techniques to assist users in achieving a better understanding of the relationships between language components, and find out which languages can be derived from them with respect to the given domain.

[14]
Edoardo Vacchi, Diego Mathias Olivares, Albert Shaqiri, and Walter Cazzola, “Neverlang 2: A Framework for Modular Language Implementation”, in Proceedings of the 13th International Conference on Modularity (Modularity'14), Lugano, Switzerland, April 2014, pp. 23–26, ACM. [ http ]
Neverlang 2 is a JVM-based framework for language development that emphasizes code reuse through composition of language features. This paper is aimed at showing how to develop extensible, custom languages using Neverlang's component-based model of implementation. Using this model, each feature of the language can be implemented as a separate, conceptually isolated unit that can be compiled and distributed separately from the others. A live tutorial of the framework can be found at http://youtu.be/Szxvg7XLbXc

[15]
Edoardo Vacchi, Walter Cazzola, Suresh Pillay, and Benoît Combemale, “Variability Support in Domain-Specific Language Development”, in Proceedings of 6th International Conference on Software Language Engineering (SLE'13), Martin Erwig, Richard F. Paige, and Eric Van Wyk, Eds., Indianapolis, USA, October 2013, Lecture Notes on Computer Science 8225, pp. 76–95, Springer. [ www: ]

[16]
Walter Cazzola and Edoardo Vacchi, “Neverlang 2: Componentised Language Development for the JVM”, in Proceedings of the 12th International Conference on Software Composition (SC'13), Walter Binder, Eric Bodden, and Welf Löwe, Eds., Budapest, Hungary, June 2013, Lecture Notes in Computer Science 8088, pp. 17–32, Springer. [ www: ]

[17]
Sebastián González, Kim Mens, Marius Colăcioiu, and Walter Cazzola, “Context Traits: Dynamic Behaviour Adaptation through Run-Time Trait Recomposition”, in Proceedings of the 12th International Conference on Aspect-Oriented Software Development (AOSD'13), Jörg Kienzle, Ed., Fukuoka, Japan, March 2013, pp. 209–220, ACM. [ http ]
Context-oriented programming emerged as a new paradigm to support fine-grained dynamic adaptation of software behaviour according to the context of execution. Though existing context-oriented approaches permit the adaptation of individual methods, in practice behavioural adaptations to specific contexts often require the modification of groups of interrelated methods. Furthermore, existing approaches impose a composition semantics that cannot be adjusted on a domain-specific basis. The mechanism of traits seems to provide a more appropriate level of granularity for defining adaptations, and brings along a flexible composition mechanism that can be exploited in a dynamic setting. This paper explores how to achieve context-oriented programming by using traits as units of adaptation, and trait composition as a mechanism to introduce behavioural adaptations at run time. First-class contexts reify relevant aspects of the environment in which the application executes, and they directly influence the trait composition of the objects that make up the application. To resolve conflicts arising from the dynamic composition of behavioural adaptations, programmers can explicitly encode composition policies. With all this, the notion of context traits offers a promising approach to implementing dynamically adaptable systems. To validate the context traits model we implemented a JavaScript library and conducted case studies on context-driven adaptability.
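The run-time trait recomposition described here can be sketched minimally (API names are ours, not the paper's library): traits are plain method bundles, and activating or deactivating a context recomposes an object's behaviour on the fly, with later activations taking precedence.

```javascript
// An object whose method set is recomposed each time a context (de)activates.
function contextualObject(baseTrait) {
  const active = [];
  let composition = Object.assign({}, baseTrait);
  const recompose = () => { composition = Object.assign({}, baseTrait, ...active); };
  return {
    activate(trait)   { active.push(trait); recompose(); },
    deactivate(trait) { active.splice(active.indexOf(trait), 1); recompose(); },
    send(selector, ...args) { return composition[selector](...args); },
  };
}

const phone = contextualObject({ ring: () => 'RING!' });
const quietContext = { ring: () => 'vibrate' };

console.log(phone.send('ring')); // prints 'RING!'
phone.activate(quietContext);
console.log(phone.send('ring')); // prints 'vibrate'
phone.deactivate(quietContext);
console.log(phone.send('ring')); // prints 'RING!'
```

Here conflicts are resolved by simple activation order; the paper's model instead lets programmers encode explicit composition policies for conflicting adaptations.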

[18]
Walter Cazzola, “Domain-Specific Languages in Few Steps: The Neverlang Approach”, in Proceedings of the 11th International Conference on Software Composition (SC'12), Thomas Gschwind, Flavio De Paoli, Volker Gruhn, and Matthias Book, Eds., Prague, Czech Republic, May-June 2012, Lecture Notes in Computer Science 7306, pp. 162–177, Springer. [ .pdf ]
Often an ad hoc programming language integrating features from different programming languages and paradigms represents the best choice to express a concise and clean solution to a problem. However, developing a programming language is not an easy task, and this often discourages developers from building their own problem-oriented or domain-specific languages. To foster DSL development and to favor clean and concise problem-oriented solutions, we developed Neverlang.

The Neverlang framework provides a mechanism to build custom programming languages from features coming from different languages. The composability and flexibility provided by Neverlang make it possible to develop a new programming language by simply composing features from previously developed languages and reusing the corresponding support code (parsers, code generators, ...).

In this work, we explore the Neverlang framework and try out its benefits in a case study that merges functional programming à la Python with coordination for distributed programming as in Linda.

[19]
Walter Cazzola and Davide Poletti, “DSL Evolution through Composition”, in Proceedings of the 7th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'10), Maribor, Slovenia, June 2010, ACM. [ http ]
The use of domain-specific languages (DSLs) instead of general-purpose languages introduces a number of advantages in software development, even if it can be problematic to keep the DSL consistent with the evolution of its domain. Traditionally, developing a compiler/interpreter from scratch, but also modifying an existing compiler to support a novel DSL, is a long and difficult task. We have developed Neverlang to simplify and speed up the development and maintenance of DSLs. The framework presented in this article not only allows developing the syntax and the semantics of a new language from scratch, but is particularly focused on the reusability of the language definition. The interpreters/compilers produced with such a framework are modular, and it is easy to add, remove, or modify their sections. This makes it possible to modify the DSL definition in order to follow the evolution of the underlying domain. In this work, we explore the Neverlang framework and try out the adaptability of its language definitions.

[20]
Walter Cazzola and Ivan Speziale, “Sectional Domain Specific Languages”, in Proceedings of the 4th Domain Specific Aspect-Oriented Languages (DSAL'09), Charlottesville, Virginia, USA, March 2009, pp. 11–14, ACM. [ http ]
Nowadays, many problems are solved by using a domain-specific language (DSL), i.e., a programming language tailored to work on a particular application domain. Normally, a new DSL is designed and implemented from scratch, requiring a long time-to-market due to implementation and testing issues. When, instead, the DSL simply extends another language, it is realized as a source-to-source transformation or as an external library with limited flexibility.

The Hive framework is developed with the intent of overcoming these issues by providing a mechanism to compose different programming features together, forming a new DSL: what we call a sectional DSL. The support (both at the compiler and the interpreter level) for each feature is separately described and easily composed with the others. This approach is quite flexible and permits building a new DSL from scratch or simplifying an existing language without penalty. Moreover, it has the desirable side-effect that each DSL can be extended at any time, potentially also at run-time.

[1]
Walter Cazzola and Diego Mathias Olivares, “Gradually Learning Programming Supported by a Growable Programming Language”, IEEE Transactions on Emerging Topics in Computing, vol. 4, no. 3, pp. 404–415, September 2016, Special Issue on Emerging Trends in Education. [ DOI | .pdf ]
Learning programming is a difficult task. The learning process is particularly disorienting when programming is approached for the first time. As a student you are exposed to several new concepts (control flow, variables, etc. but also coding, compiling, etc.) and new ways of thinking (algorithms). Teachers try to expose students gradually to the new concepts by presenting them one by one, but the tools at the students' disposal do not help: they provide support, suggestions, and documentation for the full programming language of choice, hampering the teacher's efforts. On the other hand, students need to learn real languages and not didactic languages. In this work we propose an approach to gradually teaching programming supported by a programming language that grows—together with its implementation—along with the number of concepts presented to the students. The proposed approach can be applied to the teaching of any programming language, and some experiments with JavaScript are reported.

[2]
James Paterson, Robert Law, Walter Cazzola, Dario Malchiodi, Markku Karhu, Irina Illina, Marisa Maximiano, and Catarina Silva, “Experience of an International Collaborative Project with First Year Programming Students”, in Proceedings of the IEEE 39th Annual Computer Software and Applications Conference (COMPSAC'15), Taichung, Taiwan, July 2015, pp. 829–834, IEEE. [ DOI | www: ]

[1]
Walter Cazzola, “Evolution as «Reflections on the Design»”, in MoDELS@Run-Time, Nelly Bencomo, Betty Chang, Robert B. France, and Uwe Aßmann, Eds., Lecture Notes in Computer Science 8378, pp. 259–278. Springer, August 2014. [ www: ]

[2]
Jeff Gray, Dominik Stein, Jörg Kienzle, and Walter Cazzola, “Report of the 15th International Workshop on Aspect-Oriented Modeling”, in MoDELS 2010 Workshops, Oslo, Norway, February 2011, Lecture Notes in Computer Science 6627, pp. 105–109, Springer. [ www: ]
[3]
Jörg Kienzle, Jeff Gray, Dominik Stein, Thomas Cottenier, Walter Cazzola, and Omar Aldawud, “Report of the 14th International Workshop on Aspect-Oriented Modeling”, in MODELS 2009 Workshops, Sudipto Ghosh, Ed., Denver, Colorado, USA, February 2010, vol. Lecture Notes in Computer Science 6002, pp. 98–103, Springer. [ .pdf ]
[4]
Walter Cazzola, “Cogito, Ergo Muto!”, in Proceedings of the Workshop on Self-Organizing Architecture (SOAR'09), Danny Weyns, Sam Malek, Rogério de Lemos, and Jesper Andersson, Eds., Cambridge, United Kingdom, September 2009, pp. 1–7, Invited Paper. [ .pdf ]
No system escapes the need to evolve, either to fix bugs, to be reconfigured, or to add new features. Evolving becomes particularly problematic when the system to evolve cannot be stopped.

Traditionally, the evolution of a continuously running system is tackled by calculating all the possible evolutions in advance and hardwiring them in the application itself. This approach gives rise to the code pollution phenomenon, where the code of the application is polluted by code that might never be applied. The approach has the following defects: i) code bloat, ii) it is impossible to forecast every possible change, and iii) the code becomes hard to read and maintain.

Computational reflection by definition allows an application to introspect and intercede on its own structure and behavior, therefore endowing a reflective application with (potentially) the ability of self-evolving. Furthermore, dealing with evolution as a nonfunctional concern, i.e., one that can be separated from the current implementation of the application, can limit the code pollution phenomenon.

Bringing the design information (model and/or architecture) to run-time provides the application with a basic knowledge about itself, to reflect on when a change is necessary and on how to deploy it. The availability of such knowledge at run-time frees the designer from forecasting and coding all the possible evolutions, in favor of a sort of evolutionary engine that, to some extent, can evaluate which countermove to apply.

In this contribution, the author will explore the role of reflection and of the design information in the development of self-evolving applications. Moreover, the author will sketch a basic reflective architecture to support dynamic self-evolution and he will analyze the adherence of the existing frameworks to such an architecture.

[5]
Jörg Kienzle, Jeff Gray, Dominik Stein, Walter Cazzola, Omar Aldawud, and Tzilla Elrad, “11th International Workshop on Aspect-Oriented Modeling (Report)”, in MoDELS 2007 Workshops, Holger Giese, Ed., Nashville, TN, USA, September 2007, Lecture Notes in Computer Science 5002, pp. 1–6, Springer. [ .pdf ]
[6]
Walter Cazzola and Sonia Pini, “On the Footprints of Join Points: The Blueprint Approach”, Journal of Object Technology, vol. 6, no. 7, pp. 167–192, August 2007. [ .pdf ]
Aspect-oriented techniques are widely used to better modularize object-oriented programs by introducing crosscutting concerns in a safe and non-invasive way, i.e., aspect-oriented mechanisms better address the modularization of functionality that orthogonally crosscuts the implementation of the application.

Unfortunately, as noted by several researchers, most of the current aspect-oriented approaches are too coupled with the application code, and this fact hinders the separability of the concerns and consequently their reusability, since each aspect is strictly tailored to the base application. Moreover, the join points (i.e., the locations affected by a crosscutting concern) are actually defined at the operation level. This implies that the possible set of join points includes every operation (e.g., method invocations) that the system performs, whereas in many contexts we wish to define aspects that are expected to work at the statement level, i.e., by considering as a join point every point between two generic statements (i.e., lines of code).

In this paper, we present our approach, called Blueprint, to overcome the above-mentioned limitations of the current aspect-oriented approaches. The Blueprint consists of a new aspect-oriented programming language based on modeling the join point selection mechanism at a high level of abstraction, to decouple aspects from the application code. In this regard, we adopt a high-level pattern-based join point model, where join points are described by join point blueprints, i.e., behavioral patterns describing where the join points should be found.

[7]
Walter Cazzola, Jeff Gray, Dominik Stein, Jörg Kienzle, Tzilla Elrad, and Omar Aldawud (Eds), “Special Issue on Aspect-Oriented Modeling”, Journal of Object Technology, vol. 6, no. 7, August 2007. [ http ]
[8]
Walter Cazzola and Sonia Pini, “AOP vs Software Evolution: a Score in Favor of the Blueprint”, in Proceedings of the 4th ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'07), Walter Cazzola, Shigeru Chiba, Yvonne Coady, Stéphane Ducasse, Günter Kniesel, Manuel Oriol, and Gunter Saake, Eds., Berlin, Germany, July 2007, pp. 81–91. [ .pdf ]
All software systems are subject to evolution, independently of the development technique. Aspect-oriented software, in addition to separating the different concerns during software development, must not be fragile against software evolution. Otherwise, the benefit of disentangling the code will be buried by the extra complication of maintaining it.

To achieve this goal, aspect-oriented languages/tools must evolve: they have to be less coupled to the base program. In recent years a few attempts have been proposed; the Blueprint is our proposal, based on behavioral patterns.

In this paper we test the robustness of the Blueprint aspect-oriented language against software evolution.

[9]
Walter Cazzola, Sonia Pini, Ahmed Ghoneim, and Gunter Saake, “Co-Evolving Application Code and Design Models by Exploiting Meta-Data”, in Proceedings of the 12th Annual ACM Symposium on Applied Computing (SAC'07), Seoul, South Korea, March 2007, pp. 1275–1279, ACM Press. [ http ]
Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information.

Refactoring, evolution, and in general every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in the successive maintenance steps. Code evolution should involve not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is a hard job to carry out automatically, since there is an evident and notorious gap between these two representations.

We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. Then, these meta-data are retrieved through reflection and used to automatically and coherently update the application design models.

[10]
Walter Cazzola and Sonia Pini, “Join Point Patterns: a High-Level Join Point Selection Mechanism”, in MoDELS'06 Satellite Events Proceedings, Thomas Kühne, Ed., Genova, Italy, October 2006, Lecture Notes in Computer Science 4364, pp. 17–26, Springer, Best Paper Awards at the 9th Aspect-Oriented Modeling Workshop. [ .pdf ]
Aspect-Oriented Programming is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact hinders the concerns separability and reusability since each aspect is strictly tailored on the base application.

This work proposes a possible solution to this problem based on modeling the join point selection mechanism at a higher level of abstraction. In our view, the aspect designer does not need to know the inner details of the application, such as a specific implementation or the naming conventions used; rather, he/she exclusively needs to know the application behavior to apply his/her aspects.

In the paper, we present a novel join point model with a join point selection mechanism based on a high-level program representation. This high-level view of the application decouples the aspects definition from the base program structure and syntax. The separation between aspects and base program will render the aspects more reusable and independent of the manipulated application.

[11]
Jörg Kienzle, Dominik Stein, Walter Cazzola, Jeff Gray, Omar Aldawud, and Tzilla Elrad, “9th International Workshop on Aspect-Oriented Modeling (Report)”, in MoDELS'06 Satellite Events Proceedings, Thomas Kühne, Ed., Genova, Italy, October 2006, Lecture Notes in Computer Science 4364, pp. 1–5, Springer. [ .pdf ]
[12]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Viewpoint for Maintaining UML Models against Application Changes”, in Proceedings of International Conference on Software and Data Technologies (ICSOFT 2006), Joaquim Filipe, Markus Helfert, and Boris Shishkov, Eds., Setúbal, Portugal, September 2006, pp. 263–268, Springer. [ .pdf ]
The urgency that characterizes many requests for evolution forces system administrators/developers to directly adapt the system without passing through the adaptation of its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that does not reflect the system changes. Software developers spend a lot of time on evolving the system and then on updating the design information according to the evolution of the system. In this respect, we present an approach to automatically keep the design information (diagrams, in our case) updated when the system evolves. The diagrams are bound to the application, and all the changes to it are reflected in the diagrams as well.

[13]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “Design-Based Pointcuts Robustness Against Software Evolution”, in Proceedings of the 3rd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'06), Walter Cazzola, Shigeru Chiba, Yvonne Coady, and Gunter Saake, Eds., Nantes, France, July 2006, pp. 35–45. [ .pdf ]
Aspect-Oriented Programming (AOP) is a powerful technique to better modularize object-oriented programs by introducing crosscutting concerns in a safe and noninvasive way. Unfortunately, most of the current join point models are too coupled with the application code. This fact harms the evolvability of the program, hinders the selection of the concerns, and reduces the reusability of the aspects. Overcoming this problem is a hot topic.

This work proposes a possible solution to the limits of the current aspect-oriented techniques, based on modeling the join point selection mechanism at a higher level of abstraction to decouple the base program and the aspects.

In this paper, we will present by example a novel join point model based on design models (e.g., expressed through UML diagrams). Design models provide a high-level view of the application structure and behavior, decoupled from the base program. A design-oriented join point model will render aspect definitions more robust against base program evolution, more reusable, and independent of the base program.

[14]
Walter Cazzola, Antonio Cicchetti, and Alfonso Pierantonio, “Towards a Model-Driven Join Point Model”, in Proceedings of the 11th Annual ACM Symposium on Applied Computing (SAC'06), Dijon, France, April 2006, pp. 1306–1307, ACM Press. [ .pdf | http ]
Aspect-Oriented Programming (AOP) is increasingly being adopted by developers to better modularize object-oriented design by introducing crosscutting concerns. However, due to the tight coupling of existing approaches with the implementing code and to the poor expressiveness of the pointcut languages, a number of problems have become evident. Model-Driven Architecture (MDA) is an emerging technology that aims at shifting the focus of software development from a programming-language-specific implementation to application design, using appropriate representations by means of models which can be transformed toward several development platforms. Therefore, this work presents a possible solution based on modeling aspects at a higher level of abstraction; the aspects are, in turn, transformed to specific targets.

[15]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “The Role of Design Information in Software Evolution”, in Proceedings of the 2nd ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'05), Walter Cazzola, Shigeru Chiba, Gunter Saake, and Tom Tourwé, Eds., Glasgow, Scotland, July 2005, pp. 59–70. [ .pdf ]
Software modeling has received a lot of attention in the last decade and is now an important support for the design process.

Actually, the design process is very important to the usability and understandability of the system: for example, functional requirements present a complete description of how the system will function from the user's perspective, while non-functional requirements dictate properties and impose constraints on the project or system.

The design models and the implementation code must be strictly connected, i.e., we must have correlation and consistency between the two views, and this correlation must exist during the whole software life cycle. Often, the early stages of development, the specification and the design of the system, are ignored once the code has been developed. This practice causes a lot of problems, in particular when the system must evolve. Nowadays, maintaining software is a difficult task, since there is a high degree of coupling between the software itself and its environment. Often, changes in the environment cause changes in the software; in other words, the system must evolve to follow the evolution of its environment.

Typically, a design is created initially, but as the code gets written and modified, the design is not updated to reflect such changes.

This paper describes and discusses how design information can be used to drive software evolution and, consequently, to maintain consistency between design and code.

[16]
Walter Cazzola, Antonio Cicchetti, and Alfonso Pierantonio, “On the Problems of the JPMs”, in Proceedings of the 1st ECOOP Workshop on Models and Aspects (MAW'05), Glasgow, Scotland, July 2005. [ .pdf ]
[17]
Walter Cazzola, Sonia Pini, and Massimo Ancona, “AOP for Software Evolution: A Design Oriented Approach”, in Proceedings of the 10th Annual ACM Symposium on Applied Computing (SAC'05), Santa Fe, New Mexico, USA, March 2005, pp. 1356–1360, ACM Press. [ http ]
In this paper, we briefly explore the aspect-oriented approach as a tool for supporting software evolution. The aim of this analysis is to highlight the potential and the limits of aspect-oriented development for software evolution. Our analysis shows that, in general (and for AspectJ in particular), the mechanisms for defining join points, pointcuts and advice are not intuitive, abstract and expressive enough to support all the requirements for carrying out software evolution. We also examine how a mechanism for specifying pointcuts and advice based on design information, in particular on UML diagrams, can better support software evolution through aspect-oriented programming. Our analysis and proposal are presented through an example.

[18]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Software Evolution through Dynamic Adaptation of Its OO Design”, in Objects, Agents and Features: Structuring Mechanisms for Contemporary Software, Hans-Dieter Ehrich, John-Jules Meyer, and Mark D. Ryan, Eds., Lecture Notes in Computer Science 2975, pp. 69–84. Springer-Verlag, July 2004. [ .pdf ]
In this paper we present a proposal for safely evolving a software system against run-time changes. This proposal is based on a reflective architecture that provides objects with the ability to dynamically change their behavior by using their design information. The meta-level system of the proposed architecture supervises the evolution of the software system to be adapted, which runs as the base-level system of the reflective architecture. The meta-level system is composed of cooperating components; these components carry out the evolution against sudden and unexpected environmental changes on a reification of the design information (e.g., object models, scenarios and statecharts) of the system to be adapted. The evolution takes place in two steps: first a meta-object, called evolutionary meta-object, plans a possible evolution against the detected event; then another meta-object, called consistency checker meta-object, validates the feasibility of the proposed plan before actually evolving the system. Meta-objects use the system design information to govern the evolution of the base-level system. Moreover, we show our architecture at work on a case study.
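To fix ideas, the two-step scheme (an evolutionary meta-object plans a change, a consistency checker validates it before it touches the base level) might be sketched as follows. This is a purely illustrative, minimal Java sketch; every name in it is hypothetical and none of it is the architecture's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two-step evolution scheme: plan, validate, apply.
public class TwoStepEvolution {

    // A plan is a proposed new value for a named design property (illustrative).
    public static class Plan {
        final String property;
        final Object newValue;
        Plan(String property, Object newValue) { this.property = property; this.newValue = newValue; }
    }

    // Base-level system state, reified here as a simple property map.
    public static final Map<String, Object> baseLevel = new HashMap<>(Map.of("maxLoad", 10));

    // Step 1: the evolutionary meta-object plans a change against a detected event.
    public static Plan planEvolution(String event) {
        if (event.equals("overload")) return new Plan("maxLoad", 20);
        return null; // no evolution needed for other events
    }

    // Step 2: the consistency checker validates the plan against an invariant.
    public static boolean isConsistent(Plan p) {
        return p != null && p.property.equals("maxLoad") && (Integer) p.newValue <= 100;
    }

    // The meta-level applies the plan only if the checker accepts it.
    public static boolean evolve(String event) {
        Plan p = planEvolution(event);
        if (!isConsistent(p)) return false;
        baseLevel.put(p.property, p.newValue);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(evolve("overload") + " -> " + baseLevel.get("maxLoad"));
    }
}
```

The point of the separation is that a rejected plan leaves the base level untouched, which is what keeps the evolving system in a consistent state.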

[19]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “System Evolution through Design Information Evolution: a Case Study”, in Proceedings of the 13th International Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE 2004), Walter Dosch and Narayan Debnath, Eds., Nice, France, July 2004, pp. 145–150, ISCA. [ .pdf ]
This paper describes how design information, in our case specifications, can be used to evolve a software system and to validate the consistency of such an evolution. This work complements our previous work on reflective architectures for software evolution by describing the role played by meta-data in the evolution of software systems. The whole paper focuses on a case study; we show how the urban traffic control system (UTCS), or part of it, must evolve when unscheduled road maintenance, a car crash or a traffic jam blocks normal vehicular flow in a specific road. The UTCS case study perfectly shows how requirements can dynamically change and how the design of the system should adapt to such changes. Both system consistency and adaptation are governed by rules based on meta-data representing the system design information. As we show by an example, such rules represent the core of our evolutionary approach, driving the evolutionary and consistency checker meta-objects and interfacing the meta-level system (the evolutionary system) with the system that has to be adapted.

[20]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “RAMSES: a Reflective Middleware for Software Evolution”, in Proceedings of the 1st ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE'04), Oslo, Norway, June 2004, pp. 21–26. [ .pdf ]
Software systems today need to dynamically self-adapt to changing requirements. In this paper we describe a reflective middleware whose aim is to consistently evolve software systems against runtime changes. This middleware provides the ability to change both the structure and the behavior of the base-level system at run-time by using its design information. The meta-level is composed of cooperating objects and has been specified using a design pattern language. The base objects are controlled by meta-objects that drive their evolution. The essence of RAMSES is the ability to extract the design data from the base application and to constrain the dynamic evolution to stable and consistent systems.

[21]
Walter Cazzola, Ahmed Ghoneim, and Gunter Saake, “Reflective Analysis and Design for Adapting Object Run-time Behavior”, in Proceedings of the 8th International Conference on Object-Oriented Information Systems (OOIS'02), Zohra Bellahsène, Dilip Patel, and Colette Rolland, Eds., Montpellier, France, September 2002, Lecture Notes in Computer Science 2425, pp. 242–254, Springer-Verlag. [ .pdf ]
Today, complex information systems need a simple way of changing object behavior according to changes that occur in the running environment. We present a reflective architecture which provides the ability to change object behavior at run-time by using design-time information. By integrating reflection with design patterns we get a flexible and easily adaptable architecture. A reflective approach that describes the object model, scenarios and statecharts helps to dynamically adapt the software system to environmental changes. The object model, system scenarios and many other pieces of design information are reified by special meta-objects, named evolutionary meta-objects. Evolutionary meta-objects deal with two types of run-time evolution. Structural evolution is carried out through the causal connection between evolutionary meta-objects and their referents, changing the structure of those referents by adding or removing objects or relations. Behavioral evolution allows the system to dynamically adapt its behavior to environment changes by itself. Evolutionary meta-objects react to environment changes by adapting the information they have reified and steering the system evolution. They provide a natural liaison between the design information and the system based on such information. This paper describes how this liaison can be built and how it can be used to adapt a running system to environment changes.

[22]
Walter Cazzola, James O. Coplien, Ahmed Ghoneim, and Gunter Saake, “Framework Patterns for the Evolution of Nonstoppable Software Systems”, in Proceedings of the 1st Nordic Conference on Pattern Languages of Programs (VikingPLoP'02), Pavel Hruby and Kristian Elof Søresen, Eds., Højstrupgard, Helsingør, Denmark, September 2002, pp. 35–54, Microsoft Business Solutions. [ .pdf ]
The fragment of pattern language proposed in this paper shows how to adapt a nonstoppable software system to reflect changes in its running environment. These framework patterns rely on well-known techniques that allow programs to dynamically analyze and modify their own structure, commonly called computational reflection. Our patterns go together with common reflective software architectures.

[23]
Walter Cazzola, Andrea Sosio, and Francesco Tisato, “Shifting Up Reflection from the Implementation to the Analysis Level”, in Reflection and Software Engineering, Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Eds., Lecture Notes in Computer Science 1826, pp. 1–20. Springer-Verlag, Heidelberg, Germany, June 2000. [ .pdf ]
Traditional methods for object-oriented analysis and modeling focus on the functional specification of software systems, i.e., on application domain modeling. Non-functional requirements such as fault-tolerance, distribution, integration with legacy systems, and so on, have no clear place within the analysis process, since they relate to the architecture and workings of the system itself rather than to the application domain. They are thus addressed in the system's design, based on the partitioning of the system's functionality into the classes resulting from analysis. As a consequence, the smooth transition from analysis to design that is usually celebrated as one of the main advantages of the object-oriented paradigm does not actually hold where non-functional issues are concerned. A side effect is that functional and non-functional concerns tend to be mixed at the implementation level. We argue that the reflective approach, whereby non-functional properties are ascribed to a meta-level of the software system, may be extended “back to” analysis. Adopting a reflective approach in object-oriented analysis may support the precise specification of non-functional requirements in analysis and, if used in conjunction with a reflective approach to design, recover the smooth transition from analysis to design for non-functional system properties.

[1]
Walter Cazzola and Alessandro Marchetto, “A Concern-Oriented Framework for Dynamic Measurements”, Information and Software Technology, vol. 57, pp. 32–51, January 2015. [ DOI | .pdf ]
Evolving software programs requires that software developers reason quantitatively about the modularity impact of several concerns, which are often scattered over the system. In this respect, concern-oriented software analysis is rising to a dominant position in software development. Hence, measurement techniques play a fundamental role in assessing the concern modularity of a software system. Unfortunately, existing measurements are still fundamentally module-oriented rather than concern-oriented. Moreover, the few available concern-oriented metrics are defined in a non-systematic, non-shared way and mainly focus on static properties of a concern, even though many properties can only be accurately quantified at run-time. Hence, novel concern-oriented measurements and, in particular, shared and systematic ways to define them are still welcome. This paper lays the basis for a unified framework for concern-driven measurement. The framework provides a basic terminology and criteria for defining novel concern metrics. To evaluate the framework's feasibility and effectiveness, we show how it can be used to adapt some classic metrics to quantify concerns and, in particular, to instantiate new dynamic concern metrics from their static counterparts.

[2]
Eduardo Figueiredo, Cláudio Sant'Anna, Alessandro Garcia, Thiago T. Bartolomei, Walter Cazzola, and Alessandro Marchetto, “On the Maintainability of Aspect-Oriented Software: A Concern-Oriented Measurement Framework”, in Proceedings of the 12th European Conference on Software Maintenance and Reengineering (CSMR 2008), Christos Tjortjis and Andreas Winter, Eds., Athens, Greece, April 2008, pp. 183–192, IEEE Press. [ .pdf ]
Aspect-oriented design needs to be systematically assessed with respect to modularity flaws caused by the realization of driving system concerns, such as tangling, scattering, and excessive concern dependencies. As a result, innovative concern metrics have been defined to support quantitative analyses of concern properties. However, the vast majority of these measures have not yet been theoretically validated, nor have they gained acceptance in academic or industrial settings. The core reason for this problem is that they have not been built using clearly defined terminology and criteria. This paper defines a concern-oriented framework that supports the instantiation and comparison of concern measures. The framework subsumes the definition of a core terminology and criteria in order to lay down a rigorous process fostering the definition of meaningful and well-founded concern measures. In order to evaluate the framework's generality, we demonstrate its instantiation and extension to a number of concern measure suites previously used in empirical studies of aspect-oriented software maintenance.

[3]
Walter Cazzola and Alessandro Marchetto, “AOPHiddenMetrics: Separation, Extensibility and Adaptability in SW Measurement”, Journal of Object Technology, vol. 7, no. 2, pp. 53–68, February 2008. [ .pdf ]
Traditional approaches to dynamic system analysis and metrics measurement are based on the instrumentation of system code (source, intermediate or executable) or need ad hoc support from the run-time environment. In these contexts, the measurement process is tricky and invasive, and the results can be affected by the process itself, making the collected data unreliable.

Moreover, the tools based on these approaches are difficult to customize, extend and, often, even to use, since their properties are rooted in specific system details (e.g., special tools such as bytecode analyzers, or virtual machine goodies such as the debugger interface) and require considerable effort, skill and knowledge to be adapted.

Notwithstanding its importance, software measurement is clearly a nonfunctional concern and should not impact software development or efficiency. Aspect-oriented programming provides the mechanisms to deal with this kind of concern and to overcome these software measurement limitations.

In this paper, we present a different approach to dynamic software measurement based on aspect-oriented programming, together with the corresponding support framework, named AOPHiddenMetrics. The proposed approach makes the measurement process highly customizable and easy to use, reducing its invasiveness and its dependency on knowledge of the code.
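The flavor of keeping the measurement concern out of the measured code can be illustrated in a few lines. Note that this is only an illustrative sketch: the real AOPHiddenMetrics framework uses aspects, whereas here a plain higher-order wrapper stands in for an around-advice, and all names are invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative stand-in for an aspect-based dynamic metric: the measurement
// concern (call counting) lives entirely outside the business logic.
public class HiddenMetrics {

    public static final Map<String, Integer> callCounts = new HashMap<>();

    // "Around advice" stand-in: record the invocation, then run the body.
    public static <T> T measured(String name, Supplier<T> body) {
        callCounts.merge(name, 1, Integer::sum);
        return body.get();
    }

    // Business code stays oblivious of the measurement concern.
    public static int square(int x) {
        return measured("square", () -> x * x);
    }

    public static void main(String[] args) {
        square(3);
        square(4);
        System.out.println(callCounts.get("square")); // prints 2
    }
}
```

With a real aspect weaver, `square` would not even mention `measured`; the pointcut would select the call sites, which is precisely what reduces the dependency on knowledge of the measured code.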

[4]
Walter Cazzola and Alessandro Marchetto, “AOPHiddenMetrics”, Technical Report TR 19-07, Università degli Studi di Milano, Milano, Italy, June 2007. [ www: ]

[1]
Ying Liu, Walter Cazzola, and Bin Zhang, “Towards a Colored Reflective Petri-Net Approach to Model Self-Evolving Service-Oriented Architectures”, in Proceedings of the 17th Annual ACM Symposium on Applied Computing (SAC'12), Riva del Garda, Trento, Italy, March 2012, pp. 1858–1865, ACM. [ http ]
Service-based software systems may need to evolve during their execution. To support this, system evolution must be considered from the design phase. Reflective Petri nets separate the system from its evolution by describing both the system and how it can evolve. However, reflective Petri nets have some expressiveness limits and overcomplicate the consistency checking necessary during service evolution. In this paper, we extend the reflective Petri nets approach to overcome such limits and demonstrate this on a case study.

[2]
Lorenzo Capra and Walter Cazzola, “(Symbolic) State-Space Inspection of a Class of Dynamic Petri Nets”, in Proceedings of the Summer Computer Simulation Conference (SCSC'10), Ottawa, Canada, July 2010, pp. 522–530, ACM. [ www: ]

[3]
Lorenzo Capra and Walter Cazzola, “An Introduction to Reflective Petri Nets”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 9, pp. 191–217. IGI Global, November 2009. [ .pdf ]
Most discrete-event systems are subject to evolution during their lifecycle. Evolution often implies the development of new features and their integration into deployed systems. Taking evolution into account from the design phase is therefore mandatory. A common approach consists of hard-coding the foreseeable evolutions at the design level. Beyond the obvious difficulties of this approach, the system's design also gets polluted by details that do not concern functionality, which hamper analysis, reuse and maintenance. Petri Nets, as a central formalism for discrete-event systems, are not exempt from pollution when facing evolution. Embedding evolution in Petri nets requires expertise, as well as early knowledge of the evolution. The complexity of the resulting models is likely to affect the consolidated analysis algorithms for Petri nets. We introduce Reflective Petri nets, a formalism for dynamic discrete-event systems. Based on a reflective layout in which functional aspects are separated from evolution, this model preserves the descriptive effectiveness and the analysis capabilities of Petri nets. Reflective Petri nets are provided with a timed state-transition semantics.

[4]
Lorenzo Capra and Walter Cazzola, “Trying out Reflective Petri Nets on a Dynamic Workflow Case”, in Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications, Evon M. O. Abu-Taieh and Asim A. El Sheikh, Eds., chapter 10, pp. 218–233. IGI Global, November 2009. [ .pdf ]
Industrial/business processes are an evident example of discrete-event systems that are subject to evolution during their life-cycle. The design and management of dynamic workflows need adequate formal models and support tools to soundly handle possible changes occurring during workflow operation. The known, well-established workflow models, among which Petri nets play a central role, lack features for representing evolution. We propose a recent Petri net-based reflective layout, called Reflective Petri nets, as a formal model for dynamic workflows. A localized open problem is considered: how to determine which tasks should be redone and which ones should not when transferring a workflow instance from an old to a new template. The problem is addressed efficiently, but rather empirically, in a workflow management system. Our approach is formal, may be generalized, and is based on the preservation of classical Petri net structural properties, which permit an efficient characterization of workflow soundness.

[5]
Lorenzo Capra and Walter Cazzola, “Evolving System's Modeling and Simulation through Reflective Petri Nets”, in Proceedings of the 4th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE'09), Stefan Jablonski and Leszek Maciaszek, Eds., Milan, Italy, May 2009, INSTICC, pp. 59–70, INSTICC Press. [ .pdf ]
The design of dynamic discrete-event systems calls for adequate modeling formalisms and tools to manage possible changes occurring during the system's lifecycle. A common approach is to pollute the design with details that do not concern the current system behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general. A reflective Petri net model (based on classical Petri nets) was recently proposed to support the design of dynamic discrete-event systems and was applied to dynamic workflow management. The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying the latter to the (current) system only when necessary, results in a simple formal model on which the ability to verify properties typical of Petri nets is preserved. In this paper we provide reflective Petri nets with a (labeled) state-transition graph semantics.

[6]
Lorenzo Capra and Walter Cazzola, “Evolutionary Design through Reflective Petri Nets: an Application to Workflow”, in Proceedings of the 26th IASTED International Conference on Software Engineering (SE'08), Innsbruck, Austria, February 2008, pp. 200–207, ACTA Press. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes during workflow operation. A common approach is to pollute the workflow design with details that do not concern the current behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general. We propose and discuss the adoption of a recent Petri net-based reflective model as a support to dynamic workflow design. Keeping functional aspects separated from evolution results in a dynamic workflow model merging flexibility with the ability to formally verify basic workflow properties. A structural on-the-fly characterization of sound dynamic workflows is adopted, based on the preservation of Petri net free-choiceness. An application is presented for a localized open problem: how to determine which tasks should be redone and which ones should not when transferring a workflow instance from an old to a new template.

[7]
Lorenzo Capra and Walter Cazzola, “Self-Evolving Petri Nets”, Journal of Universal Computer Science, vol. 13, no. 13, pp. 2002–2034, December 2007. [ .pdf ]
Nowadays, software evolution is a very hot topic. It is particularly complex when it concerns critical and nonstop systems. Usually, these situations are tackled by hard-coding all the foreseeable evolutions into the application design and code.

Beyond the obvious difficulties of pursuing this approach, the application code and design also get polluted with details that do not concern the current system functionality and that hamper design analysis, code reuse and application maintenance in general. Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue.

The goal of this work is to propose a PN-based reflective framework that lets one model a system able to evolve, keeping functional aspects separated from evolutionary ones and applying evolution to the model only when necessary. Such an approach tries to keep the system's model as simple as possible, preserving (and exploiting) the ability to formally verify system properties typical of PNs, while granting adaptability.

[8]
Lorenzo Capra and Walter Cazzola, “A Reflective PN-based Approach to Dynamic Workflow Change”, in Proceedings of the 9th International Symposium in Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'07), Timisoara, Romania, September 2007, IEEE, pp. 533–540. [ .pdf ]
The design of dynamic workflows needs adequate modeling/specification formalisms and tools to soundly handle possible changes occurring during workflow operation. A common approach is to pollute the design with details that do not concern the current workflow behavior, but rather its evolution. That hampers analysis, reuse and maintenance in general.

We propose and discuss the adoption of a recent Petri net-based reflective model (based on classical PNs) as a support to dynamic workflow design, addressing a localized problem: how to determine which tasks should be redone and which ones should not when transferring a workflow instance from an old to a new template. The underlying idea is that keeping functional aspects separated from evolutionary ones, and applying evolution to the (current) workflow template only when necessary, results in a simple reference model on which the ability to formally verify typical workflow properties is preserved, thus favoring dependable adaptability.

[9]
Lorenzo Capra and Walter Cazzola, “A Petri-Net Based Reflective Framework for the Evolution of Dynamic Systems”, Electronic Notes on Theoretical Computer Science, vol. 159, pp. 41–59, 2006. [ .pdf ]
Nowadays, software evolution is a very hot topic. Many applications need to be updated or extended with new characteristics during their lifecycle. Software evolution is characterized by its huge cost and slow speed of implementation. Often, software evolution implies a redesign of the whole system, the development of new features and their integration into the existing and/or running system (this last step often implies a complete rebuilding of the system). A good evolution is carried out by evolving the system design information and then propagating the evolution to the implementation.

Petri Nets (PN), as a formalism for modeling and designing distributed/concurrent software systems, are not exempt from this issue. Often a system modeled through Petri nets has to be updated, and consequently the model has to be updated as well. Some kinds of evolution are foreseeable and could be hard-coded in the code or in the model, respectively.

Embedding evolutionary steps in the model or in the code, however, requires early and full knowledge of the evolution. The model itself would have to be augmented with details that do not concern the current system functionality and that jeopardize, or at least make very hard, the analysis and verification of system properties.

In this work, we propose a PN-based reflective framework that lets one model a system able to evolve, keeping functional aspects separated from evolutionary ones and applying evolution to the model only when necessary. Such an approach tries to keep the model as simple as possible, preserving (and exploiting) the ability to formally verify system properties typical of PNs, while granting model adaptability.
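To make the base level of such a framework concrete, here is a minimal place/transition net in Java: tokens in places, transitions that fire when all input places are marked. This sketch covers only the base-level model that the reflective framework operates on; the evolutionary meta-level (which would rewrite this net at run time) is deliberately out of scope, and the API is invented for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal place/transition Petri net: the base-level model a reflective
// evolutionary framework would observe and rewrite.
public class TinyPetriNet {
    public final Map<String, Integer> marking = new HashMap<>();        // tokens per place
    public final Map<String, List<String>> inputs = new HashMap<>();    // input places per transition
    public final Map<String, List<String>> outputs = new HashMap<>();   // output places per transition

    public void addTransition(String t, List<String> in, List<String> out) {
        inputs.put(t, in);
        outputs.put(t, out);
    }

    // A transition is enabled when every input place holds at least one token.
    public boolean enabled(String t) {
        return inputs.get(t).stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
    }

    // Firing consumes one token from each input place and produces one in each output place.
    public boolean fire(String t) {
        if (!enabled(t)) return false;
        inputs.get(t).forEach(p -> marking.merge(p, -1, Integer::sum));
        outputs.get(t).forEach(p -> marking.merge(p, 1, Integer::sum));
        return true;
    }

    public static void main(String[] args) {
        TinyPetriNet net = new TinyPetriNet();
        net.marking.put("p1", 1);
        net.addTransition("t1", List.of("p1"), List.of("p2"));
        System.out.println(net.fire("t1") + " " + net.marking);
    }
}
```

In the reflective layout, an evolutionary strategy would manipulate exactly these `inputs`/`outputs`/`marking` structures as reified data, applying a change to the running net only after it has been validated.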

[1]
Walter Cazzola and Dario Maggiorini, “Seamless Nomadic System-Aware Servants”, in Proceedings of the 37th Hawai'i International Conference on System Sciences (HICSS'04), Ralph H. Sprague, Jr, Ed., Big Island, Hawaii, January 2004, IEEE Computer Society Press. [ .pdf ]
The growing diffusion of wireless technologies is leading to the deployment of small-scale and location-dependent information services (LDISs). These new services call for provisioning schemes that are able to operate in a distributed environment and do not require network infrastructure. This paper describes an approach to a service-oriented middleware which enables a mobile device to be aware of the surrounding environment and to transparently exploit every LDIS discovered in the coverage area of the hosting wireless network. The paper introduces the seamless nomadic system-aware (SNA) servant. SNA servants run on mobile devices, discover LDISs and are not associated with any specific service. The paper also describes the key features needed to implement SNA servants and to render them interoperable and cross-platform on, at least, the .NET and JVM frameworks.

[2]
Walter Cazzola, “Remote Method Invocation as a First-Class Citizen”, Distributed Computing, vol. 16, no. 4, pp. 287–306, December 2003. [ DOI | .pdf ]
The classical remote method invocation (RMI) mechanism adopted by several object-based middleware platforms is `black box' in nature, and the RMI functionality, i.e., the RMI interaction policy and its configuration, is hard-coded into the application. This nature hinders software development and reuse, forcing the programmer to focus on communication details that are often marginal to the application being developed. Extending the RMI behavior with extra functionality is also very difficult, because the added code must be scattered among the entities involved in the communications.

This situation could be improved by developing the system in several separate layers, confining communications and related matters to specific layers. As demonstrated by recent work on reflective middleware, reflection represents a powerful tool for realizing such a separation and therefore overcoming the problems referred to above. Such an approach improves the separation of concerns between the communication-related algorithms and the functional aspects of an application. However, communications and all related concerns are not managed as a single unit separate from the rest of the application, which makes their reuse, extension and management difficult. As a consequence, communications concerns continue to be scattered across the meta-program, communication mechanisms continue to be black-box in nature, and there is only limited opportunity to adjust communication policies through configuration interfaces.

In this paper we examine the issues raised above, and propose a reflective approach especially designed to open up the Java RMI mechanism. Our proposal consists of a new reflective model, called multi-channel reification, that reflects on and reifies communication channels, i.e., it renders communication channels first-class citizens. This model is designed both for developing new communication mechanisms and for extending the behavior of communication mechanisms provided by the underlying system. Our approach is embodied in a framework called mChaRM which is described in detail in this paper.
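The core idea of rendering a communication channel a first-class citizen can be sketched locally with a Java dynamic proxy: every invocation flows through a channel object, which can observe, enrich or redirect it before delivery. This is only an illustration of the reification concept, not mChaRM's actual (distributed) machinery; all names are invented.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// A communication channel reified as an object of its own, via a dynamic proxy.
public class ReifiedChannel {
    public interface Service { String greet(String who); }

    public static class ServiceImpl implements Service {
        public String greet(String who) { return "hello " + who; }
    }

    // The channel: a first-class object that every message passes through.
    public static class Channel implements InvocationHandler {
        final Object target;
        public final List<String> log = new ArrayList<>();
        Channel(Object target) { this.target = target; }
        public Object invoke(Object proxy, Method m, Object[] args) throws Exception {
            log.add(m.getName());          // the channel reifies the invocation
            return m.invoke(target, args); // ...then delivers it to the target
        }
    }

    // Connect a client to a target through a channel; the channel is exposed
    // so its behavior can be inspected and customized independently.
    public static Service connect(Service target, Channel[] out) {
        Channel ch = new Channel(target);
        out[0] = ch;
        return (Service) Proxy.newProxyInstance(
            Service.class.getClassLoader(), new Class<?>[]{Service.class}, ch);
    }

    public static void main(String[] args) {
        Channel[] ch = new Channel[1];
        Service s = connect(new ServiceImpl(), ch);
        System.out.println(s.greet("world") + " via " + ch[0].log);
    }
}
```

Because the channel is an ordinary object, new communication behaviors (multicast, logging, validation) become subclasses of `Channel` rather than code scattered among callers and callees, which is the separation the multi-channel reification model aims at.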

[3]
Dario Maggiorini, Walter Cazzola, B.S. Prabhu, and Rajit Gadh, “A Service-Oriented Middleware for Seamless Nomadic System-Aware (SNA) Servants”, White paper, WINMEC: Wireless INternet for the Mobile Enterprise Consortium, March 2003. [ .pdf ]
In the last few years there has been a considerable penetration of wireless technology into everyday life. This penetration has also increased the availability of Location-Dependent Information Services (LDISs), such as local information access (e.g., traffic reports, news, etc.), nearest-neighbor queries (such as finding the nearest restaurant, gas station, medical facility, ATM, etc.) and others.

New wireless environments and paradigms are continuously evolving and novel LDISs are continuously being deployed. Such growth implies the need to deal with:

1. services without standard interfaces: the same or similar LDISs offered by different vendors through different APIs, although with the same functional behavior;

2. services deployed dynamically: LDISs made available on a need basis or when the scenario dynamically mutates, additionally providing dynamic roaming between services and dynamic service interchangeability;

3. non-classified services (i.e., novel services).

[4]
Massimo Ancona, Walter Cazzola, and Daniele D'Agostino, “Smart Data Caching in Archeological Wireless Applications: the PAST Solution”, in Proceedings of the 11th Euromicro Conference on Parallel, Distributed and Network-Based Processing (Euromicro PDP 2003), Andrea Clematis, Ed., Genova, Italy, February 2003, pp. 532–536, IEEE Computer Society Press. [ .pdf ]
Wireless computing, because of the limited memory capacity of palmtops, forces the separation of the data (stored on a remote server) from the application (running on the palmtops) that uses them. In applications working on data that change frequently, several kilobytes of data are exchanged between the server and the client palmtops. It is fairly evident that such a tight coupling may easily saturate the network bandwidth when many palmtops are used in parallel, thus degrading the performance of the applications running on them. This paper shows a way to reduce this waste of bandwidth by making the best use of the palmtop memory for data caching. The proposed smart data caching is based on context information. We have also applied our method and analyzed its features in a specific application: an electronic guide to archeological sites in the PAST EC IT project.

[5]
Walter Cazzola, Massimo Ancona, Fabio Canepa, Massimo Mancini, and Vanja Siccardi, “Enhancing Java to Support Object Groups”, in Proceedings of the Third Conference on Recent Object-Oriented Trends (ROOTS'02), Bergen, Norway, April 2002. [ .pdf ]
In this paper we show how to enhance the Java RMI framework to support object groups. The package we have developed allows programmers to dynamically deal with groups of servers all implementing the same interface. Our group mechanism can be used both to improve reliability by preventing system failures and to implement processor-farm parallelism. Each service request dispatched to an object group returns all the values computed by the group members, permitting the implementation of both kinds of applications. These two approaches differ both in how computation failures are handled and in the semantics of the implemented interface. Our extension is achieved by enriching the classic RMI framework and the existing RMI registry with new functionality. From the user's point of view, the multicast RMI acts just like the traditional RMI system, and indeed the same architecture has been used.
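The group-invocation semantics described above (one request dispatched to every member, all computed values returned) can be sketched as a purely local simulation. No actual RMI is involved here, and the `ObjectGroup` API is invented for illustration.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Local simulation of multicast invocation on a group of servers that all
// implement the same interface: every member receives the request and every
// result is returned to the caller.
public class ObjectGroup<T> {
    final List<T> members;

    public ObjectGroup(List<T> members) { this.members = members; }

    // Dispatch the same request to every member and collect every result.
    // Returning all values supports both replication (compare/pick a result)
    // and processor-farm parallelism (each member computes its share).
    public <R> List<R> invokeAll(Function<T, R> request) {
        return members.stream().map(request).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ObjectGroup<Integer> group = new ObjectGroup<>(List.of(1, 2, 3));
        System.out.println(group.invokeAll(seed -> seed * 10)); // prints [10, 20, 30]
    }
}
```

In the real package the members would be remote stubs obtained from the extended RMI registry, and failure handling would differ between the fault-tolerant and parallel modes; this sketch only captures the all-results calling convention.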

[6]
Walter Cazzola, “mChaRM: Reflective Middleware with a Global View of Communications”, IEEE Distributed System On-Line, vol. 3, no. 2, February 2002. [ http ]
The main objective of remote-method-invocation- and object-based middleware is to provide a convenient environment for the realization of distributed computations. In most cases, unfortunately, interaction policies in these middleware platforms are hardwired into the platform itself. Some platforms, e.g., CORBA's interceptors, offer means to redefine such details but their flexibility is limited to the possibilities that the designer has foreseen.

In this way, distributed algorithms must be embedded exclusively in the application code, breaking any separation of concerns between functional and nonfunctional code. Some programming languages, like Java, disguise remote interactions as local calls, thus rendering their presence transparent to the programmer. However, their management is neither transparent nor easily hidden from the programmer.

We can summarize these kinds of problems with current middleware platforms as follows:

1. interaction policies are hidden from the programmer who cannot customize them (lack of adaptability);

2. communication, synchronization, and tuning code is intertwined with application code (lack of separation of concerns);

3. algorithms are scattered among several objects, thus forcing the programmer to explicitly coordinate their work (lack of global view).

[7]
Walter Cazzola, Massimo Ancona, Fabio Canepa, Massimo Mancini, and Vanja Siccardi, “Shifting Up Java RMI from P2P to Multi-Point”, Technical Report DISI-TR-01-13, DISI, Università degli Studi di Genova, December 2001. [ .pdf ]
In this paper we describe how to realize a Java RMI framework supporting multi-point method invocation. The package we have realized allows programmers to build groups of servers that can provide services in two different modes: fault-tolerant and parallel. These modes differ in how they handle computation failures. Our extension is based on the creation of entities that maintain a common state among different servers. This has been done by extending the existing RMI registry. From the user's point of view, the multi-point RMI acts just like the traditional RMI system; indeed, the same architecture is used.

[8]
Massimo Ancona, Walter Cazzola, Enrico Martinuzzi, Paolo Raffo, and Ioan Bogdan Vasian, “Clustering Algorithms for the Optimization of Communication Graphs”, in Proceedings of the Fourth Conference Italo-Latino American of Industrial and Applied Mathematics, Havana, Cuba, March 2001, pp. 328–334. [ .pdf ]
One of the main goals in optimizing communication networks is to enhance performance by minimizing the number of message hops, i.e., the number of graph nodes traversed by a message. Most optimization techniques are based on clustering, i.e., the network layout is reconfigured into sub-networks. Network clustering has been widely studied in the literature, but most of the available algorithms are application dependent.

In this paper we restrict our attention to algorithms based on the location of median points, in order to build clusters with a balanced number of elements and to minimize communication time. We present two algorithms and experimental results on the quality of the computed clusterings, in terms of the minimum number of computed hops. One algorithm is based on the well-known multi-median heuristic, while the other adopts a greedy approach, i.e., at each step the algorithm computes clusters farther and farther from each central node.

To the resulting clustering we apply a further step, which consists in finding a virtual path layout according to Gerstel's (VPPL) algorithm. The criterion adopted for our experimental comparisons is the optimality, in terms of the number of signal hops, of the achieved virtual path layout. The experiments are carried out on a set of networks representing real environments.
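The median-based clustering the abstract refers to can be illustrated with its simplest building block: assigning every network node to its nearest median so that clusters gather around central points and messages reach a median in few hops. The sketch below is illustrative only — it is not the paper's algorithm, and the class and method names are invented; hop counts are given as a plain distance matrix.

```java
import java.util.Arrays;

// Illustrative sketch (not the paper's algorithm): the basic
// median-assignment step used by median-based network clustering.
public class MedianClustering {

    // dist[i][j] = hop distance between node i and node j.
    // medians = indices of the chosen median nodes.
    // Returns, for each node, the index (into medians) of its cluster.
    public static int[] assign(int[][] dist, int[] medians) {
        int[] cluster = new int[dist.length];
        for (int i = 0; i < dist.length; i++) {
            int best = 0;
            for (int m = 1; m < medians.length; m++)
                if (dist[i][medians[m]] < dist[i][medians[best]])
                    best = m;
            cluster[i] = best; // nearest median wins
        }
        return cluster;
    }

    public static void main(String[] args) {
        int[][] dist = {
            {0, 1, 3, 4},
            {1, 0, 2, 3},
            {3, 2, 0, 1},
            {4, 3, 1, 0}
        };
        // Medians at nodes 0 and 3: nodes 0,1 join the first cluster,
        // nodes 2,3 the second.
        System.out.println(Arrays.toString(assign(dist, new int[]{0, 3})));
        // prints [0, 0, 1, 1]
    }
}
```

A full multi-median heuristic would additionally re-choose the medians and re-balance cluster sizes, which is where the two algorithms compared in the paper differ.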

[9]
Walter Cazzola, Communication-Oriented Reflection: a Way to Open Up the RMI Mechanism, PhD thesis, Università degli Studi di Milano, Milano, Italy, February 2001. [ .pdf ]
The Problem
From our experience, RMI-based frameworks, and in general all frameworks supporting distributed computation, suffer from a number of problems. We detected at least three issues related to their flexibility and applicability.

Most of them lack flexibility. Their main duty consists in providing a friendly environment suitable for easily realizing distributed computations. Unfortunately, interaction policies are hardwired into the framework. Unless otherwise foreseen, it is a hard job to change, for example, how messages are marshaled/unmarshaled, or the dispatching algorithm the framework adopts. Some frameworks provide limited mechanisms to redefine such details, but their flexibility is limited to the possibilities that the designer has foreseen.

Distributed algorithms are embedded in the application code, breaking the well-known software engineering requirement termed separation of concerns. Some programming languages, like Java, mask remote interactions (i.e., remote method or procedure calls) as local calls, rendering their presence transparent to the programmer. However, their management — i.e., tuning the environment needed to correctly carry out remote computations, and synchronizing the objects involved — is neither transparent nor easily hidden from the programmer. Such behavior hinders the reuse of distributed algorithms.

Object-oriented distributed programming is not distributed object-oriented programming. It is a hard job to write object-oriented distributed applications based on information managed by several separate entities. Algorithms originally designed as a whole have to be scattered among several entities, and none of these entities directly knows the whole algorithm. This increases the complexity of the code the programmer has to write, because (s)he has to extend the original algorithm with statements for synchronizing and connecting all the remote objects involved in the computation. Moreover, scattering the algorithm among several objects contrasts with the object-oriented philosophy, which states that data and the algorithms managing them are encapsulated in the same entity: no single object has a global view of all the data it manages, so we can say that this approach lacks a global view. The lack of a global view forces the programmer to tightly couple two or more distributed objects.

A reflective approach, as stated in [Briot98], can be considered as the glue sticking together distributed and object-oriented programming and filling the gaps in their integration. Reflection improves flexibility, allows developers to provide their own solutions to communication problems, and keeps communication code separated from the application code, and completely encapsulated into the meta-level.

Hence reflection can help to solve most of the problems we detected. Reflection permits exposing the implementation details of a system — in our case, the interaction policies — and makes them easy to manipulate. A reflective approach also makes it easy to separate interaction management from the application code. Using reflection and some syntactic sugar for masking the remote calls, we can achieve a good separation of concerns in distributed environments as well. Thanks to such considerations, many reflective distributed middleware platforms have been developed. Their main goal consists both in overcoming the lack of flexibility and in decoupling the interaction code from the application code.

However, reflective distributed middleware platforms exhibit the same problems detected in plain distributed middleware. They still consider each remote invocation in terms of the entities involved in the communication (i.e., the client, the server, the message, and so on) and not as a single entity. Hence the global view requirement is not met. This is due to the fact that most of the meta-models presented so far, and used to design the existing reflective middleware, are object-based models. In these models, every object is associated with a meta-object, which traps the messages sent to the object and implements the behavior of that invocation. Such meta-models inherit the lack of global view from the object-oriented methodology, which encapsulates computation orthogonally to communication.

Hence, these approaches are not appropriate for handling all the aspects of distributed computing. In particular, when adopting an object-based model to monitor distributed communications, the meta-programmer often has to duplicate the base-level communication graph at the meta-level, increasing the complexity of the meta-program. Thus, object-based approaches to reflection on communications move the well-known problem [Videira-Lopez95] of nonfunctional code intertwined with functional code from the base level to the meta-level. Simulating a base-level communication at the meta-level makes it possible to perform meta-computations related to either the sending or the receiving action, but not meta-computations related to the whole communication, or involving information owned by both the sender and the receiver, without resorting to dirty tricks. This problem goes under the name of lack of global view.

Besides, object-based reflective approaches, and the reflective middleware based on them, only allow global changes to the mechanisms responsible for message dispatching, neglecting the management of each single message. Hence they fail to differentiate the meta-behavior related to each single exchanged message. In order to apply a different meta-behavior to each message, or to each group of messages, the meta-programmer has to write the meta-program planning a specific meta-behavior for each kind of incoming message. Unfortunately, in this way the size of the meta-program grows to the detriment of its readability and maintainability.

For these reasons, a crucial issue in opening up an RMI-based framework consists in choosing a good meta-model that avoids the lack of global view and differentiates the meta-behavior for each exchanged message.

Our Solution
From the problem analysis we have briefly presented, we learned that in order to solve the drawbacks of RMI-based frameworks we have to provide an open RMI mechanism, i.e., a reflective RMI mechanism, which exposes its details for manipulation by the meta-program and allows the meta-program to manage each communication separately and as a single entity. The main goal of this work consists in designing such a mechanism using a reflective approach.

To render the impact of reflection on object-oriented distributed frameworks effective, and to obtain a complete separation of concerns, we need new models and frameworks especially designed for communication-oriented reflection, i.e., a reflective approach suited to RMI-based communication which allows the meta-programmer to enrich, manipulate, and replace each remote method invocation and its semantics. That is, we need to encapsulate message exchange in a single logical meta-object, instead of scattering the relevant information among several meta-objects and mimicking the real communication with one defined by the meta-programmer among those meta-objects, as is done in traditional approaches.

To fulfill this commitment we designed a new model, called the multi-channel reification model. It is based on the idea of considering a method call as a message sent through a logical channel established between a set of objects requiring a service and a set of objects providing that service. This logical channel is reified into a logical object called a multi-channel, which monitors message exchange and enriches the underlying communication semantics with new features. Each multi-channel can be viewed as an interface established between the senders and the receivers of the messages, and is characterized by its behavior, termed its kind, and by the receivers it is connected to:

multi-channel ≡ (kind, receiver₁, ..., receiverₙ)

Thanks to this characterization it is possible to connect several multi-channels to the same group of objects. In such a case, each multi-channel is characterized by a different kind and filters a different pattern of messages.

This model permits the design of an open RMI-based mechanism that potentially overcomes the problems exposed above.

In this way, each communication channel is reified into a meta-entity. Such a meta-entity has complete access to all details of the communications it filters, i.e., the policies on both the sender and the receiver sides and, of course, the messages themselves. A channel realizes a closed meta-system with respect to the communications: it encapsulates all base-level aspects related to the communication, providing the global view feature.

Of course, this model keeps all the properties covered by the other reflective models, such as transparency and separation of concerns. Hence the approach also sidesteps the problems already solved using reflection. Protocols and other implementation details are exposed to the meta-programmer's manipulations, and remote method invocation management is completely separated from the application code.

Moreover, through the kind mechanism we can differentiate the behavior applied to a specified pattern of messages. A set of multi-channels (each with a different kind) can thus be associated with the same communication channel, and each will operate on a different set of messages. In this way the channel's code implements a single behavior that it uniformly applies to all the messages it filters.
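The multi-channel characterization above — a single logical meta-object with a kind and a set of receivers, multi-channel ≡ (kind, receiver₁, ..., receiverₙ) — can be sketched as plain Java. This is a hypothetical illustration, not mChaRM's actual API: the class names and the use of `IntUnaryOperator` for receivers are invented, and what it shows is only that one reified channel sees the message and every receiver's reply at once.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Hypothetical sketch of a multi-channel: one logical meta-object
// characterized by its kind (behavior) and the receivers it connects.
class MultiChannel {
    final String kind;
    private final List<IntUnaryOperator> receivers;
    private int filtered = 0; // channel-level meta-state

    MultiChannel(String kind, List<IntUnaryOperator> receivers) {
        this.kind = kind;
        this.receivers = receivers;
    }

    // Reified send: the meta-object delivers the message to all receivers
    // and can enrich the semantics (here: counting messages and collecting
    // every reply). Because the whole exchange passes through one entity,
    // it has the global view that per-object meta-objects lack.
    List<Integer> send(int message) {
        filtered++;
        List<Integer> replies = new ArrayList<>();
        for (IntUnaryOperator r : receivers)
            replies.add(r.applyAsInt(message));
        return replies;
    }

    int messagesFiltered() { return filtered; }
}

public class MultiChannelDemo {
    public static void main(String[] args) {
        MultiChannel ch = new MultiChannel("multicast",
                List.of(x -> x + 1, x -> x * 2));
        System.out.println(ch.send(10));           // replies from all receivers: [11, 20]
        System.out.println(ch.messagesFiltered()); // meta-state about the whole channel: 1
    }
}
```

A second multi-channel with a different kind could be attached to the same receivers to filter a different pattern of messages, as the text describes.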

mChaRM is a framework, developed by the authors, that opens up the RMI mechanism supplied by Java. The framework supplies a development and run-time environment based on the multi-channel reification model: multi-channels are developed in Java, and the underlying mChaRM framework dynamically realizes the context switching and the causal connection link. A beta version of mChaRM, documentation, and examples are available from:

http://cazzola.di.unimi.it/mChaRM_webpage.html

The system provides an RMI-based programming environment. The supplied RMI mechanism is multicast (i.e., it permits remotely invoking a method on several servers), open (the RMI mechanism is fully customizable through reflection), and globally aware of its aspects. Some example applications are also provided.

[10]
Walter Cazzola, “Communication Oriented Reflection”, in ECOOP'00 Workshop Reader, Jacques Malenfant, Sabine Moisan, and Ana Moreira, Eds., Lecture Notes in Computer Science 1964, pp. 287–288. Springer-Verlag, December 2000. [ www: ]

[11]
Massimo Ancona, Walter Cazzola, Paolo Raffo, and Ioan Bogdan Vasian, “Virtual Path Layout Design Via Network Clustering”, in Proceedings of International Conference Communications 2000, Bucharest, Romania, December 2000, IEEE, pp. 352–360. [ www: ]

[12]
Walter Cazzola and Massimo Ancona, “mChaRM: a Reflective Middleware for Communication-Based Reflection”, Technical Report DISI-TR-00-09, DISI, Università degli Studi di Genova, May 2000. [ www: ]
