Advanced Ibero-American School on Software Engineering (EIbAIS)

The Advanced Ibero-American School on Software Engineering (EIbAIS) is the CIbSE initiative for disseminating Software Engineering knowledge throughout Ibero-America. It started as a natural evolution of the CIbSE tutorials and short courses, adding prospective topics in the field that are not limited to the main CIbSE tracks: technologies, experimentation, and requirements.

EIbAIS aims to provide a forum for discussing software engineering, its related technologies, and its theoretical foundations, open to practitioners, undergraduate and graduate students, and researchers. As in any CIbSE-related activity, all lectures are given by volunteers committed to the CIbSE community and to spreading Software Engineering knowledge in Ibero-America.

EIbAIS is organized into two basic categories of plenary sessions, held over two days: state-of-the-practice and state-of-the-art modules.

The state-of-the-practice modules offer discussions on topics of general interest, usually reflecting local demand. These modules help participants update their knowledge and address practical issues by presenting evidence-based software engineering results to the audience. Examples of state-of-the-practice topics include CASE tools, software testing of conventional systems, scenario-based specifications, surveys, and controlled experiments.

The state-of-the-art modules offer discussions on topics of prospective interest, usually reflecting the perspectives of the Software Engineering community. These modules provide information on ongoing and upcoming candidate software engineering technologies that could represent breakthrough concepts in the field. Examples of state-of-the-art topics include requirements specification for ubiquitous systems, software testing of context-aware systems, software engineering for the Internet of Everything, simulation-based experiments in software engineering, and synthesis of evidence.

All participants who take part in the plenary sessions will receive a certificate of participation indicating the modules, lecturers, and contents.

The main topics for both types of EIbAIS modules will be announced soon!

EIbAIS chairs

  • Beatriz Marín (Universidad Politécnica de Valencia, Spain)
  • Efraín Rodrigo Fonseca Carrera (Universidad de las Fuerzas Armadas ESPE, Ecuador)

EIbAIS talks

Carolyn Seaman

Carolyn Seaman is a Professor of Information Systems at the University of Maryland Baltimore County (UMBC), where she also directs the Center for Women in Technology. Her research consists mainly of empirical studies of software engineering, with particular emphases on maintenance, organizational structure, communication, measurement, and technical debt. She also investigates qualitative research methods in software engineering, as well as computing pedagogy. She holds a PhD in Computer Science from the University of Maryland, College Park, an MS from Georgia Tech, and a BA from the College of Wooster (Ohio).

Decision Making in Software Engineering: The Central Role of Technical Debt 

Description

Technical Debt is a metaphor that captures the common tradeoff in software development projects between short-term pressures (e.g., delivery time) and long-term concerns (e.g., maintainability). It refers to the existing liabilities in a software product that were created in response to a short-term pressure but that pose a risk to the team's ability to maintain the product over time. Common types of technical debt include overly complex code, poorly structured code, undocumented code, degradation of the software architecture, and insufficient testing. Decisions about whether to incur debt, when to pay off debt, and which debt to pay off perfectly encapsulate the short- vs. long-term tradeoffs that are typical in software engineering. In fact, it can be argued that nearly all decisions made during a software project are in some way related to Technical Debt. Thus, deeply understanding and improving Technical Debt decision making would have a significant impact on the management of software projects in general. In this keynote, we will review current streams of decision-making research in Technical Debt, in software engineering in general, and in related disciplines.
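
To make the metaphor concrete, here is a small, hypothetical illustration (not taken from the keynote) of a debt-incurring shortcut and the refactoring that would pay it off; the pricing rule, names, and numbers are invented purely for illustration:

    # Debt-incurring shortcut: an undocumented magic-number rule,
    # written quickly to meet a deadline (hypothetical example).
    def price_with_discount(amount):
        if amount > 100:
            return amount * 0.9  # why 0.9? why 100? nobody remembers
        return amount

    # Paying off the debt: the same behavior, made explicit and maintainable.
    BULK_THRESHOLD = 100       # order size that qualifies for a discount
    BULK_DISCOUNT_RATE = 0.10  # 10% off bulk orders

    def price_with_discount_refactored(amount: float) -> float:
        """Apply the bulk discount policy to an order amount."""
        if amount > BULK_THRESHOLD:
            return amount * (1 - BULK_DISCOUNT_RATE)
        return amount

The shortcut works today, but every future change to the pricing policy becomes slower and riskier; that gap is the "interest" the debt accrues.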


Luis Olsina

Luis Olsina is a Regular Professor (exclusive dedication) at the Faculty of Engineering, National University of La Pampa, Argentina, and Senior Researcher and Director of the R&D group GIDIS_Web. He received the degrees of Doctor of Science (Software/Web Engineering area) and Magister in Software Engineering, both from the National University of La Plata, Argentina, in addition to a degree in Information Systems and one as a Business Analyst. In the last 25 years he has published more than 170 scientific articles in international and national journals and conferences, and he co-edited the book Web Engineering: Modeling and Implementing Web Applications (Springer, HCIS Series) with his colleagues Rossi, Schwabe, and Shepherd.

Luis has co-chaired scientific events such as the Web Engineering Workshop held in the USA within the framework of ICSE 2002 (Int’l Conference on Software Engineering); the 2002 and 2003 ICWE conferences held in Argentina and Spain; the IDEAS’04 workshop (now CIbSE) held in Peru; LA-Web 2005 and 2008 held in Argentina and Brazil; the Web Engineering Track of the WWW’06 conference held in Edinburgh, UK; and, more recently, the Requirements Engineering Track of QUATIC 2020. His areas of greatest interest are Software/Web Engineering, Quality Measurement and Evaluation Strategies, Methods and Processes, and Specification of Ontologies at the Foundational, Core, and Domain levels. He has been invited to give lectures, tutorials, and postgraduate courses on these topics in various countries around the world.


ThingFO: A foundational ontology useful for enriching various terminologies such as Process and Test

Description

The goal of this tutorial is to introduce participants to a foundational ontology useful for all sciences. It is called ThingFO and sits at the highest, or foundational, level (FO) of a four-layer ontological architecture. The next level, the core level (CO), holds ontologies such as Process, Situation, and Project. At both of these levels, the ontology terms are independent of any domain. The next level, the domain level (DO), holds ontologies such as Test, Evaluation, and Functional and Non-Functional Requirements. The last layer of the architecture is the instance level.

ThingFO comprises a reduced set of terms (Thing, Thing Category, and Assertion) that refer both to particular and universal elements of the world and to different types of statements about them. These three terms (together with their relationships) are useful for enriching terms of the process (ProcessCO) and testing (TestTDO) ontologies, as well as of any other. The tutorial will illustrate the usefulness of ThingFO by enriching process and testing terminologies in particular. Terminologies that are explicit, complete, and concise help to communicate and represent, effectively and efficiently, the specifications of the strategies, processes, methods, and tools used in any engineering or science to solve problems of development, maintenance, evaluation, and testing.
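
To make the layered architecture more tangible, here is a minimal, hypothetical Python sketch (not part of the tutorial) that models the three ThingFO terms and shows how core- and domain-level terms could specialize them; every name and attribute beyond Thing, ThingCategory, and Assertion is an illustrative assumption:

    from dataclasses import dataclass, field

    # Foundational level (FO): the three ThingFO terms named in the abstract.
    @dataclass
    class Thing:
        """A particular element of the world."""
        name: str

    @dataclass
    class ThingCategory:
        """A universal: a category grouping Things."""
        name: str
        members: list = field(default_factory=list)

    @dataclass
    class Assertion:
        """A statement about a Thing or a ThingCategory."""
        about: object
        statement: str

    # Core level (CO), illustrative: a domain-independent Process term.
    @dataclass
    class Process(Thing):
        steps: list = field(default_factory=list)

    # Domain level (DO), illustrative: a term from a testing ontology.
    @dataclass
    class TestCase(Thing):
        expected_result: str = ""

    # Instance level: concrete objects and a statement about one of them.
    login_test = TestCase(name="login-succeeds", expected_result="HTTP 200")
    claim = Assertion(about=login_test, statement="covers the happy path")
    print(claim.about.name, "->", claim.statement)

The point of the sketch is only the layering: lower-level terms are defined by enriching (here, naively, by subclassing) terms from the level above.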


Oscar Dieste

Oscar Dieste received his BS and MS in Computing from the University of La Coruña and his PhD from the University of Castilla-La Mancha. He is a researcher with the UPM’s School of Computer Engineering. He was previously with the University of Colorado at Colorado Springs (as a Fulbright scholar), the Complutense University of Madrid, and the Alfonso X el Sabio University. His research interests include empirical software engineering and requirements engineering.

Power analysis made easy

Description

Power analysis is an activity performed during experimental design that informs experimenters of the minimum sample size needed to detect a given effect. The required sample size is a crucial criterion for deciding whether an experiment is worthwhile: if the minimum sample size cannot be achieved, the experiment will not give accurate results, and it might be better not to experiment at all.

Software Engineering (SE) experiments typically do not include a power analysis. I firmly believe that SE experimenters do not consider power analysis useless; rather, they have trouble conducting it. Honestly, me too. Tools such as G*Power are easy to use, but I feel uncertain when I report the results.

However, power analysis is relatively easy. This tutorial will show how to create R scripts that estimate the power of linear models (ANOVAs, mixed models, etc.) using Monte Carlo methods. The concepts apply to arbitrary analysis approaches, e.g., chi-squared tests, but we will cover that part only if we have enough time. I hope my fellow attendees will report power analyses in their papers in the future.
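
To give a flavor of the Monte Carlo idea the tutorial builds on (the tutorial itself uses R), here is a minimal Python sketch that estimates the power of a simple two-group comparison by simulating data at an assumed effect size and counting how often a t-test rejects the null hypothesis; the effect size, group sizes, and alpha below are illustrative assumptions, not values from the tutorial:

    import numpy as np
    from scipy import stats

    def estimate_power(n_per_group, effect_size=0.5, alpha=0.05,
                       n_sims=5000, seed=42):
        """Monte Carlo power estimate for a two-sample t-test.

        Simulates two normal groups whose means differ by `effect_size`
        standard deviations, tests each simulated data set, and returns
        the fraction of runs that reject H0 at level `alpha`.
        """
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, 1.0, n_per_group)
            treatment = rng.normal(effect_size, 1.0, n_per_group)
            _, p_value = stats.ttest_ind(control, treatment)
            if p_value < alpha:
                rejections += 1
        return rejections / n_sims

    # Scan group sizes to find the smallest one reaching ~80% power.
    for n in (20, 40, 60, 80, 100):
        print(n, estimate_power(n))

The same loop generalizes to ANOVAs or mixed models by swapping in the corresponding data-generating process and statistical test, which is the approach the tutorial develops in R.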