Opponentship List June 2019


DVA423

MEASURING COMPLEXITY OF NATURAL LANGUAGE REQUIREMENTS IN INDUSTRIAL CONTROL SYSTEMS

Kostadin Rajkovic
Advisor: Eduard Paul Enoiu, Antonio Cicchetti
Examiner: Jan Carlson

Abstract: Requirements specification documents are one of the main sources of guidance in software engineering projects, and they contribute to the definition of the final product and its attributes. They often contain text, graphs, figures and diagrams; however, in industry they are still mostly written in Natural Language (NL), which is also a convenient way of representing them. With the increasing size of software projects in industrial systems, requirements specification documents often grow in size and complexity, which can make them difficult to analyze. There is a need to provide stakeholders with a way of analyzing requirements in order to develop software projects more efficiently.

In this thesis we investigate how the complexity of textual requirements can be measured in industrial systems. A set of requirements complexity measures was selected from the literature and adapted for application to real-world requirements specification documents. These measures were implemented in a tool called RCM and evaluated on requirements documentation provided by Bombardier Transportation AB. The statistical correlation between the selected measures was investigated on a sample of data from the provided documentation, and the analysis showed a significant correlation between a couple of the selected measures. In addition, a focus group was conducted with the goal of exploring the potential use of these metrics and the RCM tool in industrial systems, as well as the areas of potential improvement that future research could investigate.
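
As an illustration of the correlation analysis described above, a minimal Python sketch is shown below. It is not the RCM tool itself: the two textual measures (word count and average sentence length) and the use of Spearman's rank correlation are illustrative assumptions, since the abstract does not name the selected measures or the correlation test.

```python
# Hypothetical complexity measures and a rank-correlation test between them,
# computed over a small set of made-up requirements.
import re
from scipy.stats import spearmanr

def word_count(req: str) -> int:
    """Number of word tokens in a requirement."""
    return len(re.findall(r"\w+", req))

def avg_sentence_length(req: str) -> float:
    """Average number of words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", req) if s.strip()]
    return sum(word_count(s) for s in sentences) / max(len(sentences), 1)

requirements = [
    "The system shall log every command. Logs shall be retained for 30 days.",
    "When the door is open, the train shall not move.",
    "The operator shall be able to reset the unit within 5 seconds of a fault.",
]

m1 = [word_count(r) for r in requirements]
m2 = [avg_sentence_length(r) for r in requirements]
rho, p_value = spearmanr(m1, m2)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```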


DVA423

Test script design approaches supporting reusability, maintainability and review process

Aleksandar Acimovic, Aleksandar Bajceta
Advisor: Adnan Causevic
Examiner: Kristina Lundqvist

Abstract: Software testing is widely considered to be one of the most important parts of the software development life-cycle. In this research, we investigated potential improvements in the testing process and in the design of automated test scripts inside Bombardier Transportation (BT). For the creation of automated test scripts, BT uses a group of programs called TAF (Test Automation Framework). These scripts are used for testing the Train Control Management System (TCMS), the software that manages the train. TAF can export its test scripts in XML format, and these XML scripts were analyzed in order to identify the most frequent changes. To better understand the life cycle of automated test scripts, the official documentation that defines the Verification and Validation process inside BT was analyzed, and an interview was conducted with one of the persons responsible for testing. We believe that we have found a possible solution for improving the testing process and the creation of automated test scripts at BT, and to evaluate it a proof-of-concept tool was developed. The main idea behind the tool is to write test scripts using keywords that are based on the analysis conducted on the test specification documentation. These keywords represent frequent actions that are tested on the train. Storing those actions as keywords increases the reusability of the test scripts, and because the keywords are based on natural language, they also have a positive effect on the readability and maintainability of the test scripts.


DVA423

Applicability study of software architectures in the discrete manufacturing domain.

Dhespina Carhoshi, Ermal Bizhuta
Advisor: Mohammad Ashjaei, Severine Sentilles
Examiner: Thomas Nolte

Abstract: Manufacturing, under the umbrella of the latest industrial revolution, has gone through enormous changes in the last decades, evolving into what we now know as smart manufacturing. Different companies and entities have developed their own versions of architectures for intelligent and digitalized manufacturing systems. Devising a flexible and safe architecture is one of the first steps towards a system that is intended to be applicable in different environments, regardless of the vast variety of possibilities available. For this purpose, this thesis presents an investigation of state-of-the-art solutions among the most recent digitalized cloud-based system architectures in the domain of discrete manufacturing. Based on an initial system architecture conceived by the company ABB, an evaluation of this architecture was conducted, taking into consideration the existing systematic approaches to the digitalization of this industry. We investigate, describe and evaluate the limitations and strengths of the most recent and well-known architectural approaches to cloud robotics. Finally, a few key remarks are made regarding ABB's initial solution as well as the industry in general.


DVA428

Performance Study and Analysis of Time Sensitive Networking.

Haris Suljic, Mia Muminovic
Advisor: Mohammad Ashjaei
Examiner: Saad Mubeen

Abstract: Modern technology requires reliable, fast, and cheap networks as a backbone for data transmission. Among many available solutions, switched Ethernet combined with the Time Sensitive Networking (TSN) standard excels because it provides high bandwidth and real-time characteristics while utilizing low-cost hardware. For industry to adopt this technology, extensive performance studies need to be conducted, and this thesis provides one. Concretely, the thesis examines the performance of two amendments, IEEE 802.1Qbv and IEEE 802.1Qbu, that were recently added to the TSN standard. The academic community understands the potential of this technology, so several simulation frameworks already exist, but most of them are unstable and under-tested. This thesis builds on top of existing frameworks and utilizes a framework developed in OMNeT++. Performance is analyzed through several separate scenarios and is measured in terms of end-to-end transmission latency and link utilization. The obtained results justify the industry's interest in this technology and could lead to its greater adoption in the future.


DVA428

Mapping HW Resource Usage Towards SW Performance.

Benjamin Suljević
Advisor: Jakob Danielsson
Examiner: Moris Behnam

Abstract: With software applications increasing in complexity, the description of hardware is becoming increasingly relevant. To ensure the quality of service of specific applications, it is imperative to have insight into the hardware resources. Cache memory stores data needed for quick access closer to the processor and thereby improves the quality of service of applications. The description of cache memory usually consists of the size of the different cache levels, the set associativity, or the line size. Software applications would benefit from a more detailed model of cache memory.

In this thesis, we offer a way of describing the behavior of cache memory that benefits software performance. Several performance events are tested, including L1 cache misses, L2 cache misses, and L3 cache misses. With the collected information, we develop performance models of cache memory behavior. The goodness of fit of these models is tested, and the models are used to predict the behavior of the cache memory during future runs of the same application.

Our experiments show that L1 cache misses can be modeled to predict future runs. The L2 cache miss model is less accurate but still usable for predictions, while the L3 cache miss model is the least accurate and is not feasible for predicting the behavior of future runs.
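
To make the modeling step concrete, the sketch below fits a simple polynomial model to per-interval L1 miss counts from one run and checks its goodness of fit against a later run. This is only an illustrative assumption: the thesis does not state the exact model form, and the counter values here are made-up placeholders rather than real profiler data.

```python
# Fit a polynomial trend to L1-miss samples from one run and report R^2
# against a later run of the same application.
import numpy as np

def fit_miss_model(samples: np.ndarray, degree: int = 2) -> np.poly1d:
    t = np.arange(len(samples))
    return np.poly1d(np.polyfit(t, samples, degree))

def r_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

l1_run1 = np.array([1200, 1350, 1500, 1480, 1600, 1750, 1800.0])  # placeholder counts
l1_run2 = np.array([1180, 1330, 1520, 1490, 1620, 1730, 1820.0])

model = fit_miss_model(l1_run1)
prediction = model(np.arange(len(l1_run2)))
print(f"R^2 against a later run: {r_squared(l1_run2, prediction):.3f}")
```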


DVA428

Creation of a Technology Independent Design Flow.

Anton Urvantsev
Advisor: Nils Muellner
Examiner: Cristina Seceleanu

Abstract: Modern embedded systems development poses new challenges to designers due to the global reach of the contemporary market. One product shipped to different countries or customers has to satisfy varying conditions, standards and constraints, so the variability of the developed system has to be taken into account by the designer. In the case of heterogeneous embedded systems, this problem becomes even more challenging: along with variability, the heterogeneity of a system introduces new tasks that have to be addressed during the design process. In this work, we propose a technology-independent design flow. The proposed solution is supported by state-of-the-art tools and takes into account variability, partitioning, interfacing and dependency resolution. This thesis is conducted as a case study: we explored the design process of an industrial project and identified challenges and drawbacks in the existing solutions. We propose a new approach to the design flow of heterogeneous embedded systems. A tool supporting the presented solution is also implemented, allowing a developer to include this approach in the everyday design flow in order to increase development speed and enable task automation.


DVA428

Optimizing Inter-core Data-Propagation Delays in Multi-core Embedded Systems

Emir Hasanovic, Hasan Grosic
Advisor: Saad Mubeen
Examiner: Thomas Nolte

Abstract: The demand for computing power and performance in real-time embedded systems is continuously increasing, since new customer requirements and more advanced features appear every day. To support these functionalities and handle them more efficiently, multi-core computing platforms are introduced. These platforms allow parallel execution of tasks on multiple cores, which, in addition to its benefits for system performance, introduces a major problem regarding the timing predictability of the system. That problem is reflected in unpredictable inter-core interference, which occurs due to resources shared among the cores, such as the system bus. This thesis investigates the application of different optimization techniques for the offline scheduling of tasks on the individual cores, together with a global scheduling policy for access to the shared bus. The main effort of this thesis focuses on optimizing the inter-core data-propagation delays, which can provide a new way of creating optimized schedules. For that purpose, Constraint Programming optimization techniques are employed and a phased execution model of the tasks is assumed. In order to enforce the end-to-end timing constraints imposed on the system, job-level dependencies are generated beforehand and subsequently applied during the scheduling procedure. Finally, an experiment with a large number of test cases is conducted to evaluate the performance of the implemented scheduling approach. The obtained results show that the method is applicable to a wide spectrum of abstract systems with variable requirements, but also open to further improvement in several aspects.
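
As a rough illustration of the constraint-programming step, the sketch below schedules four hypothetical jobs on two cores with one job-level dependency and minimizes a single inter-core data-propagation delay. It assumes Google OR-Tools CP-SAT and is not the thesis' actual model (which also covers the phased execution model and shared-bus arbitration).

```python
# Tiny CP-SAT scheduling model: no overlap per core, one producer/consumer
# dependency, and the propagation delay between them as the objective.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon = 100

# (core, duration) for four hypothetical jobs; j0 produces data consumed by j3.
jobs = {"j0": (0, 10), "j1": (0, 15), "j2": (1, 12), "j3": (1, 8)}
starts, ends, intervals = {}, {}, {}
for name, (core, dur) in jobs.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], dur, ends[name], f"iv_{name}")

# Jobs mapped to the same core must not overlap (non-preemptive execution).
for core in (0, 1):
    model.AddNoOverlap([intervals[n] for n, (c, _) in jobs.items() if c == core])

# Job-level dependency: the consumer may start only after the producer finishes.
model.Add(starts["j3"] >= ends["j0"])

# Minimize the inter-core data-propagation delay from j0's end to j3's start.
delay = model.NewIntVar(0, horizon, "propagation_delay")
model.Add(delay == starts["j3"] - ends["j0"])
model.Minimize(delay)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("propagation delay =", solver.Value(delay))
```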


DVA424

RoboRebeca: A New Framework to Design Verified ROS-Based Robotic Programs

Saeid Dehnavi
Advisor: Marjan Sirjani, Ali Sedaghatbaf
Examiner: Jan Carlson

Abstract: Robotic technology helps humans in different areas such as manufacturing, health care and education. Due to the ubiquitous computing revolution, today's focus is on mobile robots and their applications in a variety of cyber-physical systems. There are several powerful robot middlewares, such as ROS and YARP, for managing the complexity of robotic software implementation. However, they do not provide support for assuring important properties such as timeliness and safety. We believe that integrating model checking with a robot middleware helps developers design and implement high-quality robotic software. By defining a general conceptual model for robotic programs, in this thesis we present an integration of the Timed Rebeca modeling language (and its model checker) with ROS to automatically synthesize verified ROS-based robotic software. For this integration, the conceptual model is first mapped to a Timed Rebeca model, which is used to verify the desired properties on the model. The Timed Rebeca model may be modified several times until the properties are satisfied. Finally, the verified Timed Rebeca model is translated to a ROS program based on a set of mapping rules. Experiments conducted on some small-scale case studies indicate the usefulness and applicability of the proposed integration method.


DVA501

Constraints for avoiding SysML model inconsistencies

Cristian Capozucco
Advisor: Federico Ciccozzi, Jan Carlson
Examiner: Antonio Cicchetti

Abstract: Models are used in multiple phases of a development process for several purposes. However, models may present inconsistencies. This is often due to the modelling language leaving a certain degree of freedom to the modeller, or to the tool that implements the language and is used by the modeller not providing enough support for identifying possible inconsistencies. This thesis identifies a number of possible modelling situations, defined with the Systems Modeling Language (SysML), that can lead to inconsistencies and studies their causes, more specifically whether they are caused by the language specification or by how the specification is implemented in modelling tools. Moreover, we provide automatic validation checks to identify those inconsistencies by means of constraints defined with the Epsilon Validation Language. The results of this thesis are useful for modellers since they help in automatically identifying inconsistencies in models, thereby letting modellers benefit the most from modelling activities.


DVA501

A Model-Driven Engineering approach for modeling Heterogeneous Embedded Systems

Vincenzo Stoico
Advisor: Federico Ciccozzi
Examiner: Jan Carlson

Abstract: The demand for high-performance systems has guided designers towards heterogeneous embedded systems (HES). Their complexity has highlighted the need for methodologies and tools to ease their design. Model-Driven Engineering (MDE) can be crucial to facilitate the design of such systems. Research has demonstrated the use of MDE to create platform-specific models (PSM). The aim of this work is to support HES design targeting platform-agnostic models. The work is based on a well-defined use case comprising a software application, written following the CUDA programming model, executing on a CPU-GPU hardware platform. The use case is analyzed to define the main characteristics of a HES, and these concerns are included in a UML profile used to capture the features of a HES. The profile is built as an extension of the MARTE modeling language. Finally, the Alf action language is applied to make the model executable. The results prove the suitability of MARTE and Alf for creating executable HES models. Additional research is needed to further investigate the HES domain, and it remains necessary to prove the validity of the UML profile with different programming models and hardware platforms.


DVA501

Automated Synthesis of Model Comparison Benchmarks

Lorenzo Addazi
Advisor: Antonio Cicchetti
Examiner: Jan Carlson

Abstract: Model-driven engineering promotes the migration from code-centric to model-based software development. Systems consist of model collections integrating different concerns and perspectives, while semi-automated model transformations generate executable code combining the information from them. Raising the abstraction level to models requires appropriate management technologies supporting the various software development activities. Among these, model comparison represents one of the most challenging tasks and plays an essential role in various modelling activities. Its hardness has led researchers to propose a multitude of approaches adopting different approximation strategies and exploiting specific knowledge of the involved models. However, almost no support is provided for their evaluation against specific scenarios and modelling practices. This thesis presents Benji, a framework for the automated generation of model comparison benchmarks. Given a set of differences and an initial model, users generate the models resulting from the application of the former to the latter. Differences consist of preconditions, actions and postconditions expressed using a dedicated specification language. The generator converts benchmark specifications into design-space exploration problems and produces the final solutions along with a model-based description of their differences with respect to the initial model. A set of representative use cases is used to evaluate the framework against its design principles, which resemble the essential properties expected from model comparison benchmark generators.


DVA501

Mapping UML diagrams to the Reactive Object Language (Rebeca)

Vladimir Djukanovic
Advisor: Antonio Cicchetti
Examiner: Marjan Sirjani

Abstract: The Unified Modeling Language (UML) is a de-facto standard modeling language and has been used for years in various industrial domains. It is a general-purpose language with an extensive syntax and notation that can be used to model a system of any kind. However, its semantics are under-specified and too broad, which leaves room for different interpretations. This hinders the ability to perform formal verification of the produced models and introduces the need for a stricter and more rigorous specification. With that in mind, it is usually more suitable to map UML models to other domains where the modeling concepts have stricter semantics. Notably, the Reactive Objects Language (Rebeca) is an actor-based language with a formal foundation and formal verification support. This work aims to bridge this gap in UML by proposing a comprehensive mapping procedure between UML concepts and Rebeca concepts, thus enabling the formalization of the subset of UML used for the mapping. In particular, we investigate Rebeca semantics by extracting them from selected examples, and for each of them we provide the corresponding UML semantics as part of an iterative process. This process ends when all Rebeca semantics are exhausted and a comprehensive mapping procedure emerges. Additionally, validation is an important part of this thesis, as it aims to establish confidence in the developed mapping procedure (post-conversion validation) and to avoid performing the transformation if the design is not compatible with the mapping procedure (pre-conversion validation). As part of the pre-conversion validation, in order to establish compatibility with the mapping procedure, we provide an extensive list of correctness attributes. As part of the post-conversion validation, the mapping procedure is validated by applying the transformation to the provided examples. The results of this transformation show the wide applicability of the mapping procedure and serve as an assertion of its comprehensiveness.


DVA501

Systematic literature review of safety-related challenges for autonomous systems in safety-critical applications

Milos Ojdanic
Advisor: Elena Lisova, Irfan Sljivo
Examiner: Aida Causevic

Abstract: The increased focus on the development of autonomous safety-critical systems requires more attention to ensuring the safety of humans and the environment. The main objective of this thesis is to explore the state of the art and to identify the safety-related challenges being addressed when using autonomy in safety-critical systems. In particular, the thesis explores the nature of these challenges, the different autonomy levels they address and the types of safety measures proposed as solutions. Above all, we characterize the safety measures by their degree of adaptiveness, the time at which they are active and their decision-making ability. This information was collected by conducting a Systematic Literature Review of publications from the past nine years. The results show an increase in publications addressing challenges related to the use of autonomy in safety-critical systems. We identified four high-level classes of safety challenges. The results also indicate that the focus of research has been on finding solutions for challenges related to fully autonomous systems, as well as solutions that are independent of the level of autonomy. Furthermore, considering the number of publications, the results show that non-learning solutions addressing the identified safety challenges prevail over learning ones, active solutions over passive ones, and decisive solutions over supportive ones.


DVA502

Real-Time Pupillary Analysis By Intelligent Embedded System

Alexandra Hengl, Mujtaba Hasanzadeh
Advisor: Martin Ekström, Adnan Causevic
Examiner: Ning Xiong

Abstract: With no online pupillary analysis methods available today, both the medical and the research fields are left to carry out a lengthy, manual and often faulty examination. A real-time, intelligent, embedded-systems solution to pupillary analysis would help reduce faulty diagnoses, speed up the analysis procedure by eliminating the human expert operator and, in general, provide a versatile and highly adaptable research tool. Therefore, this thesis has sought to investigate, develop and test possible system designs for pupillary analysis, with the aim of caffeine detection. A pair of LED manipulator glasses was designed to standardize the illumination method across tests. A data analysis method for the raw pupillary data was established offline and then adapted to a real-time platform. An artificial neural network (ANN) was chosen as the classification algorithm. The accuracy of the ANN in the offline analysis was 94%, while for the online classification the obtained accuracy was 17%. A real-time data communication and synchronization method was developed. The resulting system showed reliable and fast execution times: data analysis and classification took no longer than 2 ms, faulty data detection showed consistent results, and data communication suffered no message loss. In conclusion, a real-time, intelligent, embedded solution is feasible for pupillary analysis.
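
For illustration only, the sketch below shows how the offline classification step could look with a small feed-forward network; the features, labels and use of scikit-learn's MLPClassifier are assumptions, not the thesis' actual data or implementation.

```python
# Train a small ANN on placeholder pupil-response feature vectors and report
# accuracy on a held-out split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features, e.g. mean diameter, constriction latency, recovery slope.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # placeholder "caffeine" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"offline accuracy: {clf.score(X_test, y_test):.2f}")
```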


DVA502

Machine learning for mechanical analysis

Sebastian Bengtsson
Advisor: Martin Ekström
Examiner: Ning Xiong

Abstract: It is not reliable to depend on a person's inference on dense, high-dimensional data on a daily basis, since a person will grow tired and make mistakes over time. It is therefore desirable to study the feasibility of replacing a person's inference with that of machine learning. Support Vector Machines (SVM) are implemented and tested for anomaly detection and classification, and compared to the performance of back-propagation neural networks. Principal Component Analysis and autoencoders are used with the intention of increasing performance. One-class SVMs proved very effective in detecting anomalous samples. SVMs were used for multiclass classification using the 1-vs-all and 1-vs-1 approaches, producing promising results.
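
A minimal sketch of the two SVM uses named above, assuming scikit-learn and synthetic placeholder data (the thesis' actual sensor data and parameters are not reproduced here):

```python
# One-Class SVM for anomaly detection, plus 1-vs-1 and 1-vs-all multiclass SVMs.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(300, 4))      # nominal samples
anomalies = rng.normal(5.0, 1.0, size=(10, 4))    # clearly out-of-distribution

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
flagged = int((ocsvm.predict(anomalies) == -1).sum())
print(f"anomalies flagged: {flagged}/{len(anomalies)}")

# SVC is one-vs-one internally; a one-vs-all ensemble can be built explicitly.
X = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)
clf_1v1 = SVC().fit(X, y)
clf_1vall = OneVsRestClassifier(SVC()).fit(X, y)
print("1vs1 acc:", round(clf_1v1.score(X, y), 2),
      "1vsAll acc:", round(clf_1vall.score(X, y), 2))
```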


DVA502

AUTOMATED TESTING OF ROBOTIC SYSTEMS IN SIMULATED ENVIRONMENTS

Sebastian Andersson, Gustav Carlstedt
Advisor: Alessandro Papadopoulos, Eduard Paul Enoiu
Examiner: Daniel Sundmark

Abstract: With the simulation tools available today, simulation can be utilised as a platform for more advanced software testing. By introducing simulation to the software testing of robot controllers, the motion performance testing phase can begin at an earlier stage of development. This would benefit all parties involved with the robot controller: testers at ABB would be able to include more motion performance tests in the regression tests, ABB could save money by adopting simulated robot tests, and customers would be provided with more reliable software updates. In this thesis, a method is developed that utilises simulations to create a test set for detecting motion anomalies in new robot controller versions. It combines auto-generated test cases with a similarity analysis that calculates the Hausdorff distance for a test case executed on controller versions with an induced artificial bug. A test set has been created with the ability to detect anomalies in a robot controller containing a bug.
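
The similarity analysis mentioned above can be illustrated with a short sketch, assuming SciPy's Hausdorff implementation and made-up placeholder paths (not ABB's actual trajectories or threshold):

```python
# Compare a reference trajectory with one produced by a controller version
# containing an induced bug, using the symmetric Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

reference = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.1]])  # placeholder path
candidate = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.4], [3.0, 0.3]])  # "buggy" path

d = max(directed_hausdorff(reference, candidate)[0],
        directed_hausdorff(candidate, reference)[0])
print("anomaly detected" if d > 0.25 else "within tolerance", f"(Hausdorff = {d:.2f})")
```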


DVA502

Waveform clustering - Grouping similar power system events

Therése Eriksson, Mohamed Mahmoud
Advisor: Elaine Åstrand, Joaquín Ballesteros
Examiner: Ning Xiong

Abstract: Over the last decade, data has become a highly valuable resource. Electrical power grids deal with large quantities of data and continuously collect it for analytical purposes. Anomalies that occur within this data are important to identify, since they could cause suboptimal performance within the substations or, in worse cases, damage to the substations themselves. However, for large datasets on the order of millions of records, it is hard or even impossible to gain a reasonable overview of the data manually. When collecting data from electrical power grids, predefined triggering criteria are often used to indicate that an event has occurred within the specific system, which makes it difficult to search for events that are unknown to the operator of the deployed acquisition system. Clustering, an unsupervised machine learning method, can be utilised for fault prediction in systems generating large amounts of unlabelled multivariate time-series data, and can group data more efficiently and without the bias of a human operator. A large number of clustering techniques exist, as well as methods for extracting information from the data itself, and identifying suitable ones was of utmost importance. This thesis presents a study of the methods involved in the creation of such a clustering system, suitable for this specific type of data. The objective of the study was to identify methods that enable finding the underlying structures of the data and to cluster the data based on these. The signals were split into multiple frequency sub-bands, from which features could be extracted and evaluated. Using suitable combinations of features, the data was clustered with two different clustering algorithms, CLARA and CLARANS, and evaluated with established quality analysis methods. The results indicate that CLARA performed best overall on all the tested feature sets. The formed clusters hold valuable information, such as indications of unknown events within the system, and if similar events are clustered together this can further assist a human operator in investigating the importance of the clusters themselves. A further conclusion from the results is that research into more optimised clustering algorithms is necessary so that expansion to larger datasets can be considered.
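
As a rough illustration of the pipeline described above, the sketch below extracts frequency sub-band energies with an FFT and groups them with a small hand-rolled k-medoids loop. The thesis itself uses CLARA and CLARANS; this simplified medoid-based variant and the synthetic waveforms are only assumptions for illustration.

```python
# Sub-band energy features from waveform snippets, clustered around medoids.
import numpy as np

def subband_energies(signal: np.ndarray, n_bands: int = 4) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

def k_medoids(features: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    medoids = rng.choice(len(features), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)  # assign each sample to its nearest medoid
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):  # move the medoid to the most central cluster member
                medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
    return labels

rng = np.random.default_rng(2)
waveforms = rng.normal(size=(50, 256))  # placeholder event recordings
features = np.array([subband_energies(w) for w in waveforms])
print("cluster sizes:", np.bincount(k_medoids(features, k=3)))
```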


DVA502

Development of a robust cascade controller for a riderless bicycle

Tom Andersson, Niklas Persson
Advisor: Anas Fattouh, Martin Ekström
Examiner: Alessandro Papadopoulos

Abstract: A controlled riderless bicycle is desired for the purpose of testing autonomous vehicles' ability to detect and recognise cyclists. The bicycle, a highly unstable system with complex dynamics, has been a subject of research for over a century, and in the last decades controllers have been developed for autonomous bicycles. These controllers are often only evaluated in simulation, but some complex controllers have been implemented on real-life bicycles as well. The goal of this work is to validate the sensors and subsystems of an instrumented bicycle and to develop a robust controller that can balance a bicycle using actuation on the steering axis alone. Using an iterative design process, the sensor measuring the lean angle and the steering system are improved and validated. By sensing the lean angle, the handlebar is manipulated to keep the bicycle stable. For this purpose, a P, a PD, two different PID, an LQR and a fuzzy controller are developed, evaluated and compared. The results show that the bicycle can ride without human interaction on a bicycle roller at different velocities. Additionally, numerous experiments are conducted in an outdoor environment in several different terrains, where the proposed control structure manages to balance and steer the bicycle.
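
One building block of such a cascade, the inner lean-angle loop, can be sketched as a plain discrete PID that maps the measured lean angle to a steering command. The gains and sample time below are arbitrary placeholders, not the tuned controllers evaluated in the thesis.

```python
# Discrete PID: lean-angle error in, handlebar (steering) command out.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

lean_controller = PID(kp=8.0, ki=0.5, kd=1.2, dt=0.01)                     # hypothetical gains
steering_command = lean_controller.update(setpoint=0.0, measurement=0.05)  # 0.05 rad lean
print(f"steering command: {steering_command:.3f}")
```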


DVA502

Design of a multi-camera system for object identification, localisation, and visual servoing

Ulrik Åkesson
Advisor: Fredrik Ekstrand
Examiner: Mikael Ekström

Abstract: In this thesis, the development of a stereo camera system for an intelligent tool is presented. The task of the system is to identify and localise objects so that the tool can guide a robot. Different approaches to object detection have been implemented and evaluated, and the system's ability to localise objects has been tested. The results show that the system can achieve a localisation accuracy below 5 mm.


DVA503

CODE PORTING IN EMBEDDED SYSTEMS: A CASE STUDY

Ali Alexander Mokdad
Advisor: Nils Muellner, Lennie Carlen Eriksson
Examiner: Adnan Causevic

Abstract: Code porting is a well-known topic nowadays that brings a number of challenges with it. It consists of translating code running on a specific platform so that it runs on a different one and is adapted to the new hardware. This thesis aims to present code porting, identify possible guidelines to be applied when it is performed, and thereby contribute to the state of the art of code porting in embedded systems. It also tests the selected guidelines in a case study to validate whether they can be applied for each step of code porting. The case study consists of an autonomous underwater vehicle with a proportional-integral-derivative (PID) controller that is ported to new hardware according to the selected guidelines.


DVA424

Automating Integration-Level Test Case Generation for Object-Oriented .NET Applications

Mehdi Qorbanpur
Advisor: Mehrdad Saadatmand
Examiner: Antonio Cicchetti

Abstract: While many tools have been created for automating unit testing in industry, automating integration testing, because of its complexity, has always been a challenge in software engineering. Although some industrial tools have been introduced in this context in recent years, none of them address automatic test case generation at the integration level. With the emergence of distributed development environments and agile methodologies in recent years, the process of software development has sped up considerably, and as a consequence the concept of DevOps, including continuous integration and continuous delivery (CI/CD), has become more and more important. As a result, integration-level testing has been getting more attention from software specialists than before. In 2018, based on data-flow analysis techniques, IntegrationDistiller [1] was introduced as an automated solution and tool to identify integration scenarios and generate test cases for .NET applications, using the Roslyn C# compiler APIs [2]. In this thesis, after re-implementing the solution together with some improvements to the analysis algorithm, the validity of this approach was assessed by examining a couple of C# projects as benchmarks and comparing the results against the integration-level mutation operators for object-oriented applications used in jMINT [3]. Based on the reviewed literature, coupling-based analysis, and applicable Roslyn features, some future work is suggested at the end.


DVA503

DECISION-MAKING FOR AUTONOMOUS CONSTRUCTION VEHICLES

Marielle Gallardo, Sweta Chakraborty
Advisor: Saad Mubeen, Ning Xiong
Examiner: Daniel Sundmark

Abstract: Autonomous driving requires tactical decision-making while navigating in a dynamic shared space environment. The complexity and uncertainty in this process arise due to unknown and tightly-coupled interaction among traffic users. This thesis work formulates an unknown navigation problem as a Markov decision process (MDP), supported by models of traffic participants and user space. Instead of modeling a traditional MDP, this work formulates a Multi-policy decision making (MPDM) in a shared space scenario (pedestrian and vehicle). The employed model enables unified and robust self-driving of the ego vehicle by selecting a desired policy of along the pre-planned path. Obstacle avoidance is coupled within the navigation module performing a detour off the planned path obtains a reward on task completion and penalizes for collision with others. In addition to this, the thesis work is further extended by analyzing the real-time constraints of the proposed model. The performance of the implemented framework is evaluated in a simulation environment on a typical construction (quarry) scenario. The effectiveness and efficiency of the choice policy to verify the desired behavior of the autonomous vehicle.