SIMULTECH 2024 Abstracts


Area 1 - Modeling and Simulation Methodologies

Full Papers
Paper Nr: 16
Title:

Enhancing Echo Processing Through the Integration of Support Vector Machine and Weber's Law Descriptors

Authors:

Mehdia Hedir, Fethi Demim, Ali Z. Messaoui, Aimen A. Messaoui, Hadjira Belaidi, Abdenebi Rouigueb and Abdelkrim Nemra

Abstract: Removing ground echoes from weather radar images is a topic of great importance due to their significant impact on the accuracy of processed data. To address this challenge, we aim to develop methods that effectively eliminate ground echoes while preserving precipitation, a crucial meteorological parameter. To accomplish this, we propose to test local descriptors based on Weber’s law (WLD), as well as descriptors that combine Weber’s law with Local Binary Patterns (WLBP), using Support Vector Machine (SVM) classifiers to automate the recognition of both types of echoes. The proposed methods are rigorously tested at the Setif and Bordeaux sites to evaluate their effectiveness in accurately identifying ground echoes and precipitation. The results of our experiments demonstrate that the proposed techniques are highly effective in eliminating ground echoes while preserving precipitation, and can be considered satisfactory for practical applications in meteorological data processing.
Download
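The differential-excitation component of the Weber Local Descriptor used in this approach can be illustrated with a minimal sketch. This follows the standard WLD formula, xi = arctan(sum_i (x_i - x_c) / x_c) over a 3x3 neighbourhood; it is a generic illustration, not the authors' implementation, and the small epsilon guard against division by zero is an assumption:

```python
import numpy as np

def wld_excitation(image):
    """Differential excitation of the Weber Local Descriptor (WLD):
    xi = arctan(sum_i (x_i - x_c) / x_c) over the 3x3 neighbourhood
    of each centre pixel x_c."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")  # replicate borders so edges get 8 neighbours
    h, w = img.shape
    diff_sum = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diff_sum += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img
    return np.arctan(diff_sum / (img + 1e-8))  # epsilon (assumed) avoids division by zero
```

In a pipeline like the one described, histograms of these excitation values would serve as the feature vectors fed to the SVM classifier.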

Paper Nr: 18
Title:

Coordinated Route Recommendation for Improving Social Distancing in a Congested Subway Network

Authors:

Maria Elsa, Hung-Jui Chang, Da-Wei Wang, Chih-Wen Hsueh and Tsan-sheng Hsu

Abstract: We investigate the problem of providing coordinated route recommendations to subway passengers to reduce peak-hour congestion and improve social distancing during a pandemic such as COVID-19. We develop TransMARL, a model-free method that combines multi-agent reinforcement learning and curriculum learning to learn optimal routing policies by interacting with the environment. Furthermore, TransMARL is simple in design and adopts the framework of centralized training with decentralized execution. Applying TransMARL to the busy Taipei Metro network, with more than 2 million daily riders, our simulation results show that overcrowding can be reduced by more than 50%, with less than a 10-minute increase in travel time, when 20% or more passengers follow the provided route guidance. This result outperforms previous well-known transit assignment methods, e.g., all-or-nothing and stochastic user equilibrium.
Download

Paper Nr: 42
Title:

Coupling Agent-Based Simulations and VR Universes: the Case of GAMA and Unity

Authors:

Alexis Drogoul, Patrick Taillandier, Arthur Brugière, Louis Martinez, Léon Sillano, Baptiste Lesquoy and Huynh Q. Nghi

Abstract: Agent-based models (ABMs) and video games, including those taking advantage of virtual reality (VR), have undergone a remarkable parallel evolution, achieving impressive levels of complexity and sophistication. This paper argues that while ABMs prioritize scientific analysis and understanding and VR aims for immersive entertainment, they both simulate artificial worlds and can benefit from closer integration. Coupling the two approaches indeed opens interesting possibilities for research and development in various fields, in particular education, which is at the heart of SIMPLE, an EU-funded project developing digital tools to raise awareness of environmental issues. However, existing tools often present limitations, including technical complexity, limited functionality, and lack of interoperability. To address these challenges, we introduce a novel framework for linking GAMA, a popular ABM platform, with Unity, a widely used game engine. This framework enables seamless data exchange, real-time visualization, and user interaction within VR environments, allowing researchers to leverage the strengths of both ABMs and VR for more impactful and engaging simulations. We demonstrate the capabilities of our framework through two prototypes built to highlight its potential in representing and interacting with complex socio-environmental system models. We conclude by emphasizing the importance of continued collaboration between the ABM and VR communities to develop robust, user-friendly tools, paving the way for a new era of collaborative research and immersive experiences in simulations.
Download

Paper Nr: 43
Title:

Method for Automated Parametric Studies and Evaluation Using the Example of an Aerosol-on-Demand Jet-Printhead

Authors:

Hanna Pfannenstiel, Martin Ungerer and Ingo Sieber

Abstract: In this paper, we present a method for the automated determination of aerosol jet parameters for the Aerosol-on-Demand (AoD) jet-printhead. A critical aspect in the simulation of our computational fluid dynamics (CFD) model is the simulation time, due to the high model complexity combined with a large number of individual elements. This, together with the difficulty of determining the focal point through measurements on our laboratory setup, leads to our approach of model reduction as the basis for an automated determination of the aerosol jet parameters, which is demonstrated using the example of determining the focal position and focal width. Starting from a fluid dynamic model, we create a reduced model by separating the variables, with which we can predict aerosol jet parameters. The method presented here is validated by CFD simulations of an aerosol spray in which the masses of individual droplets are varied according to a Rosin-Rammler distribution.
Download

Paper Nr: 48
Title:

Trajectory Generation Model: Building a Simulation Link Between Expert Knowledge and Offline Learning

Authors:

Arlena Wellßow, Torben Logemann and Eric MSP Veith

Abstract: Reinforcement learning has shown its worth in multiple fields (e.g., voltage control, market participation). However, training each agent from scratch incurs relatively long training times and high computational costs. Including expert knowledge in agents is beneficial: human reasoning can come up with independent solutions for unusual states of the (less complex) system, and, especially in long-established fields, many strategies are already known and therefore learnable for the agent. Using this knowledge allows agents to apply these solutions without having to encounter such situations first. Expert knowledge is usually available only in semi-structured, non-machine-readable forms, such as (mis-)use cases. Also, data containing extreme situations, grid states, or emergencies is usually limited. However, these situations can be described as scenarios in these semi-structured forms. A state machine representing the scenarios’ steps can then be built to generate data, which can in turn be used for offline learning. This paper presents a model and a prototype for such state machines. We implemented this prototype using state machines as internal policies of agents, without a learning part, for four specified scenarios. Acting according to the actuator setpoints the state machines emit, the agents’ outputs control the connected simulation. We found that our prototype can generate data in the context of smart grid simulation. These data samples show the specified behavior, cover the search space through variance, and do not include data that violates given constraints.
Download
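The scenario-as-state-machine idea can be illustrated with a minimal sketch: a walk over states emits one actuator setpoint per step, with a uniform draw inside each state's range supplying the variance mentioned in the abstract. The `machine` structure, state names, and setpoint ranges below are hypothetical, chosen only for illustration:

```python
import random

def generate_trajectory(machine, start, steps, rng=random):
    """Walk a scenario state machine and emit one actuator setpoint per step.
    `machine` maps state -> ((lo, hi), successor_states); the uniform draw
    inside (lo, hi) adds variance to the generated data."""
    state, samples = start, []
    for _ in range(steps):
        (lo, hi), successors = machine[state]
        samples.append((state, rng.uniform(lo, hi)))  # setpoint constrained to the state's range
        state = rng.choice(successors)                # random transition to a successor state
    return samples
```

Because every setpoint is drawn from its state's declared range, the generated samples respect the scenario's constraints by construction, matching the constraint-satisfaction property reported in the abstract.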

Paper Nr: 50
Title:

Artificial Bee Colony Algorithm: Bottom-Up Variants for the Job-Shop Scheduling Problem

Authors:

K. A. Youssefi, M. Gojkovic and M. Schranz

Abstract: The optimization of a job-shop scheduling problem, e.g., in the semiconductor industry, is an NP-hard problem. Various research works have shown that agent-based modeling of such a production plant makes it possible to plan tasks efficiently, maximize productivity (in terms of utilization and tardiness), and thus minimize production delays. Bottom-up optimization in particular overcomes the computational barriers associated with traditional, typically centrally calculated optimization methods. Specifically, we consider a dynamic semiconductor production plant in which we model machines and products as agents, and we propose two variants of the artificial bee colony algorithm for scheduling from the bottom up. Variant (1) prioritizes decentralization and batch processing to boost production speed, while Variant (2) aims to predict production times to minimize queue delays. Both algorithmic variants are evaluated in the SwarmFabSim framework, implemented in NetLogo, focusing on the job-shop scheduling problem in the semiconductor industry. With the evaluation we analyze the effectiveness of the bottom-up algorithms, which rely on low-effort local calculations.
Download

Paper Nr: 54
Title:

Performance Improvement of a Vertical Turbine Pump Accounting for the Solid-Water Two-Phase Flow Conditions

Authors:

Thomas M. Singock and Guyh D. Ngoma

Abstract: A numerical study of the performance of a vertical turbine pump is carried out, accounting for the flow of water with solid particles through the pump. For this purpose, the governing equations of two-phase flow are applied and solved using the ANSYS-CFX software. The numerical pump model is validated by comparison with experimental results obtained with clear water, yielding the reference pump model. Under two-phase flow, the performance of the reference pump drops drastically. The results obtained reveal that the morphology of the studied pump favors the obstruction of the hydraulic channels of the diffuser under two-phase flow. Based on this, a geometrical enlargement of the hydraulic channels relative to the reference pump model is adopted. Performance under two-phase flow is thereby enhanced, while only a slight decrease is observed under single-phase flow.
Download

Paper Nr: 71
Title:

Enhancing Continuous Optimization with a Hybrid History-Driven Firefly and Simulated Annealing Approach

Authors:

Sina Alizadeh and Malek Mouhoub

Abstract: In this study, we propose a hybrid history-driven approach based on the collaboration between the Firefly (FA) and Simulated Annealing (SA) algorithms, to improve the hybrid framework's ability to find the global optima of continuous optimization problems in less time. A Self-Adaptive Binary Space Partitioning (SA-BSP) tree is used to partition the search space of a continuous problem and guide the hybrid framework towards the most promising sub-region. To address the premature convergence of FA, a "Finder-Tracker agents" mechanism is introduced. The hybrid framework progresses through three main stages. In the first phase, the SA-BSP tree is utilized within the FA algorithm as a unit of memory: it stores significant information about the explored regions of the search space, creates the fitness landscape, and divides the search space during exploration. In the second phase, a smart controller is introduced to maintain a balance between exploration and exploitation using HdFA and SA. In the third step, the search is limited to the most promising sub-region discovered. Subsequently, the SA algorithm employs the best solution's information, including its fitness value and position, to efficiently exploit the limited search space. The proposed HdFA-SA technique is then compared against different metaheuristics across ten well-known unimodal and multimodal continuous optimization benchmarks. The results demonstrate HdFA-SA's exceptional performance in finding the global optimum while simultaneously reducing execution time.
Download
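The SA exploitation stage rests on the classic Metropolis acceptance rule: a worse candidate is accepted with probability exp(-delta/T) under a cooling schedule. A generic sketch, not the authors' HdFA-SA code; the Gaussian step size and geometric cooling rate are assumptions:

```python
import math
import random

def simulated_annealing(f, bounds, iters=2000, t0=1.0, alpha=0.995, seed=0):
    """Minimise f over the box `bounds` = [(lo, hi), ...] with classic SA:
    always accept improvements, accept worse moves with probability
    exp(-delta / T), and cool T geometrically (T <- alpha * T)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        # perturb each coordinate with a step scaled to the box width, then clamp
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= alpha
    return best, fbest

# e.g. the sphere function, a standard unimodal benchmark
best, val = simulated_annealing(lambda v: sum(c * c for c in v), [(-5, 5)] * 2)
```

In the hybrid scheme described above, such a routine would be handed the most promising sub-region found by the SA-BSP tree as its `bounds`, together with the best solution found so far as a warm start.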

Paper Nr: 72
Title:

Reliability Analysis of Francis Turbine Cracking Using Gamma Frailty Model and Censored Historical Maintenance Data

Authors:

Théophile M. Tshibangu, Guyh D. Ngoma, Martin Gagnon and Sébastien Carle

Abstract: All over the world, the need for electrical energy has increased dramatically, forcing hydroelectric power plants to operate under non-standard conditions. This leads to premature fatigue cracking and consequently to multiple crack inspections. In this research, a probabilistic model is developed based on frailty and censoring. The model takes advantage of a Non-Homogeneous Poisson Process (NHPP) because turbine runners are considered repairable parts. We develop the marginal likelihood expression incorporating the frailty effect using a gamma frailty distribution, and we use the stochastic gradient descent (SGD) algorithm to obtain the optimal parameters. Furthermore, instead of considering the frailty effect z as a random variable, we derive its expression from the individual unconditional likelihood function, which has also been optimized. Finally, we compare reliability and cumulative hazard functions between family members. We then confirm the results obtained by comparing reliability between two families that behaved differently. Results show that the frailty effect, which is a function of failure statuses and the individual final time of observation for a specific component, plays an important role in differentiating heterogeneity among groups of the same family. Reliability curves clearly demonstrate heterogeneity within and between families.
Download

Paper Nr: 73
Title:

A Greedy Search Based Ant Colony Optimization Algorithm for Large-Scale Semiconductor Production

Authors:

Ramsha Ali, Shahzad Qaiser, Mohammed S. El-Kholany, Peyman Eftekhari, Martin Gebser, Stephan Leitner and Gerhard Friedrich

Abstract: This paper presents a hybrid ant colony optimization algorithm for solving large-scale scheduling problems in semiconductor production, which can be represented as a Flexible Job Shop Scheduling Problem (FJSSP) with the objective of minimizing the makespan required to complete all jobs. We propose a Greedy Search based Ant Colony Optimization (GSACO) algorithm, in which each ant constructs a feasible schedule using greedy search. For the sequencing of operations, accomplished in the first phase of GSACO, the ants adopt a probabilistic decision rule taking pheromone trails into account. The flexible machine assignment is then greedily performed in the second phase by allocating operations one by one to the earliest available machine. We evaluate our approach using classical FJSSP benchmarks as well as large-scale instances with about 10000 operations from the domain of semiconductor production scheduling. On these large-scale scheduling problems, our GSACO algorithm successfully overcomes the scalability limits of exact optimization with a state-of-the-art Constraint Programming approach, included as a baseline in our experimental comparison.
Download
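The two GSACO phases can be sketched generically: the standard ACO decision rule, choosing candidate j with probability proportional to tau_j^alpha * eta_j^beta, followed by greedy assignment to the earliest available machine. The data layout and parameter values are illustrative assumptions, not the authors' implementation:

```python
import random

def pick_next_operation(pheromone, heuristic, candidates, alpha=1.0, beta=2.0, rng=random):
    """Phase 1 (sequencing): standard ACO probabilistic decision rule.
    Candidate j is chosen with probability proportional to
    pheromone[j]**alpha * heuristic[j]**beta (roulette-wheel selection)."""
    weights = [(pheromone[j] ** alpha) * (heuristic[j] ** beta) for j in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return j
    return candidates[-1]  # numerical safety net

def assign_earliest_machine(op_duration, machine_free):
    """Phase 2 (machine assignment): greedily place the operation on the
    machine that becomes free earliest, and advance that machine's clock."""
    m = min(machine_free, key=machine_free.get)
    start = machine_free[m]
    machine_free[m] = start + op_duration
    return m, start
```

An ant would alternate these two steps until all operations are scheduled, after which pheromone trails are updated in favour of short-makespan schedules.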

Short Papers
Paper Nr: 22
Title:

Supply Chain Modelling and Simulation of Hemp Fiber Production in Ireland

Authors:

Shunyang Ning, John Hanley, Mika Salmi and Pezhman Ghadimi

Abstract: With growing concern over environmental issues and awareness of sustainable materials and energy, industrial hemp has been proposed as one strategy for responding to the ecological crisis and the challenges caused by the transition from the original power plant to a sustainable ecosystem in Midland Ireland, with the support of the Just Transition Fund introduced by the Irish government. Industrial hemp has a wide range of applications and commercial value, and it is a choice that benefits both the ecology and the economy of Ireland. However, very limited attention has been paid to hemp fibre production compared to Cannabidiol-related products. Few studies have established a hemp production and management framework from raw materials to sales, and few can provide auxiliary information for the feasibility analysis of growing industrial hemp. This study contributes to the feasibility analysis of hemp fibre production in Ireland by modelling and simulating the supply chain of hemp animal bedding. The paper establishes the supply chain framework for hemp production and presents the results of supply chain simulations for five scenarios of hemp animal bedding in Ireland. The supply chain modelling visualizes project structures, provides simulation results to aid decision-making, and establishes the basis for subsequent models. It is concluded that there is still potential for hemp fibre production, and that adopting it in Midland Ireland is feasible from both sustainable and economic perspectives.

Paper Nr: 29
Title:

Detecting the Impact of Changes in Platelet Demand following the Implementation of PRT Platelets in Canada

Authors:

Linden Smith and John Blake

Abstract: This paper describes tools to detect and estimate demand shifts for platelet products, through inventory monitoring, following the implementation of pathogen reduction (PR) technology at a pilot site in the Canadian Blood Services (CBS) network. A Statistical Process Control (SPC) framework was constructed to detect change points in inventory signals. A discrete event simulation is used to generate synthetic data for the inventory monitoring process. Both traditional forecasting and machine learning techniques were used to increase sensitivity to change detection and reduce time to detection by supplying the SPC algorithm with projected data. Experiments were run on data representative of changes in demand experienced at the pilot production site. Larger shifts in demand were found to have a higher probability of detection and a lower time to detection; changes with an effect on the system larger than 10% were almost always detected. Detection time varies greatly with the size of the demand shift: shifts greater than 25% typically have an average detection time of just over a week, while shifts of less than 5% have an average detection time of up to 25 weeks.
Download
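The abstract does not specify which SPC chart is used; a tabular CUSUM is one common choice for detecting small, sustained shifts in a monitored signal such as inventory level. A minimal sketch, with the slack value k and decision threshold h as assumed tuning parameters (usually expressed in units of the process standard deviation):

```python
def cusum_detect(series, target, k=0.5, h=5.0):
    """Tabular CUSUM change-point detector. Accumulates deviations from
    `target` beyond the slack k in two one-sided sums, and returns the
    index at which an upward or downward shift is first signalled
    (either sum exceeds h), or None if the series stays in control."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(series):
        s_hi = max(0.0, s_hi + (x - target) - k)  # accumulates upward shifts
        s_lo = max(0.0, s_lo + (target - x) - k)  # accumulates downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None
```

Feeding such a detector the projected (forecast) series rather than the raw one, as the abstract describes, shortens detection delay because the shift appears in the projections before it fully materialises in the observations.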

Paper Nr: 38
Title:

A Layering Approach with Role-based Workflow Modelling for the Enterprise Workflow

Authors:

Yevheniia Yehorova and Marina Waldén

Abstract: Fast, high-quality workflow management is one of the most important tasks in every domain. Workflow modelling has become a common technique that facilitates and supports the business process, or rather its automation. Modelling a workflow always depends on the roles that perform tasks. A formal approach is essential for high-quality system modelling, and visualisation tools are required to represent models and work with them. In this paper, we propose an approach to modelling processes for efficient enterprise workflow management. To make modelling more fluent and flexible, it should be parallelised based on roles. The core of our approach is the Role-based Workflow method. Role-based modelling is accomplished using a layered development method based on stepwise refinement. Our approach combines straightforward stepwise modelling with the possibility of quickly assessing the situation at any stage of the workflow, using the UPPAAL tool for modelling and verification. We illustrate our approach with a healthcare case study.
Download

Paper Nr: 51
Title:

Optimizing Privacy-Utility Trade-Off in Healthcare Processes: Simulation, Anonymization, and Evaluation (Using Process Mining) of Event Logs

Authors:

Omar S. Kamal, Syeda A. Sohail and Faiza A. Bukhsh

Abstract: In healthcare, big data analytics involves balancing patients’ privacy against data utility. Optimizing healthcare data utility often means limiting access to sensitive data to trusted onsite entities, which potentially hinders broader-scale data utilization by third-party data analysts. As a solution, this research simulates a healthcare process-based event log, inspired by a local hospital’s radiology department. The simulated event log is anonymized using k-anonymity. The anonymized and un-anonymized event logs are then evaluated, through process discovery techniques, using the process mining tool ProM 6.11, for privacy-utility trade-off assessment. Results indicate successful privacy preservation with a distinct loss in utility in the anonymized healthcare process model, which was not visible otherwise. Therefore, to ensure the efficacy of healthcare process analysis on anonymized sensitive event logs, the utilization of process mining techniques is beneficial for evaluating both process utility and privacy protection.
Download
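The k-anonymity property at the core of the anonymization step can be sketched generically: every combination of quasi-identifier values must occur at least k times, which generalization (e.g., coarsening ages into bands) helps achieve. The attribute names below are hypothetical, not taken from the paper's event log:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values occurs
    at least k times, so no record is uniquely re-identifiable from them."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

def generalize_age(record, width=10):
    """Generalization step: coarsen the 'age' attribute (hypothetical name)
    into bands of `width` years, e.g. 23 -> '20-29'."""
    lo = (record["age"] // width) * width
    return {**record, "age": f"{lo}-{lo + width - 1}"}
```

The utility loss the abstract reports arises exactly here: each generalization pass makes equivalence classes larger (better privacy) but erases detail that process discovery would otherwise exploit.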

Paper Nr: 57
Title:

The Unreasonable Effectiveness of Artefacts and Documentation: An Exploration of Consensus Using Multi-Agent Simulations in a Two-Team Configuration

Authors:

Johannes S. Vorster and Louise Leenen

Abstract: Documentation and artefact generation is an essential part of business processes. This paper explores the use of artefacts as a means of reaching consensus, using Multi-Agent Simulations. In particular, we investigate the time to reach consensus with and without artefacts and show that artefacts are efficient at facilitating consensus and, perhaps more importantly, at creating efficient consensus processes in the face of difficult organizational communication channels. We found that polyarchies are highly efficient at consensus formation but are not realistic for larger organizations. For these organisations, a small team that facilitates consensus formation is nearly as efficient. The introduction of artefacts significantly improves consensus formation in situations where intra-team communication causes delays in consensus formation.
Download

Paper Nr: 59
Title:

Complex Responsive Processes: The Emergence of Enabling Constraints in the Living Present of a Cyber-Physical Social System

Authors:

Guido J. Willemsen, Luis Correia and Marco A. Janssen

Abstract: Contemporary business process modeling is based on predefined constraints with built-in flexibility. Current business challenges result from an increase in data, which is a valuable source for decision-making. Control models from cybernetics could do the job, especially when learning capabilities are added. However, in an agent-based architecture there is something to add: the social component. This position paper aims to advance the understanding and practical application of how organizations can effectively utilize the abundance of data in their operational processes, while also exploring novel approaches to organizational dynamics and coordination. In more detail, the paper outlines a model that combines social Complex Responsive Processes (CRP) with a cyber-physical control cycle within a multi-agent simulated business process.
Download

Paper Nr: 60
Title:

Optimal Wireless Meter Deployment Using Evolutionary Algorithms

Authors:

Siddhartha Shakya, Kin Poon, Ahmed Suliman, Alia Aljasmi, Huda Goian and Ahoud Barzaiq

Abstract: Utility companies use smart wireless meters to automate the collection of meter readings. This requires them to design and deploy a wireless meter network in which each meter is connected to a central Data Concentrator Unit (DCU), which is in turn connected to the company's control centre. In this paper we investigate the problem of wireless meter network deployment by means of evolutionary algorithms. We model the deployment problem as an evolutionary optimization problem, explore two different encoding schemes for the objective function, and test four different algorithms against five typical network setups in different areas. Our results show that Simulated Annealing (SA) is the best-performing algorithm for the tested problem instances and is more reliable than the other compared algorithms. The devised models and the algorithm have been built into a tool that is being used in a real-world scenario.
Download

Paper Nr: 66
Title:

Optimal Design of a Variable-Pitch Axial Flow Fan by Applying Optimization Algorithm to Design, Through-Flow Analysis and CFD Simulation Methods

Authors:

Chan Lee, Jimin Choi, Jiseok Hwang, Hyeongjin Lee, Sangyeol Lee and Sang H. Yang

Abstract: In order to develop a variable-pitch axial fan, an optimal three-dimensional fan blade is designed using a two-stage design optimization strategy that combines an aerodynamic fan design program, CFD techniques, and an optimization algorithm. At the first stage of fan design optimization, the aerodynamic fan design program is the FANDAS code, in which the chord length, setting angle, and camber angle of the fan blade are treated as design variables, and the performance, efficiency, and power of the fan are predicted by applying a through-flow analysis method to the designed fan. By applying an optimization algorithm to the FANDAS program, a three-dimensional fan blade shape is optimized and constructed to maximize fan efficiency. At the second stage, a CFD analysis method is applied to the fan optimized in the first stage, and an additional design optimization of the blade setting angles is conducted by applying an optimization algorithm to the CFD model and simulation results. Furthermore, the total pressure, efficiency, and power characteristic curves of the fan under variable-pitch operating conditions are calculated by applying the CFD technique to the final optimal fan model obtained through the two-stage design optimization process. The CFD results on the characteristic curves show that the optimal fan model achieves a peak efficiency of 91% at the design point and maintains a high efficiency level of 80% over a wide flow range through variable-pitch operation.
Download

Paper Nr: 67
Title:

On the Adoption of Explainable Deep Learning for Image-Based Network Traffic Classification

Authors:

Amine Hattak, Fabio Martinelli, Francesco Mercaldo and Antonella Santone

Abstract: In an era marked by escalating cyber threats, ensuring the security of interconnected devices and networks within the Internet of Things (IoT) landscape is imperative. This paper addresses this pressing concern by delving into network security, focusing on the classification of network traffic through the lens of deep learning techniques. Our study presents a deep learning-based approach customized for network traffic classification in the IoT domain, based on image analysis. Crucially, to enhance the interpretability and transparency of our model’s decisions, we integrate Grad-CAM (Gradient-weighted Class Activation Mapping), a technique that highlights the salient regions of input images contributing to the model’s predictions. By leveraging Grad-CAM, we provide deeper insights into the decision-making process, enabling better understanding of and trust in our approach. We evaluate the effectiveness of our methodology using the TON_IoT dataset, consisting of 10 network traces categorized into various vulnerability scenarios and trusted applications. Our findings reveal a remarkable accuracy of 99.1%, demonstrating the potential of our approach in fortifying network security within IoT environments. Moreover, the use of Grad-CAM gives stakeholders valuable insights into the inner workings of the model, further enhancing its applicability and trustworthiness.
Download

Paper Nr: 68
Title:

Delivery Zones Partitioning Considering Workload Balance Using Clustering Algorithm

Authors:

Jaruwan Wangwattanakool and Wasakorn Laesanklang

Abstract: This research proposes a novel approach for partitioning delivery zones in Bangkok that utilizes a combination of clustering and iterative algorithms. The approach leverages 30 days of delivery data to create delivery zones with balanced workloads for drivers. The study begins by analyzing the delivery data to confirm the presence of unbalanced workloads across drivers within the 30-day period. To address this imbalance, we use iterative k-means to adjust delivery zones according to the number of deliveries within each zone. The effectiveness of the approach was evaluated using two sets of parameters: geographic coordinates (latitude and longitude) and actual travel distances reflecting real-world scenarios. Regardless of the parameter set used, the experiments yielded balanced transportation areas with evenly distributed workloads, demonstrating an improvement in workload equality compared to the original distribution.
Download
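The workload-aware clustering idea can be sketched as plain k-means followed by a capacity-constrained reassignment pass. The hard per-zone capacity of ceil(n/k) points is a simplifying assumption standing in for the paper's delivery-count balancing, and plain Euclidean distance stands in for both parameter sets:

```python
import math
import random

def balanced_kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points, then a balancing pass: points are
    reassigned to the nearest centre that still has capacity ceil(n/k),
    trading some compactness for an even workload across zones."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        # standard assignment and centre-update steps
        labels = [min(range(k), key=lambda c: math.dist(p, centres[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centres[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    # balancing pass: hard capacity per zone, nearest points placed first
    cap = -(-len(points) // k)  # ceil(n / k)
    loads = [0] * k
    assignment = []
    for p in sorted(points, key=lambda q: min(math.dist(q, c) for c in centres)):
        order = sorted(range(k), key=lambda c: math.dist(p, centres[c]))
        zone = next(c for c in order if loads[c] < cap)
        loads[zone] += 1
        assignment.append((p, zone))
    return centres, assignment, loads
```

The paper's iterative variant would repeat such adjust-and-recluster passes, and substitute actual travel distance for `math.dist` in the real-world parameter set.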

Paper Nr: 85
Title:

Implementing OntoUML Models with OntoObject-Z Specifications: A Proof of Concept Relying on a Partial Ontology for VLANs

Authors:

Mohamed Bettaz

Abstract: OntoObject-Z is a descriptive language inspired by OntoUML. Just as OntoUML is a profile for the Unified Modeling Language (UML), OntoObject-Z is an extension of the Object-Z notation. The objective of this article is threefold. We first define a metamodel for OntoObject-Z and an EBNF-like notation formalizing the syntax of OntoObject-Z specifications. Second, we construct a partial ontology for Virtual Local Area Networks (VLANs) and describe it by OntoUML models. Third, we implement our OntoUML models with OntoObject-Z specifications. The OntoObject-Z metamodel is expressed in OntoUML and the proposed EBNF rules are based on OntoUML concepts. Thanks to this, each syntactically correct OntoObject-Z specification corresponds de facto to a correct implementation of an OntoUML model.
Download

Paper Nr: 86
Title:

Semi-Supervised Fuzzy DBN-Based Broad Learning System for Forecasting ICU Admissions in Post-Transplant COVID-19 Patients

Authors:

Xiao Zhang and Àngela Nebot

Abstract: This paper introduces a novel semi-supervised neuro-fuzzy system to predict ICU admissions among post-COVID organ transplant recipients. Addressing the challenges of small sample sizes and lacking labels in organ transplantation, our study takes on these issues by proposing a DBN-Based Dual Manifold Regularized Fuzzy Broad Learning System (D-DMR-FBLS). This system utilizes the streamlined and flat architecture of the Broad Learning System (BLS), integrating Deep Belief Networks (DBN) and Takagi-Sugeno-Kang (TSK) systems to enhance representation learning capacities during the Unsupervised Training Phase (UTP). The system combines the strong feature learning capabilities of DBN with the powerful fuzzy rule extraction capacity of the TSK system, enhancing the model’s predictive performance and generalization capability. Moreover, we propose two types of graph-based manifold regularization, sample-based and feature-based, within this novel D-DMR-FBLS framework. Our method enhances its predictive ability by exploiting both the similarity among unlabeled and labeled patient samples, as well as the correlations between features within the fuzzy feature space. Employed to predict ICU admission risks in post-transplant COVID-19 patients, the method has demonstrated superior performance over existing methods, particularly in scenarios with limited samples and labels, thereby providing more accurate decision support for medical professionals in optimizing resource allocation for transplant patients.
Download

Paper Nr: 91
Title:

A Model of the Control System of a Carbon Dioxide Gas Turbine in Supercritical Condition

Authors:

Marcin Zawadzki, Jarosław Milewski and Arkadiusz Szczęśniak

Abstract: The primary goal of the paper was to develop a model of a supercritical carbon dioxide gas turbine. The model was built using the GateCycle program. It is designed for potential implementation in emerging Concentrating Solar Plants, with a focus on simple yet efficient construction in the recuperation model. Analyses were conducted on three types of power control systems: bypass, inventory, and turbine inlet temperature-based, using the Lee-Kesler real gas model for calculations. Key mathematical formulas used by the program are cited, and the results are thoroughly analyzed and presented in charts. In conclusion, a combination of the bypass and inventory control systems is recommended.
Download

Paper Nr: 14
Title:

Web Service-Based Capacitated Smart Vehicle Routing Problem with Time Window and Threshold Waste Level for Home Health Care Industry

Authors:

Kubra Sar and Pezhman Ghadimi

Abstract: In response to the significant rise of Home Health Care (HHC) due to technological advances, an expanding elderly demographic, and increased disease outbreaks—intensified by the COVID-19 pandemic—there is a pressing demand for better management of the resulting medical waste. This paper explores the development of a web-based decision support system designed to optimize medical waste collection in the HHC sector. The system is built using Flask for backend processes, with a user interface crafted from HTML and CSS, and employs JSON files for data management. It features dynamic routing enabled by two metaheuristic algorithms: the Strength Pareto Evolutionary Algorithm (SPEA-2) and the Non-Dominated Sorting Genetic Algorithm (NSGA-II). The application supports real-time adjustments to vehicle routes and waste production sites, enhancing the efficiency of medical waste management by minimizing human intervention. The design allows for easy adaptation to different sectors and can be expanded to test various scenarios.
Download

Paper Nr: 27
Title:

Algorithm of Forming the Appearance of the Flow Path of Turbomachinery of Two-Shaft Aircraft Engine Core

Authors:

V. N. Matveev, G. M. Popov, E. S. Goriachkin and O. V. Baturin

Abstract: This paper describes the process of forming a two-dimensional scheme of the flow path of the two-shaft core turbomachinery of an aviation gas turbine engine. The design comprised three steps. The first step includes the rational distribution of specific work and pressure ratio in the core engine between the intermediate- and high-pressure compressors, as well as of the pressure ratio between the high- and intermediate-pressure turbines. At the second step, we select the rotational speed of the high-pressure cascade and determine the main structural-geometric cascade parameters in the meridional plane. At the third step, the rotational speed of the intermediate-pressure cascade and the main structural-geometric parameters of the intermediate-pressure cascade, together with a transition duct between the compressor compartments, are determined. The excess of the mean diameter of the intermediate-pressure compressor over that of the high-pressure compressor, and the introduction of the transition duct between them into the scheme of the compressor flow path of the core engine, are justified. The application of a diagonal turbine in the intermediate-pressure cascade is proposed. The axial length of the flow-path channels between the compressors and turbines was chosen considering the influence of the duct opening angle on hydraulic losses and on the mass-dimensional characteristics of the core engine.
Download

Paper Nr: 31
Title:

Regression Equations for Preliminary Dimensioning of Axial Compressor Discs

Authors:

O. V. Baturin, E. D. Gataullina, E. S. Goryachkin, S. A. Melnikov and Liu Xin

Abstract: In compressor design today, the process of obtaining disc parameters in the early steps is poorly formalised. This process is often determined by the experience of the designer and is not very predictable. As a result, it is possible to estimate compressor efficiency during the design calculation, but the determination of compressor mass and strength factors is often difficult or is performed using approximate formulas. The authors of the paper propose to use regression formulae derived from statistical processing of a database of dimensionless parameters of more than 20 different gas turbine engines from different countries. Relying on the obtained regularities, it is possible to get a sketch of the disc in the meridional plane and to estimate its mass and strength using a semi-ring model by means of the design gas-dynamic calculation model. As a result, even at the first steps of compressor design the engineer has an opportunity to screen out obviously unacceptable variants and choose the best option not only by efficiency criteria. The selected option will require fewer refinements in the future, which will reduce the number of iterations in the design process and minimise design time and costs.
Download

Paper Nr: 36
Title:

Development of GIS-Based Simulations for Evaluating Interventions in Latvia's Transport System

Authors:

Justina Hudenko, Igors Kukjans and Inguna J. Kaldava

Abstract: This paper presents the development and testing of a user interface (UI) for the Transport Interventions outcomes simulation Model (TIM) as part of Latvia’s efforts to comprehensively assess state interventions in its transport system. Following the design thinking process, the paper outlines stakeholder needs and proposes a centralized website for accessing transport system data and outcomes. The TIM UI facilitates data exploration, submission of proposals, and initial impact assessment. It integrates GIS technology for enhanced visualization and decision-making. Testing results inform improvements in navigation, clarity, visual design, and performance. The TIM UI contributes to sustainable transportation planning and policy formulation in Latvia by engaging stakeholders and enabling informed decision-making.
Download

Paper Nr: 44
Title:

Methodological Approach to Model and Validate CPS

Authors:

Perla Tannoury and Ahmed Hammad

Abstract: As Cyber-Physical Systems (CPS) become increasingly complex and critical, ensuring high-quality specifications is crucial. However, the process is often challenging due to the need for expertise and lack of guidance. To address this, we propose a methodological framework and tools for CPS specification and validation. Our approach utilizes semi-formal modeling and incorporates SysML and AcTRL technologies. By applying validation strategies early in the development process, we aim to reduce the costs of correcting errors. This comprehensive approach offers vital support to designers navigating CPS complexities.
Download

Paper Nr: 53
Title:

Using NetLogo to Simulate Large Production Plants: Simulation Performance: A Case Study

Authors:

M. Umlauft and M. Schranz

Abstract: NetLogo is a well-known agent-based modeling and simulation platform. While it is very popular in education, it is still often perceived as having poor performance for large models, owing to performance-related issues in early implementations. We show that, over time, a large number of scientific papers have been published using NetLogo, and we measure its performance on a common laptop a researcher or student might have as a personal machine. We use a NetLogo model with about 2500 lines of code and up to 10000 agents to perform our measurements and show that, even on such an underpowered machine, current versions of NetLogo make it quite possible to run simulations of larger models in reasonable simulation time.
Download

Paper Nr: 64
Title:

Non Linear Homogenization of Laminate Magnetic Material by Computing Equivalent Magnetic Reluctivity

Authors:

Ghania Yousfi and Hassane Mohellebi

Abstract: In the present study, we present numerical modeling of a laminate magnetic material using a homogenization technique that exploits an inverse-problem resolution. The laminate magnetic material consists of alternating magnetic and insulating layers. The equivalent magnetic reluctivity is computed for the homogenized domain by considering a nonlinear behavior of the magnetic layers. A finite element method is used to solve the 2D nonlinear electromagnetic partial differential equation. An optimization problem is constructed and solved by combining the 2D finite element resolution with a conjugate bi-gradient algorithm. The computation of the equivalent magnetic reluctivity is then performed for different excitation field values along the B-H curve. The comparison of the obtained B-H curves of the laminate and homogenized domains with the theoretical B-H curve (experimental data) shows good agreement of the laminate results.
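As a back-of-the-envelope illustration of direction-dependent equivalent reluctivity in a layered medium, the classical mixing rules can be sketched as follows. This is not the paper's finite-element inverse method; the function name and the normalized units (μ = 1/ν) are assumptions for illustration only.

```python
def homogenized_reluctivity(nu_mag, nu_ins, fill):
    """Classical mixing rules for a two-phase laminate (normalized units, mu = 1/nu).

    Flux parallel to the layers: permeabilities average (H is continuous).
    Flux normal to the layers: reluctivities average (B is continuous).
    `fill` is the volume fraction of the magnetic phase.
    """
    mu_parallel = fill / nu_mag + (1.0 - fill) / nu_ins  # volume-averaged permeability
    nu_parallel = 1.0 / mu_parallel
    nu_normal = fill * nu_mag + (1.0 - fill) * nu_ins    # volume-averaged reluctivity
    return nu_parallel, nu_normal
```

With a highly reluctive insulating phase, the normal-direction value dominates, which is why laminates carry flux far more easily in-plane than through-plane.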
Download

Paper Nr: 70
Title:

A Web-Based System for Learning Qualitative Constraint Networks with Preferences

Authors:

Pablo Echavarria and Malek Mouhoub

Abstract: We present a new web-based platform crafted to represent and learn Qualitative Constraint Networks (QCNs) with preferences, focusing specifically on temporal data. The system uses a learning algorithm that extracts qualitative temporal constraints through user-guided membership queries. The learning process is enhanced with transitive closure (Path Consistency) to infer new relations and reduce the number of queries. Path consistency relies on Allen's interval algebra composition table. During the learning phase, the user can add their preferences, which are represented by a conditional preference network (CP-net).
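The transitive-closure (path consistency) step can be sketched with a toy qualitative calculus. To keep the composition table small, the sketch below uses the three-relation point algebra rather than Allen's thirteen interval relations; the function names are illustrative, not the platform's API.

```python
from itertools import product

# Point algebra: each constraint is a set of base relations among {'<', '=', '>'}.
# comp[(a, b)] lists the possible relations x?z given x a y and y b z.
ALL = {'<', '=', '>'}
comp = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): ALL,
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): ALL,   ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    """Union of the compositions of every pair of base relations."""
    out = set()
    for a, b in product(r1, r2):
        out |= comp[(a, b)]
    return out

def path_consistency(net, variables):
    """Tighten net[(i, j)] until a fixpoint; net maps ordered variable pairs to relation sets."""
    changed = True
    while changed:
        changed = False
        for i, j, k in product(variables, repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = net[(i, j)] & compose(net[(i, k)], net[(k, j)])
            if tightened != net[(i, j)]:
                net[(i, j)] = tightened
                changed = True
    return net
```

Starting from x &lt; y and y &lt; z, the closure tightens the unknown pair (x, z) to {&lt;} without any extra query, which mirrors how propagation can reduce the number of membership queries.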
Download

Paper Nr: 78
Title:

Possibilities of Simulation of the Socio-Political Conflicts Based on the Mathematical Technique of the Langmuir Monolayers Theory

Authors:

A. Y. Petukhov, A. N. Morozov, Yu. M. Selivantyev, I. V. Vorotyntsev, O. A. Raitman and N. S. Morozov

Abstract: The possibility of creating a model of social and political processes (in particular, of conflicts) using the chemical theory of monolayers is studied. The main theoretical approaches to the simulation of social processes are analyzed. Formalized dynamic parameters of protest processes in a crowd are defined. A mathematical model based on the chemical theory of monomolecular layers and the coupling field parameter (h) is proposed. In addition, the basic equations are derived, interpreted and applied to social processes. The main effects in the studied processes are described.
Download

Paper Nr: 84
Title:

A Simulation Analysis of Economic and Environmental Factors in the Design of an Electric Vehicle Battery Reverse Supply Chain

Authors:

Melissa V. Vallejos, Andrew Greasley and Aristides Matopoulos

Abstract: This article presents a study of a discrete-event simulation model of a UK reverse supply chain (RSC) for electric vehicle batteries. The purpose of the study is to use the model to run a set of simulated scenarios to explore how different operational strategies affect the RSC design configuration. The performance of the RSC can be measured in terms of its economic impact (such as the value of material recovered and production savings) and environmental impact (such as batteries recovered, remanufactured and repurposed, kg of materials recovered, and CO2 emissions reduction). A key outcome of the study is that, although supply chain participants were aware of individual processes within the RSC, the insights of the model covering the whole RSC and the metrics it generated would enable them to make better-informed RSC design decisions.
Download

Paper Nr: 87
Title:

Eco-Sustainability and Efficiency of Healthcare Complex Systems

Authors:

Ilaria A. Amantea and Marinella Quaranta

Abstract: Healthcare is one of the most difficult complex systems to optimize. The challenge lies in the multiple factors to balance. Some are common to many other industries, while some are riskier, and the whole system must be well-balanced to flow and ensure the functioning of a vital service for the citizens. The close interconnection between the various factors means that making changes to one aspect will have cascading effects on many other aspects. Therefore, the optimization must not be done considering a single parameter, but rather the whole chain. In line with the European objectives of digitalization and eco-sustainability, in this paper we present an overview of the impact of digitalizing certain paper documents on process activities, times, costs, and archive spaces.
Download

Area 2 - Simulation Technologies, Tools and Platforms

Full Papers
Paper Nr: 26
Title:

A Scalable Synthetic Data Creation Pipeline for AI-Based Automated Optical Quality Control

Authors:

Christian Schorr, Sebastian Hocke, Tobias Masiak and Patrick Trampert

Abstract: In recent years, the industry’s interest in improving its production efficiency with AI algorithms has grown rapidly. Advances in computer vision, in particular, seem promising for visual quality inspection. However, the proper classification or detection of defects in manufactured parts based on images or recordings requires large amounts of annotated data, ideally containing every possible occurring defect of the manufactured part. Since some defects appear only rarely in production, sufficient data collection takes a lot of time and may lead to a waste of resources. In this work we introduce a configurable, reusable, and scalable 3D rendering pipeline based on a digital reality concept for generating highly realistic annotated image data. We combine various modelling concepts and rendering techniques and evaluate their use and practicability for industrial purposes by customizing our pipeline for a real-world industrial use case. The generated synthetic data is used in different combinations with real images to train a deep learning model for defect prediction. The results show that using synthetic data is a promising approach for AI-based automated quality control.
Download

Paper Nr: 74
Title:

Evolutionary Multi-Objective Task Scheduling for Heterogeneous Distributed Simulation Platform

Authors:

Xutian He, Yanlong Zhai, Ousman Manjang and Yan Zheng

Abstract: Most existing distributed simulation platforms lack native support for Python scripts, thereby hindering the seamless integration of AI models developed in Python. Some simulation platforms support script languages like Lua or JavaScript, but scheduling tasks in heterogeneous simulation platforms composed of a simulation engine and a script engine is a challenging problem. Moreover, conventional task scheduling methods often overlook the simulation time constraints, which are essential for simulation synchronization. In this paper, we propose a Heterogeneous Distributed Simulation Platform (HDSP) that can integrate different script languages, especially Python, to empower the simulation by leveraging intelligent AI models. A Dynamic Multi-Objective Optimization (D-MO) Scheduler is also designed to efficiently schedule simulation tasks that run across heterogeneous simulation engines and satisfy simulation synchronization constraints. HDSP integrates various script engines, enhancing its adaptability to model dynamic simulation logic using different script languages. The D-MO Scheduler optimizes the Simulation Acceleration Ratio (SAR), Average Weighted Waiting Time (AWWT), and Resource Utilization (RU). The D-MO scheduling problem is characterized as an NP-hard problem and tackled using the NSGA-III algorithm. The simulation time synchronization constraints are implemented through the Lower Bound on Time Stamp (LBTS) and a lookahead approach. The comparative results and statistical analysis demonstrate the superior efficacy and distribution performance of the proposed D-MO Scheduler. The proposed HDSP and D-MO Scheduler significantly boost the capability to support Python-based AI algorithms and navigate complex scheduling demands efficiently.
Download

Paper Nr: 89
Title:

Modelling and Simulation-Based Evaluation of Twinning Architectures and Their Deployment

Authors:

Randy Paredis and Hans Vangheluwe

Abstract: The Twinning paradigm –subsuming Digital Models, Digital Shadows and Digital Twins– is a powerful enabler for analyzing, verifying, deploying and maintaining complex Cyber-Physical Systems (CPSs). Throughout the (currently quite ad-hoc) creation of such systems, a plethora of choices impacts the functionality and performance of the realized Twinning system comprised of the actual system under study and its twin. The choices that are made have a high impact on the required investment when creating the twin, most notably on the development time and deployment cost. This is especially true when multiple iterations are needed to find the most appropriate level(s) of abstraction and detail, architecture, technologies and tools. As a core contribution, this work follows a Model-Based Systems Engineering methodology: before realizing a Twinning architecture, Discrete EVent system Specification (DEVS) models for deployed architecture alternatives are constructed and simulated, to evaluate their suitability. A simple use-case of a ship moving in one dimension is used as a running example.
Download

Short Papers
Paper Nr: 19
Title:

A New Digital Twin Paradigm: Definition, Framework, and Proposed Architecture

Authors:

Jhonathan V. Barbosa, Omar C. Gómez and Jaime G. García

Abstract: In this paper, the concept of Digital Twins is addressed in the context of Industry 4.0, highlighting its definition, functional components, scope of application, proposed framework, and architecture. A definition is proposed that emphasizes the precise replication of physical reality and the ability to adapt to changes and incoming data. The proposed framework and architecture provide guidance for the effective implementation of Digital Twins, emphasizing the importance of data management and versatile infrastructure. In summary, Digital Twins represent a transformative technology with the potential to improve operational efficiency, drive innovation, and realize the vision of Industry 4.0. Their evolution will continue to require additional research and practical applications to unlock their full potential across various industrial and commercial sectors.
Download

Paper Nr: 25
Title:

DREAM-ON GYM: A Deep Reinforcement Learning Environment for Next-Gen Optical Networks

Authors:

Nicolás Jara, Hermann Pempelfort, Erick Viera, Juan P. Sanchez, Gabriel España and Danilo Borquez-Paredes

Abstract: A novel open-source toolkit for a straightforward implementation of deep reinforcement learning (DRL) techniques to address any resource allocation problem in current and future optical network architectures is presented. The tool follows OpenAI GYMNASIUM guidelines, presenting a versatile framework adaptable to any optical network architecture. Our tool is compatible with the Stable Baselines library, allowing the use of any agent available in the literature or created by the software user. For the training and testing process, we adapted the Flex Net Sim simulator to be compatible with our toolkit. Using three agents from the Stable Baselines library, we exemplify our framework's performance to demonstrate the tool’s overall architecture and assess its functionality. Results demonstrate how easily and consistently our tool can solve optical network resource allocation challenges in just a few lines of code, applying deep reinforcement learning techniques and ad-hoc heuristic algorithms.
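For readers unfamiliar with the Gymnasium-style reset()/step() interface the toolkit follows, here is a toy resource-allocation environment in that shape. The class name, reward scheme, and state encoding are invented for illustration and do not reflect the actual DREAM-ON GYM API.

```python
class LinkAllocationEnv:
    """Toy resource-allocation environment with a Gymnasium-style
    reset()/step() interface (no gymnasium dependency; illustration only)."""

    def __init__(self, n_slots=8):
        self.n_slots = n_slots
        self.slots = [0] * n_slots

    def reset(self, seed=None):
        """Free all slots; return (observation, info) as Gymnasium does."""
        self.slots = [0] * self.n_slots
        return tuple(self.slots), {}

    def step(self, action):
        """Allocate slot `action`: +1 reward if free, -1 on a collision (blocking)."""
        if self.slots[action] == 0:
            self.slots[action] = 1
            reward = 1.0
        else:
            reward = -1.0
        terminated = all(self.slots)  # episode ends when every slot is occupied
        return tuple(self.slots), reward, terminated, False, {}
```

Because the interaction happens only through these two methods, any off-the-shelf DRL agent written against the Gymnasium convention can be dropped in unchanged.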
Download

Paper Nr: 32
Title:

The Negotiator: Interactive Hostage-Taking Training Simulation

Authors:

Pierre-Benjamin Monaco, Per Backlund and Stéphane Gobron

Abstract: High-stakes professions like negotiators and pilots utilize simulations for risk-free training, enhancing skills and decision-making. Traditional simulations, while effective, are resource-intensive. Computer simulations offer a scalable alternative but lack realism and sensory engagement, impacting immersion. This study explores mobile phones’ role in simulation-based learning, using a 3D hostage-taking scenario to improve immersion through realistic interactions. The simulation integrates a detailed environment and interactive elements, and demonstrates the potential of mobile technology to enhance training in critical fields by making simulations more lifelike and engaging. This paper is presented alongside a twin paper, both part of the same Master's final project. The second paper introduces a different version of the simulation using a Large Language Model (like ChatGPT) to generate dialogue freely and interactively. It also discusses the results of a study comparing immersion levels between the two versions.
Download

Paper Nr: 33
Title:

Interactive Storytelling Apps: Increasing Immersion and Realism with Artificial Intelligence?

Authors:

Pierre-Benjamin Monaco, Per Backlund and Stéphane Gobron

Abstract: The advent of Large Language Models (LLMs) has revolutionized digital narration, moving beyond the rigid and time-consuming process of creating conversational trees. These traditional methods required significant multidisciplinary expertise and often disrupted dialogue coherence due to their limited pathways. With LLMs, character simulation becomes accessible and coherent, allowing the creation of dynamic personas from text descriptions. This shift raises the possibility of streamlining content creation, reducing costs, and enhancing immersion with interactive dialogues through expansive conversational capabilities. To address related questions, a digital hostage-taking simulation was set up, and this publication reports the results obtained on both the feasibility and the immersion aspects. This paper is proposed as a twin paper to one detailing the implementation of a simulation that uses an actual mobile phone to communicate with the hostage-taker.
Download

Paper Nr: 39
Title:

A Digital Twin based Approach to Structural Mechanics: New Perspectives for Robotics in Forestry and Beyond

Authors:

Dorit Kaufmann, Tobias Osterloh and Jürgen Rossmann

Abstract: Computational simulations are nowadays crucial for the development of any complex mechatronic system. This holds especially true when it comes to robots acting in a highly dynamic environment, such as robots used as mobile machinery in forestry. The ever-changing loads acting on these robots result from different weather conditions, ground stability, leverage forces of falling trees during cutting, etc. Thus, the structural layout of the robot is rather sophisticated. Nevertheless, it is usually done once at the beginning of the design process, and the dynamic loads can only be estimated, leading to huge safety margins. Given the ecological consequences of every operation in forestry, it is of the utmost importance not only to produce a sound structural design, but also to computationally analyse the mobile machinery directly in its actual environment. This work proposes a Digital Twin (DT) based approach to structural mechanics. Every feature of the environment (every tree, the soil, etc.) as well as the robot itself can be represented by a DT. An existing Rigid Body Dynamics (RBD) simulation is used to record all acting forces and moments during an operation. They serve as input for a Finite Element Analysis (FEA), thus enabling a holistic simulation framework.
Download

Area 3 - Application Domains

Full Papers
Paper Nr: 35
Title:

Toward Physics-Aware Deep Learning Architectures for LiDAR Intensity Simulation

Authors:

Vivek Anand, Bharat Lohani, Gaurav Pandey and Rakesh Mishra

Abstract: Autonomous vehicles (AVs) heavily rely on LiDAR perception for environment understanding and navigation. LiDAR intensity provides valuable information about the reflected laser signals and plays a crucial role in enhancing the perception capabilities of AVs. However, accurately simulating LiDAR intensity remains a challenge due to the unavailability of material properties of the objects in the environment, and complex interactions between the laser beam and the environment. The proposed method aims to improve the accuracy of intensity simulation by incorporating physics-based modalities within the deep learning framework. One of the key entities that captures the interaction between the laser beam and the objects is the angle of incidence. In this work, we demonstrate that adding the LiDAR incidence angle as a separate input modality to the deep neural networks significantly enhances the results. We integrated this novel input modality into two prominent deep learning architectures: U-NET, a Convolutional Neural Network (CNN), and Pix2Pix, a Generative Adversarial Network (GAN). We investigated these two architectures for the intensity prediction task and used SemanticKITTI and VoxelScape datasets for experiments. The comprehensive analysis reveals that both architectures benefit from the incidence angle as an additional input. Moreover, the Pix2Pix architecture outperforms U-NET, especially when the incidence angle is incorporated.
Download

Paper Nr: 47
Title:

A Sampling-Based Approach to UAV Manipulator Path Planning

Authors:

Zamoum Housseyn, Guiatni Mohamed, Bouzid Yasser, Alouane M. Amine and Khelal Atmane

Abstract: This paper presents a new approach to path planning for unmanned aerial manipulator systems (UAMs) using Sampling-Based Methods and Random Geometric Models (RGM) to efficiently search the configuration space for feasible, collision-free paths. The RGM generates random points in the UAM’s workspace to guide sampling-based algorithms in constructing graphs that link the aerial manipulator’s initial and final positions. These graphs are then explored using the RRT* algorithm to find an optimal collision-free path. The effectiveness of this approach is demonstrated through different scenarios, showing that it outperforms existing path planning techniques in terms of efficiency, computing time, and robustness. The proposed framework is adaptable to various application scenarios and environments, making it a valuable tool for applications such as search and rescue missions, surveillance, and inspection tasks.
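To illustrate the family of sampling-based planners the paper builds on, here is a plain RRT in a 2D workspace with circular obstacles. RRT* would additionally choose parents by cost and rewire the tree for asymptotic optimality, which is omitted here; the workspace bounds, step size, and goal bias are arbitrary illustrative choices.

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iters=5000, seed=1):
    """Simplified 2D RRT: repeatedly steer the nearest tree node toward a
    random sample (with a 10% bias toward the goal) until the goal is reached.
    `obstacles` is a list of (center, radius) circles. Returns a path or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}

    def free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:
            # Goal reached: walk the parent pointers back to the start.
            path, node = [goal], new
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None
```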
Download

Paper Nr: 79
Title:

Increasing Resilience in Production Networks: A Practical Approach Based on Scenario Planning and Simulation-Based Capacity Analysis

Authors:

David Kunz, Tim Maisel, Andreas Kunze and Jörg Franke

Abstract: In the current global economic landscape, companies with an international presence face the challenge of ensuring that their production networks are not only efficient but also resilient to unpredictable events. Recent technological advancements and the close integration of global production networks have been increasingly disrupted. During times of global crises, it becomes evident that traditional approaches are no longer sufficient. Therefore, the focus is shifting from reactive measures to proactive prevention. This paper presents a novel approach for increasing resilience in a production network based on a combination of systematic foresight of unpredictable events using scenario planning and a simulation-based capacity analysis for the identified scenarios. To demonstrate and validate the application of the proposed approach, a case study for the production network of a large German healthcare company is conducted and presented.
Download

Short Papers
Paper Nr: 24
Title:

Utilizing Sensor and Actuator Virtualization to Achieve a Systemic View of Mobile Heterogeneous Cyber-Physical Systems

Authors:

Martin Richter, Reinhardt Karnapke and Matthias Werner

Abstract: When programming cyber-physical systems, application developers currently utilize physical sensors and actuators individually to achieve the desired observations and impacts within the physical world. This is an error-prone and complex task given the size, heterogeneity, and mobility of prevailing cyber-physical systems. We introduce an application model that allows the application developers to take a physical perspective. By means of this model, the programmers describe desired observations and influences with respect to the physical world without directly referencing physical devices. We present an additional model for a runtime environment that transparently utilizes the available physical devices to reach the application developers’ targets. We show that an implementation of our models is functional via simulation.
Download

Paper Nr: 55
Title:

Multi-Method Approaches for Simulation Modelling of Warehouse Processes

Authors:

Pietro De Vito, Umberto Battista, Anna Bolognesi and Stefano Sanfilippo

Abstract: Efficient warehouse management is essential for business-to-business (B2B) operations, ensuring timely delivery, cost minimization, and operational efficiency. To meet these challenges, advanced modelling and simulation techniques are increasingly adopted. This paper shows the application of multi-method simulation approaches, specifically agent-based and discrete-event simulation, to optimize warehouse processes and resource allocation for a leading sports brand retailer in the B2B sector. By combining these approaches, we aimed to capture the complexity of warehouse operations and identify opportunities for improvement. The simulation model, developed using AnyLogic software, integrated agent-based modelling to represent entities such as packages, articles, orders, warehousemen, and trucks, along with discrete-event simulation to model key events like order arrival and truck departure. The developed model has been used to optimize resource allocation, ensuring order fulfilment. Scenario analyses revealed varying resource requirements across different demand scenarios, highlighting the challenges posed by increasing demand volumes. The study underscores the importance of strategic resource planning and proactive measures to address capacity limitations and ensure warehouse efficiency in meeting future demand. Our findings contribute to informed decision-making in warehouse management, guiding strategies for optimization and adaptation to evolving market demands.
Download

Paper Nr: 65
Title:

Adapting Retail Supply Chains for the Race to Sustainable Urban Delivery

Authors:

Angie Ramirez-Villamil, Anicia Jaegler and Jairo R. Montoya-Torres

Abstract: To deal with urban distribution challenges, companies are redesigning their distribution networks. This paper studies a two-echelon vehicle routing problem, one of the most employed models, with a heterogeneous fleet between echelons. Vehicles in the first echelon are mobile satellites that supply the vehicles in the second echelon. Our study aims to minimize the travel time. To solve this complex problem when facing real-life distribution, a heuristic solution approach is followed by decomposing the components of the problem and applying the well-known nearest neighbor procedure. This approach is also justified by the very large number of delivery points, which keeps the problem dataset computationally tractable. Experiments are run using real data from a delivery company in Paris, France. Different scenarios are evaluated, and the results show that cargo bikes have great potential to reduce some of the externalities caused by conventional delivery systems, while some non-intuitive impacts are also found, such as an increase in land use.
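The nearest neighbor procedure mentioned above fits in a few lines. The sketch below treats stops as Euclidean points purely for illustration; a real deployment would use travel times on the actual road network.

```python
import math

def nearest_neighbor_route(depot, points):
    """Greedy route construction: from the current stop, always visit the
    closest unvisited point. Returns the visiting order, starting at the depot."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, current = [depot], depot
    unvisited = list(points)
    while unvisited:
        nxt = min(unvisited, key=lambda p: dist(current, p))
        unvisited.remove(nxt)
        route.append(nxt)
        current = nxt
    return route
```

Each step only scans the remaining stops, so the heuristic stays fast even for very large instances, at the cost of optimality.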
Download

Paper Nr: 88
Title:

Integrated Data-Driven Framework for Automatic Controller Tuning with Setpoint Stabilization Through Reinforcement Learning

Authors:

Babak Mohajer, Neelaksh Singh and Joram Liebeskind

Abstract: We introduce a three-stage framework for designing an optimal controller. First, we apply offline black-box optimization algorithms to find optimal controller parameters based on a heuristically chosen setpoint profile and a novel cost function that penalizes control-signal oscillations and direction changes. Then, we leverage cloud data to generate device-specific setpoint profiles and tune the controller parameters to perform well on the device with respect to the same cost function. Finally, we train a control policy on top of the offline-tuned controller after deployment on the device, through an online learning algorithm, to handle unseen setpoint variations. A novel reward function encouraging setpoint stabilization is added to prevent destabilization from coupling effects. Bayesian Optimization and Nelder-Mead methods are used for offline optimization, and a state-of-the-art model-free Reinforcement Learning algorithm, namely Soft Actor-Critic, is used for online optimization. We validate our framework using a realistic HVAC hydraulic circuit simulation.
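A minimal sketch of the offline tuning stage, assuming a toy first-order plant and a PI controller. Random search stands in for the Bayesian Optimization and Nelder-Mead methods the paper uses, and the cost weights (tracking error plus a penalty on control-signal movement, a crude proxy for the oscillation and direction-change penalty) are invented for illustration.

```python
import random

def simulate(kp, ki, setpoint=1.0, steps=50, dt=0.1):
    """Run a PI controller on the first-order plant y' = -y + u
    and return an illustrative cost."""
    y, integ, u_prev, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ
        # Tracking error plus a penalty on control-signal movement.
        cost += err * err * dt + 0.01 * abs(u - u_prev)
        u_prev = u
        y += (-y + u) * dt  # explicit Euler step of the plant
    return cost

def tune(trials=200, seed=0):
    """Random search over (kp, ki) as a stand-in for BO / Nelder-Mead."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        kp, ki = rng.uniform(0, 5), rng.uniform(0, 5)
        c = simulate(kp, ki)
        if best is None or c < best[0]:
            best = (c, kp, ki)
    return best
```

The same `simulate`-and-score loop accepts any black-box optimizer; only the proposal of the next (kp, ki) pair changes.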
Download

Paper Nr: 93
Title:

Combine Intent Recognition with Behavior Modeling in Teaching Competition Military Simulation Platform

Authors:

Yi Zhang, Shuilin Li, Chuan Ai, Yong Peng and Kai Xu

Abstract: Intent recognition refers to obtaining observations of an agent and then using those observations to reason about its current state and predict its future actions. Behavior modeling, which describes the behavior or performance of an agent, is an important research area in intent recognition. However, few studies have combined behavior modeling with intent recognition to investigate real-world applications. In this paper, we study behavior modeling for intent recognition in cognitive intelligence, aiming to enhance the situational awareness capability of AI and expand its applications across multiple fields. Taking a combat environment and tanks as the research object, and building on the behavior tree and the SBR recognition algorithm, this paper designs a framework and experiments for behavior modeling and intent recognition. First, we use the evolutionary behavior tree algorithm to autonomously generate a behavior model adapted to the environment. Second, we use the SBR algorithm to recognize the actions and planned paths of the enemy tank, guiding our own tank's actions in the TankSimV1.20 simulation platform. The results show that the tank survival rate increases by 80% under the guidance of the intent recognition results, and that our method provides effective guidance for combining behavior modeling with intent recognition, with broad application prospects.
Download
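A behavior tree of the kind the abstract builds on can be sketched minimally as below; the node types are standard, but the tank conditions and actions are hypothetical illustrations, not the TankSimV1.20 API:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, state): return SUCCESS if self.predicate(state) else FAILURE

class Action:
    def __init__(self, effect): self.effect = effect
    def tick(self, state):
        self.effect(state)
        return SUCCESS

class Sequence:  # succeeds only if every child succeeds, in order
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:  # returns success on the first child that succeeds
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# toy tank policy: attack if the enemy is visible, otherwise patrol
tree = Selector(
    Sequence(Condition(lambda s: s["enemy_visible"]),
             Action(lambda s: s.update(action="attack"))),
    Action(lambda s: s.update(action="patrol")),
)
state = {"enemy_visible": True}
tree.tick(state)
print(state["action"])  # → attack
```

An evolutionary variant, as described in the paper, would mutate and recombine such trees and select them by fitness in the simulated environment.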

Paper Nr: 63
Title:

Unlocking Antenna Performance: Harnessing the Power of the Hahn-Banach Theorem in Wireless Communication Systems

Authors:

Muhammad Uzair, Sijjad Ali, Asad Ali, Hamza Amir, Rana A. Bari, Hamid Sharif, Maryam Jamil, M. Hunza, Nabel Akram and Sharofiddin Allaberdiev

Abstract: This paper presents a novel approach for improving antenna performance in wireless communication systems through the Hahn-Banach Theorem. As standards such as 5G and beyond-5G (B5G) advance, traditional design methods are often inadequate. Applying the Hahn-Banach Theorem yields a strong mathematical framework that improves several vital antenna parameters, including gain, efficiency, bandwidth, and radiation patterns. The approach extends solutions from limited to larger design spaces, allowing the search for new configurations under tight constraints. Results show substantial improvements in antenna performance, leading to better, more reliable designs. This interdisciplinary method connects theoretical mathematics with engineering solutions, marking a significant step forward for practical communication technologies. It guarantees excellent linearity and overall performance, which is essential for signal integrity in wireless communication, and will shape future antenna technology and wireless communication research and development.
Download
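For context, the extension form of the Hahn-Banach theorem, which underlies this kind of enlargement of a design space, can be stated as follows; the paper's precise formulation may differ:

```latex
\textbf{Theorem (Hahn-Banach, extension form).}
Let $X$ be a real vector space, $p : X \to \mathbb{R}$ a sublinear functional,
and $f : M \to \mathbb{R}$ a linear functional on a subspace $M \subseteq X$
satisfying $f(x) \le p(x)$ for all $x \in M$. Then there exists a linear
functional $F : X \to \mathbb{R}$ with $F|_{M} = f$ and
$F(x) \le p(x)$ for all $x \in X$.
```

Informally: a functional that is well behaved on a restricted subspace (a limited design space) extends to the whole space without violating the bound, which is the sense in which solutions can be carried from limited to larger design spaces.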