Abstracts Track 2022

Area 1 - Modeling and Simulation Methodologies

Nr: 6

Machine-learning in Forecasting and Reliable Monitoring of Floods in High Slope Watersheds


Gonçalo Jesus, Anabela Oliveira, João Rogeiro, Joao Fernandes and Rui Rodrigues

Abstract: Flash floods are well-known and dangerous natural hazards. In the scope of the INUNDATIO project, we are developing a management support system for flash floods in high slope watersheds, tailored to their hydromorphology. This system combines a sophisticated forecasting platform and a reliable monitoring network with risk analysis methodologies involving scenario simulation, historical river flow data analysis and vulnerability maps of human and environmental impacts. Herein, we focus on the role of machine learning in the development of both the forecasting and the monitoring platforms. In the reliable monitoring component, we present a machine learning approach to improve the quality of the monitoring data, which amounts to detecting issues or failures in the monitoring data sets. Following a dependable, data-quality-oriented methodology, we use neural networks to process the data from multiple sensors and correlate them according to their position along the watersheds, the monitoring timings and the physical processes. These neural networks model the sensors' behaviour, allowing us to compare their outputs with the actual measurements in order to detect sensor failures affecting the data. In the flash flood forecasting component, we consider two steps: one for daily or regular situations, to detect significant rain events, and another that only runs when a predicted rain event can originate flash floods. While the first step is based on conventional numerical models, the second combines the reliable monitoring platform with a machine learning model that uses the available data to output faster small-scale predictions. Here we use Support Vector Machine (SVM) techniques, since these are data-intensive algorithms and provide benefits in terms of model validity and ease of adaptation over time. SVMs have good generalization abilities and are able to adapt to new data, a required functionality in operational situations. We are validating these developments in the Vinhas Creek basin, a flood-prone area affecting the urban area of Cascais, located near Lisbon, Portugal.
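The abstract gives no implementation details for the failure-detection step. A minimal sketch of the underlying idea — model each sensor from correlated nearby sensors, then flag measurements that deviate too far from the model's prediction — might look as follows, with a linear least-squares surrogate standing in for the neural networks and all data, sensor names and thresholds invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic water-level series driven by two correlated upstream sensors
# (illustrative stand-ins for real gauges along the watershed).
upstream = rng.normal(1.0, 0.2, size=(200, 2))
true_level = 0.6 * upstream[:, 0] + 0.4 * upstream[:, 1] + 0.1

# Train a surrogate model of the downstream sensor on historical data.
X = np.column_stack([upstream, np.ones(len(upstream))])   # add bias column
target = true_level + rng.normal(0, 0.01, 200)            # noisy training target
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
resid_std = (X @ coef - target).std()                     # typical model error

def flag_failures(upstream_now, measured_now, k=5.0):
    """Return a boolean mask of suspect measurements: those deviating
    from the model prediction by more than k residual std deviations."""
    Xn = np.column_stack([upstream_now, np.ones(len(measured_now))])
    return np.abs(measured_now - Xn @ coef) > k * resid_std

# Inject a stuck-sensor failure and check that it is detected.
measured = true_level.copy()
measured[50:55] = 0.0          # sensor drops to zero for five time steps
mask = flag_failures(upstream, measured)
print(mask[50:55].all(), mask[:50].any())
```

A neural network (or, in the forecasting component, an SVM regressor) would replace the least-squares fit, but the comparison of model output against actual measurements is the same.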

Nr: 8

P Systems with Protein Rules: Diabetes Mellitus as a Case Study


Yara Y. Hamshawi, Florin-Daniel Bilbie, Andrei Păun, Assaf Malka and Ron Piran

Abstract: In our recently published paper, we demonstrated the power of a new modeling methodology by simulating a system with complex conditions, glucose homeostasis in health and disease (i.e., diabetes), while bringing the science of membrane computing closer to the natural world. P systems (also known as membrane computing) form a subfield of computer science that models living systems with mathematical tools. In a classical P system, cells are surrounded by a simple membrane and computational events take place on both sides of it. When we tried to model the maintenance of normoglycemia in healthy individuals as well as in type-I and type-II diabetes patients, the main challenge was to prioritize the insulin-producing β-cells over other organs. Using classical P systems, we could not implement this hierarchy. We therefore chose to draw on the actual physiology of the membrane and add its properties to the current definitions of membrane computing. To our gratification, we succeeded in developing a new theoretical tool that complements the science of membrane computing. This development relies on the membrane structure of the cell and on the biochemical reactions that occur on the membranes of different organs in our body, thus simulating the true nature of membranes. Consequently, it allows deterministic operation within a non-deterministic system by means of membrane-specific rules. In addition, we showed that the defined systems are computationally universal by simulating register machines. In our presentation at SIMULTECH, we will present the model we developed, show simulations of non-diabetic as well as diabetic patients, and present preliminary in-vivo experimental results based on the predictions made by the model. We hope that these new viewing angles on biological conditions in general, and on diabetes in particular, will lead to the integration of computational modeling into biological science and to a treatment for diabetes mellitus, paving the way for treatments for many more diseases that currently have no cure.
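To make the prioritization idea concrete: the following is a deliberately simplified toy, not the authors' formal definition of P systems with protein rules. It treats membranes as objects carrying priority-ordered rules (the priority standing in for the gating membrane protein), so the β-cell membrane always acts on extracellular glucose before peripheral tissues do; all names, set points and quantities are invented:

```python
from collections import Counter

class Membrane:
    """A membrane carrying protein rules as (priority, rule_fn) pairs.
    A lower priority number fires first, encoding the physiological
    precedence of insulin-producing beta-cells over other organs."""
    def __init__(self, name, rules):
        self.name, self.rules = name, sorted(rules, key=lambda r: r[0])

def beta_cell_rule(env):
    # Membrane protein senses extracellular glucose: per unit above the
    # normoglycemic set point (5, arbitrary), secrete one insulin.
    excess = max(env["glucose"] - 5, 0)
    env["insulin"] += excess

def tissue_rule(env):
    # Peripheral tissue imports excess glucose only when insulin is
    # bound to its membrane receptors (one insulin per glucose unit).
    uptake = min(max(env["glucose"] - 5, 0), env["insulin"])
    env["glucose"] -= uptake
    env["insulin"] -= uptake

def step(env, membranes):
    """One step: membranes apply their rules in priority order."""
    for m in sorted(membranes, key=lambda m: m.rules[0][0]):
        for _, rule in m.rules:
            rule(env)
    return env

env = Counter(glucose=9)                       # hyperglycemic start
system = [Membrane("beta-cell", [(0, beta_cell_rule)]),
          Membrane("muscle", [(1, tissue_rule)])]
step(env, system)
print(env["glucose"], env["insulin"])          # returns to the set point
```

Removing the β-cell membrane from `system` leaves glucose stuck at 9, a crude caricature of type-I diabetes; the actual formalism in the paper defines multiset rewriting rules attached to membrane proteins, with proven universality.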

Nr: 18

Validating and Constructing Behavioral Models for Simulation using Automated Knowledge Extraction


Tabea Sonnenschein, Simon Scheider, Ardine de Wit, Nicole den Braver and Roel Vermeulen

Abstract: Human behavior may be one of the most difficult phenomena to model, since there is no single set of behavior determinants that can explain all of human behavior. Furthermore, evidence on behavior determinants is scattered across publications that take time to read, assess and synthesize into a coherent knowledge base. For this reason, behavioral modelers often lack access to a systematic overview of significant variables for validating the structure of behavioral models. In this paper, we propose a method to compile empirical evidence on any specified behavior choice, in an automated manner, into a Behavior Choice Determinants Ontology, which can be used to implement or validate the structure of behavior models of that choice. The method starts by extracting evidence instances of associations between behavior determinants and choice options, in relation to study groups and moderators, from former studies. For that, information phrases are first extracted using a named entity recognition deep learning model (BERT). Secondly, the evidence relations between the phrases are established using a machine learning model applied to syntactical features of the sentences. The extracted knowledge is then synthesized into an ontology, from which model components and relationships of sufficient quality and relevance for the specific model application can be selected. This evidence base can be used either a) to construct a valid simulation model by translating the declarative knowledge into procedural code, which can be calibrated using local behavioral data, or b) to validate the structure of existing simulation models by verifying that the variable selection corresponds to behavioral knowledge. To test the feasibility and performance of the method, we present an example application with mode of transport as the behavior choice. For that, we fed 42 meta-analyses and systematic reviews studying mode of transport choice into the model. The BERT named entity recognition model achieved a weighted F1 score of 0.87 for tagging the evidence information classes, while the evidence relation inference model (random forest) achieved an F1 score of 0.81 for predicting true relations and a weighted F1 score of 0.93. The result is an ontology with 2001 evidence instances, which are combinations of studies finding significant or non-significant associations between behavior determinants (702 unique ones) and behavior choice options (222 unique ones), whereby some instances specify a study group or a moderator. Our next step is to use semantic and string similarity to cluster synonymous variables and thereby further synthesize the evidence. The paper serves as a proof of concept of our method, which can be reused or adapted in future studies. The method could be extended to structural model validation in other disciplines. Furthermore, additional information concerning the evidence instances could be added, such as the method and control variables used.
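The abstract does not specify the ontology's schema, so the following is only an illustrative sketch of the final synthesis step: given tuples already produced by the (assumed) NER tagger and relation classifier, merge them into unique evidence instances with their supporting studies. All field names and example determinants are our own:

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EvidenceInstance:
    determinant: str
    choice_option: str
    significant: bool
    study_group: Optional[str] = None
    sources: set = field(default_factory=set)   # studies supporting it

def build_ontology(extractions):
    """Merge per-paper extractions into unique evidence instances,
    tracking which studies support each one."""
    ontology = {}
    for paper_id, det, option, significant, group in extractions:
        key = (det, option, significant, group)
        inst = ontology.setdefault(
            key, EvidenceInstance(det, option, significant, group))
        inst.sources.add(paper_id)
    return list(ontology.values())

# Hypothetical outputs of the extraction pipeline.
extractions = [
    ("study-A", "distance to destination", "cycling", True, None),
    ("study-B", "distance to destination", "cycling", True, None),
    ("study-B", "household income", "car", True, "urban adults"),
    ("study-C", "weather", "walking", False, None),
]
onto = build_ontology(extractions)
print(len(onto))   # three unique evidence instances, one backed by two studies
```

The clustering of synonymous variables mentioned as future work would operate on the `determinant` strings before this merge, collapsing near-duplicates such as "distance to destination" and "travel distance".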

Area 2 - Simulation Technologies, Tools and Platforms

Nr: 13

Interoperability in BIM: Limitations, Inconsistencies and Strategies


Alcinia Z. Sampaio, Tomas M. Farinha and Augusto M. Gomes

Abstract: Building Information Modelling (BIM) is a methodology supported by a virtual 3D model-based process that allows the modelling and management of all activities inherent to construction. In an academic context, a master's thesis was developed concerning BIM learning: the concept, the range of application and the use of the available tools. For this purpose, BIM tools for modelling and structural analysis were applied to a real case study. The analysis covers the design, modelling and transposition of information, in order to evaluate the interoperability between BIM platforms (ArchiCAD/Graphisoft, Revit/Autodesk and ETABS/CSI). This study aims to analyze the level of trust that can be placed in the information transferred in the process. This type of academic research improves students' skills, better preparing them for their future activity as civil engineers, as the BIM methodology has been adopted by all sectors of the construction field.

Nr: 12

Professional Short Training Course in BIM: From Fundaments to Heritage Buildings Application


Alcinia Z. Sampaio and Augusto M. Gomes

Abstract: The implementation of the Building Information Modelling (BIM) methodology in the construction industry has achieved wide applicability, with recognized benefits in designing, constructing and operating buildings. A recent short course organized at the University of Lisbon, updated with the most relevant achievements of master's research, was offered to professionals of the industry, namely architects and civil engineers coming from diverse engineering areas — environment, construction, maintenance, consulting and heritage enterprises — and also from public organizations such as city councils. The proposed action covers the areas of construction (conflict analysis, planning and material take-off), structures (interoperability, analysis and transfer of information between software) and the most recent Heritage Building Information Modelling (HBIM) perspective. The course aims to contribute to the dissemination of the potential of BIM in the design, construction and refurbishment of historical buildings. The participants followed the course with great interest and satisfaction, raising several questions directed to the particular activity of each attendee.

Area 3 - Application Domains

Nr: 17

Optimizing the Spread of Disinformation using Reinforcement Learning


David Butts and Michael Murillo

Abstract: The rise of software bots in social media has helped expand the spread of disinformation. As a result of this increased spread, we have seen the emergence of polarization in social networks, for example the formation of echo chambers (Tornberg, PLoS ONE, 2018). Social media is a target for disinformation and its polarizing effects due to its low cost of access and the ease of sharing and discussing stories without fact-checking oversight (Shu et al., SIGKDD Explor. Newsl., 2017). Without interventions, disinformation will continue to spread through social networks and can lead to negative effects on society, for example anti-vaccine disinformation (Loomba et al., Nature Human Behaviour, 2021; Burki, The Lancet Digital Health, 2019; Cornwall, Science, 2020) convincing people not to get inoculated, or potential influence on the 2016 United States presidential election (Badawy et al., IEEE/ACM ASONAM, 2018; Fourney et al., CIKM, 2017). There is a growing literature applying agent-based modeling to the spread of disinformation (Ross et al., European Journal of Information Systems, 2019; Rajabi et al., AAMAS, 2020; Beskow et al., IEEE WSC, 2019; Brainard et al., Revue d'épidémiologie et de santé publique, 2020). We contribute to these studies by developing an agent-based model of a social network in which agents can share ideas. The social network is constructed as a graph, where agents are represented by nodes and agents' social connections by edges. Each agent has a set of opinions that they update by interacting with their connections and applying a generalized Attraction-Repulsion Model (Axelrod et al., PNAS, 2021). We introduce an attacker agent whose goal is to maximize the spread of disinformation in the network. The attacking agent can create edges with other agents on the network and adjust the extremeness of its disinformation. If non-attacking agents detect that the attacking agent's opinions are too extreme, they remove their connection to the attacking agent. While the rules are known to the attacker, the optimal strategy is not known a priori. We use reinforcement learning so that the attacking agent learns the best strategy for maximizing the spread of disinformation in the social network.
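The dynamics described above — attraction-repulsion opinion updates, an attacker choosing an extremeness level, and agents unfollowing when that level is detected as too extreme — can be sketched in a few lines. This is not the authors' model: the full reinforcement learner is reduced here to an epsilon-greedy bandit over three extremeness levels, peer-to-peer interactions are omitted, and every parameter value is invented for illustration:

```python
import random

random.seed(1)

TOLERANCE = 0.6   # opinion gap at which an agent unfollows the attacker
STEP = 0.1        # attraction-repulsion step size

def interact(a, b):
    """Generalized attraction-repulsion: nearby opinions attract,
    distant opinions repel."""
    gap = b - a
    if abs(gap) < TOLERANCE:
        return a + STEP * gap   # attraction toward b
    return a - STEP * gap       # repulsion away from b

def episode(extremeness, n_agents=30, steps=50):
    """One episode: the attacker broadcasts a fixed opinion; agents who
    detect it as too extreme unfollow. Reward is the mean opinion shift
    toward the attacker's position."""
    opinions = [random.uniform(-0.3, 0.3) for _ in range(n_agents)]
    followers = set(range(n_agents))
    for _ in range(steps):
        for i in list(followers):
            if abs(opinions[i] - extremeness) >= TOLERANCE:
                followers.discard(i)            # too extreme: unfollow
            else:
                opinions[i] = interact(opinions[i], extremeness)
    return sum(opinions) / n_agents

# Epsilon-greedy bandit over discrete extremeness levels: too mild
# spreads little, too extreme loses followers.
levels = [0.2, 0.5, 0.8]
value = {lvl: 0.0 for lvl in levels}
count = {lvl: 0 for lvl in levels}
for _ in range(300):
    lvl = (random.choice(levels) if random.random() < 0.2
           else max(levels, key=value.get))
    reward = episode(lvl)
    count[lvl] += 1
    value[lvl] += (reward - value[lvl]) / count[lvl]   # incremental mean

best = max(levels, key=value.get)
print(best)
```

In this toy setting the bandit settles on the intermediate level: the mild message (0.2) keeps every follower but moves opinions little, the extreme one (0.8) is detected and loses most followers, so 0.5 maximizes the spread. A full RL formulation would additionally let the attacker create edges and condition its choice on the network state.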