June 2018, Volume 18, Number 3
DOLINSKÝ, P. ― ANDRÁŠ, I. ― MICHAELI, L. ― GRIMALDI, D. Model for Generating Simple Synthetic ECG Signals
COJOCAR, G. S. ― GURAN, A. M. On Automatic Identification of Monitoring Concerns Implementation
MÁRTON, G. ― SZEKERES, I. ― PORKOLÁB, Z. Towards a High-Level C++ Abstraction to Utilize the Read-Copy-Update Pattern
JAKOB, J. ― TICK, J. Traffic Scenarios and Vision Use Cases for the Visually Impaired
VADOVSKÝ, M. ― PARALIČ, J. Utilizing Processed Records of Patient's Speech in Determining the Stage of Parkinson's Disease
LEŠO, M. ― ŽILKOVÁ, J. ― BIROŠ, M. ― TALIAN, P. Survey of Control Methods for DC-DC Converters
MIHELIČ, J. ― ČIBEJ, U. Experimental Comparison of Matrix Algorithms for Dataflow Computer Architecture
Summary:
Pavol DOLINSKÝ - Imrich ANDRÁŠ - Linus MICHAELI - Domenico GRIMALDI MODEL FOR GENERATING SIMPLE SYNTHETIC ECG SIGNALS [full paper]
This paper proposes a mathematical model for generating a synthetic ECG signal based on the geometrical features of a real
ECG signal. By varying its parameters, each wave of the PQRST complex can be adjusted as needed, allowing the generation
of arbitrary ECG patterns typical of diseases and arrhythmias. The input parameters are constrained so that the order of the PQRST
waves cannot be mixed up during automatic parameter variation, and a different pattern can be generated independently for each
subsequent heartbeat. Each wave is modelled using an elementary trigonometric function or a Gaussian monopulse. With the optional
addition of equipment noise as well as the respiration frequency, the artificial signal can serve as a test signal for signal processing
methods. The model was tested by comparing the synthesised patterns against patterns generated by the LabVIEW Biomedical Toolkit,
with the model parameters found using the differential evolution algorithm.
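As a rough illustration of the wave-summation idea, the sketch below synthesises one heartbeat as a sum of parameterised pulses. It is a minimal sketch only, not the authors' model: the amplitudes, centres, widths, sampling rate, and the use of plain Gaussian pulses are illustrative assumptions.

// Minimal sketch of a pulse-sum PQRST generator (illustrative parameters,
// not the authors' model). Each wave is a Gaussian centred at time c [s]
// with amplitude a [mV] and width w [s]; one heartbeat lasts about 1 s.
#include <cmath>
#include <cstdio>

struct Wave { double a, c, w; };

double beat(double t, const Wave* waves, int n) {
    double y = 0.0;
    for (int i = 0; i < n; ++i) {
        double d = (t - waves[i].c) / waves[i].w;
        y += waves[i].a * std::exp(-0.5 * d * d);   // Gaussian wave contribution
    }
    return y;
}

int main() {
    // Hypothetical P, Q, R, S, T parameters chosen only to look ECG-like.
    const Wave pqrst[5] = { { 0.15, 0.20, 0.025},   // P
                            {-0.10, 0.35, 0.010},   // Q
                            { 1.00, 0.40, 0.012},   // R
                            {-0.25, 0.45, 0.010},   // S
                            { 0.30, 0.70, 0.040} }; // T
    const double fs = 500.0;                        // sampling rate [Hz]
    for (int k = 0; k < 500; ++k)
        std::printf("%f %f\n", k / fs, beat(k / fs, pqrst, 5));
    return 0;
}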
|
Grigoreta-Sofia COJOCAR - Adriana-Mihaela GURAN ON AUTOMATIC IDENTIFICATION OF MONITORING CONCERNS IMPLEMENTATION [full paper]
Automatic identification of crosscutting concerns implementation is still a challenging task in software engineering. The approaches
proposed so far for crosscutting concerns identification are all bottom-up: starting from the source code of a software
system, they try to discover all the crosscutting concerns that exist in the system. In this paper we present a top-down approach
developed from observations gathered by analyzing how monitoring crosscutting concerns are implemented in different
open-source, object-oriented software systems. The approach aims to identify only one type of crosscutting concern, namely monitoring.
It tries to automatically identify the type of logger used to implement the monitoring crosscutting concern by analyzing the
attributes defined in Java-based software systems. We also present and discuss the results obtained by applying this approach to
different open-source Java software systems.
|
Gábor MÁRTON - Imre SZEKERES - Zoltán PORKOLÁB TOWARDS A HIGH-LEVEL C++ ABSTRACTION TO UTILIZE THE READ-COPY-UPDATE PATTERN [full paper]
Concurrent programming with classical mutex/lock techniques does not scale well when reads are far more frequent than writes.
Such situations arise in operating system kernels, among other performance-critical multithreaded applications. Read-copy-update
(RCU) is a well-known technique for solving this problem. RCU guarantees minimal overhead for read operations and allows them to
occur concurrently with write operations. It is a favourite concurrency pattern in low-level, performance-critical applications such as the
Linux kernel. Currently, there is no high-level RCU abstraction for the C++ programming language. In this paper, we present our
C++ RCU class library, which supports efficient concurrent programming in the read-copy-update pattern. The library has been carefully
designed to optimise performance in a heavily multithreaded environment while at the same time providing high-level abstractions, like
smart pointers and other C++11/14/17 features.
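For readers unfamiliar with the pattern, the sketch below shows the general read-copy-update idea in portable C++11. It is not the authors' library and makes no performance claims: readers take a lock-free snapshot of the current version, a writer copies it, modifies the copy, and publishes it atomically, and reclamation is handled here by shared_ptr reference counting rather than true RCU grace periods.

// Generic read-copy-update sketch in portable C++11 (not the authors' library).
#include <memory>
#include <string>
#include <thread>
#include <vector>
#include <cstdio>

struct Config { std::string host; int port; };

// The currently published version; all accesses go through the atomic_* helpers.
std::shared_ptr<const Config> g_config =
    std::make_shared<const Config>(Config{"localhost", 80});

void reader() {
    // Analogue of rcu_read_lock(): grab a consistent snapshot, no mutex needed.
    std::shared_ptr<const Config> snap = std::atomic_load(&g_config);
    std::printf("%s:%d\n", snap->host.c_str(), snap->port);
}

void writer(int new_port) {
    // Copy the current version, update the copy, then publish it atomically.
    std::shared_ptr<const Config> old = std::atomic_load(&g_config);
    auto updated = std::make_shared<const Config>(Config{old->host, new_port});
    std::atomic_store(&g_config, updated);
}

int main() {
    std::thread w(writer, 8080);
    std::vector<std::thread> rs;
    for (int i = 0; i < 4; ++i) rs.emplace_back(reader);
    w.join();
    for (auto& r : rs) r.join();
    return 0;
}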
|
Judith JAKOB - József TICK TRAFFIC SCENARIOS AND VISION USE CASES FOR THE VISUALLY IMPAIRED [full paper]
We gather the requirements that visually impaired road users have concerning a camera-based assistive system in order to develop a
concept for transferring algorithms from Advanced Driver Assistance Systems (ADAS) to this domain. To this end, we combine procedures
from software engineering, in particular requirements engineering, with methods from the qualitative social sciences, namely expert
interviews conducted according to the problem-centered interview methodology. The evaluation of the interviews yields six traffic
scenarios of interest for the assistance of visually impaired road users, clustered into three categories: orientation, pedestrian,
and public transport scenarios. From each scenario we derive use cases based on computer vision and state which of them are also
addressed in ADAS, so that we can examine them further in future work to formulate the transfer concept. We present a general literature
review of ADAS solutions for the overlapping use cases identified in both fields and take a closer look at how to adapt
ADAS lane detection algorithms to the assistance of the visually impaired.
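As a point of reference for the lane detection use case, a classical ADAS-style pipeline typically runs edge detection followed by a Hough transform. The sketch below (greyscale input, Canny edges, probabilistic Hough lines) is a generic example assuming OpenCV; the thresholds and the input file name are illustrative only, and this is not the authors' transfer concept.

// Classical lane detection sketch: greyscale -> Canny edges -> Hough segments.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

int main() {
    // "road.jpg" is a hypothetical input image of the road ahead.
    cv::Mat gray = cv::imread("road.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) { std::puts("no input image"); return 1; }

    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                      // edge map of lane markings

    std::vector<cv::Vec4i> lines;                         // each line: x1, y1, x2, y2
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 50.0, 10.0);

    for (const auto& l : lines)
        std::printf("segment (%d,%d)-(%d,%d)\n", l[0], l[1], l[2], l[3]);
    return 0;
}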
|
Michal VADOVSKÝ - Ján PARALIČ UTILIZING PROCESSED RECORDS OF PATIENT'S SPEECH IN DETERMINING THE STAGE OF PARKINSON'S DISEASE [full paper]
Medical procedures for disease diagnostics are highly demanding and time-consuming. Data mining methods can
accelerate this process and assist doctors in making decisions in complex situations. In the case of Parkinson's disease (PD),
diagnosing the initial stage of the disease is the primary issue, since the symptoms are not unambiguous or easily observable.
This article therefore focuses on determining the actual stage of PD from recorded signals of the patient's speech using
decision trees (C4.5, C5.0 and CART). Methods such as RandomForest, Bagging and Boosting were also employed to improve the
existing classification models. Model accuracy was estimated using k-fold cross-validation and leave-one-out validation.
In addition, experiments were performed to remove collinearity in the data by computing the
variance inflation factor (VIF) in order to increase the accuracy of the models.
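For reference, the collinearity screening mentioned above relies on the standard definition of the variance inflation factor (the usual cut-off values of 5 or 10 are common rules of thumb, not values taken from the paper):

\mathrm{VIF}_j = \frac{1}{1 - R_j^2},

where R_j^2 is the coefficient of determination obtained by regressing the j-th predictor on all remaining predictors; predictors with a VIF above the chosen cut-off are candidates for removal.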
|
Martin LEŠO - Jaroslava ŽILKOVÁ - Milan BIROŠ - Peter TALIAN SURVEY OF CONTROL METHODS FOR DC-DC CONVERTERS [full paper]
The paper presents a survey of currently used control methods for DC-DC converters and of the current state of research in this
area. Determining the ideal method for output voltage control in DC-DC converters is a demanding task due to their nonlinear
character, the variety of circuit topologies, and the range of applications. The currently prevailing control methods implemented in
DC-DC converters are voltage-mode, current-mode and hysteretic control. Each of these methods has certain drawbacks, which is why
intensive research is under way on more advanced DC-DC converter control methods that would replace or supplement the
currently used types of control in order to increase the reliability and quality of DC-DC converter output voltage control. The paper
includes a basic description of each control method, with a focus on its operating principle and on its main advantages and
disadvantages.
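Of the three methods named above, hysteretic control is the simplest to state: the switch turns on when the output voltage falls below the lower band limit and off when it exceeds the upper limit. The sketch below illustrates only this basic idea; the 5 V reference and the ±50 mV band are illustrative values, not taken from the paper.

// Minimal sketch of hysteretic (bang-bang) output-voltage control.
#include <cstdio>

struct HystController {
    double v_ref;     // desired output voltage [V]
    double band;      // half-width of the hysteresis band [V]
    bool   sw_on;     // current switch state

    bool update(double v_out) {
        if (v_out < v_ref - band) sw_on = true;        // output too low  -> turn on
        else if (v_out > v_ref + band) sw_on = false;  // output too high -> turn off
        return sw_on;                                  // inside the band -> keep state
    }
};

int main() {
    HystController ctl{5.0, 0.05, false};              // 5 V reference, +/-50 mV band
    const double samples[] = {4.90, 4.97, 5.02, 5.08, 5.01, 4.93};
    for (double v : samples)
        std::printf("v_out=%.2f V  switch=%s\n", v, ctl.update(v) ? "ON" : "OFF");
    return 0;
}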
|
Jurij MIHELIČ - Uroš ČIBEJ EXPERIMENTAL COMPARISON OF MATRIX ALGORITHMS FOR DATAFLOW COMPUTER ARCHITECTURE [full paper]
In this paper we turn our attention to several algorithms for the dataflow computing paradigm, in which dataflow computation
is used to augment classical control-flow computation and thus to obtain an accelerated algorithm. Our main goal is
to experimentally explore the various dataflow techniques and features that enable such acceleration. Our focus is on one
of the most important challenges in designing a dataflow algorithm, namely determining the best possible data choreography in
the given context. To address this challenge, we systematically enumerate and present techniques for various data
choreographies. In particular, we focus on algorithms that use matrices and vectors as the underlying data structures.
We begin with simple algorithms such as matrix and vector multiplication and the evaluation of polynomials, and continue with
more advanced ones such as the simplex algorithm for solving linear programs. To evaluate the algorithms, we compare their running
times as well as their dataflow resource consumption.
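To make the notion of data choreography concrete, the sketch below contrasts two orderings of the same matrix-vector product in plain control-flow C++ (an illustration of the concept only, not dataflow kernel code from the paper): streaming the matrix row by row completes one output element at a time, whereas streaming it column by column updates all output elements while each column passes through.

// Two data choreographies for y = A*x, written as ordinary control-flow code.
#include <cstddef>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

// Row-wise choreography: each output element is a complete dot product.
Vec matvec_rows(const std::vector<Vec>& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// Column-wise choreography: each column of A is scaled by x[j] and accumulated,
// so all partial outputs are updated while one column streams through.
Vec matvec_cols(const std::vector<Vec>& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t j = 0; j < x.size(); ++j)
        for (std::size_t i = 0; i < A.size(); ++i)
            y[i] += A[i][j] * x[j];
    return y;
}

int main() {
    std::vector<Vec> A = {{1, 2}, {3, 4}};
    Vec x = {5, 6};
    Vec yr = matvec_rows(A, x), yc = matvec_cols(A, x);
    std::printf("rows: %g %g\ncols: %g %g\n", yr[0], yr[1], yc[0], yc[1]);
    return 0;
}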
|