Information Technology and Computer Engineering https://itce.vntu.edu.ua/index.php/itce <p>The international scientific and technical journal "Information Technology and Computer Engineering" is a Ukrainian scientific professional publication in which the results of dissertations for the scientific degrees of Doctor and Candidate of Technical Sciences may be published. The journal publishes new theoretical and practical results in the field of engineering. It also publishes surveys of important scientific problems and reviews of scientific conferences held at VNTU.</p> <p>The journal is published 3 times a year. Founded in July 2004. The journal has a subscription index of 37465 and is annually included in the Catalog of Publications of Ukraine. Certificate of registration of a periodical № 9007, ser. KV, dated July 27, 2004.</p> <p>Sections of the journal:</p> <ul> <li class="show">Biological and medical devices and systems</li> <li class="show">Information technology and coding theory</li> <li class="show">Information and measurement technologies and systems</li> <li class="show">Computer systems and components</li> <li class="show">Mathematical modeling and computational methods</li> <li class="show">Instruments and methods of control and determination of substance composition</li> <li class="show">Radiometers</li> </ul> <p>The journal "Information Technology and Computer Engineering" is a scientific professional publication of Ukraine (category B) (VAC of Ukraine No. 2-05 / 1 dated January 19, 2006) (re-registration, VAC of Ukraine No. 1-05 / 3 dated 08.07.2009) (re-registration, VAC of Ukraine No. 261 dated 06.03.2015) (order of the Ministry of Education and Science № 409 dated March 17, 2020). 
Issued on the recommendation of the Academic Council of the Vinnytsya National Technical University.</p> VNTU uk-UA Information Technology and Computer Engineering 1999-9941 Construction Guidelines for Optical-Electronic Expert Systems in Blood Rheology https://itce.vntu.edu.ua/index.php/itce/article/view/1019 <p><strong>Abstract. </strong>Building specifically designed optical-electronic information processing expert systems for blood rheology bioimage analysis requires a painstaking, subtle approach. Such systems provide essential support for diagnostic operations and require an understanding of experimental properties such as the rheology of blood and bioimage analysis. To properly build these systems, guidelines are needed for improving imaging methods, image processing routines, and the application of expert knowledge so that the blood's rheological properties can be analyzed precisely. Information features (information parameters) for the analysis of biomedical images, in particular for the assessment of the rheological properties of blood, are formed. An algorithm and an optical-electronic expert system for the analysis of the rheological properties of blood are suggested; they are used to increase diagnostic validity, which is a determining factor in biomedical diagnostics. The main focus of modern clinical hemorheology is the search for diagnostic and prognostic criteria for various diseases and for methods of correcting rheological violations. Changes in the rheological parameters of blood are one of the significant mechanisms of the formation of insufficient blood supply in the early stages of the development of the disease. 
The main pathological effects of violations of the rheological properties of blood can lead to a failure of microcirculatory flow, the extreme manifestation of which may be a decrease in trophism and the development of ischemic syndrome; to a violation of micro-rheology and an increase in blood viscosity, which causes an increase in total peripheral resistance and the development of arterial hypertension syndrome; to atherosclerotic changes in blood vessels; and to a violation of hemorheology, which contributes to increased thrombosis.</p> Jinqiong Li Sergii Pavlov Oleksandr Poplavskyi Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 107 121 10.31649/1999-9941-2024-60-2-107-121 Information technology for image data processing based on hybrid neural networks using geometric features https://itce.vntu.edu.ua/index.php/itce/article/view/1011 <p><strong>Abstract.</strong> Progress in computing technology has led to a steady increase in computing power, resulting in an exponential growth in the amount of data that needs to be processed. In particular, the enhanced performance of automated systems enables the storage and analysis of large volumes of medical data with high speed and accuracy. Modern medicine is characterized by a significant increase in the information load, necessitating complex processing and in-depth analysis to support clinical decision-making. Information technology plays a pivotal role in ensuring efficient processing of these large datasets, contributing to the accuracy and speed of diagnosis, as well as the effectiveness of subsequent patient treatment. The purpose of this article is to develop and study information technology for processing graphic data based on hybrid neural networks using geometric features of image objects. The paper proposes advanced machine learning methods, deep neural network architectures, and specialized tools for processing graphic data, such as OpenCV, TensorFlow, and others. 
The data processing workflow during the validation of the proposed methods and architectures included several stages: data pre-processing, model training, and thorough testing of the results. The developed information technology demonstrates a significant improvement in the accuracy of graphic data classification. Experimental studies have shown that the proposed approach ensures efficient processing of large volumes of biomedical data, as evidenced by the high accuracy and speed of analysis. In particular, the accuracy of pathology classification using hybrid neural networks increased by more than 11% compared to the results obtained using classical methods. The practical value of the developed technology lies in its high potential for use in the field of machine vision, including enhancing the efficiency of diagnosis and treatment of patients in the medical field. It can be integrated into modern decision support systems, providing more accurate and faster processing of medical images.</p> Oleksandr Poplavskyi Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 4 16 10.31649/1999-9941-2024-60-2-4-16 Information technology for secure storing of academic performance results https://itce.vntu.edu.ua/index.php/itce/article/view/1012 <p><strong>Abstract.</strong> The relevance of research on the protection of academic performance results in educational institutions is defined in the article. The legal framework regulating the relevant information protection requirements was analyzed. An analysis of the mechanisms and tools used by known solutions to protect academic performance results was presented. On the basis of this analysis, approaches for improving the known solutions were defined, which became the basis for proposing a solution for such protection. The results of data model design are presented. 
On the basis of this model, the requirements for the security attributes of the entities related to students’ academic performance were analyzed. To achieve the goal, the method of secure data storage for academic performance results was adapted in order to improve the scalability of information protection in the academic field. A solution is proposed that involves the simultaneous use of centralized and decentralized data repositories, which makes it possible to improve the level of protection of data integrity and availability in comparison to centralized repositories, and to increase the level of privacy protection and reduce data redundancy in comparison to decentralized repositories. As a proof of concept, one of the possible architectures of a software application that implements the proposed information technology is presented. This architecture is implemented as a client-server web application that provides a user interface for secure data storage utilizing a relational database, the distributed storage IPFS, and a blockchain that supports smart contracts. The testing results of the developed software application for secure storage of academic performance information were presented. This made it possible to prove the security of the developed smart contracts, as well as the possibility of using the proposed technology in practical situations within the business processes of educational institutions. 
Prospects for further research were defined.</p> Yurii Baryshev Vladyslava Lanova Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 17 30 10.31649/1999-9941-2024-60-2-17-30 Problems of modern methods of three-dimensional photogrammetry https://itce.vntu.edu.ua/index.php/itce/article/view/1013 <p><strong>Abstract.</strong> Technologies of three-dimensional photogrammetry, which is one of the methods for creating computer-generated 3D models of objects, have a wide range of scientific and practical applications in fields such as manufacturing, construction, architecture, geodesy, and medicine. However, the primary challenges of photogrammetric methods are related to their high labor intensity. This work explores the fundamentals of the photogrammetric method for obtaining three-dimensional models of objects, analyzing its key drawbacks and limitations associated with the need to identify key elements across numerous images of an object taken from different angles and then align them accordingly. One of the most effective image comparison methods that can be used in photogrammetric processing to identify key elements in object images is the scale-invariant feature transform (SIFT) algorithm. This paper analyzes the main stages of implementing this algorithm and provides an overview of several modifications that enhance its performance by eliminating redundant key points and reducing the dimensionality of the descriptors used to distinguish each key point from the others. Further improvements in performance and a reduction of errors in 3D model creation can be achieved by removing, at a preliminary stage, frames or images that do not share common features with their neighbors due to sharp changes in shooting angle or specific object characteristics. To accomplish this, the use of a neural network is proposed to analyze the similarity between each pair of sequentially taken images, which are preprocessed into binary form. 
Removing such images not only saves time by avoiding unnecessary searches for key points on an object’s image but also reduces the likelihood of obtaining erroneous matches between key points on different images of the object.</p> Artem Tarnovskyi Serhiy Zakharchenko Mykola Tarnovskyi Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 31 41 10.31649/1999-9941-2024-60-2-31-41 Using neural network tools to accelerate the development of Web interfaces https://itce.vntu.edu.ua/index.php/itce/article/view/1014 <p><strong>Abstract</strong><strong>.</strong> The article is devoted to modern neural network tools that speed up the development of web interfaces and simplify the work of UI/UX designers. One of the main problems of modern design is quick access to general information, the structuring of a site with specialized content, and the production of its visual content. Currently, neural networks cannot replace designers, but they help them solve tasks to a large extent. All neural networks that can be used in the design of web interfaces can be divided into four main types: convolutional, recurrent, feed-forward, and generative adversarial networks. In their work, designers mainly use generative networks, which can be classified according to the principle of "information at the input - information at the output". When working on a project, a designer can create a request to the neural network and get several options, generate different ideas, and create mood boards based on them, selecting colors, gradients, texture, typography, etc. The neural network can create various graphic elements: icons, buttons, illustrations, and photos with the right perspective, style, and colors. Using neural networks to improve images and to refine or remove individual elements is also promising. 
The process of speeding up the creation of a landing page interface using the Midjourney application is considered. Examples of writing prompts that affect the final quality of the generated image are given. The result is high-quality visual content that can either be placed in a project or used as a source of ideas for element placement, composition, color scheme, photos, icons, etc. After creating the graphic design elements, the landing page's content was created using Chat GPT 3.5. The FIG GPT plugin can be used directly in the Figma environment to quickly generate the required content. Existing shortcomings and generation inaccuracies that arise in the work can be corrected thanks to the rapid updating and release of new versions of neural networks.</p> Dmytro Petryna Volodymyr Kornuta Olena Kornuta Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 42 50 10.31649/1999-9941-2024-60-2-42-50 Assembly and reassembly processes of documents in the document system https://itce.vntu.edu.ua/index.php/itce/article/view/1015 <p><strong>Abstract</strong>. This work is dedicated to analyzing and improving document processing methods in the electronic document management environment, particularly methods of preserving the integrity and authenticity of documents and of their automated generation. The focus is on document assembly and reassembly processes. The research is based on extensive experience with electronic document management systems and utilizes publicly available information on the latest methods and practices of processing, protecting, and generating documents for general use. 
During the literature review, an analysis of modern document management systems was conducted, along with a consideration of the manual document processing approach.</p> <p>The review part of the work aimed to survey existing implementations of electronic document management systems and to develop their comparative characteristics, highlighting their advantages and disadvantages. Specifically, such electronic document management systems as "DIA", "PandaDoc", and "GoogleDocs" were examined by the author. As a result of analyzing the current state of the issue in the field of automated and manual document processing, a technological chain of a specialized automated document management system was developed. Document assembly and reassembly mechanisms were designed and described, along with the other processes accompanying this technological chain. The purpose of the technical part of this work is a detailed examination of the critical mechanisms of a specialized automated document management system and their overall interaction at the client-server level. In conclusion, the scientific novelty lies in improving the technological chain of a specialized automated document management system through software tools for document assembly and reassembly. During the research, an analytical description of software tools for document assembly and reassembly was proposed, considering the possibility of automated document generation.</p> Tetiana Korobeinikova Liudmyla Savytska Leonid Krupelnitskyi Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 51 65 10.31649/1999-9941-2024-60-2-51-65 Statistical analysis methods application for a task distributor selection in a distributed computing system https://itce.vntu.edu.ua/index.php/itce/article/view/1020 <p><strong>Abstract.</strong> This paper focuses on optimizing the task distribution process in distributed computing systems. 
By applying statistical analysis methods, a strategy has been developed to automate the selection of task performers, improving the efficiency of task distribution, daily productivity, and employee satisfaction. The research shows that the optimized approach reduced the average processing time for specific user requests from 34 to 31 minutes, which is 7% more effective compared to random task allocation, thereby enhancing service quality and overall productivity.<br>The proposed unified model for optimized task distribution considers key factors such as internal user profiles, their workload levels, task priority, interaction among performers, and other available system resources. This model balances employee competencies with the speed of task processing, significantly improving the system's overall performance.<br>Particular attention is given to the methodology based on Salesforce CRM tools, which allows for the effective use of historical data on employee performance to identify the most suitable task performers. Combined with statistical data analysis methods, this approach not only optimizes task distribution but also enables accurate prediction of task completion times, identification of process anomalies, and the development of flexible distribution strategies. Considering both competencies and productivity ensures high-quality task execution, reduces processing time, and minimizes workload, which is critical for the efficient operation of distributed systems.<br>Overall, the proposed study confirms that the use of statistical analysis and CRM tools enhances the efficiency of distributed computing systems. 
This opens opportunities for the implementation of optimized task distribution strategies across various sectors, especially in the context of the growing volume of data and the complexity of business processes.</p> Roman Slobodian Ilona Bogach Maria Baraban Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 122 133 10.31649/1999-9941-2024-60-2-122-133 Model-based learning of coordinators of the decentralized multi-zone objects control systems https://itce.vntu.edu.ua/index.php/itce/article/view/1016 <p><strong>Abstract.</strong> Decentralized control systems are becoming more and more widespread, which is due to the increasing availability and power of microcontrollers. Decentralized control of multi-zone objects is associated with the need to coordinate the local control systems of the zones. Learning systems are preferred for the implementation of coordination methods, as they are able to adjust flexibly to the specifics of controlling each zone. However, the training of coordinators is complicated by the absence, at the stage of system creation, of labeled datasets for the controlled multi-zone objects. This article considers the creation of a dataset based on a simulation of a decentralized system and four scenarios for training neural coordinators. A model for simulating a decentralized system was created on the Scilab/Xcos platform using a pre-built library of blocks for simulating decentralized systems. The scenarios differ in the structure of the neural coordinators (a network segmented according to the structure of the coordinator simulation model, or an integrated one) and in the training strategy: training all the coordinators of the decentralized system in parallel, or training only one coordinator and then cloning the results. Experimental studies of the proposed method of training neural network coordinators, implemented in Python with TensorFlow, were conducted. 
The study showed the greater effectiveness of parallel training of segmented coordinators. However, in the course of the study, the last step of the scenarios, fine-tuning on a real physical object, was not performed. A preliminary evaluation suggests that after such additional training the advantages of mono-neural coordinators will become more visible, since the additional training will correct the shortcomings of the simulation.</p> Volodymyr Dubovoi Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 66 76 10.31649/1999-9941-2024-60-2-66-76 Evaluating Fast Charging of Electric Vehicles Along Motorways Using Finite Multi-Server Queueing System Simulation https://itce.vntu.edu.ua/index.php/itce/article/view/1017 <p><strong>Abstract.</strong> Fast DC charging sites are required along motorways to alleviate drivers' anxiety about long-distance travel when driving electric vehicles (EVs) with batteries optimised for efficient average range. This is important to facilitate the mobility transition to EVs. In this study, a queueing model-based approach to simulate and evaluate fast charging sites equipped with many DC charging points is presented. Charging sites are modelled as multi-server queueing systems with finite waiting space, where the servers represent the charging points and the waiting space the parking area available for EVs waiting for service. To also evaluate arrival and service time distributions that are non-Markovian, the queueing system is evaluated using event-based simulation. Exemplary results and a comparison with analogous simulation tools complete the presentation of the simulation approach.</p> <p>On the one hand, the simulation reveals the mean potential waiting time per EV before charging can start due to the temporary occupation of all charging points. On the other hand, the tool analyses the aggregated power demand of all charging points. 
Based on the latter, a smart charging mechanism dynamically reduces the individually available charging power when needed to stay below the power grid access limit. This smart charging mechanism causes a small decline in charging performance at high EV traffic loads, when all charging points are maximally occupied. In combination with the state-of-charge-dependent power demand, the tool provides the user with critical insights into realistically expected waiting times and decreased charging volumes when many EVs charge in parallel. Experimenting with different numbers of charging points and grid power limitations helps the tool user, the system designer, to dimension charging sites along motorways that can efficiently handle future traffic loads.</p> Maria Forkaliuk Gerald Franzl Oleg Bisikalo Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 77 90 10.31649/1999-9941-2024-60-2-77-90 Multidimensional classification matrix for information security risk assessment https://itce.vntu.edu.ua/index.php/itce/article/view/1018 <p><strong>Abstract</strong>. In this study, we address one of the key challenges related to a comprehensive risk assessment system for information security concerning personnel during the delineation of access to company information resources. The relevance of this research is confirmed by numerous instances of information leaks, which highlight the insufficient effectiveness of traditional classification and access control methods. The research aims to analyze existing classification strategies for company information resources and to develop an additional method based on continuous access analysis and dynamic adjustment of resource classification. 
To achieve this goal, we employed methods such as analyzing current information classification strategies, integrating various classification techniques, and implementing a graphical method that combines traditional resource classification with a dynamic component using a multidimensional matrix. The main results of the study involve the development of an enhanced method that allows continuous analysis of personnel access to company information resources and dynamic adjustments to resource classification based on access delineation rules. The proposed approach allows for the inclusion of any number of indicators in a graph as a set of vectors, subsequently calculating overall risk assessments based on the sum or difference of these vectors. The practical value of this work lies in its ability to fully utilize modern access control technologies and serve as a foundation for further research, such as automated information classification using neural network training. Additionally, within this study, we conducted a detailed review of existing risk assessment methods for company information resources, identifying key limitations inherent in traditional approaches. Specifically, we analyzed methods based on fixed access levels and the use of static rules for access control. It became evident that such methods are inadequate in responding to dynamic changes in user behavior and the evolving importance of information resources. Thus, the proposed approach allows for more flexible and adaptive access control to information resources, achieved through continuous access monitoring and automatic adjustments based on user behavioral data and contextual changes in resource utilization.</p> Tetiana Korobeinikova Andrii Yamnych Copyright (c) 2025 Information Technology and Computer Engineering 2024-10-10 2024-10-10 60 2 91 106 10.31649/1999-9941-2024-60-2-91-106