Archive: National Research Programme "Covid-19 seku mazināšanai" project VPP-COVID-2020/1-0004 "Integration of reliable technologies for protection against Covid-19 in healthcare and high-risk areas"
Project duration: 01.07.2020 – 31.12.2020
National Research Programme project: Valsts pētījumu programmas “Covid-19 seku mazināšanai” projekts VPP-COVID-2020/1-0004 “Drošu tehnoloģiju integrācija aizsardzībai pret Covid-19 veselības aprūpes un augsta riska zonās” (Integration of reliable technologies for protection against Covid-19 in healthcare and high-risk areas)
The project is funded by: Latvian Council of Science / Latvijas Zinātnes padome
Financing agreement: No. 6-1/5 of 10.07.2020
Project contract amount: 497 581 EUR
RTA contract amount: 33 750 EUR
Implementation period: 01.07.2020 – 31.12.2020
Lead Partner: Riga Technical University/ Rīgas Tehniskā universitāte (RTU)
Partners:
Rezekne Academy of Technologies, Institute of Engineering / Rēzeknes Tehnoloģiju akadēmijas Inženierzinātņu institūts (RTA)
Institute of Atomic Physics and Spectroscopy, University of Latvia/ Latvijas Universitātes Atomfizikas un spektroskopijas institūts (LU ASI)
Institute of Electronics and Computer Science/ Elektronikas un datorzinātņu institūts (EDI)
Latvian Institute of Organic Synthesis/ Latvijas Organiskās sintēzes institūts (OSI)
Institute of Solid State Physics, University of Latvia/ Latvijas Universitātes Cietvielu fizikas institūts (LU CFI)
Latvian Biomedical Research and Study Centre/ Latvijas Biomedicīnas pētījumu un studiju centrs (BMC)
Riga Stradins university /Rīgas Stradiņa universitāte (RSU)
Latvian State Institute of Wood Chemistry/ Latvijas Valsts Koksnes ķīmijas institūts (LVKĶI)
Project Manager: Tālis Juhna, Academician Dr. sc. ing., RTU Vice-Rector for Research, firstname.lastname@example.org
WP4 Automated and robotic equipment for air and surfaces disinfection / Automatizēts un robotizēts aprīkojums gaisa un virsmu dezinfekcijai (RTA, RTU Department of Artificial Intelligence and Systems Engineering, EDI, LU ASI); coordinator: Andris Martinovs, prof., Dr.sc.ing. email@example.com
o Andris Martinovs
o Ritvars Rēvalds
o Guntis Koļčs
o Igors Maslobojevs
o Edgars Zaicevs
o Viktorija Piziča
o Iļja Sučkovs
Task 4.1. Development of a liquid disinfectant sprayer for the mobile robot (RTA). Deliverables: a correctly working prototype of a liquid disinfectant sprayer will be made. Developed technical documentation: 3D model, specification, working drawings of parts, instructions for use. TRL 7.
Task 4.2. Development of a machine vision system for recognition of objects to be disinfected (EDI). Deliverables: training dataset, trained model for the objects most frequently touched by humans (e.g. door handles, light switches).
Dataset for training:
For the robot to be able to distinguish handles and light switches from other objects for disinfection, a neural network was trained. For this purpose, a database of 1513 pictures was collected, showing handles, doors, light switches, as well as various random objects. The dataset was assembled from pre-existing databases, pictures collected from the internet, and pictures taken manually at EDI premises.
Doors, handles and switches must not be obscured by other objects, such as any part of a person's body, as this interfered with the training of the neural network. Pictures with watermarks were excluded, because the neural network would learn to treat the watermark as a feature of the objects to be marked and use it to recognize them.
Yolo_mark was chosen for data labeling. In the pictures, the objects were marked in 3 classes:
0 – handle, 1 – door, 2 – switch
Before marking each object, the corresponding object class was selected: the 'handle' class for handles, 'door' for doors, 'switch' for light switches. Marking is done by left-clicking on one corner of the object and then clicking again at the diagonally opposite corner to create a rectangular outline. The picture shows what 2 marked doors and 2 marked handles look like.
After each image was annotated, a '.txt' file with the same name as the image was created automatically.
Each line of the '.txt' file describes one marked object and has five columns:
Column (i) – class index: 0, 1 or 2, meaning handle, door or light switch respectively.
Column (mw) – the midpoint of the marked rectangle along the x axis, divided by the width of the image.
Column (mh) – the midpoint of the marked rectangle along the y axis, divided by the height of the image.
Column (w) – the width of the marked rectangle divided by the width of the image.
Column (h) – the height of the marked rectangle divided by the height of the image.
In short: class index; rectangle centre x; rectangle centre y; width; height. Each marked object occupies its own line.
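As a rough illustration of this annotation format, the following Python sketch parses one such label line and converts the normalised box back to pixel coordinates. The example line and image size are made up; only the five-column format itself comes from the description above.

```python
# Minimal sketch: parse one Yolo_mark annotation line
# '<class> <mw> <mh> <w> <h>' and recover pixel coordinates.
CLASS_NAMES = {0: "handle", 1: "door", 2: "switch"}

def parse_label_line(line, img_w, img_h):
    """Return (class name, (left, top, right, bottom)) in pixels."""
    idx, mw, mh, w, h = line.split()
    idx = int(idx)
    # Normalised centre/size -> pixel centre/size
    cx, cy = float(mw) * img_w, float(mh) * img_h
    bw, bh = float(w) * img_w, float(h) * img_h
    box = (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)
    return CLASS_NAMES[idx], box

name, box = parse_label_line("0 0.5 0.5 0.25 0.5", img_w=640, img_h=480)
print(name, box)  # handle (240.0, 120.0, 400.0, 360.0)
```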
The figure visually shows the meaning of columns 2–5 of the '.txt' file, using the values from its second row.
For the DNN (Deep Neural Network) model, the YOLOv4 architecture was used, since it fits the project requirements well and has a good ecosystem of tools available, such as the annotation tool described in the previous section.
To apply YOLOv4 to the task, a custom configuration file was created (yolov4-custom.cfg). This file defines the classes used by the network and allows a pretrained weights file (yolov4.conv.137) to be specified, which makes it possible to experiment with different pretrained models. The number of classes to detect was changed in each of the three "yolo" layers, and the number of filters was adjusted in the three convolutional layers preceding each yolo layer; other parameters such as batch, subdivisions, max_batches and steps were also modified.
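The changes described above can be sketched as a configuration fragment. The exact values used in the project are not given in the report, so the batch, subdivisions, max_batches and steps numbers below are assumptions following common darknet guidance for a 3-class detector; only the classes/filters relationship (filters = (classes + 5) * 3) is standard for YOLOv4.

```ini
[net]
batch=64
subdivisions=16
max_batches=6000      # commonly classes * 2000
steps=4800,5400       # commonly 80% and 90% of max_batches

# ...repeated for each of the three detection heads:
[convolutional]
filters=24            # (classes + 5) * 3 in the conv layer before each [yolo]

[yolo]
classes=3             # handle, door, switch
```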
The HPC cluster located at EDI premises was used to iteratively train the model. The model was trained on 1380 images, with 130 images used for validation, which is approximately a 90/10 ratio. This ratio is commonly used for training, along with 80/20, which was used in the first attempts.
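A reproducible split of this kind can be sketched as follows; the file names are placeholders and the seeded shuffle is an assumption about methodology, not taken from the report.

```python
# Sketch of a deterministic 90/10 train/validation split.
import random

def split_dataset(image_paths, val_fraction=0.10, seed=42):
    """Shuffle deterministically and split into (train, validation) lists."""
    paths = sorted(image_paths)          # stable starting order
    random.Random(seed).shuffle(paths)   # seeded shuffle for reproducibility
    n_val = int(len(paths) * val_fraction)
    return paths[n_val:], paths[:n_val]

images = [f"img_{i:04d}.jpg" for i in range(1510)]  # placeholder file names
train, val = split_dataset(images)
print(len(train), len(val))  # 1359 151
```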
The graph above shows one of the training runs and its performance, giving an estimate of network quality such as the classification success rate and the loss function output. Across the various runs, the classification success rate reached up to 85%.
The following statistics give an overview of the trained model's characteristics:
Task 4.3. Development of the control system of the mobile disinfection robot (RTU). Deliverables: disinfection actuator control system code, prototype of the control system.
Robot navigation accuracy tests have been performed in corridors and other rooms.
Software interfaces have been developed to integrate nozzle control and surface identification mechanisms.
Task 4.4. Testing of the mobile robot under real conditions (RTU, EDI, RTA). Deliverables: Test protocols and related documentation along with demonstration video; TRL 6
This task mainly involves partner contributions towards integrating the developed components on the mobile robot platform. To run inference with the model trained in Task 4.2 on real hardware, an Nvidia Jetson AGX was used, since it offers a wide range of interfaces and good computational performance, making it a state-of-the-art embedded platform for this kind of workload. Since the model needed to be integrated into the robot system, constraints on its deployment had to be defined, in particular the camera position. The best option was to point the camera backwards (relative to the robot's direction of travel) at approximately 45 degrees and perform detection in three FOV sections, where the centre section corresponds to the nozzle's disinfectant spray area. The following picture illustrates the component layout:
As can be seen, when a handle or switch appears in the middle section, it is aligned with the nozzle. Only one requirement must be met: the distance from the wall to the nozzle should be around 40-50 cm, which is ensured by the robot's path-planning system. Since the machine vision system uses a depth camera (Intel RealSense), detections further than 50 cm can be filtered out, which eliminates false nozzle engagement.
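The depth-filtering step can be sketched as below. The detection record layout (a dict with a depth field) is illustrative, not the project's actual data structure; only the ~50 cm cut-off comes from the text.

```python
# Sketch: drop detections beyond the nozzle working range (~50 cm).
MAX_RANGE_M = 0.50  # nozzle works at roughly 40-50 cm from the wall

def filter_by_depth(detections, max_range=MAX_RANGE_M):
    """Keep only detections within the nozzle's working distance."""
    return [d for d in detections if d["depth_m"] <= max_range]

detections = [
    {"label": "handle", "depth_m": 0.45},  # reachable -> keep
    {"label": "switch", "depth_m": 0.90},  # too far   -> drop
]
print(filter_by_depth(detections))  # [{'label': 'handle', 'depth_m': 0.45}]
```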
The integration between the EDI machine vision system and the RTU robot platform is done by broadcasting a UDP packet with a JSON payload of the following format:
By parsing and analysing this packet, the RTU platform is able to slow down and then stop at the desired position to engage the RTA nozzle system.
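The broadcast mechanism can be sketched as follows. The report does not reproduce the actual payload, so the JSON field names and the port number are assumptions chosen to match the information the text says is exchanged (detected object, distance, FOV section).

```python
# Sketch: build a JSON detection payload and broadcast it over UDP.
import json
import socket

def make_payload(label, depth_m, section):
    """Build the JSON payload (field names are illustrative)."""
    return json.dumps({"label": label, "depth_m": depth_m, "section": section})

def broadcast(payload, port=5005):  # hypothetical port
    """Send the payload as a single UDP broadcast datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload.encode("utf-8"), ("255.255.255.255", port))
    sock.close()

msg = make_payload("handle", 0.45, "centre")
print(msg)  # {"label": "handle", "depth_m": 0.45, "section": "centre"}
```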
The machine vision system developed by EDI was deployed and integrated on the RTU robotic platform, along with the nozzle system developed by RTA. Initial tests were carried out, and the LTE remote connection to the machine vision module was successfully tested.
At the time of writing this report, the demonstrator video is not yet ready. Since the demonstration is scheduled for December 9th, 2020, the video will be available after that date at the following placeholder link: https://rebrand.ly/cov-clean-wp4-video
The nozzle control system has been tested using robot control software.
Robot motion tests and motion accuracy tests were performed under laboratory conditions.
Task 4.5. Development of disinfection gate and evaluation under real conditions (RTA). Deliverables: A prototype of disinfection gate will be made. Technical documentation: 3D model, specification, working drawings of parts, instructions for use. TRL 6.
Task 4.6. Development of innovative high-frequency electrodeless UV radiation lamps (LU ASI). Deliverables: 15 electrodeless lamp samples (5 with As, 5 with Se, 5 with Tl); developed technical documentation with spectral measurements in the UV region, TRL 6; publication in SPIE Proceedings (WoS, Scopus): Zorina N., Skudra A., Revalde G., Abola A. “Study of As and Tl high-frequency electrodeless lamps for Zeeman absorption spectroscopy”.
Task 4.7. Development of equipment for surfaces and air disinfection with ozone and UV radiation and testing of efficiency under laboratory conditions (RTA, RTU, LU ASI). Deliverables: Experimental stand for surfaces and air disinfection with ozone and UV radiation will be made. Developed technical documentation: 3D model, specification, working drawings of parts, instructions for use. TRL 7.