Please use this identifier to cite or link to this item:
http://lib.kart.edu.ua/handle/123456789/26035
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sadovnykov, B.I. | - |
dc.contributor.author | Zhuchenko, O.S. | - |
dc.date.accessioned | 2024-11-24T14:46:04Z | - |
dc.date.available | 2024-11-24T14:46:04Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Sadovnykov, B.I. Development of an algorithm for detecting moving objects in images from a real-time video stream / B.I. Sadovnykov, O.S. Zhuchenko // Інформаційно-керуючі системи на залізничному транспорті [Information and Control Systems on Railway Transport] : abstracts of poster presentations and talks of the 37th International Scientific and Practical Conference "Інформаційно-керуючі системи на залізничному транспорті" (Kharkiv, 10-11 October 2024). – 2024. – No. 3 (supplement). – P. 80-81. | uk_UA |
dc.identifier.issn | 1681-4886 (print); 2413-3833 (online) | - |
dc.identifier.uri | http://lib.kart.edu.ua/handle/123456789/26035 | - |
dc.description.abstract | One of the tasks in video object recognition is locating each object in the current frame. In neural-network detectors such as SSD [1] and YOLO [2], localization and classification are performed by a single model and are inseparable steps. However, when the system includes a computing node that is powerful enough for complex image processing and sits upstream of the server that recognizes objects, separating object search from object recognition can reduce latency, network traffic, and the load on the recognition node. | uk_UA |
dc.language.iso | en | uk_UA |
dc.publisher | Український державний університет залізничного транспорту (Ukrainian State University of Railway Transport) | uk_UA |
dc.title | Development of an algorithm for detecting moving objects in images from a real-time video stream | uk_UA |
dc.type | Thesis | uk_UA |
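The separation described in the abstract — an upstream node that only locates moving regions, so the recognition server classifies small crops instead of full frames — can be sketched with simple frame differencing. This is an illustrative assumption, not the authors' algorithm (the abstract does not specify the detection method); the function name and threshold below are hypothetical:

```python
# Hedged sketch: frame differencing to find the bounding box of pixels
# that changed between two consecutive grayscale frames. Frames are
# plain 2D lists of intensities (0-255); THRESHOLD is an assumed value.

THRESHOLD = 25  # minimum intensity change to count as motion

def moving_bbox(prev_frame, curr_frame, threshold=THRESHOLD):
    """Return (x_min, y_min, x_max, y_max) of changed pixels, or None."""
    xs, ys = [], []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(p - c) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no motion detected; nothing to send to the server
    return (min(xs), min(ys), max(xs), max(ys))

# Example: a 5x5 static background with one bright "object" appearing
# in the region x=2..3, y=1..2 of the current frame.
prev = [[0] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
for y in (1, 2):
    for x in (2, 3):
        curr[y][x] = 200

print(moving_bbox(prev, curr))  # → (2, 1, 3, 2)
```

In such a pipeline, only the cropped region defined by the returned box would be transmitted to the recognition node, which is what reduces network traffic and per-frame load on the classifier.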
Appears in Collections: | Том 29 № 3 (додаток) [Vol. 29 No. 3 (supplement)] |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Sadovnykov.pdf | | 1.73 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.