Comparative efficiency in the execution of artificial vision algorithms
Abstract
Introduction: A neural network is a programmed algorithm based on an inference model. Executing a vision process requires advanced inference, and low energy consumption is also desirable. Objectives: To analyze the comparative efficiency of executing artificial vision algorithms on embedded systems. Methodology: The inference times involved in executing a deep neural network are compared on an embedded system, a Raspberry Pi 4, and on an inference accelerator designed by Intel. Results: The installation and configuration process required for compatibility between the two devices is detailed; pre-trained neural network models focused on image processing are used, and the inference time involved in executing them is compared. Conclusions: Under these experimental conditions, the inference time improved by 25%.
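The abstract does not name the specific toolkit or accelerator model, so the following is only a minimal sketch of how such a latency comparison might be set up, assuming Intel's OpenVINO runtime, a pre-trained IR model ("model.xml"), and the device names "CPU" (Raspberry Pi 4) and "MYRIAD" (an Intel Neural Compute Stick); all of these names and the input shape are assumptions, not details taken from the paper.

```python
# Hedged sketch: mean per-inference latency on two devices via OpenVINO.
# Assumptions (not from the abstract): model path "model.xml", device names
# "CPU"/"MYRIAD", and a 1x3x224x224 float32 input.
import time
import numpy as np
from openvino.runtime import Core

def mean_inference_time(device: str, runs: int = 100) -> float:
    """Compile the model for `device` and return mean latency in milliseconds."""
    core = Core()
    model = core.read_model("model.xml")        # hypothetical pre-trained model
    compiled = core.compile_model(model, device)
    output = compiled.output(0)

    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
    compiled([dummy])                           # warm-up run, excluded from timing

    start = time.perf_counter()
    for _ in range(runs):
        compiled([dummy])[output]
    return (time.perf_counter() - start) / runs * 1000.0

cpu_ms = mean_inference_time("CPU")      # Raspberry Pi 4 CPU
acc_ms = mean_inference_time("MYRIAD")   # assumed Intel accelerator device name
print(f"CPU: {cpu_ms:.1f} ms, accelerator: {acc_ms:.1f} ms, "
      f"improvement: {(cpu_ms - acc_ms) / cpu_ms * 100:.0f}%")
```

A comparison along these lines, averaging many runs after a warm-up inference, is one plausible way to obtain the kind of relative improvement the abstract reports.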