Faculty of Engineering
Universidad Andrés Bello / Research Professor
Ph.D. in Electronic Engineering from the Universidad Técnica Federico Santa María, Chile. Electronics and Control Engineer from the Escuela Politécnica Nacional (EPN), Ecuador. He is currently a researcher at the Artificial Vision and Intelligence Research Laboratory of the Escuela Politécnica Nacional. His research covers robotics and artificial intelligence, Human-Robot Interaction (HRI), Human-Machine Interaction (HMI), assistive and collaborative systems, and algorithms that detect non-verbal communication cues such as gestures, actions, and human cognitive parameters. He has also worked in precision agriculture, using cooperative robots capable of interacting and sharing the workspace with humans. His areas of interest are robotics, machine learning, deep learning, reinforcement learning, and computer vision.
Ph.D. in Electronic Engineering - Universidad Técnica Federico Santa María - Chile
Electronics and Control Engineer - Escuela Politécnica Nacional - Ecuador
Human-robot interaction, artificial intelligence, robotics, electronics.
Scopus Publications
Pablo Ormeño-Arriagada, Eduardo Navarro, Carla Taramasco, Gustavo Gatica, and Juan Pablo Vásconez
Springer Nature Switzerland
Juan Diego Terneus, Viviana Moya, Faruk Abedrabbo, Juan Pablo Vásconez, and Marcelo Moya
Springer Nature Switzerland
Piero Vilcapoma, Diana Parra Meléndez, Ingrid Nicole Vásconez, Gustavo Gatica, and Juan Pablo Vásconez
Springer Nature Switzerland
Ivan García, Viviana Moya, Andrea Pilco, Piero Vilcapoma, Leonardo Guevara, Robert Guamán-Rivera, Oswaldo Menéndez, and Juan Pablo Vásconez
Springer Nature Switzerland
Piero Vilcapoma, Ivan García, and Juan Pablo Vásconez
Springer Nature Switzerland
Juan Pablo Vásconez, Julio del Río, Viviana Moya, Andrea Pilco, Inesmar Briseño, Jenny Pantoja, and Oswaldo Menéndez
Springer Nature Switzerland
Michael Guerra, Faruk Abedrabbo, Viviana Moya, Angélica Quito, Guillermo Mosquera, and Juan Pablo Vásconez
Springer Nature Switzerland
J.P. Vásconez, I.N. Vásconez, V. Moya, M.J. Calderón-Díaz, M. Valenzuela, X. Besoain, M. Seeger, and F. Auat Cheein
Elsevier BV
Piero Vilcapoma, Diana Parra Meléndez, Alejandra Fernández, Ingrid Nicole Vásconez, Nicolás Corona Hillmann, Gustavo Gatica, and Juan Pablo Vásconez
MDPI AG
The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing AI information from sensor data such as images or panoramic radiographs (panoramic X-rays) can help improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, deep learning (DL) techniques based on convolutional neural networks (CNNs) have obtained promising results in image-based dental applications, where approaches based on classification, detection, and segmentation are being studied with growing interest. However, several challenges remain, such as data quality and quantity, variability among categories, and the analysis of the possible bias and variance associated with each dataset distribution. This study compares the performance of three deep learning object detection models—Faster R-CNN, YOLO V2, and SSD—using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors for detecting and classifying third molar angles in panoramic X-rays according to Winter’s classification criterion, which characterizes the third molar’s position relative to the second molar’s longitudinal axis. Each object detection architecture was trained, calibrated, validated, and tested with the three feature-extraction CNNs, which were the networks that best fit our dataset distribution. The detected categories for the third molars are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays. The results on the testing dataset reached up to 99% mean average accuracy, with YOLO V2 proving the most effective at solving the third molar angle detection problem. These results demonstrate that CNN-based object detection in panoramic radiographs represents a promising solution for dental applications.
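A minimal sketch (not the authors' code) of how such a detector could be configured in PyTorch: a torchvision Faster R-CNN with a ResNet-50 backbone whose classification head is replaced to predict the four Winter categories. The image size and pretrained weights are illustrative assumptions.

    # Sketch: Faster R-CNN with a ResNet-50 FPN backbone, head swapped
    # for the four Winter angle classes (plus background).
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_CLASSES = 1 + 4  # background + {distoangular, vertical, mesioangular, horizontal}

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head so it predicts the Winter categories.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    # Inference on one panoramic X-ray (grayscale replicated to 3 channels).
    model.eval()
    with torch.no_grad():
        image = torch.rand(3, 512, 1024)      # stand-in for a normalized X-ray
        predictions = model([image])[0]       # dict with boxes, labels, scores
    print(predictions["boxes"].shape, predictions["labels"], predictions["scores"])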
Ricardo Paul Urvina, César Leonardo Guevara, Juan Pablo Vásconez, and Alvaro Javier Prado
MDPI AG
This article presents a combined route and path planning strategy to guide Skid–Steer Mobile Robots (SSMRs) in scheduled harvest tasks within expansive crop rows with complex terrain conditions. The proposed strategy integrates: (i) a global planning algorithm based on the Traveling Salesman Problem under the Capacitated Vehicle Routing approach and Optimization Routing (OR-tools from Google) to prioritize harvesting positions by minimum path length, unexplored harvest points, and vehicle payload capacity; and (ii) a local planning strategy using Informed Rapidly-exploring Random Tree (IRRT*) to coordinate scheduled harvesting points while avoiding low-traction terrain obstacles. The global approach generates an ordered queue of harvesting locations, maximizing the crop yield in a workspace map. In the second stage, the IRRT* planner avoids potential obstacles, including farm layout and slippery terrain. The path planning scheme incorporates a traversability model and a motion model of SSMRs to meet kinematic constraints. Experimental results in a generic fruit orchard demonstrate the effectiveness of the proposed strategy. In particular, the IRRT* algorithm outperformed RRT and RRT* with 96.1% and 97.6% smoother paths, respectively. The IRRT* also showed improved navigation efficiency, avoiding obstacles and slippage zones, making it suitable for precision agriculture.
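The global stage can be illustrated with Google OR-Tools, which the paper names. The following is a minimal capacitated-routing sketch that orders harvest points for one robot by path length under a payload limit; the distance matrix, demands, and capacity are toy values, not the paper's data.

    # Sketch: capacitated routing with OR-Tools to queue harvest points.
    from ortools.constraint_solver import pywrapcp, routing_enums_pb2

    dist = [            # symmetric distances: depot + 3 harvest points (m)
        [0, 10, 15, 20],
        [10, 0, 12, 18],
        [15, 12, 0, 9],
        [20, 18, 9, 0],
    ]
    demand = [0, 4, 6, 5]   # fruit load collected at each point (kg)
    capacity = [15]         # robot payload capacity (kg), one vehicle

    manager = pywrapcp.RoutingIndexManager(len(dist), 1, 0)  # 1 vehicle, depot 0
    routing = pywrapcp.RoutingModel(manager)

    def distance_cb(i, j):
        return dist[manager.IndexToNode(i)][manager.IndexToNode(j)]

    def demand_cb(i):
        return demand[manager.IndexToNode(i)]

    transit = routing.RegisterTransitCallback(distance_cb)
    routing.SetArcCostEvaluatorOfAllVehicles(transit)
    routing.AddDimensionWithVehicleCapacity(
        routing.RegisterUnaryTransitCallback(demand_cb),
        0, capacity, True, "Load")

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    solution = routing.SolveWithParameters(params)

    index = routing.Start(0)
    while not routing.IsEnd(index):       # print the ordered visit queue
        print(manager.IndexToNode(index), end=" -> ")
        index = solution.Value(routing.NextVar(index))
    print(manager.IndexToNode(index))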
Juan Pablo Vásconez, Elias Schotborgh, Ingrid Nicole Vásconez, Viviana Moya, Andrea Pilco, Oswaldo Menéndez, Robert Guamán-Rivera, and Leonardo Guevara
MDPI AG
Intelligent transportation and advanced mobility techniques focus on helping operators efficiently manage navigation tasks in smart cities, increasing security and reducing costs. Although this field has seen significant advances in large-scale monitoring of smart cities, several challenges persist concerning the practical assignment of delivery personnel to customer orders. To address this issue, we propose an architecture to optimize the task assignment problem for delivery personnel, using different cost functions obtained with deterministic and machine learning techniques. In particular, we compared the performance of linear and polynomial regression methods to construct cost functions represented by matrices built from order and delivery-person information. We then applied the Hungarian optimization algorithm to solve the assignment problem, which optimally pairs delivery personnel and orders. The results demonstrate that, when used to estimate distance information, linear regression can reduce estimation errors by up to 568.52 km (1.51%) for our dataset compared to other methods. In contrast, polynomial regression proves effective in constructing a superior cost function based on time information, reducing estimation errors by up to 17,143.41 min (11.59%) compared to alternative methods. The proposed approach aims to enhance delivery personnel allocation within the delivery sector, thereby optimizing the efficiency of this process.
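The assignment step itself is standard: given a cost matrix built from the regression estimates, the Hungarian algorithm returns the optimal courier-to-order pairing. A minimal SciPy sketch, with illustrative costs rather than the paper's data:

    # Sketch: Hungarian assignment of couriers to orders.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i][j]: predicted delivery time (min) if courier i takes order j
    cost = np.array([
        [12.0, 25.5, 19.0],
        [17.5, 11.0, 23.0],
        [21.0, 18.5,  9.5],
    ])

    couriers, orders = linear_sum_assignment(cost)   # minimizes total cost
    for c, o in zip(couriers, orders):
        print(f"courier {c} -> order {o} ({cost[c, o]:.1f} min)")
    print("total cost:", cost[couriers, orders].sum())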
Viviana Moya, Angélica Quito, Andrea Pilco, Juan P. Vásconez, and Christian Vargas
Ital Publication
In recent years, the accurate identification of chili maturity stages has become essential for optimizing cultivation processes. Conventional methodologies, primarily reliant on manual assessments or rudimentary detection systems, often fall short of reflecting the plant’s natural environment, leading to inefficiencies and prolonged harvest periods. Such methods may be imprecise and time-consuming. With the rise of computer vision and pattern recognition technologies, new opportunities in image recognition have emerged, offering solutions to these challenges. This research proposes an affordable solution for object detection and classification, specifically through version 5 of the You Only Look Once (YOLOv5) model, to determine the location and maturity state of rocoto chili peppers cultivated in Ecuador. To enhance the model’s efficacy, we introduce a novel dataset comprising images of chili peppers in their authentic states, spanning both immature and mature stages, all while preserving their natural settings and potential environmental impediments. This methodology ensures that the dataset closely replicates real-world conditions encountered by a detection system. Upon testing the model with this dataset, it achieved an accuracy of 99.99% for the classification task and an 84% accuracy rate for the detection of the crops. These promising outcomes highlight the model’s potential, indicating a game-changing technique for chili small-scale farmers, especially in Ecuador, with prospects for broader applications in agriculture. DOI: 10.28991/ESJ-2024-08-02-08
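As a rough illustration of the detection step (not the authors' training pipeline), YOLOv5 can be loaded through torch.hub and run on a field image; here the generic pretrained weights and the image path "chili.jpg" are placeholders for the paper's custom rocoto model and data.

    # Sketch: YOLOv5 inference via torch.hub on a single field image.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained weights
    model.conf = 0.4                   # confidence threshold for detections

    results = model("chili.jpg")       # path, URL, PIL image, or ndarray
    df = results.pandas().xyxy[0]      # boxes with class names and scores
    print(df[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])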
Oswaldo Menéndez, Juan Villacrés, Alvaro Prado, Juan P. Vásconez, and Fernando Auat-Cheein
MDPI AG
Electric-field energy harvesters (EFEHs) have emerged as a promising technology for harnessing the electric field surrounding energized environments. Current research indicates that EFEHs are closely associated with Tribo-Electric Nano-Generators (TENGs). However, the performance of TENGs in energized environments remains unclear. This work aims to evaluate the performance of TENGs in electric-field energy harvesting applications. For this purpose, TENGs of different sizes, operating in single-electrode mode were conceptualized, assembled, and experimentally tested. Each TENG was mounted on a 1.5 HP single-phase induction motor, operating at nominal parameters of 8 A, 230 V, and 50 Hz. In addition, the contact layer was mounted on a linear motor to control kinematic stimuli. The TENGs successfully induced electric fields and provided satisfactory performance to collect electrostatic charges in fairly variable electric fields. Experimental findings disclosed an approximate increase in energy collection ranging from 1.51% to 10.49% when utilizing TENGs compared to simple EFEHs. The observed correlation between power density and electric field highlights TENGs as a more efficient energy source in electrified environments compared to EFEHs, thereby contributing to the ongoing research objectives of the authors.
Juan Sebastian Estrada, Juan Pablo Vasconez, Longsheng Fu, and Fernando Auat Cheein
Elsevier BV
Oswaldo Menendez, Felipe Ruiz, Daniel Pesantez, Juan Vasconez, and Jose Rodriguez
IEEE
This work introduces a current control strategy for Voltage Source Inverters (VSIs) using data-driven control systems, particularly a framework based on Deep Reinforcement Learning agents. Unlike other techniques in the literature, we avoid using a modulator by including a Deep Q-Network (DQN) agent. In addition, an analysis of the impact of different Deep Neural Network (DNN) architectures on control system performance, specifically the number of layers and neurons, is presented. To this end, different DQN agents were designed, trained, and tested, and a two-level voltage source power inverter is simulated to validate the proposed data-driven control based on DQN agents. The performance of the control strategy is analyzed in terms of computational cost, Root Mean Square Error (RMSE), and Total Harmonic Distortion (THD). Simulation results reveal that the proposed strategy performs strongly in current control, with a maximum RMSE of 0.83 A and a THD of 5.29% at a 10 kHz sampling frequency when a DNN with one layer and five neurons is used.
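The best-performing configuration reported above (one hidden layer, five neurons) is small enough to sketch directly. Below is an assumed PyTorch Q-network that maps a measured current state to a discrete VSI switching state with no modulator in between; the state and action dimensions are illustrative assumptions, not the paper's exact formulation.

    # Sketch: one-hidden-layer, five-neuron Q-network choosing a VSI
    # switching state directly (no modulator).
    import torch
    import torch.nn as nn

    N_STATE = 4    # e.g., [i_alpha, i_beta, i_alpha_ref, i_beta_ref] (assumed)
    N_ACTIONS = 8  # switching states of a two-level three-phase VSI (assumed)

    q_net = nn.Sequential(
        nn.Linear(N_STATE, 5),    # single hidden layer, five neurons
        nn.ReLU(),
        nn.Linear(5, N_ACTIONS),  # one Q-value per switching state
    )

    state = torch.randn(1, N_STATE)       # stand-in measurement vector
    action = q_net(state).argmax(dim=1)   # greedy switching-state choice
    print("apply switching state:", action.item())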
Samuel Peña, Andrea Pilco, Viviana Moya, William Chamorro, Juan Pablo Vásconez, and José Andres Zuniga
IEEE
The integration of mobile robots and computer vision has revolutionized industrial tasks by enabling precise and efficient automation processes. This work proposes a mobile platform for sorting Petri dishes using advanced deep-learning techniques. We utilize the YOLOv5 framework for real-time color detection and a 6-bar mechanism with a gripper for dynamic sample sorting. The implementation enhances logistics and reduces operational errors through accurate color classification. Our methodology includes creating a training dataset of over one thousand labeled RGB images and validating the system's performance. The trained network achieved over 90% accuracy during validation and testing, demonstrating precise robot positioning and effective Petri dish manipulation. This research successfully addresses automation challenges in industrial settings, offering improved efficiency and accuracy.
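A hypothetical post-processing step, to illustrate how detections could drive the sorter: pick the most confident Petri dish above a confidence threshold and map its color class to a bin index served by the 6-bar gripper mechanism. The class names, threshold, and bin layout are assumptions, not the paper's implementation.

    # Sketch: map the detected dish color class to a sorting bin.
    CLASS_TO_BIN = {"red": 0, "yellow": 1, "blue": 2}   # assumed layout

    def pick_bin(detections, min_conf=0.9):
        """Return (bin, box) for the most confident dish above threshold."""
        best = max(detections, key=lambda d: d["confidence"], default=None)
        if best is None or best["confidence"] < min_conf:
            return None                   # nothing reliable to sort
        return CLASS_TO_BIN[best["name"]], best["box"]

    dets = [{"name": "red", "confidence": 0.95, "box": (120, 80, 260, 220)}]
    print(pick_bin(dets))                 # -> (0, (120, 80, 260, 220))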
Andrea Pilco, Viviana Moya, Angélica Quito, Juan P. Vásconez, and Matías Limaico
EJournal Publishing
This study sheds light on the evolution of the agricultural industry and highlights advances in production. The recognition of fruit size and shape as critical quality parameters underscores the significance of the research. In response to this challenge, the research introduces specialized image processing techniques designed to streamline the sorting of apples in agricultural settings, with an emphasis on accurate apple width estimation. A purpose-built machine was designed, featuring an enclosure box housing a cost-effective camera for the vision system and a chain conveyor for classifying apples of the Malus domestica Borkh. variety. These goals were successfully achieved by implementing image preprocessing, segmentation, and measurement techniques to facilitate sorting. The proposed methodology classifies apples into three distinct classes, attaining an accuracy of 94% in Class 1, 92% in Class 2, and 86% in Class 3. This represents an efficient and economical solution for apple classification and size estimation, promising substantial enhancements to sorting processes and pushing the boundaries of automation in the agricultural sector.
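A minimal OpenCV sketch of the measurement pipeline described above, under assumed values for the segmentation threshold, the mm-per-pixel calibration, and the class boundaries:

    # Sketch: segment the apple, measure its bounding width, convert to mm.
    import cv2

    MM_PER_PX = 0.42                     # from a calibration target (assumed)

    img = cv2.imread("apple.jpg")        # frame from the enclosure camera
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    apple = max(contours, key=cv2.contourArea)   # largest blob = apple
    x, y, w, h = cv2.boundingRect(apple)
    width_mm = w * MM_PER_PX
    print(f"estimated apple width: {width_mm:.1f} mm")

    # A simple 3-class split by width (boundaries assumed)
    label = 1 if width_mm >= 80 else 2 if width_mm >= 70 else 3
    print("class:", label)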
L. Guevara, P. Wariyapperuma, H. Arunachalam, J. Vasconez, M. Hanheide, and E. Sklar
IEEE
The introduction of mobile robots to assist pickers in transporting crops during fruit harvesting operations is a promising solution to mitigate the impacts of the current labour shortage. However, having robots share the workspace with humans involves solving new challenges related to Human-Robot Interaction (HRI). For instance, effective Human-Robot Communication (HRC) methods will be crucial not only to ensure safe and efficient HRI but also to mitigate the potential stress of pickers who interact with robots for the first time. If stress or discomfort levels are not mitigated, then instead of facilitating the harvesting task, the robot can become a burden. In this context, this paper contributes one of the first user studies investigating the preferences and requirements of end users (actual pickers) to improve the usability of Robot-Assisted Fruit Harvesting (RAFH) solutions. The study involves real-world experiments and a usability assessment of existing and potential future technology, whose results and lessons learned can serve as guidelines for agri-robotics companies and stakeholders on how to design and deploy their own RAFH solutions.
Juan I. Saez Rojas, Jenny M. Pantoja, Mónica Matamala, Inesmar C. Briceño, Juan Pablo Vásconez, and Alfonso R. Romero-Conrado
Elsevier BV
Mailyn Calderón Díaz, Ricardo Ulloa-Jiménez, Nicole Castro Laroze, Juan Pablo Vásconez, Jairo R. Coronado-Hernández, Mónica Acuña Rodríguez, and Samir F. Umaña Ibáñez
Springer Nature Switzerland
Mailyn Calderón-Díaz, Rony Silvestre Aguirre, Juan P. Vásconez, Roberto Yáñez, Matías Roby, Marvin Querales, and Rodrigo Salas
MDPI AG
There is a significant risk of injury in sports and intense competition due to the demanding physical and psychological requirements. Hamstring strain injuries (HSIs) are the most prevalent type of injury among professional soccer players and are the leading cause of missed days in the sport. These injuries stem from a combination of factors, making it challenging to pinpoint the most crucial risk factors and their interactions, let alone find effective prevention strategies. Recently, there has been growing recognition of the potential of tools provided by artificial intelligence (AI). However, current studies primarily concentrate on enhancing the performance of complex machine learning models, often overlooking their explanatory capabilities. Consequently, medical teams have difficulty interpreting these models and are hesitant to trust them fully. In light of this, there is an increasing need for advanced injury detection and prediction models that can aid doctors in diagnosing or detecting injuries earlier and with greater accuracy. Accordingly, this study aims to identify the biomarkers of muscle injuries in professional soccer players through biomechanical analysis, employing several ML algorithms such as decision tree (DT) methods, discriminant methods, logistic regression, naive Bayes, support vector machine (SVM), K-nearest neighbor (KNN), ensemble methods, boosted and bagged trees, artificial neural networks (ANNs), and XGBoost. In particular, XGBoost is also used to obtain the most important features. The findings highlight that the variables that most effectively differentiate the groups and could serve as reliable predictors for injury prevention are the maximum muscle strength of the hamstrings and the stiffness of the same muscle. With regard to the 35 techniques employed, a precision of up to 78% was achieved with XGBoost, indicating that by considering scientific evidence, suggestions based on various data sources, and expert opinions, it is possible to attain good precision, thus enhancing the reliability of the results for doctors and trainers. Furthermore, the obtained results strongly align with the existing literature, although further specific studies about this sport are necessary to draw a definitive conclusion.
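The feature-ranking step can be sketched with the XGBoost API: fit a classifier and read off its feature importances, as the study does to surface hamstring strength and stiffness. The data below is random and the feature names are illustrative, not the study's variable set.

    # Sketch: XGBoost classifier with feature-importance ranking.
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))             # 200 players, 4 features (toy)
    y = rng.integers(0, 2, size=200)          # injured vs. not injured (toy)
    features = ["hamstring_max_strength", "hamstring_stiffness",
                "quad_strength", "sprint_speed"]

    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X, y)

    for name, score in sorted(zip(features, model.feature_importances_),
                              key=lambda p: -p[1]):
        print(f"{name}: {score:.3f}")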
Juan Pablo Vásconez, Mailyn Calderón-Díaz, Inesmar C. Briceño, Jenny M. Pantoja, and Patricio J. Cruz
Springer Nature Switzerland
Patricio J. Cruz, Juan Pablo Vásconez, Ricardo Romero, Alex Chico, Marco E. Benalcázar, Robin Álvarez, Lorena Isabel Barona López, and Ángel Leonardo Valdivieso Caraguay
Springer Science and Business Media LLC
Hand gesture recognition (HGR) based on electromyography (EMG) signals and inertial measurement unit (IMU) signals has been investigated for human-machine applications in the last few years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. The key idea of an HGR system is therefore to identify the moment in which a hand gesture was performed as well as its class. Several state-of-the-art human-machine approaches use supervised machine learning (ML) techniques for the HGR system. However, using reinforcement learning (RL) approaches to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained using a Myo Armband sensor. For this, we create an agent based on the Deep Q-learning algorithm (DQN) to learn a policy from online experiences to classify EMG-IMU signals. The proposed HGR system reaches accuracies of up to 97.45 ± 1.02% and 88.05 ± 3.10% for classification and recognition, respectively, with an average inference time per window observation of 20 ms, and we demonstrate that our method outperforms other approaches in the literature. We then test the HGR system to control two different robotic platforms. The first is a three-degrees-of-freedom (DOF) tandem helicopter test bench, and the second is a virtual six-DOF UR5 robot. We employ the designed HGR system and the IMU integrated into the Myo sensor to command and control the motion of both platforms. The movement of the helicopter test bench and the UR5 robot is controlled under a PID controller scheme. Experimental results show the effectiveness of using the proposed DQN-based HGR system for controlling both platforms with a fast and accurate response.
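Once trained, the agent's greedy policy doubles as a classifier: the action with the highest Q-value for a feature window is the predicted gesture. A sketch under assumed network shape, window feature size, and gesture set:

    # Sketch: per-window gesture classification with a trained Q-network.
    import torch
    import torch.nn as nn

    GESTURES = ["wave_in", "wave_out", "fist", "open", "pinch"]  # assumed
    WINDOW_FEATURES = 40              # per-window EMG+IMU features (assumed)

    policy = nn.Sequential(           # stand-in for the trained Q-network
        nn.Linear(WINDOW_FEATURES, 64), nn.ReLU(),
        nn.Linear(64, len(GESTURES)),
    )

    def classify_window(features: torch.Tensor) -> str:
        with torch.no_grad():
            q_values = policy(features.unsqueeze(0))  # Q(s, a) per gesture
        return GESTURES[q_values.argmax(dim=1).item()]

    window = torch.randn(WINDOW_FEATURES)  # stand-in window observation
    print(classify_window(window))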
Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, and Marco E. Benalcázar
Elsevier BV
Ángel Leonardo Valdivieso Caraguay, Juan Pablo Vásconez, Lorena Isabel Barona López, and Marco E. Benalcázar
MDPI AG
In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest in developing human–machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages such as promising classification performance and online learning from the user’s experience. In this work, we propose a user-specific HGR system based on an RL-based agent that learns to characterize EMG signals from five different hand gestures using Deep Q-Network (DQN) and Double-Deep Q-Network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) for the representation of the agent policy. We also performed additional tests by adding a long short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to 90.37% ± 10.7% and 82.52% ± 10.9%, respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
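The difference between the two agents compared above comes down to the bootstrap target: DQN bootstraps from the target network's own maximum, while Double-DQN selects the action with the online network and evaluates it with the target network, reducing overestimation. A sketch of both targets in PyTorch, with illustrative shapes:

    # Sketch: DQN vs. Double-DQN bootstrap targets.
    import torch

    def dqn_target(r, gamma, q_target_next):
        # y = r + gamma * max_a' Q_target(s', a')
        return r + gamma * q_target_next.max(dim=1).values

    def double_dqn_target(r, gamma, q_online_next, q_target_next):
        # a* = argmax_a' Q_online(s', a');  y = r + gamma * Q_target(s', a*)
        a_star = q_online_next.argmax(dim=1, keepdim=True)
        return r + gamma * q_target_next.gather(1, a_star).squeeze(1)

    r = torch.tensor([1.0, 0.0])            # batch of rewards
    q_online_next = torch.randn(2, 5)       # 5 gestures = 5 actions (assumed)
    q_target_next = torch.randn(2, 5)
    print(dqn_target(r, 0.99, q_target_next))
    print(double_dqn_target(r, 0.99, q_online_next, q_target_next))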