Optimized hybrid decoupled visual servoing with supervised learning

Alireza Rastegarpanah, Ali Aflakian, Rustam Stolkin

Research output: Contribution to journal › Article › peer-review


Abstract

This study proposes an optimized hybrid visual servoing approach to overcome the imperfections of classical two-dimensional, three-dimensional and hybrid visual servoing methods. These imperfections are mostly convergence issues, non-optimized trajectories, expensive calculations and singularities. The proposed method provides more efficient optimized trajectories with a shorter camera path than image-based and classical hybrid visual servoing methods. Moreover, it is less likely to lose the object from the camera field of view, and it is more robust to camera calibration errors than the classical position-based and hybrid visual servoing methods. The drawbacks of two-dimensional visual servoing are mostly related to camera retreat and rotational motions. To tackle these drawbacks, rotations and translations along the Z-axis are controlled separately, using a three-dimensional estimation of the visual features. The pseudo-inverse of the proposed interaction matrix is approximated by a neuro-fuzzy neural network called a local linear model tree. Using the local linear model tree, the controller avoids the singularities and ill-conditioning of the proposed interaction matrix and becomes robust to image noise and camera parameters. The proposed method has been compared with classical image-based, position-based and hybrid visual servoing methods, both in simulation and in the real world using a 7-degree-of-freedom robot arm.
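To make the role of the interaction-matrix pseudo-inverse concrete, the sketch below shows the classical image-based control law v = −λ L⁺ e for point features, with a damped pseudo-inverse standing in for the learned approximation. This is a minimal illustration, not the paper's method: the function names, the point-feature interaction matrix, and the damping constant are assumptions for this example, and the paper replaces the analytic pseudo-inverse with a local linear model tree, which this sketch does not reproduce.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction (image Jacobian) matrix for one point
    feature (x, y) in normalized image coordinates at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5, damping=1e-3):
    """Camera velocity screw v = -lam * L^+ * e, using a damped
    least-squares pseudo-inverse so the command stays bounded when
    the stacked interaction matrix L is ill-conditioned."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    # Damped pseudo-inverse: (L^T L + mu I)^-1 L^T
    L_pinv = np.linalg.solve(L.T @ L + damping * np.eye(6), L.T)
    return -lam * L_pinv @ e
```

A learned approximator such as the paper's local linear model tree would replace the `L_pinv @ e` step, mapping the feature error directly to a velocity command and thereby sidestepping explicit inversion near singular configurations.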
Original language: English
Pages (from-to): 338-354
Number of pages: 17
Journal: Proceedings of the Institution of Mechanical Engineers. Part I: Journal of Systems and Control Engineering
Volume: 236
Issue number: 2
Early online date: 30 Jun 2021
DOIs
Publication status: Published - Feb 2022

Bibliographical note

Copyright © IMechE 2021.
This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Data Access Statement

The data that support the findings of this study (the simulation in Gazebo/ROS, C++ codes and demo clip) are openly available in Figshare (https://figshare.com/articles/journal_contribution/Optimized_Hybrid_Decoupled_Visual_Servoing_simulation_and_code/12980009) with doi (10.6084/m9.figshare.12980009.v1).

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This research was conducted as part of the project called ‘Reuse and Recycling of Lithium-Ion Batteries’ (RELIB). This work was supported by the Faraday Institution (grant no. FIRG005).

Keywords

  • Hybrid visual servoing
  • local linear model tree
  • neuro-fuzzy neural network
  • non-linear models
  • optimized trajectory
