Spatial relationship representation for visual object searching

Jun Miao*, Lijuan Duan, Laiyun Qing, Wen Gao, Xilin Chen, Yuan Yuan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Image representation has been a key issue in vision research for many years. To represent various local image patterns or objects effectively, it is important to study the spatial relationships among these objects, especially for the purpose of searching for a specific object among them. Psychological experiments support the hypothesis that humans cognize the world using visual context, that is, object spatial relationships. How to efficiently learn and memorize such knowledge is therefore a key issue. This paper proposes a new type of neural network that learns and memorizes object spatial relationships by means of sparse coding. A group of comparison experiments on visual object searching with several sparse features is carried out to examine the proposed approach, and the efficiency of sparse coding of spatial relationships is analyzed and discussed. Theoretical and experimental results indicate that the newly developed neural network can effectively learn and memorize object spatial relationships, while learning and memorizing visual context remains a grand challenge in simulating the human vision system.
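The abstract does not reproduce the paper's network architecture, but the core idea of encoding a stimulus sparsely over a learned dictionary can be illustrated generically. The following sketch uses ISTA (iterative soft-thresholding), a standard sparse-coding solver; the dictionary, sparsity weight, and toy input are hypothetical choices for demonstration and are not taken from the paper.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """Illustrative ISTA solver: min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic term
        z = a - grad / L                       # gradient descent step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy example: a 4-atom dictionary; the input is built from atoms 0 and 2,
# so the recovered sparse code should concentrate on those two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 4))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x = 1.5 * D[:, 0] - 0.8 * D[:, 2]
a = sparse_code(D, x, lam=0.05)
```

In the paper's setting, such sparse codes would serve as the memorized representation linking one object to the relative locations of others; here the sketch only shows the generic coding step.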

Original language: English
Pages (from-to): 1813-1823
Number of pages: 11
Journal: Neurocomputing
Volume: 71
Issue number: 10-12
DOIs
Publication status: Published - 1 Jun 2008

Keywords

  • Neural network
  • Object searching
  • Sparse coding
  • Spatial relationship
  • Visual context

