Using vision and coordination to find unknown target in fixed and random length obstacles

Kiran Ijaz*, Umar Manzoor

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Real-time search methods are suited to tasks in which the agent interacts with an initially unknown environment but a known target position. In these heuristic searches, the agent selects its actions with a limited lookahead in a limited amount of time, while sensing only a local part of the environment centered at its current location. To our knowledge, all real-time heuristic algorithms work with a known or partially known target. In this paper we propose a generic technique for real-time search algorithms that works with a non-moving target whose position is unknown and unpredictable by the agents in an unknown environment. We model human-like vision for the agents: vision is omnidirectional overall, but directed along a single direction at any point in time. Agents cannot see through obstacles, so vision can be blocked by hurdles in the search space. We present an extension of the Learning Real-Time A* (LRTA*) algorithm that utilizes this generic scheme. The new algorithm, Vision Based LRTA* (VLRTA*), has been applied to solve randomly generated mazes with multiple agents. We have evaluated the algorithm on a large number of test cases with fixed- and random-size obstacles as well as varying obstacle ratios. Empirical evaluation shows that the suggested vision technique is effective in terms of both target-location time and solution quality.
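To make the idea concrete, the following is a minimal, illustrative Python sketch of an LRTA*-style agent on a grid with a simplified line-of-sight "vision" check. All names (`visible_target`, `lrta_star_step`, `search`) are hypothetical and the vision test is restricted to axis-aligned lines; the actual VLRTA* algorithm in the paper differs in its details.

```python
# Illustrative sketch only: an LRTA*-style grid agent with a simplified
# obstacle-blocked vision check. Names and details are assumptions, not the
# paper's VLRTA* implementation.
from typing import Dict, Optional, Set, Tuple

Cell = Tuple[int, int]

def visible_target(pos: Cell, target: Cell, obstacles: Set[Cell]) -> bool:
    """True if the target lies on an unobstructed horizontal or vertical
    line from pos (a stand-in for vision blocked by obstacles)."""
    (x, y), (tx, ty) = pos, target
    if x == tx:
        step = 1 if ty > y else -1
        return all((x, yy) not in obstacles for yy in range(y + step, ty, step))
    if y == ty:
        step = 1 if tx > x else -1
        return all((xx, y) not in obstacles for xx in range(x + step, tx, step))
    return False

def lrta_star_step(pos: Cell, h: Dict[Cell, float],
                   obstacles: Set[Cell], size: int) -> Cell:
    """One LRTA* move with unit edge costs: update h(pos) from the best
    neighbour estimate, then move to that neighbour."""
    candidates = [(pos[0] + dx, pos[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    candidates = [n for n in candidates
                  if 0 <= n[0] < size and 0 <= n[1] < size and n not in obstacles]
    best = min(candidates, key=lambda n: 1.0 + h.get(n, 0.0))
    h[pos] = max(h.get(pos, 0.0), 1.0 + h.get(best, 0.0))
    return best

def search(start: Cell, target: Cell, obstacles: Set[Cell],
           size: int, max_steps: int = 10_000) -> Optional[Cell]:
    """Wander via LRTA*-style value updates until the target becomes visible.
    With h initialised to zero, the rising h-values push the agent to explore."""
    h: Dict[Cell, float] = {}
    pos = start
    for _ in range(max_steps):
        if visible_target(pos, target, obstacles):
            return pos  # target spotted; a real agent would then home in on it
        pos = lrta_star_step(pos, h, obstacles, size)
    return None
```

Because the target position is unknown, the heuristic values start at zero and grow as cells are revisited, which drives exploration until the vision check succeeds; the paper additionally coordinates multiple agents, which this single-agent sketch omits.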

Original language: English
Pages (from-to): 2400-2405
Number of pages: 6
Journal: WSEAS Transactions on Computers
Volume: 5
Issue number: 10
Publication status: Published - 1 Oct 2006

Keywords

  • Coordination
  • Intelligent agents
  • Multi-agent
  • Real time
  • Unknown target
  • Vision
  • VLRTA*
