Abstract

Localization in a battlefield environment is increasingly challenging because GPS connectivity is often denied or unreliable, and physically deploying anchor nodes across wireless networks for localization can be difficult in hostile battlefield terrain. This paper proposes a novel framework for localizing moving objects in non-GPS battlefield environments using stereo vision and a deep learning model that recognizes naturally existing or artificial landmarks as anchors. The proposed method uses a custom-calibrated stereo vision camera for distance estimation and a YOLOv8s model, trained and fine-tuned on our real-world dataset, for landmark anchor recognition. Depth images are generated with an efficient stereo-matching algorithm, and distances to landmarks are determined by extracting the landmark depth feature within the bounding box predicted by the landmark recognition model. The position of the unknown node is obtained with an efficient least squares algorithm and then refined with the L-BFGS-B (limited-memory quasi-Newton method for bound-constrained optimization) method. Experimental results demonstrate that the proposed framework outperforms existing anchor-based DV-Hop algorithms and is competitive with the most efficient vision-based algorithms in terms of localization error (RMSE).
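To make the final localization step concrete, the sketch below illustrates the general technique the abstract names: a linearized least squares position estimate from landmark anchor distances, refined with SciPy's L-BFGS-B optimizer. It is a minimal illustration, not the paper's implementation; the anchor coordinates, distance values, and function names are hypothetical, and the stereo-vision and YOLOv8 stages that produce the distances are assumed to have run already.

```python
# Minimal sketch: least-squares multilateration from landmark distances,
# refined with L-BFGS-B. Anchor positions and distances are illustrative.
import numpy as np
from scipy.optimize import minimize


def least_squares_position(anchors, distances):
    """Closed-form linearized least-squares estimate of an unknown 2-D position."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x1, y1 = anchors[0]
    # Linearize by subtracting the first range equation from the others.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x1 ** 2 + y1 ** 2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est


def refine_lbfgsb(initial, anchors, distances, bounds=None):
    """Refine the estimate by minimizing squared range residuals with L-BFGS-B."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)

    def cost(p):
        residuals = np.linalg.norm(anchors - p, axis=1) - d
        return np.sum(residuals ** 2)

    result = minimize(cost, initial, method="L-BFGS-B", bounds=bounds)
    return result.x


if __name__ == "__main__":
    # Hypothetical landmark anchors (metres) and noisy distances to them,
    # standing in for stereo-vision range estimates.
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    true_pos = np.array([3.0, 4.0])
    rng = np.random.default_rng(0)
    distances = [np.linalg.norm(true_pos - np.array(a)) + rng.normal(0, 0.1)
                 for a in anchors]

    initial = least_squares_position(anchors, distances)
    refined = refine_lbfgsb(initial, anchors, distances,
                            bounds=[(0.0, 10.0), (0.0, 10.0)])
    print("least-squares estimate:", initial)
    print("L-BFGS-B refined:      ", refined)
```

The bound constraints passed to L-BFGS-B correspond to the idea of restricting the solution to a known operating area; in practice they would come from the deployment region rather than the fixed values shown here.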

Department(s)

Computer Science

Comments

Army Research Office, Grant W911NF2120261

Keywords and Phrases

Battlefield Navigation; DV-Hop Method; Landmark Recognition; Non-GPS localization; Stereo Vision; YOLOv8

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Institute of Electrical and Electronics Engineers. All rights reserved.

Publication Date

01 Jan 2024
