Abstract

ACOR is a well-established ant colony optimization algorithm that has been applied to neural network training. We present an approach for dynamically adapting the ACOR algorithm's search intensification/diversification parameter q, based on a set of pre-specified parameter configurations, which we call personalities. Before an ant begins to generate a candidate solution, it stochastically adopts a personality based on the relative past success of the different personalities. The success of a personality is measured, in turn, by the relative quality of previous solutions generated by ants adopting that personality. The premise of our approach is that some personalities will be more appropriate than others for different phases of the search. This paper follows up on previous work that used a similar approach to adapt ACOR's search width parameter ξ. We evaluate our proposal experimentally in the context of training feedforward neural networks for classification using 65 benchmark datasets from the University of California Irvine (UCI) repository. Our experimental results indicate that our proposal produces better predictive accuracy than standard ACOR, to a statistically significant extent.
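The abstract describes a success-proportional stochastic choice among pre-specified personalities, each fixing a value of q. A minimal sketch of that selection step, assuming fitness-proportional (roulette-wheel) selection and illustrative personality names, q values, and success scores (none of these specifics are given in the abstract):

```python
import random

# Hypothetical personalities: each fixes a value of ACOR's parameter q.
# The names and q values below are illustrative assumptions, not the
# paper's actual configurations.
PERSONALITIES = {
    "intensify": 0.01,   # small q: strongly favor the best archived solutions
    "balanced": 0.1,
    "diversify": 1.0,    # large q: sample archived solutions more uniformly
}

def choose_personality(success, rng=random):
    """Stochastically pick a personality in proportion to its past success.

    `success` maps personality name -> score reflecting the relative
    quality of solutions previously generated under that personality.
    """
    names = list(success)
    weights = [success[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Before generating a candidate solution, an ant adopts a personality
# and uses its q value for that construction step.
success = {"intensify": 3.0, "balanced": 1.5, "diversify": 0.5}
name = choose_personality(success)
q = PERSONALITIES[name]
```

Under this scheme a personality that has recently produced good solutions is adopted more often, while every personality retains a nonzero chance of being tried, which matches the intensification/diversification trade-off the abstract describes.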

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Comments

Intelligent Systems Center, Grant 2420248

Keywords and Phrases

Ant colony optimization; Feedforward neural networks; Parameter adaptation; Supervised learning; Swarm intelligence

International Standard Serial Number (ISSN)

1433-3058; 0941-0643

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2025 Springer. All rights reserved.

Publication Date

01 Jan 2025
