Location

Havener Center, Miner Lounge / Wiese Atrium, 9:30am-11:30am

Start Date

4-2-2026 9:30 AM

End Date

4-2-2026 11:30 AM

Presentation Date

April 2, 2026; 9:30am-11:30am

Description

Incremental learners deployed on streaming data must remain robust to evolving adversarial perturbations, yet most adversarial-robustness studies assume offline multi-epoch training with repeated access to historical data. We investigate adversarial robustness in Fuzzy ARTMAP, a prototype-based Adaptive Resonance Theory model that supports single-pass learning without replay. We propose WB-Softmax, a differentiable relaxation that aggregates category-level activations into class-level scores for gradient-based attacks. WB-Softmax PGD achieves 89–100% attack success on vanilla models, exceeding transfer and query-based baselines. We then study adversarial training under true streaming constraints by comparing offline versus online adversarial example generation and standard versus selective updates. Offline adversarial training consistently collapses robustness, whereas online training is dataset-dependent. We introduce progressive two-stage selective training that achieves the best overall robustness on all three datasets. Finally, we monitor incremental cluster validity indices to diagnose separation collapse and design a separation-aware absorption rule that improves high-epsilon robustness.
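The abstract's WB-Softmax relaxation aggregates Fuzzy ART category-level activations into differentiable class-level scores so gradient-based attacks such as PGD can be applied. A minimal sketch of one plausible formulation is below; the standard Fuzzy ART choice function is used for category activations, but the `wb_softmax_class_scores` aggregation (log-sum-exp pooling with temperature `beta`) is an assumption for illustration, not necessarily the authors' exact definition.

```python
import numpy as np

def complement_code(x):
    # Standard Fuzzy ART complement coding: [x, 1 - x]
    return np.concatenate([x, 1.0 - x])

def category_activations(x_cc, W, alpha=0.001):
    # Fuzzy ART choice function: T_j = |x ^ w_j| / (alpha + |w_j|),
    # where ^ is the element-wise minimum and |.| the L1 norm.
    fuzzy_and = np.minimum(x_cc, W)                      # (n_categories, 2d)
    return fuzzy_and.sum(axis=1) / (alpha + W.sum(axis=1))

def wb_softmax_class_scores(x_cc, W, labels, n_classes, beta=10.0):
    # Hypothetical aggregation: pool each class's category activations
    # with a smooth (log-sum-exp) maximum, giving class scores that are
    # differentiable in the input and usable as attack logits.
    T = category_activations(x_cc, W)
    scores = np.full(n_classes, -np.inf)
    for c in range(n_classes):
        Tc = T[labels == c]
        if Tc.size:
            scores[c] = np.log(np.exp(beta * Tc).sum()) / beta
    return scores

# Toy example: 3 category prototypes over a 2-D input, two classes.
x = np.array([0.2, 0.7])
W = np.array([[0.1, 0.5, 0.7, 0.2],
              [0.3, 0.6, 0.5, 0.3],
              [0.0, 0.9, 0.9, 0.1]])
labels = np.array([0, 1, 0])   # class label of each category
scores = wb_softmax_class_scores(complement_code(x), W, labels, n_classes=2)
```

In a PGD loop, one would take the gradient of a loss on these class scores with respect to `x` (e.g. via autodiff) and step the input by its sign, projected into the epsilon ball; the pooling above is what makes that gradient well defined across the winner-take-all category layer.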

Biography

Shane is a third-year Ph.D. student and Kummer I&E Doctoral Fellow in the Computer Science Department, advised by Dr. Don Wunsch in the Applied Computational Intelligence Laboratory. His research focuses on adversarial robustness and interpretability in incremental learning systems, particularly Adaptive Resonance Theory networks, with broader interests in AI safety and policy. Prior to his doctoral studies, Shane graduated from S&T with a B.S. in Computer Science and gained industry experience through internships at Ford and Howmet Aerospace. Outside of research, he enjoys soccer, chess, and hiking.

Meeting Name

2026 - Miners Solving for Tomorrow Research Conference

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Comments

Advisor: Donald C. Wunsch, dwunsch@mst.edu

Document Type

Poster

Document Version

Final Version

File Type

event

Language(s)

English

Rights

© 2026 The Authors. All rights reserved.


Robustness of Fuzzy ARTMAP to Adversarial Attacks and Progressive Adversarial Training for Streaming Learning
