Deep convolutional networks have recently been very successful in visual recognition tasks. Much prior work has aimed to give people a sense of why these biology-inspired networks achieve such good performance. Deconvnet [1], guided backpropagation [2], and a comprehensive visualization toolbox [3] let people inspect the features learned at different layers of a network. To some extent, these works provide understanding of, and support for, the biological origins of how convolutional networks perform visual recognition tasks. However, due to the complexity of searching a very high-dimensional parameter space, the training process itself remains a black box. A large network typically needs weeks of training on high-end graphics cards, and fine-tuning hyper-parameters such as the learning rate and the depth and width of the network still relies on previously successful architectures or on trial and error. In this poster, we study the network as a dynamical system and its learning process as the evolution of its parameters. By visualizing the development and evolution of the network, we aim to provide tools for finding optimal hyper-parameters.
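As a minimal illustration of the "learning as parameter evolution" view described above (this is a hypothetical sketch, not the authors' code), the snippet below trains a tiny linear model by gradient descent and records a snapshot of its parameters at every step, yielding a trajectory through parameter space that can then be plotted or summarized:

```python
import numpy as np

def train_and_trace(steps=200, lr=0.1, seed=0):
    """Run gradient descent on a toy least-squares problem and
    return the full parameter trajectory, one row per step."""
    rng = np.random.default_rng(seed)
    # Synthetic regression data: y = X @ w_true
    X = rng.normal(size=(64, 4))
    w_true = np.array([1.0, -2.0, 0.5, 3.0])
    y = X @ w_true

    w = np.zeros(4)
    trace = [w.copy()]          # snapshot of parameters before training
    for _ in range(steps):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)  # gradient of mean squared error
        w -= lr * grad
        trace.append(w.copy())  # snapshot after each update
    return np.array(trace)      # shape: (steps + 1, n_params)

trace = train_and_trace()
# One simple summary of the evolution: the L2 norm of the
# parameter vector at each step of training.
norms = np.linalg.norm(trace, axis=1)
```

For a real convolutional network the same idea applies per layer: record weight statistics (norms, histograms, pairwise distances between checkpoints) over training steps, and visualize those trajectories to compare runs with different hyper-parameters.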

Meeting Name

1st Workshop on Distributed Infrastructures for Deep Learning, DIDL 2017, Part of Middleware '17 (2017: Dec. 11-15, Las Vegas, NV)


Computer Science

Document Version

Final Version

© 2017 The Authors, All rights reserved.

Publication Date

15 Dec 2017