Deep convolutional networks have recently been very successful in visual recognition tasks. Many previous works have aimed to give people a sense of why these biology-inspired networks achieve such good performance. Deconvnet, guided backpropagation, and comprehensive visualization toolboxes let people inspect the features learned at different layers of a network. To some extent, these works provide understanding of, and support for, the biological origins of how convolutional networks perform visual recognition tasks. However, due to the complexity of searching a very high-dimensional parameter space, the training process itself remains a black box. A large network typically needs weeks of training on high-end graphics cards, and tuning hyper-parameters such as the learning rate and the depth and width of the network still relies on previously successful architectures or on trial and error. In this poster, we study the network as a dynamical system and its learning process as the evolution of its parameters. By visualizing the development and evolution of the network, we aim to provide facilities for finding optimal hyper-parameters.
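The poster's title indicates that training is visualized with ParaView. As a purely illustrative sketch of the idea of treating learning as parameter evolution (not the authors' actual pipeline), the toy code below snapshots the weight matrix of a one-layer linear model after every gradient-descent epoch and writes each snapshot as a legacy-VTK `STRUCTURED_POINTS` file, a format ParaView can load as a time series. All function names and the choice of toy model are assumptions made for this example.

```python
import numpy as np

def train_and_snapshot(n_epochs=10, lr=0.1, seed=0):
    # Toy "network": fit Y = X @ W_true.T with gradient descent,
    # snapshotting the weight matrix after every epoch so its
    # evolution over training can be visualized as a time series.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((4, 4))        # parameters being learned
    W_true = rng.standard_normal((4, 4))   # ground-truth mapping
    X = rng.standard_normal((64, 4))
    Y = X @ W_true.T
    snapshots = []
    for _ in range(n_epochs):
        # gradient of 0.5/N * ||X W^T - Y||^2 with respect to W
        grad = (X @ W.T - Y).T @ X / len(X)
        W -= lr * grad
        snapshots.append(W.copy())
    return np.stack(snapshots)             # shape: (n_epochs, 4, 4)

def write_vtk_series(snapshots, prefix="weights"):
    # Write each epoch's weight matrix as an ASCII legacy-VTK
    # STRUCTURED_POINTS file, one file per timestep, so ParaView
    # can open the sequence and animate the parameter evolution.
    paths = []
    for t, W in enumerate(snapshots):
        path = f"{prefix}_{t:03d}.vtk"
        ny, nx = W.shape
        with open(path, "w") as f:
            f.write("# vtk DataFile Version 3.0\n")
            f.write(f"weights at epoch {t}\nASCII\n")
            f.write("DATASET STRUCTURED_POINTS\n")
            f.write(f"DIMENSIONS {nx} {ny} 1\n")
            f.write("ORIGIN 0 0 0\nSPACING 1 1 1\n")
            f.write(f"POINT_DATA {nx * ny}\n")
            f.write("SCALARS weight float 1\nLOOKUP_TABLE default\n")
            for v in W.ravel():
                f.write(f"{v:.6f}\n")
        paths.append(path)
    return paths
```

A real network would have far more parameters per layer, but the same scheme applies: dump each layer's weights per epoch and let ParaView render the resulting volume sequence.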
X. Chen et al., "TensorViz: Visualizing the Training of Convolutional Neural Network Using ParaView" (poster), in Proceedings of the 1st Workshop on Distributed Infrastructures for Deep Learning (DIDL '17), Las Vegas, NV, ACM, Dec. 2017.
1st Workshop on Distributed Infrastructures for Deep Learning (DIDL 2017), part of Middleware '17, Dec. 11-15, 2017, Las Vegas, NV.
© 2017 The Authors. All rights reserved.