On self-organizing reservoirs and their hierarchies
- Recent advances in reservoir computing have demonstrated that fixed random recurrent networks with only their readouts trained often outperform fully trained recurrent neural networks. While full supervised training of such networks is problematic, intuitively there should also be something better than a random network. In this contribution we investigate an approach that lies between the two. We use reservoirs derived from recursive self-organizing maps, which are trained in an unsupervised way and subsequently evaluated by training supervised readouts on their states. This approach also enables us to train hierarchies of such dynamic reservoirs in a greedy, layer-by-layer unsupervised fashion. Using a synthetic handwriting-like temporal pattern recognition dataset, we rigorously demonstrate the advantage of self-organizing reservoirs over traditional random ones, and of hierarchies of such reservoirs over single ones.
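To make the approach concrete, below is a minimal sketch of a self-organizing reservoir in the spirit of a recursive self-organizing map (RecSOM), with a supervised ridge-regression readout trained on the collected states. All names, the 1-D map topology, and the hyperparameter values are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

class SelfOrganizingReservoir:
    """RecSOM-style reservoir: each unit matches both the current input
    and the previous activation vector (its temporal context)."""

    def __init__(self, n_in, n_units, alpha=1.0, beta=0.5,
                 lr=0.1, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.1, 0.1, (n_units, n_in))      # input weights
        self.W_ctx = rng.uniform(-0.1, 0.1, (n_units, n_units))  # context weights
        self.alpha, self.beta = alpha, beta   # input/context distance scalings
        self.lr, self.sigma = lr, sigma       # SOM learning rate, neighborhood width
        self.grid = np.arange(n_units)        # 1-D map topology (assumption)

    def _activate(self, x, y_prev):
        # Gaussian similarity to the input and to the previous activation
        d_in = np.sum((self.W_in - x) ** 2, axis=1)
        d_ctx = np.sum((self.W_ctx - y_prev) ** 2, axis=1)
        return np.exp(-self.alpha * d_in - self.beta * d_ctx)

    def run(self, X, train=False):
        # Drive the reservoir with a sequence X (T x n_in); collect states.
        y = np.zeros(self.W_ctx.shape[0])
        states = []
        for x in X:
            y_new = self._activate(x, y)
            if train:  # unsupervised SOM-style adaptation
                bmu = np.argmax(y_new)  # best-matching unit
                h = np.exp(-((self.grid - bmu) ** 2) / (2 * self.sigma ** 2))
                # Pull the BMU neighborhood toward the current (input, context) pair
                self.W_in += self.lr * h[:, None] * (x - self.W_in)
                self.W_ctx += self.lr * h[:, None] * (y - self.W_ctx)
            y = y_new
            states.append(y)
        return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    # Supervised linear readout fitted by ridge regression on reservoir states.
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Usage sketch on stand-in data: unsupervised reservoir adaptation first,
# then a supervised readout on the frozen reservoir's states.
X = np.random.default_rng(1).normal(size=(500, 2))  # stand-in input sequence
T = np.roll(X[:, :1], 1, axis=0)                    # stand-in target sequence
res = SelfOrganizingReservoir(n_in=2, n_units=50)
res.run(X, train=True)                 # unsupervised training pass
W_out = train_readout(res.run(X), T)   # readout weights, states x targets
```

Under the same assumptions, a greedy hierarchy would repeat the unsupervised step layer by layer: train a second reservoir on the state sequences produced by the first, and so on, with supervised readouts trained only afterwards.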