- Echo State Networks (ESNs) and Liquid State Machines (LSMs) introduced a simple new paradigm for training artificial recurrent neural networks (RNNs), in which an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, which has become known as reservoir computing, made RNNs accessible for practical applications as never before and outperformed classical fully trained RNNs in many tasks. This success, however, does not imply that random reservoirs are optimal, but rather that adequate training methods for them are yet to be developed. Thus much of the current research in reservoir computing concerns reservoir adaptation, effectively redefining the paradigm as training the reservoir and the readout by different methods. This report motivates this new definition of the paradigm and surveys reservoir generation and adaptation techniques, offering a natural conceptual classification that transcends the boundaries of the current "brand names" of reservoir methods. The survey focuses on methods relevant to practical applications of RNNs rather than on modeling biological brains.
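
To make the basic paradigm concrete, the following is a minimal, self-contained sketch of an Echo State Network in NumPy: the reservoir weights are drawn randomly and left fixed, and only the linear readout is fitted. All dimensions, scaling constants, the washout length, and the ridge parameter below are illustrative assumptions, not values prescribed by the survey.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 200

# Reservoir: random, fixed weights. The recurrent matrix is rescaled so
# its spectral radius is below 1 -- a common (sufficient but not
# necessary) recipe aimed at the echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task (hypothetical): one-step-ahead prediction of a sine wave.
u = np.sin(np.arange(0, 60, 0.1))
y = np.roll(u, -1)          # next-step target
X = run_reservoir(u)

# Only the readout is trained, here by ridge regression on the
# collected states (after discarding an initial washout period).
washout, ridge = 100, 1e-6
Xw, yw = X[washout:], y[washout:]
W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_res), Xw.T @ yw)

print("train MSE:", np.mean((Xw @ W_out - yw) ** 2))
```

Everything up to the readout fit is untrained in this sketch; reservoir adaptation methods, the subject of the survey, would replace or augment the purely random generation of `W` and `W_in`.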