Extreme Learning Machine
I read an interesting article, “Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks” (2004), about a new learning method for a certain type of neural network. The method applies to feedforward networks with a single hidden layer (SLFNs).
According to the article, this method clearly generalizes better than classical gradient-descent learning (i.e. backpropagation) and trains much faster (sometimes thousands of times faster!). Furthermore, no parameters need to be manually tuned (apart from the predefined network architecture). So the slow iterative training of gradient descent and the problems of local minima are over.
The researchers showed that in this type of single-hidden-layer network, the input weights (and the hidden biases) can be chosen randomly, because they do not need to be trained at all. The output weights can then be determined analytically: we just have to solve a basic linear system by computing the Moore–Penrose pseudoinverse of a non-square M×N matrix (M = number of examples, N = number of hidden neurons), which directly yields the required output weights. This method was also successfully applied to RBF networks and to sequential (online) training.
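To make the idea concrete, here is a minimal sketch of the training procedure in Python with NumPy. The toy data (fitting sin(x)), the hidden-layer size, and the use of tanh as the activation are my own illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

N = 50  # number of hidden neurons (the only architecture choice)

# Step 1: input weights and hidden biases are drawn randomly
# and never trained, as the article describes.
W = rng.normal(size=(X.shape[1], N))
b = rng.normal(size=N)

def hidden(X):
    """Hidden-layer output matrix H: one row per example, one column per neuron."""
    return np.tanh(X @ W + b)

# Step 2: the output weights are obtained analytically via the
# Moore-Penrose pseudoinverse of the non-square M x N matrix H.
H = hidden(X)                 # shape (M, N): M examples, N neurons
beta = np.linalg.pinv(H) @ y  # output weights, no iterative training

# Prediction is then a single matrix product.
y_hat = hidden(X) @ beta
mse = float(np.mean((y_hat - y) ** 2))
```

There is no loop over epochs and no learning rate: the entire "training" is one pseudoinverse computation, which is where the claimed speed advantage comes from.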