Continuum Machine Learning Idea? – Small Changes

The idea here is to see whether there is such a thing as a continuum method.

Say you train a one-hidden-layer network with X as both input and output. This should converge fairly quickly. Then I thought of a time series, like the weather temperature: the temperature series over a year has a characteristic shape. Suppose I initialize the network with weights such that it already produces that characteristic temperature curve. Then I hope that if I change the input data a little, the weights only need to change a little.
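A minimal sketch of this warm-start idea (my own illustration, not code from this post): train a tiny X-to-X network with plain gradient descent, then perturb the input slightly and continue training from the already-trained weights. The network sizes, learning rate, and random data are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(X, syn0, syn1):
    # sum-squared reconstruction error of the X -> X network
    l2 = sigmoid(sigmoid(X @ syn0) @ syn1)
    return np.sum((X - l2) ** 2)

def train(X, syn0, syn1, iters, lr):
    # plain batch gradient descent on the reconstruction error
    for _ in range(iters):
        l1 = sigmoid(X @ syn0)
        l2 = sigmoid(l1 @ syn1)
        l2_delta = (l2 - X) * l2 * (1 - l2)
        l1_delta = (l2_delta @ syn1.T) * l1 * (1 - l1)
        syn1 -= lr * (l1.T @ l2_delta)
        syn0 -= lr * (X.T @ l1_delta)
    return syn0, syn1

X = rng.random((20, 8))            # stand-in for e.g. normalized temperatures
syn0 = rng.random((8, 8)) - 0.5
syn1 = rng.random((8, 8)) - 0.5

init_err = loss(X, syn0, syn1)
syn0, syn1 = train(X, syn0, syn1, iters=800, lr=0.02)
err_trained = loss(X, syn0, syn1)

# perturb the input slightly and continue from the trained weights
X2 = np.clip(X + 0.01 * rng.standard_normal(X.shape), 0.0, 1.0)
syn0b, syn1b = train(X2, syn0.copy(), syn1.copy(), iters=50, lr=0.02)
shift = np.max(np.abs(syn0b - syn0))
```

If the hope above holds, `shift` stays small because the warm-started network only needs a few cheap iterations to track the perturbed data, instead of a full retraining from random weights.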

It might be that the weight matrices need to be smooth rather than fragile for this to work.

The point of all this was to compute predictions faster.

Example of smoothing the weight matrices syn0 and syn1, which I used in a feedforward network with max(errorList) as the error function, optimized with fmin_slsqp:

import scipy as sp
import scipy.ndimage

# reshape the flat weight vectors back into matrices, then blur them
syn0 = syn0.reshape(32, 32)
syn1 = syn1.reshape(16, 16)
syn0 = sp.ndimage.gaussian_filter(syn0, sigma=[1, 1], mode='constant')
syn1 = sp.ndimage.gaussian_filter(syn1, sigma=[1, 1], mode='constant')

errorList[i] = np.sum((y - l2) ** 2)  # squared error for training case i
maxError = np.max(errorList)
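A runnable sketch of that objective (assumed setup, not the full original script): minimize the worst-case per-sample squared error with scipy's SLSQP optimizer over the flattened weights. The toy 4x4 layer sizes and random data are my assumptions for the demo.

```python
import numpy as np
from scipy.optimize import fmin_slsqp

rng = np.random.default_rng(1)
X = rng.random((10, 4))
y = X  # autoencoder-style target: reproduce the input

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def max_error(w):
    # unpack flat parameter vector into the two weight matrices
    syn0 = w[:16].reshape(4, 4)
    syn1 = w[16:].reshape(4, 4)
    l2 = sigmoid(sigmoid(X @ syn0) @ syn1)
    errorList = np.sum((y - l2) ** 2, axis=1)  # per-sample squared error
    return np.max(errorList)                   # worst case over samples

w0 = rng.random(32) - 0.5
w_opt = fmin_slsqp(max_error, w0, iter=100, iprint=0)
```

Note that max() makes the objective non-smooth, which SLSQP is not really built for; a common workaround is to minimize a slack variable t subject to errorList[i] <= t, which SLSQP handles as a smooth constrained problem.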

Animation showing successive iterations of syn0


smooth weight matrix