Inspired by crossover in genetic algorithms, I came up with my own variant of the idea for machine learning.
The idea is to let one or more instances of model2() iterate with [1, 0] as their input. These models adapt themselves and share some of their weights with the target prediction model, model1().
So the idea is to mix in more than one type of learning approach: one "fixation" model2() (for lack of a better term) and one learning model1().
With this setup I got a little over 98% accuracy on MNIST.
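One possible reading of this setup can be sketched in Python with NumPy. Everything concrete here is my assumption, not the author's exact method: the layer sizes, sharing the first layer's weights, the toy 2-dimensional input (real MNIST inputs would be 784-dimensional), and especially model2()'s adaptation objective (here, pulling its output toward its constant input [1, 0]).

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of the first layer, shared by both models (assumed sharing scheme).
W_shared = rng.normal(0, 0.1, size=(2, 8))

# Private output weights for each model.
W1 = rng.normal(0, 0.1, size=(8, 10))  # model1: task head (e.g. 10 classes)
W2 = rng.normal(0, 0.1, size=(8, 2))   # model2: "fixation" head

def model1(x):
    """Learning model: ordinary prediction on a task input x."""
    h = np.maximum(0, x @ W_shared)
    return h @ W1

def model2():
    """Fixation model: always iterates on the constant input [1, 0]."""
    h = np.maximum(0, np.array([1.0, 0.0]) @ W_shared)
    return h @ W2

def adapt_model2(lr=0.01):
    """Hypothetical adaptation step: nudge the shared weights so that
    model2's output drifts toward its own input (an assumed objective)."""
    global W_shared, W2
    x = np.array([1.0, 0.0])
    h = np.maximum(0, x @ W_shared)
    y = h @ W2
    err = y - x                    # assumed "fixation" error signal
    dh = (err @ W2.T) * (h > 0)    # backprop through the ReLU
    W2 -= lr * np.outer(h, err)
    W_shared -= lr * np.outer(x, dh)

# Letting model2 iterate changes W_shared, which model1 also uses,
# so model2's self-adaptation leaks into the prediction model.
for _ in range(100):
    adapt_model2()

print(model1(np.array([0.5, 0.5])).shape)  # (10,)
```

In this reading, the "crossover" is the shared weight matrix: model1() is trained on the task as usual, while model2()'s iterations on the fixed input continually reshape part of model1()'s representation.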