I was wondering whether machine learning can be used for interpolation, perhaps as a way to control overfitting.
So you have discrete values in a target vector: zeros and ones. My idea is to find a "smart" learned interpolation that reduces overfitting, so that the network's output converges to the discrete target values at the data points while still generalizing correctly elsewhere.
Basically, the interpolation curve in between should also make sense, not just the fit at the target values themselves. Is it possible to give a large network a nudge in the right direction, i.e. to bias the fit toward that smart interpolation curve?
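To make the idea concrete, here is a minimal sketch of the trade-off I mean. It uses kernel ridge regression as a simple classical stand-in for a network (the 1-D data, the RBF kernel, its length scale, and the penalty strength `lam` are all made-up assumptions, not anything from a specific library or problem). The regularizer `lam` acts as exactly this kind of "nudge": with `lam` near zero the curve passes through the 0/1 targets exactly; with a larger `lam` the in-between curve is smoother but no longer fits each point perfectly.

```python
import numpy as np

# Hypothetical 1-D example: discrete 0/1 targets at a few input points.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y_train = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 1.0])

def rbf_kernel(a, b, length_scale=0.7):
    """Gaussian (RBF) kernel matrix between point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def fit_krr(x, y, lam):
    """Kernel ridge regression: lam trades off exact fit vs. smoothness."""
    K = rbf_kernel(x, x)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def predict(x_new, x, alpha):
    """Evaluate the fitted interpolation curve at new points."""
    return rbf_kernel(x_new, x) @ alpha

# Tiny lam: the curve passes (almost) exactly through the 0/1 targets.
alpha_tight = fit_krr(x_train, y_train, lam=1e-6)
# Larger lam: a smoother in-between curve that no longer hits every point.
alpha_smooth = fit_krr(x_train, y_train, lam=0.5)

grid = np.linspace(0.0, 5.0, 101)
print(np.round(predict(x_train, x_train, alpha_tight), 2))
print(np.round(predict(x_train, x_train, alpha_smooth), 2))
```

The same principle carries over to neural networks, where the "nudge" would instead be something like weight decay or an explicit smoothness penalty on the network output between the data points.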