I was wondering whether machine learning could be used for interpolation. One use of this might be to control overfitting.

So you have discrete values in a target vector: zeros and ones. My idea is to find a smart machine-learning interpolation such that overfitting is reduced, the network output converges to the discrete target values, and the model predicts correctly.

Basically, the in-between interpolation curve should make sense, not just the fit at the target values. Is it possible to give a large network a nudge in the right direction by fitting for the smart interpolation curve?
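As a rough sketch of what "smart interpolation with overfitting control" could mean, here is a minimal kernel ridge regression example in numpy. This is my own illustrative substitute, not the method the post proposes: the data points, the RBF kernel, the `length_scale`, and the regularization weight `lam` are all assumptions. The point is only that a single knob (`lam`) trades off hitting the discrete 0/1 targets against keeping the in-between curve smooth.

```python
import numpy as np

# Hypothetical 1-D example: discrete 0/1 targets at a few sample points.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y_train = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 1.0])

def rbf_kernel(a, b, length_scale=0.5):
    """Gaussian (RBF) kernel matrix between point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def fit_kernel_ridge(x, y, lam=0.05):
    """Solve (K + lam*I) alpha = y.
    Larger lam = smoother curve (less overfitting), looser fit to targets."""
    K = rbf_kernel(x, x)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def predict(x_new, x, alpha):
    """Evaluate the fitted interpolation curve at new points."""
    return rbf_kernel(x_new, x) @ alpha

alpha = fit_kernel_ridge(x_train, y_train)
x_grid = np.linspace(0.0, 5.0, 101)
y_grid = predict(x_grid, x_train, alpha)
# With small lam the curve passes close to the 0/1 targets, while the
# kernel acts as a smoothness prior on the curve in between them.
```

A neural network with a smoothness penalty on its output would play the same role as `lam` here; kernel ridge is just the shortest self-contained way to show the fit-versus-smoothness trade-off.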

Thought Mechanics In Progress // Per Lindholm