I think I found something wonderful.

For example: let the model output A and b in Ax = b, that is, A, b = model(…). Then use a proper algorithm to compute x = np.linalg.solve(A, b). The x here is the target value, which also has some loss attached to it.
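A minimal sketch of that forward pass, assuming a toy stand-in "model" (a single random linear map; the shapes and names are my own, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3       # size of the linear system (assumed)
in_dim = 4  # input feature size (assumed)

# Hypothetical learnable parameters of the toy model.
W = rng.standard_normal((n * n + n, in_dim)) * 0.1

def model(features):
    """Map input features to (A, b) of the system A @ x = b."""
    out = W @ features
    # Adding the identity nudges A toward being well-conditioned.
    A = out[: n * n].reshape(n, n) + np.eye(n)
    b = out[n * n:]
    return A, b

features = rng.standard_normal(in_dim)
A, b = model(features)

# Solve with a classical algorithm instead of letting the network
# predict x directly.
x = np.linalg.solve(A, b)

# The loss is then taken on the solved x, e.g. against a known target
# (x_target here is a hypothetical supervision signal).
x_target = np.ones(n)
loss = np.mean((x - x_target) ** 2)
```

To actually train W you would backpropagate through the solve, which frameworks like PyTorch support out of the box via `torch.linalg.solve`.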

The idea is that you filter each iteration so it always contains some ground truth. You denoise it, if you will. The machine learning model cannot do all the calculations on its own; it is not intelligent enough for that.

I think this can be used with differential equations as well. Just let the model represent a system of linear differential equations, then solve that system with a known algorithm, so the algorithmic solution is carried out at every iteration.
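The same pattern for the ODE case might look like this: the model's output is the coefficient matrix M of dx/dt = M x (here just random numbers standing in for a trained network), and a standard integrator produces the state the loss is computed on. The RK4 integrator and all names are my own sketch, not from the original:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2  # state dimension (assumed)

# Stand-in for the model's output: the coefficient matrix of dx/dt = M @ x.
M = rng.standard_normal((n, n)) * 0.5

def solve_linear_ode(M, x0, t_end, steps=1000):
    """Integrate dx/dt = M @ x with a fixed-step RK4 scheme."""
    dt = t_end / steps
    x = x0.copy()
    for _ in range(steps):
        k1 = M @ x
        k2 = M @ (x + 0.5 * dt * k1)
        k3 = M @ (x + 0.5 * dt * k2)
        k4 = M @ (x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 0.0])
x_T = solve_linear_ode(M, x0, t_end=1.0)
# The loss would then be computed on x_T, exactly as in the Ax = b case.
```

For a linear system the exact solution is x(T) = exp(M T) x0, so one could also use `scipy.linalg.expm` instead of a step-by-step integrator.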

So the idea is to combine classical algorithms with the machine learning model: a Linear System Model of Machine Learning.