Probability Theory – Use Machine Learning Network Objects Instead Of Initial Value Probabilities?

A quick math idea this morning.

I argue that our problem descriptions in probability are often wrong. They state the probability as a single fixed value when it could instead be a machine learning network. The reason is that a network can approximate any function.

So what you need is data. You then feed that initial data to a network model and train it until it converges.
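As a rough illustration, here is a minimal NumPy sketch of that idea. Everything in it is an assumption for the example: the toy data, the network size, the learning rate. The point is only that the learned network plays the role of the probability, as a function of an input rather than a single fixed number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: the "success" probability depends on a feature x,
# so describing the problem with one fixed value p would miss the structure.
x = rng.uniform(-3, 3, size=(500, 1))
true_p = 1 / (1 + np.exp(-2 * x))                 # unknown in practice
y = (rng.uniform(size=x.shape) < true_p).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One hidden layer, sigmoid output: the network itself is P(y=1 | x).
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    p = sigmoid(h @ W2 + b2)                      # predicted probability
    g = (p - y) / len(x)                          # cross-entropy gradient at the logit
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)
    W1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(0)
    W2 -= lr * gW2;        b2 -= lr * gb2

query = np.array([[1.0]])
print("P(y=1 | x=1.0) ~", sigmoid(np.tanh(query @ W1 + b1) @ W2 + b2).item())
```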

You can then query the network for probability calculations. However, since different models can converge to the same data, there should exist a way to estimate this uncertainty.
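One simple way to put a number on that uncertainty, and this is my own assumption rather than something spelled out above, is an ensemble: train several networks from different random initializations on the same data and treat the spread of their predictions as the uncertainty estimate. A minimal sketch, reusing the same kind of toy setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical toy problem as before.
x = rng.uniform(-3, 3, size=(500, 1))
y = (rng.uniform(size=x.shape) < 1 / (1 + np.exp(-2 * x))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_network(x, y, seed, steps=2000, lr=0.1, hidden=16):
    """Fit one small network and return a function q -> P(y=1 | q)."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        g = (p - y) / len(x)
        gW2, gb2 = h.T @ g, g.sum(0)
        gh = (g @ W2.T) * (1 - h ** 2)
        W1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(0)
        W2 -= lr * gW2;        b2 -= lr * gb2
    return lambda q: sigmoid(np.tanh(q @ W1 + b1) @ W2 + b2)

# Several networks that all fit the same data; their disagreement is the uncertainty.
models = [train_network(x, y, seed=s) for s in range(10)]
query = np.array([[1.0]])
preds = np.array([m(query).item() for m in models])
print(f"P(y=1 | x=1.0): mean {preds.mean():.3f}, spread (std) {preds.std():.3f}")
```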

So in everything, ask yourself: could a network take on this problem? If so, try it.
