I was looking at the Heisenberg uncertainty principle. Since it is based on standard deviations, I wonder if it is related to machine learning, which deals with probability. Here the target values in the machine learning network are something that is known, like the physical constants.
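To make the standard-deviation connection concrete, here is a small sketch (my own illustration, not part of the original idea) that checks the uncertainty relation σx·σp ≥ ħ/2 numerically for a Gaussian wave packet, in units where ħ = 1. The grid sizes and the width below are arbitrary choices for the demo.

```python
import math, cmath

HBAR = 1.0
SIGMA = 1.0                      # position-space standard deviation (my choice)

# Sample psi(x) = exp(-x^2 / (4 sigma^2)) on a grid; |psi|^2 then has std = sigma.
N, L = 400, 10.0
dx = 2 * L / N
xs = [-L + i * dx for i in range(N)]
psi = [math.exp(-x * x / (4 * SIGMA ** 2)) for x in xs]

def std_from_density(grid, density, step):
    """Standard deviation of a (not yet normalised) probability density."""
    norm = sum(density) * step
    mean = sum(g * d for g, d in zip(grid, density)) * step / norm
    var = sum((g - mean) ** 2 * d for g, d in zip(grid, density)) * step / norm
    return math.sqrt(var)

sigma_x = std_from_density(xs, [abs(a) ** 2 for a in psi], dx)

# Momentum-space amplitude phi(p) via a direct Fourier sum over the grid.
P, M = 5.0, 200
dp = 2 * P / M
ps = [-P + j * dp for j in range(M)]
phi = [sum(a * cmath.exp(-1j * p * x / HBAR) for a, x in zip(psi, xs)) * dx
       for p in ps]
sigma_p = std_from_density(ps, [abs(b) ** 2 for b in phi], dp)

product = sigma_x * sigma_p
print(product)                   # close to hbar/2 = 0.5 for a Gaussian packet
```

A Gaussian saturates the bound, so the printed product sits right at ħ/2; any other wave shape would give a larger value.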
So the uncertainty principle could perhaps be modelled as two network neurons with probability values that together produce a result related to Planck’s constant.
From this I speculate that everything transforms into constants. Inversely, this means there are more constants left to discover. // Per Lindholm
The reason I think the universe uses constants as targets in machine learning networks is that it would make something complex easier. In supervised learning you need examples. If the example is just a number, then it is much easier than billions and billions of different large data examples that would contain errors or noise.
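As a minimal sketch of this idea (my own toy illustration, not a claim about how physics works): supervised learning where the target is a single known constant, here Planck's constant in J·s. The "network" is a single trainable weight fitted by gradient descent on a squared-error loss; the learning rate and step count are arbitrary choices.

```python
PLANCK = 6.62607015e-34          # target: a known physical constant (J*s)

w = 0.0                          # single trainable parameter
lr = 0.1                         # learning rate (my choice)
for _ in range(100):
    pred = w                     # the tiny "network" just outputs its weight
    grad = 2 * (pred - PLANCK)   # d/dw of the loss (pred - target)^2
    w -= lr * grad

print(w)                         # converges toward the constant target
```

Because the target is one noise-free number rather than a large noisy dataset, the training loop needs no data pipeline at all, which is the simplicity the paragraph above is pointing at.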