Machine learning suggests the philosophical possibility of directing data from a later layer back to an earlier layer. I call such networks internal loop networks.
Then, if everything needs to be computed in some way, could a magnet calculate its solution from recurrent information? The field lines look like loops, so why not? Data and energy would go from one layer to the next along the magnet and back to the input and output layers.
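Here is a minimal toy sketch in Python of what I mean by an internal loop: the later layer's output is fed back into the earlier layer on every iteration. All sizes and weights below are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 4))   # earlier layer weights
    W2 = rng.normal(size=(4, 4))   # later layer weights
    Wb = rng.normal(size=(4, 4))   # feedback weights: later layer back to earlier layer

    x = rng.normal(size=4)         # fixed input
    h2 = np.zeros(4)               # later layer's state, fed back each step

    # Internal loop: the later layer's output re-enters the earlier layer.
    for _ in range(10):
        h1 = np.tanh(W1 @ x + Wb @ h2)  # earlier layer sees the feedback
        h2 = np.tanh(W2 @ h1)           # later layer updates

    print(h2)  # the state the loop settles into (or oscillates around)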
So I wonder: is stored energy just a compressed mass-information bundle? I mean, when you compress information you get a higher information density. So why couldn't energy storage be a compress-and-decompress process?
If energy is like an image you are compressing, then decompressing it will restore the energy to its original state.
It's when you tap the decompression process that it gets interesting. Then, I assume, you get information loss in the image, but you can create another image with the energy you withdraw.
So you can manipulate the choices made during decompression, and energy can be iterated out to create another “image”.
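As a toy illustration of the analogy only (lossless compression with Python's zlib; nothing here is physics):

    import zlib

    image = bytes(range(256)) * 100    # stand-in for the "image" (the energy)
    stored = zlib.compress(image)      # "storing the energy": higher density
    print(len(image), len(stored))     # fewer bytes after compression

    restored = zlib.decompress(stored) # "releasing the energy"
    print(restored == image)           # True: the original state comes back

Tapping the decompression would then correspond to a lossy codec, where the restored image is no longer identical to the original.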
So a black hole just compresses all the mass-information into one ?neat compressed energy bundle, to be decompressed, I guess, somewhere else for energy.
Could information be stored in a material? I mean, if you use an error function between steel and plastic as the criterion, then maybe random() as input to the material model could produce plastic with steel properties.
Something like that could be an abstract version of material design.
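A minimal sketch of that abstract version. The material_model below and its property targets are entirely made up; random() stands in as the search over inputs:

    import numpy as np

    rng = np.random.default_rng(0)
    steel = np.array([200.0, 7.85])    # made-up targets: stiffness, density

    def material_model(params):
        # Hypothetical stand-in for a learned model of a plastic:
        # maps design parameters to predicted material properties.
        return np.array([50.0 + 300.0 * params[0], 1.0 + 8.0 * params[1]])

    def error(params):
        # The error function as criterion between the material and steel.
        return np.sum((material_model(params) - steel) ** 2)

    # random() as input: random search toward steel-like properties.
    best = rng.random(2)
    for _ in range(1000):
        candidate = rng.random(2)
        if error(candidate) < error(best):
            best = candidate

    print(best, material_model(best))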
For example, choose model() to represent A and b in Ax = b. That is, A, b = model(…). Then use a proper algorithm to calculate x = np.linalg.solve(A, b). The x here is the target value, which also has some loss.
The idea is that you filter the iteration so it always contains some truth. You denoise it, if you will. The machine learning model cannot do all the calculations. It's not intelligent.
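A minimal sketch of that loop. The model() below is a made-up stand-in and the training step is omitted; the point is only that the exact algorithm, not the model, computes x:

    import numpy as np

    rng = np.random.default_rng(0)
    x_target = np.array([1.0, 2.0, 3.0])    # made-up target for x

    def model(z):
        # Hypothetical stand-in for the learned model: it outputs A and b.
        A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
        b = rng.normal(size=3)
        return A, b

    for step in range(5):
        A, b = model(None)                   # the model proposes A and b
        x = np.linalg.solve(A, b)            # the exact algorithm does the solving
        loss = np.mean((x - x_target) ** 2)  # the loss lives on x, not on A or b
        print(step, loss)                    # a real setup would update the model here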
I think this can be used with differential equations as well. Just let the model represent a system of linear differential equations, then solve the system with a known algorithm. So the algorithmic solve is done at every iteration.
So the idea is to use known algorithms together with the machine learning model: a Linear System Model Of Machine Learning.
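The same pattern sketched for a linear ODE system x' = Ax, with a made-up model supplying A and SciPy doing the algorithmic solve:

    import numpy as np
    from scipy.integrate import solve_ivp

    def model():
        # Hypothetical stand-in: the model proposes the system matrix A.
        return np.array([[0.0, 1.0],
                         [-1.0, -0.1]])   # a damped oscillator, as an example

    A = model()
    sol = solve_ivp(lambda t, x: A @ x,   # known algorithm solves x' = A x
                    t_span=(0.0, 10.0),
                    y0=[1.0, 0.0],
                    t_eval=np.linspace(0.0, 10.0, 5))
    print(sol.y)                          # trajectory; compare to data for a loss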
Very often the physics simulation in Blender produces a bouncing result: pieces fly in all directions. So I wonder if this could be a balancing problem. If so, machine learning has solved similar problems.
Then, to improve the solutions, I think something like denoise_function(solution.reshape(biggest_rectangular_shape), weight=0.0001).reshape(org_shape) could be used inside the machine learning loop.
So the idea is to improve the physics simulation in Blender.
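One possible concrete reading of that snippet, with skimage's denoise_tv_chambolle standing in for the hypothetical denoise_function and all shapes made up:

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    solution = np.random.default_rng(0).normal(size=60)  # flat simulation state
    org_shape = solution.shape
    biggest_rectangular_shape = (6, 10)   # my assumption: a 2D view of the state

    # Denoise the solution inside the loop, then restore its original shape.
    smoothed = denoise_tv_chambolle(
        solution.reshape(biggest_rectangular_shape), weight=0.0001
    ).reshape(org_shape)
    print(smoothed.shape)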
One ?general way to filter truth is to see if it's beneficial. Apply this to the repeating decimal 0.999… and it becomes a more precise truth function. That is, you have answer = model(0.999…). Here a machine learning model will give you a network truth.
“0.999… equals 1” is True when it's beneficial and False when it's not beneficial.
Freshwater rain is ?meant, I guess, for life to exist. Following this guess, I wonder if a decision network for rain would not want to waste freshwater when conditions are not right, for example if the ground or lake is too salty or too polluted.
That is, if conditions are so bad that it makes no sense to rain freshwater there, it probably won't.
But there is a way: people could improve the conditions, for example with some alternative to covering the ground with 100% pavement.
If the physical network ?calculates, by heat radiation or something else, that conditions are getting better, that the derivative is on a good path and plants are growing, then I think this would increase the chances of rain.
For instance, I think we should instate simulated rainforests in urban areas with manual irrigation. The humidity from many such places would be a very positive change.
If something is complex enough, assume a network. Here I speculate about the position of an object. In machine learning terms this position ?would be a classification.
That is, you have a set of integers as the grid points. Then you get a position grid by one-hot encoding it, like [1,0,0,0,0,0,0,0,0,0]. Here the object starts at index zero. Then, when you calculate a new position with pos_x = model(…), you could get something like [0.1,0.8,0.1,0,0,0,0,0,0,0]. I assume this can reflect the probability of the next position. I also assume the misclassifications are important.
If the universe is taking advantage of networks, then this would be a general misclassification problem. But then again, why not take advantage of the 0.1 misclassification? I wonder if this could be a way to get continuous movement. That is, you have a large but limited number of grid points and just probability in between.
So the probability is tied to the grid points. At each iteration the probability of the object's position changes. A goal of the universe would then be to control the speed of the probability, which is the speed of the object.
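A small numeric sketch of this idea. Reading the expectation over the grid points as the continuous position is my own assumption:

    import numpy as np

    grid = np.arange(10)                      # the integer grid points
    start = np.zeros(10)
    start[0] = 1.0                            # one-hot: object at index zero

    # A model step might return this instead of a clean one-hot:
    probs = np.array([0.1, 0.8, 0.1, 0, 0, 0, 0, 0, 0, 0.0])

    # Taking advantage of the 0.1 "misclassification": the expectation over
    # the grid gives a continuous position in between the grid points.
    pos = float(probs @ grid)                 # 0.1*0 + 0.8*1 + 0.1*2 = 1.0

    # The next iteration shifts the probability mass; the change in the
    # expectation per iteration is the speed of the object.
    probs_next = np.array([0, 0.1, 0.8, 0.1, 0, 0, 0, 0, 0, 0.0])
    speed = float(probs_next @ grid) - pos    # 2.0 - 1.0 = 1.0 per iteration
    print(pos, speed)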