When iterating a machine learning model, model( rnge() ), you have a small input range, like rnge() = np.array([1, 0]). The iteration then lets the model adapt from random weights toward the loss function. Similarly, I imagine that the components in a battery need to adapt to their surroundings, like the ions' paths through the battery. I ?think it is better to use the ions' own energy, and the heat energy (random) as little as possible. I guess relying on heat could cause low efficiency in the iterations the ions need to find their paths.
So my guess is that some carefully crafted random secondary helper voltage could help, sharing energy for a faster iteration.
As an experiment I thought I would limit a range from which I randomly sampled numbers, to try to solve a function or an equation = 0. I set it up so that it counted np.count_nonzero(np.abs(f(x)) < tolerance) / 1000 for 1000 random samples, and compared that hit fraction with a target number. The target started at 0.001 in the first iteration and rose toward 1.0.
So in the beginning the tolerance was large, to get some hits to work on. Then as the iteration continued, the range got smarter, or less wide. In the end, the range was the solution.
I did not get the method to be particularly exact or fast.
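A minimal sketch of the experiment, with a concrete equation x**2 - 2 = 0 chosen for illustration. I halve the tolerance each round instead of raising a hit-fraction target, but the core idea is the same: the sampling range shrinks to the hits until the range itself is the answer. The name range_solver and its parameters are my own placeholders, not from the original experiment.

```python
import numpy as np

def range_solver(f, lo, hi, iters=40, n=1000, seed=0):
    """Shrink a sampling range until it brackets a root of f.

    Start with a loose tolerance so some samples count as hits,
    then narrow the range to the hits and tighten the tolerance.
    """
    rng = np.random.default_rng(seed)
    tol = abs(f(lo)) + abs(f(hi))        # loose to begin with
    for _ in range(iters):
        x = rng.uniform(lo, hi, n)
        hits = x[np.abs(f(x)) < tol]
        if hits.size:                    # narrow the range to the hits
            lo, hi = hits.min(), hits.max()
            tol *= 0.5                   # demand better hits next round
    return (lo + hi) / 2                 # the range is the solution

root = range_solver(lambda x: x ** 2 - 2.0, 0.0, 2.0)
```

As the post says, this is neither very exact nor very fast compared with a deterministic method like bisection, but it does converge because roughly half the samples keep landing inside the tightened tolerance band each round.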
I guess problems in physics resemble machine learning somehow. If there are at least three layers in a physical process, then iterating, or letting the layers adapt, in a sequential order is probably easier than running them in parallel. Otherwise each layer would adapt or update itself without knowing the outcome from the other layers, resulting in oscillation and ?energy losses.
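A toy version of this claim, assuming nothing about physics: let each component of a strongly coupled linear system adapt either in parallel, from stale values, or sequentially, seeing the others' latest values. This is just the classic Jacobi versus Gauss-Seidel iteration, used here as a stand-in for the layers.

```python
import numpy as np

# Strongly coupled symmetric system: parallel updates oscillate and
# diverge, sequential updates settle down (Jacobi vs Gauss-Seidel).
A = np.array([[1.0, 0.7, 0.7],
              [0.7, 1.0, 0.7],
              [0.7, 0.7, 1.0]])
b = np.array([1.0, 2.0, 3.0])

x_par = np.zeros(3)   # all components updated at once, from stale values
x_seq = np.zeros(3)   # components updated one by one, from latest values
off = A - np.diag(np.diag(A))
for _ in range(200):
    x_par = (b - off @ x_par) / np.diag(A)
    for i in range(3):
        x_seq[i] = (b[i] - A[i] @ x_seq + A[i, i] * x_seq[i]) / A[i, i]

exact = np.linalg.solve(A, b)
```

With this coupling the parallel iteration has a mode that flips sign and grows every step, exactly the oscillation described above, while the sequential sweep converges.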
If there exists a model( H2O + salt ) as the water network, then any change would ?have to be iterated in. I guess you could then GAN the network (feed it random input to get misclassifications): break the network a little to activate the release-salt classification method in the molecule.
Voila and you have a super desalination method inspired by machine learning.
With this algorithm you let the three-layer model() adapt its initial layers to the last layer with holes in it. By the last layer I mean the last big weight vector. This way you don't need to store the whole last layer, which is of equal size to that of the image. Then just iterate the image with x = model( rnge() ), where x is the image.
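A minimal numpy sketch of the x = model( rnge() ) idea, with made-up sizes: fit a small two-weight-layer model so that a fixed random input reproduces a target "image" vector, then regenerate the image by running the model instead of storing it. Whether this actually saves memory depends entirely on the parameter counts; the sketch only shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(16)              # stand-in for a flattened image
z = rng.random(4)                    # small fixed input, like rnge()

# two weight matrices, i.e. three layers of activations
W1 = 0.5 * rng.standard_normal((8, 4))
W2 = 0.5 * rng.standard_normal((16, 8))

lr = 0.1
for _ in range(3000):
    h = np.tanh(W1 @ z)              # hidden layer
    x = W2 @ h                       # reconstructed "image"
    err = x - target
    # hand-written gradients of 0.5 * ||x - target||**2
    gh = W2.T @ err
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(gh * (1.0 - h ** 2), z)

x = W2 @ np.tanh(W1 @ z)             # the image, regenerated on demand
```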
I think my machine learning methods suffer from a lack of deterministic algorithms to stabilize the solutions. For instance, when I have a layer that is smaller than the previous one, as in almost all cases, I think a resize algorithm could be smart. I will check if I get as many oscillations as I used to.
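One simple deterministic choice for connecting layers of different sizes is a fixed linear resize; this is my own placeholder for the resize algorithm, not something specified above:

```python
import numpy as np

def resize(v, n):
    """Deterministically resize a 1-D activation vector to length n
    by linear interpolation; no learned weights, so nothing to oscillate."""
    old = np.linspace(0.0, 1.0, len(v))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, v)

small = resize(np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]), 4)
```

Note that this just samples the linear interpolant; for a smoother size reduction you could average neighbouring values instead.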
Availability of good choices is the recipe for sustainable peace. Everyone should have them. Therefore I think: why not get the homeless in Russia and elsewhere into farming? I mean, vegetables are on the rise. McDonald's has vegetarian meals.
This gave me an idea. Why not have a veggie food contract with previously homeless or poverty-stricken people growing vegetables?
I wonder: how do you split the input for a given output? Since the deep learning algorithm samples all inputs, there should exist situations where there are coincidences in what the algorithm changed a weight number for.
Then how do you model coincidences?
I usually iterate with model([0,1]) and let the model adapt, but somehow I think iterating with random input could work for something.
A wild guess, for batteries, inspired by machine learning: if you don't have enough of a certain metal, would it be possible to run the process in a loop, like a compression/decompression cycle? You use the same bit of something many times over.