I guess the brain does not store a lot of hard-coded numbers, so why should we? Assigning 1s to biases seems wasteful.
The philosophy of machine learning could perhaps be to take advantage of everything. So from this I'm going to test what predictive biases, or state-converged biases, could do. That is: y, bias = model(x, bias).
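As a minimal sketch of what I mean, here is a toy formulation (all names and the tanh squashing are my own assumptions, not a fixed recipe): instead of a hard-coded bias of 1, the model also outputs an updated bias that is fed back in on the next call, so the bias can converge to a state.

```python
import numpy as np

# Toy "stateful bias": the model returns both a prediction and the next
# bias state, so the bias is learned/converged rather than hard-coded.
rng = np.random.default_rng(0)
W_y = rng.normal(size=(1, 3))   # weights for the output
W_b = rng.normal(size=(1, 3))   # weights that update the bias state

def model(x, bias):
    """Return (y, new_bias): the prediction and the next bias state."""
    y = W_y @ x + bias
    new_bias = np.tanh(W_b @ x + bias)  # squash so the state stays bounded
    return y, new_bias

bias = np.zeros(1)
for _ in range(5):              # iterate until the bias settles
    x = np.ones(3)
    y, bias = model(x, bias)
print(y, bias)
```

Whether the converged bias actually helps prediction is exactly what would need testing.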
Why does an image of a glass ball on a surface look the way it does?
For every complex question there is the network answer: it makes sense to a network.
So I will test whether I can compute a simple raytracing scenario using machine learning. The network looks as follows: the atoms are my weights, with their parameters.
The idea is to put the output image as an internal layer just before the output-layer neuron. That last output neuron is then a truth value.
For the truth I will have to experiment a little, but to simplify: all light from the emission input image should correspond to the value in the output neuron. Here I will test sum(input pixel energy) = sum(output image pixel energy). Since the internal image layer is bigger than the input image, the energy will have to be distributed. Another truth is that I know the index of refraction of my object, so some of my parameter values for the image layer are already given: air and glass.
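This energy truth can be written as a simple penalty. A hedged sketch, with my own placeholder names (`emission`, `internal`), not a real library API:

```python
import numpy as np

# "Energy truth" from the text: total pixel energy in the emission input
# should equal the total energy in the (larger) internal image layer.
def energy_truth_loss(emission_img, internal_img):
    """Squared mismatch between input and internal-layer energy sums."""
    return (emission_img.sum() - internal_img.sum()) ** 2

emission = np.full((8, 8), 0.5)       # small input image, energy = 32
internal = np.full((16, 16), 0.125)   # bigger layer, same total energy
print(energy_truth_loss(emission, internal))  # 0.0: energy is conserved
```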
Further, if I put a "circular" layer around the raytracing network, maybe I can use that for a similar truth calculation.
A truth calculation just means that you know the output for a given input. So an all-black input should give an all-black, or zero, output.
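The black-input truth is easy to check. As a sketch (the linear, bias-free model here is my own minimal example, chosen because it satisfies the truth by construction):

```python
import numpy as np

# "Black input" truth: a zero image must map to zero output.
# A linear layer with no bias term passes this automatically.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))

def model(img):
    return W @ img.ravel()

black = np.zeros((4, 4))
print(np.allclose(model(black), 0.0))  # True: no bias, black stays black
```

Any model with nonzero biases would need this truth enforced as a training constraint instead.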
From the last image, the spherical surrounding space layer, I guess that if the energy is distributed over an infinite number of neurons, then the energy on each neuron would go to zero: an equal split.
But the sum must still equal the input energy, so here you get another truth, maybe.
If there is input energy above zero, then the emission layer, depending on its starting position (some distance from the center point), will produce a distribution of that energy over the outer capture-layer neurons. Perhaps something like a normal distribution.
Hmm, if you place the object at the center point it could cause a problem: an equal distribution everywhere. So I wonder if you can take a second outer layer and generate some difference, like two eyes separated from each other. Here you have two separate outer capture layers some random distance apart.
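A speculative sketch of that capture-layer idea (all names and the Gaussian profile are my guesses at the text's "normal distribution maybe"): energy emitted off-center lands on a ring of capture neurons with a bell-shaped profile, while the total captured energy still sums to the input energy, the conservation truth from before.

```python
import numpy as np

# Ring of capture neurons; an off-center source spreads its unit of
# energy as a bell curve around its angle, normalized so energy is kept.
n_capture = 64
angles = np.linspace(-np.pi, np.pi, n_capture, endpoint=False)

def capture_profile(offset_angle, spread=0.5):
    """Gaussian-ish energy split over the capture ring, summing to 1."""
    d = np.angle(np.exp(1j * (angles - offset_angle)))  # wrapped distance
    w = np.exp(-0.5 * (d / spread) ** 2)
    return w / w.sum()                                  # energy conserved

profile = capture_profile(0.8)
print(round(profile.sum(), 6))  # 1.0
```

With the object dead center the profile becomes the same from every angle, which is the "equal distribution" problem; a second ring at a different radius would break that symmetry.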
I was thinking: if you calculate the model with an implicit error function, then it is possible to get complex numbers. Complex numbers, however, provide additional svängrum (Swedish for room to maneuver, i.e. rotational freedom) for the model.
So I made the implicit error function give two solutions where it could yield a complex number. It turned out the solution gave a double root, that is, two equal complex numbers. Then all I had to do was take the absolute value (the modulus) of the number.
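A sketch of that trick as I read it (the quadratic here is an invented example, not the author's actual error function): an implicit equation can yield a quadratic whose roots are complex; when the two roots agree in modulus, taking the absolute value collapses them to one real prediction.

```python
import numpy as np

# Implicit equation reduced to a quadratic with negative discriminant:
# x**2 - 2x + 2 = 0 has the complex conjugate roots 1 + 1j and 1 - 1j.
coeffs = [1.0, -2.0, 2.0]
roots = np.roots(coeffs)

# Both roots share the same modulus sqrt(2), so abs() gives one real value.
print(np.abs(roots))
```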
I'm currently doing some calculations on weather data, just a time series.
I thought I'd get inspired by an equation:
dataIn - dataOut = dataModel
After some guessing I get the error equation. Here dataIn, dataOut, ... are >= 0, scaled from 0 to 1. I make a somewhat odd-looking equation: sqrt(dataIn) - sqrt(dataOut) = sqrt(dataModel), even though it does not follow from the equation above.
Squaring both sides gives dataIn + dataOut - 2*((dataIn*dataOut)**0.5) = dataModel, so that left-hand side is my target value for the model function at a given dataIn. I use this form because it is always non-negative; otherwise I could get 'nan' where the model update makes dataModel negative.
So, iterating through all my dataIn(i), dataOut(i) pairs, I get the parameters of my model.
Then, to predict a value for a given x as dataIn, I input that x into my model (dataIn = x) and get the equation:
x + dataOut - 2*((x*dataOut)**0.5) = model(x)
From this equation I solve for the possible dataOut values; one of them is the predicted value.
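A hedged worked example of the whole scheme. The polynomial fit and the synthetic dataOut relation below are my stand-ins for whatever model and data the author uses; the solving step follows from the algebra: since x + y - 2*sqrt(x*y) = (sqrt(x) - sqrt(y))**2 = m(x), we get sqrt(y) = sqrt(x) -+ sqrt(m(x)), i.e. two candidate values of dataOut.

```python
import numpy as np

# Synthetic stand-in data: dataOut is a made-up function of dataIn.
rng = np.random.default_rng(2)
data_in = rng.uniform(0.1, 1.0, 200)
data_out = np.clip(0.8 * data_in + 0.05, 0.0, 1.0)

# Target from the text: always >= 0 since it equals (sqrt(in)-sqrt(out))**2,
# which avoids the 'nan' problem of a negative dataModel.
target = data_in + data_out - 2.0 * np.sqrt(data_in * data_out)
coeffs = np.polyfit(data_in, target, deg=3)   # toy model(x) := cubic fit

def predict(x):
    """Solve x + y - 2*sqrt(x*y) = model(x) for the candidate y values."""
    m = max(np.polyval(coeffs, x), 0.0)       # clamp tiny negative fits
    # (sqrt(x) - sqrt(y))**2 = m  =>  sqrt(y) = sqrt(x) -+ sqrt(m)
    return [(np.sqrt(x) - np.sqrt(m)) ** 2,
            (np.sqrt(x) + np.sqrt(m)) ** 2]

print(predict(0.5))   # two candidates; the true dataOut here is 0.45
```

Note the square root loses the sign, which is why the equation returns two candidate dataOut values and some extra rule is needed to pick one.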
As a way to make money on online magazines and newspapers, I wonder if innovation can help. What about coding, in particular machine learning?
In machine learning it's not that hard to get results. The code is short. The first time it might not be so great, but with imagination you get better. And when you do find the right system and model, it's instant satisfaction.
So for course assignments you have input data, training data, and test data at hand. It's a rewarding process of finding the model parameters. Perfect for learning how to code.
So why is this important? Learning to code is only as important as your ability to imagine the possibilities of machine learning. It's not about robots. It's about engineering and physics, and our future ability to tackle climate change.