Monthly Archives: January 2018

Machine Learning Insight – Every Parameter Is A Network Including The Bias

The idea is simple.

I guess the brain does not store a lot of hard-coded numbers, so why should we? Assigning constant 1's to biases seems wasteful.

The philosophy of machine learning could perhaps be to take advantage of everything. From this I'm going to test what predicted biases, or state-converged biases, could do. That is, y, bias = model(x, bias).
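A minimal sketch of the idea, assuming PyTorch (the layer sizes and the tanh squashing are my own assumptions, just for illustration): the bias is not a stored constant but comes out of a small subnetwork, and it is fed back in so it can converge to a state.

import torch
import torch.nn as nn

class PredictedBiasLinear(nn.Module):
    # A linear layer whose bias is predicted, not hard-coded.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Linear(in_features, out_features, bias=False)
        # Subnetwork that predicts the bias from the input and the old bias.
        self.bias_net = nn.Linear(in_features + out_features, out_features)

    def forward(self, x, bias):
        # y, bias = model(x, bias): the bias is part of the state.
        new_bias = torch.tanh(self.bias_net(torch.cat([x, bias], dim=-1)))
        y = self.weight(x) + new_bias
        return y, new_bias

layer = PredictedBiasLinear(4, 2)
x = torch.randn(8, 4)
bias = torch.zeros(8, 2)
for _ in range(3):  # iterate toward a state-converged bias
    y, bias = layer(x, bias)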

Product Idea – Turning The Screen 180 Degrees For A Drawing Tablet Setup

The idea is simple.

Why not take advantage of laptops whose screens fold back 180 degrees? Lay the screen flat and use it as an overlay display for a drawing tablet. Quite a cool idea.

This makes a cheap drawing tablet feel professional. It works fine with Linux Budgie; just rotate the screen 180 degrees in the Nvidia settings or similar.

So if you have an old laptop you don't use anymore, this could bring some joy.

Image: drawing setup with a laptop turned 180 degrees.

Math Idea – Mandelbrot Fractal With Recursive c

I was wondering if I could use machine learning with fractals.

So inspired by this I reused the c0 in z(i+1) = z(i)**2 + c0. Changing c0 for every iteration with some rule, so the recursion becomes z(i+1) = z(i)**2 + c(i), seems to split the set.

I was thinking maybe there exists a network model function for c0 with which you could adapt the set or image further.
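As a starting point, before any learned rule for c, here is a minimal numpy sketch of the recursion z(i+1) = z(i)**2 + c(i). The update rule for c, a small rotation each step, is my own guess for illustration, not a learned model.

import numpy as np

def recursive_c_fractal(width=800, height=600, max_iter=100):
    # Grid of starting values c0 over the complex plane.
    re = np.linspace(-2.0, 1.0, width)
    im = np.linspace(-1.2, 1.2, height)
    c = re[None, :] + 1j * im[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0  # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        # Rule for c(i): rotate it a little every iteration (an assumption).
        c[mask] = c[mask] * np.exp(1j * 0.05)
        counts[mask] = i
    return counts

Plotting counts with, for example, matplotlib's imshow gives the image.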

After some guessing at rules I came up with the “Linux Penguin Light Saber Fractal”.

Enjoy


Linux Penguin Light Saber Fractal // Per Lindholm 2018-01-10

Machine Learning – Is Image Raytracing Using Machine Learning Possible?

I start with a question.

Why does an image of a glass ball on a surface look the way it does?

To every complex question there is a network answer: it makes sense to a network.

So I will test if I can calculate a simple raytracing scenario using machine learning. The network looks as follows: the atoms are my weights, each with its parameters.

The idea is to put the output image as an internal layer just before the output neuron. The last output neuron then holds a truth value.

For the truth I will have to test a little, but to simplify: all light from the emission input image should correspond to the value in the output neuron. Here I will test sum(input pixel energy) = sum(output image pixel energy). The internal image layer is bigger than the input image, so the energy will have to be distributed. Another truth is that I know the index of refraction for my object, so some of my parameter values are already given for the parts of my image layer in air and in the glass object.
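A minimal sketch of this setup, assuming PyTorch; the layer sizes and activations are my own assumptions for illustration.

import torch
import torch.nn as nn

class RaytraceNet(nn.Module):
    # Emission image -> bigger internal image layer -> one truth neuron.
    def __init__(self, in_pixels=16 * 16, image_pixels=32 * 32):
        super().__init__()
        self.to_image = nn.Sequential(
            nn.Linear(in_pixels, image_pixels),
            nn.ReLU(),  # internal image layer, non-negative pixel energies
        )
        self.to_truth = nn.Linear(image_pixels, 1)  # the last truth neuron

    def forward(self, emission):
        image = self.to_image(emission)
        truth = self.to_truth(image)
        return image, truth

net = RaytraceNet()
emission = torch.rand(8, 16 * 16)
image, truth = net(emission)

# The energy truth: the distributed energy in the bigger internal image
# layer should sum to the input energy.
energy_loss = ((image.sum(dim=1) - emission.sum(dim=1)) ** 2).mean()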

Further, if I put a “circular” capture layer around the raytracing network, maybe I can use that for a similar truth calculation.

A truth calculation just means that you know the output for a given input. So an all-black input should give an all-black, or zero, output.

From the last image, of the spherical surrounding space layer: I guess that if the energy were distributed over an infinite number of neurons, then the energy on each neuron would go to zero, some equal split.

But the sum should equal the input energy, so here you maybe get another truth.

If there exists input energy above zero, then the emission layer, depending on its starting position some distance from the center point, will produce a distribution of that energy over the outer capture layer neurons. Like a normal distribution, maybe.

Hmm, if you place the object at the center point it could cause a problem: an equal distribution. So I wonder if you can take a second outer layer and generate some difference, like two eyes separated from each other. Here you would have two separate outer capture layers some random distance apart.


Machine Learning Idea – The Light Sigmoid Or Dual Linearity Function

I was looking into my network theory when it occurred to me that the sigmoid activation function bears a resemblance to the trajectory of light entering water.

So for the universe system to calculate what happens inside the material, it perhaps uses a dual linear function as an output-changing function: two straight-line pieces with different slopes, like a ray bending at the surface.

So I will test this light activation function, with one or two parameters for the angles, in the machine learning network model.
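A minimal sketch of such an activation, assuming PyTorch: two learnable slopes, one on each side of zero, standing in for the two angles. The starting values are assumptions.

import torch
import torch.nn as nn

class DualLinear(nn.Module):
    # Dual linearity: two linear pieces meeting at zero, like a light ray
    # bending at an air/water interface.
    def __init__(self):
        super().__init__()
        self.slope_neg = nn.Parameter(torch.tensor(0.5))  # "angle" below zero
        self.slope_pos = nn.Parameter(torch.tensor(1.0))  # "angle" above zero

    def forward(self, x):
        return torch.where(x < 0, self.slope_neg * x, self.slope_pos * x)

act = DualLinear()
print(act(torch.linspace(-2.0, 2.0, 5)))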

Image: the dual linearity function.

Machine Learning Idea – Calculating With Complex Roots

I was thinking: if you calculate the model with an implicit error function, then it's possible to get complex numbers. However, complex numbers provide additional “svängrum” (Swedish for room to maneuver; here, rotational room) for the model.

So I made the implicit error function give two solutions where it could produce a complex number. It turned out the solution gave a double root, that is, two equal complex numbers. Then all I had to do was take the absolute length of the number.
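A minimal numpy sketch of how this plays out for the implicit error equation from the post below, with sample numbers that are my own assumptions. When the model value m is negative, the quadratic for sqrt(dataOut) gets two complex roots of equal absolute length.

import numpy as np

x, m = 0.5, -0.1  # sample dataIn and model value (assumptions)

# x + dataOut - 2*sqrt(x*dataOut) = m, with s = sqrt(dataOut), becomes
# s**2 - 2*sqrt(x)*s + (x - m) = 0.
roots = np.roots([1.0, -2.0 * np.sqrt(x), x - m])
print(roots)  # complex conjugates when m < 0
print(np.abs(roots) ** 2)  # equal absolute lengths -> one real dataOut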

Machine Learning Idea – Implicit Error Function

I’m currently doing some calculations on weather data, just a time series.

I thought I'd get inspired by an equation.

dataIn - dataOut = dataModel

After some guessing I get the error equation. Here dataIn, dataOut, … are >= 0; they are scaled from 0 to 1. I make a somewhat odd-looking equation, sqrt(dataIn) - sqrt(dataOut) = sqrt(dataModel), even though it does not follow from the equation above. Squaring both sides gives:

dataIn + dataOut - 2*((dataIn*dataOut)**0.5) = dataModel(dataIn)

Here dataIn + dataOut - 2*((dataIn*dataOut)**0.5) is my target value for my model function for a given dataIn. This is because I could otherwise get ‘nan’ when the model update goes negative for dataModel values and the square root is taken.

So, iterating through all my dataIn(i), dataOut(i) values, I get the parameters for my model.

Then, to predict a value for a given value x as dataIn, I input that x into my model as model(dataIn = x) and get an equation:

x + dataOut - 2*((x*dataOut)**0.5) = model(x)

From this equation I solve for the possible dataOut values and take one of them as the predicted value.
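A minimal sketch of the whole procedure in numpy, on synthetic data standing in for the weather series; the polynomial stand-in for the model and all sample values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
data_in = rng.random(200)  # scaled to [0, 1]
data_out = np.clip(0.8 * data_in + 0.1 * rng.random(200), 0.0, 1.0)

# Implicit target: always >= 0, so the square roots never give nan.
target = data_in + data_out - 2.0 * np.sqrt(data_in * data_out)

# Toy stand-in for the model: fit a low-degree polynomial to the target.
coeffs = np.polyfit(data_in, target, deg=3)

def predict(x):
    m = np.polyval(coeffs, x)
    # Solve x + dataOut - 2*sqrt(x*dataOut) = m via s = sqrt(dataOut),
    # taking absolute lengths in case the roots come out complex.
    roots = np.roots([1.0, -2.0 * np.sqrt(x), x - m])
    return np.abs(roots) ** 2  # two candidate dataOut values

print(predict(0.5))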


Image: an interesting error plot.

Some sort of implicit error function. It bounced?

Image: implicit error function loss.

Newspaper And Magazine Innovation – Include Python Coding Courses In Machine Learning?

As a way to make money on online magazines and newspapers, I wonder if innovation can help. What about coding? In particular, machine learning.

In machine learning it's not that hard to get results. The code is short. The first time it might not be so great, but with imagination you get better. When you do find the right system and model, it's instant satisfaction.

So for course assignments you have input data, training data, and test data at hand. Finding the model parameters is a rewarding process, perfect for learning how to code.

So why is this important? Learning to code is only as important as your ability to imagine the possibilities of machine learning. It's not about robots. It's about engineering and physics and our future ability to tackle climate change.