Author Archives: Per Lindholm

Livelihood Idea – Make Your Own IT Programming Courses On Facebook As Part Of A Livelihood

I think this idea could be really big.

As a livelihood idea, in short: I wonder if you could make some money from creating your own Python programming courses on Facebook. Access to Facebook is universal, and many people already own a smartphone or computer.

On a smartphone you can do courses with a Bluetooth keyboard, as a customer or as a creator. A full-size keyboard is best. Just place the phone closest to you and the keyboard above it.

Some courses I had in mind were Lua programming on Lua fantasy computers like TIC-80, maybe. Another would be a course using a Bluetooth MIDI keyboard together with the phone.

Further, if anyone is interested, I believe there could be an Android smartphone programming community with a Facebook page for the latest updates, how-tos and more.

Study And Learn Idea – Study By Imagination

When you are studying for a test and doing a lot of exercises, why not also train your imagination?

The idea goes as follows. Take an exercise and, before you attempt it, make one or two new exercises from your imagination, with the target exercise as your guide. Here you can transform the exercise into a much easier variant or a slightly harder one.

After solving your own exercises first, you will have trained your imagination. The target exercise is then meant for verification.

Physics Speculation – Is Gravity Recurrent Information In The Machine Learning Sense?

Inspired by recurrent machine learning networks, where you can reuse output data as input, I wonder if this could reveal some information about gravity.

There is a similarity in that gravity is a force that is both an output from and an input to other weights.

In my experience the recurrent data in machine learning converges to a state pretty quickly. So suppose the universe calculates its solutions according to a network theory.

Then I guess the universe's positions of the weights are calculated by this recurrent gravity information force acting as both input and output.

So when different weights come in range of each other, the gravity information needs to converge again.

So with this you get smart positions: the universe's positions of the weights and their surrounding close weights.
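
A minimal sketch of the convergence claim, with toy weights entirely of my own choosing: feed a network's output back in as its input and the state settles quickly.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(8, 8))  # recurrent "gravity" weights (toy values)
b = rng.normal(scale=0.1, size=8)

x = rng.normal(size=8)                  # random starting state
for step in range(1000):
    x_next = np.tanh(W @ x + b)         # output data reused as input
    if np.linalg.norm(x_next - x) < 1e-9:
        break
    x = x_next

print("converged after", step, "steps")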


Machine Learning Idea – Take Advantage Of The Reverse Relationship Between Verification And Prediction?

“The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified (technically, verified in polynomial time) can also be solved quickly (again, in polynomial time).” (P versus NP problem – Wikipedia)

https://en.wikipedia.org/wiki/P_versus_NP_problem

For predictions I can see a similarity to this P versus NP problem. The idea is simple.

If you can picture an area where the solution might be confined, then you can quickly test whether any of those small predictions or points are valid.

It should be easier to find probable wrongs in your initial guess than to find the perfect solution right away. This way you can program an algorithm to sort out the initial guesses that are not so good. What is left are some strong candidates.
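
A small sketch of such an algorithm, where the function and the confinement area are invented for illustration: generate many guesses, keep only those that pass a quick validity test.

import numpy as np

def residual(x):
    return abs(x**3 - 2*x - 5)          # how wrong a guess is; cheap to evaluate

rng = np.random.default_rng(1)
candidates = rng.uniform(-5, 5, size=10_000)   # guesses confined to an area

# Quick verification: discard guesses that are probably wrong.
strong = [x for x in candidates if residual(x) < 1.0]
best = min(strong, key=residual)
print(len(strong), "strong candidates remain, best ~", round(best, 4))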

Machine Learning Idea – Draw Functions?

I was wondering: if it's easier to draw by hand with a little random motion, is it also easier for a network model to fit a curve, or to find a “solution”, if it outputs a little randomly and at a slightly random process time? Just like drawing.

I think so, but it needs to be tested.

So the idea is to develop draw functions. They are built up from nonlinear pieces of curved line in randomly close proximity to the solution.

The idea is then to let the engineer, or another network, recognize the solution from the many line draws made by the algorithm. Here the lines are superimposed, not drawn like a spline.
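
A rough sketch of such a draw function, with sin(x) standing in as the unknown solution: many short, slightly random strokes are superimposed rather than one exact spline being drawn.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
solution = np.sin                        # the curve the strokes should hint at

for _ in range(40):                      # many superimposed line draws
    start = rng.uniform(0, 2 * np.pi - 1.0)
    x = np.linspace(start, start + 1.0, 30)       # one short stroke
    y = solution(x) + rng.normal(scale=0.05)      # random offset per stroke
    y += rng.normal(scale=0.02, size=x.size)      # wobble along the stroke
    plt.plot(x, y, color="black", alpha=0.15)

plt.show()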

Worth a try anyway.

Apartment Innovation – Rented Apartment With Included Summer Or Winter Vacation Places?

What if the housing company provided the residents with optional summer or winter vacation places for short trips? Just as you share and book the laundry room, you could then book a stay at a joint-venture hotel.

If there is demand for these places then it serves a good purpose. No need to buy a hotel right away; cooperate with other hotels in a win-win situation. The hotel gets a steady stream of customers and the residents get an affordable and trusted place to stay.

Math Insight – What Are Complex Numbers In The Machine Learning Sense?

The idea is simple. Whenever you have a complex problem, assume a network.

One reason for the flexibility of complex numbers is their ability to change without changing their real(x) value.

From this I guess complex numbers are small networks: data and perhaps three activation functions.

So a + ib is the data, with abs(x), real(x) and imag(x) as the activation functions.

Complex numbers are then, in a sense, part of a larger set of numbers: the Network Numbers, where you have any number of activation functions over any amount and type of data.
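
A toy version of such a Network Number, where the class and its interface are my own invention: some data plus any number of activation functions, with a + ib falling out as the special case.

import numpy as np

class NetworkNumber:
    def __init__(self, data, activations):
        self.data = np.asarray(data, dtype=float)
        self.activations = activations            # any number of functions

    def read(self):
        # Apply every activation function to the underlying data.
        return {name: f(self.data) for name, f in self.activations.items()}

# The complex number 3 + 4i as data plus three activation functions.
z = NetworkNumber([3.0, 4.0], {
    "real": lambda d: d[0],
    "imag": lambda d: d[1],
    "abs":  lambda d: float(np.hypot(d[0], d[1])),
})
print(z.read())    # {'real': 3.0, 'imag': 4.0, 'abs': 5.0}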

Machine Learning Insight – Every Parameter Is A Network Including The Bias

The idea is simple.

I guess the brain does not store a lot of hard-coded numbers, so why should we? Assigning 1's to biases seems wasteful.

The philosophy of machine learning could perhaps be to take advantage of everything. So from this I'm going to test what predictive biases, or state-converged biases, could do. So y, bias = model(x, bias).
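
A rough sketch of what y, bias = model(x, bias) could mean, where the bias update rule is purely my assumption: the bias is state that gets fed back, not a stored constant.

import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(scale=0.3, size=(4, 4))
U = rng.normal(scale=0.1, size=(4, 4))   # governs how the bias updates itself

def model(x, bias):
    y = np.tanh(W @ x + bias)            # prediction uses the current bias
    bias = 0.9 * bias + 0.1 * (U @ y)    # bias moves toward a state of its own
    return y, bias

x = rng.normal(size=4)
bias = np.zeros(4)                       # no hard-coded 1's
for _ in range(50):
    y, bias = model(x, bias)
print("state-converged bias:", np.round(bias, 4))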

Product Idea – Turning The Screen 180 Degrees For A Drawing Tablet Setup

The idea is simple.

Why not take advantage of laptops whose screens turn 180 degrees? Then you can use the laptop as a tablet overlay display. Quite a cool idea.

This makes a cheap drawing tablet feel professional. It works fine with Linux Budgie; just rotate the screen in the Nvidia settings or similar.

So if you have an old laptop you don't use anymore, this could bring some joy.

[Image: drawing setup with a laptop turned 180 degrees]

Math Idea – Mandelbrot Fractal With Recursive c

I was wondering if I could use machine learning with fractals.

So, inspired by this, I reused the c0 in z(i+1) = z(i)**2 + c0. Changing c0 for every iteration with some rule seems to split the set.

I was thinking maybe there exists a network model function for c0 with which you can adapt the set or image further.
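
A minimal sketch of the recursive-c experiment; the per-iteration rule for c below is only an example of mine, not the one behind the image further down.

import numpy as np
import matplotlib.pyplot as plt

re = np.linspace(-2.0, 1.0, 600)
im = np.linspace(-1.5, 1.5, 600)
c = re[None, :] + 1j * im[:, None]       # the usual Mandelbrot c0
z = np.zeros_like(c)
counts = np.zeros(c.shape, dtype=int)

for i in range(60):
    mask = np.abs(z) <= 2.0              # points that have not escaped yet
    z[mask] = z[mask] ** 2 + c[mask]
    c[mask] = c[mask] * (0.999 + 0.001j) # change c every iteration by a rule
    counts[mask] = i

plt.imshow(counts, cmap="magma", extent=(-2, 1, -1.5, 1.5))
plt.show()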

After some guessing I came up with the “Linux Penguin Light Saber Fractal”.

Enjoy

[Image: Linux Penguin Light Saber Fractal // Per Lindholm 2018-01-10]

Machine Learning – Is Image Raytracing Using Machine Learning Possible?

I start with a question.

Why does an image of a glass ball on a surface look the way it does?

To every complex question there is the network answer. It makes sense to a network.

So I will test whether I can calculate a simple raytracing scenario using machine learning. The network looks as follows: the atoms are my weights, with their parameters.

The idea is to put the output image as an internal layer just before the output-layer neuron. The last output neuron is then a truth value.

For the truth I will have to test a little, but to simplify: all light from the emission input image should correspond to the value in the output neuron. Here I will test sum(input pixel value energy) = sum(output image pixel value energy). The internal image layer is bigger than the input image, so the energy will have to be distributed. Another truth is that I have the index of refraction for my object, so some of my parameter values are already given for my image layer in air and in the glass object.

Further, if I put a “circular” layer around the raytracing network, maybe I can use that for a similar truth calculation.

A truth calculation is simply one where you know the output for a given input. So an all-black input should give an all-black, or zero, output.
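
A small sketch of such truth calculations written as a loss term, with everything below being my own placeholder setup: penalize broken energy conservation and any light produced from an all-black input.

import numpy as np

def truth_loss(input_image, output_image):
    # Truth 1: emitted energy should equal captured energy.
    loss = (input_image.sum() - output_image.sum()) ** 2
    # Truth 2: an all-black input must give a zero output.
    if input_image.sum() == 0:
        loss += (output_image ** 2).sum()
    return loss

lit = np.random.default_rng(4).uniform(size=(8, 8))
dark = np.zeros((8, 8))
print(truth_loss(lit, lit))    # 0.0, energy conserved
print(truth_loss(dark, lit))   # large, black input produced light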

From the last image, of the spherical surrounding space layer, I guess that if the energy is to be distributed over an infinite number of neurons, then the energy on every neuron would go to zero. Some equal split.

But the sum is to be equal to the input energy, so here you maybe get another truth.

If there exists input energy above zero, then the emission layer, depending on the starting position some distance from the center point, will give a distribution of that energy over the outer capture layer neurons. Like a normal distribution, maybe.

Hmm, if you place the object at the center point it could cause a problem: equal distribution. So I wonder if you can take a second outer layer and generate some difference, like two eyes separated from each other. Here you have two separated outer capture layers some random distance apart.


Machine Learning Idea – The Light Sigmoid Or Dual Linearity Function

I was looking into my network theory when it occurred to me that the sigmoid activation function bears a resemblance to the trajectory of light in water.

So for the universe system to calculate what happens inside the material, it perhaps uses a dual linear function as an output-changing function.

So I will test the light activation function, with one or two parameters for the angles, in the machine learning network model.
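
A sketch of what such a light activation could look like, where the two slopes standing in for the angles are assumed parameters: one linearity above the surface and another below it.

import numpy as np

def dual_linear(x, slope_air=1.0, slope_water=0.75):
    # Two linear pieces meeting at x = 0, like a ray bending at a surface.
    return np.where(x >= 0, slope_air * x, slope_water * x)

print(dual_linear(np.linspace(-2, 2, 9)))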

[Image: dual linearity]

Machine Learning Idea – Calculating With Complex Roots

I was thinking: if you calculate the model with an implicit error function, then it's possible to get complex numbers. Complex numbers, however, provide additional 'svängrum' (rotational room) for the model.

So I made the implicit error function give two solutions where it could give a complex number. It turned out the solution gave a double root, that is, two equal complex numbers. Then all I had to do was take the absolute value of the number.
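
A tiny illustration of that case, with a made-up quadratic built to have a complex double root: both roots are the same complex number, and abs() turns them into a usable real value.

import numpy as np

# (s - (1 + 2j))**2 expanded: a quadratic whose two roots coincide.
roots = np.roots([1, -2 * (1 + 2j), (1 + 2j) ** 2])
print(roots)          # both roots ~ 1+2j, a double root
print(np.abs(roots))  # the absolute length of the number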

Machine Learning Idea – Implicit Error Function

I'm currently doing some calculations on weather data. Just a time series.

I thought I'd get inspired by an equation.

dataIn – dataOut = dataModel

After some guessing I get the error equation. Here dataIn, dataOut, … are >= 0; they are scaled from 0 to 1. I make a somewhat odd-looking equation, sqrt(dataIn) – sqrt(dataOut) = sqrt(dataModel), even though it does not follow from the equation above. Squaring both sides gives:

dataIn + dataOut – 2*((dataIn*dataOut)**.5) = dataModel(dataIn)

Here dataIn + dataOut – 2*((dataIn*dataOut)**0.5) is the target value for my model function for a given dataIn. This is because I could otherwise get 'nan' where the model update goes negative for dataModel values.

So, iterating through all my dataIn(i), dataOut(i) values, I get the parameters for my model.
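
A minimal version of that fitting step, where the synthetic data and the small polynomial model are stand-ins of mine:

import numpy as np

rng = np.random.default_rng(5)
dataIn = rng.uniform(0.05, 1.0, size=200)   # scaled series, values in (0, 1]
dataOut = np.clip(dataIn + rng.normal(scale=0.05, size=200), 0.0, 1.0)

# The target is always >= 0, being a square, so sqrt never produces 'nan'.
target = dataIn + dataOut - 2 * (dataIn * dataOut) ** 0.5
model = np.poly1d(np.polyfit(dataIn, target, deg=3))   # the model parameters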

Then, to predict a value for a given value x as dataIn, I input that x into my model(dataIn = x) and get an equation:

x + dataOut – 2*((x*dataOut)**.5) = model(x)

From this equation I solve for the possible dataOut values as the predicted values.
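
Continuing the sketch above: with s = sqrt(dataOut) the equation becomes (s - sqrt(x))**2 = model(x), so s = sqrt(x) ± sqrt(model(x)) and there are two candidate predictions. Where model(x) dips below zero the roots turn complex, and taking the absolute value is the trick from the complex-roots post above.

def predict(x):
    m = complex(model(x))                    # allow a negative model value
    s = np.sqrt(x) + np.array([1, -1]) * np.sqrt(m)
    return np.abs(s ** 2)                    # the two candidate dataOut values

print(predict(0.5))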


[Image: interesting error plot]

Some sort of implicit error function. It bounced?

[Image: implicit error function loss]

Newspaper And Magazine Innovation – Include Python Coding Courses In Machine Learning?

As a way to make money on online magazines and newspapers, I wonder if innovation can help. What about coding? In particular, machine learning.

In machine learning it's not that hard to get results. The code is short. The first time it might not be so great, but with imagination you get better. When you do find the right system and model, it's instant satisfaction.

So for course assignments you have input data, training data and test data at hand. It's a rewarding process of finding the model parameters. Perfect for learning how to code.
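
The sort of short, rewarding assignment I have in mind, with all of the data invented for illustration: given training and test data, find the model parameters and check them.

import numpy as np

rng = np.random.default_rng(6)
x_train = rng.uniform(0, 10, size=80)
y_train = 3.0 * x_train + 2.0 + rng.normal(size=80)   # noisy line
x_test = rng.uniform(0, 10, size=20)
y_test = 3.0 * x_test + 2.0 + rng.normal(size=20)

slope, intercept = np.polyfit(x_train, y_train, deg=1)  # the model parameters
mse = np.mean((slope * x_test + intercept - y_test) ** 2)
print("y =", round(slope, 2), "* x +", round(intercept, 2), "| test MSE:", round(mse, 3))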

So why is this important? Learning to code is only as important as your ability to imagine the possibilities of machine learning. It's not about robots. It's about engineering and physics and our future ability to tackle climate change.

Physical Network Desalination – Could You Take Advantage Of Freeze Filtering Salt Water Around Freshwater Ice?

My speculation is that the universe is optimized for life since living organisms could die otherwise.

From this I guess that saltwater can be desalinated to some extent through freezing around freshwater ice.

I guess it's possible to have a temperature difference between the freshwater ice, kept at around 0 °C, and the salt water, kept above roughly -1.8 °C.

The idea is to have a digitally controlled temperature filter for freshwater ice generation. Maybe you can shape the structure of the ice so that the generation speeds up, like an iceberg has a large understructure.

The network part is that I believe the freezer needs to be transparent to make the physical network convergence, the process of filtering out the salt, faster.

I wonder if this is why a snowflake looks the way it does. Does it form a light network to make itself more resilient? That could be an idea for a freezer: use light snow networks.

I think there is more to phases than their classifications. The phase network transformation function is perhaps the answer?

What if you could make freshwater with a controlled, solar-powered, transparent freezer? One which then sheds off freshwater ice, like a harvesting function.

Physics Network Speculation – Is Fusion Easier With A Cold Outer Space And Hot Plasma And A Window?

Following my physics network theory, a change of matter configuration would be similar to convergence in machine learning. My speculation is that the atom network updates depend on light as the parameters, the w(i) values in the layers. And because you have something like entanglement, I speculate that entangled light is involved in the update changes that lead to fusion, since it would make the update algorithm possible.

So I guess that fusion in the sun is aware of the cold space around it, since some light leaves the big network, and you have spooky action at a distance revealing instances of surrounding potential differences.

So it's the light that adapts the atom network for fusion.

This would mean that a fusion reactor needs a transparent window to a colder outer surrounding.

Machine Learning Philosophy Idea – All Data Has A Direction, Like [Concentrated, Pointing, Diffuse, Good, Better, …]

Just wanted to share some philosophy.

Inspired by my network theory: the refractive index of water is data that can be directed.

From this I gather that all data can be split into “word direction vectors”, be it data = [dataX, dataY, dataZ, dataGood, dataBetter, dataBest, dataDiffuse, dataConcentrated, dataPointing, …] or something more.

So in machine learning I guess it could be an idea to split data into more than the obvious groups.

One way to split data, I think, is to have one network for each word direction. That is, let the networks corresponding to each word compete over, or compare, the use they make of the data samples.
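
A rough sketch of that competition, where the two directions and the linear models are toy stand-ins: every sample goes to whichever network makes the best use of it, and each network refits on the samples it wins.

import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, size=(300, 1))
y = np.where(x[:, 0] > 0, x[:, 0] ** 2, -x[:, 0]).reshape(-1, 1)

directions = ["concentrated", "diffuse"]
models = {d: rng.normal(size=(1, 1)) for d in directions}  # one model per word

for _ in range(20):
    errors = np.stack([(x @ models[d] - y) ** 2 for d in directions])
    winner = errors[:, :, 0].argmin(axis=0)        # networks compete per sample
    for i, d in enumerate(directions):
        won = winner == i
        if won.any():                              # refit on the samples it won
            models[d] = np.linalg.lstsq(x[won], y[won], rcond=None)[0]

print({d: round(float(m[0, 0]), 3) for d, m in models.items()})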

Thinking about this further: maybe, if I have input as output, the weights in my dynamically expanding network function can be my cluster groups. Here the shape of the network is important. At least they could be related to clusters, similar to my physical network theory.

I will have to test if this method is something.

Machine Learning Idea – Resample And Zoom

A quick idea.

For time series prediction, I wonder if it's possible to use resampling and inverse zooming.

That is: resample a window of the series down to a lower number of samples, then predict using those. Then, as a second type of prediction, use the resampled data as input and the non-resampled data window as the target.

Then it should be possible to predict the outline of the function and then predict the details of the outline. A bit like sketching: you just predict the zoomed detail in the second process.
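
A small sketch of the two steps, with the series, window size and models all being placeholder choices of mine: first fit the outline on a resampled window, then learn the inverse zoom from coarse back to full resolution.

import numpy as np

series = np.sin(np.linspace(0, 20, 400))
window = series[:64]

coarse = window.reshape(8, 8).mean(axis=1)          # resample 64 -> 8 samples
t = np.arange(8)
outline = np.poly1d(np.polyfit(t, coarse, deg=3))   # predict the outline

# Inverse zoom: map each coarse value back to its 8 detailed samples,
# here with the simplest possible detail model, a per-position offset.
detail = (window.reshape(8, 8) - coarse[:, None]).mean(axis=0)
reconstruction = np.repeat(outline(t), 8) + np.tile(detail, 8)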

Machine Learning Idea – Calculate With More Truth

I will start with a power sentence: “shit in, shit out”. In the machine learning context this means that we are doing things a little bit wrong. We start with shit on the inside, from randomly initialized weights. Then, by a lot of noise cancellation and filtering, we get something that looks like the target. But it is often shit out.

I suspect there exist improvements. For instance, I think you can calculate with the truth. That is, exact solutions to linear systems of equations, or single equations. Then you have used truth values.

I don't know what the start values for the weights should be, that is, the arguments A, b to np.linalg.solve(A, b). But from elimination of options I think they should come from the target and input values, as those are truth values.

So the idea is to combine exact solution equations like Ax = b in the weights as objects, then puzzle those truths into a bigger picture.
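
A minimal version of one such truth object, with the toy A and b standing in for input and target values as suggested: the weights are solved exactly rather than guessed.

import numpy as np

rng = np.random.default_rng(9)
A = rng.uniform(size=(4, 4))      # four input samples, four features each
b = rng.uniform(size=(4, 1))      # their target values

x = np.linalg.solve(A, b)         # exact weights: A @ x equals b
print(np.allclose(A @ x, b))      # True; a truth value, not a guess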

testing …