It should be possible to turn Linux and programming tutorials in forums and blogs into courseware. That is, I wonder if you could write a quiz with just some HTML code or a plugin, meaning a question with a "hidden" set of possible answers that the browser checks.

I don't know how to make tutorial courseware, but I think there should be open source software for this. Maybe a WordPress plugin could work.

The idea is to make the site interactive and fun. I think this could be done with user-generated tutorials that include some interaction.

I got the idea from supervised machine learning. The idea is simple: just input the text you want to learn and output the same text. That is, retype the text exactly as it is shown on your computer.

So take a Creative Commons licensed book on the topic you want to learn and type the text from the PDF into a document in your text editor.

To make it extra efficient, you can make a screencast with your voice reading the text.

I guess this will help your brain remember and learn better.

This goes for learning anything from math to programming or writing books. Just enter the same text in your text editor.

I believe that with this method you can overcome learning difficulties and/or a lack of concentration when reading.

To me, looking at machine learning networks, it would make sense if atoms are just different networks.

That is, they handle input and give output. The main difference, I speculate, is that they also learn or update their weights to keep scores of different properties.

I guess that among the weights are the seemingly random distances of the electron from the core. So the atom can manage so many weight values because it is so fast.

The scores, I guess, are the important part. The scores are just the energy property values of the specific atom.

Then I guess that changing the score values of the atoms, together with the input of colliding atoms, could lead to fusion.

If there is a chance that fusion takes place everywhere in the sun, even at lower probability, then why not analyze the surface, it being the coldest part? What are the requirements?

I think it would be better if decision-based machine learning systems generated their own "thought" strings. That is, text strings that reflect the instructions the system gives itself. I think this is better than relying on low-level weight information, even though it is all supervised learning.

I mean, error checking would be much easier than retraining the whole system.

The Mandelbrot set is just the numbers that don't diverge when the output is recycled. I see that as a filter. Then could some part of the structure of the universe be like a fractal, a filter of some equation?
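A minimal sketch of this filter view, using the standard z → z² + c iteration: a point c survives the filter if the recycled output never escapes.

```python
def in_mandelbrot(c, max_iter=100, threshold=2.0):
    """Return True if c survives max_iter iterations of z -> z**2 + c
    without the magnitude escaping past the threshold."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return False   # diverged: filtered out
    return True

# The iteration acts as a filter over the complex plane.
print(in_mandelbrot(0 + 0j))   # True  (0 never diverges)
print(in_mandelbrot(1 + 0j))   # False (0, 1, 2, 5, 26, ... diverges)
```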

I believe the filter comes from all the physical laws. The universe is shaped by the mass and energy that don't diverge from those physical equations. That is, the universe selected the energy and matter that survived its laws or truths.

Inspired by e^x, whose derivative is equal to e^x, I could guess that maybe there should exist something like 'derivative similarity'.

So I will test a machine learning approach to this. I will start with some function and take its derivatives. Place the function and its first two derivatives in separate series and label the samples as coming from f, df, or df2.

After learning the function and its first and second derivative, if I'm successful, I will see what happens when I try to maximize prob[0] + prob[2]. That is, I want to see if there is a function that has a second derivative but no first derivative. Sounds crazy. Probably is. : )
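A sketch of the data setup under some assumptions of mine: the functions are random sine waves (so the derivatives are analytic), and a plain scikit-learn classifier stands in for the learning system. The prob vector at the end is what the experiment would try to maximize over candidate functions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)

samples, labels = [], []
for _ in range(200):
    # Random sine wave with analytic first and second derivatives.
    a, w, p = rng.uniform(0.5, 2), rng.uniform(1, 3), rng.uniform(0, np.pi)
    f   =  a * np.sin(w * x + p)            # label 0: f
    df  =  a * w * np.cos(w * x + p)        # label 1: f'
    df2 = -a * w * w * np.sin(w * x + p)    # label 2: f''
    for label, series in enumerate((f, df, df2)):
        samples.append(series)
        labels.append(label)

clf = LogisticRegression(max_iter=2000).fit(samples, labels)
# prob = [P(f), P(df), P(df2)] for one sample; prob[0] + prob[2] is the
# quantity the post wants to maximize over candidate functions.
prob = clf.predict_proba([samples[0]])[0]
print(len(prob))   # 3
```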

Well, I think there is a lot to be tested with machine learning.

Here I calculated something that maybe can be called a continuous derivative, e.g. D(0.25)(f(x)).
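The post doesn't show the calculation, but one standard way to get such a fractional-order derivative is the Fourier multiplier (ik)^α for a periodic function; applying D(0.25) four times should then recover the ordinary derivative. A minimal sketch:

```python
import numpy as np

def fractional_derivative(f_vals, alpha, dx):
    """Fractional derivative of order alpha via the Fourier multiplier
    (i*k)**alpha. Assumes f_vals samples one period of a periodic function."""
    n = len(f_vals)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # angular frequencies
    multiplier = (1j * k) ** alpha             # D^alpha in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(f_vals) * multiplier))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]

# Applying D(0.25) four times should recover the ordinary derivative:
g = np.sin(x)
for _ in range(4):
    g = fractional_derivative(g, 0.25, dx)
print(np.max(np.abs(g - np.cos(x))))   # close to 0
```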

I wonder: could the weight matrices that correspond to the mean-filtered training samples be good starting values?

I mean, it should be easier to correct a generalized starting image than to begin from scratch with random noise.

So my idea is to calculate and train on mean-summarized collections of the input samples. Here I will use an extra label. Then use those converged weights as starting values for when you want to generate.

So when I want to generate an image from label 2, I will try the weight matrices from the mean calculation. Then let the algorithm work its way to an image that looks like label 2, starting from the extra label. That is, from the mean-filtered solution image to an image that looks like it comes from label 2.
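The "mean-summarized" starting point can be sketched with the scikit-learn digits data (my assumption; the post doesn't name a dataset): the per-class mean image is a blurry but recognizable version of each digit that a generator could refine instead of starting from noise.

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()

# Per-class mean image: the "mean filtered" summary of each digit class.
means = {label: digits.images[digits.target == label].mean(axis=0)
         for label in range(10)}

# Instead of random noise, a generator could start from the blurry class
# mean and only needs to refine it toward a convincing "2".
start = means[2].copy()
print(start.shape)   # (8, 8)
```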

For generative machine learning, I wonder if you can use a classifier and evaluate the probabilities that a sample belongs to different categories of noise levels.

That is, you precompute samples with different levels of noise and label them accordingly. So you have samples with 0% noise, 10% noise, 20% noise, …, 100% noise labels.

A fast way to precompute is just to add random noise of a certain percentage to the image or sample.

The reason for this approach is that I think getting to a generated sample with 0% noise is faster if the algorithm can "think", or recognize a small improvement in the noise level.
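A sketch of the precomputation step, again assuming the scikit-learn digits as the sample source and a simple blend-with-noise scheme for "X% noise":

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
rng = np.random.default_rng(0)

def add_noise(image, level):
    """Blend an image with uniform noise: level=0.0 keeps the image,
    level=1.0 replaces it with pure noise (one possible noise scheme)."""
    noise = rng.uniform(0, 16, size=image.shape)   # digits pixels are 0..16
    return (1 - level) * image + level * noise

levels = np.arange(0.0, 1.01, 0.1)   # 0%, 10%, ..., 100%
samples, labels = [], []
for image in digits.images[:100]:
    for idx, level in enumerate(levels):
        samples.append(add_noise(image, level).ravel())
        labels.append(idx)           # label = noise-level bucket

print(len(samples), len(set(labels)))   # 1100 11
```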

From what I can see, a genetic algorithm generates a lot of guesses. This makes it kind of slow. So I wonder: what is the effectiveness of the guesses?

Could the random number generator be improved so that you get quality input to choose from, not just the best choice from low-quality input?

The way I see it, random numbers are just samples that can be taken from a 2D image. Then the problem is just to generate a good image. I think the image should look diverse.

From this I wonder: does there exist a deep genetic algorithm with internal fitness or score values for each layer?

Inspired by psy trance: why not try to deliver concepts of mathematics, philosophy, programming and physics through music?

By concepts I mean the hard-to-understand or important truths that, when you understand them a little, generate feelings and new thoughts.

If you think of the brain as using all sorts of inspiring data to generate understanding thoughts, then expanding concepts, which are very compressed pieces of information, to include music and its accompanying feelings is important.

At the very least, music-enhanced knowledge should help you remember better because of the generated feelings, much like you remember the music of your youth.

I will try to generate a 3D object by first training on a 3D object and then generating similar objects. This reconnects with one of my first blog posts, the one about evolution and machine learning.

After my first attempt, the algorithm got stumped: too many possible points. Then I remembered that Blender had something like a flat 2D image that could be turned into a 3D object. I will try this. I hope I remember correctly; I haven't found what I'm looking for yet.

Still, I hope this will make the problem much easier. A 2D image is a lot easier to generate than a 3D volume.
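The Blender feature is presumably a heightmap or displacement setup (my assumption). The core of the image-to-object idea is small enough to sketch in numpy: each pixel becomes a vertex whose z coordinate is the pixel value.

```python
import numpy as np

# A 2D grayscale "image" interpreted as a heightmap: pixel value = z height.
heightmap = np.array([[0.0, 0.2, 0.1],
                      [0.3, 1.0, 0.4],
                      [0.1, 0.5, 0.2]])

rows, cols = heightmap.shape
ys, xs = np.mgrid[0:rows, 0:cols]
# One vertex per pixel: (x, y, z) with z taken from the image.
vertices = np.column_stack([xs.ravel(), ys.ravel(), heightmap.ravel()])
print(vertices.shape)   # (9, 3)
```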

Something I think could be interesting is whether machine learning can be used to create highly detailed displacement maps, because these would only need to look realistic. Creating lines and bumps might be possible and very effective.

Come to think of it, generating textures should not be a problem. Any texture.

You just replace the images of digits with images of different textures of wood, stone, skin and more. Then let the machine learning classifier recognize the different textures. That is, put a number on each type of texture. Then use a genetic algorithm to generate an image with a particular id number as the target. You just generate until the system says it is 99% sure the image has the target texture id, starting from any random image.

Just a quick idea. I thought I would use pyevolve (a genetic algorithm Python module) to see if I could generate an image from a machine learning network. I would have a classifier from which I get some real-valued number (I am going to strip the binary decision function in the end). This number goes into the genetic algorithm as a score. Then the algorithm generates new guesses to be selected from.
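A sketch of the classifier-as-score loop, with my own substitutions: scikit-learn digits instead of textures, and a plain mutate-and-keep loop standing in for pyevolve's genetic algorithm. The classifier's class probability is the real-valued score the post describes.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# A classifier whose class probability serves as the genetic score.
digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

rng = np.random.default_rng(0)
target = 3                          # the texture/digit id we want to generate

def score(candidate):
    # Real-valued confidence that the candidate belongs to the target class.
    return clf.predict_proba([candidate])[0][target]

# A plain mutate-and-keep loop standing in for the genetic algorithm.
image = rng.uniform(0, 16, 64)      # random start; digits use pixel range 0..16
start_score = score(image)
best = start_score
for _ in range(2000):
    candidate = np.clip(image + rng.normal(0, 1, 64), 0, 16)
    s = score(candidate)
    if s > best:
        image, best = candidate, s

print(best > start_score)
```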

It's my project for this time.

https://youtu.be/iNnabJX7wGE

I have not got around to testing, but I found out something interesting. Using the handwritten digits, I managed to get much better accuracy for digit recognition by adding noise digits and classifying them as their own special number.

That is, I added 180 numpy.random.rand(8,8) matrices to the digits as random images. After training, the 180 random images were recognized as the number 77, and the other numbers were recognized more accurately.
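The experiment can be reproduced roughly like this: I assume the scikit-learn digits dataset and an MLP classifier, with the 180 noise images scaled to the digits' pixel range and labeled 77 as in the post.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
rng = np.random.default_rng(0)

# 180 pure-noise 8x8 images scaled to the digits' 0..16 pixel range,
# labeled as their own special class (the post uses id 77).
noise = rng.random((180, 64)) * 16
data = np.vstack([digits.data, noise])
target = np.concatenate([digits.target, np.full(180, 77)])

X_train, X_test, y_train, y_test = train_test_split(
    data, target, random_state=0)
clf = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 3))
```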

If you train the parameters of the machine learning setup using a genetic algorithm, you can put almost anything into the system.

From my trial with smooth Gaussian blur filters, I realised that I could use the weights as parameters for a 2D spline function. From this I get pretty much any size of weight matrix for free.

Because I believe you need to noise-filter the weights. It is ridiculous to have so many gradient-supporting weights. Just use a spline function or a 2D spline surface.
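The "any size for free" part can be sketched with SciPy's 2D spline: a small grid of control values is the real parameter set (the numbers a genetic algorithm would evolve), and evaluating the spline surface on a denser grid produces a smooth weight matrix of whatever size the layer needs.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(0)

# A 4x4 grid of control values = 16 parameters to evolve.
control = rng.normal(0, 1, (4, 4))
spline = RectBivariateSpline(np.linspace(0, 1, 4), np.linspace(0, 1, 4),
                             control, kx=3, ky=3)

# Evaluate the 2D spline surface on any grid to get a smooth weight
# matrix of the needed size: here 32x32 weights from 16 parameters.
weights = spline(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
print(weights.shape)   # (32, 32)
```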

The other day I was at the local IT shop. Another customer there wanted to repair his mobile; the screen was cracked.

So it came to mind that maybe customers should have the option to choose easily repairable phones.

So what I propose is an interchangeable screen. When it cracks, you just pop it out like a battery and replace it. Here you could select different screens based on color appearance and resolution.

Just calculate the weights of any multilayer perceptron using a genetic algorithm. // Per Lindholm

Looking at the many different activation functions, I wonder: calculating the weights with a genetic algorithm should also allow for optimized activation functions. So I will test a spline activation function for which I will calculate the weights.
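A minimal sketch of what a spline activation could look like, under my own assumptions: the spline's control points are the extra parameters the genetic algorithm would evolve alongside the weights, here initialized to roughly trace a tanh-like shape.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Control points are extra parameters the genetic algorithm could evolve;
# here they roughly trace a tanh-like shape as a starting guess.
knots_x = np.linspace(-3, 3, 7)
knots_y = np.array([-1.0, -0.9, -0.5, 0.0, 0.5, 0.9, 1.0])
spline_activation = CubicSpline(knots_x, knots_y)

# Use it like any activation: apply elementwise to a layer's pre-activations.
x = np.array([-2.0, 0.0, 2.0])
print(spline_activation(x))
```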

I found an easy way to customize pretty much everything. The trick was to take a screenshot of the interface, load it in GIMP and enhance or adjust the colors, then just color-pick the new colors.

I think there is a possibility to use real-time effect filters on the home screen for a cool effect. Maybe we could have OpenGL shaders and filters, like in Shadertoy, as an image overlay.

Here in the example I used the pretty slow GIMP filters, but they could also work.

Just some ideas for the smartphone and for Linux distros.

I think you can solve partial differential equations using genetic algorithms. If the solution gets too noisy, I will check if a noise filter or image stacking could make it better.

I was going to test whether you could "paint" with genetic algorithms. I thought it would generate a good-enough approximation to the target image.

That is, I set the score of the genetic algorithm to reflect the image. Then, as the score got higher, the solution would get more similar to the target.

What I found was that the solution converged extremely slowly and still looked noisy.

So I wonder: if I do image noise reduction on the intermediate steps, would the calculation converge more quickly and be of better quality?

For example, several stacked images of an image solution, using the enfuse Linux terminal command.
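A sketch of the painting experiment and the stacking idea, with my own stand-ins: a noisy hill climb instead of a real genetic algorithm, and a plain pixel mean over independent runs instead of enfuse. Since the per-run residual errors are roughly independent, averaging the runs should suppress the leftover noise.

```python
import numpy as np

target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0                      # the "painting" we want

def score(img):
    return -np.mean((img - target) ** 2)      # higher score = closer image

def paint(seed, steps=2000):
    """One noisy hill-climb run standing in for the genetic algorithm."""
    rng = np.random.default_rng(seed)
    image = rng.random((16, 16))
    best = score(image)
    for _ in range(steps):
        candidate = np.clip(image + rng.normal(0, 0.1, image.shape), 0, 1)
        s = score(candidate)
        if s > best:
            image, best = candidate, s
    return image

# Stand-in for enfuse stacking: average independent runs so their
# residual noise cancels out.
runs = [paint(seed) for seed in range(5)]
stacked = np.mean(runs, axis=0)
print(-score(runs[0]), -score(stacked))
```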

If you call heat noise, then a lot of heat is like an incompressible signal. My guess is that the noise acts like a lot of signals that the fusion process can sample from at exactly the right time, meaning that the process has all the information or circumstances that cause fusion.

Then, if fusion is the reaction to the noise, maybe it is some kind of noise filter. Maybe a low-pass filter. I don't know.

There might be something important here. I mean, there is heat when you charge a battery and heat when you discharge it. In a battery you want high energy density, likewise in a hard drive for storing a lot of information in a small area.

Thought Mechanics In Progress // Per Lindholm
