It should be possible to coursify Linux/programming tutorials in forums and blogs. That is, I wonder if you could, with some HTML code or a plugin, write a quiz: a question and a "hidden" range of possible answers that the browser checks.
I don't know how to make tutorial courseware, but I think there should be open source software for this. Maybe a WordPress plugin could work.
The idea is to make the site interactive and fun. I think this could be done with user generated tutorials with some interaction.
I think it would be better if decision based machine learning systems generated their own "thought" strings. That is, text strings that reflect the instructions the system gives itself. I think this is better than relying on low level weight information, even though it's all supervised learning.
I mean, error checking would be much easier than retraining the whole system.
The Mandelbrot set is just the numbers that don't diverge when the output is recycled back into the input. I see that as a filter. Could some part of the structure of the universe then be like a fractal, a filter of some equation?
I believe the filter comes from all the physical laws. The universe is shaped by the mass and energy that don't diverge from those physical equations. That is, the universe selected the energy and matter that survived its laws or truths.
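The "recycle the output and keep what survives" picture is just the standard membership test, which can be sketched in a few lines:

```python
def in_mandelbrot(c, max_iter=100):
    """Recycle the output: z -> z*z + c. Membership = the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # once |z| passes 2 the orbit is guaranteed to diverge
            return False
    return True

# The set acts as a filter: it keeps exactly the c values whose orbits survive.
inside = [c for c in (0j, -1 + 0j, 1 + 0j, 0.2 + 0j) if in_mandelbrot(c)]
```

So the "filter" here is the iteration rule itself; only the inputs that survive repeated application remain in the set.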
Inspired by e^x, where the derivative is equal to e^x, I could guess that maybe there should exist something like "derivative similarity".
So I will test a machine learning approach to this. I will start with some function and take derivatives of it. Place the two derivatives and the function in separate series, and label the samples as coming from f, df or df2.
After learning the function and its first and second derivative I will, if I'm successful, see what happens when I try to maximize prob + prob. That is, I want to see if there is a function that has a second derivative but no first derivative. Sounds crazy. Probably is. :)
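A minimal sketch of the labeled-series setup, with my own choice of f(x) = x^3 so the three classes become x^3, 3x^2 and 6x (scikit-learn assumed; the window size and sample counts are arbitrary):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# f, its first and its second derivative: the three sample series
funcs = [lambda x: x**3, lambda x: 3 * x**2, lambda x: 6 * x]

X, y = [], []
for label, f in enumerate(funcs):          # label 0 = f, 1 = df, 2 = df2
    for _ in range(200):
        x0 = rng.uniform(-2, 2)            # random window position
        window = f(np.linspace(x0, x0 + 1, 16))
        X.append(window)
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
```

Once a classifier can tell the three series apart, the "maximize prob + prob" experiment becomes a search over inputs using the classifier's probabilities as the score.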
Well I think there is a lot to be tested with machine learning.
Here I calculated something that maybe can be called a continuous derivative, e.g. D(.25)(f(x)).
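One concrete meaning that can be attached to D(.25) is the Riemann-Liouville fractional derivative, which for power functions reduces to a gamma-function power rule (this is my interpretation of "continuous derivative", not necessarily the calculation behind the post):

```python
import math

def D(alpha, k, x):
    """Continuous-order derivative of x**k at x (Riemann-Liouville power rule):
    D^alpha x^k = Gamma(k+1) / Gamma(k - alpha + 1) * x^(k - alpha)."""
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * x ** (k - alpha)

# alpha = 1 recovers the ordinary derivative; alpha = 0.25 sits in between
d1      = D(1.0, 2, 3.0)    # d/dx x^2 at x = 3, should give 6.0
quarter = D(0.25, 2, 3.0)   # D(.25)(x^2) at x = 3
```

A nice sanity check is that applying the half-derivative twice to x reproduces the ordinary derivative, i.e. the constant 1.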
I wonder: could the weight matrices that correspond to the mean filtered training samples be good starting values?
I mean, it should be easier to correct a generalized starting image than to begin from scratch with random noise.
So my idea is to calculate and train on mean summarized collections of the input samples, here using an extra label, then use those converged weights as starting values for when you want to generate.
So when I want to generate an image from label 2, I will try with the weight matrices from the mean calculation, then let the algorithm work itself from the extra label to an image that looks like label 2. That is, from the mean filtered solution image to an image that looks like it comes from label 2.
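The post is about initializing weights; as a simpler stand-in, this sketch just computes the per-label mean images themselves as the generalized starting points (scikit-learn's 8x8 digits assumed):

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
X, y = digits.data, digits.target

# Mean summarized collection per label: the "mean filtered" images
means = np.stack([X[y == k].mean(axis=0) for k in range(10)])

# Generating label 2: start from the blurry class mean instead of random
# noise, then let the generation algorithm refine it toward a sharp 2.
start_image = means[2].copy()
```

The same idea carries over to weights: train once on the mean images, save those weights, and load them as initial values for the real generation run.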
Inspired by psy trance: why not try to deliver concepts of mathematics, philosophy, programming and physics through music?
By concepts I mean the hard to understand or important truths that, when you understand them a little, generate feelings and new thoughts.
If you think of the brain as using all sorts of inspiring data to generate understanding thoughts, then expanding concepts, which are very compressed pieces of information, to include music and their accompanying feelings is important.
At the very least, music enhanced knowledge should help you remember better because of the generated feelings, much like you remember the music of your youth.
I will try to generate a 3D object by first training on a 3D object and then generating similar objects, to reconnect with one of my first blog posts, the one about evolution and machine learning.
After my first attempt, the algorithm got stumped: too many possible points. Then I remembered that Blender had something like a flat 2D image that could be turned into a 3D object. I will try this. I hope I remember correctly; I haven't found what I'm looking for yet.
Still, I hope this will make the problem much easier. A 2D image is a lot easier to generate than a 3D volume.
Something I think could be interesting is whether machine learning can be used to create highly detailed displacement maps, because these would only need to look realistic. Creating lines and bumps might be possible and very effective.
Come to think of it, generating textures should not be a problem. Any texture.
You just replace the images of digits with images of different textures of wood, stone, skin and more, then let the machine learning classifier recognize the different textures. That is, put a number on each type of texture. Then use a genetic algorithm to generate an image with a particular texture id number as the target. You just generate, from any random start, until the system says it's 99% sure of the texture id.
Just a quick idea. I thought I'd use pyevolve (a genetic algorithm Python module) to see if I could generate an image from a machine learning network. I would have a classifier from which I get some real valued number (I'm going to strip the binary decision in the end). That indication goes into the genetic algorithm as a score, and the algorithm then generates new guesses which will be selected from.
It's my project for this time.
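pyevolve aside, the score-driven loop can be sketched with plain numpy and scikit-learn, here as a minimal (1+1) evolution loop rather than a full genetic algorithm (digits instead of textures, target class 3 chosen arbitrarily):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

target = 3
img = rng.uniform(0, 16, size=64)                  # any random start
score = clf.predict_proba([img])[0][target]        # real valued indication
start_score = score

for _ in range(300):                               # mutate, keep the better guess
    child = np.clip(img + rng.normal(0, 1.0, 64), 0, 16)
    s = clf.predict_proba([child])[0][target]
    if s > score:
        img, score = child, s
```

A real genetic algorithm would keep a whole population and recombine guesses, but the key interface is the same: the classifier's probability for the target id is the fitness score.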
I have not gotten around to testing that yet, but I found out something interesting. Using the handwritten digits, I managed to get much better accuracy for the digit recognition by adding noise digits and classifying them as their own special number.
That is, I added 180 numpy.random.rand(8,8) matrices, random images, to the digit set. After training, the 180 random images were recognized as the number 77, and the other numbers got better recognition.
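A sketch of that setup with scikit-learn's digits and an MLP (my choice of classifier and split; the noise label 77 is from the post):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
digits = load_digits()

# 180 random 8x8 images, scaled to the digits' 0..16 pixel range
noise = rng.random((180, 64)) * 16.0
X = np.vstack([digits.data, noise])
y = np.concatenate([digits.target, np.full(180, 77)])  # noise = class 77

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

To reproduce the accuracy claim you would train the same model with and without the noise class and compare the scores on the digit classes only.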
The other day I was at the local IT shop. Another customer there wanted to repair his mobile; the screen was cracked.
So it came to mind that maybe customers should have the option to choose easily repairable phones.
What I propose is an interchangeable screen. When it cracks, you just pop it out like a battery and replace it. Here you could select different screens based on color appearance and resolution.
Just calculate the weights of any multilayer perceptron using a genetic algorithm. // Per Lindholm
Looking at the many different activation functions, I wonder: calculating the weights with a genetic algorithm should also allow for optimized activation functions. So I will test a spline activation function for which I will calculate the weights.
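A minimal sketch of "weights by evolution" on a tiny 2-2-1 perceptron solving XOR, using a (1+1) evolution loop rather than a full genetic algorithm, and tanh standing in for the spline activation (all sizes and constants are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny problem a 2-2-1 perceptron can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    # w packs both weight matrices and biases of the 2-2-1 network
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)           # tanh stands in for the spline activation
    return h @ W2 + b2

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

best = rng.normal(0, 1, 9)
best_loss = loss(best)
first_loss = best_loss
for _ in range(5000):                   # evolve: mutate, keep improvements
    cand = best + rng.normal(0, 0.3, 9)
    l = loss(cand)
    if l < best_loss:
        best, best_loss = cand, l
```

Because nothing here differentiates the network, the activation function is free to be anything, including a spline whose control points are simply appended to the weight vector and evolved along with it.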
I found an easy way to customize pretty much everything. The trick was to take a screenshot of the interface, load it in GIMP and enhance or adjust the colors, then just color pick the new colors.
I think there is a possibility to use real time effect filters on the home screen for that cool effect. Maybe we could have OpenGL shaders and filters, like in Shadertoy, as an image overlay.
Here in the example I used the pretty slow GIMP filters, but they could also work.
Just some ideas for the smartphone and for Linux distros.
If you call heat noise, then a lot of heat is like an incompressible signal. My guess is that noise acts like a lot of signals that the fusion process can sample from at exactly the right time, meaning that the process has all the information or circumstances that cause fusion.
Then if fusion is the reaction to the noise, maybe it is some kind of noise filter. Maybe a low pass filter. I don't know.
There might be something important here. I mean, there is heat when you charge a battery and there is heat when you discharge. In a battery you want high energy density, likewise in a hard drive for storing a lot of information in a small area.
Thought Mechanics In Progress // Per Lindholm