I experimented recently with predicting histograms: cutting the time series into slices, learning the histogram for each piece, and then predicting histogram(i+1) from histogram(i).
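Here is a minimal sketch of that experiment, assuming a small sklearn regressor maps one slice's histogram to the next (the slice length, bin count and model are my arbitrary choices):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slice_histograms(series, slice_len=100, bins=20):
    """Cut a 1-D series into slices and return one normalized histogram per slice."""
    lo, hi = series.min(), series.max()
    hists = []
    for i in range(len(series) // slice_len):
        piece = series[i * slice_len:(i + 1) * slice_len]
        h, _ = np.histogram(piece, bins=bins, range=(lo, hi), density=True)
        hists.append(h)
    return np.array(hists)

series = np.sin(np.linspace(0, 60, 3000)) + 0.3 * np.random.randn(3000)
H = slice_histograms(series)

# Input: histogram(i), target: histogram(i+1)
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
model.fit(H[:-1], H[1:])
next_hist = model.predict(H[-1:])  # predicted histogram for the next slice
```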
I wonder if the same could also be done with objects like machine learning networks. That is, predicting the network parameters.
So the idea is to cut the time data into slices. Then I will try experimenting with setting the input as object(slice(i)) and the target as object(slice(i+1)). After training the model, I will be able to predict a network model object for a future slice, using the last object as input.
I probably need objects whose parameters are not too sensitive, so that a little error is tolerable.
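A toy sketch of what I mean, where the "objects" are the flattened parameter vectors of small fixed-architecture models fitted per slice (the architecture and the data here are arbitrary stand-ins):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_slice_model(x, y):
    """Fit a small fixed-architecture model on one slice; return its flattened parameters."""
    m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    m.fit(x.reshape(-1, 1), y)
    return np.concatenate([w.ravel() for w in m.coefs_ + m.intercepts_])

t = np.linspace(0, 10, 1000)
y = np.sin(t)
slices = np.array_split(np.arange(len(t)), 10)
objects = np.array([fit_slice_model(t[s], y[s]) for s in slices])  # one object per slice

# Learn object(slice(i)) -> object(slice(i+1))
meta = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000)
meta.fit(objects[:-1], objects[1:])
predicted_params = meta.predict(objects[-1:])  # parameters for a future model object
```

The fixed architecture is what keeps every parameter vector the same length, so the objects are comparable between slices.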
I was wondering if you could use machine learning for interpolation. One use of this method could perhaps be to control overfitting.
So you have discrete values in a target data vector: zeros and ones. My idea here is to find a smart machine learning interpolation such that overfitting is reduced, the output of the network converges to the discrete target values, and it is still able to predict correctly.
Basically, the in-between interpolation curve should make sense as well as the target values. Is it possible to give a large network a nudge in the right direction by fitting for the smart interpolation curve?
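One concrete way to do that nudge, I think, is to also train on interpolated samples: a point some fraction of the way between two inputs gets a target the same fraction of the way between their targets. This is close to what the literature calls mixup; a toy sketch:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Discrete 0/1 targets on a 1-D input
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 1.0, 0.0, 1.0, 0.0])

# Mixup-style augmentation: interpolated inputs get interpolated targets,
# pulling the in-between curve toward something sensible.
rng = np.random.default_rng(0)
i = rng.integers(0, len(X), 200)
j = rng.integers(0, len(X), 200)
lam = rng.random(200)
X_mix = lam[:, None] * X[i] + (1 - lam[:, None]) * X[j]
y_mix = lam * y[i] + (1 - lam) * y[j]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
model.fit(np.vstack([X, X_mix]), np.concatenate([y, y_mix]))
```

The interpolation lines can of course conflict where they cross, so this is a nudge rather than a guarantee.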
Could information be data that makes sense to a network?
So you start with information that makes sense, some measurement, as the target information for a machine learning network model. Then the idea is to test some input data and see if you get reasonable accuracy. If so, you can say that the input data makes sense to this network, which is itself based on sensible information. So the input data is relevant information, to a degree that makes sense to the network.
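A tiny sketch of that test, using a model's score on new data as the "makes sense to this network" measure (the data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)           # target information from a real relationship

model = LogisticRegression().fit(X, y)  # a network based on sensible information

X_new = rng.normal(size=(100, 3))       # candidate input data of the same kind
y_new = (X_new[:, 0] > 0).astype(int)
X_junk = rng.permutation(X_new.ravel()).reshape(100, 3)  # same values, scrambled

print(model.score(X_new, y_new))   # high accuracy: this data makes sense to the network
print(model.score(X_junk, y_new))  # near chance: not relevant information
```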
I think you can split the information into time information and frequency information (histogram). The frequency information is always present, like different distributions that are somewhat predictable throughout the data.
To see the problematic areas of your training after the training is done, I guess it could be good to cut the time series into slices and compare the loss for each time slice. It should show where the model has problems and perhaps suggest ways to improve.
Image generated after one epoch. The plot shows different loss for different times of the year. Could it be that summer has more energy available to turn things around, making prediction more difficult?
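A minimal sketch of the diagnostic, assuming a fitted model and a time-ordered test set (the number of slices is arbitrary):

```python
import numpy as np

def loss_per_slice(model, X, y, n_slices=12):
    """Mean squared error per time slice of a time-ordered dataset."""
    losses = []
    for idx in np.array_split(np.arange(len(X)), n_slices):
        pred = model.predict(X[idx])
        losses.append(np.mean((pred - y[idx]) ** 2))
    return losses

# e.g. with matplotlib: plt.bar(range(12), loss_per_slice(model, X_test, y_test))
# the peaks show which times of year the model struggles with
```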
I came across the desert twin water producer when searching for "water from air desert". Hmm, two parts. This sounds like a two-part air conditioner with refrigerants.
I think you can run an air conditioner on solar power these days. So you have one part for lowering the temperature for water extraction and one for removing heat from the process, like an air conditioner. The idea is to build on available technology to make it cheaper.
By generalization I mean handling more inputs. Then a strategy could be to update the weights so that you get a non-random-looking image. I think you can do this by applying a noise filter to the weight matrices.
Then, to get better generalization, I wonder if you can apply a Google-style super-resolution to these weight matrices that now look like images. With this you would have updated the weights based on experience from many previous images.
If something like this works, then you have effectively trained your network with a lot of samples you did not have.
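A sketch of the noise-filter half of the idea, using a Gaussian blur from scipy as the filter; whether smoothing weights this way actually improves generalization is exactly the open question:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_weights(W, sigma=1.0):
    """Treat a weight matrix as a grayscale image and smooth away
    the 'random-looking' high-frequency content."""
    return gaussian_filter(W, sigma=sigma)

W = np.random.randn(64, 64)            # stand-in for one layer's weight matrix
W_smoothed = denoise_weights(W, 1.5)
# The super-resolution variant would upscale W with an image model trained on
# natural images and resample it back to the original shape.
```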
If you sample the audio from machines with better and better efficiency, I assume it is then possible to imagine or predict the sound of a slightly better machine using machine learning. Then, with this predicted audio sample, I wonder: could you use it together with other data to construct this slightly better machine that matches the target audio?
Since we use the cardinal directions north, east, south, and west as a kind of positional coordinates for places, I wonder if you can just add a fuzzy radial component to this.
So my idea is to split the radial component into something like outer and inner. With this, my town is located north of outer Stockholm. Without the radial component, the location would be somewhere between what counts as inner and outer.
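A toy sketch of the fuzzy radial component, with hand-picked membership functions for inner and outer (all the numbers are made up):

```python
def inner(r_km):
    """Fuzzy membership in 'inner Stockholm' by distance from the centre."""
    return max(0.0, min(1.0, (10 - r_km) / 5))   # fully inner below 5 km

def outer(r_km):
    """Fuzzy membership in 'outer Stockholm'."""
    return max(0.0, min(1.0, (r_km - 5) / 5))    # fully outer above 10 km

# "north of outer Stockholm": direction = north, outer(r) close to 1
for r in (3, 7.5, 12):
    print(r, "inner:", round(inner(r), 2), "outer:", round(outer(r), 2))
```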
Conclusion: no big difference between MP3 and OGG below 15 kHz. Yeah well, just for fun.
Come to think of it, maybe this can be used as an automatic EQ. Just use pyevolve or a plain "for" loop to see which EQ setting produces the best sonogram: either the best settings over time for the sample, or the best overall setting. Like a dynamic EQ for the whole song.
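A toy sketch of the plain for-loop version, with a single FFT band gain standing in for the EQ setting and spectral flatness standing in for "best sonogram" (both choices are arbitrary):

```python
import numpy as np

def apply_band_gain(x, sr, f_lo, f_hi, gain):
    """Crude one-band EQ: scale a frequency band of the signal via FFT."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / sr)
    X[(f >= f_lo) & (f < f_hi)] *= gain
    return np.fft.irfft(X, n=len(x))

def score(x):
    """Stand-in objective: spectral flatness of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x)) + 1e-12
    return np.exp(np.mean(np.log(mag))) / np.mean(mag)

sr = 44100
x = np.random.randn(sr)  # stand-in for one second of audio

# Brute-force search over gains for a 1-4 kHz band
best = max(((g, score(apply_band_gain(x, sr, 1000, 4000, g)))
            for g in np.linspace(0.25, 4, 16)), key=lambda t: t[1])
print("best gain:", best)
```

Running this per time slice instead of on the whole sample would give the dynamic version.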
I assume many functions that machine learning tries to mimic have divisions in them. With division comes the possibility of division by zero. However, I think there is a relation between high risk and high reward in machine learning too.
For anybody interested in education, check out kaggle.com. It is a social education site for computer science and machine learning. Apart from tutorial project competitions, you can learn from discussions on different subjects. Pretty good. This is what I would like for university higher education: if you could learn from talented students, we would get much better value for the time and money spent.
The idea is that we could have very effective higher education if we built sites like these for the other subjects.
Here is my take on imperfect mimicry. I wonder if there is such a thing as imperfect mimicry.
If you take evolution into account, then the species might just give the illusion that it is closely related to the dangerous species.
An example would be a bird coming to a different island where the wasps look a bit different in terms of size, shape, or more. The wasps would probably keep some similarity that does not cost much in terms of performance, like the color of a car.
So, perfect mimicry is not necessary. The species just has to fool the predator into thinking it is related.
I think it should be called kinship mimicry, not imperfect mimicry.
My phone has double the number of cores (8) of my desktop computer, and it was still much more affordable. This might stay the case for some time to come: the phone is a device people buy a lot of, so it drops in price much faster and you can afford higher speeds.
So here is how you can create your own wifi-connected smartphone compute stick.
What you need is the Android app Termux and some googling. A repository is available for Termux that lets you install recent versions of gcc, numpy, and scipy. This is for Jupyter Notebook, a web app that lets you connect to your phone over the internal wifi network from a web browser on your slow computer.
I installed the latest keras (with theano) and sklearn via pip. Great for novice users in machine learning.
With this you could enable fast computing on old CRT-screen computers, a Raspberry Pi, and the like.
Below, I connected the phone computing device with USB internet sharing. I typed arp in the command window to get the IP of the phone, then plugged that ip:8888 into the Chromium browser. You need to allow all IPs ('*') in the Jupyter config.
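The relevant lines in ~/.jupyter/jupyter_notebook_config.py look something like this (option names as in the classic notebook; newer Jupyter versions may differ):

```python
# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.ip = '0.0.0.0'        # listen on all interfaces ('*' on older versions)
c.NotebookApp.port = 8888
c.NotebookApp.open_browser = False  # the phone won't open a desktop browser itself
```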
With an external keyboard you can get started with machine learning on the phone. The procedure is not perfect, but I attached some screenshots. The main app to use is Termux; it does not hog the CPU much, or at all.
For Termux you need the "pointless" repository, which provides gcc, scipy, and numpy. The rest can be installed with pip.
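Roughly, the Termux side looks like this; the exact package names and the repository setup may have changed since I did it, so follow the repository's own instructions:

```
pkg install python clang          # base Python plus a compiler
# add the community "pointless" repository per its instructions, then:
pkg install numpy scipy
pip install jupyter keras theano scikit-learn
```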
DON'T run all the CPU cores on the phone. My phone (8 cores) overheated, temporarily shutting down one core. This is probably not healthy for the phone.
You can limit the number of cores used by limiting numpy (see screenshot); I limited it to just two. I ran some tests with keras using theano as the backend. I also got sklearn installed from pip.
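The limit itself is just environment variables that must be set before numpy is imported; which one applies depends on the BLAS library numpy was built against:

```python
import os

# Cap BLAS/OpenMP threads before importing numpy (or theano)
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"

import numpy as np  # heavy linear algebra now uses at most two cores
```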
You can use pip to install Jupyter. It installs an HTTP server app that you then access from the regular Android web browser at "localhost:8888".
With this I think you can follow some online machine learning courses that do not require a GPU.