The idea is to rework the input so it becomes easier to recognize. For a letter dataset this would mean that a first network produces a cleaner version of the hard-to-recognize letters. That is, a smart transformation that outputs a simpler-to-recognize image.
I think this would mean that for already clean-looking letters, the output image would be unchanged.
I wonder if you can take the images with the best recognition confidence scores and use them as examples for the rewrite network. That is, use the correctly classified, highest-confidence images together with the usual (data, label) network.
This is so the network can ask whether a generated image is clean or fake. It competes with the set of most recognizable images. That is, I think I should mix all the recognizable images from the training set into one set of most recognizable images, in order to extract the "clean" property.
Then I guess I run the procedure in reverse: first the clean-image competition, then the recognition network.
One problem could be that if non-letters get into the mix, they would produce the wrong decision.
So another problem is recognizing whether it is a letter at all. Again, some network that tells whether the image is a letter or fake. It almost feels like a kind of biological mimicry: fooling the network that you belong to another species. But here we have another property: edible.
In biological mimicry, I think the mimicking species should be most successful if it stays relatively close to the target species but not too close, so that it does not try to reproduce with those it cannot, and is not eaten.
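One piece of this can be sketched directly: building the "most recognizable" reference set from classifier confidences. A minimal numpy sketch, where the function name and the k-per-class selection rule are my own assumptions about how the set could be built:

```python
import numpy as np

def select_clean_indices(probs, labels, k=2):
    """Pick the k most confidently recognized training images per class.

    probs: (N, C) softmax outputs of the usual (data, label) recognizer.
    labels: (N,) true class ids.
    Returns indices of the 'most recognizable' set that the rewrite
    network's discriminator would compete against.  (The k-per-class
    rule is an assumption, not from the post.)
    """
    confidence = probs[np.arange(len(labels)), labels]  # p(true class)
    clean = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # sort class members by confidence, descending, keep the top k
        best = members[np.argsort(confidence[members])[::-1][:k]]
        clean.extend(best.tolist())
    return np.array(sorted(clean))
```

These indices would then select the real examples shown to the discriminator, while the rewrite generator tries to produce images that pass as members of this set.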
The little cheap Sound Enhancer hardware was a great add-on to the Amiga computer. I have wanted that sound ever since. Now, hmm, a little late, I tried the Calf audio plugins on Linux. So I thought I would run some Amiga demos from YouTube through the Calf audio plugins via QjackCtl.
You need QjackCtl, pulseaudio-module-jack and calf-plugins, which you apt install from the terminal.
Then run "pactl load-module module-jack-sink" in the terminal, and select Jack sink as the output under the sound settings.
Check QjackCtl to be sure you don't have multiple outputs to the system out channels.
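The whole setup condensed, assuming Debian/Ubuntu package names:

```shell
# Install JACK control GUI, the PulseAudio-to-JACK bridge, and the Calf plugins
sudo apt install qjackctl pulseaudio-module-jack calf-plugins

# Bridge PulseAudio into JACK so browser audio (e.g. YouTube) reaches JACK
pactl load-module module-jack-sink

# Then: pick "Jack sink" as the output device in the sound settings, and in
# QjackCtl's Connections window route: PulseAudio JACK Sink -> Calf -> system
```

The routing step is done in the QjackCtl GUI, so the last line is a reminder rather than a command.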
I was wondering: how do you improve the train station experience?
If you start simple, to see if things work, why not add an entertainment page to the website? The idea is to show something like Swedish nature films on SL.se, just YouTube links, and let travelers vote.
So you could create competitions for the best nature films, by the public and by professionals. I got this idea from a TV program that showed viewers' nature photos from places around the country.
High-resolution cameras are cheaper today, so why not take advantage of them.
Then maybe in the future, if this is successful, the best films could air at the train stations somehow.
What if there were a fast way to make a photo super high resolution?
Splitting the photo into a larger grid with the original photo pixels plus additional transparent pixels should do the trick. Just use a GAN to imagine the extra pixels into realistic-looking color pixels. The extra pixels could be stored in a separate pixel matrix.
This could then work for other GAN problems too: run the model on a small, downscaled photo, and then apply pixel separation and GAN imagination to increase the resolution. That should be much faster than doing it all at once on a large image.
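The pixel-separation step itself is easy to sketch. Here is a minimal numpy version, where a boolean mask stands in for the "separate pixel matrix" of transparent pixels that a GAN generator would be asked to fill in (the function name and the 2x grid are my own assumptions):

```python
import numpy as np

def separate_pixels(img, scale=2):
    """Spread img (H, W) onto a (H*scale, W*scale) grid.

    Known pixels land at every `scale`-th position; the mask marks the
    'transparent' pixels that a generator would imagine into real-looking
    color values.
    """
    H, W = img.shape
    big = np.zeros((H * scale, W * scale), dtype=img.dtype)
    mask = np.ones_like(big, dtype=bool)   # True = needs imagining
    big[::scale, ::scale] = img
    mask[::scale, ::scale] = False         # original pixels stay fixed
    return big, mask
```

A generator would then only have to predict the masked positions, conditioned on the fixed ones, instead of synthesizing the whole large image from scratch.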
I was wondering. Philosophically, you can place steel balls in a 2D pyramid and apply a little voltage, and you have created something that looks like a machine learning network.
So I was wondering: would it be possible to learn a little more about such electric networks, for use as inspiration in batteries and motors?
From my machine learning speculation, it could be that the layers of the network oscillate a little while finding their target weights. Knowledge about this could perhaps make things more effective.
Are there electric machine-learning elements one could use to enhance motors and batteries, for instance?
Thinking outside the box: create a parallel electric network instead of the crude fully connected one. Convolutional networks are important in machine learning; what physical counterpart could be created?
Maybe we get a CPU-network electrical wire in the future.
Using a GAN, I think I have an answer to many difficult questions: noise. Noise put through a generator and iterated with the help of a discriminator can create just about anything: mass, energy, time and so on.
So if you want to travel in time, you need to figure out a way to generate mass and time from noise and iterate it with a discriminator associated with that mass and the target time.
To understand this idea, use the following guess sentences:
Because it makes sense to a network.
Since it makes sense to nothingness.
So we have one new idea for the why question.
So, in my mind, what came before nothingness? Maybe there is some math here. If time is a generator(noise), and noise is related to frequency, then there should be the possibility that the noise somehow restarts itself. So we get the start of a new time counter. To make use of a fully expanded universe looking like empty noise, another generator is perhaps needed, with new time to use.
A battery can be used in pretty much any energy product, be it a car or a computer. I also wondered if a battery could have three functional layers.
So a strategy could be to do research in a controlled CPU environment with the CPU as the load, like a mobile phone where you get graphs of energy usage for any program. It should then be possible to run CPU-simulated car loads on single batteries.
An interesting idea would be whether it is possible to identify or recognize the individual battery and battery type from energy usage data.
From this I thought: what is a smart surface area? A large surface area is comparable to a platter hard drive, whereas a large and fast information storage device is more like a 3D solid-state drive. The solid-state drive wins on performance.
Another thought was whether it is possible to track areas of the cells. If you make parts of the cell areas look individualized, like sine-type waves, then I wonder if it is possible to track individual areas as well, to generate more detailed data for use in machine learning.
This perhaps gives you an idea of what a smart surface area is.
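A crude sketch of that identification idea. Everything here is an assumption for illustration: a made-up three-number fingerprint (mean voltage, discharge slope, ripple) and plain nearest-neighbour matching standing in for a real learned model:

```python
import numpy as np

def battery_features(voltage_trace):
    """Crude fingerprint of a discharge trace: mean level, slope, ripple."""
    t = np.arange(len(voltage_trace))
    slope = np.polyfit(t, voltage_trace, 1)[0]   # linear discharge rate
    ripple = np.std(np.diff(voltage_trace))      # sample-to-sample noise
    return np.array([voltage_trace.mean(), slope, ripple])

def identify(trace, reference_traces, labels):
    """Match a trace to the nearest known battery fingerprint."""
    f = battery_features(trace)
    refs = np.stack([battery_features(r) for r in reference_traces])
    return labels[np.argmin(np.linalg.norm(refs - f, axis=1))]
```

With real energy-usage graphs, the fingerprint would need many more features (temperature, load steps, recovery behaviour), but the structure, features plus a matcher, would stay the same.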
I was wondering: why would something shine? In my theory of machine-learning-inspired networks, it is because it makes sense to a network. It might sound a little vague, but the universe is so complex that only networks are capable of solving the many physical problems.
So the reason, as I see it, is that light can change a physical network very fast. So the model is Light + Matter.
Basically, you need light as a fast network changer.
Starfish have some kind of eyes on their arms. From this I wonder if they could mix data from touch sensors and light sensors. If the model mixes the data, the starfish could somewhat indirectly see in the dark with its touch sensors.
Then, are we sure we have made an all-visual recognition machine learning model?
I speculate that if the output from layer_n to layer_n+1 is not visually recognizable, the model could easily be fooled. I mean, if the output looks like noise, it is possible to fool the model further down the line.
I took a model of a standard CPU fan from Blendswap, by grooh (Creative Commons Zero). Then I loaded a screenshot from Blender into GIMP, removed the blades, converted the white space to transparency and applied the G'MIC – Repair – Solidify filter.
It looked like the filled area could very well be a fan: a bladeless fan. So, inspired by this, I made a new kind of bladeless CPU fan.
In the future I hope we can have a G'MIC Solidify and Heal Section plugin that is 3D-model vertex aware, so we could innovate directly in Blender.
Inspired by the P versus NP problem, I wonder if it is generally faster to reformulate a problem than to solve it.
Then what does reformulation look like in machine learning? What inverse relationships could exist for a too-effective reformulation? Does the network reformulate in a safe way, and so on? Could you recognize models that are at risk of certain faulty groups of decisions?
I ask since I, at least, tend to use the same-looking model for many problems.
I wonder if a strategy would be to have datasets with not just data but also reformulation examples, for inspiration.
I guessed before that you need damping to stabilize a GAN, from looking at the error graph: it goes up and down and so on.
If you apply the same logic to the layers of a deep network, then there could exist models that experience internal oscillation between layers.
Then perhaps you need a smart damping function applied to the updating gradient data.
I guess you get into trouble if the first and last layers are slow to learn during backpropagation. I suspect you get oscillations if the learning between different layers does not work well together.
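One classical stand-in for such a "smart damping" of the updating gradient data is an exponential moving average of the gradients, i.e. momentum. A minimal sketch, not a claim about what the right damper is:

```python
class DampedUpdate:
    """Exponential moving average of gradients: a simple 'damper'.

    beta close to 1 damps fast sign flips in the gradient (the
    layer-to-layer oscillation speculated about above), at the cost
    of reacting more slowly.  This is just classical momentum,
    reframed as damping.
    """
    def __init__(self, beta=0.9):
        self.beta = beta
        self.v = None

    def step(self, grad):
        # First call: start from the raw gradient; after that, blend.
        self.v = grad if self.v is None else self.beta * self.v + (1 - self.beta) * grad
        return self.v
```

Fed an oscillating gradient sequence like +1, -1, +1, ..., the damped updates shrink toward zero instead of swinging at full amplitude, which is exactly the stabilizing behaviour the post is reaching for.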
I think I invented an interesting method to aid detail generation. It is based on GIMP's Heal Section filter and Blender.
My method is simple: take a screenshot of the object you want to improve, load it into GIMP, select boring areas with the selection tool and filter them with Heal Section.
The output is just a hint, a suggestion towards some detail improvement for the continued 3D object modification.
Be sure to watch the video. I think the Heal Section algorithm could be improved, maybe with a GAN. As this is just inspiration for the human 3D artist, it should work just fine.
This idea could perhaps be the basis for a similar tool built into Blender: a 3D content-aware resynthesizing algorithm.
The guess is that everything needs to be engineered.
Taking some inspiration from GANs, I wondered how to stabilize the error curve. Then it occurred to me that machine learning does not care what the data is; it could be mass moving on springs, or just data. The graph could look similar.
So from this I wonder if you could add a damping model to a GAN: a damping regulator of some kind, with the error rate as input.
From this I thought: is there always a damper associated with a speed?
Then what damping constant is associated with the speed of light, if the iteration speed of the universe is capped at the speed of light?
Then how do you get it to that value? What is the model?
Why not denoise with variable strength over the image? I think a model could learn the many parameters using machine learning on other images.
Maybe an iterative approach could work, where you apply a little denoising in each iteration of a loop.
So the idea is to find x and y coordinate parameters for the weight, the strength, of the denoise filter.
Perhaps I need a system with at least two choices: one image that has been denoised a little, and the previous iteration. This way I could use some machine learning model to select which 8x8 groups are passed on to the next iteration, either from iteration0 or a subgroup from iteration1. This ends up in an image, iteration2.
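The two-choice, per-8x8-block iteration could look roughly like this. A plain variance threshold stands in for the machine learning model that would make the keep-or-denoise decision, and a box blur stands in for the "little bit of denoising"; both are assumptions for illustration:

```python
import numpy as np

def box_blur(img):
    """Very mild denoiser: average each interior pixel with its 4 neighbours."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]) / 5.0
    return out

def iterate_denoise(img, steps=3, block=8, noise_thresh=0.01):
    """Per-block choice between the previous iteration and a denoised candidate.

    Noisy-looking blocks (high variance) take the denoised pixels;
    smooth blocks keep the previous iteration's pixels untouched.
    """
    prev = img.astype(float)
    for _ in range(steps):
        cand = box_blur(prev)
        out = prev.copy()
        H, W = prev.shape
        for y in range(0, H, block):
            for x in range(0, W, block):
                if prev[y:y+block, x:x+block].var() > noise_thresh:
                    out[y:y+block, x:x+block] = cand[y:y+block, x:x+block]
        prev = out
    return prev
```

A learned selector would replace the variance test, and the per-block decision gives exactly the spatially variable denoise strength the idea asks for: clean blocks pass through unchanged while noisy blocks get smoothed a little more each iteration.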
I think this could be a trillion-dollar industry: make government buildings, or really every building, reusable. That is, walls and floors are detachable for other house projects.
Basically, you don't destroy anything when pulling down a building. But you probably have to adapt building methods for this.
Perhaps inspiration can be taken from modular housing, but here with a modular and reusable interior strategy. For example, why could office chairs and tables not look as if they could also fit in a home? It saves money.
Infrastructure is expensive, so the idea is to make architecture reusable. This way costs can be kept down.
At the same time, it would make it possible to create more affordable homes for the homeless and related risk groups.
If you can use imaginative networks like GANs to generate photorealistic images, then why not use machine learning to improve math?
What I mean is that machine learning can give you a new perspective on things.
For instance: what could a higher constant be? Since I don't know, I assume something noisy, something random. Then I could say that the normal distribution is a constant, a random constant.
Another example comes from fake but realistic image generation, like painting a cat. There could exist a random constant in the form of a best type of noise for the generator model. The noise should give the generation model a typical-looking cat. That is, there is a better chance of reaching all the different cats from this type of noise. The noise is action-ready and constant-shaped in some way.
I was thinking about how to create an Internet experience for the blind. An idea would be to create on-demand stops in TV series or movies that describe the scene with affordable text-to-speech software.
I think this could be done rather cheaply and easily for YouTube or other online media. The person just listens to the video, then presses a key to get a TTS voice reading some info text during the pause. Press the resume button and you get a better experience.
The idea is simple. Everything, I mean everything, needs to be engineered.
Together with my Why Power guess: assume, therefore, a network. Then something like distance is not real, or photorealistic, without a generator network model, as in a GAN.
This could perhaps explain why action at a distance works. The distance between two objects is twofold: the underlying noise distance and the real distance. The real, physical distance is the generator model's function of that noise distance.
This could mean the distance between small objects is pretty much undetermined. You need a generator model to fixate that noise distance data.
For larger objects it is probably not so simple. Perhaps it has to do with a large number of random objects put together, forming a mean value or something. So it is basically a random distance with billions of limitations: an ordinary object-physics distance.
I suspect the universe would not be as large as it is if it were limited by a normal distribution over many samples. I guess, just guess, that you get vanishing gradients somewhere in the system, causing the network not to get updated.
I wonder if a method could be to use a Generative Adversarial Network, or GAN. With that, I could create a small sample space of even distributions and use it as the real examples in the GAN.
The generator could use an elaborate system of previous GANs as noise input, but I'm wondering if it could use a normal distribution as input, just to make it more robust. The target of the network is then to imagine an even distribution from that normally distributed noise.
The target of the discriminator is to predict whether its input comes from the generative model or from the real examples. Based on the errors, the discriminator and the generator get updated.
In the end, I hope the generated noise-like distribution of many, many samples looks like an even distribution and not a normal one.
If this works, I think one could say the central limit theorem was due to a non-updating model: an independent random variable needs a network to keep it random.
For machine learning, I think this could be important in mitigating vanishing gradients somehow.
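Worth noting: the map this generator is asked to learn actually has a closed form. Feeding standard normal noise through its own cumulative distribution function gives an exactly even distribution on (0, 1) (the probability integral transform), so a GAN trained as described would be learning an approximation of this function:

```python
import numpy as np
from math import erf

def ideal_generator(z):
    """The map a perfect generator would learn here: the standard normal CDF.

    If z ~ N(0, 1), then Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is exactly
    uniform on (0, 1): the 'even distribution' the discriminator is asked
    to accept as real.
    """
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
```

So the experiment is well posed: a perfect generator exists, and the interesting question is how close the adversarial training gets to it from finite samples.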