Speculation – If Cancer Evolves, Can We Fool The Machine Learning Algorithm?

This is just my guess: my speculation on some ?possible additional information about cancer.

I read that cancer evolves. My own speculation is that there is something like smart evolution. Smart in the sense that it uses some kind of machine learning algorithm to reach the "right" results quickly.

My guess is then that if we knew how to fool machine learning algorithms, that knowledge could point toward a treatment.

Again this is just my guess.
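For what it's worth, "fooling" a machine learning model is a real research topic under the name adversarial examples. Below is a minimal numpy sketch of the idea on a hypothetical linear classifier. The weights and input are random made-up numbers with no biological meaning; this only shows that a tiny, targeted nudge to the input flips a trained model's decision.

```python
import numpy as np

# Toy stand-in for "fooling" a model: an adversarial perturbation
# against a linear classifier (hypothetical weights, not a real model).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of the "trained" linear classifier
b = 0.0
x = rng.normal(size=8)   # some input the classifier sees

def predict(x):
    return 1 if w @ x + b > 0 else 0

# For a linear model the gradient of the score w.r.t. the input is w.
# Step against the gradient sign just far enough to cross the boundary.
eps = 2.0 * abs(w @ x + b) / np.abs(w).sum() + 1e-6
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
x_adv = x + eps * direction

flipped = predict(x_adv) != predict(x)   # the decision has been flipped
```

The perturbation is small relative to the input but chosen in exactly the direction the model is most sensitive to, which is why it works.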

My Super Idea – Use A Game Engine For PowerPoint-Style Presentations

Oh, I got an exciting idea today. Why use boring LibreOffice Impress or PowerPoint? You can build a much more interesting presentation app using a game engine.

All this can be done with open source software on Linux.

I plan to use the Godot game engine, Blender and GIMP.

The idea is kind of fresh, so I will return and re-edit this post after some days.

So the idea is to program a presentation application using the Godot game engine. Since it uses 3D acceleration, the presentations will look innovative.

You can create a 3D world you can interact with during the show. Or why not show a 3D car simulation for that new electric car?
With a game engine you can really make some super cool 3D presentations.
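As a first sketch of the programming side, the core slide-switching logic is simple. Here it is in Python, with plain strings standing in for Godot scenes (the scene names are made up); in the real app each slide would be a 3D scene and next/prev would swap scenes.

```python
# Minimal sketch of the slide-switching core such a presentation app needs.
# In Godot each "slide" would be a 3D scene; plain strings stand in here,
# and the scene names are hypothetical.
class SlideDeck:
    def __init__(self, slides):
        self.slides = list(slides)
        self.index = 0

    def current(self):
        return self.slides[self.index]

    def next(self):
        # Clamp at the last slide instead of wrapping around.
        self.index = min(self.index + 1, len(self.slides) - 1)
        return self.current()

    def prev(self):
        self.index = max(self.index - 1, 0)
        return self.current()

deck = SlideDeck(["intro.tscn", "car_demo.tscn", "outro.tscn"])
deck.next()   # advance to the second slide
```

In Godot the same shape would live in a GDScript node reacting to key presses, but the clamping next/prev state machine is the whole trick.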

Tip – Use A Read Text Plugin In LibreOffice To Get More Readable Text

For those of us like myself, writing is not so fluent. I forget words, and sentences sometimes get too long.

So the idea is to let someone else do the reading. If a synthesized speech voice reads the text back to you and it sounds OK, then you know it will be readable for everyone else. Usually you only need to shorten or rewrite a few sentences.

To capture the audio in Linux I used the somewhat odd solution of recording a screencast with SimpleScreenRecorder and then stripping the audio from it with ffmpeg.

[Screenshot: the Read Text plugin in LibreOffice]

ffmpeg -i blog.mkv -acodec mp3 ReadText.mp3

Idea – Machine Learning And Smart Biomimicry?

The idea is simple. Innovations can, I assume, be clustered into groups. What I imagine is that the smaller the innovations are, the more similar they become. This is because there are fewer degrees of freedom at small scales.

This would mean there is a connection between biological and technological innovation. I think this is what biomimicry tries to do: take innovations from biology into technology.

I imagine a similar approach. Since innovations should be more similar at, say, the ?nano level, a machine learning model should be able to learn from images of those biological "innovations".

The idea is then that you present a not-yet-finished technological innovation to the machine learning program. The program could use this partial design as enough information for generation, inserting probable pixels into the generated image. By analyzing the generated image, an engineer would then know what the innovation should look like from the perspective of the machine learning program, based on what it had learned from biology.
Maybe this approach could be used for a new battery?
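Here is a toy sketch of the "inserting probable pixels" step, assuming the simplest possible stand-in for a learned model: fill each unknown pixel with the mean of its known neighbours, repeating until the hole closes. A real system would instead use statistics learned from biological images; the tiny 4x4 image below is made up.

```python
import numpy as np

# Toy inpainting: NaN marks unknown pixels; fill each one from the mean
# of its known 4-neighbours, iterating until no unknowns remain.
img = np.array([
    [1.0, 1.0,    1.0,    1.0],
    [1.0, np.nan, np.nan, 1.0],
    [1.0, np.nan, np.nan, 1.0],
    [1.0, 1.0,    1.0,    1.0],
])

while np.isnan(img).any():
    for i, j in zip(*np.where(np.isnan(img))):
        # Collect the known 4-neighbours of the missing pixel.
        vals = [img[a, b]
                for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < img.shape[0] and 0 <= b < img.shape[1]
                and not np.isnan(img[a, b])]
        if vals:
            img[i, j] = sum(vals) / len(vals)
```

Swapping the local mean for a model trained on many images is exactly what learned inpainting does; the control flow stays the same.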

Idea – Lossy Calculations?

I think I have a simple idea for how lossy calculations could work. The simplest way would be to lossily compress parts of the data in the problem and decompress/recompress as needed.

For comparison, a JPEG image, which uses lossy compression, has very small memory requirements. Similarly, calculations with big matrices could perhaps be made memory efficient with lossy compression.

I will try this method, but I need a lossy compression package for Octave first.
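In the meantime, here's a numpy sketch of one standard way to do this for matrices: a truncated SVD as the lossy compression step, keeping only the top singular triplets and reconstructing on demand. The matrix below is a made-up low-rank-plus-noise example where the compression pays off.

```python
import numpy as np

rng = np.random.default_rng(1)
# A low-rank matrix plus small noise: a case where lossy compression helps.
A = rng.normal(size=(60, 8)) @ rng.normal(size=(8, 40)) \
    + 0.01 * rng.normal(size=(60, 40))

k = 8                                           # how many triplets to keep
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k]        # the lossy "compressed" form

A_lossy = (U_k * s_k) @ Vt_k                    # decompress when needed

stored = U_k.size + s_k.size + Vt_k.size        # numbers kept: k*(m+n+1)
full = A.size                                   # numbers in the full matrix
rel_err = np.linalg.norm(A - A_lossy) / np.linalg.norm(A)
```

Here roughly a third of the numbers are stored and the reconstruction error stays small, because the truncation only discards the noise part of the matrix.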

Idea – Keeping Multiple Values For Signals

The idea is this. With lots of data representing a signal you want to use, you might get many different values for the same x coordinate. That is, more than one y value per x coordinate.

Then I thought: why reduce a signal to a function of statistical point estimates, with some least squares method fitting a line? I mean, we have computers capable of taking advantage of as many data points as we can sample.

Still, to get some size reduction, my idea is to store all the data in some kind of Fourier series. That is, keeping multiple y values for some x coordinates rather than making a statistical reduction down to a function.

This information, I imagine, could be useful for a machine learning algorithm.

An example would be a camera photo, where you will have multiple colors for some of the pixels. Instead of reducing the pixel color to one color, just store all the colors that the pixel got from the sensor. I think this could be taken advantage of in machine learning.
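A minimal sketch of the storage idea, with made-up sample values: keep every y observed at each x, and derive a statistic like the mean later only if some consumer actually wants it.

```python
from collections import defaultdict

# Keep every sampled y for each x instead of reducing to one statistic.
# The (x, y) samples below are made up for illustration.
samples = [(0, 1.0), (0, 1.2), (1, 2.0), (1, 2.4), (1, 1.6), (2, 3.0)]

signal = defaultdict(list)
for x, y in samples:
    signal[x].append(y)          # nothing is thrown away

# The usual reduction loses the spread; it is still easy to derive later.
mean_at_1 = sum(signal[1]) / len(signal[1])
```

The point is the asymmetry: you can always compute the mean from the full multiset of values, but you can never recover the spread from the mean.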

Since there are several statistical methods available, some will be better than others for a particular problem. My hope, however, is that machine learning will find the best method possible. And since machine learning evolves, it is better to store all the data for future improvements.

Another idea would be to pick one of the multiple values at every multivalued point, chosen so that in a lossy Fourier-based compression each of the "choice values" makes a good contribution to the signal. By good, I mean it does not add ?high frequency noise.
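A rough sketch of picking such "choice values", assuming brute force over a tiny made-up signal: at each multivalued x, try every candidate y and keep the combination that minimizes high-frequency energy in the FFT of the whole signal. Real data would need something smarter than exhaustive search, but the objective is the same.

```python
import numpy as np
from itertools import product

# Made-up signal: each inner list holds the candidate y values at one x.
candidates = [[0.0], [1.0, 3.0], [2.0], [3.0, 1.0], [4.0]]

def high_freq_energy(ys):
    # Energy in the upper half of the (real-input) FFT spectrum.
    spec = np.abs(np.fft.rfft(ys))
    return float((spec[len(spec) // 2:] ** 2).sum())

# Brute force: evaluate every combination of choice values.
best = min(product(*candidates), key=high_freq_energy)
```

By construction, `best` compresses better under a lossy Fourier scheme than any other combination of the candidate values, since less of its energy sits in the frequencies such a scheme would discard.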