I’m not a Linux expert. I have two ways of fixing my computer: either searching Google for the same problem, or running sudo apt-get update followed by sudo apt-get upgrade.
This morning my Fedora 25 installation would not get to the login screen. To fix this you first need to log in somehow. Press Ctrl+Alt+F4 for a black text-mode login screen and enter your username and password. If F4 doesn’t work, try some other F key. From here you can enter commands.

If sudo dnf update doesn’t work, you might not have a working Internet connection. Then, instead of getting a cable to the network card, you can run Wi-Fi sharing over USB from your phone (USB tethering).

Run sudo dnf update (which on Fedora is the same as dnf upgrade) and this should fix the problem.
This gave me an idea.
Could we have something like a small Linux AppImage? A single file with no installation procedure that opens and runs a graphical desktop. This way a lot of people could fix their computer without knowing many commands.
To accelerate small business and livelihood creation in poor areas that might not have much Internet access, I wonder if we could have people selling apps person to person. With secure mobile payments I think this could be a livelihood for a lot of people.

So the idea is that we have a secure way to sell apps and content, seller to customer. Maybe something like M-KOPA could work here.
First I was wondering if Linux live USBs could be used for distributing offline courses. You need Linux because the tech courses rely on software that needs to be installed, like Octave or Python and many of their modules.

A live Linux USB is an .iso file that can be written to a USB thumb drive. It contains a more or less complete Linux distribution, such as Ubuntu MATE or a KDE-based distribution like Kubuntu.
So the idea is to include, say, data science, machine learning or web development courses with a distribution like PCLinuxOS. I think a Linux ICT course could be included as well.

To get money to build these courses, I wonder if content could be unlocked via SMS on the phone, somewhat like M-KOPA I think?
For those who only have one machine, perhaps running some version of Windows, a virtual computer on the Internet, in the cloud, could prove very useful: everything from following programming courses and other MOOCs, to employment or self-employment. With Linux you could learn many skills for little to no cost.

Because of the cost, I thought: an idea would be to mimic TV ads, but for virtual computers. I mean, it’s just a screen, is it not?

So the idea is that you get a virtual computer for watching ads. The computer is then accessible from the web browser. This way you only need a phone and a keyboard to get a Linux computer.

An idea would be a service like Gmail, but where you get access to a virtual Linux computer. Then, with only the browser on your phone or Windows PC, you could play around with Linux, install Python modules and learn to program.
As a tip, and as a reminder to myself: I think I came up with a novel way to study online courses.

The idea is to create a rich HTML page of the lecture with both images and audio. Here you essentially replicate the lecture.

Take a screenshot of the video and replicate the slides with hand drawings on paper. Then take a photo of the paper and write some notes to read from.

Then you record your voice reading, with the notes as support, while you create the HTML page. Then you just put the audio and images into the HTML document using the HTML5 tags.
The idea is that you remember better this way than if you just answer quizzes and read.

The apps I used for this on my phone were just a text editor and a web browser. For convenience there was an app, ”Open in Browser”, that automatically loaded local files in the browser from the file manager.

I put in the heading tags and img tags after finishing the recording, while writing from the notes. So: first pure text plus the audio recording, then add the audio and image tags. The img tag had to have the alt attribute, otherwise the image would not show, even for local files.
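The finished page can be as small as this sketch; the file names here are just placeholders for your own slide photo and recording:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Lecture 1</title>
</head>
<body>
  <h1>Lecture 1</h1>
  <p>Notes written out from the slide go here.</p>
  <!-- the alt attribute was needed for the image to show, even locally -->
  <img src="slide1.jpg" alt="Hand-drawn copy of slide 1">
  <!-- HTML5 audio tag with the recorded narration -->
  <audio controls src="lecture1.mp3"></audio>
</body>
</html>
```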
As a European education project, as a service for its citizens, could we not create online education for school children in the primary and secondary years? The idea is that we could gather the best teachers and people from within Europe to do this.
I think this will be the smartest move in education we can make.
Just by manually writing out the same text you would normally just read, you could improve. I guess understanding is tied together with memory, so if you can remember the text better, your brain will have an easier time processing the data and understanding it.

This method really works for everything you want to learn.

So the idea is to encode the data into the brain’s memory.
You can configure Linux MATE to capture a screenshot of a selected area with Shift+Print. With this you just capture the area of a handwritten photo of the math formulas, then do some basic adjustments for contrast, light and color.
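For the basic adjustments, a small Pillow sketch like the following works; the function name and the brightness value are just assumptions, not a fixed recipe:

```python
from PIL import Image, ImageEnhance, ImageOps

def clean_photo(img, brightness=1.1):
    """Basic contrast/brightness clean-up for a photographed page."""
    img = img.convert("L")                  # drop color: formulas are ink on paper
    img = ImageOps.autocontrast(img)        # stretch the histogram for contrast
    return ImageEnhance.Brightness(img).enhance(brightness)  # lift brightness ~10%

# In practice: clean_photo(Image.open("shot.png")).save("clean.png")
# Demo on a synthetic grey image so the snippet is self-contained:
cleaned = clean_photo(Image.new("L", (80, 40), 120))
```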
Another idea I have is to run a Jupyter notebook (IPython) on a Raspberry Pi. Just connect a Bluetooth keyboard to the phone and you can run the notebook, and console programs, in the phone’s browser. This way you don’t need VNC on the phone; just a browser will do. Octave will run in the terminal, which could be quite handy.
Take an example of two classes, labelled red and blue. I wonder if you can make a classifier with the help of image filters, rather than tuning parameters for more time-consuming, more analytic classifiers like an SVM with an RBF kernel.

Further, taking inspiration from the Gaussian-filtered image, I wonder if classifiers can be built around holes, islands and unsure areas: an island being a red cluster within the blue group, an unsure area being one with many blue and red dots near each other.
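A minimal sketch of the filter idea, with made-up toy clusters (the grid size and sigma are arbitrary): rasterise each class onto a grid, blur both grids with a Gaussian, and let whichever smoothed density is higher claim each cell.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth2d(grid, sigma=2.0):
    # Separable Gaussian blur: filter rows, then columns.
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    grid = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, grid)
    grid = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, grid)
    return grid

def filter_densities(red_pts, blue_pts, bins=64, sigma=2.0):
    # Rasterise each class onto a shared grid, then blur both histograms.
    all_pts = np.vstack([red_pts, blue_pts])
    lo, hi = all_pts.min(0), all_pts.max(0)
    edges = [np.linspace(lo[d], hi[d], bins + 1) for d in (0, 1)]
    red_d, _, _ = np.histogram2d(red_pts[:, 0], red_pts[:, 1], bins=edges)
    blue_d, _, _ = np.histogram2d(blue_pts[:, 0], blue_pts[:, 1], bins=edges)
    return smooth2d(red_d, sigma), smooth2d(blue_d, sigma), edges

# Two noisy toy clusters
rng = np.random.default_rng(0)
red = rng.normal([-1.0, 0.0], 0.5, size=(200, 2))
blue = rng.normal([+1.0, 0.0], 0.5, size=(200, 2))
red_d, blue_d, edges = filter_densities(red, blue)
label = np.where(red_d > blue_d, "red", "blue")  # per-cell decision map
```

The unsure areas from the text would fall out naturally here as cells where the two smoothed densities are both non-zero and nearly equal.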
To prevent further tragedies, could we take the technology developed for smart guns and apply it to trucks? The principle is the same: only the authorized driver of the truck can use it; otherwise it would stop.

There are several smart-gun technologies to choose from, like the Armatix companion watch.
I found an app, Pydroid – IDE for Python 2, that lets me install packages via pip. Hmm, this could be huge. So I installed IPython, and from the Pydroid terminal I could run IPython version 5.3.0 with Python version 2.7.12.

A test of NumPy’s functionality revealed some errors and failures. I could not plt.plot() to the screen with matplotlib.pyplot, but I could save the figure as an image and then display it in the gallery.
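That workaround looks something like the following; the Agg backend renders without any display, and the file name is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")            # render without a screen, straight to a file
import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])
plt.title("Saved instead of shown")
plt.savefig("plot.png")          # then open plot.png in the phone's gallery
```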
Pretty good for school projects. Just get a Bluetooth keyboard for the phone. It’s not as bad as you would think: you can place the phone anywhere you like. I prefer it on the left side of the keyboard.

I recently discovered that I can program quite well on my smartphone. I just bought a simple Bluetooth keyboard and placed the phone along its short left side.

To my surprise it worked pretty well. The battery time was far greater than my old laptop’s, and with a power bank my programming time is not cut short. I tried it on a visit to the local café.

There are Android course apps available that let you run the code you write and check whether the output is correct. Pretty fun.

What I would like to have is a phone case that is also a small computer, like a Raspberry Pi Zero. Then I could use the phone as a USB-connected mobile screen with shared Internet. With this I could install many Python programming modules.
Using Pyevolve, with the accuracy of the machine learning classifier as the fitness score, I wonder if you can simply evolve extra input data for better classification. That is, I believe you can create features, from random values or the evolved genome, that will help in the classification. In theory it might work; I will see how well it does.
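Pyevolve is an old Python 2 GA library, so as a library-neutral sketch here is the same idea with plain hill-climbing instead of a genome: draw random quadratic feature combinations and keep any that raise a simple classifier’s score. The toy data and every name below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid_accuracy(X, y):
    # Nearest-centroid classifier scored on its own training data --
    # only a stand-in for the cross-validated fitness a GA would use.
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred == y).mean()

# A ring of class-1 points around a class-0 blob: not linearly separable,
# so the raw 2-D features give a poor centroid score.
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
ring = np.c_[np.cos(theta), np.sin(theta)] * 2 + rng.normal(0, 0.1, (n, 2))
blob = rng.normal(0, 0.3, (n, 2))
X = np.vstack([blob, ring])
y = np.r_[np.zeros(n), np.ones(n)]

base = centroid_accuracy(X, y)

# "Evolve" one extra feature by hill-climbing over random quadratic
# combinations a*x^2 + b*y^2 + c*x*y, keeping whatever raises the score.
best, best_score = None, base
for _ in range(200):
    a, b, c = rng.normal(size=3)
    f = a * X[:, 0]**2 + b * X[:, 1]**2 + c * X[:, 0] * X[:, 1]
    score = centroid_accuracy(np.c_[X, f], y)
    if score > best_score:
        best, best_score = (a, b, c), score
```

On this toy problem a radius-like feature (a and b of the same sign) is enough to separate the ring from the blob, which is exactly the kind of generated input the idea is after.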
Even using a spline function with a limited number of parameters, the decision boundary for the classifier was too unpredictable.

From the image you can see that the decision areas look pretty neat, but they are probably wrong. There is a lot of information that might be handy to the algorithm. Also, I think neat decision areas are somewhat not complex enough. For instance, you could mark the problem areas, the areas with a suspect decision boundary, with a third color.

Maybe you can run a generative model on this one, where you generate new data that are ”photo-realistic”. That is, the data are believable.
If machine learning is going to be used for cars, then I think it has to mature a lot. I mean, they are trying to simulate brain functions with classifiers that can report a very strong confidence score and still be wrong.

In my opinion, machine learning is still at an experimental stage. When it has matured enough to be boring, because of the many security measures, then self-driving cars could be a goal.

I believe we need many more confidence- and accuracy-type measurements. For example, I would want many more points precisely at the decision boundary. So it is not only how many samples you have, but also how much the samples reveal about the boundary, and more.
On a further note: if we develop self-driving cars, where is it going to stop? People will lose their jobs, and then who will pay for those cars? The salary goes to the robots, and the little tax that comes from them will pay for the roads.

However, I think machine learning could be used to reveal what there is to know about the universe: physics that will give us much better batteries for human-driven cars, and fusion energy to help us survive.
I was wondering: could you treat a battery as an energy-information device? But first, a speculation.

If you treat a plant like a machine-learning cell network, I guess the plant needs energy but also information. It needs a signal to the network for further processing; a network is like a function(), so you need input. I guess the parameters of the plant function get updated by comparing the input light, the sun’s spectrum, with the output light, the green color. So the plant not only needs input information but also output information.

From this speculation I wonder: could we create an artificial machine-learning battery cell network? Could light, as information input/output to the battery, help energy storage, charging and power output?

I wonder if one could create a scientific battery that mimics a deep neural network. The idea is to take after the plant’s leaves; that is, the battery cell sheets have short distances between input and output.

Then, with heat images from each layer, I guess you can tell where there should be visible-light ”losses”. By this I mean that the corresponding weight values should probably have been higher, meaning that not only heat-radiation output is needed.