What if Internet peer grading could be used to enhance schools in rural areas with limited resources? With mobile technology like a smartphone or tablet equipped with a camera and a basic Internet connection, a student could photograph his pen-and-paper solutions to the extra exercises and upload them to a private site for grading. The idea is that the student could use this option before taking a test.
Example using a simple smartphone and a document scanner application.
Why not use the computer to help you understand a text? The idea is that you can practice the words and their meanings for the books you want to read in advance, which would give you a much better reading experience. I would like Project Gutenberg, the free ebook project, to collect data on their books so the reader could practice words he doesn’t know but that appear in the book. The practice could be done on any of our computing devices: PC, tablet or phone. I don’t know exactly how this would work, but I think you need to group the words together, for example into basic, intermediate, advanced, new and old English words. In any case, I think you can produce a list of the most frequent difficult words in each book. This could work for paper books too, if the book has already been scanned: you just search for the book and go to the word test.
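As a rough sketch of how such a frequency list could be produced, assuming the book is available as a plain text file. The sample text and the set of “known” words here are made up purely for illustration:

```python
from collections import Counter
import re

# A tiny stand-in for a Gutenberg plain-text book (illustration only).
book_text = """
The ship was becalmed; the melancholy sailor surveyed
the horizon. The melancholy sea lay becalmed and silent.
"""

# Words the learner is assumed to know already (a hypothetical list;
# in practice this would come from the grouping into basic/intermediate/etc.).
known_words = {"the", "was", "and", "sea", "lay", "silent", "ship"}

words = re.findall(r"[a-z]+", book_text.lower())
counts = Counter(w for w in words if w not in known_words)

# The most frequent "difficult" words would become the practice list.
for word, n in counts.most_common(3):
    print(word, n)
```

The same counting could run once per book on the server, so each title only needs to store its finished word list.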
I can’t remember all the things I have searched for, like terminal commands and their uses. So I thought: why not document what I was doing and the solutions I found that worked? I am currently testing TiddlyWiki as a simple offline wiki that runs in the browser. This way I can recall the things I forgot much faster than having to go on a search quest again.
Inspired by simple.wikipedia.org, I think there should be additional documentation for the Linux terminal commands: a second version written with basic words and simple phrasing. If it could help more people understand complex programs, I think it would be worth it. To get more documentation into the hands of the users, I suggest using Wikipedia as storage. Maybe there is also a possibility to crowdsource comments to improve the simplified manual.
ls is the Linux command to list files in folders. If you enter ‘ls’ in the terminal, you list the files in the current folder.
The command has many options, some of which are:

-l: list more detailed information.
-R: also show the content of subfolders.

For example, you write ‘ls -l’ in the terminal to use the -l option.
I think Linux can take advantage of podcasts for a better user experience. For example, when installing the distribution you could have the option to play a relevant and fun podcast. Compressed with Opus, it would not take too much space on the .iso. It could also be a way to engage new users: to get them to read the official user guide, it would be nice if there were a complementary podcast where a group of people discuss each of the chapters. I think this would keep them interested enough to read the whole thing through.
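As a back-of-the-envelope check on the space cost, assuming a speech-oriented Opus bitrate of around 24 kbit/s (my assumption, not a measured figure), an hour of audio stays around 10 MB:

```python
# Rough size estimate for an Opus-compressed podcast episode.
# The 24 kbit/s bitrate is an assumption typical for speech, not a measurement.
bitrate_kbps = 24        # kilobits per second
duration_s = 60 * 60     # one hour

size_bytes = bitrate_kbps * 1000 * duration_s / 8
size_mb = size_bytes / 1_000_000
print(f"{size_mb:.1f} MB")  # prints "10.8 MB"
```

On a multi-gigabyte install .iso, that is a fraction of a percent of the total size.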
The idea is to take an open source textbook on mathematics, physics, chemistry or biology and make a Linux-style podcast series out of it, where two or more people discuss the theory behind each of the topics, problem-solving techniques, applications and more.
Even if this is a podcast, you can have visual content: just add links to the images and videos referenced in the podcast to your web page. It should make it less expensive than producing a pure video podcast.
I think following a podcast for your textbook, where the hosts ask interesting questions, where you can learn from good explanations, or where you maybe just get motivation from a fun session, could go a long way.
My idea is simple. For media that depends on lossy compression, it should be possible to “patch” the media file with a smaller file of the same resolution but lower quality. The backup file is smaller due to more aggressive lossy compression. For this to work on image files, the format probably needs to lose some dependency on the previous block, so that one error does not affect the rest of the image. On the blocks you then perform some integrity checks. The idea is that if you encounter an error in the original, large image file, you can patch it with the corresponding block of pixels from the smaller backup file. The block from the backup would be of lower quality but look similar; you probably would not notice that the image has been fixed.
For a .jpg photo of 3.3 MB, I saw that I would only need a 350 kB backup version. To keep it simple, one could optionally store both versions in the same file. For those who don’t make backups, I think this would make it easier to recover otherwise lost photos.
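A minimal sketch of the patching idea, with the two images stood in for by byte strings split into fixed-size blocks and a checksum per block. A real format like JPEG would need the block independence described above; everything here is simplified for illustration:

```python
import zlib

BLOCK = 4  # bytes per block in this toy example


def blocks(data: bytes):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]


def repair(original: bytes, checksums: list, backup: bytes) -> bytes:
    """Replace any block whose checksum fails with the backup's block."""
    fixed = []
    for blk, ref, low in zip(blocks(original), checksums, blocks(backup)):
        fixed.append(blk if zlib.crc32(blk) == ref else low)
    return b"".join(fixed)


# Pretend image data: the "backup" is a coarser version of the same picture.
image = b"AAAABBBBCCCCDDDD"
backup = b"aaaabbbbccccdddd"
checks = [zlib.crc32(b) for b in blocks(image)]

# Corrupt one block of the original, then patch it from the backup.
damaged = b"AAAAXXXXCCCCDDDD"
print(repair(damaged, checks, backup))  # prints b'AAAAbbbbCCCCDDDD'
```

Only the damaged block gets the lower-quality replacement; the rest of the image stays at full quality, which is exactly the effect described above.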
After watching the video below on machine learning, which showed some sort of generation, I imagine that this could be replicated for 3D objects as well. Say you let the matrix setup train on a biological part; with many variations of this part, the setup will have learned what it should look like. Then, as we saw in the video, the setup can also generate different shapes of the same object. I don’t know the implications for evolution. I guess, however, that the genetic makeup would also include support data (the learned matrix) to help generate biological elements like the trained part. I think this thought experiment shows how evolution can produce so much variation without the energy cost due to error.
Below is a link to the neural network video and a would-be training set for an arbitrary biological part.
Would-be training set for an arbitrary biological part.