Inspired by simple.wikipedia.org, I think there should be additional documentation for the Linux terminal commands: two versions, where one contains basic words and simple wording. If it could help more people understand complex programs, I think it would be worth it. To get more documentation into the hands of users, I suggest using Wikipedia as storage. Maybe there is a possibility to crowdsource comments to improve the simplified manual.
ls is the Linux command to list the files in a folder. If you enter ‘ls’ in the terminal, it lists the files in the current folder.
The command has many options. Some of them are:
-l list more detailed information.
-R also show the contents of subfolders.
For example, you write ‘ls -l’ in the terminal to use the -l option.
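The options above can be tried in a throwaway folder; the folder and file names here are just examples:

```shell
# Make a small test folder with one file and one subfolder
mkdir -p /tmp/ls-demo/sub
touch /tmp/ls-demo/a.txt /tmp/ls-demo/sub/b.txt
cd /tmp/ls-demo

ls        # lists the current folder: a.txt and sub
ls -l     # long listing: permissions, owner, size and date for each entry
ls -R     # lists the current folder, then descends into sub and lists b.txt
```

Options can also be combined, e.g. ‘ls -lR’ gives the detailed listing for the folder and all its subfolders.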
I think Linux can take advantage of podcasts for a better user experience. For example, when installing the distribution you could have the option to play a relevant and fun podcast. Compressed with Opus, it would not take too much space on the .iso. Podcasts could also be a way to engage new users: to get them to read the Official User Guide, it would be nice if there were a complementary podcast where a group of people discuss each of the chapters. I think this would keep them interested enough to read the whole thing through.
The idea is to take an open-source textbook on mathematics, physics, chemistry or biology and make a Linux-style podcast series out of it. Here, two or more people discuss the theory behind each of the topics, problem-solving techniques, its applications and more.
Even if this is a podcast, you can have visual content: just add links to the images/videos referenced in the podcast on your web page. That should make it less expensive than doing a pure video podcast.
I think following a podcast for your textbook, where the hosts ask interesting questions, where you can learn from good explanations, or maybe just get motivation from a fun session, could go a long way.
My idea is simple: for media that depends on lossy compression, it should be possible to "patch" the media file with a smaller file of the same resolution but lower quality. The backup file is smaller due to more aggressive lossy compression. For this to work on image files, the format probably needs to lose some of its dependency on the previous block, so that one error does not affect the rest of the image. You then perform some checks on the blocks. The idea is that if you encounter an error in the original, larger image file, you can patch it with the corresponding block of pixels from the smaller backup file. The block from the backup would be of lower quality but would look similar; you probably would not notice that the image has been fixed.
For a 3.3 MB .jpg photo I saw that I would only need a 350 kB backup version. To keep it simple, both versions could be stored in the same file as an option. For those who don't make backups, I think this would make it easier to recover otherwise lost photos.
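The repair step described above can be sketched in a few lines of Python. This is a minimal sketch of the idea only, assuming the image is already split into independently decodable blocks of bytes (real JPEG repair would need to work on the codec's actual block structure); the block names and sizes are made up for illustration:

```python
import zlib

def checksums(blocks):
    # One CRC-32 per independently decodable block, stored at save time
    return [zlib.crc32(b) for b in blocks]

def patch(original_blocks, backup_blocks, good_sums):
    """Swap in the low-quality backup block wherever the original fails its check."""
    out = []
    for orig, backup, want in zip(original_blocks, backup_blocks, good_sums):
        out.append(orig if zlib.crc32(orig) == want else backup)
    return out

# Hypothetical 4-block "image": high-quality original plus low-quality backup.
hq = [b"hq-block-%d" % i for i in range(4)]
lq = [b"lq-block-%d" % i for i in range(4)]
sums = checksums(hq)

damaged = list(hq)
damaged[2] = b"garbage"          # simulate bit rot in one block

fixed = patch(damaged, lq, sums)
# Only block 2 is replaced by its lower-quality twin; the rest stay untouched.
```

The key design point is that the checksums are computed per block, not per file, so a single error costs one low-quality block instead of the whole image.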
After watching the video below on machine learning, which showed some sort of generation, I imagine this could be replicated for 3D objects as well. Say you let the matrix setup train on a biological part; with many variations of this part, the setup will have learned what it should look like. Then, as we saw in the video, the setup can also generate different shapes of the same object. I don't know the implications for evolution. I guess, however, that the genetic makeup would also include support data (the learned matrix) to help generate biological elements like the trained part. I think this thought experiment shows how evolution can produce so much variation without the energy cost due to error.
Below is a link to the Neural Network video and a would-be training set for an arbitrary biological part.