The idea is to take an open-source textbook on mathematics, physics, chemistry, or biology and make a Linux-style podcast series out of it, in which two or more people discuss the theory behind each topic, problem-solving techniques, applications, and more.
Even though this is a podcast, you can still have visual content: just add links on your web page to the images and videos referenced in each episode. That should make it less expensive than producing a full video podcast.
I think a podcast that follows your textbook, where the hosts ask interesting questions, where you can learn from good explanations, or where you simply get motivation from a fun session, could go a long way.
My idea is simple. For media that depends on lossy compression, it should be possible to "patch" the media file with a smaller file of the same resolution but lower quality. The backup file is smaller because it uses more aggressive lossy compression. For this to work on image files, the format probably needs to lose some dependency between neighboring blocks, so that one error does not affect the rest of the image. You then perform some check on each block. The idea is that if you encounter an error in a block of the original, large image file, you can patch it with the corresponding block of pixels from the smaller backup file. The patched block would be of lower quality but would look similar; you probably would not notice that the image has been fixed.
For a 3.3 MB .jpg photo, I found that a 350 kB version would suffice. To keep it simple, both versions could optionally be stored in the same file. For those who don't make backups, I think this would make it easier to recover otherwise lost photos.
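The block-patching idea above can be sketched roughly as follows. This is a minimal illustration, not a real file-format proposal: it assumes the images are already decoded into arrays of the same resolution, uses a hypothetical 16-pixel block size, and uses a per-block CRC32 as the "check" mentioned above.

```python
import zlib
import numpy as np

BLOCK = 16  # hypothetical block size in pixels

def block_checksums(img, block=BLOCK):
    """Compute a CRC32 checksum for each block of an H x W x C uint8 image.

    These checksums would be stored alongside the original image so that
    corrupted blocks can later be detected.
    """
    h, w = img.shape[:2]
    sums = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            sums[(y, x)] = zlib.crc32(img[y:y + block, x:x + block].tobytes())
    return sums

def patch_from_backup(img, backup, checksums, block=BLOCK):
    """Replace blocks whose checksum no longer matches with backup blocks.

    `backup` stands in for the smaller, more heavily compressed copy,
    decoded to the same resolution as `img`.  Returns the repaired image
    and the number of blocks that were patched.
    """
    fixed = img.copy()
    patched = 0
    for (y, x), crc in checksums.items():
        if zlib.crc32(fixed[y:y + block, x:x + block].tobytes()) != crc:
            fixed[y:y + block, x:x + block] = backup[y:y + block, x:x + block]
            patched += 1
    return fixed, patched
```

Because each block is checked and replaced independently, a single error only costs one block of quality, which matches the goal that one error should not affect the rest of the image.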
After watching the machine learning video below, which showed a kind of generation, I imagine this could be replicated for 3D objects as well. Say you let the matrix setup train on a biological part; given many variations of this part, the setup will have learned what it should look like. Then, as we saw in the video, the setup can also generate different shapes of the same object. I don't know the implications for evolution. I guess, however, that the genetic makeup would also include support data (the learned matrix) to help generate biological elements like the trained part. This thought experiment, I think, shows how evolution could produce so much variation without the energy cost due to errors.
Below is a link to the neural network video and a would-be training set for an arbitrary biological part.
A would-be training set for an arbitrary biological part.