The idea is simple.

I suspect the universe would not be as large as it is if it were limited by a normal distribution over many number samples. My guess, just a guess, is that you get vanishing gradients somewhere in the system, causing the network not to get updated.

I wonder if one method could be to use a Generative Adversarial Network, or GAN. With that I could create a small sample set drawn from an even (uniform) distribution and use it as the real examples in the GAN.
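As a minimal sketch of that "real" sample set, assuming scalar samples on [0, 1) and a batch size of 64 (both numbers are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" examples for the discriminator: samples drawn from an even
# (uniform) distribution on [0, 1).
real_batch = rng.uniform(0.0, 1.0, size=(64, 1))

print(real_batch.shape)  # (64, 1)
```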

The generator could use an elaborate system of previous GANs as its noise input, but I'm wondering if it could simply use a normal distribution as input, just to make it more robust. The target of the network is then to produce an even distribution from that normally distributed noise.
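For the one-dimensional case, the map the generator would have to learn is actually known in closed form: by the probability integral transform, pushing standard normal noise through the normal CDF yields a uniform distribution. A small sketch of that ideal target mapping:

```python
import math
import numpy as np

def normal_cdf(z):
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))). The probability integral
    # transform says Phi(Z) is uniform on (0, 1) when Z ~ N(0, 1).
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)  # normally distributed noise input
u = normal_cdf(z)                 # ideal generator output: even/uniform

# A uniform(0, 1) variable has mean 1/2 and variance 1/12.
print(u.mean(), u.var())
```

So a successful generator would, in effect, be rediscovering this CDF from adversarial feedback alone.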

The target of the discriminator is then to predict whether its input comes from the generative model or from the real examples. Based on those errors, both the discriminator and the generator get updated.
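The standard way to score those errors is binary cross-entropy: the discriminator wants real samples scored near 1 and fakes near 0, while the generator wants its fakes scored as real (the "non-saturating" form). A sketch with made-up discriminator scores, just to show the two losses:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_real, d_fake, eps=1e-7):
    # Binary cross-entropy: real samples should score 1, fakes 0.
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-7):
    # Non-saturating form: the generator wants its fakes scored as real.
    return -np.mean(np.log(d_fake + eps))

# Hypothetical scores from a discriminator that currently wins:
d_real = sigmoid(np.array([2.0, 1.5, 3.0]))    # confidently "real"
d_fake = sigmoid(np.array([-2.0, -1.0, -1.5])) # confidently "fake"

print(discriminator_loss(d_real, d_fake))  # small: D is doing well
print(generator_loss(d_fake))              # large: G is doing badly
```

Each update step would backpropagate these losses into the respective network's weights.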

In the end I hope the generated noise-like distribution, over many, many samples, looks like an even distribution and not a normal distribution.
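One crude way to check that hope, assuming samples on [0, 1]: histogram the output and see whether every bin holds roughly its fair share of the mass, which a normal-shaped output would fail by piling up in the middle bins. The bin count and tolerance here are arbitrary choices:

```python
import numpy as np

def looks_even(samples, bins=10, tol=0.25):
    # On [0, 1], a uniform sample puts about 1/bins of its mass in each
    # histogram bin; reject if any bin deviates by more than tol * (1/bins).
    counts, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    frac = counts / counts.sum()
    return bool(np.all(np.abs(frac - 1.0 / bins) < tol / bins))

rng = np.random.default_rng(2)
uniform_samples = rng.uniform(0.0, 1.0, 50_000)
# A normal distribution squeezed into [0, 1], for comparison.
normal_samples = np.clip(rng.normal(0.5, 0.15, 50_000), 0.0, 1.0)

print(looks_even(uniform_samples))  # True
print(looks_even(normal_samples))   # False
```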

If this works, I think one could say that the central limit theorem arises from a non-updating model. An independent random variable needs a network to keep it random.

For machine learning, I think this could be important in somehow mitigating vanishing gradients.