In a previous blog post I figured out that if you want to mimic a random function, you can train it on samples of previously computed random data.

This was much easier than having the function store all the information about the random behavior itself. To clarify: the target data was a large set of random numbers, and the function adapted to fit it.
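A minimal sketch of the idea, under my own assumptions (the pool size and `pseudo_random` helper are hypothetical, just for illustration): instead of a function that models random behavior internally, precompute a pool of random samples once and have the function replay from it.

```python
import random

# Precompute a pool of random samples once -- this plays the role
# of the "previously computed random data" (the target data).
POOL_SIZE = 1000
random.seed(0)  # fixed seed only so the sketch is reproducible
pool = [random.random() for _ in range(POOL_SIZE)]

def pseudo_random(i):
    """Mimic a random function by sampling previously computed data,
    rather than storing any model of the random behavior itself."""
    return pool[i % POOL_SIZE]

values = [pseudo_random(i) for i in range(5)]
```

The function itself stays trivial; all the "randomness" lives in the sampled data it adapts to.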

I’m pretty sure evolution doesn’t miss such an easy method. For instance, in decision making I speculate that when a majority decision is needed, the brain wants to avoid ending up with an equal number of votes for each choice. Maybe sampling from random clusters could help here.

So my idea is that a deadlock could be avoided, while still getting results fast, by using the predicted results of enough samples as the voting basis. This way a decision can be made at any time, although as time progresses the decision gets more accurate. The "could have done otherwise" score then gets lower, meaning you are less likely to think your decision was flawed in any way.

Then, if two choices converge at 50 % at decision time, you simply predict the curve some time ahead and pick the first winning choice.
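The two paragraphs above could be sketched roughly like this (everything here is my own assumed setup: the `anytime_vote` name, the sample counts, the 1 % tie threshold, and the linear extrapolation of the vote-share curve are all hypothetical choices, not a fixed method):

```python
import random

random.seed(1)  # fixed seed only so the sketch is reproducible

def anytime_vote(sample, n_samples=200, horizon=20):
    """Accumulate binary votes one sample at a time.

    A decision is available at any point; it simply gets more
    accurate as more samples arrive. If the two choices are still
    near 50 % at decision time, extrapolate the recent trend of
    the vote-share curve ahead and pick the first winning choice.
    """
    counts = {0: 0, 1: 0}
    share_history = []  # running vote share for choice 1
    for i in range(1, n_samples + 1):
        counts[sample()] += 1
        share_history.append(counts[1] / i)

    share = share_history[-1]
    if abs(share - 0.5) > 0.01:  # clear majority already
        return 1 if share > 0.5 else 0

    # Near 50 %: fit a crude linear trend to the recent part of the
    # vote-share curve and project it `horizon` steps ahead.
    recent = share_history[-horizon:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    projected = share + slope * horizon
    return 1 if projected >= 0.5 else 0

# A sampler slightly biased toward choice 1 should win.
decision = anytime_vote(lambda: 1 if random.random() < 0.6 else 0)
```

The point of the sketch is the anytime property: the loop can be cut off at any iteration and still yield a usable majority, with the tie-break extrapolation only kicking in when the shares have converged.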

Thought Mechanics In Progress // Per Lindholm
