I ran an experiment in error recognition with the iris dataset. The idea was to add an extra column to the target examples: a 1 for true examples and a 0 for randomly generated ones.
So what I did first was to train the network on just the true examples.
Then I generated twice that amount of random input data and ran it through the model, predicting which class each random sample would belong to. These falsely predicted samples I could then use as false examples. That is, I added a 0 column to their target data.
This data together with the old data gave me a new training set for my second model: one which could tell if the input was wrong and still predict the correct output. It took longer to converge, though.
So with this setup I could tell whether a sample looked like it came from the random generator: the first output column comes out close to 0.0 for random data, and similarly around 0.99 for data coming from true examples.
Without that column, the network would just classify by the method of elimination, which is not so good.
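Here is a minimal sketch of the idea as I read it. Assumptions on my part: scikit-learn's iris loader and MLPs stand in for whatever networks were actually used, the random data is drawn uniformly over the observed feature ranges, and the layer sizes are arbitrary choices, not from the original experiment.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Real iris examples; one-hot encode the three classes.
X, y = load_iris(return_X_y=True)
Y = np.eye(3)[y]

# Step 1: train the first network on just the true examples.
m1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
m1.fit(X, y)

# Step 2: generate twice as many random inputs (uniform over the
# observed feature ranges, an assumption) and let the first model
# predict which class each one would belong to.
n_fake = 2 * len(X)
X_fake = rng.uniform(X.min(axis=0), X.max(axis=0), size=(n_fake, X.shape[1]))
Y_fake = np.eye(3)[m1.predict(X_fake)]

# Step 3: prepend the authenticity column: 1 for real, 0 for random.
Y2 = np.vstack([
    np.hstack([np.ones((len(X), 1)), Y]),
    np.hstack([np.zeros((n_fake, 1)), Y_fake]),
])
X2 = np.vstack([X, X_fake])

# Step 4: train the second network on the combined data. A regressor
# is used here so the extra column and the class scores can share
# one output vector.
m2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
m2.fit(X2, Y2)

# The first output column should sit near 1.0 for real iris rows
# and near 0.0 for the random ones.
print(m2.predict(X[:3])[:, 0])
print(m2.predict(X_fake[:3])[:, 0])
```

The remaining three output columns still carry the class prediction, so one forward pass gives both "is this input plausible?" and "which class is it?".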
The code is just for me to remember the idea, not anything to copy. Maybe to correct, though : )