I wonder if this learning technique could work. The idea is to split the training data into two or more groups, each belonging to a separate model().
So the way I would test this is to look at some sort of confidence score for each classification the two models make. Like with a one-hot encoding (a long binary vector), where you want the target position to be 1.0 and the rest to be zeros.
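One plausible way to score that "close to one-hot" idea (my assumption, the post doesn't pin down a formula) is the margin between the top activation and the runner-up: a clean one-hot output scores near 1.0, a muddled output scores near 0. A minimal sketch:

```python
import numpy as np

def confidence(output_vec):
    """Hypothetical confidence score for a one-hot-style output vector:
    the gap between the largest entry and the second largest.
    A perfect one-hot output (one 1.0, rest 0.0) scores 1.0."""
    v = np.sort(np.asarray(output_vec, dtype=float))
    return v[-1] - v[-2]

print(confidence([0.05, 0.90, 0.05]))  # decisive output, high score
print(confidence([0.40, 0.35, 0.25]))  # muddled output, low score
```

Simply taking the maximum entry would also work; the margin version additionally penalizes outputs where two classes are nearly tied.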
So then the idea is to move each sample into the group of the model that classifies it with the highest confidence. This way the two models compete to have data in their groups.
Then, once each model reaches 100% confidence on the training data within its own group, the output for a test sample would be taken from whichever model gives the highest confidence score.
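The whole loop described above can be sketched end to end. This is my own toy reconstruction, not the notebook's code: nearest-centroid classifiers stand in for the real model(), a softmax over negative distances stands in for the one-hot output vector, and the data is two synthetic blobs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two well-separated 2-D Gaussian blobs, two classes.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

class CentroidModel:
    """Stand-in for model(): nearest-centroid classifier whose softmax
    over negative distances plays the role of the one-hot output."""
    def fit(self, X, y):
        self.centroids = np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)

# Random initial split of the data into two model groups.
group = rng.integers(0, 2, len(X))
models = [CentroidModel().fit(X[group == g], y[group == g]) for g in (0, 1)]

for _ in range(5):  # a few rounds of competition
    # Confidence = each model's top class probability per sample.
    conf = np.stack([m.predict_proba(X).max(axis=1) for m in models], axis=1)
    # Each sample moves into the group of the more confident model.
    group = conf.argmax(axis=1)
    for g in (0, 1):
        if len(np.unique(y[group == g])) == 2:  # skip degenerate groups
            models[g] = CentroidModel().fit(X[group == g], y[group == g])

# Test-time rule: the answer comes from whichever model is more confident.
probs = np.stack([m.predict_proba(X) for m in models], axis=1)  # (n, model, class)
winner = probs.max(axis=2).argmax(axis=1)
pred = probs[np.arange(len(X)), winner].argmax(axis=1)
print("accuracy:", (pred == y).mean())
```

One thing the sketch guards against: reassignment can empty a group or leave it with only one class, so a model is only refit while its group still contains both classes.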
My first try (Jupyter notebook): Sort Data into Model() Groups
98.11% accuracy score