Machine Learning Math – Is It Faster To Reformulate A Problem Than To Solve It?

Inspired by the P versus NP problem, I wonder whether it is generally faster to reformulate a problem than to solve it.

What does reformulation look like in machine learning, then? What inverse relationships could exist when a reformulation is too effective? Does the network reformulate in a safe way? And could you recognize models that are at risk of certain faulty classes of decisions?

I ask because I, at least, tend to use the same-looking model for many different problems.

I wonder if one strategy would be to build datasets that contain not just data but also reformulation examples, as inspiration.
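Such a dataset might pair each problem with one or more known equivalent reformulations. A minimal sketch of what the records could look like (all field names and entries are invented for illustration):

```python
# Hypothetical sketch of a "reformulation dataset": each record pairs a
# problem statement with an equivalent reformulation and a note on the
# benefit. These are classic textbook pairs, used only as placeholders.
reformulation_dataset = [
    {
        "problem": "minimize ||Ax - b||^2 by gradient descent",
        "reformulation": "solve the normal equations A^T A x = A^T b",
        "benefit": "turns an iterative search into a direct linear solve",
    },
    {
        "problem": "separate classes that are not linearly separable",
        "reformulation": "map inputs through a feature map or kernel",
        "benefit": "the problem becomes linear in the new space",
    },
]

for example in reformulation_dataset:
    print(example["problem"], "->", example["reformulation"])
```

A model trained alongside such pairs would see not only raw data but explicit examples of problems being recast, which is one way the "inspiration" idea above could be made concrete.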