Machine learning and compression have always been closely tied together. Learning is essentially trying to find the "rules" that describe the data rather than memorizing all the data.
I remember implementing a paper older than me in our "Information Theory" course at university that treated the construction of a decision tree as compression. Their algorithm considered the cost of sending the decision tree itself plus all the exceptions to it. If a node in the tree increased the overall message size, it would simply be pruned. This way they ensured that the tree wouldn't draw conclusions from very little data and would only capture the big patterns in the data.
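Here's a minimal sketch of that pruning idea in Python, just to make the "message size" intuition concrete: the message is the encoded tree plus the exceptions it misclassifies, and a subtree is collapsed to a leaf whenever the leaf version is cheaper. The cost formulas and the fixed per-split cost are placeholder assumptions of mine, not the paper's actual coding scheme.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prediction: int                                  # majority class at this node
    examples: list = field(default_factory=list)     # (features, label) pairs reaching the node
    children: list = field(default_factory=list)     # empty list => leaf

def exception_cost(node) -> float:
    """Bits to list the examples the node's prediction gets wrong (log2 of n-choose-e)."""
    n = len(node.examples)
    e = sum(1 for _, label in node.examples if label != node.prediction)
    return math.log2(math.comb(n, e)) if n else 0.0

def tree_cost(node) -> float:
    """Bits to encode a subtree plus its exceptions (toy coding scheme, not the paper's)."""
    if not node.children:
        return 1.0 + exception_cost(node)            # 1 bit says "leaf", then the exceptions
    # 1 bit says "split", plus an assumed fixed cost per split, plus the children
    return 1.0 + 8.0 + sum(tree_cost(c) for c in node.children)

def prune(node) -> None:
    """Bottom-up: collapse a subtree to a leaf whenever that shortens the message."""
    for child in node.children:
        prune(child)
    if node.children and (1.0 + exception_cost(node)) <= tree_cost(node):
        node.children = []                           # the subtree didn't pay for itself
```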
Fundamentally it is still compression; it's just a far better method of compression than any of the models we had before.
EDIT: The paper I'm talking about is "Inferring decision trees using the minimum description length principle" by J. Ross Quinlan & Ronald L. Rivest.