It depends on the task, but in general a lot of the models have fallen into a dark pattern of Goodhart's Law: they target the benchmarks but suffer at everything else.
As an example, GPT-4 used to correctly solve variations of the wolf, goat, and cabbage problem when given a token-similarity hack (e.g. using emojis instead of nouns to break the surface similarity with the standard form of the question), but with the most recent updates it now fails even with the hack, whereas mistral-large is the only one that doesn't need the hack at all.
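To make the hack concrete, here's a minimal sketch of what I mean by swapping the nouns for emojis. The template wording and the substitution map are just illustrative, not any canonical phrasing:

```python
# Sketch of the "token similarity hack": restate the river-crossing puzzle
# with emoji stand-ins so the model can't just pattern-match the canonical
# wolf/goat/cabbage phrasing. The prompt text below is illustrative only.

PUZZLE_TEMPLATE = (
    "A farmer must ferry a {a}, a {b}, and a {c} across a river. "
    "The boat holds only the farmer and one item. Left alone together, "
    "the {a} eats the {b}, and the {b} eats the {c}. "
    "How does the farmer get all three across safely?"
)

# Standard nouns vs. emoji stand-ins with no surface similarity
# to the textbook version of the puzzle.
STANDARD = {"a": "wolf", "b": "goat", "c": "cabbage"}
EMOJI = {"a": "🐺", "b": "🐐", "c": "🥬"}

def build_prompt(symbols: dict) -> str:
    """Fill the template with either the standard nouns or the emoji stand-ins."""
    return PUZZLE_TEMPLATE.format(**symbols)

if __name__ == "__main__":
    print(build_prompt(STANDARD))  # canonical form the model has memorized
    print(build_prompt(EMOJI))     # variant that forces it to actually reason
```

You can then send both prompts to whatever model you're testing and compare whether the answers stay consistent.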
Interesting. That's not something I'd heard about until now, but I'll definitely look into it.