About 10 years ago, I read a paper that suggested mitigating a rubber hose attack by priming your sys admins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf
Essentially, you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text, then test them by giving them a sequence of text to complete and recording how quickly and accurately they respond. Repeat until the accuracy reaches an acceptable level.
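Roughly, the login check then reduces to comparing the user's performance on the trained sequences against fresh decoy sequences. A toy sketch of that comparison, with names and thresholds made up for illustration rather than taken from the paper:

```python
import statistics

def authenticate(trial_results, advantage_threshold=0.15):
    """trial_results: list of (is_trained_sequence, accuracy, response_time_s) tuples."""
    # Split trials into sequences the user was implicitly trained on vs decoys.
    trained = [(acc, rt) for is_trained, acc, rt in trial_results if is_trained]
    decoys = [(acc, rt) for is_trained, acc, rt in trial_results if not is_trained]
    if not trained or not decoys:
        return False

    # Implicit learning should show up as higher accuracy (and faster
    # responses) on the trained sequences than on the decoys.
    acc_gap = statistics.mean(a for a, _ in trained) - statistics.mean(a for a, _ in decoys)
    rt_gap = statistics.mean(t for _, t in decoys) - statistics.mean(t for _, t in trained)

    return acc_gap >= advantage_threshold and rt_gap > 0

# Example: a genuine user does noticeably better on the trained sequences.
trials = [(True, 0.90, 1.2), (False, 0.55, 1.9),
          (True, 0.85, 1.3), (False, 0.60, 1.8)]
print(authenticate(trials))  # -> True
```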
Even if an attacker kidnaps the user and sends in a body double armed with your user's ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker tries to relay the prompts to the real user over a video call, the extra round trip of the user reading each prompt and dictating the response should introduce a detectable delay.
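That relay check could piggyback on the same timing data you already collect per prompt; a toy version, with the baseline and threshold again invented for illustration:

```python
import statistics

def looks_relayed(response_times_s, baseline_median_s, slowdown_factor=2.0):
    # A doppelganger reading prompts to the captive user over a call adds a
    # roughly constant delay to every response, so flag any session whose
    # median response time is far above the user's enrolled baseline.
    return statistics.median(response_times_s) > slowdown_factor * baseline_median_s

print(looks_relayed([3.1, 2.8, 3.4], baseline_median_s=1.2))  # -> True
```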
The only remaining avenue for the attacker is, after dumping the body of the original user, to kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.
I am well aware of learning, but people tend to learn through comprehension and understanding. Completing phrases without understanding the language (or even the concept of language) is the realm of LLMs and Scrabble players.