relaxdontdoit

joined 1 year ago
[–] [email protected] 1 point 1 year ago

@Gaywallet I'm coming to think that expecting models to produce human-like values and underlying representations is a mistake, and we should recognize them as cognition tools which are entirely possible to misuse.

Why? LLMs get worse at tasks as you train them with RLHF, and those with access to the base models will use them without filtering, gaining a significant intelligence-at-scale advantage. The masses will be given the moralized, literally dumber version.

[–] [email protected] 1 point 1 year ago

@Acetamide This has been coming for a long time. I'm glad the community is getting a chance to move on to spaces which aren't just trying to suck blood.