this post was submitted on 23 Jul 2023
112 points (95.2% liked)

Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
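The article does not publish the model, the ten category names, or any code, so the following is only an illustrative sketch of the general idea: map a text onto a fixed set of social-emotion categories and flag likely norm violations. The category labels and keyword cues below are placeholders, not the system's actual taxonomy.

```python
from collections import Counter

# Hypothetical cue lexicon. The article names neither the ten categories
# nor the trained model, so these labels and keywords are placeholders
# standing in for a learned classifier.
CUES = {
    "guilt": {"sorry", "apologize", "regret"},
    "shame": {"ashamed", "embarrassed"},
    "pride": {"proud", "accomplished"},
}


def detect_social_emotions(text: str) -> Counter:
    """Count keyword hits per (hypothetical) social-emotion category."""
    tokens = set(text.lower().split())
    hits = Counter()
    for category, keywords in CUES.items():
        hits[category] = len(tokens & keywords)
    return hits


def flag_norm_violation(text: str, threshold: int = 1) -> bool:
    """Flag a text when a 'negative' social emotion meets the threshold,
    on the assumption (from the article) that expressed emotions like
    guilt or shame signal an underlying norm violation."""
    hits = detect_social_emotions(text)
    return any(hits[c] >= threshold for c in ("guilt", "shame"))
```

A real system would replace the keyword lookup with a trained language model and validate it on labeled corpora, as the article says was done with two large short-text datasets; the scaffolding above only shows the categories-to-violation mapping.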

[–] [email protected] 4 points 1 year ago

Unless this is just for identifying social-norm violations in written government-to-government communication, this seems vastly... infeasible, I guess. Norms change over time, so you'd have to keep updating the model every time someone finally notices a shift has occurred. If anything, it might produce a completely new set of grammar and phrasing expectations, driven by feedback from a ruleset that's unlikely to change much. As in: if you thought politically correct phrasing was annoying now, just wait until the AI says you're not doing it well enough.

Idk though, this isn't my specialty area. Anyone care to tell me how I'm wrong? What good can this really do?

(I swear I did read the article, it just isn't clicking over the sound of my loud pessimism)