They were trying to use AI to sort submissions by genre, length, etc. Just grunt work. In theory this should be a perfect use case for AI. Seems like an overreaction.
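For anyone curious what that kind of triage looks like in practice, here's a minimal sketch: classify the genre with an LLM and count the words locally. The model name, genre list, and prompt are my own illustrative assumptions, not anything Angry Robot or Storywise has described.

```python
from openai import OpenAI

# Illustrative genre buckets; a real slush-pile workflow would use the
# publisher's own categories.
GENRES = ["science fiction", "fantasy", "horror", "crime", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage(manuscript_text: str) -> dict:
    """Return a rough genre label and word count for a submission."""
    word_count = len(manuscript_text.split())
    excerpt = manuscript_text[:4000]  # an excerpt is usually enough to guess genre

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, purely for illustration
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the manuscript excerpt into one of: "
                    + ", ".join(GENRES)
                    + ". Reply with the genre only."
                ),
            },
            {"role": "user", "content": excerpt},
        ],
    )

    return {
        "genre": response.choices[0].message.content.strip().lower(),
        "word_count": word_count,
    }
```

Whether you trust a vendor with the manuscript in the first place is, of course, exactly the objection people are raising.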
Books
Book reader community.
AI and LLMs have earned a bad reputation in creative circles because of the push to use them to eliminate creative jobs. Companies that want to build tools for creative communities should know this and not lean on AI-hype marketing.
That said, in my opinion Storywise looks fishy as heck. It's probably a few tech bros wrapping Azure's DIY GPT. They pinky-promise not to use your manuscripts as training data, but there's no contact info anywhere on their website, not even in the ToS. So when they inevitably break that promise or have a data breach, how do you sue them?
I'd say the problem is even simpler than that: if you have any legal issue with them, AI-related or otherwise, how does your lawyer even contact them?
It's worse than that: rich white techbros stole our images, music, texts, etc. to replace us with mindless machines.
Initially I agreed with you, but then I realized: these authors don't want their work anywhere near an AI, ever. It could too easily end up being used for things they don't agree with.
Of course, if they publish it at all, it'll still be available to be used that way against their wishes, but at least they wouldn't be supporting it directly.
I think Angry Robot should add a checkbox (checked by default, even) that lets the author opt out of the AI-based classification step. Opting out means their story takes more of Angry Robot's time to review, so it's less likely to make it in... but if the story is good, that won't matter.