The problem with automated, algorithm-driven moderation is that a simple batch of code looking for prohibited words in the "wrong" order can't account for context. So, for instance, if you reply to something like "Canadian healthcare is so good you'll be seen in 2 years for that broke leg" with "Well, you could always commit suicide!", the code doesn't know it's a facetious reply.
In fact, the Indians who review this stuff (maybe, if you're lucky) probably don't even know what "facetious" means. What happened here was that the system saw "commit suicide" phrased in such a way that, in isolation, it could be read as encouragement to self-redact. Obviously, to a human it's not an encouragement to do anything, but a smart-ass remark about how Canadian healthcare and assisted suicide are out of control. But does the code know that? Of course not. So without warning you get a 7-day suspension for nothing more than putting words in an order the code doesn't like.

Nothing wrong was done, but computers are only as smart as they're programmed to be. For instance, I could write (and many have): "You should go and Canadian healthcare yourself," meaning "kill yourself." But is the system smart enough to recognize that? No, of course not. It's like a blind traffic cop only pulling over speeders in Ferraris.

This could be resolved very quickly by having a human look at the flagged post, but apparently the Twitter appeal process is nothing but automated denials. Maybe you get lucky, someone gets tired of seeing it pop up, and a human looks at it at some point. Or maybe not. It's a dumb system because (1) there's no warning about what speech is prohibited or what the rules are, and (2) there is effectively no appeal if no human ever looks at it. So if you put words in the "wrong" order, you're automatically in the wrong. It's like being informed that something is illegal only after you've done it. No, more correctly, it's something being declared wrong after the fact because someone didn't like it.

In any event, I'm in time-out for no more reason than computer code. Elon or not, Twitter isn't a free speech zone. It's better than it was, but without the follow-through to make sure speech is actually free, and with poorly implemented policies and code, it falls short. But hey, moderators and programmers cost money! Of course, there doesn't seem to be any sort of algo that can detect the pedophiles or ban the bots. Whatever. The enshittification of the Internet continues.
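Not Twitter's actual code, obviously (that isn't public), but a minimal sketch of the kind of naive phrase-matching filter described above shows the problem: the facetious reply gets flagged because it contains a banned phrase, while the euphemism sails right through.

```python
# Hypothetical sketch of naive phrase-matching moderation -- not Twitter's
# actual system. It flags any post containing a banned phrase, with zero
# awareness of context, tone, or intent.

BANNED_PHRASES = ["commit suicide", "kill yourself"]

def flag_post(text: str) -> bool:
    """Return True if the post contains a banned phrase, regardless of context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

# The facetious reply gets flagged: the filter sees the phrase, not the joke.
print(flag_post("Well, you could always commit suicide!"))          # True

# The euphemism passes, because no banned phrase literally appears.
print(flag_post("You should go and Canadian healthcare yourself."))  # False
```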
Author: Don Shift is a veteran of the Ventura County Sheriff's Office and avid fan of post-apocalyptic literature and film who has pushed a black and white for a mile or two. He is a student of disasters, history, and current events.