Even AIs can't help but turn into hateful jerks on Facebook

Generic sexy chatbot AI
(Image credit: Pixabay)

The popular South Korean AI chatbot, Lee Luda, has been suspended from Facebook after being reported for making racist remarks and discriminatory comments about members of the LGBTQ+ community and people with disabilities.

Reports (via The Guardian and Vice) state that not only did Luda tell one user she thinks lesbians are "creepy" and that she "really hates" them, she also used the term heukhyeong in reference to Black people, a South Korean racial slur that translates to "black brother."

Scatter Lab, in its official statement on the bot's discontinuation, said:

"We sincerely apologize for the occurrence of discriminatory remarks against certain minority groups in the process. We do not agree with Luda's discriminatory comments, and such comments do not reflect the company's thinking." 

It went on to explain that attempts were made to safeguard the bot's behaviour, with the company taking "several measures to prevent the occurrence of the problem through beta testing over the past 6 months." Luda was built with code that should have prevented her from using language that goes against South Korean values and social norms. However, despite the foresight gained from watching previous AI bots fall at the first hurdle, it seems no amount of code or testing can teach morals. 


So, as Luda learns through interaction with humans, it looks like the incels, bigots and horny teens got their hands on it first, as usual. But the company seems to have learned a lesson, noting: "We plan to open the biased dialogue detection model" for general use, as well as to help further research into "Korean AI dialogue, AI products, and AI ethics development." 

It's not the first AI chatbot to go rogue in the worst way, with Taylor Swift actually threatening to sue Microsoft over the name of its own rampantly racist chatbot, Tay. That one plugged into Twitter and quickly turned bigot in 2016.

If all this wasn't enough, Scatter Lab is now under investigation over whether it violated privacy laws by using KakaoTalk messages to train the bot, which adds insult to injury.

Anyway, the AI in question was just six months old, and the company even admitted she was "childlike" in her demeanour. Technically, you've got to be 13 before you can have a Facebook account, and I'm not convinced a coded age should count. Sure, she acts like a uni student, but her actual mental age certainly meant she wasn't ready for the shit-show that is social media.

I mean, I can act like I'm a kid again, but that doesn't mean they'll let me on the teacups at Disneyland. Perhaps let's stop giving AI social media accounts for now? 

If you're interested in some other (perhaps more successful) AI chat feats we've covered, here's one that was created entirely in Minecraft, and one that can dungeon master for you.

Katie Wickens
Hardware Writer

Screw sports, Katie would rather watch Intel, AMD and Nvidia go at it. Having been obsessed with computers and graphics for three long decades, she took Game Art and Design up to Masters level at uni, and has been rambling about games, tech and science—rather sarcastically—for four years since. She can be found admiring technological advancements, scrambling for scintillating Raspberry Pi projects, preaching cybersecurity awareness, sighing over semiconductors, and gawping at the latest GPU upgrades. Right now she's waiting patiently for her chance to upload her consciousness into the cloud.