AI regulation
Attempts to understand and regulate it fail because we're human


We all know our Google smart devices listen to us, right?

Shout out to my homies planning The Revolution™ in front of a television shaped like ‘one of those rap guys’ girlfriends’ in the back.

“Algorithm” has become a dirty word to consumers and sellers alike over the years because the flow of information is so one-sided. Sure, it’s creepy to talk about having food poisoning in your living room and then seeing ads for baby wipes and electrolytes on your phone in the bathroom. No arguing that. But I’ve been on the marketing side of this as well.

When I said ‘We need to include Pro-Lactation Bloggers in Facebook blasts for The Ultra Nip Soother 9,000’ (not its real name [not for lack of desperate lobbying on my end]), it was an obvious move in theory. Putting in that work only to watch the campaign net the company followers who never saw another thing from them organically until we pulled out the wallet again? That got me a little hostile.

And Google, listening as always, picked up on a way to communicate how exactly some algorithms work for people on both sides. For instance, when you ask ‘Why’d my husband get approved for credit card X when I’ve got a higher credit score keeping me in nice office skirts?’, Google’s shiny new deal can supposedly give you an answer.

On one hand, this move by Google makes sense. It increases transparency, transparency builds trust, and trust builds profits.

On the other hand, if we go back to the Apple Card-esque hypothetical I gave, I wonder…is there any answer that would be satisfactory?

Let me pull up my parents as an example.

Mommy and Chief are around the same age, same race, each married twice, each working since their teens, and each GREAT with money management. If they both apply for a personal plane loan online so they can do a Snoopy vs Red Baron flight routine for their anniversary, and my stepdaddy gets a better deal than my mother, and the AI Explainer says ‘Well, ma’am, you were born in another country, and that influenced the decision’, that’s obviously 100% unacceptable.
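To make that scenario concrete, here's a minimal, entirely hypothetical sketch (nothing to do with Google's actual product; every feature name and weight here is invented) of how an explainer over a toy linear scoring model could surface a protected attribute as part of the decision:

```python
# Hypothetical toy model: with linear scoring, each feature's
# attribution is simply weight * value, so an "explainer" can rank
# contributions and an "audit" can flag protected attributes that
# actually moved the score. All names and numbers are made up.

WEIGHTS = {"credit_score": 0.6, "income": 0.3, "foreign_born": -0.2}
PROTECTED = {"foreign_born"}

def explain(applicant):
    """Return each feature's contribution to the score, largest first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

def audit(applicant):
    """List protected features that actually influenced the score."""
    return [f for f, c in explain(applicant) if f in PROTECTED and c != 0]

mom = {"credit_score": 0.9, "income": 0.8, "foreign_born": 1}
print(explain(mom))
print(audit(mom))  # the explanation exposes an unacceptable factor
```

The point of the sketch: the explanation works exactly as designed, and what it reveals is still 100% unacceptable. Transparency surfaces the problem; it doesn't fix it.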

The program is still in beta, but I think it’s missing a key point right out of the gate: knowing the crappy motives behind a decision doesn’t make anyone feel better about it.

Providing a window isn’t enough to build trust. We have to like what we see when we look through it.

The days of ‘Just blame the algorithm’ are coming to a close. If you’re still working on cleaning up your company’s act, put more money into adequate training, re-hiring, and not being a d-bag before you try to set up transparency measures.

Opening the curtains is great, Google! Just make sure the house is clean.


