YouTube’s way of flagging videos

In recent months, right after the series of "Adpocalypse" controversies YouTube faced, YouTube decided to build a new system to flag videos that weren't "advertiser friendly". Did the new system work? Well, kind of, depending on who you ask. According to Google's CEO, the new system removed over 8 million videos from YouTube, and of that number 6.8 million were first flagged by the computers, which is roughly 85% of the total.
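As a quick sanity check on those figures (the two counts are the ones quoted above; the percentage simply follows from them):

```python
# Back-of-the-envelope check of the machine-flagged share of removed videos.
total_removed = 8_000_000      # "over 8 million" videos removed
machine_flagged = 6_800_000    # 6.8 million first flagged by the automated system

print(f"{machine_flagged / total_removed:.0%}")  # roughly 85%
```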

Now the question is: does it really work? Does it flag only bad content, or does it also take down content that follows YouTube's policies? And that is the core of the problem, because the system is not perfect (nothing really is), yet YouTube is leaning heavily on it to catch the "bad" videos. I am sure that a lot of the videos that get flagged really aren't advertiser friendly. But a fair share of them are perfectly fine.

This is where we get to machine learning and whether it will be good or not. No matter how hard we try, we won't be able to program emotions or common sense into a computer (at least not any time soon). So how much should we depend on a computer to do work that involves a lot of common sense? Yes, these systems do the work for us, but at what cost?

In my opinion, we should keep a check on these systems, with us humans monitoring their behavior and making sure the computer actually does what it is supposed to do without harming others, just as YouTube is trying to do. I am a little scared of what a system like that could do unchecked, but I hope it never comes to that.
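To make that "keep a check on it" idea concrete, here is a minimal sketch of how a human-in-the-loop review step could sit on top of an automated flagger. Everything in it is hypothetical (the classifier, the threshold, the function names); it only illustrates the pattern of letting the machine flag and letting a person confirm before anything is taken down.

```python
from dataclasses import dataclass

# Hypothetical sketch: an automated flagger proposes, a human reviewer decides.
# The classifier, threshold, and data model are made up for illustration.

@dataclass
class Video:
    video_id: str
    title: str

def classifier_score(video: Video) -> float:
    """Stand-in for a real ML model; returns a 'not advertiser friendly' score in [0, 1]."""
    return 0.0  # placeholder

REVIEW_THRESHOLD = 0.8  # send to human review above this score; the value is arbitrary

def triage(videos: list[Video]) -> tuple[list[Video], list[Video]]:
    """Split videos into (needs_human_review, leave_alone) based on the model score."""
    flagged, cleared = [], []
    for v in videos:
        if classifier_score(v) >= REVIEW_THRESHOLD:
            flagged.append(v)   # a human looks at these before any action is taken
        else:
            cleared.append(v)   # the machine alone never removes anything
    return flagged, cleared
```

The point of the design is that the model only routes videos into a review queue; removal stays a human decision, which is the kind of oversight argued for above.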
