jawafour wrote:I imagine that the only way to effectively monitor the social media platforms is for all uploads to be viewed and assessed ahead of being made available? I wonder if this would be feasible? I can't really think of an alternative approach - things like copyrighted sound and images can be identified via technological comparisons, but I don't think the tech is in place to check other "original" examples of imagery and speech.
It's difficult, but there are things that can be done. Speech-to-text is getting better (YouTube already has often-decent automatic captioning). You can then run some natural language processing on the transcript to get an idea of what subject is being discussed. There are legitimate reasons to discuss subjects that are themselves inappropriate, so this can't trigger automatic removal, but it can highlight which videos are most in need of manual review.
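To make the idea concrete, here's a minimal sketch of that kind of triage: score a transcript against a term list and queue the video for manual review if the score crosses a threshold. The terms, weights, and threshold are all made up for illustration; a real system would use a trained classifier, not a keyword list.

```python
# Hypothetical triage sketch: score an automatic-caption transcript and
# flag high-scoring videos for manual review. All terms/weights invented.
FLAGGED_TERMS = {"weapon": 2.0, "attack": 1.5, "recruit": 1.0}
REVIEW_THRESHOLD = 2.5

def review_priority(transcript: str) -> float:
    """Crude priority score: sum of weights for flagged words in the text."""
    words = transcript.lower().split()
    return sum(FLAGGED_TERMS.get(word, 0.0) for word in words)

def needs_review(transcript: str) -> bool:
    """True if the transcript scores high enough to queue for human review."""
    return review_priority(transcript) >= REVIEW_THRESHOLD
```

The point isn't the scoring itself, it's the prioritisation: nothing gets auto-removed, but reviewers see the riskiest uploads first.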
On the manual review of uploads before they go live, I also don't think it has to be all or nothing. A variety of factors could contribute to whether an upload needs review before going live: for example, a new account with no previous uploads and not linked to a Google account might be deemed "riskier", making its uploads more likely to require manual review before publication. The service might also adopt a tiered approach to content, so that unreviewed content goes live but is disfavoured in search results compared to videos that have been reviewed. Videos could be visibly marked once they have been manually reviewed, so an unsuspecting user knows before watching whether it has been looked over. Similarly, a video might be marked as pending review if it has already been reported by trusted users.
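That risk-weighting could be as simple as adding up a few account signals. Here's a rough sketch of the idea; the specific factors, weights, and threshold are assumptions I've invented for illustration:

```python
# Hypothetical pre-publication risk score built from account signals.
# Factors and weights are illustrative only, not any platform's real policy.
from dataclasses import dataclass

@dataclass
class Account:
    upload_count: int      # previous uploads on this account
    linked_identity: bool  # e.g. linked to a wider account like Google
    trusted_reports: int   # prior reports against this account from trusted users

def upload_risk(account: Account) -> float:
    """Sum weighted risk signals for an account's next upload."""
    score = 0.0
    if account.upload_count == 0:
        score += 2.0  # brand-new uploader
    if not account.linked_identity:
        score += 1.0  # no linked identity
    score += 0.5 * account.trusted_reports
    return score

def requires_pre_review(account: Account, threshold: float = 2.5) -> bool:
    """True if the upload should be held for manual review before going live."""
    return upload_risk(account) >= threshold
```

Under this sketch a new, unlinked account would be held for review, while an established linked account would publish immediately but could still be deprioritised in search until reviewed.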
Tineash wrote:Just start banning nazi chuds from Twitter and Facebook.
But yes, this is the obvious thing that needs to start happening! We can talk at length about how to differentiate content and catch people trying to game the system, but there is so much blatant stuff out there that it's baffling those people haven't already been banned.