While I understand the desire to have Instagram, and others, police their sites to prevent bullying, I also think it's pointless to a large extent.
The head of Instagram's public policy is right when she states that teens are exceptionally creative. Assholes and bullies have always existed; 25 years ago you at least had a respite at home after school, but now social media makes victims available 24/7.
Not to sound defeatist, but I don't think social media can exist while preventing bullying. For the victims it's binary: either they're being bullied or they're not. So social media needs to prevent 100% of bullying, and that is not going to happen; people will always find a way to be assholes.
In my mind the effort is misplaced, at least for teenagers who often know the bully. It's the schools and parents that need to take action, not some random faceless US corporation.
For bullying that's 100% online... I don't think there's a way around that. You can try to reduce the amount, but you won't be able to eliminate it.
I do not agree with your assertion. Bullying is similar to spam and other unwanted messages. You can never get 100% protection, but you can get pretty far.
Things like WhatsApp not giving you a way to prevent being added to a group (until very recently) are fixable. Victims could not even stop themselves from being constantly added to the "Jimmy go kill yourself" group.
If I translate your post to spam filtering, all efforts there are useless and everyone should disable their spam filters, because either you receive spam or you don't.
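To make the spam-filter analogy concrete, here is a toy illustration (hypothetical phrases and function names, not any real platform's filter): a simple blocklist catches many abusive messages but can never catch all of them, because wording can always be varied.

```python
# Toy sketch of the spam-filter analogy: a blocklist reduces abuse
# without eliminating it. The phrases and names here are illustrative.

BLOCKED_PHRASES = {"kill yourself", "go die"}

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any blocked phrase."""
    msg = message.lower()
    return any(phrase in msg for phrase in BLOCKED_PHRASES)

print(is_flagged("Jimmy go kill yourself"))   # True: caught by the list
print(is_flagged("Jimmy, unalive yourself"))  # False: evades the list
```

The second message slipping through is the whole point of the analogy: filtering gets you pretty far, never to 100%.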
>Bullying is similar to spam and other unwanted messages.
It really isn't. You can mentally deal with a spam email or two a day. It only takes one nasty message or post on social media to devastate a kid for a month, so you need 100% perfect protection and you're not going to get that, regardless of how much monitoring, moderating and AI you throw at the problem.
My argument isn't that we shouldn't try to prevent online bullying, but we need to set our goals for success lower than we'd like. If Instagram tries to prevent bullying, we can't complain when they aren't 100% successful, but people will do just that.
Bullies then figure out how to make new unblocked accounts, stalk you around the internet on other services and spam you there.
I think we need to address bullies directly, figure out their reasoning and help resolve those issues. The problem is that few actually get addressed before it's too late.
>Victims could not even prevent themselves from being constantly added to the "Jimmy go kill yourself" group.
What’s the difference between inviting Jimmy to that group and just messaging him in private asking him to kill himself? Seems like a pointless feature. (Let’s remember that users you’ve blocked could never add you to groups.)
You know they don't actually have to stop 100% of it, right? If they could even get like 50%, that could actually change the psychology of bullies to stop being bullies, because they're learning that the behavior isn't going to work for them. If even a small percentage of the millions of users could be "corrected" into being less mean, that seems worth it.
The schools and parents absolutely should take action, but there's a lot that social media sites can do to give people (not just kids) the tools they need to prevent bullying, harassment, threats and other forms of abuse.
> You can try to reduce the amount, but you won't be able to eliminate it.
That's fine. Just because you can't completely solve a problem doesn't mean you shouldn't take steps to mitigate it. Perfect is the enemy of good, etc.
Not sure why people are suggesting shadow-banning doesn't work. Shadow bans have existed for a long time; message forums and MMOs have used this method to deal with toxic community members, and it has already been shown to be successful.
The way they measure the decrease in toxicity is based on support tickets. When organizations implement this feature, a reduction in complaints means they don't require as much support staff to deal with it. It saves them money.
The logic behind it is simple: toxic community members want a voice to be heard. This method sends players the message that if you post something offensive or unwelcome, it'll be voted down and no one will see it.
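The mechanism described above can be sketched in a few lines (a minimal illustration with made-up names, not any forum's actual implementation): a shadow-banned user's posts are stored normally and stay visible to the author, but are filtered out of everyone else's feed, so the bully keeps shouting into the void.

```python
# Minimal sketch of shadow-banning. Usernames and data layout are
# illustrative assumptions, not a real platform's schema.

shadow_banned = {"troll42"}            # users flagged by moderators
posts = [
    ("alice", "nice photo!"),
    ("troll42", "something toxic"),
    ("bob", "great trip"),
]

def visible_feed(viewer: str) -> list:
    """Return the posts this viewer can see: everything except posts
    by shadow-banned authors, unless the viewer IS that author."""
    return [
        (author, text)
        for author, text in posts
        if author not in shadow_banned or author == viewer
    ]

print(visible_feed("alice"))    # the toxic post is hidden
print(visible_feed("troll42"))  # the banned user still sees their own post
```

The key design choice is that the banned user gets no signal that anything changed, which is what removes the payoff of posting.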
>Go after bullying too aggressively and risk alienating users with stricter rules and moderation that feels intrusive at a time when the company is a bright spot of growth for Facebook. Don’t do enough, especially after promising to set new standards for the industry, and risk accusations of placing profits over the protection of kids
Seems these days the second option is always the preferred one. It's easier to ask forgiveness (and make money while you do) than it is to get permission.
>Seems these days the second option is always the preferred one.
Not on the internet, no. The 2/4/8/whatever-chans are alive and kicking. Ditto the good old IRC. There are plenty of other services that are more rough-and-tumble than milquetoast. Any of the newer social media like Mastodon, Minds, or BitChute have rather hands-off moderation; any lawful content has rather good chances of staying there undisturbed.
The difference is one of perception - namely, which ones get favorable opinions in the media as family safe, and which ones get condemned as the seedy, scary underbelly.
I feel it's only a matter of time before they get blocked at the router or at least DNS level in a western European country. They were blocked in New Zealand for a while after the chan-radicalized mass murder there.
Could have, but didn't, because phones are a 1-1 medium and people don't go around cold-calling others to see if they're into white supremacy and conspiracy theories.
Not only that, it wasn't one person, or rather it was not one person alone. He was fed, indoctrinated and enabled by his co-conspirators who egged him on with anti-Islam material. In some sense they got off lightly: if the situation had been reversed and we were talking about an Islamic supremacist board promoting violence against Christians which resulted in a massacre, they might have been candidates for an airstrike. "We kill people based on metadata", remember? https://www.nybooks.com/daily/2014/05/10/we-kill-people-base...
The ban would most likely be for refusal to comply with rules regarding extremist or crime-promoting material. For example the UK has a unit pointed at the major social networks making binding takedown requests against (primarily) Daesh material: https://wiki.openrightsgroup.org/wiki/Counter-Terrorism_Inte...
At the moment the authorities are surprisingly un-worried about white supremacist terrorism compared to Islamist terrorism, of which there is very little on the chans. So they largely escape takedown requirements. But if the extremism escalates that may not stay that way.
Pinning the responsibility for the mass murder on a platform is not much different from blaming books, rock music or TV. There are good things and bad things on the chans. Blocking them would just be a sign of losing control and having no explanation for the deed.
But yes, increasing the pressure on these kinds of online spaces seems to have had an overall negative effect. If you don't know the demographic and how gripers react to pressure, you shouldn't preach about the source of radicalization.
And yes, you will also draw the ire of people who like their freedom and haven't done anything.
The point remains that crime/radicalization can occur over any site/medium. Banning a site or medium seems heavy-handed unless it exists expressly to facilitate crime/radicalization, in my opinion. TOR, for example, has been a known avenue for the spread of child pornography, yet we don't shut it down.
This is a weird quibble. Is your argument simply to let me know I could've used an example with more child porn? Unless you're arguing there's no CP on TOR, my point in the context of the original comment stands.
Because there was a more convenient option. I think they'll just run their own alternative, especially as domains are cheap. Also, what about that namecoin thing? Seems like a workaround.
Who says it's done for 1 person? 4chan, 8chan and the like have a long history of radicalising people and fostering all sorts of hate and abuse. That mass murderer was not the first one.
> Not on the internet, no. The 2/4/8/whatever-chans are alive and kicking.
These places are almost like art: fully dependent on the goodwill of a handful of patrons who regard them as valuable, while many would prefer to see them disappear.
I don't think it's as many as some think. A lot of this censorship banter comes from a very vocal minority who appear to be a majority only in their own minds.
Meanwhile, the actual majority is mostly busy with their jobs and lives... and probably blowing off steam on some image board or news/link aggregator website after work.
Free speech has the same issue as data security or privacy: for most people it doesn't feel like a problem in their daily lives, so the active players can push for whatever is the most profitable option.
The difference is actually that Facebook/Instagram and Google/YouTube dwarf anything else by a wide margin. So when they choose to do something, it's representative and certainly relevant "on the internet". The chans or IRC are a speck in this picture.
The point was placing profits over the protection makes sense from their perspective. They take a big risk whatever option they choose, might as well make money doing it. And they will certainly be forgiven because on the internet memory is very short. Or perhaps many people just don't care. Facebook's user numbers have been steadily increasing for years.
It is an interesting framing game. Neither option considers the consequences for what goes on inside the platform and which kinds of behavior the platform rewards. The only two concerns are indeed profits and the risk that someone accuses you of placing profit over the kids.
This thinking will lead to the following: you will simultaneously alienate people with random, nonsensical moderation while being accused of placing profits over the protection of kids. And both accusations will be right at the same time.
The case described had nothing to do with the victim's account on any specific platform; it was people misusing his picture for lulz. If banned on Instagram they could have continued anywhere else.
But that's a really weird way of looking at the problem, unless you're Instagram.
Continuously closing accounts, blocking content and reporting bullying isn't fixing anything. In some cases I get the feeling that it will make the problem worse.
We want to stop the bullying, not simply remove the evidence of it. Instagram can't prevent bullying, they can only hide parts of it.
> Instagram can't prevent bullying, they can only hide parts of it.
But they can take steps to mitigate it on their platform. If you're doing your best and genuinely helping solve the problem on your platform, then you can say to kids, "hey - come hang out on this platform. It's a safer place. This is a place where you can socialize with less risk than other places."
That's good for the kids, and for the platform.
Arguing against is like saying, "My kid gets bullied, but why should I only solve the problem for him?"
I'm not saying they shouldn't try, but you're making my point: "with less risk than other places".
It's not "no risk", it's not a safe space, it's just safer than it could have been with no policing. And which other spaces is it safer than? School, the kid's room, or 4chan? Because I would argue that 4chan is safer than Facebook or Instagram.
You need to address bullying on an individual level, because there's nothing you can do on a large scale that will eliminate all bullying.
>Arguing against is like saying, "My kid gets bullied, but why should I only solve the problem for him?"
That's not what it is like at all, though. I would say your approach is more similar to "my kid is being bullied at a local community pool, so instead of trying to address the bullying issue occurring to my kid directly, I will simply try to institute a bunch of rules to make that local pool a 'bully-free zone'".
> I will simply try to institute a bunch of rules to make that local pool a 'bully-free zone'
That sounds like a great idea for a pool to implement, and I would feel better about taking my kid there vs. the local arcade that just shrugged their shoulders and said "what do you want, if they don't get bullied here, they'll get bullied elsewhere."
Moderating a text-only message board is much easier than moderating real life interactions (in qualitative terms, not quantitative). There are so many ways to instigate bullying in real life without saying anything inflammatory at all. And that's not even accounting for the fact that kids can be very creative, when it comes to coming up with ways to make someone feel excluded and miserable.
The solution is to fix the school system so that people are not forced to stay with a bunch of classmates they didn't choose, and are not prevented from doing what they would really like to do in life, which results in activities like this as a replacement.
The persecuted should not have to live life on the run, fleeing from one ever-shrinking, insecure refuge to another. Instead, we hold victimizers responsible for the problems they cause, and insist on fixing those problems at the source.
Yeah, in normal adult social circles, if you behave like an asshole you get ignored, and if you force others to put up with you, you get kicked out and the police are called if you refuse.
If the social circle you are in doesn't kick assholes you find another one.
But in school someone can just terrorize everyone else with impunity, because students are usually not expelled, you can't easily choose a school that expels people, and the abusers don't care about being ostracized or potentially expelled because they don't care about being in school in the first place.
It's basically impossible to expel children from the education system with anything resembling a colourable legal case. That's what the right to education means in practice. The rights of the disruptive and abusive to an education do interfere with actual education, but that has limited effect on school's primary purpose: child storage during the working day.
How are you supposed to get children to go to school if it’s voluntary? School isn’t a hellscape for a substantial portion of its inmates by accident. You can’t force people to attend an institution for years of their life that they hate, full of people who they have no connection with, at best, without almost total societal buy in.
Remember kids, your happiness is less important than warehousing you so your parents can go to work.
I think you're giving too much agency to pre-teens. At grade 4, "what they would really like to do in life" was reading comics. Unless you're suggesting that we only let the older students make their own choices, at which point it's far too late, and kids have already been picked on for a decade.
I can only imagine the ostracization if children were allowed to opt out of classes based on classmates: you'd be left with the one picked-on kid all alone because no one wanted to be in their class.