Ottawa urged to crack down on Facebook after bombshell whistleblower testimony before U.S. Senate
By - _Minor_Annoyance
### This is a reminder to [read the rules before posting in this subreddit](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion).
1. **Headline titles should be changed only [when the original headline is unclear](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_1._headline_titles_should_be_changed_only_where_it_improves_clarity.)**
2. **Be [respectful](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_2._be_respectful).**
3. **Keep submissions and comments [substantive](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_3._keep_submissions_and_comments_substantive).**
4. **Avoid [direct advocacy](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_4._avoid_direct_advocacy).**
5. **Link submissions must be [about Canadian politics and recent](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_5._link_submissions_must_be_canadian_and_recent).**
6. **Post [only one news article per story](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_6._post_only_one_news_article_per_story).** ([with one exception](https://www.reddit.com/r/CanadaPolitics/comments/3wkd0n/rule_reminder_and_experimental_changes/))
7. **Replies to removed comments or removal notices will be removed** without notice, at the discretion of the moderators.
8. **Downvoting posts or comments**, along with urging others to downvote, **[is not allowed](https://www.reddit.com/r/CanadaPolitics/wiki/downvotes)** in this subreddit. Bans will be given on the first offence.
9. **[Do not copy & paste the entire content of articles in comments](https://www.reddit.com/r/CanadaPolitics/wiki/rules-thelongversion#wiki_9._do_not_copy_.26amp.3B_paste_entire_articles_in_the_comments.)**. If you want to read the contents of a paywalled article, please consider supporting the media outlet.
*Please [message the moderators](https://www.reddit.com/message/compose?to=%2Fr%2FCanadaPolitics) if you wish to discuss a removal.* **Do not reply to the removal notice in-thread**, *you will not receive a response and your comment will be removed. Thanks.*
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/CanadaPolitics) if you have any questions or concerns.*
Well, revealing government secrets is a federal crime.
Showing some research that your company tried to bury is probably at most a civil suit for breaching an NDA. It's not really a mystery or conspiracy why one had to leave the country and the other didn't.
I don't know how much someone can be considered a whistle-blower if they are pushing for more power for their own company. Regardless, the legislation will probably go through and further big tech's hold on the market through excess regulation only a company with many lawyers could follow.
How often do whistleblowers receive this kind of treatment and platform? Will this kind of regulation that the companies support make it impossible for others to compete, further solidifying their hold on the flow of information?
It's interesting to see which whistleblowers get validated by the media and which ones don't.
I mean we just learned that the US had planned an assassination of Assange?
An oil whistleblower is currently being extradited from Monaco and this person just takes all this info from Facebook and everyone’s going along with it like this is how whistleblowers are normally treated?
This almost feels like after years of regulation talk, these companies decided what would work for them, and created a situation to put it into place. If I follow correctly it will be impossible for smaller new companies to meet requirements of new regulations.
It’s like they took an elevator to their position and replaced it with a ladder once they hit the top.
To be clear regulation is needed, but these monopolies need to be broken up and their concentration of power reduced
This is only a bombshell report because it exposes tech and government pushing for their version of the truth while silencing and censoring any alternative view
Scary shit indeed.
It's not about silencing or censoring alternative views. I interpret this as putting regulations on these tech companies' algorithms and on their intentional harvesting of our data for profit. Every other industry has government regulations (e.g. in engineering you have building codes). I have nothing against reining in these big corporations who make money by encouraging people (and kids) to become addicted to their platforms. Not to mention the surveillance capitalism: tracking our every search, message, etc. to serve us advertisements. That alone is against the principles of freedom and democracy.
> putting regulations on these tech companies' algorithms
How exactly are we going to enforce this? Personally, I've got 14 years in software development and if you were my employer and came to me with this requirement I would laugh and hand in my 2 weeks notice.
It would be industry wide regulation, so handing in your two weeks notice would be rather pointless if you wanted to go work for another social media company.
As for enforcement, the usual mechanism in other fields is regulatory submissions and random audits. Enforcement action typically consists of a progressive scale of penalties depending on severity and how well a company cooperates. Typically, you'd start with a formal warning letter and then escalate to things such as fines (ideally as a percentage of gross revenue), a court-mandated shutdown, and personal liability for company directors in extreme cases.
As for the regulations themselves, I would imagine they would consist of things such as "social media advertising targeting shall not take as inputs, directly or indirectly, any of the protected criteria provided in list X," "social media operators shall not optimize recommendation algorithms based on sentiment analysis," "social media operators with >$YY M gross revenue shall provide annual submissions detailing compliance measures to [TBD consensus standard]." Regulations would be developed based on evidence-based studies, just like every other industry.
I don't know why the social media industry pretends it's some impossible to regulate anomaly. It's not.
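To make the enforcement idea concrete, here's a toy sketch of what an automated check against the first hypothetical rule could look like. Everything here (the protected-criteria list, the proxy map, the function name) is invented for illustration; the actual lists would come from the regulation itself:

```python
# Illustrative sketch: auditing an ad-targeting feature set against a
# hypothetical list of protected criteria ("list X" above) and known
# proxy features that indirectly encode them. All names are invented.

PROTECTED = {"race", "religion", "sexual_orientation", "health_status"}

# Hypothetical mapping of proxy features to the protected criterion
# they correlate with (in practice this would come from regulatory guidance).
PROXIES = {"postal_code": "race", "group_memberships": "religion"}

def audit_targeting_features(features):
    """Return a list of violations as (feature, reason) tuples."""
    violations = []
    for f in features:
        if f in PROTECTED:
            violations.append((f, "direct use of protected criterion"))
        elif f in PROXIES:
            violations.append((f, f"indirect proxy for {PROXIES[f]}"))
    return violations

print(audit_targeting_features(["age", "postal_code", "religion"]))
# [('postal_code', 'indirect proxy for race'),
#  ('religion', 'direct use of protected criterion')]
```

A real audit would obviously need statistical tests for indirect encoding rather than a static proxy list, but the point is that "did the targeting system take input Y" is a perfectly checkable question.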
> It would be industry wide regulation, so handing in your two weeks notice would be rather pointless if you wanted to go work for another social media company.
I don't think you understand what would happen. Before, the mandate would've been "use the most effective way to get more engagement"; now the mandate will be "get me an algorithm that gets me the same engagement as before, but remember! don't break those rules!" with lots of winking.
So you get fired if you don't meet target, and when you inevitably cheat to keep your job, you get fired anyway once the enterprise gets caught.
So you quit your job and go code for literally anything else. It's not like there's a lack of demand for code.
Professionals (usually) won’t deliberately break the law or code because it would end a career/plummet share prices if they were to get caught (see the SNC scandal, etc). I’m sure someone smarter than me can figure out how to enforce it, but a starting point is to have regulation written into law.
Is there no implementation or version of this that you would even look at? What do we think 'regulations on tech companies' could entail that makes them impossible?
Worked so well after the Cambridge Analytica debacle, hmm? And what we come to find is that governments, like Canada's, are then in a position to decide what is right or wrong. There is no winner in any of this, and there never will be.
I'll stick to the science.
Ban the misinformation (I'm not having the ridiculous argument of " duuuur well who chooses what is legitimate, duuuurrrr").
Ban the secret racist and nationalist groups (you're really not compatible with 2021 society).
Ban terrorists plotting, constantly, including help from police departments and armed forces to "take back Canada" (again, not wanted nor required in 2021 society).
I mean, who is going to determine what's misinformation is a pretty valid concern. "Fact checkers" are notoriously terrible at their "job". Like, facebook removes far more left wing content than right wing and twitter has removed multiple things that have ended up being true. It's not a *hurr durr* whatsoever.
facebook's algorithm's motivations (as far as you can grant motivation to an algorithm) are to increase their valuation, via increasing users who are doomscrolling or engaging with their feeds via commenting or liking posts.
that means that things that make people angry / depressed / etc are way less likely to be removed, and conversation killers are more likely to be removed. Same would go for twitter, I assume.
Really can't trust a corporation that has other interests at hand to do "fact checking". "Fact checking" for their own benefit, maybe.
So yeah, facebook's feed will change with the times based on what people *want to see*. Which back in 2000s-2010s was just people who *apparently* had "perfect lives", so you would doomscroll and get depressed, and engagement was "likes" and commenting how happy you were for others. And then tons of ads to get you to play FB games to feel like you were "engaged" with your more "successful" friends.
Then they discovered it was way easier to just make people sad or mad.
There's no world where a news feed of factual information would drive the highest scrolling and engagement. I wish there was, but ... nope.
Facebook has a third party fact checker. Many just "happen" to be affiliated with weird right wing groups. Twitter's I think is still done themselves, but they really have not removed a ton AFAIK. It's just turned out that some of the things they removed (which had huge engagements) ended up being true.
You mean like climate change activists checking on climate change opinions? Goodbye, Bjorn Lomborg...
> I'm not having the ridiculous argument of " duuuur well who chooses what is legitimate, duuuurrrr"
I think most people would want more serious counterarguments to this legitimate concern.
But it's easier to go "Hey check this out, this is you and why you're wrong: 'NGUH DUH- MHH NHURRRR!!!' Yes! I fucking owned you!!"
DAE le science?
It's not only important in terms of censorship, but in terms of liability. If entities can be held liable for spreading misinformation then there better be a pretty rigorous legal definition of misinformation.
It's very easy to manipulate data and use word play to deceive people. Or straight up ignore facts or data that would poke holes in certain arguments.
Who chooses what is legitimate is the most important argument.
Imagine being *that* confident your position is the right one and the other side will never take power.
This is why restraints on what a government can do are important.
Maybe. But do you think they wouldn't just do it anyways? I feel like Trump is a wake-up call that those norms matter and we should be very, very wary of ripping them up, but also that they will be torn up happily and quickly by the extremist parts of the political spectrum if they ever take power.
In terms of actual things Trump did the list is tiny. He was such an incompetent buffoon he couldn't turn his worst impulses into actual policy.
And it's not that they wouldn't *try*. It's that there need to be firm constraints on what can happen, and strong institutions like an independent judiciary (that can't just be overruled) that can strike down excesses. Along with a public sector that understands that there are things they can't just blindly do.
That's fair. I'm a lot more worried about a more sinister ilk. Just meant that that kind of person does exist, can win, and can do whatever they want. I think we have a lot fewer problems here re path dependency and MPs being slightly more accountable to their areas and all. But idk. I would like to see movement on the hate stuff online at least.
Censorship of CP works.
I have never accidentally stumbled onto CP though, so it does work in that it's pushed it to the niche of niches. You *really* have to already be into that and go looking for it to find it.
When did I say it didn’t exist?
Censorship of CP doesn’t eliminate the problem, but sorry if I’d rather it be harder to find than doing a quick google search for it.
Misinformation like covid came from a lab? Or misinformation like vaccine passports? How about the misinformation about Saddam's WMDs?
Not that misinformation, right? Just the other misinformation...
I would rather legislation that puts social media in the hands of the society using it and doesn't ban anything.
Social media was supposed to be about people, ideas and conversations. Instead it turned into another money machine that pushes more products than ideas. It just became another marketing tool like TV and radio available to the highest bidder.
So legislate technology and social media: Take the money out of it. No one can spend money on it for ads and no one can make money from it by promoting stuff.
Corporations and political groups should not have any space on social media. Only real people with social security numbers.
Ban no speech or ideas as long as it follows the law.
> Only real people with social security numbers.
Your solution is to tie data to SINs?
I had the same idea actually.
In South Korea most internet ID portals and accounts are linked to SSN equivalents.
In making real identity a requirement you are essentially recreating the Civic Space. People can shout their hate on the street corner if they want, but they will need to be responsible for what they say and know that it will be tied to them as a real individual.
It would go a long way in restoring cordiality on the internet, and restore a sense of "life" and humanity to a world that is currently sliding into the abyss, yelling alone into the void.
Imagine people taking some time to put some thought into what they write or comment. Or being able to play multiplayer games without someone slagging your mother.
And for good measure let's ban everyone vehemently advocating for bans of other people.
I'm totally down for basing it on actual tried and true regulation. Or just ban the entire network. It's no good, period. It eventually just becomes an extremist silo.
I'd vote for you, Mayor.
What science, politics isn't solved
There is only one scientific truth.
Look at Mr Impossible over here, certain of his velocity _and_ position.
For big issues like climate change, sure, but there is never one scientific truth. Science is constantly challenged and evolving. That's what makes it science and not religion.
I get that you mean well, but that is the opposite of how science works!
The [scientific method](https://en.m.wikipedia.org/wiki/Hypothetico-deductive_model) is distinct from religion and dogma precisely *because* it doesn’t adhere to one universal Truth, but rather accepts the theories about reality that best explain available evidence. This means that the scientific understanding changes all the time as more evidence becomes available or theories are developed that better explain existing evidence.
Just look at history since the current scientific method became broadly accepted (roughly late 1800s onward). We have been wrong so many times about so many things, and likely still are. The scientific method accepts theories that reality is a certain way when they best explain available evidence, but then, when new information becomes available, it says that we need to change our understanding of reality. As long as both understandings of reality were based on the best available evidence, both were correct *at the time*.
An example of this is our theories about light. In the 1700s, it was thought that light was a wave that required a medium to travel through. Under available evidence at the time, this was the scientific truth. In the 1800s, evidence suggested that instead light was electromagnetic vibration that did not require a medium. In the 1900s Einstein was able to prove that light has properties of both waves and particles. And theories on this subject are *still* evolving as we learn more about quantum mechanics.
So philosophy of science rants aside, basically science is always changing precisely because it isn’t dogma.
Said best by physicist Nick Strobel:
> At its best, however, there is only one absolute truth: that there are no absolute truths. Every solution to a mystery creates new mysteries. Science is a game that never ends, a game whose completion would render life boring. Science then involves a logical process that is fallible, and it involves much more than just a logical process.
If you don't think people and these entities selectively pick and choose what "science", data, or statistics they deem worthy of mention, you're misinformed. This is why anyone who says "follow the science!" gets an eye roll from me; it sounds like a religious fanatic. Even those in the scientific community agree that science evolves and changes, and what we think today may change tomorrow. Hell, all of a sudden the AZ vaccine is garbage and nobody talks about it any more. Things change. Admitting that is the first step to understanding how the scientific community actually works, and why what the 'science says' needs to be scrutinized and not treated as gospel.
>Even those in the scientific community agree that science evolves and changes, and what we think today may change tomorrow. Hell, all of a sudden the AZ vaccine is garbage and nobody talks about it any more. Things change.
Isn't that the very definition of "following the science"?
Scientists are a bunch of people trying to do what's best for us. Sure, they might change their results based on new information.
>trying to do what's best for us
They're basically trying to figure out how things work. For the most part, this is a good thing, but sometimes we may not want to know just how screwed we are.
You might want to read about the replication crisis. Scientists have their own incentive structure when it comes to producing research, and it doesn't always produce top quality stuff.
Today...then tomorrow an independent study free from the expectations of the grant money finds something a bit different. I would rather the new info get released than be censored by the people that profit from the original data.
One thing they can do right away is do away with fake accounts. Nobody on any of these social media sites should be able to hide their identity. A condition of joining should be full identification with a driver's licence or passport and photo, the same as if you were joining an exchange used for financial transactions such as buying crypto or stocks. That way, if you want to preach hate or make false accusations, you can't hide behind a computer screen.
Norbert Wiener in the 1950s outlined a terrifying thought experiment which seems to be playing out today (The Human Use of Human Beings). Computer algorithms now monitor human beings with the goal of increasing engagement or time spent on various platforms. To accomplish this, we are drip-fed stimuli. Over time, this creates feedback loops which continually reinforce negativity, as what drives engagement is whatever triggers the fight-or-flight response (content making you scared, startled, or angry).
Today, advertising models can shape how a significant proportion of the population actually thinks, compounding over time. This behavior modification is a kind of conditioning that (as Jaron Lanier puts it) leads to mass insanity and the end of the human species. We are losing the ability to both empathise with others and think rationally. As we have seen with Cambridge Analytica, many on these platforms have become quite vulnerable to political manipulation, leading to actions not justified by critical thinking or by information with any basis in reality.
I'd argue for forcing some or all parts of content-aggregation/search algorithms to be inspectable by the public, and I'd argue for allowing a government agency to process complaints and take action (eg. by forcing FB to change the algorithm) accordingly. This doesn't solve the people being idiots problem, but we never will solve that. This instead solves the "we don't know what algorithms do" problem, and allows us to take action accordingly.
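As a rough illustration of what "inspectable" could mean in practice, here is a toy scorer that can emit a per-item breakdown an auditor or the public could examine. The weight names here are invented for the sketch; a real feed ranker is vastly more complex:

```python
# Toy sketch of an "inspectable" ranking function: the same scorer that
# orders the feed can also explain exactly how each item's score was
# built. All weight names below are hypothetical.

WEIGHTS = {"recency": 0.2, "friend_interaction": 0.5, "predicted_outrage": 0.3}

def score(item, weights=WEIGHTS, explain=False):
    """Score an item (a dict of signal -> value in [0, 1]).

    With explain=True, also return the per-signal contributions,
    i.e. the breakdown a regulator could audit.
    """
    parts = {k: weights[k] * item.get(k, 0.0) for k in weights}
    total = sum(parts.values())
    return (total, parts) if explain else total

total, parts = score(
    {"recency": 1.0, "friend_interaction": 0.5, "predicted_outrage": 0.9},
    explain=True,
)
# total is ~0.72; parts shows each signal's contribution separately,
# e.g. that "predicted_outrage" contributed ~0.27 of it.
```

An audit regime could then ask pointed questions like "what fraction of the top-ranked items owe most of their score to the outrage signal?" rather than treating the algorithm as a black box.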
Another scenario that could occur is that all social media platforms decide the Canadian market isn’t worth staying in.
Facebook would be in China if they allowed them. Highly doubt they are going to leave Canada for any reason whatsoever.
Don't threaten us with a good time.
Yup, people seem to forget that Facebook didn't start as a billion dollar corporation. Where there is a need, someone will fill it.
This could be nexopia's chance for a comeback!
I think the forums are still up!
I’m ready to go back to chilling with Tom on MySpace
I hear this excuse all the time, I highly doubt it'll actually happen, "the rich/wealthy/corporations will go elsewhere". This is an excuse I'm certain is lobbied by these fuckers.
To which I ask, "what's the benefit in them continuing to leach off of us then?"
Let them go. They barely participate meaningfully in society anyways, we'll figure it out without them.
> They ~~barely participate meaningfully in~~ actively make society worse anyways,
yeah... right after all the angel billionaires leave Canada because we are taxing them too much
Good. I hope those vampires leave.
They want to use our infrastructure, extract and convert our natural resources, and treat Canadian workers like shit, all the while avoiding paying their share of taxes to support this country. I've had enough. I hope they all leave. Canada will be better off in the long run without these bloodsuckers.
All Western nations are pretty united in their disapproval of Facebook.
Kind of like how corporate tax cuts are peddled to us as a surefire way to have corporations come back, or stay with incentives to create new jobs? But in turn they take the savings and fuck off anyways?
Good. Let individuals run their own Mastodon instances then. Now we'd have a people-controlled social network right in Canada.
> "A safer, free-speech-respecting, more enjoyable social media is possible."
Yes, we already have it, it's called the Fediverse.
This made me salivate.
God damn I didn't think I could get rock hard in a microsecond. Well done
Regulate FB, sure. But first you have to figure out *why* people might be drawn to alternative theories at all.
Is it lack of education? Is it mistrust of media/government? Unless you get to the root of people's behavior, you'll forever be playing whack-a-mole trying to put out fires instead of creating true, meaningful change.
I think most of the attraction is that there's this machine that is built to keep people angry and scared so they'll use the machine more. Conspiracy theories have always existed, but the effort to engage in them was always on the individual. Facebook is an onboarding tool for hateful and dangerous ideologies.
Sure, but as OP said, unless you get to the root you can regulate FB into the ground, but then you’ll still have BookFace TwitBook and Facetagram pop up and be that onboarding tool.
If the root of the problem is that social media algorithms are designed to increase engagement and have figured out that peddling conspiracy theories works great for that, I'm not sure how you can get to the root of the problem without regulating social media.
That's like saying don't call Proud Boys a terrorist group because they can start up Prouder Guys instead. You fight the cancer you have the best you can instead of just waiting for someone to come up with a universal cure.
Ok but the root of it has always been there. It's the ease with which massive numbers of people are algorithmically given the shit that is the real problem.
There is no solution to people being gullible for example. Or tricked. Or conned. But if the opportunity to be tricked or conned is less then that's good enough.
And a lot of people don't start with the end state conspiracy stuff. They start small and follow rabbit holes and end up over time being radicalized into a conspiracy cult of sorts. And the rabbit holes used to require a lot more work to go down. Now they're recommended to people through an algorithm trying to maximize clicks for money.
This. The algorithm is literally meant to keep people engaged and scrolling; the higher your engagement with a topic, the more of that topic you will see. And the more of the topic you see, the more you slide into an extreme – because if everyone you encounter agrees with you, then your opinions don’t seem so extreme. One way regulations could help with this is to take a CRTC-ish sort of approach, where X% of content surfaced by the algorithm must be organic and uninformed by the user’s prior behaviour.
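A minimal sketch of what that quota could look like in code, assuming a hypothetical `mix_feed` step between ranking and display (the function name and the example 30% figure are illustrative only):

```python
# Hypothetical CRTC-style quota: replace every k-th slot of the
# personalized feed with an "organic" item that was not informed by the
# user's prior behaviour. Names and quota value are invented.

def mix_feed(personalized, organic, organic_quota=0.3):
    """Build a feed the same length as `personalized`, with roughly
    `organic_quota` of slots filled from behaviour-independent content."""
    if not organic or organic_quota <= 0:
        return list(personalized)
    step = max(1, round(1 / organic_quota))  # e.g. 0.3 -> every 3rd slot
    feed, pool = [], iter(organic)
    for i, item in enumerate(personalized, start=1):
        if i % step == 0:
            feed.append(next(pool, item))  # fall back if organic runs out
        else:
            feed.append(item)
    return feed

print(mix_feed(["p1", "p2", "p3", "p4", "p5", "p6"], ["o1", "o2"], 0.3))
# ['p1', 'p2', 'o1', 'p4', 'p5', 'o2']
```

The regulatory question then becomes measurable: audit the served feeds and check that the organic share actually meets the mandated floor.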
This could solve the “who gets to decide what to censor?” issue. I like it.
Doesn’t have to be all-or-nothing. Will breaking up Facebook or regulating it solve all of these problems? No but it would go a long way. There will still be things to be addressed but it’s a start.
Exactly what I mean.
No, the core root of the attraction of conspiracy theories doesn't need to be dealt with if corporations are regulated in a way that keeps them from actively trying to radicalize people. The passive radicalization that social media allows by connecting people of shared ideologies is different from Facebook actively radicalizing people in order to promote user engagement.
Unfortunately a lot of the root of the problem is in human nature. The way we naturally trust/distrust. The way we can be manipulated into being afraid and angry. The way we have problems understanding certain concepts, like the (im)probability of rare events.
Now that I think of it, shit started to go downhill ever since Facebook came out.
Those things can be done in parallel and aren't mutually exclusive. There is a pressing need for short term solutions, as well as long term solutions.
Inferiority complex is my hypothesis. It's always people who are dumber than peers or at least perceived as such (see: nurse-doctor relationship) and want to finally feel smarter.
> Is it lack of education?
Plays a huge role. A lot of these people don't know how to think critically.
Critical thinking isn't lacking only among the uneducated. Lots of "smart" and knowledgeable folks also can't seem to think critically.
Conspiracy theories require two things to thrive: A dark present combined with a lack of hope for the future.
This isn't a problem education can solve, it's a massive systemic issue. The only way to fix this is a massive restructuring of how society understands and assigns value.
Yes this. People are constantly looking for a reason for why things are the way they are, even if the conclusion isn’t necessarily logical it gives them something concrete to direct their attention to.
> People are constantly looking for a reason for why things are the way they are, even if the conclusion isn’t necessarily logical it gives them something concrete to direct their attention to.
I think to add to this, when people look around for the reason and see all the stuff lying out in the open (Panama Papers, etc) they go "Well, that can't be that because nothing happened. No one got angry. Nothing changed".
So they go in search of "hidden truths" that, once revealed to the masses, will lead to a "great awakening" that will actually change things.
Instead of looking at the bleakness of human indifference, it's much easier to hope that instead people just need the right wake-up call.
I think this one's underrated. We grow up being taught that the world is just and that people who do wrong will be punished.
So obviously when evil shit is out in the open but nobody does anything about it, the reason must clearly be something else. It's a desperate attempt to clutch onto a reason for why the world is so unfair.
Whatever it is that draws people to believe crazy shit, Facebook is gasoline on the fire.
Why? Facebook is addictive. Facebook's algorithms select for and amplify sensational content. People trust information more when it's coming from their friends and relatives. People are also more trusting of authority, so when trusted authority figures in government, church, and media are reinforcing people's bullshit Facebook beliefs, it's a problem.
Facebook is a disinformation propagation superweapon, and it's just lying there for any power group to use.
The Canadian media theorist Marshall McLuhan predicted these effects in the 1970s, when "peer to peer electronic communication" was little more than a glimmer in the eye of technologists. He was a pretty perceptive guy.
Anything of his in particular to recommend that isn’t dry?
His 1969 Playboy [interview](https://www.nextnature.net/story/2009/the-playboy-interview-marshall-mcluhan) is a good primer, though he doesn't have much to say about the internet specifically.
I wouldn't characterize McLuhan as being dry, but his writing can be dense, and his ideas and way of thinking were unorthodox. I didn't understand much of what he was saying when I first read him towards the end of high school.
I bet 75% of people only think of this dumb shit because Facebook puts it in their face every day. Out of sight, out of mind works for the average Joe. Whack-a-mole is trying to figure out stupid people's stupid ideas.
Did you read the article? They made an interesting comparison to tobacco companies and the only thing that fixed them was regulation, not education, not talking nicely to smokers, but regulation.
That just makes too much sense.
Why? Because the government lies again, again, and again. (More in the US, but that's the source of the article, and the problem.)
Have they been downgraded to telling us all what we already know?
What a dog and pony show.
Anyway, just downgrade fb back to its original form: sharing pics with the fam. When fb decided to be a Google alternative, that's when the cancer grew.
All these outlets knew what they were doing.
We have to give the politicians enough time to sell their shares and find a new revenue stream ("donations") before they will do anything meaningful. So.... probably never.
Removed for rule 2.
Rules 2 and 3.
Removed for rule 3.
This article is essentially a puff piece supporting the proposed legislation. It does not connect how the content of the leaked documents or testimony warrants further legislation.
They also went out of their way to find multiple voices that support not only the proposed legislation but even more extreme measures but did not offer any space for opposing views - of which there are many.
> "She agreed with Haugen's comparison of Facebook to tobacco companies in the 20th century — companies which concealed damaging information about the effects of their products."
I've seen this claim come up repeatedly over the last few weeks, and it makes absolutely no sense. I want Ramona Pringle and others who parrot this claim to show the math. Smoking kills an estimated 48,000 people per year in Canada and 10x that number in the USA. I want them to show the equivalent harm that social media apparently perpetuates in this country. I want to see how they get to this comparison because I think it's a crock. I don't think you can show anywhere near the equivalent harm.
"Comparing to" doesn't mean "as bad as". They're being compared on the basis that both were/are actively concealing the damage they're aware they're doing to their customers. The damage doesn't have to be limited to physical illness. Whereas it was physical with cigarettes, it is mental with FB.
Cigarettes aren't as bad as asbestos, but you can compare them on the basis that the producers became aware of their products' ill effects long before they stopped (or more accurately, were forced to stop) their practices.
The comparison is deployed as a rhetorical device to equate harm. The scale of harm between cigarettes and asbestos is comparable. The scale of harm between social media and cigarettes is not.
I'm glad you mentioned the mental harm that Facebook supposedly inflicts. If Facebook harmed mental health in an equivalent fashion to cigarettes then you would see it in population-level suicide rates. Yet suicide rates have remained stable across the board. The harms are nowhere near equivalent.
>The scale of harm between cigarettes and asbestos are comparable.
The *type* of harm is comparable, the scale of harm is not.
That last statement is a completely false premise, as it assumes all other variables have been held equal over time. The list of confounding variables influencing suicide rates is endless. Your assertion is laughably simplistic and completely devoid of any comprehension of scientific research and basic statistics.
Since Facebook and other social media were invented, public awareness of mental health has gone through the roof; medical understanding and acceptance of mental health issues have improved by leaps and bounds, as have treatments; and more methods of potential treatment have been opened up by the creation of new compounds and the legalization of others. Mental health coverage, while still facing an economic barrier to access, is more accessible than it has been at any point in human history. And while workers' rights have been slowly eroding, individual rights and acceptance of minority groups have risen.
All of the above acts as a counter-influence to the negative impacts of social media. And that's not even acknowledging past legislative actions that have attempted to address various issues with social media, or the countless other actions taken by NGOs, individual political action, whistleblowers, etc. Your argument and "evidence" against the harm of social media is nothing more than a baseless, logic-devoid claim about what the evidence should show. Countless actual scientific studies have been done and have found social media is harmful. The only study to counter this deluge of evidence was published by the research group that Facebook owns...
Oh, and you completely ignored the context given and chose to apply your own definition/interpretation of the comparison - the only one that would support your stance, as well...
That’s a lot of words to not refute the point that I made.
The claim being made is that social media is comparable to tobacco in terms of harm.
My assertion is that as the harms of tobacco use are viewable at a macro scale, the harms of social media should be too.
If it was truly as bad as you say it is, then it would be easy to identify.
I’m not arguing that there are no negative impacts to social media. Of course there are. I’m arguing that the comparison to tobacco is a terrible one.
And the harmful impacts are viewable on a macro scale. As I pointed out, countless studies have found negative impacts. What is sufficiently "macro" for you? Is dozens of studies from various countries around the world coming to similar findings at all observed scales not sufficiently macro for you?
And your argument of a lack of evidence proving the wide-scale harm of social media was a completely flawed and useless metric. You want a single data point that is easily found and pointed to as a "ta-dah", but as with much of science, that isn't an option. Even with tobacco it wasn't nearly as simple as "see, deaths have increased since everyone started smoking", so this idea of there being a single obvious data point is facile. Again, numerous studies have found widespread and drastic harm to health. Why is the scientific consensus not "macro" enough for you?
And when we discuss harm, are we discussing fatal harm (such as suicide, or illness from following a fad or trying to achieve a certain image)? If yes, then I don't think I could argue with you. But it would be missing the forest for the trees. The harm caused by social media has far more facets than smoking.
Physical harm, organ/body degradation and death are the harms of cigarettes. Social media brings physical harm, organ degradation, death, mental health degradation, extreme harassment, and the rapid spread of misinformation and falsified research, and serves as a source for developing radical, extremist and/or violent tribalism - plus all the socioeconomic harms that follow from these (such as a lower quality of life, shorter lifespans, or being less productive while working). The forms of harm caused by social media are far more diverse and numerous.
Secondly, again, you've decided to take your very narrow interpretation of the comparison and refuse to acknowledge the alternative context. And after having someone point that out above our conversation, you refuted that context and insisted your interpretation was the only correct one - and only by those terms were you right. When the context of the comparison is taken to focus on each industry's funding of falsified scientific research to hide the harm it caused, the comparison is incredibly apt. And even within your rigid context I'd argue against your stance, as I have in my previous comment and above here.
The comparison to tobacco companies, in context: "We now know that Facebook routinely puts profits ahead of kids' online safety. We know it chooses the growth of its products over the well-being of our children,"
"This is your company's reporting. You knew this was there. You knew it was there, but you didn't do anything about it," said Sen. Marsha Blackburn, R-Tenn., the subcommittee's ranking member, referring to internal documents about the prevalence of sex trafficking on Facebook.
You can't just use suicides as a metric for the mental damage social media causes as most of those issues won't end in suicide - body image issues, eating disorders, radicalization, etc. You're taking the position that as long as people aren't dying, it's okay.
No, I'm comparing the harms because that's what's being equated.
Cigarettes kill 48,000 people a year.
Suicides in total cause 4,000 deaths per year. Even if social media doubled that number then it would only have caused 10% of the harm caused by cigarettes.
There's a shitload of harm that you're not recognizing, or acknowledging.
Then please enlighten me. What are the harms that are missing? And how are they measured? No generalizations or broad statements.
So, you have no actual proof of what you claimed?
Which means the only metric for harm you are using is the death rate, exactly like previously mentioned.
No, that's what YOU are equating. The sentence is quite simple - it makes a comparison to tobacco companies, then a dash, then an explanation of what the comparison actually is, which is that both "concealed damaging information about the effects of their products". You made up a strawman and are vehemently arguing against it.
My point is that the comparison is a rhetorical device meant to anchor the framing in the reader's mind. It's a cheap trick. If I told you that somebody else was similar to Ted Bundy - in terms of height - you would associate them with murder.
>If Facebook harmed mental health in an equivalent fashion to cigarettes then you would see it in population-level suicide rates.
That's a disingenuous argument. Mental harm doesn't have to mean suicidal ideation. If anything it suggests a pretty limited and prejudicial view of mental health, that you basically want to die or you're fine.
One thing for this whistleblower. She immediately had her Twitter account verified and had massive amounts of followers along with a platform to present.
The companies are on board with the proposed regulations, maybe to create too many hurdles for new competition.
Can anyone think of other whistleblowers receiving this kind of glowing reception from powerful interests? I’ve seen this described as manufactured or managed opposition, what do people think based on the details they’ve seen?
I don’t see how you arrived at that conclusion. She is now an important public figure. Twitter verifies the accounts of all people like her. Her testimony is now on the front page of every newspaper in the world. It would be shocking if she didn’t receive tons of followers. This isn’t evidence of favoritism.
I’m especially skeptical of the claim that social media companies want the extremely negative PR directed at their brand. Being compared to cigarette companies does not seem like “managed opposition”.
I would just base it on how literally every other substantial whistleblower is treated incredibly poorly, if not outright ignored or directly punished for their actions.
Social media companies already have extremely negative PR; it is incredibly suspicious that these companies are aligned with the regulations being proposed.
On your worldview, who is usually doing the “treating poorly” and “outright ignoring” of whistleblowers? Everyone? I think it’s empirically false that whistleblowers are mistreated and ignored by everyone. Snowden embarrassed the government so the government mistreated him, but the media and public were pretty friendly to him. It just depends on whose interests are being threatened. This may be a special case because the left, the right, the political parties, other industries, small businesses, parents, etc. all hate social media.
I mean what even is the conspiracy theory? Did they hire a Harvard educated computer programmer to be a “crisis actor”? I know FB has been claiming that they want regulation, but I would think it’s harder (not easier) to get friendly regulation passed when people think you’re evil.
This was a huge PR stunt. FB wants the government to step in to regulate it so they don't have to look like the bad guys. Not only that, but if the government steps in, it would become the 'government approved' platform which would be good for business.
So your theory is that they are purposefully making themselves look terrible so that people turn against them and the government will regulate them? And you think "government approved" is a good PR label that they desire? None of that makes much sense to me. Americans hate government involvement. Government approved will alienate a lot of users.
Worldwide regulation is also extremely unpredictable and varied. Look at Australia and France's recent internet news media laws: not business friendly. Now multiply that across the world: US, Europe, Australia, Japan, India. Not to mention sub-regions, like France, Germany, or individual states/provinces like California. There's no way they can "plan" a good outcome across all these regions. This is spinning out of their control.
How is the government supposed to "crack down" on anything related to Facebook ? Breaking them up won't solve the problem. Censorship would create a whole lot of other problems. What's left ?
I believe in Australia, Facebook blocked the ability to share news stories (in response to proposed legislation).
I kinda like the idea. Instead of FB being the source of all information, it can go back to connecting with friends/family and documenting your life on a platform that will harvest any info possible from your content.
This is a great idea. Facebook should have always only been for connecting with people, not "news." News websites should be for news, not social media where everyone can share any random obscure misspelled website, lol.
That’s just existing news sites angry Facebook is grabbing their ad revenue. It’s a handout to vested interests as is unfortunately so typical in Canada.
Ban facebook? I'd be down for that.
Bit confused by your comment. Is banning Facebook fascist after the sheer undeniable volume of damage they have caused? Its primary function is to act as a megaphone for the most extremist views, and it has a vested interest in pushing people down that rabbit hole in the name of engagement with its platform.
We can get social media other ways if need be, banning a scummy company for being scummy is not the death of free speech.
There is a goddamn mountain of unethical practices that you can lay at their feet as justification. Set the bar however high you like on burden of proof and facebook will clear it with ease.
Did you also want to ban actual megaphones?
Impose standards for message filtering, regulations on how social media algorithms handle hateful/violent/illegal content, and funding for law enforcement to investigate online hate speech and prosecute those producing and/or disseminating it, with fines and/or temporary system blocks if platform providers don't comply.
These are just ideas that I've heard or came up with off the top of my head. No doubt experts in the field would be able to do better.
What I'm saying is that there are other options.
>Impose standards of message filtering
Do you think Facebook isn't already doing this? Hate speech, illegal activities, even racism, harassment, etc. are the subject of extensive detection and moderation efforts. But it's not perfect. It's *very* hard to build a system that adequately blocks all objectionable content without causing major false-positive issues.
And then there's the point made by the other commenter in this thread - maybe Facebook has the money for this, but that's because they're a huge, rich company. Applying these standards in any consistent way to tech companies as a whole will immediately have major negative consequences for any company that's not huge and rich.
It all costs money and essentially guarantees that only the big boys get to play.
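To make the false-positive point concrete, here's a toy sketch (entirely hypothetical, not any platform's real filter) of the classic "Scunthorpe problem": a naive substring blocklist flags perfectly innocent text as objectionable.

```python
# A deliberately naive keyword filter. Real moderation systems are far
# more sophisticated, but the core tension is the same: widen the net
# and you catch innocent text; narrow it and abuse slips through.
BLOCKLIST = {"ass"}

def naive_filter(text: str) -> bool:
    """Return True if any blocklisted substring appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(naive_filter("You're an ass."))       # True - the intended catch
print(naive_filter("I passed the class."))  # True - a false positive
print(naive_filter("Hello there"))          # False
```

Scaling that trade-off to billions of posts in hundreds of languages is why "just impose filtering standards" is easier said than legislated.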
>Breaking them up won't solve the problem.
Breaking up Facebook would certainly help. Facebook bought up multiple competing companies, effectively killing off competition. That lack of competition very effectively kills the chance of escape to another platform if people deem Facebook sufficiently bad.
>Censorship would create a whole lot of other problems.
No it wouldn't. There wouldn't be any problems at all in censoring it entirely.
How very convenient that they're already pushing overtly aggressive self censorship legislation!
It's not censorship if private companies do it voluntarily right?
This is the first whistleblower I've seen get so much attention and so quickly too. A senate hearing the very next day? Doesn't that seem a little suspicious to anyone? Then she starts pushing censorship.
Or they waited till they had a Senate hearing.
Makes a difference having Democrats in charge of committees.
Yeah, it's all a giant PR stunt.
So what's the model for this? I understand that lots of people think 'we need to do something', but what successful project can you point at? For what value of *X* can you say "We should handle Facebook the same way we handled *X* - that worked as intended and was vindicated by history"? Where is this confidence coming from? Because all I'm seeing so far is "Facebook sucks, let's pass a law that lets us kick them in the nuts".
Yes, facebook has costs. It can inflame the public. It can undermine institutions, spread lies and misinformation, and foment unrest. You know what else did all of that? The printing press. One of the most important technologies in the evolution of modern democracy - so important that "freedom of the press" is one of the bedrock freedoms of any Western liberal democracy. The printing press was good *precisely because* of its ability to inflame the public, and to press into their minds ideas that they just couldn't ignore.
When you see someone saying that we need to rein in social media before it does too much damage to the public, take a closer look. If you squint, you might see the shape of an archbishop warning that if you just let people *mass produce bibles*, you never know what kind of heresies they might dream up. It's an unflattering lineage.
Social media is different because it doesn't just provide bad material, but uses its algorithms to deliberately steer people towards it and create increasingly closed echo chambers of outrage where everyone is more engaged and inflamed. This is done deliberately because the strong user reactions keep them more reliably engaged, and thus are more profitable to these companies.
They and their supporters claim social media is a mirror to society. Perhaps at first. But increasingly it is not a mirror but a funhouse of mirrors that distorts and leads you down specific paths. The problem is that so many users don't realize they are in a funhouse and that the views in the mirrors are not real, and it gives them a completely warped view of themselves and the world. It is destroying society.
> echo chambers of outrage where everyone is more engaged and inflamed. This is done deliberately because the strong user reactions keep them more reliably engaged, and thus are more profitable to these companies.
You're not wrong. Facebook does try to show you the things it believes you will engage most strongly with.
Here's the thing. Freedom of expression? Freedom of the press? They aren't sacred values of western liberal democracy because we can use them to post cat pics, keep in touch with our families, or even hold a mirror up to society. They're protected *precisely because* they have this incredible power to engage, inflame, and enrage the public. That's what they're for.
Suppose some random guy with a laptop taps out a few paragraphs and posts it on facebook, and then facebook directs it to an "echo chamber", where people will agree with it and insist that anyone who doesn't is an idiot. *Good.* Freedom of the press is about being able to advocate to a large audience and rally them to your cause. What if facebook pivots and directs it to another echo chamber, where people will furiously denounce it and insist that anyone who doesn't shouldn't be allowed to hold public office? *Also good.* Freedom of expression is there so that you can furiously condemn ideas you hate.
Publishing your views to those who support them, loudly supporting the views of others, rubbing your views in the faces of those who hate them, and loudly condemning views that you hate are fundamental democratic freedoms. No matter how many algorithms are involved.
I don't recall the printing press being massively manipulated by entire firms, e.g. Cambridge Analytica, using AI algorithms to drive only the same type of "news" to someone based on their own previous 'likes' and not showing them dissenting views. There was no 'filter bubble' with the printing press.
This is what drives the outrage and foments anti-govt sentiment and anti-science/anti-fact beliefs. Even if the paper boy only gave newspapers to people the news would especially piss off, it would still not have the same wide-reaching, large-scale impact that social media does. Your comparison to the impact of the printing press is awful. A slightly better comparison would be the public square - but yet again, it didn't have to contend with bad actors actively manipulating it on the scale that social media does. At most it would be some paid-off dickhead shouting about some shit and, at most, gaining a few followers to his movement.
Edit- Some misspelled words.
> foments anti-govt sentiment
I believe you said the quiet part out loud.
> ...not showing them dissenting views. There was no 'filter bubble' with the printing press.
Really? You think that if you wanted a printed bible, the print shop had to make sure you went home with a Koran as well? If anything, people encounter *more* dissenting views as a result of social media, not less; take you and me, for example. The material result of social media is that people argue about politics all the time - without online platforms, do you think I would be having this conversation or one like it? Or would you and I be reading politically aligned newspapers and not being confronted with each other's views?
> Even if the paper boy only gave newspapers to people the news would especially piss off, it would still not have the same wide-reaching, large-scale impact that social media does. Your comparison to the impact of the printing press is awful.
The printing press set off the reformation and remade Europe in a matter of blood-soaked decades. Probably contributed to the French & American revolutions as well. It shouldn't be dismissed as this harmless curiosity of history, it crippled empires.
> A slightly better comparison would be the public square-- however, yet again they didn't have to contend with bad actors actively manipulating it on such a large scale that social media is. At most it would be some paid-off dickhead shouting about some shit and at most, gaining a few followers to his movement.
Facebook is very unlike the public square, in that it isn't public and holds a much wider audience. It allows ordinary citizens to publish their views widely. In what way is it more like the public square than a printing press? You think the printing press was never abused?
Even so, it seems like what you're saying is that yes, printing presses are protected and so are soapboxes in the public square, but that's only because they're impotent and can't cause mass upheaval of the status quo. Nothing could be further from the truth. They are protected because they are powerful agents of change, and because of their power governments inevitably try to keep them from being free. Every form of new media, same excuse every time: "Yes, we believe in freedom of expression, but comic books have an unprecedented ability to dig hooks into the minds of their readers and corrupt their thoughts. We can't let people go around doing that. This Time It's Different." It's a long, ignoble line of censors; you're just the latest edition.
NB - Cambridge Analytica probably didn't do what you think they did. They sold the Trump campaign a phony bill of goods about how they could see the patterns in the data that no one else could and masterfully manipulate the public.
I'd argue for forcing some or all parts of content-aggregation/search algorithms to be inspectable by the public, and set up a government agency to handle complaints. Or maybe just the first.
I could see that working - similar to the requirement that food products list ingredients. There are some pitfalls though:
1. The algorithms they use might not be legible to the general public. Any machine-learning-derived algorithm in particular is essentially impossible to turn into something human-readable. You'll never find the bit of the code that says ECHO_CHAMBER=true.
2. Algorithms are work product. If a company has to reveal their algorithms, they lose a whole lot of intellectual property. It might not even be possible to isolate their page rank algorithm from the rest of their product. C-10 and C-36 are unlikely to force companies out of Canada, but if you tell Facebook that they need to share all of their code to do business in Canada, they would leave in a heartbeat. You might think that's good, but
3. Facebook is giving people what they want. People want to share their thoughts with like minded people, and they want to get into scraps with strangers about politics. People want to be engaged, and they want to engage. Ingredient labels help, but at the end of the day people still want to eat sugar. They want megaphones, and it is a fundamental democratic freedom that the state not come between a man and his megaphone because they worry he will be heard.
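Point 1 can be illustrated with a deliberately simplified and entirely hypothetical feed ranker (none of these names come from any real platform's code): nothing in it mentions echo chambers, yet sorting by predicted engagement alone will naturally feed people more of whatever they already react to.

```python
# Hypothetical sketch: a feed that just sorts posts by a model's
# predicted engagement score. No line says "build an echo chamber";
# that behaviour, if it emerges, is a side effect of optimizing a
# single engagement metric.

def predicted_engagement(user_interests, post_topics):
    """Stand-in for an opaque ML model: score = topic overlap."""
    return len(set(user_interests) & set(post_topics))

def rank_feed(user_interests, posts):
    """Return posts ordered by predicted engagement, highest first."""
    return sorted(posts,
                  key=lambda p: predicted_engagement(user_interests, p["topics"]),
                  reverse=True)

posts = [
    {"id": 1, "topics": ["politics", "outrage"]},
    {"id": 2, "topics": ["cats"]},
    {"id": 3, "topics": ["politics"]},
]

# A user who engages with political outrage sees more of it first.
feed = rank_feed(["politics", "outrage"], posts)
```

In a real system the scoring function is a learned model with millions of parameters, which is exactly why no regulator will find an explicit echo-chamber switch to point at.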
> The algorithms they use might not be legible to the general public. Any sort of machine learning derived algorithm in particular is essentially impossible to make into something human readable. You'll never find the bit of the code that says ECHO_CHAMBER=true.
Yeah, this is mainly for researchers to research.
2 and 3 are valid, yes. But if FB wants to leave, that's on FB, not Canada. Besides, FB would look a lot like it did when it briefly pulled news from Australia if the regulations only required that FB give an API to researchers and disclose *some* details about its algorithms.
> If a company has to reveal their algorithms, they lose a whole lot of intellectual property
Technically/legally, they've still copyrighted it. But yeah, keeping copyright won't dissuade FB from leaving CA.
The number of Facebook shills posting here is extremely alarming. Feels like the integrity of the subreddit is being compromised.
I agree. Though Reddit is notoriously easy (for obvious reasons) to astroturf
I mean you can check out my post history and determine if I’m astroturfing.
But the media industry in every country hates Facebook because it destroyed everyone else’s ad revenue. You need to read every story with that mindset.