What if regulating Facebook fails?
What about us? After all, we are 3 billion. What if every Facebook user decided to become a better person: to think harder, learn more, and be kinder, more patient, and more tolerant? Well, we have been working to improve humanity for at least 2,000 years, and progress has not been smooth. Even with "media education" or "media literacy" efforts aimed at young people in a few rich countries, we have no reason to believe we can count on human progress, especially when Facebook is built to exploit our preference for the superficial, emotional, and extreme expression that our better angels would avoid.
Facebook is designed for animals better than humans. It is designed for creatures that do not hate, exploit, harass, or intimidate one another, such as golden retrievers. But we humans are hateful beasts. Therefore, we must regulate and design our technology to correct for our weaknesses. The challenge is to figure out how.
First, we must recognize that the threat of Facebook does not lie in some marginal aspect of its products, or even in the nature of the content it distributes. It lies in the core values that Zuckerberg has embedded in every aspect of the company: a commitment to relentless growth and engagement. It is enabled by the pervasive surveillance Facebook uses to target ads and content.
Above all, there is the overall detrimental effect of Facebook on our collective ability to think.
This means we cannot focus narrowly on the fact that Donald Trump used Facebook to his advantage in 2016, or that he was banned from Facebook in 2021, or even the fact that Facebook directly contributed to the mass expulsion and murder of the Rohingya people by organized political movements in Myanmar. We cannot zero in on Facebook's dominant and commanding position in the global online advertising market. We cannot explicate the nuances of Section 230 of the Communications Decency Act and expect to reach any consensus on what to do about it (or even whether reforming that law would affect Facebook at all). These are not enough.
Facebook is dangerous because 3 billion people are subject to constant surveillance, and their social connections, cultural stimuli, and political awareness are then managed by predictive algorithms that favor constant, increasing, and immersive engagement. The problem is not that some crank or some president is popular on Facebook in one corner of the world. The problem with Facebook is Facebook.
Facebook could remain this powerful, if not more so, for decades to come. So while we strive to live better with it (and with each other), we must all imagine a more radical reform agenda for the coming years. We must confront Facebook (and Google) at the roots. More concretely, there has been one recent regulatory intervention that, although mild, could serve as a good first step.
In 2018, the European Union began insisting that all data-collecting companies respect certain basic rights of citizens. The resulting General Data Protection Regulation grants users some autonomy over the data we generate and insists on a minimum of transparency about how that data is used. Although enforcement has been uneven, and the most visible sign of the GDPR is the extra consent notice we must click through to accept a site's terms, the law offers some real potential to limit the power of big data vacuums like Facebook and Google. It should be studied carefully, strengthened, and spread around the world. If the US Congress (and the parliaments of Canada, Australia, and India) put more emphasis on citizens' data rights than on content regulation, there might be some hope.