WEB RULERS & THEIR RULES
http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech – Considering how much of modern human existence flows through mediated channels–and this proportion certainly approaches 100%–a critically important and yet, for reasons of complexity and excitement and technical content and more, all too often simply overlooked arena of interest, here in the form of an analysis from The Verge about the subtleties and paradoxes of that which permits what we tweet and greet and bleat and meet online every day to come to pass, which is to say the content moderation that underlies all platforms and any publisher that seeks to engage users at all, which is to say up to the vast majority of the activity that happens in our online lives, a lengthy and fascinating–at once hilarious and spooky, nauseating and infuriating–take on things that an aggregation from Benton.org supplements in interesting ways, and which an article in the Columbia Journalism Review complements compellingly with its documentation of how algorithms–tricky, biased, GIGO (Garbage-In-Garbage-Out), manipulative, Orwellian algorithms–can exercise oversight over the realm of the possible in human life in ways that are at once undemocratic, suboptimal, and more or less entirely hidden from view–in total a way of characterizing the underbelly or skeleton of the intermediated Earth that we currently occupy, where the surface bursts with conflict that probably would recede or at least appear in a different form if scrappy scribes and stalwart citizens understood it in terms of these deeper operational protocols, as in the case of copyright issues such as the general point made by an Atlantic piece that juxtaposes intellectual property and free speech, such as one briefing from TechDirt that presents a micro-review of a profoundly critical-of-copyright copyright expert’s–as in, he authored one of the standard law school texts on the subject–book on some of the
subject’s current wrinkles, such as another TechDirt essay that offers a trenchant critique of a superficial acceptance of the notion that the recent Google-Digitization-case refusal of Certiorari has delivered a major blow to writers’, as opposed to publishers’, interests; as in the case of questions about access and secrecy and freedom of information, such as a truly frightening assessment from Rolling Stone that takes the Obama administration to task for its fatuous, false, and verifiably purposeful choice to seek to keep under seal vast troves of materials that embarrass or otherwise discomfit or inconvenience those who rule the roost and the rest of the world as well, such as an interview from TruthDig with a literal hero of Freedom of Information Act battles; as in the case of an additional bit of reportage from TechDirt about a recent unobtrusive bill from our erstwhile elected representatives that would serve to gut more or less entirely the possibility of any modicum of a robust defense of Net Neutrality; all of which serves to illustrate a much wider context of how we hope to live, whether we will have power and agency, the parameters of joy and freedom vis-a-vis alienation and oppression that bound our lives, as an article about possible web dystopias from Atlantic makes clear, and which comments from Noam Chomsky about Orwellian scenarios even more pertinently portray, concerns about the viability of free people and free media that the recent travails of Information Clearinghouse and its founder illustrate with a graphic pathos that any writer whose social base is not a trust fund and whose economic foundation is not a corporate sinecure ought to take to heart: “(The capsulization of the problems that this topic brings to the fore requires a voluminous back story with many uncertainties and plenty of treachery. Knowing that a moderation choice is right is often just impossible).
In 2012, for instance, when headlines were lauding social media for its role in catalyzing the Arab Spring, a Syrian protester named Dana Bakdounes posted a picture of herself with a sign advocating for women’s equal rights. In the image Bakdounes is unveiled, wearing a tank top. A Facebook moderator removed the photo and blocked the administrators of an organization she supported, Uprising of Women in the Arab World. Her picture had been reported by conservatives who believed that images of women, heads uncovered and shoulders bare, constituted obscenity. Following public protest, Facebook quickly issued an apology and ‘worked to rectify the mistake.’
The issue of female nudity and culturally bound definitions of obscenity remains thorny. Last spring, Facebook blocked a 1909 photograph of an indigenous woman with her breasts exposed, a violation of the company’s ever-evolving rules about female toplessness. In response, the Brazilian Ministry of Culture announced its intention to sue the company. Several weeks later, protesters in the United States, part of the #SayHerName movement, confronted Facebook and Instagram over the removal of photographs in which they had used nudity to highlight the plight of black women victimized by the police.
The majority of industry insiders and experts we interviewed described moderation as siloed off from the rest of the organization. Few senior level decision-makers, they said — whether PR staff, lawyers, privacy and security experts, or brand and product managers — experience the material in question first-hand. One content moderator, on condition of anonymity, said her colleagues and supervisors never saw violent imagery because her job was to remove the most heinous items before they could. Instead, she was asked to describe it. ‘I watched people’s faces turn green.’
Joi Podgorny is former vice president at ModSquad, which provides content moderation to a range of marquee clients, from the State Department to the NFL. Now a digital media consultant, she says founders and developers not only resist seeing the toxic content, they resist even understanding the practice of moderation. Typically cast off as ‘customer-service,’ moderation and related work remains a relatively low-wage, low-status sector, often managed and staffed by women, which stands apart from the higher-status, higher-paid, more powerful sectors of engineering and finance, which are overwhelmingly male. ‘I need you to look at what my people are looking at on a regular basis,’ she said. ‘I want you to go through my training and see this stuff [and] you’re not going to think it’s free speech. You’re going to think it’s damaging to culture, not only for our brand, but in general.’
(Different corporate ‘cultures’–Reddit versus Pinterest, for instance–generate vastly different views about free speech and engagement protocols. Protection from trolls and orchestrated campaigns of abuse is often tenuous at best. Companies offload their responsibilities whenever possible to the lowest-paid and most exploited labor). Sarah T. Roberts, the researcher, cautions that ‘we can’t lose sight of the baseline.’ The platforms, she notes, ‘are soliciting content. It’s their solicitation that invites people to upload content. They create the outlet and the impetus.’ If moderators are, in Dave Willner’s estimation, platforms’ emotional laborers, users are, in the words of labor researcher Kylie Jarrett, their ‘digital housewives’ — volunteering their time and adding value to the system while remaining unpaid and invisible, compensated only through affective benefits. The question, now, is how can the public leverage the power inherent in this role? Astra Taylor, author of The People’s Platform, says, ‘I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a ‘library’ or Twitter a ‘town square’ — or even calling social media ‘social’ — but real public options are off the table, at least in the United States.’ Though users are responsible for providing and policing vast quantities of digital content, she points out, we then ‘hand the digital commons over to private corporations at our own peril.’”—The Verge
“(Three levels of analysis are available to algorithm assessors: content in, the operational programs themselves, and output. In looking at any particular point of entry, or a combination, however, many times background complications–the assumptions of the code, the ‘learning’ parameters and how they encourage continued suboptimality or even abuse if users seem to engage with that–simply disappear). Much of the reporting on algorithms thus far has focused on their impact on marginalized groups. ProPublica’s story on The Princeton Review, called ‘The Tiger-Mom Tax,’ found that Asian families were almost twice as likely to be quoted the highest of three possible prices for an SAT tutoring course, and that income alone didn’t account for the pricing scheme. A team of journalism students at the University of Maryland, meanwhile, found that Uber wait times were longer in non-white areas in DC.
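The output-level audits described above can be sketched in miniature: collect the system’s outputs across comparison groups, then check whether outcomes differ by group in ways that inputs like income don’t explain. The data, group labels, and function below are hypothetical illustrations for the technique, not ProPublica’s or the Maryland team’s actual datasets or methodology.

```python
# Minimal sketch of an output-level algorithm audit: query or observe a
# system's outputs across comparison groups, then compare outcome rates.
# All records and names here are invented stand-ins.
from collections import defaultdict

# Each record: (group the observation belongs to, quoted price tier).
# Tier 3 represents the highest of three possible prices.
observations = [
    ("group_a", 3), ("group_a", 3), ("group_a", 2), ("group_a", 3),
    ("group_b", 1), ("group_b", 2), ("group_b", 1), ("group_b", 3),
]

def highest_tier_rate(records):
    """Share of observations in each group that landed in the top price tier."""
    totals, highest = defaultdict(int), defaultdict(int)
    for group, tier in records:
        totals[group] += 1
        if tier == 3:
            highest[group] += 1
    return {g: highest[g] / totals[g] for g in totals}

rates = highest_tier_rate(observations)
# A large gap between groups (here 0.75 vs 0.25) is the kind of disparity
# that prompts the next reporting question: is it explained by income,
# cost of service, or nothing observable at all?
print(rates)
```

The point of the sketch is that output auditing needs no access to the black box itself, only systematic observation of what comes out of it.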
Bias is also one of the biggest concerns with predictive policing software like PredPol, which helps police allocate resources by identifying patterns in past crime data and predicting where a crime is likely to happen. The major question, says Maurice Chammah, a journalist at The Marshall Project who reported on predictive policing, is whether it will just lead to more policing of minorities. ‘There was a worry that if you just took the data on arrests and put it into an algorithm,’ he says, ‘the algorithm would keep sending you back to minority communities.’
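The feedback loop Chammah describes can be illustrated with a toy simulation: if patrols go where past arrests were recorded, and patrol presence itself generates new recorded arrests, a small initial imbalance compounds regardless of underlying crime. The numbers and logic below are invented for illustration; this is not PredPol’s or any vendor’s actual model.

```python
# Toy illustration of the arrest-data feedback loop: patrols are sent
# where past arrests were logged, and being patrolled produces more
# logged arrests. Entirely hypothetical; not any real vendor's model.
def simulate(arrest_counts, rounds=5, discovery_rate=2):
    counts = list(arrest_counts)
    for _ in range(rounds):
        # Allocate the single patrol to the district with the most
        # recorded arrests so far.
        patrolled = counts.index(max(counts))
        # Patrol presence surfaces more incidents only where police look.
        counts[patrolled] += discovery_rate
    return counts

# Two districts with identical underlying crime; district 0 merely has
# one extra historical arrest on record.
history = [11, 10]
print(simulate(history))  # the initial imbalance compounds: [21, 10]
```

Because the algorithm only ever sees arrests it helped generate, the loop validates its own predictions, which is exactly the worry the reporters raised.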
(Even obvious cases are full of difficulty–teasing out discrimination from the routine operation of profit-driven corporate structures, one conclusion of which is that ‘not every story has a bad guy.’ Looking at the code itself can be more full-throated, as it were.) Having access to the source code or the design of the algorithm provides a whole new level of insight. That’s clear from The Marshall Project’s reporting on predictive policing. In the piece titled ‘Policing the future,’ Chammah, together with Hansen, reported on software called HunchLab, which is similar to the more widely used PredPol, with at least one major difference: It is much more transparent. Azavea, the company behind HunchLab, shared its methodology and models with The Marshall Project’s reporters. While the piece doesn’t go into the details of how the software works, it does address how both HunchLab and the police agencies implementing the software are grappling with concerns about computed policing. For example, HunchLab only maps violent crimes, not drug-related crimes, enforcement of which is seen as an area of systemic racial disparity in the criminal justice system.
But the black box is difficult to access, both conceptually and literally. Most algorithms, whether used by business or government, are proprietary, and it isn’t entirely clear what kinds of source code would be available under FOIA. Several cases have gone to court, on FOIA grounds or otherwise, to access source code or documents related to it, but most are thwarted by the trade secret exemption or security concerns. In one FOIA case, the Electronic Privacy Information Center, a nonprofit organization in Washington, DC, requested documents for a system that assigns threat assessments to air and land passengers traveling in the US. The Analytic Framework of Intelligence, as it’s called, takes in data from a large collection of both governmental and nongovernmental databases, including internet feeds, and then spits out a risk assessment.
Computer scientists, says Hansen, ‘are putting into code our society’s values and ethics,’ and that process has to be out in the open so the public can participate. In whatever form it takes, reporting on algorithms will likely become more of a required skill. Journalists need to up their game, both in demanding algorithmic transparency and in augmenting the current journalistic skill set so we can deal with humanity’s augmented intelligence.”—Columbia Journalism Review
“(With a frank and rational creative-commons, copyright-reform, open-source bias, the author contextualizes the Supreme Court’s refusal of Certiorari. Unsurprisingly), the Authors Guild — which has been tilting at this particular windmill for over a decade — was upset about the refusal to hear the case, but I wasn’t quite expecting the level of ridiculous sour grapes that were put on display: ‘Blinded by the public benefit arguments, the Second Circuit’s ruling tells us that Google, not authors, deserves to profit from the digitization of their books,’ said Mary Rasenberger, executive director of the Authors Guild.
Did you get that? The Authors Guild is so completely out of touch that it actually thinks that ‘public benefit arguments’ have no place in copyright disputes, despite the very fact that the constitutional underpinning of copyright law is to maximize the public’s benefit. And, of course, this all ignores the fact that the vast, vast majority of authors greatly benefit from such a searchable index in that it drives more sales of books.
(Calling out hyperbolic claims of ‘colossal loss’ and noting assertions about threats to the very vitality of American culture, the author continues). This is ridiculous on so many levels. First, most authors cannot make a living today because most books don’t sell. That’s not the fault of Google Books. In fact, as noted time and time again, Google Books acts as a discovery mechanism for many books and increases sales (I’ve bought dozens of books thanks to finding them via Google Book Search). Second, the gloom-and-doom predictions of legacy industries over new technologies are time-worn and have never been even remotely correct.
What (Authors Guild Executive Director Mary) Rasenberger leaves out of her ignorant whine is the fact that in the time that Google Books has existed, the number of authors has increased massively. No, they’re not all making a living, but the purpose of copyright law is to incentivize the creation of new works for the public, and the public is getting an astounding amount of new works — a totally unprecedented amount of new works, actually — and it’s got nothing to do with anything the Authors Guild has done.
(Undaunted, the A.G. will continue a vigilant stance and intends to monitor library and Google practices for ‘fair use abuse.’) To ensure that fair use isn’t abused? Lovely people at the Authors Guild … outright declare themselves against public benefit, and then worry about the ‘expansion’ and ‘abuse’ of fair use. Does no one at the Authors Guild recognize that their authors are protected by fair use as well, and that many of them rely on it all the time? Who would ever join such a backward-looking and backward-thinking organization?”—TechDirt
“In brief opening remarks this morning I brought up the crucial fact that rights are typically not granted, but rather won, by dedicated and informed popular struggle. …I also mentioned that the United States and Turkey, though differing in many respects, provide clear and instructive illustrations of the ways in which rights are won and, once won, protected. With regard to the United States, it is commonly believed that the right to freedom of speech and press was guaranteed by the First Amendment to the Constitution over two centuries ago. That is true only to quite a limited extent, first because of its wording, but more importantly because the law in practice is what the Courts decide and what the public is willing to defend. I will return to this tomorrow, but would just like to point out now that it was not until the 1960s that the US courts took a strong stand protecting freedom of speech. They did so under the pressure of the civil rights movement and other activism over a wide front. And with the decline of activism, the rights are being eroded, as we heard today.
(A) question about freedom of speech … arises when we consider longer-term objectives. The question I have in mind is by no means new. One person who raised it was George Orwell, who is best known for his critique of totalitarian enemies, but was no less acid in addressing the ills of his own society. One pertinent example is an essay on what he called ‘literary censorship in England.’ The essay was written as the introduction to Animal Farm, … . In this introductory essay Orwell instructs his British audience that they should not feel too complacent about his exposure of the crimes of Stalinism. In free England, he writes, ideas can be suppressed without the use of force. He gives some examples, and only a few sentences of explanation, but they capture important truths. ‘The sinister fact about literary censorship in England,’ Orwell wrote, ‘is that it is largely voluntary. Unpopular ideas can be silenced, and inconvenient facts kept dark, without any need for any official ban.’ One reason is the centralization of the press in the hands of ‘wealthy men who have every motive to be dishonest on certain important topics.’ Another, and I think more important, reason is a good education and immersion in the dominant intellectual culture, which instills in us a ‘general tacit agreement that ‘it wouldn’t do’ to mention that particular fact.’
(Little known–it perhaps suffered some ‘literary censorship’ of its own–the essay’s ideas demand consideration). A little historical perspective is useful. A century ago, in the more free societies it was becoming more difficult to control the population by force. Labor unions were being formed, along with labor-based parliamentary parties; the franchise was extending; and popular movements were resisting arbitrary authority, not for the first time to be sure, but with a wider base and greater success. In the most free societies, England and the US, dominant sectors were coming to recognize that to maintain their control they would have to shift from force to other means, primarily control of attitudes and opinion. Prominent intellectuals called for the development of effective propaganda to impose on the vulgar masses ‘necessary illusions’ and ‘emotionally potent oversimplifications.’ It would be necessary, they urged, to devise means of ‘manufacture of consent’ to ensure that the ‘ignorant and meddlesome outsiders,’ the general population, be kept ‘in their place,’ as ‘spectators,’ not ‘participants in action,’ so that the small privileged group of ‘responsible men’ would be able to form policy undisturbed by the ‘rage and trampling of the bewildered herd.’ I am quoting from the most respected progressive public intellectuals in the US in the 20th century, Walter Lippmann and Reinhold Niebuhr, both Wilson-Roosevelt-Kennedy liberals, the latter President Obama’s favorite philosopher.
At the same time the huge public relations industry began to develop, devoted to the same ends. In the words of its leaders, also from the liberal end of the spectrum, the industry must direct the general population to the ‘superficial things of life, like fashionable consumption’ so that the ‘intelligent minority’ will be free to determine the proper course of policy.
These concerns are persistent. The democratic uprising of the 1960s was frightening to elite opinion. Intellectuals from Europe, the US, and Japan called for an end to the ‘excess of democracy.’ The population must be returned to apathy and passivity, and in particular sterner measures must be imposed by the institutions responsible for ‘the indoctrination of the young’: the schools, universities, churches. I am quoting from the liberal internationalist end of the spectrum, those who staffed the Carter administration in the United States and their counterparts elsewhere in the industrial democracies. The right called for far harsher measures. Major efforts were soon undertaken to reduce the threat of democracy, with a certain degree of success. We are now living in that era.”—Chomsky.info