This turned out to be a very long post. Sorry. For several years I’ve watched Wikipedia develop and gain ground, and during those years my ambivalence about it, and my critiques of it, shifted somewhat but never really disappeared. I became interested in the “Open” and the “Free,” but saw them too often correlated with amateurish quality. I admire the whole aspect of “folk knowledge,” only to see it more and more endangered by Wikipedia’s striving for respectability. But what I miss most is some self-critical assessment on the side of Wikipedia and “Wikipedians” themselves. Jimmy Wales’ goal of “the sum of all knowledge everybody can share in” sounds preposterous on so many levels, but goes unquestioned. On the other hand, the success of Wikipedia over the last 10 years has had some very negative consequences that I don’t find discussed either.
The main occasion to write about some of my misgivings with Wikipedia came with Wikimania 2011, in Haifa, Israel. I posted my bewilderment about the conference: How can Wikimedia have a conference on “free knowledge” in a democratic state that just recently curtailed the freedom of speech? How can Wikimedia accept sponsorship from a right-wing government institution? I didn’t receive an answer, and Wikimedia didn’t and doesn’t seem to have any problem with that. So I started to think about a “Seal for Fair Knowledge” to accompany any “Open Knowledge”, to ensure it wasn’t “produced” under exploitive conditions. And I started to think more intensely about the exploitive, negative, even threatening effects of Wikipedia in general. Or rather, to summarize them.
These effects are not the result of hyperbole stemming from a mind trained in some areas of the Humanities, driven by intellectual snobbery. I don’t oppose the “Open” or “Free,” but I don’t accept exploitive conditions under their names either. And the reasons why I cherish the “Open” have far more to do with ecology than with politics.
I don’t know how true the following is. Perhaps it is a kind of “Alternate Universe Analysis” or a Philip K. Dick story, in which neither side, the protagonists or the reader, can be sure what is going on and who is right. But at least we should have some such description of the ugly sides of Wikipedia, just to counter the self-aggrandizing hymns Wikimedia promotes in its press releases. (“Are you passionate about free knowledge?” – Fuck, no! I’m passionate about my girlfriend!) So in what follows I describe Wikipedia as a problematic monopoly. I don’t ignore its positive sides, but they don’t play any role in this text.
In recent years Wikipedia has become the dominant medium of encyclopaedic information. And like every monopoly in a free-market society, it is bound to free-market conditions even as it creates new ones. In order to keep a healthy distance from its propaganda, we should approach it with some rules of thumb in mind, modelled after Jerry Mander’s recommendations for assessing the impact of new technologies some 20 years ago:
- The flashy appeal of a technology is meaningless, as the negative aspects are slow to emerge.
- Don’t believe the positive claims of the proponents of new technologies; look at the material consequences instead.
- Don’t judge a technology by the way it benefits you personally but by who benefits the most and who bears the negative consequences.
- Assume all technology “guilty until proven innocent.”
So we should start by skipping the shop talk. We should ignore, at least for the time being, the buzz that comes with “free knowledge,” “open access,” “the encyclopaedia of all human knowledge,” Creative Commons, and the frenzy about an “emerging sharing culture” that is said to miraculously entail a new paradigm of collaborative economics. Instead of listening to the mission statements of Wikipedia (and other endeavours of the Open/Free), we should observe what occurs, how it acts, and what effects it has.
1 Volunteers or Exploited Workers?
Wikipedia is a brand created by a huge number of unpaid volunteers. It became the dominant medium for encyclopaedic information by supplanting most other media, thanks to both the number of its articles and the convenience of access. But this is not its only negative effect. In fact, Wikipedia was able to supplant other means of information in very much the same way that Ubuntu was able to become the world’s dominant Linux distro: both enticed people to participate in the name of a specific ideology of bettering the world. Participating was/is seen as a step towards bringing about this better world when in fact it was/is a step towards bringing about a more appealing product. That this isn’t very obvious in the case of Wikipedia is no refutation; it is simply a demand to make it clearer.
In order to gain a different understanding of the impacts of Wikipedia, we should leave aside all the happy talk about Wikipedia as an “encyclopaedia created by volunteers” and take its form as content. In the spirit of Marshall McLuhan’s “the medium is the message,” we should scrutinize the effects that Wikipedia’s form has.
Seen in its functional aspects, we may regard Wikipedia as a method of crowdsourced text-production that uses a Wiki-editor and open licensing systems like GNU Free Documentation License (GFDL) and Creative Commons. But this functional description doesn’t accurately incorporate what the contributors to Wikipedia actually do. So to be more precise: Wikipedia is a method to obtain, store, and accumulate content-donations by means of a Wiki-editor and open licensing systems like GFDL and Creative Commons.
The Wiki-editor and the open licensing systems are fundamental to facilitating these content donations on a large scale. Had there been no Wiki-editor that could be used via a browser, and had there been no licensing system that makes every contribution available for re-use by every other contributor, Wikipedia wouldn’t have been able to accumulate countless tiny content donations over a long stretch of time. Instead it would have had to rely on, e.g., article submissions. But being a Wiki with open licences, it only needed to wait until the donations dripped in, and as the articles grew in size they pulled in further (meta-)content donations like editing, commenting, discussing, and revising. With this approach Wikipedia never had to care about release dates or release cycles (unlike Ubuntu and other Linux distros). Every state of an article was the best version available – i.e., lexicographically functional because readable – whereas Linux distros have to balance their ingredients to be a functioning whole at all.
Inasmuch as Wikipedia is a splendid way of organizing content donations, in what way can it be called exploitive? Isn’t a donation, after all, something voluntary? The critical point is that some people in Wikipedia/Wikimedia are paid whereas the overwhelming majority are not.
Wikipedia is hosted by the Wikimedia Foundation, a tax-exempt non-profit organization based in California, USA. The Foundation is organized as a non-profit organization, and the 35 country Chapters on six continents as unincorporated associations. The Foundation licenses name, logo, etc. to the Chapters, while the Chapters send a part of the locally acquired donations to the Foundation. In this, the relation between the Foundation and the local Chapters resembles a franchise – not the ubiquitous one of private-sector companies like Starbucks or McDonald’s, but rather that of educational and religious organizations, NGOs, or cults.
In May 2011 the Foundation had 65 (now 75) paid employees, plus an unclear number of paid employees across the Chapters worldwide. Wikipedia (in all its 280 language versions) has around 15,000,000 registered users (some 145,000 of them active) and an unknown number of unregistered collaborators. So on the one hand there are some 145,000 people who are writing, editing, commenting, discussing, researching, etc., and on the other hand there are some 100 people who are mainly concerned with marketing, recruitment, technical maintenance, and tool programming. The explanation for this divide – an ocean of unpaid contributors producing the content and a tiny number of paid people doing the maintenance work (technical and otherwise) – may go something like this: there is work to be done on a regular basis for which the Foundation or the Chapters cannot find people who will commit regularly and voluntarily, so in order to keep the projects going, people have to be hired to do this necessary work.
On the face of it this seems a rather straightforward and plausible explanation. But in reality it is an excuse and an evasion, and it shows how utterly unaware even employees of the Wikimedia Foundation are of the nature of their occupation. Maintenance work like office work, server maintenance, book-keeping, and tool programming has to be done on a regular basis. It is the missing commitment and the lack of qualification for this work that seemingly force the Foundation and the Chapters to purchase these recurring activities by hiring staff. But there are other tasks and obligations that have to be met on a regular basis as well and that aren’t paid either: all the admins who engage in content maintenance, who endure the cruel, endless discussions with authors about tiny changes in a piece of text, who delete or merge articles, who guard the quality of articles by supervising and at times restricting collaboration. The English Wikipedia alone has approximately 1,500 admins. Do they receive a salary like the Chapter and Foundation members? Is their work, and the continuity of it, less important or less demanding than that of any “Community Manager”?
The issue seems straightforward: as there is plenty of maintenance work being done that is not paid for, the reason given for why Foundation and Chapter members receive a salary is unconvincing. As neither authors nor admins are paid, the creation/donation and maintenance of content in general are not paid for. If nobody received a salary, the argument that exploitation exists in Wikipedia would be much harder to make. But it is the payment of some 100 people that makes the unpaid work of every other contributor so shamefully abusive. And so the question becomes: why are the Chapter and Foundation employees paid at all?
The reason is surprisingly simple. Wikipedia lives from content donations that it re-invests in articles. It is a structure (or a platform, a tool, a method) that allows for and enables the accumulation of tiny content donations. As the content is produced without pay, the volunteers will and can offer only a certain amount of time, energy, and expertise. Accordingly, this unpaid workforce has to be replaced permanently. As people come and go, the project has to keep going, so volunteers have to be recruited over and over again. This is achieved, e.g., with the help of events that the Chapters organize but that have nothing to do with direct work on the encyclopaedia. So recruitment, not maintenance, is the main task of the employees of the Wikimedia Foundation and the Chapters. It is for guaranteeing a steady influx of unpaid but sufficiently qualified workers that the employees of the Foundation and the Chapters are paid. (The emphasis on “sufficiently qualified” is important, because it is one of Wikipedia’s everlasting myths that it is an “encyclopaedia that anyone can edit.” In fact, the less often an editor – e.g., a beginner – contributes, the higher the statistical chance that the edit will be altered or deleted.) In this, the talk of “free knowledge,” of “sharing,” etc. serves as a motivational pep talk for purposes of recruitment. It’s a kind of “management by birthday party” rather than the usual “management by terror” we find in the private sector. To keep the project going there is a need for cheerleaders who recruit people with tales of the Open and the Free. And Open/Free has to be a positive ideology in order to entice and convince people to participate at all. In fact, Open/Free is just this: a recruitment tool.
That people donate constantly in order to be part of a greater endeavour is an all too well-known phenomenon. It happens in political campaigns, in organized religion, in the private sector, in franchise-style cults. In fact, that “Wikipedians” donate content to sustain and grow Wikipedia without keeping copyright on “their” content, and see it mingled with other content, is similar to devotees in cults who constantly donate money or volunteer in order to keep the promise of redemption – and with it the project – going. That the completion of the mission is never achieved, in a cult or in Wikipedia, should be a further hint.
2 The Rich and the Poor
At first glance, Wikipedia is a collection of 280 different, language-specific encyclopaedias. These encyclopaedias should be seen as distinct projects, as interchangeability between language versions isn’t possible: even within the same lemma, the corresponding articles differ in content, sources, links, authors, and quality. There is no matching across language versions besides the cross-linking of lemma titles. The ten most comprehensive language versions are English, German, French, Italian, Polish, Spanish, Japanese, Russian, Dutch, and Portuguese. Chinese ranks #12, Persian #24, Arabic #25, Hindi #39, Yoruba #72, Swahili #78, and Egyptian Arabic #115, to name just a few. With regard to prominence, approx. 57.5% of Wikipedia’s worldwide traffic goes to the English Wikipedia; the rest divides among the other versions: German 7.5%, Japanese 6.3%, Russian 6.3%, Spanish 5.3%, French 3.7%, Italian 3%, Portuguese 1.4%, Chinese 1.4%, Polish 1.3%. The remaining 6.3% (!) of worldwide traffic splits over the rest of the 280 language versions, all well beneath the 1% threshold.
The most comprehensive and most often visited language versions of Wikipedia come from affluent industrialized nations in Europe, North America, Asia, and Oceania. Beyond this, the data show that most of the page views of these sites come from the same affluent regions as well. Even access to Wikipedia via mobile devices comes primarily from the affluent parts of the North, West, and South, and not, as one might have expected, from Africa.
Given Wikimedia’s ideology and goal that open information should benefit every human being, and the fact that Wikipedia isn’t much accessed in the southern hemisphere, some obvious questions arise:
- Why is it that the most comprehensive and most often visited language versions come from affluent industrialized nations?
- What does it mean that it is mostly affluent societies that create (and consume) open content?
- Why is it that affluent societies produce this open content, the openness of which they don’t really need, given their education systems?
- Why do affluent nations keep producing such content in local language versions when there is no way that poor countries can participate in it due to language barriers?
- As open and free knowledge was supposed to be a beneficial tool for development, why does the majority of the world’s population seemingly decline to engage with Wikipedia at all?
- Why do poorer nations not invest in translating articles from affluent (!) language versions, as the licences allow them to do?
- In short: Why do only the rich use Wikipedia and not the poor?
There are several components to the answer, some inherent in Wikipedia, some inherent in the societies that seemingly don’t use Wikipedia. Starting with the latter: most people in agrarian (rather than urban) regions of the world not only may lack access to the internet, they also cannot afford the time to engage much with Wikipedia, be it as readers or as contributors. With that comes the utter uselessness of Wikipedia for the majority of people on the planet who are not yet living in a “knowledge society.” The main problem here is Wikimedia’s data-centric and accordingly distorted view of knowledge, information, and learning:
Wikimedia’s flagship project, Wikipedia, empowers people to learn about whatever they want. Wikipedia succeeds because it is huge and comprehensive: it has information on practically every topic imaginable. But when Wikipedia does not have information on a topic, or our information is incomplete or inaccurate, we must do better. (Wikimedia Strategic Plan, p.10)
Nobody learns a skill or a technique by studying an encyclopaedia. You don’t learn a new language, how to do a proof, how to build a house, or how to cure your child’s stomach pains by reading such articles. Knowledge may be the result of incorporating such encyclopaedic information, but it is ridiculous to assume that people invest time and energy in reading an encyclopaedia just to learn something that helps them make “rational decisions about their lives.” (Wikimedia Strategic Plan, p. 2) To think that information, data, “facts” as stored in encyclopaedias have anything to do with the knowledge that enables people to make rational decisions reflects a geek’s concept of knowledge rather than anything lived.
But the main answer to the questions raised above lies in the features of Wikipedia itself. As we saw, Wikipedia, in order to function properly, has to rely on three things:
- it has to be a Wiki (so that several people can work on the same item);
- it has to have an open licence (to make continued use of the accumulated content – especially merging – possible at all);
- it must structurally allow for even the tiniest content donations (which can be accumulated over time into self-revising text production).
In order for Wikipedia to function along these lines, it needs well-educated volunteers with enough spare time to spend on creating content donations. These conditions are mostly met in the affluent northern societies, not in the other nations.  But the main reason lies in the very process of content creation itself.
What happens when well-educated volunteers create content to donate to Wikipedia? They summarize what they’ve learned. But what they have learned, and where, is decisive: they have learned these things in schools and universities, from libraries, books, publicly paid teachers, publicly or privately funded educational institutions, etc. So when they create content, they take content that has a closed licence (books, training materials, etc.) or that is otherwise restricted in access (archives, libraries) and create content with an open licence from it. More precisely: the licence of the content changes as the volunteer gives the content a new form. The same happens in all co-operations between Wikipedia or Wikimedia and publicly funded image archives, museums, collections, etc.: content that has either a restricted or a closed licence is transformed into something that has an open licence. In the process of this conversion, the format of the carrier of the content changes. The result: Wikipedia is a mechanism to convert content with a closed licence into content with an open licence.
It is because of this characteristic of licence conversion that Wikipedia’s language versions differ so much in quality and quantity: poor countries simply don’t have that much closed content – e.g., acquired in school and education from copyrighted books, specialized dictionaries, etc. – that contributors could put into a form that takes a CC licence and becomes open content. Likewise, the continuing decline of voluntary contributors to language versions in the prosperous North is due to the increasing scarcity of closed content that a volunteer could easily access and transform into open content to donate to Wikipedia.
3 The Threat of Proprietary Re-licensing
If licence conversion lies at the heart of the whole endeavour, then the public in the affluent societies faces some ugly prospects.
Wikipedia engages in co-operations with national archives, cultural institutions, and educational bodies, which open their content to Wikipedia. But it is only Wikipedia, not they, that benefits from these co-operations, as it is the institutions that give content to Wikipedia and not the other way around. Likewise, donations collected by the Foundation and the Chapters are not shared with the institutions they co-operate with, even as these institutions are starved by lack of public funding. Like Ubuntu, Wikipedia takes without giving much back.
Public institutions like libraries, archives, etc. not only store content, make it accessible, and improve the ways access may be given – they also curate content. The task of these bodies is to safeguard content integrity over long periods of time. That is why we have archives at all, and why we fund them, either publicly or privately. Due to its Wiki structure and other technical reasons, however, Wikipedia cannot even in principle meet such obligations. So when, in a licence conversion, closed content becomes open, the attached obligation to safeguard content integrity gets lost. (One might even make the claim that copyright restrictions entail an obligation to curate content integrity.) The important question is whether, within such co-operations, public institutions (archives, libraries, educational bodies) can reclaim the content and re-convert the licence from open to closed. If this is not possible, and what became open stays open, then all the co-operating institutions and bodies place themselves at the mercy of a private foundation that is not accountable to the public, but only to its Board and its members. So the handing over of content is not trivial. The public (and some private institutions) produced, stored, and curated this content with public and private money for many years, even centuries. So why does the public hand over its accumulated data and content to a private foundation with no oversight whatsoever? As the Wikimedia Foundation maintains all the servers that store Wikipedia, what happens if it simply shuts them down? Or starts to re-license the content? Where does this tremendous faith that Wikipedia simply cannot do evil come from?
These questions shouldn’t be easily dismissed. We’ve already seen such negative developments in the case of Ubuntu.
The flavour of Ubuntu 6.06 (from 2006) and earlier releases changed remarkably with Ubuntu 10.04 (from 2010). The former seemed to be a humanistic enterprise created in acts of solidarity, harmony, and mutual respect. With the latter, Mark Shuttleworth changed look, feel, and desktop apps, and tried to give the distro a more business-like and professional appearance. By the time the Ubuntu of Ubuntu was replaced with UbuntuOne, Ubuntu had earned a major share of the market for Linux distros, and M.S. had succeeded in convincing Dell to offer laptops with Ubuntu pre-installed. But still the whole business idea of Canonical Ltd. (registered in the tax haven of the Isle of Man) – give away the distro for free and earn money with support – didn’t recoup costs or garner any profits. After 7 years without revenue or profit (and a lot of M.S.’s private money), the whole Ubuntu project seemed a failure. There is no money in the Open. So in 2010 and 2011 M.S. adjusted the distro, turned away from the desktop market (where there is no money), and tried to enter the server, cloud, and music-store markets. With that came proprietary software like UbuntuOne, conflicts over the distribution of fees from affiliated online music stores, and proprietary re-licensing issues. Most importantly, M.S. changed the Contributor License Agreement in order to give Canonical more control and rights over the code contributed to Ubuntu. In the same period, Canonical didn’t seem to give newly produced code from Ubuntu back to the Debian project on which it had once relied so heavily. This was a major grievance in FLOSS discussions in 2010, contributing to the image of Canonical trying to exploit the free contributions of the FLOSS movement for financial gain.
So, given that Canonical was able to change the whole attitude, flavour, and licensing practice within 4 (!) years of Ubuntu’s development, what guarantee do we have that Wikimedia may not one day do the same and re-license “its” content, or make it available under some kind of dual licensing, as is customary (and often harshly criticised) in FLOSS? Nowhere in Wikimedia’s licence statements do we find anything that actually bars a future dual licensing that might keep the public’s access free while placing some “corporate version” under copyright and marketing it. Wikimedia could achieve this by altering or enhancing the public free version of Wikipedia (or some other of its projects) in-house and tailoring it to the specific needs of a customer. It could, e.g., create parallel search queries across all its different language versions as a “premium service” to sell. Or it could harvest metadata. There are many ways Wikimedia could make products out of the content it has collected so far, and given the possibility of dual licensing, a “corporate version” may or may not come with a “promise back” commitment ensuring that content assembled earlier under a CC licence stays available under CC even if it shows up in the proprietarily licensed “corporate version.” Or it could go other ways: shut down its servers, start building a paywall, demand a fee for privileged editor rights, or what have you.
With those possibilities at hand, the whole conversion of content under closed licences into content under open licences looks really scary. True, it has advantages for readers, as many specialized encyclopaedias are still not “open” or even online; in this respect much of the conversion looks helpful. But we should not conflate the advantages for readers with the advantages for the Foundation, its Chapters, or the co-operating institutions. (More strictly, the advantages for readers are a good cover for the exploitation that happens on several levels.) The options of dual licensing and other forms of proprietary re-use of Wikipedia content remain, and scarily little of this is discussed publicly yet.
Seen this way, Wikipedia is very much a content kraken, just as Google is a data kraken. It pulls in content donations that convert the licensing of the content, it pulls from the institutions with which it “co-operates,” and it leaves territory fallow where no licence conversion is possible.
4 The Impact of Monoculture
Wikipedia is by now the primary medium of encyclopaedic information on the planet. Its dominance has pushed many other encyclopaedias – print or online – into oblivion or out of business. With a 97% share of the online encyclopaedia market in 2009, it became the most prominent tool in online searches for factual information. In being thus pervasive, it created a monoculture of encyclopaedic information. This monoculture has several aspects, which arise partly from Wikipedia’s own functionality and partly from the interplay with external factors and conditions.
Wikipedia’s pervasiveness was fostered dramatically through the interplay with search engines like Google, Bing, or any other engine that ranks search results according to certain criteria. Results from Wikipedia now show up in the first 5 returns of almost any search query. That means that any other online dictionary or encyclopaedia is now most often left out of the immediate return list or banished far down the rankings. The consequence is that by default we only get (or reach) Wikipedia’s articles on the topics we are interested in. With that we lose the multitude of perspectives on a topic that only comes with a variety of dictionaries and sources – which are now simply no longer visible.
Finding information without Wikipedia becomes increasingly difficult. In order to get different results, one needs to know the names of other reliable and accepted encyclopaedias and specialized dictionaries; simply excluding Wikipedia with a Boolean operator doesn’t suffice. Without those names, one cannot even start a search query that tries to go beyond Wikipedia, simply because the alternatives rank too low on Google’s list and don’t show up. So the pervasiveness of Wikipedia and the lack of knowledge of appropriate encyclopaedic alternatives leave one at the mercy of Wikipedia.
The situation is further aggravated by Wikipedia’s editing policy. Classical encyclopaedias were edited and written by experienced researchers or trained editors (under the supervision of an editorial board). Their expertise resided not only in the command of their field, but also, thanks to their experience, in being able to weigh the relevance of the sources that would enter into an article. Today, Wikipedia believes that anyone can do this work and achieve the same quality. But given the collaboration of individual authors and volunteer admins, the articles consist primarily of pieces of information added to existing ones. This may prove sensible in the areas of science, technology, and medicine, where we can point to data and facts, but it yields rather clumsy results in the Humanities and Social Sciences.
Wikipedia’s success is thus rather a disadvantage. The loss of different perspectives on a topic cannot be countered by increasing the information in a single article (leaving aside problems of the qualification and supervision of authors and editors). In former times, different encyclopaedias dealt with the same topics in different ways. Restrictions in space not only necessitated selecting topics and describing them briefly, they demanded abandoning any goal of completeness in favour of a deliberately chosen editorial perspective under which the topics were to be treated. This is true not only of specialized dictionaries but of the big encyclopaedias as well. Accordingly, a variety of dictionaries would have given a far more complete account of an item than any one major encyclopaedia. In this respect Wikipedia misread the advantages of the scarcity of printed space, and fell victim to the illusion that limitless space suffices to render all aspects of an item in one dictionary entry by just stringing together pieces of information under some ominous “neutral standpoint.”
As Wikipedia and Google teamed up (unintentionally) to answer search queries, the sheer vastness of Wikipedia’s articles creates some delicate side effects: it is very difficult to retrieve information about Wikipedia, because every query about it via a search engine will return a Wikipedia entry that merely explicates the search term. Queries about Wikipedia will be “redirected” to lemmata inside Wikipedia. (If you want to know, e.g., from which places on the planet the most and the fewest visits to Wikipedia occur, you don’t get an overview of such places but, e.g., an entry dealing with tourism and what places to visit.) That means that in order to retrieve information about Wikipedia you have to rely primarily on statements and pages from the Wikimedia Foundation itself – provided you find any. So not only does Wikipedia’s prevalence hinder access to other sources of knowledge, it prevents finding information about Wikipedia itself. Transparency and public control are effectively curtailed.
With all this, the consequences are rather bleak. Wikipedia’s success shows how more and more “information” doesn’t enhance “knowledge” but rather prevents it while streamlining curiosity. Surely this was not the objective Wikipedia had in mind when it started out. But perhaps it is the one it has come to embrace.
 ∧ Jerry Mander, In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations (San Francisco: Sierra Club Books, 1992), p. 49 f.
 ∧ As can be found, e.g., in the “Wikimedia Strategic Plan” for 2015.
 ∧ It is really difficult to get any reliable information on such numbers as web pages are hard to find.
 ∧ The numbers were retrieved from Alexa Internet, August 6, 2011; the article “Wikipedia” cites slightly different numbers, as they were retrieved May 24, 2011.
 ∧ Compare Erik Zachte’s “Wikipedia edits visualized”, especially the animation that can be switched to page views (after choosing the topographic or country-map presentation, press ‘8’ to switch from the edits to the page views; press ‘9’ to see the dispersion of mobile devices).
 ∧ The term “knowledge society” rather means “data society”. Every populace, all indigenous peoples, all oral cultures, every human collaboration that is not distracted by employment and consumerism but deals with dreams, stories, and the narrative strata of reality is a “knowledge society”. Most of them will just not be “data societies”, though.
 ∧ Note that the well-educated elites of the southern hemisphere will often use one of the popular language versions of Wikipedia rather than create a local one. Note further that local elites (like every elite worldwide) disdain their fellow countrymen and are eager to separate themselves from them, so no local Wikipedia language version will be produced by them.
 ∧ The air of community, of togetherness in building something new, not only became characteristic of the Ubuntu experience (!); it was used to build a very strong and loyal support community around the distro. Add teasers like the little video of Nelson Mandela explaining the concept of “Ubuntu”, and this was seemingly all you needed to convince young urbanites that being part of this “community” that built and strengthened a private company’s Linux distro was tantamount to building a better world. It became fashionable to have Linux on your machine, even if you didn’t know what Linux was, couldn’t operate the shell, and had no idea what FLOSS was supposed to mean. Freedom wasn’t part of the reasoning. That Nelson Mandela had elaborated and relied on the concept of Ubuntu as a way to reconcile the different populaces of South Africa after Apartheid is usually lost: in that context, Ubuntu was an emphasis on reconciliation, togetherness, unity, and solidarity. Desmond Tutu described a person who is or has Ubuntu as “a person that is open and that supports others and doesn’t feel threatened when others are brilliant in something.” Mark Shuttleworth’s use of this concept to create a product with the help of many volunteers seems a perversion of the concept of Ubuntu.
 ∧ The new Contributor License Agreements (retrieved August 6, 2011) that Canonical wants individual contributors and entities to sign seem different from those that came under heavy critique in autumn 2010. For a brief overview of the state of the CLA then and of the critique, see Bruce Byfield, “Ubuntu, Canonical Wallow in Muddy Waters with Contributors’ Agreements”. I will not follow these discussions here any further.
 ∧ With regard to Ubuntu and FLOSS, these problems are discussed intensively by Bradley M. Kuhn in his “Project Harmony Considered Harmful” and “Does ‘Open Core’ Actually Differ from Proprietary Relicensing?”, the latter of which coins the cited phrase “promise back”, alluding to Richard Stallman’s “When a company asks for your copyright”. All three are highly recommended.
 ∧ For years my prime example was the entry “Alchemy”, which used to be covered by awfully bad articles in the English, German, and Spanish Wikipedia. Even though the quality of those articles has improved considerably, they are still very different from the sophisticated entry in The Dictionary of the History of Ideas. The same goes for the lemma “Reformation”. Or compare the biographical entry on the British philosopher George Edward Moore in the English Wikipedia with the entry in the Internet Encyclopedia of Philosophy. Worlds apart! Now, while I am able to assess the quality of some dictionaries and encyclopaedias in the Humanities, I would be utterly lost with regard to those from the SMT. And as I don’t know the alternatives there, how could I find credible alternatives to Wikipedia?
 ∧ This might be countered with the help of the 280 language versions of Wikipedia. Given the same lemma in different versions, the corresponding articles will differ drastically in length, scope, content, links, etc. Different language versions could thus mimic different editorial perspectives (as once realized in different encyclopaedias). But before this could become a proper feature, the quality of the articles would have to increase substantially.
* * *