To state it bluntly in advance, I think the answer is “no”.
With this I don’t mean to belittle the work of the thousands of volunteers (and of some paid Wikimedia staff too) who still have the stamina to go through the endless procedures of clarifying problems and cleaning up messes. Despite the recent turmoil over “sexist categorization” and revenge-editing, I am not sure whether this is a structural problem or mainly the result of abuses that existing editing policies failed to rule out effectively. But whatever it may be, those problems should be seen in connection with the vast number of articles, in many language versions, that have managed not to stir up trouble so far.
Having said that, I want to point out five aspects that suggest Wikipedia’s decline. They are not five distinct areas in which Wikipedia is losing importance, but features that jointly contribute to its diminishing relevance.
1. Built-in untrustworthiness
The latest “scandals” around Wikipedia need not be read as structural deficiencies. But that doesn’t mean that Wikipedia’s tools and structures are in any sense “neutral” either, allowing the good as well as the bad. In fact, its tools and structures, designed to facilitate collaborative text-production, directly contribute to a loss of trust in the veracity and balance of its articles.
To collect tiny “donations” of content and editing effort for reuse and mash-up, one needs the tools of a wiki and of open licenses. But a huge mass of donations can only be collected when the process of contribution is anonymous. Anonymity makes editing easy and thus invites more contributors to engage. (It also helps to avoid charges of libel.) But it also provides the main entry point for revenge-editing, marketing- and PR-editing, biases, hoaxes, and incompetent re-editing.
The downsides of collaborative text-production are thus manifold. Articles keep changing, and unless someone monitors an article constantly, its quality decreases rather than increases over time. Articles may stay the same for quite a while, with all their faults and flaws. They will be edited and corrected only when editors are pushed to do so, e.g. in the wake of some discovery that causes public outcry. But with millions of articles in the English, German, and now Spanish language versions, most flaws will take years to be detected.
But still, the main creed of Wikipedians is that regardless of all the articles’ flaws and errors, over the long run, errors will be detected and corrected. In the long run, it is said, collaborative editing will produce reliable content. In the meantime, we have to live with the stub.
In this respect, conventional encyclopaedias and Wikipedia don’t seem to differ. But whereas this creed holds true in the case of conventional encyclopaedias, in the case of Wikipedia it is mistaken, and it in fact undermines Wikipedia’s credibility as a collection of reliable encyclopaedic information.
Every encyclopaedia is a faulty endeavour, be it Wikipedia or e.g. the Encyclopaedia Britannica. Articles will always be provisional, relative to the state of research, the current understanding of the subject matter, or specifics of the editorial policies. But in general, people think, articles will improve over time, as understanding of the subject matter improves and research proceeds.
It is at this juncture that Wikipedia and other encyclopaedias part ways. In conventional encyclopaedias, factual errors are unintended and coincidental. In Wikipedia, errors and flaws may be the result of intent, even malice, and may go undetected for years. Whereas the reader of conventional encyclopaedias and dictionaries can usually trust the veracity and balance of the articles because errors in them are coincidental, this is not the case in Wikipedia. Here one cannot trust the veracity and balance of the articles, simply because errors are often deliberate (due to marketing, revenge editing, hoaxes, vandalism, incompetent re-editing) and are detected only by chance. The consequence is that even as Wikipedia strives for the truth in some distant future, the present state of its articles is far from “as good as we know it now”. Rather, its articles are “as good as the contributors we have had so far were willing to write down (fingers crossed)”.
This discrepancy in trustworthiness between conventional encyclopaedias and Wikipedia would hold even if the articles in both happened to contain exactly the same number of flaws. Skepticism towards the reliability of Wikipedia’s articles is the consequence of a constant, anonymous, and open revision process; it is this process that undermines the trustworthiness of Wikipedia’s articles.
This doesn’t mean that all is doom and gloom. But it does mean that there is a serious constraint on what to trust in Wikipedia. When it comes to the (“hard”) sciences of the STM area, most information will be more or less reliable, especially as nobody would gain from rendering it biased. But this is not so in most areas of politics, history, economics, and literature, and in all fields where the articles are about living persons. Here people may gain from bias. And it will not be true in the field of the humanities (the term applied broadly) either. Here the articles rest less on information or data and more on aspects and perspectives. But these cannot properly be dealt with in Wikipedia, because perspectives are treated as authors’ opinions, not as features of the topic, and conflicting opinions are usually flattened to the lowest common denominator.
2. Built-in biases on topics and audiences
Wikipedia’s most comprehensive and most visited language versions come from affluent nations. It’s affluent societies, not poor or developing ones, that produce and consume this open content. It is ironic that Wikipedia’s goal – to make “all human knowledge” available to everybody on the planet – doesn’t seem to resonate much in those parts of the globe its authors think would benefit the most.
This is not only due to the fact that “westernized topics” and biases are irrelevant to most communities and societies elsewhere. It is far more the result of a western concept of knowledge that treats knowledge – i.e., the knowable content – as something distinct from the knowing subject. This separation enables many persons to “access” “the same” content, thereby de-personalizing it and constituting its objectivity. But in most non-industrialized societies around the globe, be they agrarian or nomadic, literate or oral, this separation of knowledge from the knowing subject(s) isn’t shared. Treating knowledge as something different from the knowing person makes knowledge the “content” it is – an object, to be passed around until it arrives in some Smithsonian Institution for the delight of some beaming anthropologists. In most parts of the globe, however, knowledge is dependent on persons; it is a “know-how”, not a “know-that”. It cannot be converted into “literal content” without losing its essence. It is thus not the kind of subject matter people would want to “share” in some Wikipedia language version.
A third point follows from this: for most people outside westernized (or industrialized) societies, Wikipedia is simply a compendium of irrelevant information: you don’t heal your child’s stomach pain, or build a house, or learn a language by reading an encyclopaedia. In sum: there are structural reasons why only the rich (in the “Global North West”), not the poor (in the “Global South”), use Wikipedia.
3. An outdated model of knowledge-organization
Wikipedia not only relies on a crowdsourced wiki-process whose open licenses make possible the contribution and reuse of tiny pieces of content. It also relies on the structural feature of articles and lemmata, into which those “content donations” are organized. But with data-mining, the semantic web, big data, automated text-processing, and developments in automated text-generation, there will soon be search- and editing-software that crawls the web to find the most relevant information for a given search query. Such queries will pull together information from a myriad of sources, all distributed and no longer confined to one site. In the near future, knowledge will no longer be laid down in the form of articles. Rather, articles will be written ad hoc and automatically as people proceed from one query to the next. Given this development, Wikipedia stands at the end of an era of media history, not at the beginning: it is still structured like a printed book, not like chatter on the marketplace. Search- and editing-algorithms will circumvent and replace Wikipedia’s static features of article and lemma in a few years’ time.
4. Collaboration is not a model for curation
For Wikipedia, there are not many “content donations” left to attract. Providing new content or editing existing content now takes more and more time, for which Wikipedia’s “content donation model” and its “barnstar reward system” aren’t well suited. As Wikipedia’s growth slows and editors depart, the housekeeping and curation of the existing texts become more and more cumbersome. The state of a text version is often decided not by argument but by gaming the system on the one side and rather dogmatic decisions by many admins on the other. The crowdsourced process of text-generation seems to have reached its limit and doesn’t work well for the curation and enhancement of existing pieces. The quality of Wikipedia’s main language versions will thus deteriorate in the long run, not improve.
5. The lack of alternatives
Wikipedia has never been a good encyclopaedia. Its main advantage isn’t the quality of its content or the number of its articles but easy access: two or three clicks in the browser are far more convenient than getting up from the chair to consult a printed dictionary on the shelf. With its easy access and Google’s ranking, Wikipedia became the dominant provider of encyclopaedic information on the planet, supplanting all other online alternatives. For quick-and-dirty research over the breakfast news this is good enough. But the consequence is an encyclopaedic monoculture and thus a depletion of the (qualified) perspectives that only a plurality of sources, dictionaries, and encyclopaedias can provide. (Even if only Nobel Prize winners had written Wikipedia’s articles, the loss of a plurality of dictionaries and encyclopaedias would mean a loss, not a gain, in knowledge.)
Instead of trying to enhance Wikipedia’s quality, people should therefore put their energies into convincing publishers to make their specialized encyclopaedias and dictionaries available online as well. We need the already existing dictionaries and encyclopaedias to be freely accessible online. For that we will need publicly funded endowments that reimburse publishers for making their copyrighted content available. Those publishers would keep their copyright (and with that the obligation to curate the content) but would be paid by the public.
With these five points in mind, I am inclined to shrug my shoulders when it comes to the prospects of Wikipedia. It is a rather outdated form of organizing encyclopaedic information that will last as long as there are people left willing to do the thankless work of maintaining it. Again, I don’t wish anybody ill, nor do I want to belittle the tremendous efforts people have put into the project. But to acknowledge the work done and the experience gained doesn’t mean the project itself is still relevant.
It rather shows what cannot be done “inside” Wikipedia and is still left to do in order to make accessible to all the huge amount of knowledge that already exists. This work lies outside Wikipedia and the impoverishing monoculture that came with it.
 ∧ Just three articles: James Gleick, Wikipedia’s Women Problem, The New York Review of Books (April 29, 2013, 5:09 p.m.). | Kevin Morris, Does Wikipedia’s sexism problem really prove that the system works?, The Daily Dot (May 1, 2013). | Martha Nichols / Lorraine Berry, What Should We Do About Wikipedia? A Call for Information Activism, Talking Writing (May 20, 2013).
 ∧ Andrew Leonard, Revenge, ego and the corruption of Wikipedia, Salon (May 17, 2013, 6:05 PM UTC). | Andrew Leonard, Wikipedia cleans up its mess, Salon (May 21, 2013, 4:31 PM UTC).
 ∧ Cf. René Koenig, “Wikipedia: Between lay participation and elite knowledge representation”, Information, Communication & Society, vol. 16, no. 2, March 2013, pp. 160-177. In this paper, Koenig discusses, via the example of the 9/11 attacks, how in the development of the corresponding Wikipedia article “alternative interpretations” were first labelled “conspiracy theories” and then placed outside the article, which now gives just the “official” version of the events. I find Koenig’s approach very interesting, as it may help explain why Wikipedia articles on topics from the humanities are usually of minor quality. His approach might explain how, in the search for the least common denominator, complexity, different perspectives, and vague issues fall prey to an editorial process that favours the attitude of scientism.
 ∧ See my Shadows in Wikipedia (August 18, 2011), section 2 and endnote 5.
 ∧ This is not peculiar to Wikipedia or any other encyclopaedia, but applies to all endeavours to electronically “archive” and “curate” indigenous knowledge.
 ∧ For some elaboration see my After Wikipedia (August 1, 2012).
 ∧ I am not able to assess the role of Wikidata in this regard.
 ∧ To see a somewhat scary “recipe” for overcoming any opposition on the way to the Wikipedia article you like, see the comment by Tektok (May 6, 2013, 17:23) on G.[lenn] F.[leishman], Who Really Runs Wikipedia?, The Economist (May 5, 2013, 23:50).
* * *