1. Max Weber, 1904.
2. Jean-François Lyotard, 1979.
3. Helen Prejean, 1997.
4. Jim Hickey, 2014.
Numero Uno—“There is no absolutely ‘objective’ scientific analysis of culture – or put perhaps more narrowly but certainly not essentially differently for our purposes – of ‘social phenomena’ independent of special and ‘one-sided’ viewpoints according to which – expressly or tacitly, consciously or unconsciously – they are selected, analysed and organised for expository purposes. The reasons for this lie in the character of the cognitive goal of all research in social science which seeks to transcend the purely formal treatment of the legal or conventional norms regulating social life. The type of social science in which we are interested is an empirical science of concrete reality. Our aim is the understanding of the characteristic uniqueness of the reality in which we move. We wish to understand on the one hand the relationships and the cultural significance of individual events in their contemporary manifestations and on the other the causes of their being historically so and not otherwise. Now, as soon as we attempt to reflect about the way in which life confronts us in immediate concrete situations, it presents an infinite multiplicity of successively and coexistently emerging and disappearing events, both ‘within’ and ‘outside’ ourselves. The absolute infinitude of this multiplicity is seen to remain undiminished even when our attention is focused on a single ‘object,’ for instance, a concrete act of exchange, as soon as we seriously attempt an exhaustive description of all the individual components of this ‘individual phenomenon,’ to say nothing of explaining it causally. All the analysis of infinite reality which the finite human mind can conduct rests on the tacit assumption that only a finite portion of this reality constitutes the object of scientific investigation, and that only it is ‘important’ in the sense of being ‘worthy of being known.’ But what are the criteria by which this segment is selected? 
It has often been thought that the decisive criterion in the cultural sciences, too, was in the last analysis, the ‘regular’ recurrence of certain causal relationships. The ‘laws’ which we are able to perceive in the infinitely manifold stream of events must – according to this conception – contain the scientifically ‘essential’ aspect of reality. As soon as we have shown some causal relationship to be a ‘law,’ (i.e., if we have shown it to be universally valid by means of comprehensive historical induction, or have made it immediately and tangibly plausible according to our subjective experience), a great number of similar cases order themselves under the formula thus attained. Those elements in each individual event which are left unaccounted for by the selection of their elements subsumable under the ‘law’ are considered as scientifically unintegrated residues which will be taken care of in the further perfection of the system of ‘laws.’ Alternatively they will be viewed as ‘accidental’ and therefore scientifically unimportant because they do not fit into the structure of the ‘law;’ in other words, they are not typical of the event and hence can only be the objects of ‘idle curiosity.’ Accordingly, even among the followers of the Historical School we continually find the attitude which declares that the ideal, which all the sciences, including the cultural sciences, serve and toward which they should strive even in the remote future, is a system of propositions from which reality can be ‘deduced.’ As is well known, a leading natural scientist believed that he could designate the (factually unattainable) ideal goal of such a treatment of cultural reality as a sort of ‘astronomical’ knowledge.
Let us not, for our part, spare ourselves the trouble of examining these matters more closely – however often they have already been discussed. The first thing that impresses one is that the “astronomical” knowledge which was referred to is not a system of laws at all. On the contrary, the laws which it presupposes have been taken from other disciplines like mechanics. But it too concerns itself with the question of the individual consequence which the working of these laws in a unique configuration produces, since it is these individual configurations which are significant for us. Every individual constellation which it “explains” or predicts is causally explicable only as the consequence of another equally individual constellation which has preceded it. As far back as we may go into the grey mist of the far-off past, the reality to which the laws apply always remains equally individual, equally undeducible from laws. A cosmic “primeval state” which had no individual character or less individual character than the cosmic reality of the present would naturally be a meaningless notion. But is there not some trace of similar ideas in our field in those propositions sometimes derived from natural law and sometimes verified by the observation of “primitives,” concerning an economic-social “primeval state” free from historical “accidents,” and characterised by phenomena such as “primitive agrarian communism,” sexual “promiscuity,” etc., from which individual historical development emerges by a sort of fall from grace into concreteness?
The social-scientific interest has its point of departure, of course, in the real, i.e., concrete, individually-structured configuration of our cultural life in its universal relationships which are themselves no less individually structured, and in its development out of other social cultural conditions, which themselves are obviously likewise individually structured. It is clear here that the situation which we illustrated by reference to astronomy as a limiting case (which is regularly drawn on by logicians for the same purpose) appears in a more accentuated form. Whereas in astronomy, the heavenly bodies are of interest to us only in their quantitative and exact aspects, the qualitative aspect of phenomena concerns us in the social sciences. To this should be added that in the social sciences we are concerned with psychological and intellectual phenomena the empathic understanding of which is naturally a problem of a specifically different type from those which the schemes of the exact natural sciences in general can or seek to solve. Despite that, this distinction in itself is not a distinction in principle, as it seems at first glance. Aside from pure mechanics, even the exact natural sciences do not proceed without qualitative categories. Furthermore, in our own field we encounter the idea (which is obviously distorted) that at least the phenomena characteristic of a money-economy – which are basic to our culture – are quantifiable and on that account subject to formulation as “laws.” Finally it depends on the breadth or narrowness of one’s definition of “law” as to whether one will also include regularities which because they are not quantifiable are not subject to numerical analysis. Especially insofar as the influence of psychological and intellectual factors is concerned, it does not in any case exclude the establishment of rules governing rational conduct. 
Above all, the point of view still persists which claims that the task of psychology is to play a role comparable to mathematics for the Geisteswissenschaften in the sense that it analyses the complicated phenomena of social life into their psychic conditions and effects, reduces them to their most elementary possible psychic factors and then analyses their functional interdependences. Thereby a sort of “chemistry,” if not “mechanics,” of the psychic foundations of social life would be created. Whether such investigations can produce valuable and – what is something else – useful results for the cultural sciences, we cannot decide here. But this would be irrelevant to the question as to whether the aim of socioeconomic knowledge in our sense, i.e., knowledge of reality with respect to its cultural significance and its causal relationships, can be attained through the quest for recurrent sequences. Let us assume that we have succeeded by means of psychology or otherwise in analysing all the observed and imaginable relationships of social phenomena into some ultimate elementary “factors,” that we have made an exhaustive analysis and classification of them and then formulated rigorously exact laws covering their behaviour. – What would be the significance of these results for our knowledge of the historically given culture or any individual phase thereof, such as capitalism, in its development and cultural significance? As an analytical tool, it would be as useful as a textbook of organic chemical combinations would be for our knowledge of the biogenetic aspect of the animal and plant world. In each case, certainly an important and useful preliminary step would have been taken. In neither case can concrete reality be deduced from “laws” and “factors.” This is not because some higher mysterious powers reside in living phenomena (such as “dominants,” “entelechies,” or whatever they might be called). This, however, presents a problem in its own right.
The real reason is that the analysis of reality is concerned with the configuration into which those (hypothetical!) “factors” are arranged to form a cultural phenomenon which is historically significant to us. Furthermore, if we wish to “explain” this individual configuration “causally” we must invoke other equally individual configurations on the basis of which we will explain it with the aid of those (hypothetical!) “laws.”
The determination of those (hypothetical) “laws” and “factors” would in any case only be the first of the many operations which would lead us to the desired type of knowledge. The analysis of the historically given individual configuration of those “factors” and their significant concrete interaction, conditioned by their historical context and especially the rendering intelligible of the basis and type of this significance would be the next task to be achieved. This task must be achieved, it is true, by the utilisation of the preliminary analysis, but it is nonetheless an entirely new and distinct task. The tracing as far into the past as possible of the individual features of these historically evolved configurations which are contemporaneously significant, and their historical explanation by antecedent and equally individual configurations would be the third task. Finally the prediction of possible future constellations would be a conceivable fourth task.
For all these purposes, clear concepts and the knowledge of those (hypothetical) “laws” are obviously of great value as heuristic means – but only as such. Indeed they are quite indispensable for this purpose. But even in this function their limitations become evident at a decisive point. In stating this, we arrive at the decisive feature of the method of the cultural sciences. We have designated as “cultural sciences” those disciplines which analyse the phenomena of life in terms of their cultural significance. The significance of a configuration of cultural phenomena and the basis of this significance cannot however be derived and rendered intelligible by a system of analytical laws, however perfect it may be, since the significance of cultural events presupposes a value-orientation toward these events. The concept of culture is a value-concept. Empirical reality becomes “culture” to us because and insofar as we relate it to value ideas. It includes those segments and only those segments of reality which have become significant to us because of this value-relevance. Only a small portion of existing concrete reality is colored by our value-conditioned interest and it alone is significant to us. It is significant because it reveals relationships which are important to us due to their connection with our values. Only because and to the extent that this is the case is it worthwhile for us to know it in its individual features. We cannot discover, however, what is meaningful to us by means of a “presuppositionless” investigation of empirical data. Rather, perception of its meaningfulness to us is the presupposition of its becoming an object of investigation. Meaningfulness naturally does not coincide with laws as such, and the more general the law the less the coincidence. For the specific meaning which a phenomenon has for us is naturally not to be found in those relationships which it shares with many other phenomena.
The focus of attention on reality under the guidance of values which lend it significance and the selection and ordering of the phenomena which are thus affected in the light of their cultural significance is entirely different from the analysis of reality in terms of laws and general concepts. Neither of these two types of the analysis of reality has any necessary logical relationship with the other. They can coincide in individual instances but it would be most disastrous if their occasional coincidence caused us to think that they were not distinct in principle. The cultural significance of a phenomenon, e.g., the significance of exchange in a money economy, can be the fact that it exists on a mass scale as a fundamental component of modern culture. But the historical fact that it plays this role must be causally explained in order to render its cultural significance understandable. The analysis of the general aspects of exchange and the technique of the market is a – highly important and indispensable – preliminary task. For not only does this type of analysis leave unanswered the question as to how exchange historically acquired its fundamental significance in the modern world; but above all else, the fact with which we are primarily concerned, namely, the cultural significance of the money-economy – for the sake of which we are interested in the description of exchange technique, and for the sake of which alone a science exists which deals with that technique – is not derivable from any “law.” The generic features of exchange, purchase, etc., interest the jurist – but we are concerned with the analysis of the cultural significance of the concrete historical fact that today exchange exists on a mass scale. 
When we require an explanation, when we wish to understand what distinguishes the social-economic aspects of our culture, for instance, from that of Antiquity, in which exchange showed precisely the same generic traits as it does today, and when we raise the question as to where the significance of “money economy” lies, logical principles of quite heterogenous derivation enter into the investigation. We will apply those concepts with which we are provided by the investigation of the general features of economic mass phenomena – indeed, insofar as they are relevant to the meaningful aspects of our culture, we shall use them as means of exposition. The goal of our investigation is not reached through the exposition of those laws and concepts, precise as it may be. The question as to what should be the object of universal conceptualisation cannot be decided “presuppositionlessly” but only with reference to the significance which certain segments of that infinite multiplicity which we call “commerce” have for culture. We seek knowledge of an historical phenomenon, meaning by historical: significant in its individuality. And the decisive element in this is that only through the presupposition that a finite part alone of the infinite variety of phenomena is significant, does the knowledge of an individual phenomenon become logically meaningful. Even with the widest imaginable knowledge of “laws,” we are helpless in the face of the question: how is the causal explanation of an individual fact possible – since a description of even the smallest slice of reality can never be exhaustive? The number and type of causes which have influenced any given event are always infinite and there is nothing in the things themselves to set some of them apart as alone meriting attention. 
A chaos of “existential judgments” about countless individual events would be the only result of a serious attempt to analyse reality “without presuppositions.” And even this result is only seemingly possible, since every single perception discloses on closer examination an infinite number of constituent perceptions which can never be exhaustively expressed in a judgment. Order is brought into this chaos only on the condition that in every case only a part of concrete reality is interesting and significant to us, because only it is related to the cultural values with which we approach reality. Only certain sides of the infinitely complex concrete phenomenon, namely those to which we attribute a general cultural significance, are therefore worthwhile knowing. They alone are objects of causal explanation. And even this causal explanation evinces the same character; an exhaustive causal investigation of any concrete phenomena in its full reality is not only practically impossible – it is simply nonsense. We select only those causes to which are to be imputed in the individual case, the “essential” feature of an event. Where the individuality of a phenomenon is concerned, the question of causality is not a question of laws but of concrete causal relationships; it is not a question of the subsumption of the event under some general rubric as a representative case but of its imputation as a consequence of some constellation. It is in brief a question of imputation. Wherever the causal explanation of a “cultural phenomenon” – a “historical individual” is under consideration, the knowledge of causal laws is not the end of the investigation but only a means. It facilitates and renders possible the causal imputation to their concrete causes of those components of a phenomenon the individuality of which is culturally significant. So far and only so far as it achieves this, is it valuable for our knowledge of concrete relationships. 
And the more “general” (i.e., the more abstract) the laws, the less they can contribute to the causal imputation of individual phenomena and, more indirectly, to the understanding of the significance of cultural events.
What is the consequence of all this?
Naturally, it does not imply that the knowledge of universal propositions, the construction of abstract concepts, the knowledge of regularities and the attempt to formulate “laws” have no scientific justification in the cultural sciences. Quite the contrary, if the causal knowledge of the historians consists of the imputation of concrete effects to concrete causes, a valid imputation of any individual effect without the application of “nomological” knowledge – i.e., the knowledge of recurrent causal sequences – would in general be impossible. Whether a single individual component of a relationship is, in a concrete case, to be assigned causal responsibility for an effect, the causal explanation of which is at issue, can in doubtful cases be determined only by estimating the effects which we generally expect from it and from the other components of the same complex which are relevant to the explanation. In other words, the “adequate” effects of the causal elements involved must be considered in arriving at any such conclusion. The extent to which the historian (in the widest sense of the word) can perform this imputation in a reasonably certain manner, with his imagination sharpened by personal experience and trained in analytic methods, and the extent to which he must have recourse to the aid of special disciplines which make it possible, varies with the individual case. Everywhere, however, and hence also in the sphere of complicated economic processes, the more certain and the more comprehensive our general knowledge the greater is the certainty of imputation. 
This proposition is not in the least affected by the fact that even in the case of all so-called “economic laws” without exception, we are concerned here not with “laws” in the narrower exact natural-science sense, but with adequate causal relationships expressed in rules and with the application of the category of “objective possibility.” The establishment of such regularities is not the end but rather the means of knowledge. It is entirely a question of expediency, to be settled separately for each individual case, whether a regularly recurrent causal relationship of everyday experience should be formulated into a “law.” Laws are important and valuable in the exact natural sciences, in the measure that those sciences are universally valid. For the knowledge of historical phenomena in their concreteness, the most general laws, because they are most devoid of content, are also the least valuable. The more comprehensive the validity – or scope – of a term, the more it leads us away from the richness of reality since in order to include the common elements of the largest possible number of phenomena, it must necessarily be as abstract as possible and hence devoid of content. In the cultural sciences, the knowledge of the universal or general is never valuable in itself.
The conclusion which follows from the above is that an “objective” analysis of cultural events, which proceeds according to the thesis that the ideal of science is the reduction of empirical reality to “laws,” is meaningless. It is not meaningless, as is often maintained, because cultural or psychic events for instance are “objectively” less governed by laws. It is meaningless for a number of other reasons. Firstly, because the knowledge of social laws is not knowledge of social reality but is rather one of the various aids used by our minds for attaining this end; secondly, because knowledge of cultural events is inconceivable except on a basis of the significance which the concrete constellations of reality have for us in certain individual concrete situations. In which sense and in which situations this is the case is not revealed to us by any law; it is decided according to the value-ideas in the light of which we view “culture” in each individual case. “Culture” is a finite segment of the meaningless infinity of the world process, a segment on which human beings confer meaning and significance. This is true even for the human being who views a particular culture as a mortal enemy and who seeks to “return to nature.” He can attain this point of view only after viewing the culture in which he lives from the standpoint of his values, and finding it “too soft.” This is the purely logical-formal fact which is involved when we speak of the logically necessary rootedness of all historical entities in “evaluative ideas.” The transcendental presupposition of every cultural science lies not in our finding a certain culture or any “culture” in general to be valuable but rather in the fact that we are cultural beings, endowed with the capacity and the will to take a deliberate attitude toward the world and to lend it significance. 
Whatever this significance may be, it will lead us to judge certain phenomena of human existence in its light and to respond to them as being (positively or negatively) meaningful. Whatever may be the content of this attitude, these phenomena have cultural significance for us and on this significance alone rests its scientific interest. Thus when we speak here of the conditioning of cultural knowledge through evaluative ideas (following the terminology of modern logic), it is done in the hope that we will not be subject to crude misunderstandings such as the opinion that cultural significance should be attributed only to valuable phenomena. Prostitution is a cultural phenomenon just as much as religion or money. All three are cultural phenomena only because, and only insofar as, their existence and the form which they historically assume touch directly or indirectly on our cultural interests and arouse our striving for knowledge concerning problems brought into focus by the evaluative ideas which give significance to the fragment of reality analysed by those concepts.
All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view. When we require from the historian and social research worker as an elementary presupposition that they distinguish the important from the trivial and that they should have the necessary “point of view” for this distinction, we mean that they must understand how to relate the events of the real world consciously or unconsciously to universal “cultural values,” and to select out those relationships which are significant for us. If the notion that those standpoints can be derived from the “facts themselves” continually recurs, it is due to the naive self-deception of the specialist, who is unaware that it is due to the evaluative ideas with which he unconsciously approaches his subject matter, that he has selected from an absolute infinity a tiny portion with the study of which he concerns himself. In connection with this selection of individual special “aspects” of the event, which always and everywhere occurs, consciously or unconsciously, there also occurs that element of cultural-scientific work which is referred to by the often-heard assertion that the “personal” element of a scientific work is what is really valuable in it, and that personality must be expressed in every work if its existence is to be justified. To be sure, without the investigator’s evaluative ideas, there would be no principle of selection of subject-matter and no meaningful knowledge of the concrete reality. Just as without the investigator’s conviction regarding the significance of particular cultural facts, every attempt to analyse concrete reality is absolutely meaningless, so the direction of his personal belief, the refraction of values in the prism of his mind, gives direction to his work.
And the values to which the scientific genius relates the object of his inquiry may determine (i.e., decide) the “conception” of a whole epoch, not only concerning what is regarded as “valuable,” but also concerning what is significant or insignificant, “important” or “unimportant” in the phenomena.
Accordingly, cultural science in our sense involves ‘subjective’ presuppositions insofar as it concerns itself only with those components of reality which have some relationship, however indirect, to events to which we attach cultural significance. Nonetheless, it is entirely causal knowledge exactly in the same sense as the knowledge of significant concrete natural events which have a qualitative character. Among the many confusions which the overreaching tendency of a formal-juristic outlook has brought about in the cultural sciences, there has recently appeared the attempt to ‘refute’ the ‘materialistic conception of history’ by a series of clever but fallacious arguments which state that since all economic life must take place in legally or conventionally regulated forms, all economic ‘development’ must take the form of striving for the creation of new legal forms. Hence it is said to be intelligible only through ethical maxims, and is on this account essentially different from every type of ‘natural’ development. Accordingly the knowledge of economic development is said to be ‘teleological’ in character. Without wishing to discuss the meaning of the ambiguous term ‘development,’ or the logically no-less-ambiguous term ‘teleology’ in the social sciences, it should be stated that such knowledge need not be ‘teleological’ in the sense assumed by this point of view. The cultural significance of normatively regulated legal relations and even norms themselves can undergo fundamental revolutionary changes even under conditions of the formal identity of the prevailing legal norms. 
Indeed, if one wishes to lose one’s self for a moment in fantasies about the future, one might theoretically imagine, let us say, the ‘socialisation of the means of production’ unaccompanied by any conscious ‘striving’ toward this result, and without even the disappearance or addition of a single paragraph of our legal code; the statistical frequency of certain legally regulated relationships might be changed fundamentally, and in many cases, even disappear entirely; a great number of legal norms might become practically meaningless and their whole cultural significance changed beyond identification. De lege ferenda discussions may be justifiably disregarded by the ‘materialistic conception of history,’ since its central proposition is the indeed inevitable change in the significance of legal institutions. Those who view the painstaking labor of causally understanding historical reality as of secondary importance can disregard it, but it is impossible to supplant it by any type of a ‘teleology.’ From our viewpoint, ‘purpose’ is the conception of an effect which becomes a cause of an action. Since we take into account every cause which produces or can produce a significant effect, we also consider this one. Its specific significance consists only in the fact that we not only observe human conduct but can and desire to understand it.
Undoubtedly, all evaluative ideas are ‘subjective.’ Between the ‘historical’ interest in a family chronicle and that in the development of the greatest conceivable cultural phenomena which were and are common to a nation or to mankind over long epochs, there exists an infinite gradation of ‘significance’ arranged into an order which differs for each of us. And they are, naturally, historically variable in accordance with the character of the culture and the ideas which rule men’s minds. But it obviously does not follow from this that research in the cultural sciences can only have results which are ‘subjective’ in the sense that they are valid for one person and not for others. Only the degree to which they interest different persons varies. In other words, the choice of the object of investigation and the extent or depth to which this investigation attempts to penetrate into the infinite causal web, are determined by the evaluative ideas which dominate the investigator and his age. In the method of investigation, the guiding ‘point of view’ is of great importance for the construction of the conceptual scheme which will be used in the investigation. In the mode of their use, however, the investigator is obviously bound by the norms of our thought just as much here as elsewhere. For scientific truth is precisely what is valid for all who seek the truth.” Max Weber, “‘Objectivity’ in Social Science,” 1904
Numero Dos—“1. The Field: Knowledge in Computerised Societies

Our working hypothesis is that the status of knowledge is altered as societies enter what is known as the postindustrial age and cultures enter what is known as the postmodern age. This transition has been under way since at least the end of the 1950s, which for Europe marks the completion of reconstruction. The pace is faster or slower depending on the country, and within countries it varies according to the sector of activity: the general situation is one of temporal disjunction which makes sketching an overview difficult. A portion of the description would necessarily be conjectural. At any rate, we know that it is unwise to put too much faith in futurology.
Rather than painting a picture that would inevitably remain incomplete, I will take as my point of departure a single feature, one that immediately defines our object of study. Scientific knowledge is a kind of discourse. And it is fair to say that for the last forty years the ‘leading’ sciences and technologies have had to do with language: phonology and theories of linguistics, problems of communication and cybernetics, modern theories of algebra and informatics, computers and their languages, problems of translation and the search for areas of compatibility among computer languages, problems of information storage and data banks, telematics and the perfection of intelligent terminals, and paradoxology. The facts speak for themselves (and this list is not exhaustive).
These technological transformations can be expected to have a considerable impact on knowledge. Its two principal functions – research and the transmission of acquired learning – are already feeling the effect, or will in the future. With respect to the first function, genetics provides an example that is accessible to the layman: it owes its theoretical paradigm to cybernetics. Many other examples could be cited. As for the second function, it is common knowledge that the miniaturisation and commercialisation of machines is already changing the way in which learning is acquired, classified, made available, and exploited. It is reasonable to suppose that the proliferation of information-processing machines is having, and will continue to have, as much of an effect on the circulation of learning as did advancements in human circulation (transportation systems) and later, in the circulation of sounds and visual images (the media).
The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The “producers” and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as “knowledge” statements.
We may thus expect a thorough exteriorisation of knowledge with respect to the “knower,” at whatever point he or she may occupy in the knowledge process. The old principle that the acquisition of knowledge is indissociable from the training (Bildung) of minds, or even of individuals, is becoming obsolete and will become ever more so. The relationships of the suppliers and users of knowledge to the knowledge they supply and use is now tending, and will increasingly tend, to assume the form already taken by the relationship of commodity producers and consumers to the commodities they produce and consume – that is, the form of value. Knowledge is and will be produced in order to be sold, it is and will be consumed in order to be valorised in a new production: in both cases, the goal is exchange.
Knowledge ceases to be an end in itself; it loses its “use-value.”
It is widely accepted that knowledge has become the principal force of production over the last few decades; this has already had a noticeable effect on the composition of the work force of the most highly developed countries and constitutes the major bottleneck for the developing countries. In the postindustrial and postmodern age, science will maintain and no doubt strengthen its preeminence in the arsenal of productive capacities of the nation-states. Indeed, this situation is one of the reasons leading to the conclusion that the gap between developed and developing countries will grow ever wider in the future.
But this aspect of the problem should not be allowed to overshadow the other, which is complementary to it. Knowledge in the form of an informational commodity indispensable to productive power is already, and will continue to be, a major – perhaps the major – stake in the worldwide competition for power. It is conceivable that the nation-states will one day fight for control of information, just as they battled in the past for control over territory, and afterwards for control of access to and exploitation of raw materials and cheap labor. A new field is opened for industrial and commercial strategies on the one hand, and political and military strategies on the other.
However, the perspective I have outlined above is not as simple as I have made it appear. For the mercantilisation of knowledge is bound to affect the privilege the nation-states have enjoyed, and still enjoy, with respect to the production and distribution of learning. The notion that learning falls within the purview of the State, as the brain or mind of society, will become more and more outdated with the increasing strength of the opposing principle, according to which society exists and progresses only if the messages circulating within it are rich in information and easy to decode. The ideology of communicational “transparency,” which goes hand in hand with the commercialisation of knowledge, will begin to perceive the State as a factor of opacity and “noise.” It is from this point of view that the problem of the relationship between economic and State powers threatens to arise with a new urgency.
Already in the last few decades, economic powers have reached the point of imperilling the stability of the state through new forms of the circulation of capital that go by the generic name of multi-national corporations. These new forms of circulation imply that investment decisions have, at least in part, passed beyond the control of the nation-states. The question threatens to become even more thorny with the development of computer technology and telematics. Suppose, for example, that a firm such as IBM is authorised to occupy a belt in the earth’s orbital field and launch communications satellites or satellites housing data banks. Who will have access to them? Who will determine which channels or data are forbidden? The State? Or will the State simply be one user among others? New legal issues will be raised, and with them the question: “who will know?”
Transformation in the nature of knowledge, then, could well have repercussions on the existing public powers, forcing them to reconsider their relations (both de jure and de facto) with the large corporations and, more generally, with civil society. The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese market – these and many other factors are already, at the end of the 1970s, preparing States for a serious reappraisal of the role they have been accustomed to playing since the 1930s: that of guiding, or even directing investments. In this light, the new technologies can only increase the urgency of such a re-examination, since they make the information used in decision making (and therefore the means of control) even more mobile and subject to piracy.
It is not hard to visualise learning circulating along the same lines as money, instead of for its “educational” value or political (administrative, diplomatic, military) importance; the pertinent distinction would no longer be between knowledge and ignorance, but rather, as is the case with money, between “payment knowledge” and “investment knowledge” – in other words, between units of knowledge exchanged in a daily maintenance framework (the reconstitution of the work force, “survival”) versus funds of knowledge dedicated to optimising the performance of a project.
If this were the case, communicational transparency would be similar to liberalism. Liberalism does not preclude an organisation of the flow of money in which some channels are used in decision making while others are only good for the payment of debts. One could similarly imagine flows of knowledge travelling along identical channels of identical nature, some of which would be reserved for the “decision makers,” while the others would be used to repay each person’s perpetual debt with respect to the social bond.
2. The Problem: Legitimation
That is the working hypothesis defining the field within which I intend to consider the question of the status of knowledge. This scenario, akin to the one that goes by the name “the computerisation of society” (although ours is advanced in an entirely different spirit), makes no claims of being original, or even true. What is required of a working hypothesis is a fine capacity for discrimination. The scenario of the computerisation of the most highly developed societies allows us to spotlight (though with the risk of excessive magnification) certain aspects of the transformation of knowledge and its effects on public power and civil institutions – effects it would be difficult to perceive from other points of view. Our hypotheses, therefore, should not be accorded predictive value in relation to reality, but strategic value in relation to the question raised.
Nevertheless, it has strong credibility, and in that sense our choice of this hypothesis is not arbitrary. It has been described extensively by the experts and is already guiding certain decisions by the governmental agencies and private firms most directly concerned, such as those managing the telecommunications industry. To some extent, then, it is already a part of observable reality. Finally, barring economic stagnation or a general recession (resulting, for example, from a continued failure to solve the world’s energy problems), there is a good chance that this scenario will come to pass: it is hard to see what other direction contemporary technology could take as an alternative to the computerisation of society.
This is as much as to say that the hypothesis is banal. But only to the extent that it fails to challenge the general paradigm of progress in science and technology, to which economic growth and the expansion of sociopolitical power seem to be natural complements. That scientific and technical knowledge is cumulative is never questioned. At most, what is debated is the form that accumulation takes – some picture it as regular, continuous, and unanimous, others as periodic, discontinuous, and conflictual.
But these truisms are fallacious. In the first place, scientific knowledge does not represent the totality of knowledge; it has always existed in addition to, and in competition and conflict with, another kind of knowledge, which I will call narrative in the interests of simplicity (its characteristics will be described later). I do not mean to say that narrative knowledge can prevail over science, but its model is related to ideas of internal equilibrium and conviviality next to which contemporary scientific knowledge cuts a poor figure, especially if it is to undergo an exteriorisation with respect to the “knower” and an alienation from its user even greater than has previously been the case. The resulting demoralisation of researchers and teachers is far from negligible; it is well known that during the 1960s, in all of the most highly developed societies, it reached such explosive dimensions among those preparing to practice these professions – the students – that there was a noticeable decrease in productivity at laboratories and universities unable to protect themselves from its contamination. Expecting this, with hope or fear, to lead to a revolution (as was then often the case) is out of the question: it will not change the order of things in postindustrial society overnight. But this doubt on the part of scientists must be taken into account as a major factor in evaluating the present and future status of scientific knowledge.
It is all the more necessary to take it into consideration since – and this is the second point – the scientists’ demoralisation has an impact on the central problem of legitimation. I use the word in a broader sense than do contemporary German theorists in their discussions of the question of authority. Take any civil law as an example: it states that a given category of citizens must perform a specific kind of action. Legitimation is the process by which a legislator is authorised to promulgate such a law as a norm. Now take the example of a scientific statement: it is subject to the rule that a statement must fulfil a given set of conditions in order to be accepted as scientific. In this case, legitimation is the process by which a “legislator” dealing with scientific discourse is authorised to prescribe the stated conditions (in general, conditions of internal consistency and experimental verification) determining whether a statement is to be included in that discourse for consideration by the scientific community.
The parallel may appear forced. But as we will see, it is not. The question of the legitimacy of science has been indissociably linked to that of the legitimation of the legislator since the time of Plato. From this point of view, the right to decide what is true is not independent of the right to decide what is just, even if the statements consigned to these two authorities differ in nature. The point is that there is a strict interlinkage between the kind of language called science and the kind called ethics and politics: they both stem from the same perspective, the same “choice” if you will – the choice called the Occident.
When we examine the current status of scientific knowledge at a time when science seems more completely subordinated to the prevailing powers than ever before and, along with the new technologies, is in danger of becoming a major stake in their conflicts – the question of double legitimation, far from receding into the background, necessarily comes to the fore. For it appears in its most complete form, that of reversion, revealing that knowledge and power are simply two sides of the same question: who decides what knowledge is, and who knows what needs to be decided? In the computer age, the question of knowledge is now more than ever a question of government.
3. The Method: Language Games
The reader will already have noticed that in analysing this problem within the framework set forth I have favoured a certain procedure: emphasising facts of language and in particular their pragmatic aspect. To help clarify what follows it would be useful to summarise, however briefly, what is meant here by the term pragmatic.
A denotative utterance such as “The university is sick,” made in the context of a conversation or an interview, positions its sender (the person who utters the statement), its addressee (the person who receives it), and its referent (what the statement deals with) in a specific way: the utterance places (and exposes) the sender in the position of “knower” (he knows what the situation is with the university), the addressee is put in the position of having to give or refuse his assent, and the referent itself is handled in a way unique to denotatives, as something that demands to be correctly identified and expressed by the statement that refers to it.
If we consider a declaration such as “The university is open,” pronounced by a dean or rector at convocation, it is clear that the previous specifications no longer apply. Of course, the meaning of the utterance has to be understood, but that is a general condition of communication and does not aid us in distinguishing the different kinds of utterances or their specific effects. The distinctive feature of this second, “performative,” utterance is that its effect upon the referent coincides with its enunciation. The university is open because it has been declared open in the above-mentioned circumstances. That this is so is not subject to discussion or verification on the part of the addressee, who is immediately placed within the new context created by the utterance. As for the sender, he must be invested with the authority to make such a statement. Actually, we could say it the other way around: the sender is dean or rector – that is, he is invested with the authority to make this kind of statement – only insofar as he can directly affect both the referent (the university) and the addressee (the university staff) in the manner I have indicated.
A different case involves utterances of the type, “Give money to the university”; these are prescriptions. They can be modulated as orders, commands, instructions, recommendations, requests, prayers, pleas, etc. Here, the sender is clearly placed in a position of authority, using the term broadly (including the authority of a sinner over a god who claims to be merciful): that is, he expects the addressee to perform the action referred to. The pragmatics of prescription entail concomitant changes in the posts of addressee and referent.
Of a different order again is the efficiency of a question, a promise, a literary description, a narration, etc. I am summarising. Wittgenstein, taking up the study of language again from scratch, focuses his attention on the effects of different modes of discourse; he calls the various types of utterances he identifies along the way (a few of which I have listed) language games. What he means by this term is that each of the various categories of utterance can be defined in terms of rules specifying their properties and the uses to which they can be put – in exactly the same way as the game of chess is defined by a set of rules determining the properties of each of the pieces, in other words, the proper way to move them.
It is useful to make the following three observations about language games. The first is that their rules do not carry within themselves their own legitimation, but are the object of a contract, explicit or not, between players (which is not to say that the players invent the rules). The second is that if there are no rules, there is no game, that even an infinitesimal modification of one rule alters the nature of the game, that a “move” or utterance that does not satisfy the rules does not belong to the game they define. The third remark is suggested by what has just been said: every utterance should be thought of as a “move” in a game.
This last observation brings us to the first principle underlying our method as a whole: to speak is to fight, in the sense of playing, and speech acts fall within the domain of a general agonistics. This does not necessarily mean that one plays in order to win. A move can be made for the sheer pleasure of its invention: what else is involved in that labor of language harassment undertaken by popular speech and by literature? Great joy is had in the endless invention of turns of phrase, of words and meanings, the process behind the evolution of language on the level of parole. But undoubtedly even this pleasure depends on a feeling of success won at the expense of an adversary – at least one adversary, and a formidable one: the accepted language, or connotation.
This idea of an agonistics of language should not make us lose sight of the second principle, which stands as a complement to it and governs our analysis: that the observable social bond is composed of language “moves.” An elucidation of this proposition will take us to the heart of the matter at hand.
4. The Nature of the Social Bond: The Modern Alternative
If we wish to discuss knowledge in the most highly developed contemporary society, we must answer the preliminary question of what methodological representation to apply to that society. Simplifying to the extreme, it is fair to say that in principle there have been, at least over the last half-century, two basic representational models for society: either society forms a functional whole, or it is divided in two. An illustration of the first model is suggested by Talcott Parsons (at least the postwar Parsons) and his school, and of the second, by the Marxist current (all of its component schools, whatever differences they may have, accept both the principle of class struggle and dialectics as a duality operating within society).
This methodological split, which defines two major kinds of discourse on society, has been handed down from the nineteenth century. The idea that society forms an organic whole, in the absence of which it ceases to be a society (and sociology ceases to have an object of study), dominated the minds of the founders of the French school. Added detail was supplied by functionalism; it took yet another turn in the 1950s with Parsons’s conception of society as a self-regulating system. The theoretical and even material model is no longer the living organism; it is provided by cybernetics, which, during and after the Second World War, expanded the model’s applications.
In Parsons’s work, the principle behind the system is still, if I may say so, optimistic: it corresponds to the stabilisation of the growth economies and societies of abundance under the aegis of a moderate welfare state. In the work of contemporary German theorists, systemtheorie is technocratic, even cynical, not to mention despairing: the harmony between the needs and hopes of individuals or groups and the functions guaranteed by the system is now only a secondary component of its functioning. The true goal of the system, the reason it programs itself like a computer, is the optimisation of the global relationship between input and output, in other words, performativity. Even when its rules are in the process of changing and innovations are occurring, even when its dysfunctions (such as strikes, crises, unemployment, or political revolutions) inspire hope and lead to belief in an alternative, even then what is actually taking place is only an internal readjustment, and its result can be no more than an increase in the system’s “viability.” The only alternative to this kind of performance improvement is entropy, or decline.
Here again, while avoiding the simplifications inherent in a sociology of social theory, it is difficult to deny at least a parallel between this “hard” technocratic version of society and the ascetic effort that was demanded (the fact that it was done in the name of “advanced liberalism” is beside the point) of the most highly developed industrial societies in order to make them competitive – and thus optimise their “rationality” – within the framework of the resumption of economic world war in the 1960s.
Even taking into account the massive displacement intervening between the thought of a man like Comte and the thought of Luhmann, we can discern a common conception of the social: society is a unified totality, a “unicity.” Parsons formulates this clearly: “The most essential condition of successful dynamic analysis is a continual and systematic reference of every problem to the state of the system as a whole … A process or set of conditions either ‘contributes’ to the maintenance (or development) of the system or it is ‘dysfunctional’ in that it detracts from the integration, effectiveness, etc., of the ‘system.’” The “technocrats” also subscribe to this idea. Whence its credibility: it has the means to become a reality, and that is all the proof it needs. This is what Horkheimer called the “paranoia” of reason.
But this realism of systemic self-regulation, and this perfectly sealed circle of facts and interpretations, can be judged paranoid only if one has, or claims to have, at one’s disposal a viewpoint that is in principle immune from their allure. This is the function of the principle of class struggle in theories of society based on the work of Marx.
“Traditional” theory is always in danger of being incorporated into the programming of the social whole as a simple tool for the optimisation of its performance; this is because its desire for a unitary and totalising truth lends itself to the unitary and totalising practice of the system’s managers. “Critical” theory, based on a principle of dualism and wary of syntheses and reconciliations, should be in a position to avoid this fate. What guides Marxism, then, is a different model of society, and a different conception of the function of the knowledge that can be produced by society and acquired from it. This model was born of the struggles accompanying the process of capitalism’s encroachment upon traditional civil societies. There is insufficient space here to chart the vicissitudes of these struggles, which fill more than a century of social, political, and ideological history. We will have to content ourselves with a glance at the balance sheet, which is possible for us to tally today now that their fate is known: in countries with liberal or advanced liberal management, the struggles and their instruments have been transformed into regulators of the system; in communist countries, the totalising model and its totalitarian effect have made a comeback in the name of Marxism itself, and the struggles in question have simply been deprived of the right to exist. Everywhere, the Critique of political economy (the subtitle of Marx’s Capital) and its correlate, the critique of alienated society, are used in one way or another as aids in programming the system.
Of course, certain minorities, such as the Frankfurt School or the group Socialisme ou barbarie, preserved and refined the critical model in opposition to this process. But the social foundation of the principle of division, or class struggle, was blurred to the point of losing all of its radicality; we cannot conceal the fact that the critical model in the end lost its theoretical standing and was reduced to the status of a “utopia” or “hope,” a token protest raised in the name of man or reason or creativity, or again of some social category such as the Third World or the students – on which is conferred in extremis the henceforth improbable function of critical subject.
The sole purpose of this schematic (or skeletal) reminder has been to specify the problematic in which I intend to frame the question of knowledge in advanced industrial societies. For it is impossible to know what the state of knowledge is – in other words, the problems its development and distribution are facing today – without knowing something of the society within which it is situated. And today more than ever, knowing about that society involves first of all choosing what approach the inquiry will take, and that necessarily means choosing how society can answer. One can decide that the principal role of knowledge is as an indispensable element in the functioning of society, and act in accordance with that decision, only if one has already decided that society is a giant machine.
Conversely, one can count on its critical function, and orient its development and distribution in that direction, only after it has been decided that society does not form an integrated whole, but remains haunted by a principle of opposition. The alternative seems clear: it is a choice between the homogeneity and the intrinsic duality of the social, between functional and critical knowledge. But the decision seems difficult, or arbitrary.
It is tempting to avoid the decision altogether by distinguishing two kinds of knowledge. One, the positivist kind, would be directly applicable to technologies bearing on men and materials, and would lend itself to operating as an indispensable productive force within the system. The other, the critical, reflexive, or hermeneutic kind, by reflecting directly or indirectly on values or aims, would resist any such “recuperation.”
5. The Nature of the Social Bond: The Postmodern Perspective
I find this partition solution unacceptable. I suggest that the alternative it attempts to resolve, but only reproduces, is no longer relevant for the societies with which we are concerned and that the solution itself is still caught within a type of oppositional thinking that is out of step with the most vital modes of postmodern knowledge. As I have already said, economic “redeployment” in the current phase of capitalism, aided by a shift in techniques and technology, goes hand in hand with a change in the function of the State: the image of society this syndrome suggests necessitates a serious revision of the alternate approaches considered. For brevity’s sake, suffice it to say that functions of regulation, and therefore of reproduction, are being and will be further withdrawn from administrators and entrusted to machines. Increasingly, the central question is becoming who will have access to the information these machines must have in storage to guarantee that the right decisions are made. Access to data is, and will continue to be, the prerogative of experts of all stripes. The ruling class is and will continue to be the class of decision makers. Even now it is no longer composed of the traditional political class, but of a composite layer of corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organisations.
What is new in all of this is that the old poles of attraction represented by nation-states, parties, professions, institutions, and historical traditions are losing their attraction. And it does not look as though they will be replaced, at least not on their former scale. The Trilateral Commission is not a popular pole of attraction. “Identifying” with the great names, the heroes of contemporary history, is becoming more and more difficult. Dedicating oneself to “catching up with Germany,” the life goal the French president [Giscard d’Estaing at the time this book was published in France] seems to be offering his countrymen, is not exactly exciting. But then again, it is not exactly a life goal. It depends on each individual’s industriousness. Each individual is referred to himself. And each of us knows that our self does not amount to much.
This breaking up of the grand Narratives (discussed below, sections 9 and 10) leads to what some authors analyse in terms of the dissolution of the social bond and the disintegration of social aggregates into a mass of individual atoms thrown into the absurdity of Brownian motion. Nothing of the kind is happening: this point of view, it seems to me, is haunted by the paradisaic representation of a lost “organic” society.
A self does not amount to much, but no self is an island; each exists in a fabric of relations that is now more complex and mobile than ever before. Young or old, man or woman, rich or poor, a person is always located at “nodal points” of specific communication circuits, however tiny these may be. Or better: one is always located at a post through which various kinds of messages pass. No one, not even the least privileged among us, is ever entirely powerless over the messages that traverse and position him at the post of sender, addressee, or referent. One’s mobility in relation to these language game effects (language games, of course, are what this is all about) is tolerable, at least within certain limits (and the limits are vague); it is even solicited by regulatory mechanisms, and in particular by the self-adjustments the system undertakes in order to improve its performance. It may even be said that the system can and must encourage such movement to the extent that it combats its own entropy; the novelty of an unexpected “move,” with its correlative displacement of a partner or group of partners, can supply the system with that increased performativity it forever demands and consumes.
It should now be clear from which perspective I chose language games as my general methodological approach. I am not claiming that the entirety of social relations is of this nature – that will remain an open question. But there is no need to resort to some fiction of social origins to establish that language games are the minimum relation required for society to exist: even before he is born, if only by virtue of the name he is given, the human child is already positioned as the referent in the story recounted by those around him, in relation to which he will inevitably chart his course. Or more simply still, the question of the social bond, insofar as it is a question, is itself a language game, the game of inquiry. It immediately positions the person who asks, as well as the addressee and the referent asked about: it is already the social bond.
On the other hand, in a society whose communication component is becoming more prominent day by day, both as a reality and as an issue, it is clear that language assumes a new importance. It would be superficial to reduce its significance to the traditional alternative between manipulatory speech and the unilateral transmission of messages on the one hand, and free expression and dialogue on the other.
A word on this last point. If the problem is described simply in terms of communication theory, two things are overlooked: first, messages have quite different forms and effects depending on whether they are, for example, denotatives, prescriptives, evaluatives, performatives, etc. It is clear that what is important is not simply the fact that they communicate information. Reducing them to this function is to adopt an outlook which unduly privileges the system’s own interests and point of view. A cybernetic machine does indeed run on information, but the goals programmed into it, for example, originate in prescriptive and evaluative statements it has no way to correct in the course of its functioning – maximising its own performance, for example. How can one guarantee that performance maximisation is the best goal for the social system in every case? In any case the “atoms” forming its matter are competent to handle statements such as these – and this question in particular.
Second, the trivial cybernetic version of information theory misses something of decisive importance, to which I have already called attention: the agonistic aspect of society. The atoms are placed at the crossroads of pragmatic relationships, but they are also displaced by the messages that traverse them, in perpetual motion. Each language partner, when a “move” pertaining to him is made, undergoes a “displacement,” an alteration of some kind that not only affects him in his capacity as addressee and referent, but also as sender. These moves necessarily provoke “countermoves,” and everyone knows that a countermove that is merely reactional is not a “good” move. Reactional countermoves are no more than programmed effects in the opponent’s strategy; they play into his hands and thus have no effect on the balance of power. That is why it is important to increase displacement in the games, and even to disorient it, in such a way as to make an unexpected “move” (a new statement).
What is needed if we are to understand social relations in this manner, on whatever scale we choose, is not only a theory of communication, but a theory of games which accepts agonistics as a founding principle. In this context, it is easy to see that the essential element of newness is not simply “innovation.” Support for this approach can be found in the work of a number of contemporary sociologists, in addition to linguists and philosophers of language. This “atomisation” of the social into flexible networks of language games may seem far removed from the modern reality, which is depicted, on the contrary, as afflicted with bureaucratic paralysis. The objection will be made, at least, that the weight of certain institutions imposes limits on the games, and thus restricts the inventiveness of the players in making their moves. But I think this can be taken into account without causing any particular difficulty.
In the ordinary use of discourse – for example, in a discussion between two friends – the interlocutors use any available ammunition, changing games from one utterance to the next: questions, requests, assertions, and narratives are launched pell-mell into battle. The war is not without rules, but the rules allow and encourage the greatest possible flexibility of utterance.
From this point of view, an institution differs from a conversation in that it always requires supplementary constraints for statements to be declared admissible within its bounds. The constraints function to filter discursive potentials, interrupting possible connections in the communication networks: there are things that should not be said. They also privilege certain classes of statements (sometimes only one) whose predominance characterises the discourse of the particular institution: there are things that should be said, and there are ways of saying them. Thus: orders in the army, prayer in church, denotation in the schools, narration in families, questions in philosophy, performativity in businesses. Bureaucratisation is the outer limit of this tendency.
However, this hypothesis about the institution is still too ‘unwieldy’: its point of departure is an overly ‘reifying’ view of what is institutionalised. We know today that the limits the institution imposes on potential language ‘moves’ are never established once and for all (even if they have been formally defined). Rather, the limits are themselves the stakes and provisional results of language strategies, within the institution and without. Examples: Does the university have a place for language experiments (poetics)? Can you tell stories in a cabinet meeting? Advocate a cause in the barracks? The answers are clear: yes, if the university opens creative workshops; yes, if the cabinet works with prospective scenarios; yes, if the limits of the old institution are displaced. Reciprocally, it can be said that the boundaries only stabilise when they cease to be stakes in the game.
This, I think, is the appropriate approach to contemporary institutions of knowledge.” Jean-Francois Lyotard, The Postmodern Condition: A Report on Knowledge, Chapters One–Five, 1979
Numero Tres—“I was scared out of my mind. I went into the women’s room because it was the only private place in the death house, and I put my head against the tile wall and grabbed the crucifix around my neck. I said, ‘Oh, Jesus God, help me. Don’t let him fall apart. If he falls apart, I fall apart.’ I was in over my head.
All I had agreed to in the beginning was to be a pen pal to this man on Louisiana’s death row. Sure, I said, I could write letters. But the man was all alone, he had no one to visit him.
It was like a current in a river, and I got sucked in. The next thing I knew I was saying, ‘OK, sure, I’ll come visit you.’
He had suggested that on the prison application form for visitors I fill in ‘spiritual advisor,’ and I said, ‘Sure.’ He was Catholic, and I’m a Catholic nun, so I didn’t think much about it; it seemed right.
But I had no idea that at the end, on the evening of the execution, everybody has to leave the death house at 5:45 p.m., everybody but the spiritual advisor. The spiritual advisor stays to the end and witnesses the execution.
Vengeance is whose?
People ask me all the time, “What are you, a nun, doing getting involved with these murderers?” You know how people have these stereotypical ideas about nuns: nuns teach; nuns nurse the sick.
I tell people to go back to the gospel. Look at who Jesus hung out with: lepers, prostitutes, thieves—the throwaways of his day. If we call ourselves Jesus’ disciples, we too have to keep ministering to the marginated, the throwaways, the lepers of today. And there are no more marginated, thrown-away, and leprous people in our society than death-row inmates.
There’s a lot of what I call “biblical quarterbacking” going on in death-penalty debates: people toss in quotes from the Bible to back up what they’ve already decided anyway. People want to not only practice vengeance but also have God agree with them. The same thing happened in this country in the slavery debates and in the debates over women’s suffrage.
Religion is tricky business. Quote that Bible. God said torture. God said kill. God said get even.
Even the Pauline injunction “Vengeance is mine, says the Lord, I will repay” (Rom. 12:19) can be interpreted as a command and a promise—the command to restrain individual impulses toward revenge in exchange for the assurance that God will be only too pleased to handle the grievance in spades.
That God wants to “get even” like the rest of us does not seem to be in question.
One intractable problem, however, is that divine vengeance (barring natural disasters, so-called acts of God) can only be interpreted and exacted by human beings, very human beings.
I can’t accept that.
Jesus Christ, whose way of life I try to follow, refused to meet hate with hate and violence with violence. I pray for the strength to be like him.
I cannot believe in a God who metes out hurt for hurt, pain for pain, torture for torture. Nor do I believe that God invests human representatives with such power to torture and kill. The paths of history are stained with the blood of those who have fallen victim to “God’s Avengers.” Kings, popes, military generals, and heads of state have killed, claiming God’s authority and God’s blessing. I do not believe in such a God.
But here’s the real reason why I got involved with death-row inmates: I got involved with poor people. It took me a while to wake up to the call of the social gospel of Jesus. For years and years when I came to the passages where Jesus identified with poor and marginated people I did some fast-footed mental editing of the scriptures: poor meant “spiritually poor.”
When I read in Matthew 25, “I was hungry and you gave me to eat,” I would say, “Oh, there’s a lot of ways of being hungry.” “I was in prison, and you came to visit me,”—”Oh, there’s a lot of ways we live in prison, you know.”
Other members of my religious community woke up before I did, and we had fierce debates on what our mission should be. In 1980, when my religious community, the Sisters of St. Joseph of Medaille, made a commitment to “stand on the side of the poor,” I assented, but only reluctantly. I resisted this recasting of the faith of my childhood, where what had counted was a personal relationship with God, inner peace, kindness to others, and heaven when this life was done. I didn’t want to struggle with politics and economics. We were nuns, after all, not social workers.
But later that year I finally got it. I began to realize that my spiritual life had been too ethereal, too disconnected. To follow Jesus and to be close to Jesus meant that I needed to seek out the company of poor and struggling people.
So in June 1981 I drove a little brown truck into St. Thomas, a black, inner-city housing project in New Orleans, and began to live there with four other sisters.
Growing up a Southern white girl right on the cusp of the upper class, I had only known black people as my servants. Now it was my turn to serve them.
It didn’t take long to see that for poor people, especially poor black people, there was a greased track to prison and death row. As one Mama in St. Thomas put it: “Our boys leave here in a police car or a hearse.”
I began to understand that some life is valued and some life is not.
It didn’t take long to see how racism worked. When people were killed in St. Thomas and you looked for an account of their deaths in the newspaper, you’d find it buried on some back page as a three-line item. When other people were killed, it was front-page news.
Drug activity took place in the open, but when the sisters went to the mayor’s office to complain, the officials would just shrug their shoulders and say, “Well, you know, Sister, every city has a problem with drugs. At least we know where they are.”
I began to understand that some life is valued and some life is not.
One day a friend of mine from the Prison Coalition Office casually asked me if I’d be a pen pal to someone on death row in Louisiana.
I said, “Sure.” But I had no idea that this answer would be my passport to a strange and bizarre country. God is a mystery, but one of the definite characteristics of God is that God is sneaky.
When I began visiting Patrick Sonnier in 1982, I couldn’t have been more naive about prisons. My only other experience with prisoners had been in the ‘60s when once Sister Cletus and I went to Orleans Parish Prison to play our guitars and sing with the prisoners. This was the era of the singing nuns—”Dominique-nique-nique.”
The guards brought us all into this big room with over 100 prisoners, and I said, “Let’s do ‘If I Had a Hammer,’” and the song took off like a shot. The men really got into it and started making up their own verses—”If I had an Uzi”—laughing and singing loud, and the guards were rolling their eyes, and Sister Cletus and I weren’t invited back there to sing any more.
I wrote Patrick about life at Hope House in St. Thomas, and he told me about life in a 6-by-8-foot cell, where he and 44 other men were confined 23 hours a day. He said how glad he was when summer was over because there was no air in the cells. He’d sometimes wet the sheet from his bunk and put it on the cement floor to try to cool off; or he’d clean out his toilet bowl and stand in it and use a small plastic container to get water from his lavatory and pour it over his body.
Patrick was on death row four years before they killed him.
I made a bad mistake. When I found out about Patrick Sonnier’s crime—that he had killed two teenage kids—I didn’t go to see the victims’ families. I stayed away because I wasn’t sure how to deal with such raw, unadulterated pain. I was a coward. I only met them at Patrick’s pardon-board hearing. They were there to demand Patrick’s execution. I was there to ask the board to show him mercy. It was not a good time to meet.
Here were two sets of parents whose children had been ripped from them. I felt terrible. I was powerless to assuage their grief. It would take me a long time to learn how to help victims’ families, a long time before I would sit at their support-group meetings and hear their unspeakable stories of loss and grief and rage and guilt.
I would learn that the divorce rate for couples who lose a child is over 70 percent—a sad new twist to “until death do us part.” I would learn that often after a murder friends stay away because they don’t know how to respond to the pain.
I don’t see capital punishment as a peripheral issue about some criminals at the edge of society that people want to execute. I see the death penalty connected to the three deepest wounds of our society: racism, poverty, and violence.
In this country, first the hangman’s noose, then the electric chair, and now the lethal-injection gurney have been almost exclusively reserved for those who kill white people.
The rhetoric says that the death penalty will be reserved only for the most heinous crimes, but when you look at how it is applied, you see that in fact there is a great selectivity in the process. When the victim of a violent crime has some kind of status, there is a public outrage, and especially when the victim has been murdered, death—the ultimate punishment—is sought.
But when people of color are killed in the inner city, when homeless people are killed, when the “nobodies” are killed, district attorneys do not seek to avenge their deaths. Black, Hispanic, or poor families who have a loved one murdered not only don’t expect the district attorney’s office to pursue the death penalty—which, of course, is both costly and time-consuming—but are surprised when the case is prosecuted at all.
In Louisiana, murder victims’ families are allowed to sit in the front row in the execution chamber to watch the murderer die. Some families. Not all. Almost never African American families.
Ask Virginia Smith’s African American family. She was 14 when three white youths took her into the woods, raped her, and stabbed her to death. None of them got the death penalty. Their fathers knew the district attorney, and they had all-white juries.
Why do poor people get the death penalty? It has everything to do with the kind of defense they get.
In regard to this first and deepest of America’s wounds, racism, we’d have to change the whole soil of this country for the criminal-justice system not to be administered in a racially biased manner.
The second wound is poverty. Who pays the ultimate penalty for crimes? The poor. Who gets the death penalty? The poor. After all the rhetoric that goes on in legislative assemblies, in the end, when the net is cast out, it is the poor who are selected to die in this country.
And why do poor people get the death penalty? It has everything to do with the kind of defense they get.
When I agreed to write to Patrick Sonnier, I didn’t know much about him except that if he was on death row in Louisiana he had to be poor. And that holds true for virtually all of the more than 3,000 people who now inhabit death-row cells in our country.
Money gets you good defense. That’s why you’ll never see an O.J. Simpson on death row. As the saying goes: “Capital punishment means them without the capital get the punishment.”
I had to learn all this myself. My father was a lawyer. I used to think, “Well, they may not get perfect defense, but at least they get adequate defense.”
I tell you it is so shocking to find out what kind of defense people on death row actually have had.
The man I have been going to see on death row now for over six years is a young black man who was convicted for the killing of a white woman in a small community in Many, Louisiana. He had an all-white jury, and he was tried, convicted, and sentenced to death in just one week. Dobie Williams has now been on death row for 10 years, and I believe he’s innocent. But it is almost impossible for us to get a new trial for him. Why? Because if his attorney did not raise any objections at his trial, we cannot bring them up in appeals.
Finally, the third wound is our penchant for trying to solve our problems with violence. When you witness an execution and watch the toll this process also takes on some of those who are charged with the actual execution—the 12 guards on the strap-down team and the warden—you recognize that part of the moral dilemma of the death penalty is also: who deserves to kill this man?
On my journey with murder victims’ families, I have seen some of them go for vengeance. I have seen families watch executions in the electric chair and still be for vengeance. I have also witnessed the disintegration of families because some parents got so fixated on vengeance that they couldn’t love their other children any more or move on with life.
But I have also watched people like Marietta Jaeger of the group Murder Victims for Reconciliation or Lloyd LeBlanc, the father of one of Patrick Sonnier’s victims. Although they have been through a white-hot fire of loss and violence, they have been healed by God’s grace and been able to overcome their desire for revenge. They are incredible human beings with great courage, and to me they are living witnesses of the gospel and the incredible healing power of Jesus in the midst of violence.
Circle of light
Patrick had tried to protect me from watching him die. He told me he’d be OK. I didn’t have to come with him into the execution chamber. “The electric chair is not a pretty sight, it could scare you,” he told me, trying to be brave.
But I said, “No, no, Pat, if they kill you, I’ll be there.”
Then I remembered how the women were there at the foot of Jesus’ cross, and I said to him, “You look at my face. Look at me, and I will be the face of Christ for you.” I couldn’t bear it that he would die alone. I said, “Don’t you worry. God will help me.”
And there in the women’s room, just a few hours before the execution, my only place of privacy in that place of death, God and I met, and the strength was there, and it was like a circle of light around me. If I tried to think ahead to what would happen at midnight I came unraveled, but there in the present I could hold together and be strong.
And Patrick was strong and kept asking me, “Sister Helen, are you all right?”
Being in that death house was one of the most bizarre, confusing experiences I have ever had. It wasn’t like visiting somebody dying in a hospital, where you can see the person getting weaker and fading. Patrick was so fully alive, talking and responding to me and writing letters to people and eating.
I’d look around at the polished tile floors—everything so neat—all the officials following a protocol, the secretary typing up forms for the witnesses to sign afterwards, the coffee pot percolating, and I kept feeling that I was in a hospital and the final act would be to save this man’s life.
It felt strange and confusing because everyone was so polite. They kept asking Patrick if he needed anything. The chef came by to ask him if he liked his last meal—the steak (medium rare), the potato salad, the apple pie for dessert.
When the warden with the strap-down team came for him, I walked with him. God heard his prayer, “Please, God, hold up my legs.” It was the last piece of dignity he could muster. He wanted to walk.
I saw this dignity in him, and I have seen it in the three men I have accompanied to their deaths. I wonder how I would hold up if I were walking across a floor to a room where people were waiting to kill me.
The essential torture of the death penalty is not finally the physical method of death: bullet or rope or gas or electrical current or injected drugs. The torture happens when conscious human beings are condemned to death and begin to anticipate that death and die a thousand times before they die. They are brought close to death, maybe four hours away, and the phone rings in the death house, and they hear they have received a stay of execution. Then they return to their cells and begin the waiting all over again.
The role of the church
The UN Universal Declaration of Human Rights states that there are two essential human rights that every human being has: the right not to be tortured and the right not to be killed.
I wish Pope John Paul II in his encyclical “The Gospel of Life” had been as firm and unconditional as the UN.
The pope still upholds the right of governments to kill criminals, even though he restricts it to cases of “absolute necessity” and says that because of improvements in modern penal systems such cases are “very rare, if not practically nonexistent.”
Likewise, the US Catholic bishops in their 1980 “Statement on Capital Punishment,” while strongly condemning the death penalty for the unfair and discriminatory manner in which it is imposed, its continuance of the “cycle of violence,” and its fundamental disregard for human dignity, also affirm in principle the right of the state to kill.
But I believe that if we are to have a firm moral bedrock for our society, we must establish that no one may be permitted to kill—no one—and that includes government.
I have been told, although not by any official sources, that the pope has seen “Dead Man Walking” and that he was very taken with it. In fact, last year he interceded on behalf of three people scheduled for executions in the US. Most recently he spoke up for Joseph O’Dell, a death-row inmate in Virginia who is probably innocent.
I am encouraged to see the leadership of the Catholic Church become engaged in this and some other cases. But overall I am afraid I haven’t seen a lot of moral energy coming from Catholic leaders on the issue of the death penalty. I don’t hear many sermons preached about it.
The death penalty is still foremost a poor person’s issue, and of course it’s very controversial. But I’ve learned that if you try to live the gospel of Jesus, controversy will follow you like a hungry dog.
In this last decade of the 20th century, US government officials kill citizens with dispatch with scarcely a murmur of resistance from the Christian citizenry. In fact, surveys of public opinion show that those who profess Christianity tend to favor capital punishment slightly more than the overall population—Catholics more than Protestants.
True, in recent years leadership bodies of most Christian denominations have issued formal statements denouncing the death penalty, but generally that opposition has yet to be translated into aggressive pastoral initiatives to educate clergy and membership on capital punishment. I do not want to pass judgment on church leaders, but I invite them to work harder to do the right thing.
I also believe that we cannot wait for the church leadership to act. We have to put our trust in the church as the people of God; things have to come up from the grassroots.
I have no doubt that we will one day abolish the death penalty in America.
The religious community has a crucial role in educating the public about the fact that government killings are too costly for us, not only financially, but—more important—morally. Allowing our government to kill citizens compromises the deepest moral values upon which this country was conceived: the inviolable dignity of human persons.
I have no doubt that we will one day abolish the death penalty in America. One day all the death instruments in this country—electric chairs, gas chambers, and lethal-injection needles—will be housed behind velvet ropes in museums.
Today, however, executions are still the order of the day, and people are being executed at an ever-increasing rate in this country.
People are scared of crime, and they’ve been manipulated by politicians who push this button for all it’s worth. For politicians, the death penalty is a convenient symbol and an easy way to prove how tough they are on criminals and crime. It allows them to avoid tackling the complex issue of how to get to the roots of crime in our communities.
But we may be close to bottoming out, which has to happen before momentum can build in the other direction. Right now we may be at just the beginning of the dawning of consciousness.
The death penalty is firmly in place, but people are beginning to ask, “If this is supposed to be the solution, how come we’re not feeling any better? How come none of us feels safer?” People are beginning to realize that they have been duped and that the death penalty has not so much to do with crime as it has to do with politics.
The bottoming out that has to happen is kind of like in the 12-step program: the first step is to admit that as a society we have a problem and need help.
People are capable of change, and the beauty and the power of the gospel is that when people hear it, they will respond to it.
When people support executions, it is not out of malice or ill will or hardness of heart or meanness of spirit. It is, quite simply, that they don’t know the truth of what is going on.
And that is not by accident. The secrecy surrounding executions makes it possible for executions to continue. I am convinced that if executions were made public, the torture and violence would be unmasked and we would be shamed into abolishing executions.
When you accompany someone to the execution, as I have done three times as a spiritual advisor, everything becomes very crystallized, distilled, and stripped to the essentials. You are in this building in the middle of the night, and all these people are organized to kill this man. And the gospel comes to you as it never has before: Are you for compassion, or are you for violence? Are you for mercy, or are you for vengeance? Are you for love, or are you for hate? Are you for life, or are you for death?
And the words of Jesus from the gospel kept coming to me that night: “And the last will be first” and “This too is my beloved son, hear him.” On death row I grasped with such solidity and fire the grace of God in all human beings, the dignity in all human beings.
I am not saying that Patrick Sonnier was a hero. I do not want to glorify him. He did the most terrible crime of all. He killed. But he was a human being, and he had a transcendence, a dignity. He—like each of us—was more than the worst thing he had done in his life. And I have one consolation: he died well. I hope I die half as well.
That night I walked with him, prayed with him through Isaiah 43, “I have called you by your name, you are mine.” I played for him the tape “Be Not Afraid,” which we had also played at the communion service we had before he died.
In his last words he expressed his sorrow to the victims’ family. But then he said to the warden and to the unseen executioner behind the plywood panel, “but killing me is wrong, too.”
At the end I was amazed at how ordinary the last moments were. He walked to the dark oak chair and sat in it. As guards were strapping his legs and arms and trunk he found my face and told me that he loved me. His last words of life were words of love and thankfulness. I took them in like a lightning rod.
I kept thinking of the execution of Jesus. I said to myself, “My God, how many times have I looked at that crucifix? How many times have we heard that story? How many times have we heard that Mary was there?”
I was watching a person being killed with an electrical current, in a few seconds. I couldn’t imagine what it must have been for Jesus to be executed, hanging there on the cross, dying slowly.
It gave me an entirely new awareness of what it means to have an executed criminal as a savior. What a scandal that must have been!
I held on to the Bible Pat had given me. I closed my eyes because I knew Pat couldn’t see me any more. I heard them clank the switch. They pulled it three times. Then I looked up. One hand had grasped the chair. The fingers on the other were kind of curled. The doctor went in. They removed the mask. He was dead. And I began to pray to him.
I came out of the execution chamber that night having watched a man die in front of my eyes, whose last words were words of love. And when I turned to his Bible, thumbworn and underlined, I found that in the front of his Bible, where births, marriages, and deaths are recorded, he had written in his own handwriting the date of his own death.
It reminded me of Jesus’ words: “You don’t know the day and the hour.” But when you die at the hands of the state of Louisiana, you do know the day and the hour very well.
Out of this experience has come a fire that has galvanized me and that cannot die in me. In the Catholic Church, when we receive sacraments, we say that an indelible mark is left on our souls. Being present at Pat’s death has left an indelible mark on my soul. I think of it as a kind of second baptism in my life, for it forever committed me to pursuing the gospel as it relates to poor people and the quest for justice.
And it is this that has made me speak out about the death penalty ever since, and I will continue to do so to my dying day. I cannot not tell this story and proclaim the gospel message as I came to understand it that night. And it was this experience that led me to write the book Dead Man Walking.
How the book got published, the movie got produced, and how both have been received—to me it’s nothing short of a rip-roaring, Old-Testament, Yahweh-split-the-Red-Sea miracle.
I made a promise to Patrick before he died: ‘Patrick, I will tell your story across this land.’ I didn’t know what I was saying. ‘Perhaps then your death can be redemptive for other people.'” Helen Prejean, “Would Jesus Pull the Switch?” DioSCG, 1997
Numero Cuatro—“The past, in the form of our individual genes’ predecessors and in terms of the collective expressions of human individuality that have held sway for tens of thousands of years, completely determines the real parameters of this discourse and activity (in regard to ‘drugs’). Yet we mostly dismiss, or are almost completely unaware of, this readily discernible history. To an extent, at least quite plausibly, our characteristics in the universe of ‘special substances’ are at least somewhat common among other mammals. Cats will imbibe their catnip; dogs slurp up fermented foodstuffs; the ungulates have plants that send them spinning on occasion.
Whether or not any of this activity among our evolutionary kin is volitional, Homo sapiens in any event in some sense almost universally choose to, or need to, ‘get high.’ The ineluctable actuality of this statement is possible to illustrate in many, many ways, three of which form the primary focus today. In the first place, anthropological, archeological, and forensic science point out the omnipresence, over a hundred thousand years or more, of consciousness alteration in the overall species formation of human social bonds. In the second place, mythic and legendary and other early storytelling sources reveal this same tendency. Finally, historical proof also elicits the same or very similar conclusions.
Robert Graves, in his 1958 foreword to still-iconic volumes, provides a place from which we can briefly convey all of these contextual components. ‘Since revising The Greek Myths…, I have had second thoughts about the drunken god Dionysus, about the Centaurs with their contradictory reputation for wisdom and misdemeanour, and about the nature of divine ambrosia and nectar. These subjects are closely related, because the Centaurs worshipped Dionysus, whose wild autumnal feast was called ‘the Ambrosia.’ I no longer believe that when his Maenads ran raging around the countryside, tearing animals or children in pieces…, they had intoxicated themselves solely on wine or ivy-ale. The evidence suggests that Satyrs (goat-totem tribesmen), Centaurs (horse-totem tribesmen), and their Maenad womenfolk, used these brews to wash down mouthfuls of a far stronger drug: namely a raw mushroom, Amanita muscaria, which induces hallucinations, senseless rioting, prophetic sight, erotic energy, and remarkable muscular strength. Some hours of this ecstasy are followed by complete inertia.’
Graves goes on to admit that contemporary rituals in Mexico parallel what he describes. He himself partook of these rites, which in the Western Hemisphere utilise psilocybin. The Maenads’ ‘tearing of the heads’ of their victims is easily imaginable as a symbolic beheading of the mushroom itself, which in both ancient Greek and present-day Tlaloc bears the moniker ‘food of the gods.’
Terence McKenna, both much maligned and much worshipped but in any event a credentialed scholar who knew how to gather and present evidence, titled his magnum opus with the same phrase. Food of the Gods advances the thesis that hallucinogens, particularly psilocybin mushrooms, impacted human cultural and social development. He essentially sees what Riane Eisler labels a ‘partnership’ model as having been possible during the many millennia when these little fungi were a regular part of human meals.
“The primate tendency to form dominance hierarchies was temporarily interrupted for about 100,000 years by the psilocybin in the paleolithic diet. This behavioral style of male dominance was chemically interrupted by psilocybin in the diet, so it allowed the style of social organisation called partnership to emerge, and … that occurred during the period when language, altruism, planning, moral values, aesthetics, music and so forth — everything associated with humanness — emerged… .
About 12,000 years ago, the mushrooms left the human diet because they were no longer available, due to climatological change, and the previous tendency to form dominance hierarchies re-emerged. So, this is what the historic dilemma is: we have all these qualities that were evolved during the suppression of male dominance that are now somewhat at loggerheads with the tendency of society in a situation of re-established male dominance.
The paleolithic situation was orgiastic and this made it impossible for men to trace lines of male paternity, consequently there was no concept of ‘my children’ for men. It was ‘our children’ meaning ‘we, the group.’”
Wanton wildness; indiscriminate orgies; explosive expression of music and dance and elocution; sacred ‘partying’ that went on for days and days: these were our ancestors’ annual bows to nature and themselves. Having never attended an ‘ecstasy rave,’ I could not say first hand, but a certain descriptive resonance, based on recorded observations, feels approximately accurate.
In any case, that our type of creatures inaugurated their ‘social conquest of Earth,’ as E.O. Wilson put the case, in the presence of such activity is unquestionably likely and arguably certain. As often happens when such a point-of-view gets closer and closer to a sure bet, the story or intellectual history of the proposition itself is quite interesting.
Just a cursory glance at this chronicle is possible today, but even this briefing will contain high points, so to say, well worth further investigation. In any event, collected assessments, individually composed monographs, and aggregated materials on these subjects are now ubiquitous in scholarship, spiritual thinking, and elsewhere.
ANTHROPOLOGY & SUCH
Friedrich Engels not only worked alongside and supported financially the lifelong efforts of Karl Marx. He was also, in his own right, a groundbreaking researcher on several fronts. One of them was establishing the social bases and implications of the whole human story, essentially part of the initiation of an anthropological perspective.
In any case, in his The Origin of the Family, Private Property and the State, he drew liberally from Lewis Henry Morgan’s seminal work. While Engels did not himself infer ethnobotanical facts, he certainly implied that the natural foundations of Native American ritual included plants and their use.
“The possession of common religious conceptions (Mythology) and ceremonies—After the fashion of barbarians the American Indians were a religious people. Their mythology has not yet been studied at all critically. They already embodied their religious ideas—spirits of every kind—in human form; but the lower stage of barbarism, which they had reached, still knows no plastic representations, so-called idols. Their religion is a cult of nature and of elemental forces, in process of development to polytheism. The various tribes had their regular festivals, with definite rites, especially dances and games. Dancing particularly was an essential part of all religious ceremonies; each tribe held its own celebration separately.”
Marx himself took up the fascinating challenges that his colleague had laid out. His final work, interrupted by mortality, was largely to be a study of Native American social lives. When he went to his grave, he had already collected hundreds of pages of notes that included multiple entries about employing exalted plants, most often tobacco that councils smoked together in ritual fashion.
As alluded to above, Engels based much of his thinking, as did Marx, on the efforts of the pathfinding American anthropologist Lewis Henry Morgan. Though Morgan’s primary intention was to depict the lineage models and relations of aboriginal power that developed in the Americas, and to some extent around the world, he too noted and implied that universal holy practices existed, essentially group rituals and shamanism. Moreover, these were omnipresent in a way that inherently fit human use of herbal and other earthen substances that people had concluded were sacrosanct.
Decades hence, after many other intervening investigators had added their assessments, Bronislaw Malinowski also wrote extensively on these matters, often taking as his locus of observation Australia and the archipelagos of the Pacific. He was squeamish about some of what he learned, labelling as ‘boasting’ what indigenous inhabitants conveyed to him. But the content of what he does include in his writings establishes again and again, in his Sex and Repression in Savage Society and elsewhere, that rites that initiated young people as adults, sexual animals who would sire and bear children, also involved secret rituals, formulas, and holy plants.
A chillingly evocative recounting tells of a legendary sister’s seduction of her brother and how their deaths resulted from this transgression, even as they discovered a key element of their clan’s magic and persistence. “‘The two are dead in the grotto of Bokaraywata and the sulumwoya is growing out of their bodies. I must go.’ He took his canoe, and he sailed across the sea between his island and that of Kitava. Then from Kitava he went to the main island, till he alighted on the tragic beach. There he saw the reef-heron hovering over the grotto. He went in and he saw the sulumwoya plant growing out of the lovers’ chests. He then went to the village. The mother avowed the shame which had fallen on her family. She gave him the magical formula, which he learned by heart.”
This same sulumwoya, in his The Sexual Life of Savages, Malinowski portrays as the basis for a love elixir specifically and for “love magic” generally. Other oils and the stimulant betel also bear mention.
In the years since this monograph’s publication in 1932, hundreds of other volumes and thousands of articles have considered this minty herb alone, at the same time that different scholars, apparently in the hundreds of thousands, have delved into hallucinogenic mushrooms and various other growths from Earth’s bountiful stores that have played this role, a central component of erotic ritual and performance, around the world. Most of these volumes eventually touch on the issues at the heart of Malinowski’s inquiry—the borders and connections among magic, science, and religion. Inescapably, our examination in this report also ponders such matters.
Joseph Needham, who anthologized Malinowski’s work, gathered a dozen thinkers around him to discuss and, inasmuch as possible, discern the conflicts and possibilities for rapprochement between science and religion. Though nearly a century old, the discussions in the collection might easily have originated yesterday; truly, they might just as well have come from a hundred years hence, should humanity manage not to immolate itself.
In his introductory passage, Malinowski speaks to what traditional values brought to a so-called primitive culture. Stripped of any lingering supremacist bias, these arguments have power still. And, to those who would overturn a hundred thousand years of human sacred practice in order to achieve some temporary political economic goal or objective of social dominance, they might carry at least the echo of a warning.
Although he is speaking about coming-of-age ceremonies in the lines below, his point is that inculcating the reality of the present’s dependence on past generations lies at the heart of what happens in those circumstances. That these transitional rites involved altered awareness goes without saying: that was the purpose. For our ends, we might at least acknowledge that forgetting, lying about, or otherwise distorting our past so as to make it unrecognizable ought to seem of dubious utility, given the way that beginnings lay the basis for completion, come what may.
“The primitive man’s share of knowledge, his social fabric, his customs and beliefs, are the invaluable yield of devious experience of his forefathers, bought at an extravagant price and to be maintained at any cost. Thus, of all his qualities, truth to tradition is the most important, and a society which makes its tradition sacred has gained by it an inestimable advantage of power and permanence. Such beliefs and practices, therefore, which put a halo of sanctity round tradition and a supernatural stamp upon it, will have a ‘survival value’ for the type of civilisation in which they have been evolved.
We may, therefore, lay down the main function of initiation ceremonies: they are a ritual and dramatic expression of the supreme power and value of tradition in primitive societies. There, they also serve to impress this power and value upon the minds of each generation, and they are at the same time an extremely efficient means of transmitting tribal lore, of ensuring continuity in tradition and of maintaining tribal cohesion.
We still have to ask: What is the relation between the purely physiological fact of bodily maturity which these ceremonies mark, and their social and religious aspect? We see at once that religion does something more, infinitely more, than the mere ‘sacralising of a crisis of life.’ From a natural event it makes a social transition; to the fact of bodily maturity it adds the vast conception of entry into manhood with its duties, privileges, responsibilities, above all with its knowledge of tradition and the communion with sacred things and beings. There is thus a creative element in the rites of religious nature. The act establishes not only a social event in the life of the individual but also a spiritual metamorphosis, both associated with the biological event but transcending it in importance and significance.”
A modern onlooker might find tempting a phrase like “polymorphous perverse” as a descriptor of these forebears of ours. The types of practices that passed on secrets of sacred acts, that made the sexuality that we treat as shameful a part of a public rite, under the influence of plants with godlike powers, must strike the prudish prudence of “just say no” as positively salacious.
However, such a judgment is far outside of any rooted reading. In the context of often the thinnest of margins of existence, such developments were the opposite of prurient. They were survival techniques that affirmed the need to love and create in the most fundamental way, as procreators in the teeth of beasts and other daunting components of the world and its creatures. In any case, judged harshly or not, this juicy jettisoning of inhibition has acted as an ineluctable bedrock that founds human socialization and coming-of-age.
Of course, dozens of other investigators also contributed to this early outpouring of anthropological ideas. One might go on at length if one wanted to conduct thorough research in this arena. In any event, the contemporary scene has not only for the most part confirmed the extended outlines of these earlier conclusions but has also broadened the scope of study and deepened both the empirical basis and the theoretical richness of this area of knowledge, the focus of which is a reality-based description of our own nature.
Thus, as such scholars as Helen Fisher state frankly, one upshot of ruminating on these issues is the unavoidable conclusion that humanity’s sex drive has been rich and potent. And this longing to couple has for tens of thousands of years connected with eating, drinking, and smoking substances that those in charge of today’s societies now insist are criminal merely to possess.
Fisher—who absolutely abhors the hideous sexual and neural and amorous ‘side-effects’ of the serotonin-absorption-inhibiting ‘drugs of choice’ of the present pass—may only elliptically make this conjunction about the aphrodisiacal effects of various drugs, but others do so very explicitly: the popularly invoked formulation, “sex and drugs and rock-and-roll,” in fact forms an interconnected threesome that underlies, at many levels, essential aspects of being human.
One way of thinking about this invokes a deep analysis of language itself, where even a quick look reveals simply countless ways that, for example, psychedelic fungi evoke sexual meaning. Entheogens—plants that bring contact with God or the infinite—in this view act as a catalyst to culture’s deepest delvings.
“This mushroom on the wick is called snuff in English, but ‘snot’ in former times. …In Greek ‘snot’ is muxa but also the nose or nozzle of the oil lamp. The mushroom was linked with nasal mucous because the membrum virile discharges a mucous liquor of magical potency. The lamp-nozzle with its dripping wick carries the same idea with fire involved. Ancient medical writers and Pliny attributed a sexual character to Amanita muscaria. There is a startling association in the complex of words and figures of speech for fire, the nose and its mucous, and mushrooms, and various erotic connotations. The same fossilized figures survive in French, Spanish, and English. ‘Punk’ in English…is the name applied to a powdered fungus…; it also means a harlot who sparks her client. In French, the word for ‘punk’ is amadou. ‘Spunk’ in colloquial English means seminal fluid. It is a doublet for ‘punk’ and both are cognate with Greek spongia (or ‘sponge’) and Latin fungus.”
In Spanish, the association is even more graphic. There, “the word for snuff or the burnt end of a wick is seta, meaning mushroom, and also moco, meaning mucous.” In addition to linguistic evidence, one can readily locate scores of citations that employ spiritual, sociological, psychological, genetic, sociobiological, and interdisciplinary ideation to espouse and explain the intertwining of Eros and a plant world as much a part of human engagement with sexuality as is copulation itself. Overall, tens of millions of sources probe these interesting matters.
Not that the sexual accoutrements of this universal deployment of sacred plants were exclusive or primary in these affairs; quite the contrary, the ritual and therapeutic use of hallucinogens and other botanical specimens with ‘mind-blowing’ effects impacted many realms of early humans’ lives. Malinowski and countless other sources have pointed out this truth. People gained confidence from their imbibing. The ‘magic’ applied in the spheres of domestic production, hunting, and dealings between clans, as well as in various healing ways.
As with Cupid’s and Psyche’s play, a truly massive trove of documents deals with the ways that occasional, ritualised, sacral drug use served as a substrate to enculturation, maturation, and different aspects of human life for a hundred millennia. Such experiences in a real sense made life possible; that is why they were both so extensive and so persistent. Logically, their continued—police and sold-out, so-called scientists might chime in, “intractable”—clinging to human behavior invites us still to affirm our lives rather than snuff them out.
A modern scholar synthesises many of these ideas in [The Evolution of Paleolithic Cosmology and Spiritual Consciousness and the Temporal and Frontal Lobes](http://journalofcosmology.com/Consciousness155.html). Of course, as one of many threads about such conceptualisations makes plain, psychoactive plants and their ritual use attended every step in this evolutionary journey. The general point is important to expand on at some length.
“Complex mortuary rituals and belief in the transmigration of the soul, of a world beyond the grave, has been a human characteristic for at least 100,000 years. The emergence of spiritual consciousness and its symbolism, is directly linked to the evolution of the temporal and frontal lobes and to the Neanderthal and Cro-Magnon peoples, and then the first cosmologies, 20,000 to 30,000 years ago. These ancient peoples of the Upper and Middle Paleolithic were capable of experiencing love, fear, and mystical awe, and they carefully buried those they loved and lost.
They believed in spirits and ghosts which dwelled in a heavenly land of dreams, and interred their dead in sleeping positions and with tools, ornaments and flowers. By 30,000 years ago, and with the expansion of the frontal lobes, they created symbolic rituals to help them understand and gain control over the spiritual realms, and created signs and symbols which could generate feelings of awe regardless of time or culture.
Because they believed souls ascended to the heavens, the people of the Paleolithic searched the heavens for signs, and between 30,000 to 20,000 years ago, they observed and symbolically depicted the association between woman’s menstrual cycle and the moon, patterns formed by stars, and the relationship between Earth, the sun, and the four seasons. These include depictions of … the 13 new moons in a solar year. Although it is impossible to date these discoveries with precision, it can be concluded that spiritual consciousness first began to evolve over 100,000 years ago, and this gave birth to the first heavenly cosmologies over 20,000 years ago.”
A compilation that looks at science and technology as a hundred-thousand-year continuum recognises the use of hallucinogenic plants as a technique worthy of mention. It proposes that a rational contextualisation of human advance would have no choice but to consider such activities, which in any event almost certainly accompanied humanity’s relatively rapid spread to every corner of the planet outside of Antarctica.
Another recent study has noticed the central role of herbs and other plants in the production of magic and knowledge, empirical medicine and divination. From the highlands of Northern Europe and the British Isles to the New Guinea wilds, such rubrics have appeared, schematics that in multiple ways evidence ancient roots.
In practice, social problems have led groups, from the dawn of the human day, so to speak, to “consult healers, who usually belong to distant communities and even non-[ethnic] groups. These ‘dream men,’ whom we would label mediums, enter altered states of consciousness through the rapid inhaling of tobacco and the use of other plant materials that produce trances and hallucinations. Information is also gleaned from dreams. Such diviners are then able to identify” correct courses of action or guilty parties or complex compromises as a result of such chemically-mediated foresight and insight.
Investigating these kinds of phenomena and then labelling them as shamanism, meanwhile, has become both a popular and important corner of the scholarly enterprise. Many anthropologists and archaeologists who participate in this undertaking have noted the obvious longevity of these practices and the concomitant probability that drug-induced hallucinations accompanied such designations of ‘guiding spirits’ within the clans or bands from which we and our immediate forebears have sprung.
The capacity for this kind of ‘second sight’ is “of great antiquity,” probably hundreds of thousands of years at least. As a South African professor stated the point, “The widespread appearance of shamanism results not from diffusion but…from a universal neurological inheritance that includes the capacity of the nervous system to enter altered states and the need to make sense of the resultant hallucinations within a foraging community. There seem to be no other explanations for the remarkable similarities between shamanic traditions worldwide. It is therefore probable that some form of shamanism…was practiced by the hunter-gatherers of Upper Paleolithic Europe.”
More recent scholars, putting into practice advances in forensic science—dating and identifications of molecules and more—can now say without equivocation that aboriginal human networks from tens of thousands of years ago frequently engaged in devotions that involved heightened consciousness, often including hallucinatory and other states of arousal. Such evidence comes from around the planet.
It indicates the role of such ‘expanded awareness,’ for example in The Shamans of Prehistory: Trance and Magic in the Painted Caves, in the production of magnificent artistic output tens of thousands of years old. It countenances the probability that imbibing one or another psychoactive plant or fungus contributed to, or formed a bedrock of, the rites both that defined early social development in aggregate and that related to the use of these grottoes and caverns so filled with an evocative, creative mystery that astounds us to this day.
From the Americas, one finds that these patterns have characterized past human groups from the Amazon to Mexico at the very least. The use of psilocybin and ayahuasca is, at a minimum, thousands of years old.
Such practices were medicinal. As elsewhere, they frequently pertained both to carnal relations generally and to the sexual initiation of pubescent members of the social group. Some data indicate that these substances played a part in the rites of human sacrifice that came to characterize Aztec and Mayan cultures at the ends of their ecological ropes.
From throughout the European neck of the [Eurasian land mass](http://www.pearsonhighered.com/assets/hip/us/hip_us_pearsonhighered/samplechapter/0205744222.pdf), artistry of different sorts proves the presence of organically induced hallucinations. Graves have contained residues of psychoactive materials placed with the corpse, as with cannabis at a burial site in Siberia.
“Buried with the ‘princess’ were six saddled-and-bridled horses, bronze and gold ornaments – and a small canister of cannabis. Whether she was actually a ‘princess’, as her name implies, is not known. Experts are divided over whether she was a poet, healer, or holy woman.”
From East Asia, one finds evidence of marijuana gathered in quantity five thousand years or more ago. Moreover, hallucinogens have a many-sided and ancient lineage in Japan. “Magic mushrooms in Japan are often referred to as dance-inducing (Odoritake and Maitake) or laughter-inducing (Waraitake) mushrooms. These ‘laughing mushrooms’ are the subject of a number of folktales as well as the names of ancient dance forms in Japan.”
From before the dawn of history, various hallucinogenic or otherwise intoxicating plants were present in China as well. Wherever one looks in these particular ‘cradles of civilization,’ forebears took part ritually in gatherings at which participants took into their bodies the basis for transformed consciousness and vision.
From the Pacific and South Asia, we have already seen extensive documentation regarding the cultures and peoples of Oceania. India attests to ancient usage of Soma, a plant-based substance that led to reputedly almost omnipotent experiences. “The identity of the ancient plant known as Soma is one of the greatest unsolved mysteries in the field of religious history. Common in the religious lore of both ancient India and Persia, the sacred Soma plant was considered a God. When Soma was pressed and made into a drink, the ancient worshipper who imbibed it gained the powerful attributes of this God. The origins of Soma go back into the shadowy time of prehistory, back to the common Aryan ancestors of both the Vedic Hindu religion of India and the Persian religion of Mazdaism.”
Thus, when, plus-or-minus five thousand years ago, Aryan conquerors came on the scene in the subcontinent, they brought with them an already well-formulated and long-practiced drug dynamic. This ‘Soma’ included many aspects of the soon-to-predominate culture. It related religiously to the powers of the gods to grant strength to believers. It related to political control. It informed the musical tradition of the Rig Veda.
“A significant number of its hymns sing the praises of soma, a psychoactive potion that was made and consumed during a ritual sacrifice. Using 108 bricks, a hearth was constructed in the shape of a bird, within which priests would build a fire. An animal, tethered to a post, was beheaded and the main part of the ritual began. The priests would lay out a leather mat and place upon it two circular grinding-stones. A certain plant was crushed between these stones with an admixture of milk or water to make an inebriating drink which was then consumed. As this process allows no time for fermentation we must infer that soma (also called amṛita, “immortality”) was a decoction of a psychoactive plant, and not alcohol. Alcohol was certainly known to the Aryans but it was allowed only to the caste of warriors and kings (Skt., kṣatriya).”
The entire globe proffers scholarship, investigation, and knowledge of the folk roots of many of the drugs in the pharmacopoeia, a substantial portion of which served ‘ritual’ purposes and other ways of affecting consciousness. General accounts of the origins of language, religion, and culture now treat the contention as close-to-established theory. Such life forms as psilocybin and tobacco and coca and on and on and on, acted as a conduit to humanity’s unfolding persistence.
Moreover, the intake of these transformative lifeforms probably predated culture and anatomically modern humans as such; the tendency is a much more deeply embedded phenomenon. According to some scholars, the pattern stems from the following of ungulates and the partaking of the fungal forms that proliferated in the herbivores’ stools. Such dietary choices likely came before primates en masse migrated from Africa and continued through successive waves of wandering that underlay the manifestation of people more or less just like us.
That such thoughts constitute components—and arguably core pieces of the overall construct at that—of science, of scientific knowledge, discomfits many folks. This is arguably especially true in the United States, where puritanical positions all too often stubbornly persist. One recent scholar, whose works illuminate the inevitable conjunctions of magic, religion, and science, of actual awareness and fingers-crossed mumbo jumbo, speaks eloquently about these things.
His thoughts establish a sort of benchmark for this essay’s contention that a widespread and revered practice of psychoactive and psychedelic ritual emanates from every social sector of the world, past and present. In his estimation, such ubiquity is only possible inasmuch as these practices worked for the peoples involved, and it proves that the substances themselves were part of a complicated web of problem and need, of human possibility and consciousness. He takes us from China to Europe, from Africa to Australia, from the Pacific islands to the Americas.
‘[M]any Northwest Coast people…do have so intense an emotional feeling [about nature] that ‘love’ is the only word [for it]. …[They] feel that the trees, animals, and rocks of their areas are home and family—living spirit persons…the result of thousands of years of having to take forests and animals seriously. If one has to interact with plants and animals over time, one cannot help developing emotional and moral feelings toward them. Humans simply do not remain neutral about things they have to take constantly into account. Interactions construct our world. Our very selves are born of interacting. Interactions with beings we take seriously are powerful emotional events, and, indeed, more than that; …Our selves are the products of our interactions… .
In the cases noted here, these Native American peoples must depend on the forests and animals, and must be responsible for caring for them. …A worldview grounded in this sort of involvement does not lead to cutting the world into magic, science, and religion. It leads, rather, to cutting the world into ethical versus nonethical behavior, into local versus nonlocal place, into factual versus nonfactual claims, into effective versus ineffective ways of living and working, into prosocial versus antisocial behavior (remembering that animals and plants are part of society), and into one’s immediate social world—including animals and plants and everything else.’
In such a context, one engages with all that nature proffers. One does not generally reject, let alone criminalize, those things that have through immemorial practice expressed rites of passage and transformation. To do so would seem not only bizarre but also immoral, perhaps wickedly insane.
Again, one could continue. However, perhaps the finding should seem plausible enough without venturing further. Currently-‘proscribed’ plants have served as chosen and beneficent coventurers on the paths that human ‘traffic’ has followed. Strong medicines, stalwart tonics, and useful stimulants that our ancestors have utilized ‘time out of mind’ are now the basis for ‘life-in-prison’ or worse.” Jim Hickey, “Capitalism on Drugs: the Political Economy of Contraband and Control From Heroin to Ritalin;” Contributoria, 2014