In most forms of publishing it is a safe assumption that the medium itself will not require too many concessions to its intended audience. Even the most cynical book and newspaper editors do not doubt the ability of their readers to turn a page or look something up in an index. The remote control and the advent of red buttons may present new and exciting challenges for interaction design, but from the editorial point of view, television really only requires its audience to sit there and watch (assuming they have made it past the not-to-be-underestimated obstacle of the plug and the ‘on’ button).
Books, newspapers, and the screen do, of course, demand that editors make decisions about the way in which to present material through the medium. Knowing how to prioritise articles, how they should be written, what structure the material should take, and how it is likely to play with the intended audience are among the many creative decisions that editors need to make. But how easily the audience engages with the medium itself (in other words, how ‘usable’ they find it) is something happily taken for granted.
Many of the creative decisions that face editors in other publishing environments apply equally to those who edit web-sites. But web-sites contain an added complication in that they require those who are reading (or watching) them to operate a machine at the same time. This means that web editors act as intermediaries between a complex bit of kit and the many and various ways in which it is (mis)understood and used.
Web-sites and their incipient grammar (hyperlinks, ‘navigation’, ‘menus’, ‘crumb trails’, ‘home’ pages, ‘landing’ pages, ‘about us’ pages, etc) have evolved within the prior technical grammar of operating systems and the real-world metaphors they often adopt (‘windows’, ‘recycle bins’, ‘files and folders’). ‘Human-computer interaction’ is the academic field that studies this collective syntax and vocabulary. Not surprisingly, it is a strange hybrid sub-discipline that borrows equally from computer science and fields like behavioural psychology.
A branch of Human-Computer Interaction that has become increasingly standard industry practice, particularly for large organisations, is user-centred design (sometimes called human-centred design). Those with editorial responsibility for web-sites call on the services of experts in user-centred design to carry out studies into the context of business and user needs and to make it as easy as possible for the web-site to meet those needs. The ‘menus’, ‘site maps’, ‘search’, ‘shopping trolleys’ and, latterly, ‘customisable home-pages’ and ‘tag clouds’ become the signs and symbols (otherwise known as ‘functionality’) through which users then make sense of the site.
This has also had a knock-on effect on the way in which web editors edit content for the web. User research shows that the threshold of patience is so low that only a pitiful percentage of users read line by line. Most skim down the left-hand side, dodging the text and searching through the headings, links, keywords, bulleted lists or anything else that will help them quickly find the precious bit of content or information they are looking for. So when writing for the web, we are broadly taught to indulge a habit that shows something less than the patience of children. Text is pithy, to the point, broken down into digestible chunks and written for people with a low reading age.
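By way of illustration, here is a minimal sketch of the kind of markup this teaching tends to produce (the page and its content are invented for the purpose): a descriptive heading, one short chunk of text, and a bulleted list of labelled links that a skim-reader can pick keywords out of, rather than an unbroken column of prose.

```html
<!-- A hypothetical fragment of 'scannable' web copy: heading first,
     one short paragraph, then a bulleted list of labelled links. -->
<article>
  <h2>Renew your parking permit</h2>
  <p>Most permits can be renewed online in under five minutes.</p>
  <ul>
    <li><a href="/permits/renew">Renew online</a> (have your permit number ready)</li>
    <li><a href="/permits/fees">Check the current fees</a></li>
    <li><a href="/permits/contact">Contact the permits team</a></li>
  </ul>
</article>
```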
This means that creative, design and editorial decisions are informed by research which clearly shows that it is necessary to – in a sense – ‘dumb down’ the complex bit of machinery that stands before each user, in order to make it useful. In other words, web-sites (and by extension the broader world of operating systems and software design) start by creating an interface that deliberately simplifies what is ‘actually’ going on. They are, in effect, not only tolerating but also requiring a professional and largely benevolent form of deception. Or, to soften this last word, they encourage a form of communication that is analytically imprecise.
Virtual irony
A form of irony, therefore, runs right through the work of anyone editing material for a web-site. Interaction design shows – and shows clearly and scientifically – that it is necessary to assume a gap between the way in which most users will understand the computer/software/web-site and its true nature.
This irony is not something that everyone would recognise, and factionalism can easily break out on both sides of the fence (meaning on the one hand, ‘reality’ and on the other ‘appearance’). A common problem for many people tasked with the job of propagating (or, worse, selling) a virtual experience is that web developers, computer programmers and software engineers become inured to their abstruse programming languages to the point where Java, or even the more prosaic HTML, can seem transcendentally self-evident. This kind of scientific purism starts from the assumption that people should adapt to, or make the effort to understand, reality in its most distilled and analytically unsparing form. This attitude finds cultural expression in the note of slight disdain that some ‘hardcore’ computer scientists have for Human-Computer Interaction as a discipline. The suggestion is that it is not a ‘real’ science, but some lesser half-bred shadow.
Conversely, for many simple-minded end-users, terminology that the computer-literate elite might recognise as ‘crude’ but necessary metaphor has, effectively, become part of reality (and is often not recognised as only part of ‘their’ reality). From this perspective, a ‘mouse’, for example, has simply become part of the shifting scene of modern language, as real and obdurate as any other domestic fashion.
The viewpoint of classic scientific rationalism is one way to look at these factions and potentially reconcile them. Knowledge, through this prism, is all the things that a computer clearly is: determinate, mathematical, or, in the concise phrase of the seventeenth-century luminaries, ‘clear and distinct’. This determinate and ‘clear and distinct’ way of looking at the operations of a computer clearly favours the computer programmer rather than the simple end-user. It shows that the signs and symbols that make up the outer shell of the user experience can be explained in a more analytically precise or more ‘technical’ way. In other words, a ‘mouse’ is an inadequate description of what it actually is. Or, in still other words, scientific rationalism vindicates the judgement that most people have an essentially thick-headed understanding of the machines that play a key role in their daily lives.
This ‘them and us’ (or dualistic and broadly Cartesian) view of things creates a neat distinction between ‘them’ (the not so clever ordinary user) and ‘us’ (the brainy computer scientist) inside of which everything can be explained. This almost implies that if only ordinary users made a little more effort they too would be able to grasp computing with the same sophistication as the boffins. And in this resurrected state the tawdry question of ‘usability’ would cease to exist (which perhaps accounts for the slightly haughty attitude towards Human-Computer Interaction).
But the fact that interaction design exists, and the fact that software giants, software engineers and humble corporate web-sites take it so seriously, suggests that this kind of scientific evangelism is a futile and fundamentally unrealistic aspiration. One of the observations that will strike anyone – like a web editor – who is suspended between the science and the way in which it appears to its users is just how misunderstood it is. Anyone with even a smattering of technical knowledge will soon appreciate that most users are either happy to skirt carelessly over the surface of the underlying science or will become confused by even the most elementary concept. In short, they do not even come close to the scientific rationalist’s model of understanding.
The same disparity between the natural state of humanity and its ever-expanding and more powerful technology applies across many different aspects of modern life. We all drive cars, but in all likelihood fewer people understand them than don’t. We are all consumers of energy, water and basic accommodation in different forms, but most of us require specialists to deliver them. These technologies may not be as complex as a computer, but they still push the boundaries of human knowledge and understanding to a point that goes beyond the natural resources of most.
But what a computer shows, more clearly than other examples of modern technology, is that the model of knowledge encouraged by scientific rationalism is at odds with the way in which most human beings will think, act, and interact by default. Computers provide a contained environment in which users can be seen not just to prefer but to need concepts and processes that are manifestly an idiosyncratic simplification of reality rather than its objective portrayal. Knowledge in the virtual world is not knowledge as such but knowledge designed specifically for humans with all their quirks and eccentricities.
Not only this, but the way in which technologies are simplified is increasingly as important as (if not more important than) the underlying technical process. The extent of technical knowledge and expertise is such that the market is saturated with products. But products that are useful and easy to use among the technologically illiterate are another matter. And it is the latter products that make the difference. To give a parochial example, a government organisation went out to tender for a content management system. All four products that made the shortlist used the same underlying technology. But the one that won the contract did so because it scored highest on the System Usability Scale.
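As an aside, the System Usability Scale is not a mystery: it is John Brooke’s published ten-item questionnaire, scored with simple arithmetic. Each item $s_i$ is answered on a five-point scale; the odd-numbered (positively worded) items contribute $s_i - 1$, the even-numbered (negatively worded) items contribute $5 - s_i$, and the total is scaled to a mark out of 100:

$$\mathrm{SUS} = 2.5\left(\sum_{i\in\{1,3,5,7,9\}}(s_i-1) \;+\; \sum_{i\in\{2,4,6,8,10\}}(5-s_i)\right)$$

A respondent who answers 5 to every odd item and 1 to every even item thus yields the maximum of $2.5 \times (20+20) = 100$; a product’s overall score is conventionally the average of these marks across its test users.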
This premium on the need to interpret technology for the user becomes even more intriguing when the interpretation is not only practical but also desirable. Apple are, perhaps, the best example of this, and are among the few in the world of technology who have managed to make the adjective ‘cool’ a common and legitimate part of their vocabulary. What makes this fetishisation of technology possible is not just what the technology does but, crucially, how it does it, so that, as Steve Jobs famously suggested of the iMac, we can’t keep our tongues to ourselves.
The left-field creative web or digital designer or developer is now almost a parody, but their legacy is written all over recent trends on the web; otherwise technologically obscure processes have been transformed by a sort of supine, pot-smoking vocabulary, so that sometimes your computer even talks to you as though it is stoned. Even serious analysts and commentators who occupy this ‘creative’ world use a language that aims to build bridges over chasms of ignorance by refracting the wet-wipe language of teenagers (‘that’s so 2007’). Twitter, Facebook, Bebo, MySpace and the riotous proliferation of other user-generated technologies have already gone way beyond useful technology, to the point where they have become cultural accessories and a key part of social idiom.
In other words, those who are serious about the development of new technologies have to think about much more than the technology. First, they have to ensure that the technology is accessible to the non-expert audience on the receiving end of it; second, if they are really pushing for success, they have to turn it into something that users not only can use, but want to use. These sibling movements both signal that modern technology has had to learn – possibly the hard way, or through a process of deduction – that sober-minded scientific fatalism needs to turn itself into a phantasmagoria of fairy tales if it is to compete with the whimsical weirdness of human behaviour.
A virtual metaphor
As noted, ‘real world’ metaphors are an important part of the user-centred design practitioner’s toolbox. These metaphors are written all over computing technology. The idea is that you take the characteristics of a familiar object from the real world and use those characteristics to explain the bit of technology that would otherwise remain mysterious to the user. Your Facebook homepage has a ‘wall’ on which – in subversive fashion – you can scrawl. E-commerce sites have ‘shopping trolleys’, and ‘wish lists’. iTunes has its own ‘store’.
But the incongruous relationship between the comparatively sophisticated science and the comparatively crude human being punching away pugnaciously at a keypad is itself a curious image to consider. It may be that computer science’s use of metaphor to make sense of machines is itself a useful metaphor to understand – or partly understand – how human beings make sense of the ‘real’ world.
To unpick the components of this image a little, the user interface of a computer programme or a web-site represents the way in which humans make sense of reality, and the layers of technology beneath the interface represent reality ‘as it truly is’. It is important to note, however, that the front-end of a user interface is not merely an appearance, and is not an act of total deception. It is rather a simplification, or a partial representation, of a more complex process. It is not, therefore, entirely inaccurate, simply less accurate. Neither is the degree of (in)accuracy incidental to the particular rational or scientific abilities of the individual; it is a consequence of the natural or intrinsic limitations and peculiarities of human thought and behaviour.
On this model, human beings can only make sense of the stuff and business of the world as well as grand philosophical subjects such as ‘meaning’, ‘truth’, ‘reality’, even ‘morality’ and ‘justice’ through the means at their disposal, or, in other words, through the more-than-quirky characteristics that define the horizons of humanity.
The metaphor does not imply that these subjects are only relative to these horizons. In fact it suggests the opposite. A complete mapping of the determinate operations of a computer is, in a sense, equivalent to metaphysical (and speculative) notions of absolute truth, or ultimate reality. Rather it suggests that human beings, ‘see’, understand, express and refract these notions in a way that is unique, and therefore limited, to their nature. To express this more anatomically, it could be said that the basic template of a human being is itself an instrument with which to grapple with such notions, and this instrument has at its disposal tools such as reason, observation, memory, imagination, strength, will and emotion etc (on this score, human beings and computers have at least ‘memory’ in common, though computers don’t have the same problem with car keys).
So is there any reason to accept this metaphor? The first thing to say is that if the answer to that question were at all self-evident, it would obviate the need for any such vindication. If we could account for this metaphor as an accurate metaphor (or a successful imagining of human nature and its relationship to reality) then the kind of incongruity it suggests would not exist. It presupposes a ‘boundary’ of ignorance intrinsic to human nature, or, to use mystical language, a ‘cloud of unknowing’ that must remain opaque. If the metaphor is to have any credence, therefore, it must be thought of as an act of speculative imagination: speculative because it invokes an image to make sense of something that ‘goes beyond’ or ‘stands outside’ itself.
That the image is speculative does not mean that it is an image plucked arbitrarily out of the air. Instead it is a piece of speculation informed by empirical – even scientific – observations. The suggestion is that by observing the typical nature and characteristics of human understanding and knowledge, there is reason to speculate that it stands in relation to ‘reality’ in a way that is similar to the relationship between the surface signs and symbols of computing and the complex bit of machinery they are designed to make intelligible.
The image shows that even inside the world of computing we quickly – much more quickly than we might like to think – run up against the fallibilities of human understanding. Computers are, of course, an entirely artificial, man-made area of science, but what reason is there to think that the real world is any less complex? Surely, if anything, it is considerably more complex? And it is not necessary to have a PhD in quantum physics to appreciate this; it is only necessary to read Bill Bryson!
Here is an extract from a publicly available article published in a leading scientific journal:
IL-17 (also called IL-17A) is the prototypic member of the IL-17 family composed of six cytokines, IL-17A–F. IL-17 is the hallmark cytokine of Th17 cells and, along with IL-17F, with which it shares the greatest homology, is also produced by γδ T cells, natural killer (NK) T cells, neutrophils and eosinophils …
This is certainly written in English, but, unless you are sufficiently expert in the field to which it relates, it is largely meaningless. Or, to put it another way, it is ‘technical’, and assumes ‘expertise’ that transcends the knowledge of many, if not most, people. It would be possible to collate and catalogue many different examples of science and technology confounding the natural intelligence of the man on the street, but really it is not even necessary to adduce examples from the cutting edge of science. Newton’s Principia would go beyond the abilities of most. Probably many would struggle to recall secondary school lessons on electromagnetism. And if we aren’t in large part ignorant of things like the human body and washing machines, why do we employ doctors and plumbers?
So while the world may very well exhibit a similar (maybe even the same) kind of rationality as a computer, the degree of rationality outstrips the rational ability of most human beings. The point here is not to contest the research and findings of science, but simply to suggest that there is an intrinsic mismatch between the complexity of these findings and the abilities or propensities of most ordinary human beings to understand them.
In recent years particularly, the efforts of scientists to educate and to translate their findings for a popular audience show that even science is beginning to recognise the gap. Books like Bill Bryson’s A Short History of Nearly Everything, and the fact that Oxford University employs a professor whose job is to explain science to the man on the Clapham omnibus, arguably suggest the kind of incongruity described by the metaphor.
That said, it is, no doubt, possible to explain this trend in science in different ways. Is it intended to make its findings more broadly persuasive and make us more rational? Is it intended to spark an interest in scientific enquiry among more people? Is it intended to unlock the proven potential for science to drive cultural, social and economic innovation and development?
The popularisation of science might well take aim at all these targets, and might even hit them all. But can it ever hope to eliminate itself by turning us all into scientists? Or does it exist to bridge the gap without closing it? The claim here is the latter and, moreover, that the popularisation of science is itself evidence for this. To break it down: it suggests, if only negatively, something fundamental or essential about human knowledge and understanding, namely that it is limited.
But what does it mean to say that human knowledge and understanding is limited? Is this to suggest that it is not scientific, that human beings are for the most part pre-scientific with the exception of a well-educated minority? It was common among the revolutionary luminaries of the seventeenth century to discuss classic philosophical problems with a bent of mind that may have confirmed such a view. The ‘ordinary’ pre-enlightenment human being, for example, looks at the sun and, led astray by their senses, thinks that it is only a matter of miles away; or looks at a stick in water and thinks that the water makes it bend. The enlightened human being, on the correct side of the brain, looks at the world through the illuminating power of reason, logic, evidence, and science, and can explain these appearances rationally.
In this case, in the same way that some dedicated computer scientists might sneer at Human-Computer Interaction as less than scientific, so too the outlook on life engendered by the rationalists of the enlightenment encourages the view that a) if only people were to look at the world rationally they would see it as it really is and b) those who don’t meet this challenge are worthy of derision. But the degree of complexity in professional modern science, like the degree of complexity behind the surface interface of a computer, is such that it overwhelms even a well-trained mind. So both are evidence for a near universal rate of failure measured against the enlightenment model of knowledge and understanding. It might even suggest that the outlook of scientific rationalism leads to a view of human beings as intrinsically ignorant and stupid, encouraging a kind of pathological self-loathing and anxiety.
The popularisation of science, like the place of Human-Computer Interaction in relation to computer mechanics and engineering, suggests otherwise. It suggests that it is possible to adopt an interpretation of the world that is rational, but simplified according to human needs, and so less rational than the ‘actual’ or ‘ultimate’ scientific explanation. And if that is the case, then it is wrong to characterise knowledge in dualistic and categorical terms, as either rational or ignorant. Knowledge instead becomes a matter of degree or extent. The ‘pre-scientific’, and ‘ignorant’ point of view is, on this score, simply a less analytically precise point of view, but not necessarily entirely inaccurate. If constructed in the right way, it can be as scientifically accurate as possible but also suited to the idiosyncrasies of human nature (or whatever nature applies).
This not only encourages an alternative way of thinking about specifically human knowledge, but also suggests a different way of thinking about science. It is a common trait of modern manners of thought to assume that human behaviour, animal behaviour or simply natural phenomena as they appear have a reductive, scientific explanation, or a ‘final’ explanation rooted in rationality. Psychology, sociology and economics, as well as the natural sciences, all adopt this explanatory typology. But according to the model of knowledge in the metaphor, these kinds of reductive scientific explanations are just more analytically precise, better informed, or more sophisticated explanations. Rather than being pitted against the implied ‘natural’ ignorance of most human beings, they have simply extended a perspective to a point which most human beings struggle to grasp. To stick with the favoured imagery of the enlightenment, they are like a more powerful torch that has managed to illuminate a surface area with a broader radius.
The difference here is between a superlative and a comparative, and the comparative suggests that, to continue with the image, there still remains an area – possibly a large area – shrouded in darkness. Science is not then a ‘final’ explanation, but a more technical, detailed, and precise explanation, that in its current, modern and (comparatively speaking) sophisticated form, is so technical that for many people it becomes unintelligible. But as a comparatively sophisticated form of explanation rather than a superlatively sophisticated form of explanation, it perhaps has much more in common with the ‘ignorant’ populace than some of its more intellectually supercilious champions might care to admit. On this model science is more like a play about a play, or a film about a film. Though it can recognise and map out the play, it is, itself, a meta-drama. Science, in short, has some of the characteristics of Sunset Boulevard.
As for human knowledge, the comparison clearly implies a need to adapt explanations creatively to human nature while keeping them sufficiently rational.
So what is a web editor?
To judge from the outside, and to caricature, modern journalism seems to fall broadly into two camps, which, for the purposes of this discussion, we will call ‘realists’ and ‘sophists’. The realists believe in the more traditional approach to journalism, which tries to report the story accurately by checking all the facts and maintaining an objective – or semi-objective – stance. This view, broadly speaking, assumes that journalists have a responsibility to hold up to scrutiny issues that affect the public domain. The sophists, through some ill-digested 1970s education in post-structural theory, maintain a kind of relativism which aims to construct every article or piece as a dramatic story that elicits gasps from the reader (or audience) and follows a logic of heart-stopping tension. Here, the facts, respect for the individuals concerned, and the conclusion or judgement of the article are all irrelevant so long as it entertains the reader (a philosophy that sits comfortably with a more mercenary approach to journalism).
The realists argue that there are a set of determinate events that must be conveyed. The sophists retort that ‘determinate events’ collapse under examination into an impossibly complex series of irreconcilable and ambiguous perspectives, each with no greater claim to legitimacy than any other; and therefore to claim journalistic integrity is a duplicitous (and possibly self-deceiving) masquerade.
If this debate is considered in view of the metaphor sketched out above, however, editorial decision making begins to look a little different. To recap: the nature of something like a web-site – quite apart from the content and focus of the site – means that web editors need to tailor the content to the way in which users typically approach reading and interacting with online material. The content and functionality need to be simple, intuitive, easy to skim and scan, broken down into digestible chunks of information, and to take as much advantage as they can of visual logic and visual structure. This editorial approach is necessary in view of the gap between the technical nature of a web-site and a user’s grasp of it.
If the metaphor holds true ‘the nature of a web-site’ (and the irony that entails) is a sort of prototype in miniature for the wider reality that words are trying to describe all the time. This prototype, considered in the right way, can also be used to account for the approach of the sophists and realists, without endorsing either.
The model of writing and editing for the web assumes that the underlying technology is too complex for most users, hence the syntax of ‘homepages’, ‘hyperlinks’ and ‘hero slots’. A sophist might go further by leaving a permanent question mark over a coherent notion of objectified reality, but nevertheless starts from the same view that a stable interpretation of reality recedes beyond the veil of understanding. The response of the sophist is disillusionment and the pursuit of rhetoric for its own sake in order to subvert pretensions to anything more substantial.
The realist, on the other hand, may end up in a state of self-enclosed autism if they pursue critical objectivity with meticulous attention to detail. They may adopt a precise language, but it may also become one to which only they and a small intellectual minority can relate.
Both approaches, if pushed to an extreme, debar the production of material that is ‘meaningful’ for its audience: the first because the apparent meaning is in fact a superficial, existential thrill, and the second because the interest in the material is indifferent to the ability of the audience to grasp it. But, despite drawing different conclusions, they are, arguably, dialectical cousins that descend from the same family of thought. The thing they both have in common is a view of knowledge that is apathetic to the lives of human beings. Both start by looking for objectivity: the sophist cannot find it and falls back on style, the realist finds it but at the expense of communication. In the terms of the metaphor, the sophist is producing a web-site that looks good but serves no purpose, and the realist has no need for a web-site because all they see is the underlying code.
Neither considers that knowledge can adapt to the nature of a subject (in this case a human). Neither, in short, can think about knowledge in anything other than categorical (or superlative) terms, with the result that those interrogating the knowledge become irrelevant. Neither, that is, do what web editors must do, if they are to produce a web-site that serves any kind of purpose.
Unlike the activity of the sophist or the realist, the practice of editing a web-site is driven by the nature of those interacting with it. It must construct an account of ‘reality’ that is in tune with the purposes, prospects and limitations of those on the receiving end of the account. It is an ‘artful’ or ‘creative’ representation of a ‘scientific’ reality. It may sacrifice analytical precision for the sake of something that is meaningful for the user, but the meaning it creates is ‘accurate’ in proportion to the nature – and, by extension, abilities – of the user.
In the case of web-sites, it becomes necessary to invoke imagery widely understood to represent the characteristics of more abstruse and complex mathematical ideas. Whether the world as it is in itself – or the world in its fullest and unified sense – is anything like the mathematical ideas that govern the operations of a computer remains to be seen, but in the same way it is necessary to write specifically for the web, so too it is necessary to create meaning for human nature. This, then, indicates a way of thinking about communication, meaning, and understanding that tries to represent an impossibly complex metaphysical reality in idiosyncratic rather than objective terms. Web-sites, it might be argued, are a metaphor for the representation of something like eternity in the vocabulary and grammar of time.
Religion: the ‘web editor’ of the world?
This model of describing and explaining the world is driven by an immediate and pressing sense of purpose that entails the different directions that human nature takes. It is a skill or an art that crucially must grasp the context of the explanation first (and by grasping the context we mean understanding the object of explanation and to whom the explanation applies). Applied to journalism, this makes the job of the journalist particularly challenging (though potentially rewarding); not only must they have some understanding of the detailed explanation, but they must understand who they are explaining it to well enough to translate it into terms their audience can understand.
Editing web-sites, popularising science, and editing and writing newspapers are three among, no doubt, many examples it would be possible to cite. But the suggestion is that this model applies not only to examples within human nature, but to human nature as such. The difference here, is that human beings – by their nature – are not in a position to see (except perhaps to glimpse) the complete delineations of what it means to be human. They can never stop being human to look at themselves from the ‘outside’.
Speculation about human nature in this quintessentially metaphysical sense is traditionally the preserve of philosophy and, perhaps more pre-eminently, theology. More particularly, the kind of sacramental use of figurative language implied here as a way of thinking about human nature and its relation to ultimate truth has a long tradition going back to the classical world and the scriptures, through patristic theology and into medieval thought. Even in the Old Testament – though it is a grand simplification to say it – many of the most influential post-exilic editorial voices were attempting to construct a human-centred narrative (complete with cosmological mythology) that conveyed the identity of a people in metaphysical and theological terms, which is to say in terms of their fundamental nature and its relationship to their God.
The Greek tragedians, similarly, took the familiar stories from earlier epic poetry and used this popular idiom – also human-centred but rooted in Greek culture – to explore metaphysical and moral as well as social, political and psychological issues. Plato, in the manner of a Pythagorean mathematician, adopts a more clinical use of figurative language, poetry and mythology, describing his metaphysics through dramatised debates and dialogue with rhetorical flourishes, such as the famous cave allegory. (Plato is perhaps particularly interesting from a modern point of view because he is among the first to use anthropocentric language in a more explicitly philosophical and pseudo-scientific way, with the enticing suggestion that mathematics could account for this poetic philosophising more adequately.)
It is perhaps a little mono-cultural – particularly in a multi-cultural age – to focus on Christianity, but it is in a sense the defining characteristic of the gospels that they are a human story. They are written from the unadorned raw elements of bodily existence. It is barely possible even to summarise this characteristic of Christianity, so replete is the New Testament with language that draws explicitly on ‘real world’ imagery in order to describe, or rather intimate, the ‘invisible’ and ‘spiritual’. Though, if pressed for a summary, the most significant and self-evident example in Christianity is clearly and passionately embodied – literally embodied – in the figure of Christ, or the idea of God incarnate in the limitations and finitude of the world. Perhaps, then, the sacramental use of language as a meditation on human nature and its relationship to ultimate ends is, finally, the stated aim and meaning of ‘The Word’ or ‘Logos’?
It is a near convention of modern secular and liberal habits of thought to think about the speculative, cosmological, and mythological character of religion in reductive terms. The notion of the resurrection, miracles, transcendental love, predestination, the creation and prelapsarian man, spirits, angels, and demons, to the extent that they aren’t simply seen as fanciful ‘fairies at the bottom of the garden’ are explained psychologically, sociologically, or biologically as satisfying a more primitive – but explicable – instinct.
But if religions – or some religious soteriologies – adopt a model of explaining the relationship of humanity to reality that is similar to the way in which humans make sense of things like computers, then there are two important points to make. First, this implies some sort of ‘reductive’ explanation for religion (meaning that religious explanations of the world are, by implication, susceptible to more analytically precise and objective explanations). But, second, given that, on this account, these soteriologies are a way of helping human nature identify and explain itself in relation to the beyond, it becomes an a priori feature of any such ‘reductive’ explanation that it cannot be accessible to human intelligence. It is not, therefore, really ‘reductive’ or ‘an explanation’. It only has the characteristics of a reductive explanation metaphorically speaking.
This way of thinking about religion is a kind of reverse flip of the analysis in writers such as Freud and, more recently, Dawkins. In this case, the phenomena of religion can be explained only through a unified metaphysic (a notion, which, among its more reflective mystical exponents, forms the contemplative focus of religious activity).
Theology and science, moreover, where the prevailing secular reductionism of today pits one against the other, become united in pursuit of the same goals (with the important difference that theology plays the part, so to speak, of the web editor by translating reality into a human story, and science that of the web programmer or geek who defines – or attempts to define – the broader infrastructure).
This ‘one-nation’ – or fundamentally theological – view of knowledge (or ‘Knowledge’) and its study through different disciplines is, however, largely at odds with modern customs and habits of thought and study. To take the raging debate between science and religion (though the point could apply just as easily to any discipline studied at a liberal university), it has been repeatedly construed as an either/or contest. Since the 1960s there seem to have been prize-winning fighters on either side of the fence: CP Snow vs FR Leavis; and today R Dawkins vs … well, pretty much the entire global history of religion (a contest that perhaps lacks perspective?).
The result is necessarily divisive, and it is divisive because the different disciplines are thought of as rival descriptions of the same reality from the same perspective (when in fact they are descriptions of the same reality from different perspectives). Such contests are divisive because they rest on a categorical or dualistic view of the world. And, if the debate between religion and science is symptomatic of the way in which – as a culture – we think about Knowledge, then we are, ironically and paradoxically in view of our pretensions to liberalism, a divisive culture.
The inevitable fallout from this state of intellectual warfare is the fragmentation and dissolution of any coherent ‘human-centred’ meaning or story, precisely because the ‘human centre’ is not considered. The modern ‘liberal’ state in this sense has ‘merely’ found a way in which different points of view can co-exist without descending into a conflagration (rather than discovering the constructive relationship between those perspectives and any intrinsic value they may have). (Though it might be argued that historically the liberalism of today is a much greater achievement than this ‘merely’ suggests and not something to be dismissed lightly.)
In each of the many different ways in which academic study – or simply different trades and endeavours – unpack and explore the world, it may be possible to construct some kind of meaningful description; but that meaning is increasingly forensic, technical, narrow, and lacks the broader vision and imagination to reconcile itself, or understand itself, in relation to other perspectives. Without the binding contractual and nominal logic of a liberal state, liberal societies might easily descend into simple atavistic violence as they increasingly lose the means of understanding their common humanity.
Perhaps the signs of this fragmentation are already there? Perhaps the shallow nature of celebrity and tabloid culture, binge drinking, rising levels of obesity and antisocial behaviour are merely the frustrated expression of lifestyles that increasingly lack any intuitive or easily accessible meaning? If that is the case, and the thrust of this argument is broadly correct, then it might be possible to interpret the modern state of play as crying out for a story similar to the narrative of Christianity, or a narrative that attempts to understand the things human beings have in common – or ‘human nature’ – in relation to reality in its most inclusive and all-encompassing sense.
It is important to emphasise the ‘similar to’, not least because Christianity, though it is an attempt to embrace universal and absolute truths, is, of course, the product of a context with its own distinctive identity and history. This identity and history has – as its detractors repeatedly observe – become an excuse (it really is nothing more than an excuse) to behave divisively. The distinctively Christian mantle among fundamentalist (though also among more mainstream) Christians has become a label through which to alienate those who do not share the faith. This, among other things, gives ammunition to the now slightly hackneyed argument that ‘religion causes war’.
Undoubtedly people have fought many wars in the name of religion, and undoubtedly a dogmatic subscription to religious values has led many people to behave in an intolerant, even malevolent, fashion. But this is, of course, ironic given that Christianity in its origins was a Jewish movement designed to spread a universal message beyond the Jewish community. It developed as a message that sought to overcome the barriers of exclusion, including what we might now think of as religious and cultural identity.
If, then, there is an appetite for something similar to Christianity it must be something that does not see itself as the basis for divisive behaviour (an attitude that many modern expressions of Christianity lack the vision to accomplish, but which is nevertheless a posture consecrated by its most orthodox theology).
Another related reason for the ‘similar to’ is that, as a religious tradition with its own identity and history, Christianity is only one among many religions. It is one among many different ways in which human beings have tried to understand themselves in relation to the rude assertions of the reality they stare in the face daily, yearly, and historically. If religion is construed as, first, a distinctively human account of things and, second, one that is therefore idiosyncratic, then surely there is no reason why the variegations of human nature should not, legitimately, account for that reality in different ways? In the same way that the scientist and the believer are describing the same thing in different ways, so too are different types of believers. As the old clichéd metaphor has it, the lamps are different, but the light is the same.
Perhaps then, all that needs to be said is that the fragmentation and dissolution of popular self-understanding and purpose means that there is an open call for some kind of religion (or perhaps mutually compatible and tolerant religions).
Even to take a superficial tour of the world’s religions and their history is to see immediately how creative, artful, inventive, ingenious and full of vitality they are. What is characterised here in dry secular terms as ‘the way human beings make sense of their nature in relation to reality’ takes on sometimes stunningly beautiful and enchanting forms. Whether it is in the form of carefully crafted popular rituals, traditions and customs, artwork and architecture, myths, literature, poetry and, more recently, film, or the ideas hewn out of human nature by theologians, religion cultivates human nature in rare and extraordinary ways that make the secular and pornographic literalism of the present look prosaic.
Here, briefly, is an extract from the opening canto of Paradiso, the third book of Dante’s The Divine Comedy:
‘O divine power, if you lend yourself to me
So that the ghost of the blessed kingdom
Traced in my brain, is made manifest,
You will see me come to your darling tree
And there crown myself with those leaves
Of which the matter and you will make me worthy.’
Dante was a poet, but even these two verses show that as a poet he puts his language to work. He contracts his art form to the employment of higher ends. The ‘ghost’ of the ‘blessed kingdom’ and the ‘darling tree’ have a sacramental purpose, just as the whole of Dante’s vivid journey from Inferno to Paradiso has a sacramental purpose. They are words and images that are not purely aesthetic or purely analytical but are constructed to create a metaphysical tonality and harmony that resonates with human nature. Put simply, they talk about things that cannot be talked about, in terms human beings can understand.
Dante’s journey was told in a vocabulary that the High Middle Ages would have understood, but it might look to contemporary audiences slightly more arcane, just as some of the imagery in the Bible or other scriptural texts can look arcane. But the sacramental character of this language suggests that it should not be treated peremptorily. On its own terms, it acknowledges that it is only trying to describe notions that are peremptory and universal (but transcendent and therefore opaque) in the language of time (which by its nature decays). Religion, as a response to human nature, therefore requires constant re-invention within a tradition of belief and thought (this, though I may be mistaken, was the original idea of a ‘church’).
It is debatably the stagnation of religious language – perhaps frozen by the more dualistic and categorical habits of thought the modern world appears to have evolved – that makes religion seem irrelevant. Few writers today take recognisably modern imagery and employ it in this sacramental way. One of the few, though he is subject to fiercely contested interpretation, is the Jewish Czech novelist Franz Kafka. His novel The Trial uses imagery to which anyone who has worked in an office can relate. He is famous for decrying the bureaucratic character of modern life. But one interpretation of this novel sees the elusive jurisdiction of Joseph K’s prosecuting authorities as a metaphor for the moral law, and modern man’s response to it. Here, then, is a use of sacramental language constructed in terms that are familiar to the modern world.
Among the more fascinating of modern filmmakers are Joel and Ethan Coen. Many of their films betray a familiarity with the tradition of Hollywood film. Some clearly draw on the established aesthetic of film noir (their 1990 film Miller’s Crossing is a near pastiche of Stuart Heisler’s 1942 adaptation of Dashiell Hammett’s The Glass Key), and others on the screwball comedy. But many of their films also have a moral or metaphysical cast of mind, as though, like Kafka, they are using a recognisable idiom or mythology (in this case popular film, mainly from the 1940s) to articulate something more fundamental.
It is, of course, a little presumptuous to assume that the work of Kafka or the Coen Brothers (and it may well be possible to find a few other lone voices who appear to be communicating in a similar way) fits neatly within a religious tradition; but the methodology they employ, if this observation is correct, suggests how a sacramental art form might work for a modern idiom.
To return to the wider point, what religion essentially can offer is a description of reality that engages human beings in terms they can understand and at the level of their most fundamental character and interest. Christians, Jews, Hindus, Muslims (and more) are, in other words, the ‘web editors’ of the world.
The need for what might be called this ‘religious art form’ (in another age, it would simply have been called ‘theology’) is all the more paramount given the fragmented dead-end at which communication in the modern world has arrived. At a time when popular communication, in the west at least, takes the more insubstantial form of things like Grazia magazine or the ITV news, and more analytically sophisticated endeavours are broadening the horizons of knowledge but within self-enclosed and inaccessible communities, religion and purveyors of religious meaning are confronted with fresh and exciting challenges.
The decay of popular meaning means that the previously fertile craft of religion, like that of someone tasked with translating mathematical algorithms into a comprehensible experience, needs to revitalise itself. Meaning and Knowledge, in some form, need to yield to human intelligence.
Or as Pseudo-Dionysius put it:
‘The Word of God makes use of poetic imagery when discussing these formless intelligences but, as I have already said, it does so not for the sake of art, but as a concession to the nature of our own mind.’