Maxims from the Chair

The Do's
  • Do something old in a new way
  • Do something new in an old way
  • Do something new in a new way: whatever works . . . works
  • Do it sharp, if you can’t, call it art
  • Do it in the computer—if it can be done there
  • Do fifty of them—you will definitely get a show
  • Do it big, if you can’t do it big, do it red
  • If all else fails turn it upside down, if it looks good it might work
  • Do bend your knees
  • If you don’t know what to do, look up or down —but continue looking
  • Do celebrities—if you do a lot of them, you’ll get a book
  • Connect with others—network
  • Edit it yourself
  • Design it yourself
  • Publish it yourself
  • Edit; when in doubt, shoot more
  • Edit again
  • Read Darwin, Marx, Joyce, Freud, Einstein, Benjamin, McLuhan, and Barth
  • See Citizen Kane ten times
  • Look at everything—stare
  • Construct your images from the edge inward
  • If it’s the “real world,” do it in color
  • If it can be done digitally—do it
  • Be self-centered, self-involved, and generally entitled and always pushing— and damned to hell for doing it
  • Break all rules, except the chairman’s
The Don'ts
  • Don’t do it about yourself—or your friend—or your family
  • Don’t dare photograph yourself nude
  • Don’t look at old family albums
  • Don’t hand color it
  • Don’t write on it
  • Don’t use alternative processes—if it ain’t straight, do it in the computer
  • Don’t gild the lily—AKA less is more
  • Don’t go to video when you don’t know what else to do
  • Don’t photograph indigent people, particularly in foreign lands
  • Don’t whine, just produce

The Truisms

  • Whoever originated the idea will surely be forgotten until he or she is dead—corollary: steal someone else’s idea before they die

  • If you have to imitate, at least imitate something good

  • Know the difference

  • Critics never know what they really like

  • Critics are the first to recognize the importance of that which is already known in the community at large

  • The best critics are the ones who like your work

  • Theoreticians don’t like to look—they’re generally too busy writing about themselves

  • Given enough time, theoreticians will contradict and reverse themselves

  • Practice does not follow theory

  • Theory follows practice

  • All artists think they’re self-taught

  • All artists lie, particularly about their dates and who taught them

  • No artist has ever seen the work of another artist (the exception being the post-modernists, who’ve adopted appropriation as another means of reinventing history)

  • The curator or the director is the one in black

  • The artist is the messy one in black

  • The owner is the one with the Prada bag

  • The gallery director is the one who recently uncovered the work of a forgotten person from his or her widower

  • Every gallerist has to discover someone

  • Every curator has to re-discover someone

  • The best of them is the one who shows your work

  • Every generation re-discovers the art of photography

  • Photography history gets reinvented every ten years

  • New galleries discover old photographers

  • Galleries need to fill their walls—corollary: thus new talents will always be found

  • Gallerists say hanging pictures is an art

  • There are no collectors, only people with money

  • Anyone who buys your work is a collector—your parents don’t count

  • All photographers are voyeurs

  • Admit it and get on with looking

  • Everyone is narcissistic; anyone can be photographed

  • Photography is about looking

  • Learning how to look takes practice

  • All photography, in the right context at the right time, is valuable

  • It is always a historical document

  • Sooner or later someone will say it is art

  • Any photographer can call himself an artist,

  • But not every artist can call himself a photographer

  • Compulsiveness helps

  • Neatness helps too

  • Hard work helps the most

  • The style is felt—fashion is fad

  • Remember, it’s usually about who, what, where, when, why, and how

  • It is who you know

  • Many a good idea is found in a garbage can

  • But darkrooms are dark . . . and dank, fuhgeddaboudit

  • The best exposure is the one that works

  • Expose for the shadows, and develop for the highlights

  • Or better yet, shoot digitally.

  • Cameras don’t think, they don’t have memories

  • But digital cameras have something called memory

  • Learn to see as the camera sees; don’t try to make it see as the human eye does

  • Remember digital point and shoots are faster than Leicas

  • Though the computer can correct anything, a bad image is a bad image

  • If all else fails, you can remember, again, to either do it large or red

  • Or, tear it up and tape it together

  • It always looks better on the wall framed

  • If they don’t sell, raise your price

  • Self-importance rises with the prices of your images on the wall

  • The work of a dead artist is always more valuable than the work of a live one

  • You can always pretend to kill yourself and start all over.

  • Good work sooner or later gets recognized

  • There are a lot of good photographers who need it before they are dead

  • If you walk the walk, sooner or later you’ll learn to talk the talk

  • If you talk the talk too much, sooner or later you are probably not walking the walk (don’t bullshit)

  • Photographers are the only creative people who don’t pay attention to their predecessors’ work—if you imitate something good, you are more likely to succeed

Viaggio in Italia

Catalog essay for Luigi Ghirri

Exhibition, Julie Saul Gallery, 2001

It is always great folly to write about photographs, for the experience of them always transcends description. This is particularly true of really enigmatic photographs such as those of Luigi Ghirri. Like all good art, Ghirri’s photographs are not about specific things but transfer into their content the process by which they were made. Thus I cannot address the photographs individually but instead ask the viewer to consider them carefully, particularly in their relation to each other. The statement made by a single image can never be as powerful as that made by an entire body of work. This is particularly true of Ghirri’s oeuvre, as he photographed in sequences and patterns, making the meaning of the photographs the synergistic experience of several themes.

Ghirri’s subject matter, for lack of any other way to describe it, is the travelogue of Italy in the latter-day twentieth century. As Heinrich Heine put it in his Journeys Into Italy, “there is nothing so stupid on the face of the earth as to read a book of travels in Italy—unless it is to write one.” Ghirri reminds me of Heine; he mocks his own Viaggio in Italia, disclaiming the wonders for the pedestrian delights.

For me, Ghirri represents in his persona and his remarkable but short-lived life all that is wondrous in Italian culture. I first met him over twenty years ago, when he guided me in my own photographic adventures through Italy. He was then, as he symbolically is now, the catalyst for serious photographic activity in that country. He introduced me and other foreigners interested in the activities of our fellow photographers to a large number of young, energetic image makers who, like Ghirri, were dissecting the physical and cultural topography of their environs.

Like a Don Quixote, Ghirri traveled relentlessly by car or train around his country, madly in quest of his own inspiration but also of a livelihood, which he made through freelance commissions, consulting jobs, and curatorial work. I saw him unshaven, elegantly wrinkled in the way that only an intellectual Italian can be. Gesturing about on an unrealistic time schedule, he was reflective of the frenetic incongruities he depicted. We were good friends, simpatico and totally in sync without understanding a word of each other’s language. He was an unfailing energy, seriously snapshooting to the end.

In the eighties, with my fellow writer Luigi Ballerini, I embarked on a visual and literary look at Italy in my own book Italy Observed. It was Ghirri whose work opened the doors to the rich contemporary photographic work ignored by the American cognoscenti (Mimmo Jodice, Olivo Barbieri, Gabriele Basilico, and many others). While even today Ghirri’s work goes largely unsung, it remains as significant as that of his better-known American contemporaries (Stephen Shore, William Eggleston, Robert Adams). Perhaps in the decade and a half when Ghirri worked, curatorial interests overlooked wit for the detached, dispassionate, cool eye. Ghirri’s work, though manifesting some of those same photographic interests, anticipated something else. His concerns were postmodern before those words were bandied about in photographic circles. He believed not in photographic truths but in the act of photographing as a social phenomenon, as a means of deconstructing the landscape and reinventing a hyperrealistic space. Remarkably, his work also foretold a facet of another genre to come. His images are diaristic notations of a passage through typologies that reveal a kind of collector’s mania.

Unlike many of today’s obscurely narcissistic works, however, Ghirri’s does not require his interpretive presence for one to understand the cultural life and muse of post-modern Italy which drove him.

Luigi Ghirri saw the multiple personalities of Italy through the signs of its many-layered civilizations. Ballerini and I broke Italian culture down into six categories, each of which is archetypally represented in Ghirri’s photographs and can be well understood through them.

Fabulous Graces is his portrait of the enduring forces that shape Italy’s genius loci. This Figured Myth is Ghirri’s expression of a country suffering from the insecurity of a modern and now unisolated culture. Shattered Illusions is the irreconcilable rift between myth and its erosion in his photographs. Extant Realism is that hyper focus in Ghirri’s photographs that is but one face of illusion’s coin. Resurrected Legends is Ghirri’s concern that his imagination can only expand as he continued looking into Italy’s layered archeology. Eternal Charms is the paradoxical ethos of Italian daily life. The faces of the coin are indistinguishable in his images.

Digital Humanism

Published in The Education of an E-Designer, Steven Heller, ed., Allworth Press, 2001

As members of a thinking community, we must accept this premise: we are no longer anticipating a revolution. It has already happened. It is time to build on its promise, transcend the inevitable losses, and become more comfortable, more human, with the change now wrought.

This revolution has created the possibility of reinventing ways of interacting, thinking, and creating that were lost in history. This is manifest in the advent of the digital computer and its accompanying methodologies, which give unprecedented new opportunities for working in ways that emphasize relationships between bodies of knowledge and human minds. The computer is valuable in its ability to enable us to reconceptualize our relationship to knowledge, and to organize it, rather than merely accumulate information. The methodologies of the computer allow us to share a commonality of human expression that crosses disciplines. If approached openly by thinking people who hold the humanist tradition dear, they allow a means for creativity which will enable us to reinforce that which makes us human. The great achievements of man lie in the quest to expose the unseen, and the computer’s value lies in its ability to further these achievements.

The ways of working in the digital world, however, are not new, as we shall see. Indeed, precursors of multimedia and hypertext have been around for centuries. The present strengths of the computer, its speed, flexibility, and retention of fact, only enhance what has already been embedded in the constant course of human intelligence—the desire to create new meanings through relationship.

We posit a new creative individual, the “creative interlocutor,” a navigator of associative trails of thought and resource, who enables others to freely and creatively manage their human interests. This individual is one who is integrated: his creativity functions as an organic part of society, and he acts to connect for the common good. The creative interlocutor is also an integrator in his ability to negotiate the disparate fields of human knowledge and bring them together in previously unimagined ways. In so doing, he enables others to further their creative potentials.

Herein we will make the case that technology has always aided, rather than hindered, human expression and creativity. Human beings, however, have always had to overcome an initial hesitancy, whether toward the telegraph or the computer. Henry David Thoreau remarked: “We are in a great haste to construct a telegraph from Maine to Texas. But Maine and Texas, it may be, have nothing important to communicate.” Clearly, today one does not doubt the humanity of a grandmother in Maine who talks to her granddaughter in Texas. What we lament is the loss of content in that conversation.

We seek to negate the self-fulfilling prophecy engendered by entrenching interests that lament the loss of their primacy by blaming the inhumanity of the technology. You can’t touch it, you can’t read in bed, it hurts my eyes, and so forth. These regrets and fears, like all others, are inhibiting. All too often they segregate the minds of humanists and artists whose creative input is vitally needed in the implementation of this new technology. The irony is that this feeling unnecessarily reinforces the power of the technocrats who then direct the design and implementation of the technology in a self-promoting way. Ask not what the computer can do for you, but what you can do for the computer.


The computer has value only as it enhances that which makes us human. Most likely this is our ability to learn, or rather to learn how to learn: the knack to order, manage, and reconfigure that which we know. Our humanity lies in our ability to transmit to one another, allowing others to gain access to successful formulations and articulations that further our notion of being. This is what builds culture: the accumulated conceptual riches brought down through the history of civilization.

We use the Liberal Arts1 to understand these riches. They treat the fields of knowledge in a balanced and equal manner, emphasizing the commonality of human experience and its expression within its diverse fields. A student of the liberal arts creates meaning by weaving a nurturing blanket from the common threads that hold the fields together, rather than by focusing on the seams which set them apart. This balance between fields of knowledge, and search for commonality is precisely what is furthered by a judicious use of multimedia digital technology.

Thinkers of the Enlightenment rediscovered patterns of thinking that today are embodied in the technology. Francis Bacon followed the Renaissance masters as a model of the creative interlocutor, connecting the spirit of the Enlightenment with the great Age of Reason. Through his methodology of inductive reasoning, he sought to free intelligence from dogma that constrained and limited our understanding of the greater rational scheme of the world. In Novum Organum, published in 1620, he argues not only for scientific methodology, but also for its integration with the arts and the humanities. In inductive reasoning, which is the accumulation of information and the detection of patterns therein, lies a commonality of procedure that dispels notions of a priori preconception. His philosophies opened the field of human inquiry to an ever-expanding body of knowledge. Francis Bacon’s life, rooted in philosophy, politics, and the creative art of writing, is exemplary of methodological inquiry furthering the connectedness of our human interest.

Maria Sibylla Merian (1647–1717)2 was the visual arts analogue to Bacon. Through her use of the evolving technologies of optical magnification and mechanical reproduction, she was able to further humanist values and the ideals of the Enlightenment. Born to a family of printers and publishers, she took at an early age to observing and sketching insects. She would take the observational skills learned as a child and go on to publish two major works, Raupen and Metamorphosis, both editions of copperplate prints. In these works, she depicts the insect and plant life of Europe and Surinam for the emerging intellectual class of the period. Merian was unique among botanists of her age. She depicted insects and plants not as specimens, but rather as creatures intricately and intimately involved in the cycle of life. She was not interested in the then-conventional classification schemes or in “cabinets of wonder” that present sterile specimens. In fact, she told one potential collaborator to stop sending her dead insects: she was only interested in “the formation, propagation and metamorphosis of creatures.”3 Prior to the seventeenth century, our understanding of the world was formed by a combination of myth and doctrine. During the Enlightenment, the West found a new fascination with the real, and developed ways of thinking and technology to explore the world. Merian was inspired by the new optical technologies of her time; the compound microscope came about in the 1660s, and Athanasius Kircher published his book Ars Magna Lucis et Umbrae, which discussed the camera obscura as a tool for observation and illustration. In her imaginative use of these tools, Merian was an artist who responded to Enlightenment discourse about knowledge and the natural world, and effortlessly crossed boundaries. The fruits of scientific methodology fathered by the likes of Bacon, Merian, and the great thinkers of the Age of Reason brought forth the Industrial Age.
In this new age, the ever-expanding fields of knowledge required specialization at the expense of more universally learned individuals.


The idea that one field might enrich another is also not a new one. Though it seems to be forgotten by the over-specialization emphasized in our learning institutions, the concept and practice of what is currently termed multimedia is an age-old notion. Multimedia is not suggested merely by technological advancement, but rather it is grounded in fundamental human practice that predates the invention of the computer by thousands of years. Early uses of multimedia were cross disciplinary in an unselfconscious way. The advent of the computer did not create the technical tangle of multimedia, but rather manifests a pre-existing need in our culture for a more democratic, universal and diverse way to communicate.

We can see multimedia in the burial rituals of the ancient Egyptians, who made no demarcation between the media employed in the great technology of the pyramids and their elaborate burial rituals. These burial sites combined elements of architecture, writing, sculpture, and, during the rite, even music and performance, all for the purpose of captivating and mystifying the laity under the dominance of their rulers. In the Middle Ages, the prevalent form of multimedia was at the same time a form of mass communication. The cathedral communicated the awe-inspiring Christian spiritual doctrine which was the dominant means of rationalizing human existence. The message was made stronger by its embodiment in a variety of media stimulating the senses: visual (stained glass and statues), sound (music and hymn), touch and taste (performance and mass), and smell (incense and myrrh). Writing itself was the means for codifying the knowledge held in the cathedral, the knowledge to sort out the patterns of our existence, to know the unknowable.

All of these technologies were beyond the reach of the ordinary man, since books were tremendously expensive to produce and few could read. The expense and duration of constructing a cathedral made it an option only for the wealthy. It was of course not portable, so it remained in a central location, accessible only to those in its immediate vicinity. Due to these inherent, and perhaps intentional, constraints, knowledge and, thus, power were concentrated in the hands of the theocracy. It was not until the advent of the printed book that the quest for knowledge could become a part of a universally inclusive culture. Yet, printing came with a price: a devaluation of multimodal communication.

Victor Hugo comments further on the advent of printing, its narrowing of our field of expression, and the dominance of the word over the image in his nineteenth-century novel The Hunchback of Notre-Dame. A character in his novel, a priest in fifteenth-century France, directly after the invention of movable type, compares the newly invented book to the cathedral and states, “this will kill that”: the book will kill the cathedral. Yet, it did not. Hugo’s phrase also refers to the conflict between the text of the book and the multimedia imagery of the church. By the nineteenth century, the text had become dominant as a means of discourse. For a century to follow, the word, through the great dissemination of the written text, was the primary source for creative inspiration. If nothing else, it allowed for the distribution of descriptive pornography and a stimulation that gave rise to Modernism.

But all was short-lived. In the twentieth century, Marshall McLuhan, in his book The Gutenberg Galaxy, foresaw the rise of the image, empowered by global visual media such as television. He envisioned “the civilization of imagery,” wherein the word is no longer the sole stimulating force in the imagination. Today there is an unanswerable conundrum: which is it that stimulates the imagination first or more, the word or the image? The computer doesn’t care, because it’s a multimedia cathedral!

Predictably, the phrase “this will kill that” was repeated with the invention of photography, and is all too often heard again today as we experience the digital revolution. Much in the same way that the text of the book threatened the multimodal cathedral, or photography’s imagery that of painting, the computer now threatens the book. Likely, there will be a co-existence in the media. The book will likely not disappear, but will inevitably change in function and meaning, as did painting. Furthermore, the computer offers us another Renaissance in our extensions of creative possibilities through the coequal distribution and interconnectedness of age-old multimedia. The Web is an ever-expanding territory of thought, commerce, and entertainment.

Obviously, there is no doubt that technology relieves us of burdensome tasks, whether it is the welding of metal or of numbers or of images, or of all of them together. But have we allowed it to free us in the greater pursuits of our humanness? Perhaps the blame lies not with technology but with our systems of learning.

All too often today, intellectual ideas are treated as chattel property whose purpose remains locked in the discourse of the “knowing” rather than serving the common good. This notion segregates us from our commonality of intelligence and unravels with technobabble and jargonization the very fiber of our humanity. Pre-Enlightenment myth returns to these forms. Specialists sequester themselves in monasteries of learning, untouched by the great unwashed masses. Something medieval is happening again.

Is it not astounding that at Harvard University, as recently as 1989, the late great Italian poet Italo Calvino needed to remind his audience of what should have been evident in the liberal arts ideal: Creative visualization is a process that, while not “originating in the heavens,” goes beyond any specific knowledge or intention of the individual to form a kind of transcendence. Calvino stated that not only poets and novelists deal with this problem, but scientists as well. “To draw on the gulf of potential multiplicity is indispensable to any form of knowledge. The poet’s mind, and at a few decisive moments the mind of the scientist, works according to a process of association of images that is the quickest way to link and to choose between the infinite forms of the possible and the impossible. The imagination is a kind of electronic machine that takes account of all possible combinations and chooses the ones that are appropriate to a particular purpose, or simply the most interesting, pleasing, or amusing.”4


Earlier in the twentieth century, John Dewey, in his pragmatism, advocated an educational system which would recognize the common humanist thread within experience, communication, and art. In his analysis of the Greek Parthenon he noted:

The collective life knew no boundaries between what was characteristic of these places and operations and the arts that brought color, grace and dignity into them. Painting and sculpture were organically one with architecture, as that was one with the social purpose the buildings served. Music and song were intimate parts of the rites and ceremonies in which the meaning of group life was consummated.5

We ought not to have to remind today’s thinkers of his philosophies, and yet we find we have to over and over again. Dewey sought to recover the continuity of aesthetic experience and normal processes of living through proper education. All art is the product of interaction between living organism and environment, an undergoing and a doing which involves a reorganization of actions and materials.6 Aesthetic understanding must start with, and never forget, that the roots of art and beauty lie in basic vital functions. Herein is an echo of Bacon’s earlier notion that all pattern is of the “machine of God.”

Marvin Minsky, one of the founders of modern computer science, in a like manner has portrayed the mind as a society of tiny components forming a magnificent puzzle of evolving imagination. In his book The Society of Mind, he cites Papert’s principle, the notion proposed by Seymour Papert regarding mental growth, wherein Papert theorized that intellectual progress is based not simply on the acquisition of new skills, but also on the acquisition of new administrative ways to use what one already knows.7 Our conception of the computer as an art-making and communication device is just that—a tool which fosters and encourages the creative re-administration of information.

Dewey envisioned an educational system which imparted pragmatic information without elitism. In order to allow education to become a tool that enables humanity to cultivate and reorganize our work and culture, we must abandon authoritarian methods of educational practice, where the teacher is the endowed disseminator of privileged knowledge. Humanists must remember that the computer is a tool of multimedia communication between the source of information and the user, without giving authority to the selected few. This communication becomes an ongoing ebb and flow of escalating meaning/communication which engages and empowers the inquisitive user. As a tool for art-making and scientific thinking, it is unique, and allows us the potential to realize Dewey’s vision. More than at any other time in history, it is important to educate students with tools, both technical and intellectual, to formulate new patterns between the details of knowledge rather than to expect them to accumulate information like books on a shelf.8 Cyber communication must be made to be the intelligent extension of human capability for new discovery. Communication is education.


Whether communication takes the form of vocal utterances, ink on paper, or modulation of radio waves, the intention has always been the transfer of meaning from one individual to another. This creates an image that will convey an idea. It is in our humanism that we attempt to make manifest some facet of experience/content and communicate it to another person or persons.9

Until now, the medium has determined both the audience for the message and its destination. Thus, oil paintings were destined for the museum, text for the printed page, music for the radio. Subcultures have grown up around these destinations, and these subcultures have become insular and self-referential. Yet the separations are artificial, imposed by the restraints of the technology and mostly by the lack of vision of those working within politically defined fields. These boundaries between media also forced a separation of audiences, creating the artificial divides of high and low culture. Evolution of media allows an evolution of audience. With its virtual writing spaces,10 the computer positions us to transcend these restraints, and to reunite all experience, within its algorithms, to recognize the common humanism within all communication.

The digital computer, when combined with the optical scanner, the music sampler, and a myriad of other computer input devices, allows us to reduce all physical media to virtual binary digits. At this point, when we have digitized sound, or photographs, or film, it is all equal in the cathedral-like space of the computer, without dogma. Images become reduced to a dataset: nothing more, nothing less. Every digital movie, every digital image, every digital sound is nothing more than a sequence of zeros and ones stored in the memory of the computer. These numbers can now be seamlessly combined and juxtaposed. In the computer’s virtual spaces, all forms of communication are equal.

The computer, in its use of multimedia, merely reinforces common and historic themes. In order to communicate in the interest of evolving the human condition, there must be access to the creative tools, the computer network, for all who are interested. The computer has the ability to structure all communication to the common and accessible level implied within the language of the dataset. Hence, it empowers the user to also reorganize any message in new ways that allow for pattern thinking, trans-disciplinary intercourse, and the visualization of the unseen.


The idea of making a large body of information available to others is not new. In 350 B.C., the Athenian Speusippus created an encyclopedia that purported to contain all human knowledge, as did Lu Pu-Wei in China in 239 B.C., who gathered 3,000 known scholars and assembled their knowledge into a work of more than 200,000 words. One of the limitations of these encyclopedias was their mass: Pliny the Elder’s encyclopedia, Natural History, compiled in A.D. 79, was said to comprise thirty-seven volumes containing 2,500 chapters. The next limitation was cost: at a time when books were reproduced by hand, works of this magnitude were fabulously expensive. The final limitation was more of a cognitive one: when large bodies of information are put together, some organizational scheme must be used. Modern encyclopedias are organized more or less alphabetically, with one entry following another from A to Z. This is a fairly arbitrary, modern, and limiting system. Diderot’s Encyclopédie, a text meant to further the Enlightenment by bringing out the essential principles of art and science, was organized by tasks and preoccupations.

Vannevar Bush, science advisor to Franklin Delano Roosevelt, has been somewhat forgotten, yet stands as a remarkable creative interlocutor. In his 1945 vision of the memex, he held out solutions to the limitations of human mind and dexterity. The memex was an unrealized tool that a more enlightened Harvard audience, listening to Calvino, might already have employed. His machine improves memory, like an encyclopedia, while allowing the mind to operate “by association. With one item in its grasp, it snaps instantly to the next that is suggested by its association of thoughts.”11 His vision of its ability to scan information allows the user to recombine art and knowledge, to become a creative interlocutor.

He talked of new organizational schemes—ones that can be customized to the needs and interests of particular users. His device combines two of the liberating capabilities of the digital computer—the reduction of images, words, and music to a dataset, and networking—in what was meant to be a personal device. He foresaw both the internal network of hypertext and the possibilities of the external network.

Remarkably, today Bush’s mechanism is as common as the desktop computer, and yet his essential idea of the memex—that users can be empowered by hypertextual trails through information—is unfulfilled. Why, we ask? It is not the fault of technology, but rather a failure of entrenched values and limited vision. The Web, the most prevalent implementation of hypertext, is essentially a one-way distribution system in which the user has little facility to be creative. We foresee the use of computer networks to facilitate and empower the creative interlocutor.

The creative interlocutor uses hypertext and hypermedia to create trails; these trails transform data into knowledge to be redistributed to others, thus feeding the network. The memex and, likewise, the computer create a miniature network within the data they hold in their memory. When linked to a larger network, such as the World Wide Web, their ability to create new meaning is increased almost infinitely.

Our ability to nurture and engage our own genius is stifled by an education that fails to recognize the value of associative capabilities inherent in this network. Clearly, this is the task of the redefined humanist and visual education, or what we once referred to as the classic Liberal Arts. It must engage us all, as scientists, engineers, artists, and scholars. Technology has failed us in accomplishing this goal because it has been segregated from humanist activity.


A new artist, the interlocutor-designer, should be the product of an enlightened engagement fostered by a new educational system that is trans-disciplinary in nature.12 The creative interlocutor is one who facilitates the exchange of ideas and information between one human need and another. This person is the producer, the director, the organizer-navigator. More specifically, this person is the curator, editor, and collector, then the maker, weaver, welder, builder, and distributor. History reminds us easily of such figures as Leonardo da Vinci, Francis Bacon, and Thomas Jefferson. But one must also ponder the great stretches by multidisciplinary minds such as the weavers of the Bayeux tapestry, Maria Sibylla Merian, Samuel F. B. Morse, the Roeblings, Booker T. Washington, Laszlo Moholy-Nagy, and countless others whose reach across boundaries changed civilization for the better. Creative interlocutors are programmers, producers, inventors, researchers, teachers, scholars, and volunteers. The creative interlocutor negotiates revolutionary associations, a kind of new genius.

We see a budding of the creative interlocutor in the collaborative spaces of the Internet. The language of the computer is a shared language that allows participation by those who so choose. In the examples that follow, there is no longer a single creator, but rather a collective genius, a web of creative nodes that weave together previously disconnected pieces of information.13 As innovators, creative interlocutors use their art in a manner that helps others find and define their own creative meaning in the interrelationship of ideas and forms.

In 1979, the inventors of the RSA encryption scheme (the one currently used by Netscape Navigator) put forward a challenge.14 They encoded a message and offered a $100 reward to anyone who could crack it. They felt that, given the computing resources of the time, and even granting advances in chip speed by a factor of millions, nobody would be able to break their code in the foreseeable future. They were wrong! Instead of thinking of a single computer as a self-contained and limited system, Derek Atkins, a twenty-one-year-old engineering student at MIT, realized that while one computer would take a long time to crack the code, he might harness the power of the Internet and distribute the computing load over many computers. And that is exactly what he did: in 1991 he directed his friends to use a recently discovered mathematical method to devise a program that would crack the code, and he had it ready to go by mid-1993. The program was distributed over the Internet to more than 1,500 computers on six continents to create an expanded computer that churned out 5,000 MIPS years.15 The code was cracked in the spring of 1994 by looking beyond the boundary of the individual computer and thus consolidating the power of the network.
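The distribution strategy in this story, splitting one enormous search into independent slices that many machines grind through in parallel, can be sketched in a few lines. The Python below is only a toy under stated assumptions: it trial-divides a tiny, made-up semiprime across local worker processes, whereas the actual RSA-129 effort used a far more sophisticated sieving method spread across the Internet.

```python
# A toy sketch, not the real attack: the search space is carved into
# independent slices, each handled by a separate worker, in the spirit
# of how Atkins parceled work out to computers across the network.
from multiprocessing import Pool

N = 104_723 * 104_729  # a small stand-in "key" to crack (two primes)

def search_range(bounds):
    """Trial-divide one slice of candidate factors; return a hit or None."""
    lo, hi = bounds
    for d in range(lo | 1, hi, 2):  # odd candidates only
        if N % d == 0:
            return d
    return None

if __name__ == "__main__":
    # One slice per worker; over the Internet, each slice went to a machine.
    slices = [(lo, lo + 30_000) for lo in range(3, 120_003, 30_000)]
    with Pool(4) as pool:
        for factor in pool.imap_unordered(search_range, slices):
            if factor:  # the first worker to find a factor "cracks" N
                print(factor, N // factor)
                break
```

Because each slice is independent, adding machines scales the search almost linearly, which is precisely the property that let Atkins trade one computer's decades for 1,500 computers' months.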

Another example of imaginative administration to enlarge our sphere of possibilities lies in the development of a computer operating system. The operating system Linux was not so much invented as evolved through creative re-administration. It is a prime example of the networked aesthetic of the expanded public sphere of individuals working in concert. It began with Linus Torvalds. He was in school in 1990 and owned a PC that ran Minix, an operating system designed mostly as a UNIX tutorial. UNIX is a powerful operating system which, at the time, could be run only on more powerful computers. So he imaginatively worked within his limitations and wrote a few programs—a terminal emulator and a disk driver—so that he could save files to disk. He posted these initial programs and generously shared them as freeware—software that is distributed primarily over the Internet and for which there is no charge. From there it took off. As a result of Torvalds’s interlocution, the operating system evolved in a democratic and Darwinian manner; anyone could contribute code to the operating system, but only the most evolved code would become part and parcel of the final release version. He presided over the development, but was by no means entirely in control of it. He provided the seed idea and the guidance, but left the mechanics of its development up to the community of creative users. Today, Linux has an established base of nearly ten million users in 120 countries and is composed of millions of lines of code. All of this came about primarily because Linus didn’t follow the usual notion of creating software within the confines of a defined proprietary scheme—hundreds of hackers around the world wrote it, collaborating over the Internet.

These examples serve to help define the notion of the “creative interlocutor,” a multimedia universal designer, engineer, artist, and socially responsible person whose mandate is to help negotiate the crossing of boundaries. While these examples reside in the field of technology itself, their parallels must be generated within the humanities and arts.

1 Beginning in the Middle Ages, the traditional fields of the liberal arts were defined from classical studies in order to reveal obscured meaning in the text, symbols, doctrines, icons, and mysticism of the Church. The ancient seven branches of learning included grammar, logic, rhetoric, arithmetic, geometry, music, and astronomy. All were means of deciphering the hidden codes of the Biblical text. Since the Renaissance and its enlightenment, the core of this traditional education has held that all areas of human endeavor are suitable topics for inquiry, regardless of their nominal concerns. An integrated individual versed in the liberal arts loves learning and is directed by intellectual curiosity rather than by disciplinary guidelines. The Renaissance's dawn out of the a priori methods of the Dark Ages revealed the various facets of diverse fields and the common rays of humanism's enlightenment. Educators such as Vittorino da Feltre of Mantua taught men to be well-rounded individuals. In his boarding schools, princes and poor scholars mixed in a classical education. Character was shaped, along with the mind and body, through frugal living, self-discipline, and a high sense of social obligation. All was done with an eye to the practical: philosophy was a guide to the art of living, along with training for public life. "Students were expected to excel in all human existence."
2 Natalie Zemon Davis, Women on the Margins (Cambridge: Harvard University Press, 1995).
3 Davis, 181.
4 Italo Calvino, Six Memos for the Next Millennium (Cambridge: Harvard University Press, 1988): 87-91.
5 John Dewey, Art as Experience (New York: Perigee Books, 1934, 1980): 7.
6 Dewey, 16.
7 Marvin Minsky, The Society of Mind (New York: Simon and Schuster, 1985).
8 Johann Wolfgang von Goethe, the nineteenth-century thinker, is said to have been the last man to have known everything. This fact was remarkable for two reasons: first, there was much less accumulated human knowledge at that time; second, Goethe's capacity to retain even this was itself extraordinary. Today, the genius of Goethe is supplanted by the convenience of the ability to communicate all knowledge.
9 "As a photographer, I don't care about photography, and have always been irritated with those who are concerned exclusively with f-stop and stop bath. For me, it is the communication of an idea to another that holds the true excitement. Photography is merely a means to an end, and if I could achieve that end in another way, I certainly would." Jonathan Lipkin, One Family's Journey, MFA thesis, School of Visual Arts, New York, 1993.
10 Jay Bolter, Writing Space (Hillsdale: Lawrence Erlbaum Associates, 1991). Here Bolter traces the history of the effects of technology on writing. He discusses the book, the scroll, and the pictographic and logographic alphabets. The computer is seen as merely the next step in a long series of technological advances that interact with the culture of the time.
11 Ibid.
12 Charles H. Traub, "The Creative Interlocutors: A Creative Manifesto." Leonardo, Vol. 30, No. 1 (MIT Press, 1997): 389-390.
13 Moholy-Nagy supported our notion of genius in his description of education at the new Bauhaus (Institute of Design) by structuring education there to place emphasis on "integration through a conscious search for relationships—artistic, scientific, technical as well as social. The intuitive working mechanics of the genius give a clue to this process. The unique ability of the genius can be approximated by everyone." From The New Education, p. 9. See also Minsky's reference to Papert's principle.
14 Steven Levy, "Wisecrackers." Wired (March 1996): 128.
15 MIPS is short for millions of instructions per second, a measure of computational power. A MIPS year is the amount of computing performed in a year by a computer capable of a million instructions per second.


In ordinary speech, fruit is what you can eat for dessert. For botanists, it is the part of the plant that protects and eventually releases the seeds, enabling the continuation of the species. For botanists, therefore, a pumpkin is a fruit, and so are tomatoes and peppers, while carrots and potatoes are roots. For greengrocers and their shoppers, by contrast, tomatoes are mere vegetables, on a par with potatoes and cauliflowers. Apples, on the other hand, and pears, bananas, strawberries, and gooseberries are fruits for both botanists and the common folk. Within and without the domain of science, fruits have been a much-frequented referent in the portrayal of the most essential needs, desires, and aspirations human beings have experienced since time immemorial. Indeed, ever since people began to write poetry or to make any kind of symbolic gesture, the depiction and celebration of fruit have been central to those processes. Sensuous and colorful images of fruit reverberate in the discourse of politics, physics, ethics, religion, jurisprudence, and other secular rites.

Above all, fruit language and fruit-related imagery have been a most successful vehicle to speak about sex and all the pleasures and terrors that go with it. Few have said it with more conviction than D. H. Lawrence: “For fruits are all of them female, in them lies the seed. And so when they break and show the seed, then we look into the womb and see its secrets. So it is that the pomegranate is the apple of love to the Arab, and the fig has been a catchword for the female fissure for ages. The apple of Eden, even, was Eve’s fruit. To her it belonged, and she offered it to the man…” The exhibits and the documents included in TUTTI FRUTTI cover a rather large territory, but not the entire planet. Their journey has unfolded primarily along the traditional routes, through Europe, North, Central, and South America and those North African and Near Eastern civilizations that, at various points in time, and with varying degrees of intensity, have either participated in the creation of the so-called Western World or have actively interacted with it.

A massive and articulate reference work offering a spectacular visual history of the allegorical values of fruit, as well as an exhaustive canon of their meanings, TUTTI FRUTTI draws its contents from painting, sculpture, architecture, cinema, television, photography, music, cartoons, crafts, industrial design, advertising, and gastronomy. Visual and written texts (and musical scores) are juxtaposed to develop unobtrusive but recognizable clusters of culturally compatible works, often casting an “irreverent” light on the meaning of each individual component. The volume concludes with a glossary and a gastronomic appendix containing dozens of exceptional recipes.

Image Revolution


Freedom of will may be a fabricated state of mind, an illusion, but one that allows us to act rather than be acted on. It is here, as image makers and consumers, that our education begins anew. As photographers we can abandon our old ways of thinking about the authority of the image, its cloak of truth and uniqueness. Maintaining this investiture makes photography a precious commodity, disavowing the inherent properties of duplication, reproduction, and interaction that should be allowed to run more freely. In understanding the artificial restraints we impose on photography and other visual images, we educate ourselves against what McLuhan called “Media Fallout” by challenging the structure, thereby allowing for change.

Metabolism of Photographic Truth in the Digital Age

From digi, Volume 1, Number 3. Hong Kong Arts Council, 1994

“I’m not interested in truth, only my own reality.” —Carlo Uva

“Lo, there shall be more stars as telescopes get more and more perfect.” —Gustave Flaubert

“The only reason to make a photograph is to see what something looks like as a photograph.” —Garry Winogrand

We want to believe that photographs record real events. Every photograph, even the most inconsequential snapshot, gains value in the culture as time passes. The further time takes us from the moment recorded, the more meaningful the image becomes as an artifact of history. A notion caught in the past becomes present and tangible in the retention of its image. Cultural perspective gleaned from an old family album evolves in value as a subjective experience when information caught in the light of the past is interpreted by conditions found in the present. This fact is underlined by the rage to collect historical material in photographic form. Snapshot albums, cartes de visite, even nostalgic albums from the late ’60s are swept up by collectors working flea markets, antique stores and attics. The reason for this flurry of activity is a need to hold on to some subjective image of our fleeting past—a relic.


Interpretation is what a person derives from a set of representative values, forms, or ideas in the image. The origin of these codes, their meaning, and their context are easily lost or distorted by the context in which the image is witnessed. Hence, all photographic witness has a kind of built-in fallacy: there can be no sure interpretation, since all the messages have been reduced to the ethereal film of light suspended in a two-dimensional plane. Any evidence revealed by the codes is based only on its likeness to something we already know. The facts and circumstances of origin become less important—even forgotten—as the aura of the image gains sway. Its collectability is a matter not only of its rarity, but also of how its codes recapture our idealizations and notions of the past. The image gains new value subjectively as its symbolic message becomes accepted as universal and its metaphors become the archetypal codes of the memory.

In more innocent times, we validated the photograph, if only for convenience and reassurance, by saying that it was worth a thousand words. We convinced ourselves that the real-world subject, caught in the light of the sensitive photographic grains, conveyed messages and codes indistinguishable from fact—the negative was irrefutable as an original. Photographic proof stood when words—even a thousand of them—failed. The unique seamlessness of the colloidal suspension held us captive to the belief in the camera’s faithfulness to the scene. As it is anyway, we suppose that photographs break down into identifiable elements that represent real things, but the elastic and fluid perception we call fantasy allows us to interpret those elements in many ways. Imagery in the “Age of Spectacle” no longer exists only as a record of being, but is the life substance—experience itself.


The digital world tells us unequivocally that not only is there no one-to-one correspondence between the image and the scene, but there is no original. All images are original, malleable, and mutable and, perhaps, even more valuable. The issues of witness, verification, and fact are sublimated as important elements of depiction. It is a given, at present, that a knowledgeable viewer can seamlessly change an image to his or her own designs through the digital process. This fact relegates the older processes of the photographic tradition to more passive roles as communication vehicles. Photographs are one-way signals, believed to be stabilized by the taker and permanent in their fixing of the observed object. Our belief in the immutable elegance of the older processes still holds us in awe. The object of the photograph itself is a kind of relic of a history when we still believed in the objectivity of the camera. Nevertheless, the ability to change things digitally shows us that there can be inherent discrepancies in any image that looks objective on the surface, whether made by the camera or the computer.


In digital depiction, nothing is suspended long enough to create an aura. There are no relics as all images can be changed. (Hence, the lack of acceptance of “Computer Art” that is only superficially stabilized.) The viewer is not held captive or confronted by the authority of a one-way artery of pseudo-objective representation. The response allowed by the digital signal metabolizes truth in a constant and evolving flow of real-world communications. The viewer can not only make choices, but can actually change the choice put before him.


My colleague, Douglas Davis, noted in a recent article that the fictions of the master and the copy are now—like lovers folded together in ecstasy—so intertwined that it is impossible to say where one begins and the other ends.1 There is no celibate negative. As Davis notes, the critic Walter Benjamin predicted this inevitable collapse by assuming that the aura of the original would also be eclipsed. Benjamin missed the ironic twist that the spread of mechanical reproduction might give an analog object of the past more value. As the image is dispersed, its popularity grows and, consequently, the demand for it (e.g., Ansel Adams’ Moonrise). He did not foresee the popularization and the collectability of the photograph. There is today a cult of cognoscenti moving a new commodity, a new industry, intertwining galleries, collectors, and museums in sometimes suspect intercourse which gives a skewed value to the object, heroic status to the maker, and an aura of originality to the production, while obscuring the inherent accessibility of the image and its discourse.


But, concurrently, if the aura of the image is diminished, the healthy exchange of image information infused by the interactive digital signal gives witness and the act of observation a richer if more complex meaning. A given perspective can be examined from more than one vantage point. The single authority of the photograph (or representation) is not solely given credence by the maker; the gallery, the newspaper, the network—all are answerable. In the digital realm, the truth lies in a dialogue which raises questions from observations taken and observations given. Meaning is gained in the continual and equal relationship of the image maker, the audience, and the subject, reversing their roles in an ongoing flow, metabolized by the computer in the arterial network of communications. Connecting, collaborating, cross-referencing, and collaging allow us to extrapolate from both subjective and objective material to an exchange of more heightened enlightenment.

Consciousness begins as a stream of potentially chaotic input from without. Ideas of things and events in the world were never copies of external reality, but rather the outcome of an interactional process within the subject, in which ideas underwent operations of fusion, fading, inhibition, and blending with other previous or simultaneously occurring ideas or presentations. The mind does not reflect truth but extracts it from an ongoing process involving the collision and merging of ideas. —Johann Friedrich Herbart, 1824 2

Contemporary consciousness is enhanced digitally as we take information and image into our own personal realm of creation. We are able to examine any supposed fact with deeper reference and arrange it in an order, sequence, or time space that better accommodates our personal ability to comprehend. Some fear digitalization because of its potential for deceptive manipulation. Its greater potential lies in giving us means to deconstruct and reconstruct all elements of art and perception—image, sound, and text. This ability makes for unique and singular activity that enables all to be creative; to interpret as we must; to make our own verification that is as original as the senders/receivers we are. We can be more self-reliant in our means to gain knowledge. We are further along the way to the democratization of the media. We embraced photography from the beginning because it offered such potential.


The relationship of the lens-recorded image to how we have historically perceived it becomes, as William Mitchell indicates in The Reconfigured Eye, more an issue of illustration than of reportage… [Viewers] will be aware that they can no longer distinguish between the genuine image and the one that is manipulated, even as news photographers and editors resist the temptation of electronic manipulation, as they are likely to do [with standards, ethical conduct and encrypted codes]. The credibility of all reproduced images will be challenged by a less naïve, more provocative receiver. “In short, photographs will not seem as real as they once did.”3 And thus, the analog camera-made image will become even more a collectible object; the object maintaining the aura of the original hoarded, sold, collected, prized because it survives. As John Szarkowski, former curator of the Museum of Modern Art, puts it, the subject becomes an artifact of culture with a distinctive air all its own. As now established, what one sees in a photograph is only one kind of representative reality; moreover, we know the image posits a subjective notion of history. At this point in the evolution of the recorded lens-made image, the growth of pure photographic seeing is clearly in question. It may become an artistic mannerism. At this moment in history, we are agitated by the compromise necessitated by accepting the subjective and objective natures of the representations that are about us. The trouble lies in our not being sure enough of ourselves to distinguish between them—authorities (the cognoscenti), too, have proven to be suspect. Even those who study ocular phenomena scientifically (psychologists, physicists, physiologists, et al.) are quite clear that how the world looks to us is a remarkable achievement calling for constant reexamination. No single explanation can account for what we think we can see in something we call reality. The human senses are always in conflict.
This same conflict is brought to our understanding of photography and the merit of the photograph as a telling object.


Jonathan Crary notes that the cybernetic realm is one where abstract visual and linguistic elements coincide, consume, circulate and exchange globally.4 The position of the observer in the real world is that of an image gatherer enabled by the digital response. We exist in a kind of graphic observatory that equalizes our ability to look at the broad scope of possibilities. At the same time, it allows us to isolate and select. As we constantly reexamine our own experience in relation to others, we expand our understanding. Rather than viewing as mass media consumers, we are now in an age of mass image customization. We receive and disseminate equally what our needs and whims dictate. Our unique personal universe expands in the new digital observatory. Photography, now, is part of the imagery ecosystem composing the real and the unreal, a collage that the computer serves both to assemble and to navigate through. Figuring out how to balance our urge to examine the world in the stable analog way, (as in looking at the aesthetic and formal qualities of the photograph) while mastering the digital interdisciplinary and multi-media production that contains that same image, is the question of critical cultural study. The viewers of all imagery must be active and educated to intervene.


By our intervention in the image, we are made aware of the plasticity of our universe. It is a space where there is no separation between representation and reality. We must be agile enough to accept the fact that experience is assembled from the image and the reality it represents. The meaning of the experience is only as useful as the codes or orders we extract from the whole picture, somewhat in the manner of examination allowed by a telescope. As we become better educated about the complex realm of the digital world interacting upon real experience, we understand that there is something called actual and virtual reality. They are two separate but interacting horizons calling for the development of a more optimal, multi-optical mind-set—that is, an ability to float elastically between one reality and the other. Our perception will have to use the computer as its lens/observatory to navigate us not only between spatial and temporal proximities, but also between spaces of objective and subjective representation. The computer observatory gives us the means for examining an isolated form (or photograph?) from which we can devise a specific context more directly related to our own personalized systems of organization. Having analyzed these forms, we can subsequently put them back into a given space with our own logic, establishing the truth of our own unique experience. But perhaps our personal one only! We can know only what we know! How the world looks to us will be all the more remarkable and confounding as the cybernetics of the computer world calls for constant vigilance, illumination, and psychological development. As human experience expands, so too has the technology we have made. It now expands us beyond our immediate, temporal existence.
In his classic article “Big Optics,” Paul Virilio states, “Let us remember briefly that there is no real existence in this world—in the real world of sensory perception—except by the delusory device provided by the egocentrism of the live presence, that is, the existence of a real body living here and now.”5


Recently, The New York Times reported that studies at Harvard revealed a remarkable discovery about how the brain works.6 Researchers found that the brain uses the same neural pathways for seeing objects as it does for imagining them—only it uses them in reverse. The discovery once again challenges the validity of eyewitness accounts. The imagined object, at least to the observer’s brain, is every bit as real as the one that is seen. Thus, we are presented with an even more confused relationship between imagination and reality. Luckily, it seems that in most of us the input to our cognitive thinking comes more from the eye and is still stronger than that of the imagination. In the course of consciousness and its development in cyberspace, the reverse may be conditioned to be the case. We can no longer accept that imagery only records, but must comprehend that the virtual world is a valid circumstance. Marvin Minsky says that in virtual reality “experience will be fulfilling as it satisfies a certain longing for unattainable feeling.”7 The act of image collaging allows you to indulge yourself in a realm of heretofore unimaginable, unseeable relationships. Verification in a tangible object like the photograph is necessary in order to maintain the semblance of old truths. Is it perhaps a dysfunctional process? We are now in the Age of Enabled Image Handling.

Standing Tall: Undoubted Vision

Speech given in appreciation of William Klein

William Klein Symposium, Cornell University, 1992

What is to follow is an appreciation of an unflinching talent. I myself am a confrontational, modernist-leaning photographer. I have a perspective from the vantage point of the waning of Postmodernism and the dawning of what, for lack of a better term, can be called Post-Postmodernism. These terms are really used here only to establish a time frame from the peak of William Klein’s work, to its rediscovery, to its current lionization.

In the course of history, real talent weathers the ironies of our changing critical modes. Postmodernism found its object in a sphere that was neither wholly cultural nor wholly institutional. It is in the tensely renegotiated space between the two that we might reevaluate Klein’s later work with regard to his use of subjects, his graphic overlays, his constructed collages and his painterly strokes. Here Klein parodies fashion, media and power. He now operates inside the power structure with the authority of fame and achievement while contemplating his own position at the same time.

When I think about Klein’s early work, its undaunted directness, its energy and its visceral sensuality, I think of Sophia Loren and specifically her comment (on the jacket of Klein’s book, Rome): “Klein has an eye like a knife. He is ruthless and outrageous but never mean. He is tender and funny and violent and, I’m sure, really in love.” In love with himself, with his family, with his cities, with crowded spectacles, with masses of flesh, with the medium of photography (about which he was once flippant), with his own vision, with the act of looking and being looked at—Klein’s vision is an affair between the self-image and the public’s image. It is a powerful expression of the primordial pleasure taken in the act of looking.

What narcissism allows his brash intrusion, his deliberate intervention, his confrontational stance? One observes him standing tall, peering undaunted into the masses, unchecked. He announces his presence with a confident authority and a knowing manner that is not only streetwise but also cultivated and sophisticated. He crosses like a cat from the salons of the powerful and glamorous to the backrooms of the lowly and struggling. Klein’s ego has found a constantly moving stage in the object world from which he can evolve his own particular illusion of reality. His intrusive camera allows him the power of the director—no wonder he makes movies too! He moves, distorts and forces a return glare from his subject. Who is this observer, and what are his limits?

Today there is a great difference between how the street photographer has to act and how Klein worked during his peak. Today no one gets as close. Permission is an abscess on looking. The postmodern world is a more hostile environment that talks and leers back at the self-appointed authority. Klein anticipated this fact by intensely framing his subject’s scowl. He provoked a confrontation in order to create his subject. The photographs are mostly about the interruption of a scene created by his presence. Because he is referencing his own act of observation, he is not objective. He was postmodern before Postmodernism. (It is no secret that the non-objective intrusion of Klein’s photographs irritated the dominant aesthetic of the Museum of Modern Art and offended the canon of John Szarkowski.) His work is as different from that of a peer such as Robert Frank as Cindy Sherman’s is from William Eggleston’s.

William Klein is almost always the outsider—the alien in his own country and in his adopted one as well. His accent is just foreign enough for him to feign innocence in his approach. But, he is in control, and he doesn’t need our approval. If anything, he is there to disapprove of us. He takes pictures with a vengeance and directs his visual barbs at the stereotypical “bourgeois” American, “haughty” French, “inscrutable” Japanese, “plodding” Russian and “comic” Italian! He is so savvy and so unflappable that he can take shots at any establishment.

His archetypes are not individuals but crowds, people gathered together. The concern is the posturing and posing that repeats itself in the mass. To quote the Italian Romantic, Carlo Uva, “He would love to love democracy if he could make the crowd more mobile and the individual more ignoble.”

In Freudian terms, Klein is engaged in a kind of active scopophilia which demands identification of the ego through his fascination and recognition of his own kind. His subjects are alienated; so too is he. The ability and power with which an artist manages his neurosis is a defining factor of his artistic greatness. Klein shapes his fantasies and gives them social references. Artists don’t see things as they are but as they see them. He has admitted to the autobiographic nature of his New York book. He describes himself as a prizefighter: “Sometimes I take shots without aiming just to see what happens. I rush into the crowd. Bang!” There is no mistaking who Klein’s heroes are: the brash outsider, the excluded, the establishment reject, the ignored genius and the maligned champion. Need I remind us of his great film about Muhammad Ali? Remarking on the so-called cognoscenti of the boxing world, Klein said, “From the beginning, Ali said he was the Greatest and everyone laughed. Boxing experts, the most nearsighted and pompous of experts, would take on Talmudic airs, smile, bring out gibberish statistics and Louis and Marciano and other Godzillas, but what did they know? Ali was the Heavyweight Champion of everything. Especially everything American: Hype, PR, Media, Showbiz, Street Theater, Rap, black humor, moneymaking and Politics. And several things not particularly American, like courage and conviction.”

Is this an anti-modernist statement? Postmodernists have no heroes. Klein harbors some admiration here and there. Albert Camus in his existential notebooks gives us a definition of Klein’s view of the world: “Real nobility is based on scorn, courage, and profound indifference.” A creative artist like Klein has the capacity to record his own indifference. The camera extends the ability for impressions and experiences that reflect his nature. It gives Klein the space to operate alongside and within the institutions of authority and the art world itself. He is characteristic of the post-modernist mistrust of the position of the outsider who co-opts himself by his own creative act and by the recognition and success of it. “His disdain was so powerful that it attracted his detractors so that they were no more . . . ” —Carlo Uva.

Aaron Siskind - An Appreciation

An Appreciation, 1991 | “The best memory of the celebration is the cake.” Those who enter this gallery hardly need to be reminded of Aaron Siskind’s contributions to Photography and Art. Therefore, I celebrate him here, as a friend. I am not alone in my memories as Aaron counted among his colleagues an unusually large extended family of admirers. All of us who keep him in our hearts are fortunate that our mourning of his loss is offset by the knowledge that he lived in the fullest of ways and died simply and peacefully.

Aaron’s life was not an uncomplicated one; he lived it as he felt it. He once described a good photograph as “a balance of continuing tensions.” Such was the essence of his own vitality. It was in his creative drive to find order in the commonplace and beauty in simplicity.
I did not know him in what he referred to as the “old days,” but I knew him for a long time. My first encounter, during the rage of an earlier war, was when I first registered for classes at the Institute of Design. I had not seen the famed photographer before. I was sure he was going to be a figure as grand and as elegant as his photographs. When I asked the little man eating at the registration desk where I could find Aaron Siskind, he replied, “You’re looking at him, Bud, or at least I think I’m me. Would you like a piece of deliiicious cake?” The offer was irresistible. The sensuality and generosity of his manner overwhelmed the almost comic figure of the “old man” in the Harris tweed jacket with the self-inflicted coffee-spotted shirt.

Undoubtedly, Aaron Siskind was one of photography’s greatest teachers. Ironically, I don’t remember any of his critique strategies. I can still feel the excitement of his Wednesday morning class, and how each of the students anxiously awaited his glance at our work and one of his three stock responses: Silence; “What else do you do?”; or the coveted, “It’s a beauty!” He dismissed the longwinded, the ideological and imitative. He praised energy and ideas.

Aaron did his best teaching at the Belden coffee shop, where we ate the worst cake and learned the best lessons. It was here that he revealed his humanism. He delighted in the observation of all the characters around us and catalogued their foibles without judging them. He admired the efficiency and steadfastness of the waitress who worked there for years and the flamboyant gestures of the overbearing owner. Most of all, he liked watching the young people holding hands in the booths. How many times did I hear him say, “Aren’t they marvelous?” The “old man” always ate too much, complained, popped a few Gelusils, and left the now legendary, outrageous tip.

I accompanied him when he returned to the Belden after three years of living in Providence. The same stoic waitress brought him his cake and overly sugared coffee before the request left his lips. “You still driving?” she asked. “What do you mean?” Aaron indignantly replied. “You’re a cab driver, aren’t you? I always used to see you looking at folks around here. You can always spot a cabby by his eyes.”

I drove with Aaron through many poetic cities that captured his imagination. He did a lot of looking in places like Chosia, Makenes, Jalapa, Recife, Bath and the like. He loved to drive through the busiest and often most mundane sections of cities. He skipped the grand palaces of culture; the important stops were the quarters where he could observe the ebb and flow of ordinary life. Aaron frequented that same coffee shop wherever he went. It was best when it had an outdoor table overlooking the marketplace and when they served chocolate cake. This atmosphere nourished his eyes and fed the awesome inner solitude of his wonderful emotive works. Even when he shouldn’t have, he traveled, hoping that the next stop would be just like the last one he visited, only better.

Before his death, I visited Aaron in a Providence hospital. As I attempted to comfort him with platitudes, he smiled at my own discomfort and limericked:

There once was a man from Pawtucket,
Who, as fate would have it, couldn’t luck it.
He climbed a great wall
And had a big fall,
Broke his leg, and said, “Aw, fuck it!”

Undaunted, he always relieved the moment with comedy. I think he was the most civilized man I ever met.

From the Past: Perspectives on the Future

The Imaging Revolution, speech at Ohio University Athens, 1986

When I left college as an English and journalism major, and turned away from those disciplines in order to pursue my passion, photography, I had the self-deluding idea that I could make coherent my personal and private thoughts within the scope of the visual language of the print and communicate them more effectively than I could as a desk-bound writer. I thought that photography offered an independent and equally valuable means of communication that would save me from ever having to write another academic critique.

So here I am, twenty years later, after two weeks of anxiety, trying to prepare this paper, doing just what I set out NOT to do: addressing a monumental change in visual communication with words. Unfortunately, photography in its alternative organization doesn’t substitute for language. Had I some photographs of my own, made with the new electronic technology, believe me, I would attempt to dazzle or subvert you rather than pontificate herein. I am envious of those who have managed to organize themselves in such a manner as to integrate their creative photography with the current technologies. I am sure they will inherit the earth irrespective of their vision. Nevertheless, I suspect I am like most photographers of my generation: I am caught in some sort of cultural lag or gap between knowledge and application, not sure of how to apply the new imagery imaginatively.

Back in ’67, I witnessed the media’s first generation gap. Marshall McLuhan was the prophet of the moment, a spokesman for the first media revolution. The medium had not only become the message, but it had also become, vis-à-vis his kind of analysis, self-reflective. McLuhan stated that “It is impossible to understand social and cultural changes without a knowledge of the workings of media.”1 McLuhan and other theorists who came before him, like the Swiss linguist Ferdinand de Saussure, and the German critic Walter Benjamin, created a “critical mass”—a consciousness that has shaped our perception of the medium and, as a consequence, has become a message about which future image consumers and makers need to be educated. (I probably need not remind you of the frequency in post-modern art of images made as media comment that owe a debt to the dictates of semiotics and appropriation prescribed in the writings of these critical minds.)

The revolution in media thought brought to consciousness by McLuhan was concurrent with the evolution of electronic technology which gave it a universal presence and allowed it to invade every social and moral issue from the Vietnam War to the sexual revolution. The electronic media made itself inseparable from any happening. The event was the recording of itself. McLuhan warned that there was no more substance as it was changed too fast by the media. Thus, “if one approaches his environment,” this all too pervasive “social drama, with a fixed and unchangeable point of view,” one was doomed to a “witless and repetitive response to the unperceived.”2 McLuhan also warned that by bringing the disparate together, technology and its environment would break old barriers and erase old categories, making private thoughts no longer possible. The electronic gossip column was unforgiving. However, McLuhan did envision a remedial control made possible through education in how the media works. The positive message was that the collective workings of the media might indeed bring us together and change the social fabric if we accept its democratizing properties.

It was an exciting time to be a media student. I was in Chicago when the post-1960s promised us great change. As a young photographer, I stood with a horde of other journalists recording the message of an equally young Jesse Jackson as he confronted the last great American demi-god, Richard J. Daley, in His Honor’s own office. I remember little of the content of the discussion, but I vividly recall being coaxed by Mr. Jackson’s charismatic gestures to frame him under a gold-leaf portrait of George Washington. The disappointment was that neither the other journalists nor I caught the essence of the moment in the symbolic pictures subsequently reproduced in Chicago’s newspapers. No, the essence remained in the events that allowed the image to be made. At the time, I had no thoughts of these matters. I was caught in tunnel vision, making what I assumed was a great picture. That the confrontation had occurred for the camera was no accident! I was an unwitting accomplice to media hypnosis with no memory of the issues behind the event. The image I made had a latent and important message of change. A revolution did take place in Chicago—the Daley machine was overthrown following the mayor’s death; Chicago elected a black mayor named Washington, and today, Mr. Jackson seeks to have his visage framed not below the portrait of George Washington but beside it.

Ironically, despite all the media interference during the period, not much really changed. The poor of the city are just as poor, if not more so; the middle class is moving out, and the liberals who once battled the forces of Mayor Daley beside Lake Michigan are longing for “the city that works.” (All this is not to cast aspersions on Mr. Jackson but to change the cliché: “a picture subverts a thousand words.” Mr. Jackson uses his image well, and because he does, he may indeed be the most attuned and appropriate candidate in a media-dominated culture.) Maybe we have come together; faces have changed; roles have reversed, and images have been switched, but it is because the media just keeps consuming the message with no evaluation of content.

What I am leading to is a caution that we not forget the promise and failures of this earlier period as we now embrace the 90’s and this second technological revolution. We might enter this period better educated, but we must not be so enamored of our hardware that we forget the issues that make for a true cultural revolution in how we use it. Also, we must try not to be caught up in our own rhetoric, in the kind of critical hegemony which obfuscates the unselfconscious creative experimentation and expressive interchange possible in using the new visual technology at hand.

We can be conscientious. Remembering that the imperatives of our media-manic culture co-opt our very activity in these present discussions, we must recognize that the ends to which they lead are largely determined by cuts and edits, and by how and where our thoughts will reappear.

Most of us who are audience, the so-called “product household,” the consumer, the consumed of technology, are paralyzed by the collective force of technological information change and exchange. The electronic image is an all-pervasive, impenetrable, impersonal structure that lacks not personality, but soul. As Roland Barthes expresses it, “Technology makes us passive in our ability to effect positive values in our lives.”3 No matter how real and instant the image, whether made with a 20″ × 24″ Polaroid or an electron beam, it hides an encoded message and thus is manipulative by character, despite the best intentions.

My photograph made in 1969 of Jesse Jackson tells us little about the real facts (if there are any) of his encounter with Mayor Daley. It is only an idea of an ideal that furthers a mythology. The technological pretense of the photo/electronic image to realism further masks that which fabricates itself. There are too many swords of Damocles hanging over the headiness of image evolution in this new age—a scenario of someone else’s control that gives us only the choice between a fixed set of messages and the random act of selecting them. However, the artificial intelligence seer, Marvin Minsky of MIT, suggests that the imagination offers a third alternative, one called freedom of will, which lies beyond the constraint between the fixed and the random.4 Freedom of will may also be a fabricated state of mind—an illusion; but it is one that allows us to act as opposed to being acted upon.

It is here as image makers and consumers that our education begins anew. As photographers, we can abandon our old ways of thinking about the authority of the image—its cloak of truth and uniqueness. Maintaining this investiture in the image’s authority makes photography a precious commodity and disavows the inherent properties of duplication, reproduction and interaction that should be allowed to run more freely. In understanding the artificial restraints we impose on photography and other visual images, we educate ourselves against what McLuhan called “Media Fallout” by challenging the structure and thereby allowing for change. The reader has the right to interpret and, in effect, create the work of art, argues critic Stanley Fish. The imagination becomes active through continuous engagement in the critical dialogue, through recreating the work and by joining and breaking with other interpretive communities.5

Anxiety is heard in many spheres of criticism that react against the portent of the electronic message. Educators like New York University’s Professor of Media Ecology, Neil Postman, fear that television has nullified our logical and cognitive processes. They maintain that pictures have no thesis as they are only analogues whose level of abstraction is concrete and invariable.6 Jean Baudrillard charges that the simulation technology so obscures reality that we no longer repress anything, which is why our culture is close to the sphere of psychosis.7 In a new novel, Swiss author Friedrich Dürrenmatt sees an ironic twist to this psychosis. He defines contemporary man as one who would suffer from meaninglessness if he were not constantly under observation.8 Hence, our preoccupation with stardom!

In the field of photography, I hear image makers complaining about the loss of individuality in their work to the new simulation technology, that their (romanticized) role is demeaned. Battles over copyright laws, resale of work, alterations of originals, model releases, and authenticity are growing in frequency and may possibly even impede the application of the technology. But, the root of all this anxiety may be, again as McLuhan put it, “in great part the result of trying to do today’s job with yesterday’s tools—with yesterday’s concepts.”9

While I resent the use of the term “post-modern” because, based on current creative output, the implication is that change is no longer possible, the post-modern age offers us the opportunity to interact personally with the media. The phantom image can be demasked by imaginative play at our own workstations. Interpretation can break down its aloofness and change content. If, as Clement Greenberg says, “modernism used the characteristic methods of the discipline not to subvert it but to entrench it more firmly in its area of competence,” then the possibilities of any feedback to check its authority are lessened. Each previous image system has forced us to respond to impenetrable logic and dictates.

But, post-modern, new-tech expression offers the audience/creator the alternative to collapse boundaries between the media and to open up communications from one image to the next. Modernism may have run its course because there may be nothing more we think we can say anew. But, if we can break apart the messages both previously and currently being beamed, we have the means of de-mystifying the image and detaching it from its aloof atmosphere; we combat our passivity and provoke change.

I hear the photographers’ fears of losing the primacy of their images as they become manipulated by the Scitex machine and random foreign users into something distorted from and other than their originals. I counter that technology is democratizing the process, that the photographer can just as easily alter the images back again, change them even further, separate their parts, reassemble them with other messages and disseminate them once more through their own electronic publishing systems. Information need not be in exclusive hands. We must not resist technology. We can evolve with it, accepting the revolution and the power it offers us as creative people. “Reproduction emancipates photography,” as Walter Benjamin wrote, “from a parasitical dependence on ritual.” There is no longer a one-way flow of information in this tech-age of simulation.10

We have been in a classic double bind in which we are told that we have all sorts of technological consumer resources but no license to use them to subvert authority. Likewise, we are told all too frequently by institutional bureaucracies that we do not know enough about the system to argue with it. But, this second revolution will make it more and more difficult to sequester information. We have the opportunity to educate ourselves. Mr. Jackson’s scene in the Mayor’s office, the voices and background data, all are on tape and will be recallable with relatively instantaneous speed as well as scrutinized at the whim of any oppositional voice in his or her own private time frame. Intelligence can now be mechanized without limit and, as such, can also be decoded! The photograph of Mr. Jackson is no longer just the result of a series of actions that will remain unchallenged. Rather, it can now be acted upon by the creative decoder to reveal evolving patterns of information and misinformation.

It has been said that human vision is the historical product of making pictures and, also, that we have more memory than is discernible in language. We continually change our language and make new snapshots in order to grasp the essence of our experience. The mind too is apparently changing constantly in its makeup of smaller minds. Edward Fredkin of M.I.T. is a spokesman for the revolutionary idea that the universe might be something other than energy and matter, something more fundamental like binary units of information that, in their ceaseless repetition and transference, create energy and matter. Thus, we do become what we behold. “Science and its laws become only statistical laws, like those that govern the letters on this page; they are accidental, without real explanation, and have little to do with the meaning,” and what I am trying to say.11

We have now crossed over from the realm of science to the metaphysical, but in so doing, we have allowed for choices. To Fredkin, a physical reality and an idea may be similar, and consequently, the realm of computer intelligence may be more interesting and tractable than the real world. Our re-education centers on learning to accept these considerations.

In the new digital world, images are numbers! The computer is a conceptual structure that challenges us to react and interact in a hyper-real world. The computer acts as a virtual camera that is allographic. (The conventional camera is only an analogous means of recording information whose mimetic response implies reality.) The computer, on the other hand, is indirect, a discontinuous process of information which does not work as a mirror of reality and is, thus, disanalogous to its source because it has been formalized in a logical system for abstracting, manipulating and transporting information. “Digital information is formalized, discrete and choppy. Discontinuity is inherent in the information system and is part of its successful operation not its breakdown.”12

The computer is aloof from the real world, dealing only with numbers that have no direct reality quotient. It is this kind of hygienic aloofness that enables our imagination to roam free. All we need to do is to interface with the digital information and we can reconstruct it in any form we choose.

Yes, from this day on, photography is DEAD! At least it ceases to exist as we know it—a seemingly reliable trace element, a reference to a direct experience. The reference is no longer needed; inference is all that is required. Many conservative voices in the photography audience will cry, “How will we ever know truth?” Well, did we ever? There is nothing new about prevaricating photographs. We have always been able to manipulate them. The semiologist reminds us that photographs are only signs. We are no less in danger than we were before if our education focuses on what these signs imply as well as on what they are.

We will probably share the feeling of loss as the magic alchemy of film and chemicals gives way to laser jets and electronic impulses. The romance of the photographer trekking out to find new vistas (a tenacious allure of photography left from the 19th century) will have to give way to discovery in the mind. The ritual exercise of stopping action has dominated our thinking and education about photography with something of the heavy-handed dogma of institutionalized modernism. Oddly, photographers react to being called voyeurs but are reluctant to give up the direct experience of looking. Computer imagery is hygienic as it puts its mathematical logic between us and the experience of snapping pictures and asks that the tunnel vision of the mirrored image now scan instead. Photographers have entered a conceptual age where learning how to see in the real world is best accomplished by practicing in the imaginary world. The manipulated image of the computer may extend the range of our ability to see, to know. We will learn by experiencing the paradox of disparate things juxtaposed.

Accordingly, as educators, we will need to teach backwards from the creative experience in order to understand cause in so-called real world events. The flight simulator is a case in point. We might check the pilot’s error by being, in effect, all backseat drivers with our simulator attached to our airline seat. Or imagine, if you will, a holographic self-portrait completely computer generated by its maker, who puts him or herself on the analyst’s couch and is allowed in three dimensions to manipulate his or her own interactive analysis to follow any imagined scenario. This patient can be preparing for the future by seeing and reacting in the imaginary past. We cannot predict the future, but we can practice how it might unfold.

The meaning of things, to paraphrase the critic Paul de Man, will be passed on intertextually from one body of information to another.13 Hence, the New Vision and its education call for breaking down the curricula of one or more disciplines and moving from one mode of expression to another in an interdisciplinary approach. So, where do we stand as photographers, and why are we here today discussing this image revolution with the word “photography” at the head? Because photography’s first 150 years are over, it is resurrected in the interdisciplinary interchange that is the mandate of this revolution. Sometime in the process, my original image of Jesse Jackson under the portrait of George Washington has to be scanned as a primary matrix. Perhaps it will interact with a current image of Mr. Jackson through the hands of a Nancy Burson, who will reveal something previously unseen about the candidate. The camera was the first device capable of cataloging large amounts of information quickly—an early computer. As a form of vision (whether in video, film or still photography), it remains the interconnector of the new media language. I believe students know more about what they see than what they hear or read. I believe they understand the signs of the photograph but may not be able to express their knowledge in the linear logic of the present language system. By starting with the image, breaking it down into easily nameable parts, then interchanging them with other images, it may be that learning skills can be taught backwards, and the media generation gap can be overcome. Photography is the obvious jumping-off place for an interdisciplinary approach to education in the second revolution. Anyone who can take a picture can massage it and create anew. Marvin Minsky stresses that “the educational system of the future must be concerned with how we learn rather than with the acquiring of specific skills.”14

The challenge here is for educators, industry and media alike to take the emphasis off the development and use of technology to further replicate the control systems now in place, and to put the emphasis instead on the means by which this pluralistic technology can be used in the creative learning process. This requires a dialogue between the maker and the audience, and the audience that becomes maker. This dialogue, in being allowed to break the symbol and reorganize it, is education in the civil defense against media fallout.

Finally, a few thoughts about our contemporary art culture which may help to ease some of the problems that we inevitably face in this period of transition, brought about by the technological revolution. Those of us who have been hypnotized by the modernist imperative may have found ourselves resistant to the post-modern artist’s cool attempts to intervene in the media message. Many of you probably share with me some sense of the current failure and redundancy of this act abounding in the galleries. Much of it is glib and smug in its pretense to creative individuality. Perhaps I need not say that Marcel Duchamp, Moholy-Nagy and others had conceptualized over 50 years ago much of the current gallery dialogue. What is unfortunate is that the art world works on the production of commodities that in and of themselves are often quite meaningless and speak primarily to an elite and to those who are already converted. It is often an insular world of buying, trading and indeed manipulating, not unlike its Wall Street cousin. The new vision enabled by the technology revolution is one that empowers others to be creative. It is the creative accomplishment of the collective, not of the individual ego, that will ultimately deserve the patrimony of our culture.

1. Marshall McLuhan and Quentin Fiore, The Medium is the Massage (New York: Bantam Books, 1967), 8.
2. McLuhan and Fiore, 10.
3. Roland Barthes, Camera Lucida (New York: Hill and Wang, 1981), 85–87.
4. Marvin Minsky, The Society of Mind (New York: Touchstone, 1986), 306.
5. Stanley E. Fish, “Interpreting the Variorum,” in Debating Texts, ed. Rick Rylance (Toronto: University of Toronto Press, 1987), 155–171.
6. Neil Postman, “The Teaching of the Media Culture,” in American Media and Mass Culture, ed. Donald LaZere (Berkeley: University of California Press, 1987), 421.
7. Jean Baudrillard, Simulations (New York: Semiotext(e), 1983), 152.