Saturday, December 28, 2013

All You Need Is Not Enough

Whenever I explain my "two essentials" theory of success it generates vociferous denial.

The two essentials are Status and Confidence. There are people with status who lack confidence, and confident people who lack status; neither type will become an achiever. Both characteristics are essential to gaining success. And the two characteristics are sufficient, needing nothing else. (I'll be coming back to this last point later, so save your outrage.)

Bear in mind that I'm describing qualities as perceived. Status and Confidence can each be faked, and both together. If they are perceived by others as true, the faker will have success with the gullible.

But true Status and true Confidence certainly exist, and sometimes in the same person.

True Status means a social standing certified by some impressive institution that represents a summary approval of a person's quality. They have been evaluated and found qualified, and the rest of us can trust that judgment without conducting our own evaluation. Professionals have status, the rich have status, PhDs have status until they blow it. Teachers today in the US have little status, and lawyers are always under suspicion.

True Confidence means a self assurance that obstacles will be overcome even if they cannot be fully foreseen. A confident person says to herself or himself, I know I can do this even though I've never done it before, I have faith in my abilities to perceive, assess and respond because I'm ever striving to do my best, and although I have always a wall of doubt to scale, I have always been able to climb over that doubt and deliver good results.

A faker in each of these two qualities simply believes the lie he tells himself, which enables all lies to others.

There are fakers who have true Status and Confidence as fakers. They are widely acknowledged as being really good at telling the lie that others want to hear. Some of them are politicians and some are motivational speakers.

Whether faked or true, perceived Status and Confidence are essential to success. But they are no guarantee of quality. This fact explains most of the mediocrity in life.

For what happens is this: People with perceived Status and Confidence are often given opportunities on the strength of perception alone, with no due diligence as to whether they can actually deliver desired results. Con men build penny stock empires that invite their own collapse; blue ribbon panels of prestigious people issue clueless reports; managers who have successfully job hopped to a high level position demonstrate they have no talent for running an operation, having never run anything but their own career advancement.

Thus, much of what we reflexively attribute to the Peter Principle is the fault of the people who promoted candidates of perceived Status and Confidence to their level of incompetence.

Now, back to what is not on the list of necessary characteristics for success: Talent, authentic, enviable talent. Its products are worth stealing, its mysterious wellspring quite frightening to those who have little talent of their own. Talent is like being born a magical person, someone who can make things happen. And magical people are considered threatening; they need to be controlled. Talent is not necessary for success because talented people can be appropriated.

To many creative people, talent is a curse. People of perceived Status and Confidence confiscate the golden eggs and give the creative person the goose. Snarky critics demean the talented person, attempting to deny them Status and to shake their Confidence. Talent brings out the quiet mean streak in mainstream hypocrisy.

This truth is pretty widely acknowledged, and it does not offend. My hypothesis receives vociferous denial not so much because it does not make talent essential, but because it does not acknowledge the commonly believed fount of success: Dedicated Ambition. 

If you believe that wishing hard and working hard trump all else, you are in the mainstream. If you realize that wishful thinking and unproductive effort are delusional, you are a heretic. I am a heretic, and this makes people angry, as if I were refuting the only path they see for themselves to success.

There is a relationship between talent and hard work. People with basic talent can greatly improve their skills. But Dedicated Ambition is something apart from the effort, drill, study, training, and self discipline that hones talent to its sharpest edge. Dedicated Ambition is an occult potion for those without talent. It gives one powers that are not natural gifts. It is not exclusive like talent, it is available to all.

The quotidian essence of Dedicated Ambition fits it well into the American Zeitgeist. We as a people have always believed to our marrow that with enough hard work and pure determination, anyone can achieve anything they want, you just have to want it hard enough. 

This belief is demonstrably untrue and at the same time, impossible to falsify. For every gold medal winner there are silver and bronze "losers" who just didn't want it hard enough, and countless also-rans, a vast majority, for the sole victor to leave in the dust. The true statistics of competition are easily dismissed by the culture of winning. Such denial is overwhelmingly powerful, built into the American psyche. We don't fault ourselves for not being good enough, we fault ourselves for not trying hard enough.

Notice that in the bootstrap mentality of Dedicated Ambition there is no special place for talent. You don't need it starting out, you just have to want it hard enough, and you will acquire it through belief in yourself. In this way, talent becomes a product of Dedicated Ambition. With enough desire, even a pig can fly.

Notice, too, that a lot of the "impressive institutions" that certify Status also highly value Dedicated Ambition. And why should this matter more than rigorous tests of excellence? I'll answer with another question: Why do some engineers build bridges that fail? Institutions can sometimes be too impressed with themselves and not enough with standards of performance.

Performance is the domain of talent. In some fields, talent prevails. Status and Confidence may gain an audition, but only performance will win the part. When is the last time you applied for a gig that included a performance test? If you are an actor or a musician, even a successful one, you understand and appreciate that auditions are a necessary part of your career. If you are a manager or a professional, you are used to coasting on credentials and recommendations, and would resent having to demonstrate your ability through repeated audition performance.

And that is why All You Need Is Not Enough. Status and Confidence are of little help in the performance arena, where talent emerges and shines.

But talent should never expect success as the reward. Knowing you did good work is often talent's only gain, incentive enough to do more good work with little expectation of reward and no escape from having to audition for your next gig. Talent is a compulsion, not a goal. Talent is who you are and who you must always be, regardless of outcome.

Tuesday, December 10, 2013

Deep Time and Shallow Clocks

Studying up on paleontology, I find that the vast span of geological time I once thought intimidating doesn't seem so vast anymore. One hundred seventy million years of dinosaurs goes by fast.

Continents move at a quick pace for something as large as a continent. We can now measure contemporary velocities with GPS technology at centimeters per year, a stampede of mountaintops and shorelines. 

Continents moving as fast as fingernails grow seem to me to be in fairly rapid motion.

The major psychological shift I'm experiencing is that the human life span now feels impossibly brief. Deep time can be plumbed, but the breadth of existence from birth to death is a measure too thin to be useful.

And what a slight proportion of our time is spent transferring knowledge from generation to generation! Perhaps once there was a thing worthy of being called a culture, but nowadays, as we approach the attention span of Drosophila, what mechanism is there for collective learning that transcends the momentary sensation?

Yes, some will claim that the longest lived cultural institutions are the Abrahamic religions, which together span about three thousand years. And I answer that no religion has demonstrated itself to be a learning institution. Religions are stasis institutions, insisting on adherence to abstract, timeless and placeless ideals. They have no clocks and they do not value this world. They have sponsored scholars and universities, but their belief systems do not function as living, collective memories ingesting and processing human experience, constantly revising their conclusions.

For that, one would have to turn to science. Science might seem the only hope for a cultural repository of cumulative investigation, except that science as practiced has become diversified beyond comprehension, an institution of cubbyholes, where specialized experts don't understand each other's work.

And yet, science provides the only wide scope on reality we have beyond immediate sensation. The climate changes we have wrought might remain mostly under the radar of human awareness, save for sensationalist headlines written about scientists' findings. When things happen too slowly for us to notice, we think no change is taking place. The gradient of change is below the threshold of perception. When things happen fast, they usually happen in fits and starts, the jagged chart line of change providing ample excuse for denial.

A cultural learning process would have heuristics substituting for statistical analysis, ways of recognizing shifting envelopes of variability, an instinct for knowing that things are not as they were.

An individual can grasp time before and after one's existence, but a society cannot. Our clocks tick too fast and out of sync. We have no means for gathering and evaluating knowledge that transcends generations, save for a rare "longitudinal" study, and even then, the results are regarded by a public with severe short term memory issues as the curiosity of the moment, quickly forgotten. 

It occurs to me that human affairs should be run by scientists, and only scientists, and that democracy is absurd because it always tries to undo itself with ceaseless battles for dominance swaying between extremes, making of our collective mind a bipolar paranoid schizophrenic. Taken together, we are a truly psychotic species.

But a world run by technocrats would be just as bad, solving problems through genetic engineering and extermination.

Jeff Bezos, the Amazon founder, has funded a ten thousand year clock. What we need is a society of ten thousand year clock keepers, people who invisibly nudge the course of human affairs toward wisdom.

This needn't be a secret society, as no one would take seriously an organization whose vision looked forward farther than recorded history can look back.

This society would need the endurance of religion and the awareness of science. It would have to pass its mission on through generations even as it adapted to changing circumstances. It would be able to infiltrate every center of power without revealing its methods for guiding human affairs.

Rather sounds like Asimov's Second Foundation, except that mentalism isn't an option.

I would not be disturbed to learn that there is a smart, benign hand guiding an otherwise stupid, self destructive species. Not the hand of a god who thinks nothing of destroying the planet it created, as a broken plaything is to a child, but rather, the hands of the smartest, most humane and empathetic people human genes can produce.

A society of Nelson Mandela types, looking ten thousand millennia hence. As far fetched as that sounds, it is about the only hope we have for lasting as long as the dinosaurs did.

Tuesday, November 5, 2013

The Political Thriller Revived

For a while now political thrillers have been generic action movies playing out against vague and largely irrelevant political backdrops.

I think the genre is about to undergo a revival, politics to the foreground and the action more psychosocial. Always the threat of violence lurks, more frightening from the shadows and more insidious because those shadows reside in the soul.

Movies inspired by real events hit the screens a few years following those events. You can be sure that at this very moment, several scripts derived from the Snowden revelations and the federal shutdown are being shopped around. Most of them will hew to the familiar tropes about our government out to get us and the ruthless ambition of politicians, tropes that are throwbacks to the 1970s culture of paranoia.

I have in mind something different, a political thriller that turns tropes upside down.

In my story, terrorists, the usual suspects, are vastly overrated bunglers, struggling against their increasing irrelevance.

The Tea Party "patriots" are subversives in the legal as well as the rhetorical sense of the term.

Mainstream Republicans show themselves to be nothing more than money mongering opportunists with no moral center.

Journalists worthy of the name are few, and most with bylines and televised faces are self promotional hacks addicted to sensational speculation.

Liberal righteous rage finally, foolishly and fatally bursts out of its fabled self restraint, a serious menace posed by rational, overly tolerant people gone mad with vengeance.

Intelligence services are concerned primarily with avoiding culpability by erasing their tracks. In a strange way they are pathetically innocent of what they have wrought.

The villain is not a secret movement, it is the American brew of apathy, self righteousness and impetuous power. The conspiracy is not about cover ups and murder, it is about collective denial in a culture where people nurture their favorite nemesis in order to justify their own aggression.

Amongst the military are the good guys, the only adults in the room, and the Chairperson of the Joint Chiefs is the wisest woman in the world.

The NSA fishing pond stocked with archived phone calls and emails becomes an asset vital to the survival of a democracy that is most threatened by a public too lazy and too ignorant to sustain it.

The main practical question to be faced by a story that leaves few unscathed by disdain is, will it sell?

I think so much self disdain roils beneath the surface of our national shame that, yes, it will sell. Our dissatisfaction with ourselves craves an outlet.

Friday, October 25, 2013

Dreams Are Made Of This

Insanity and magic.

That's what dreams are made of. 

When stories use insanity and magic they are dreamlike. They seduce us with surrealism.

I'm waiting for brain scientists to show movies to viewers strapped into MRI machines recording mind prints of the experience. The brain scientists will compare these with mind prints from the same subjects recorded while they were dreaming. For some titles the mind prints from movies and dreams will match.

What makes a movie gripping is the apparent illogic as it unfolds. Twists and turns defy our expectations. The story will set up expectations and then violate them. It will foreshadow strangeness and then make it happen. In the end we are left wondering what it all means, because it seemed to mean something, but the meaning is not clear. For two hours or so we are dreaming with eyes wide open. The experience echoes, just like a dream, just like a dream. 

What is insanity? What is magic? Both are ways of reconfiguring reality. In lucid dreaming just before waking we can influence the reconfiguration, sometimes sleeping through the alarm in order to continue the dream. We get to experience insanity, and exert control over it with magic, and then wake up without consequences. Just like watching a movie.

Someday it won't be the bean counters who run Hollywood, it will be the brain scientists. Test screenings won't take place in Covina, they will happen in a lab. MRI scans will be compared with a massive database of mind prints, and movies will be reedited to firm up any scene where the audience shows signs of awakening from suspended disbelief. 

There are some surprises that are still surprising even when we have become entirely familiar with them: orgasmic surprises, fully anticipated, nevertheless startling when the climax arrives; sensual surprises, always pleasurable to the touch, like petting a cat; sentimental surprises, jerking a tear we need to shed. While the memory of a dream fades rapidly, a dream redux is still a dream, always potent, ever surprising.

Psychotropic drugs induce the dream state without turning off sensory awareness. The live feed becomes a dream feed, a dangerous mix. Movies cannot accomplish this mix because the theater shuts out the rest of the world. Home viewing allows the movie to be paused when reality intrudes. Eventually, augmented reality on mobile devices will become electronically psychotropic, making the connected world a collective dream state.

In the Matrix movies, the collective dream state has been imposed upon humanity. The denizens of subterranean Zion, outside the Matrix, live a basic, sensory existence of the flesh, dreaming in their sleep. Humans within the Matrix are always asleep, subsumed by the dream construct created for them. The human travelers between are lucid dreamers. The machines and programs in the Matrix want to dream they are human or superhuman, and only in the Matrix can they fulfill that ambition. The nanites that were sown by humans to block out the sun have used that solar energy to extend the Matrix into the physical world, a nanobot ether pervading all. 

When the whole world becomes entranced by the insanity and magic of the dream state, will it sleep through the alarms heralding its demise?


Monday, October 21, 2013

When A Story, Not A Story, Is A Story

A story is not a story when scientists use the term. They should say "account," but they use the word "story."

"And then and then and then" is not a story, it is a series of events. "Once upon a time" is not a story, it is a situation. "This causes this causes this" is not a story, it is a causal chain. These all can be considered accounts, a weak synonym for story.

When someone in science tells an actual story in the literary sense, with characters and action and motivation and conflict and theme and structure, scientists call this anecdotal. Anecdotal is a pejorative term to scientists, it means that a story cannot be entered into the record as reliable evidence. All stories, even true ones, rely on some contrivance in the telling.

Thus, the semantics of science and the semantics of the humanities are skew lines, non-intersecting long after C.P. Snow observed this state of affairs.

I just read an essay by a scientist that said, the rocks have stories to tell. Scientists will take literary license like this to make a point to a lay audience, and that is permissible. Scientists permit this hyperbole because the implicit meaning of this phrase is understood: rocks provide evidence of change, and science carefully pieces together this evidence to provide a cause and effect account of change. 

An account, even when positing cause and effect, falls far short of being an actual story. Ask a teenager how she spent her day, you will get a rambling account of one activity after another, annotated with exasperation and gossip and self justifying speculation about cause and effect, but you won't get an actual story. Ask a thoughtful adult the same question and the account soon digresses into philosophical ruminations, punctuated by disjointed personal anecdotes.

You can appreciate why scientists distrust anecdotal sources as too subjective and selective. And yet, scientists claim when explaining a scientific discovery that they are telling a story, an objective story, solidly platonic, elevated above any given human storyteller, a story told by nature itself. These stories are called lectures and journal papers, hard to follow and abstract and apparently having no overarching theme except to say that this is The Way Things Are.

The Way Things Are: that is the focus of interest for scientists. They are not particularly interested in how this construct, The Way Things Are, came to be. They would not admit to scientific findings being a construct. Construct is a humanities word, a solipsistic notion that the scientific method strives to transcend. This disavowal of collective subjectivity is the institutionally unacknowledged core of modern scientific practice, even as many practicing scientists wryly acknowledge as much in personal anecdotes that will never be included in any paper submitted for publication.

An authentic story (not to be confused with a story that is true, although an authentic story can be true, but it can also be fiction) is a construct, a shameless construct, an unapologetically bold collaboration of human minds, encompassing the minds of storyteller and story receiver. 

An authentic story is an account using cause and effect to convey meaning. Even if the authentic story concludes that there is no meaning, then that is its meaning, that all is pointless. An authentic story is always meaningful, even if the meaning provides no consolation.

Science eschews meaning. I don't think it must, but traditionally, it does, because of the history and culture of science. Let the facts speak for themselves, let the rocks tell their stories, which are really accounts told by the scientists who investigate rocks, leaving themselves out of the accounting. "The way things are" should not be burdened with "the meaning of the way things are" lest science devolve back into the dark ages.

The position of science regarding what things mean is this: everyone is free to attribute their own meaning to The Way Things Are, but such musings are beside the point. Purpose is for philosophy and theology and fiction to ponder. Science investigates reality without prior assumptions of what reality is supposed to mean, and without fear of what reality compels as consequences. The facts are the facts, follow them where they lead.

I ask myself, is it possible to fashion an authentic story about scientific findings, a story that conveys meaning, that also is compatible with the detachment of science that makes it so potent?

I think it is possible. In truth, this is the way I learn science, by studying scientific findings and how they were found, and considering what this process means to me, transcendently.

Currently, I am studying paleontology, a field which in the past was mostly argumentation but which has become increasingly analytical, availing itself of computational tools. Paleontology has also become in the last quarter of a century more integrated with other earth and life sciences, as it strives to trace the tapestry of life over time. 

The vast span of geologic time, divvied up into episodes of unimaginable duration and labeled with Latinized place names where characteristic rock formations were first found, this brain-glazing ramble of global change that strains the attention span of even the most devoted student, has found a unifying paradigm over the last half century of earth science. That paradigm is Plate Tectonics.

Pretty simple when you distill it down to its essence: cold rafts of rock ride atop the billowing melt, splitting apart and colliding and annealing again in a recurring flux, changing atmospheric and oceanic flows and composition, while life evolves to exploit the opportunities thus provided, thereby changing the planet and itself still further.

Does this account have meaning? Is the Earth telling an authentic story?

I think the Earth is not so much telling as composing a story, an authentic story that challenges us to redefine what Meaning means. I think Earth's evolution invites us outside our bubble of self justifying constructs to consider Meaning from a detached perspective.

Life perseveres. Life adapts. Life invents. Life creates and destroys, devours and is reborn. Life's purpose is to go on living. Life is the fundamental product of Earth. Life is its own Meaning.

The rocks with their entombed fossils do not tell this story, we tell it. Life on Earth has no purpose other than to be, but we have a purpose as the voice of the Earth, speaking to our own kind about our inheritance and our legacy, our history and our destiny, as the most disruptive, and perhaps someday, the most healing life form on the planet.

That's quite a story.

Thursday, September 26, 2013

Evolution as Cause for Pride

If all goes well, I'll be starting a stint this winter as a volunteer interpreter in the paleontology gallery at the Natural History Museum of Utah.

It is impossible to interpret paleontology without interpreting evolution. I think science education has done a poor job of this.

The resistance to evolution is essentially psychological, not religious. Attacking Creationism and Intelligent Design doesn't get to the root cause. These are empty fabrications, easily refuted with evidence and logic, and yet they withstand all rebuttal because the fusillade of reason simply passes through them. 

Science educators mistakenly argue that science and religion are "orthogonal" (a scientized word meaning that you can't measure one with the standards of the other). This approach only reveals how antiseptic scientists strive to be, doing themselves a great disservice.

Scientists and science educators should own up to the fact that science has much in common with religion, even as it has much to distinguish it from theology. As with religion, science has passion, conviction, devotion, pride, awe, wonder, and a striving for a comprehensive understanding of existence. Unlike theology, science invites questions, refutation, paradigm revolutions, and at its best, greets the great unknown with first a shrug and then a squaring of the shoulders. What we don't know, we will strive to find out.

This is how science is done. But science as it is presented in the classroom seems like catechism, barely distinguishable from theological doctrine and therefore, a potential rival to theologies of all stripes. There are scientists who embrace this, who have made a career of debunking theology, thus feeding the impression of rivalry. 

Let me clarify my distinction between religion and theology. Religion is a set of psychological questions, theology is a set of authoritative answers to those questions. 

Science is a set of methods for finding answers so that they can be reframed as further questions. It has much to say to religious impulses but it challenges those impulses to continue their inquiry. It has very little to say to theological doctrine that provides only pat answers intolerant of continuing inquiry.

The religious impulse is much stronger than any given theology, because it derives from the root psychology of the individual. We are born to wonder, to inquire, to formulate hypotheses. We are born to feel pain, and seek salves for our anguish. We are born to desire pride in ourselves and our associations. We are born to feel small beneath a starry sky and at the same time as large as the cosmos. We are born to question who we are and why. Without these religious impulses theology would find no footing in the human psyche.

Science education can connect with these psychological drives, not by asserting itself as a rival to theology but by showing a kindly regard for human nature.

When a child or adult feels discomfort at the idea that "we evolved from apes" that unease is an expression of our fragile self esteem. To imagine oneself as simply a naked ape with primate lineage and instincts feels degrading, while to imagine oneself an exceptional creation only one step removed from angels feels uplifting. 

I think what the interpretation of evolution should say to this person needing affirmation of personal worth is this: You are the sum of all the survivors who forged the path to your existence. You have their attributes built into your genes, a genealogy of success. Making the best of that wonderful 3.5 billion year inheritance is your test in this life. The success of humanity in the eons to come depends on the choices you make now.

When an uplifting message is phrased as a challenge, I think it can set people in a forward direction. It can give them a way to earn their pride.

Tuesday, September 24, 2013

The Muse is not a Marketeer

Writers can wonder where ideas come from, but maybe a more important consideration is what the ideas stand for.

None but a blockhead ever wrote but for money, according to Samuel Johnson and Larry King. I think that is valid advice for the act of writing, but does it apply to the inspiration that prompts the writing? 

Should a writer think first about money, asking the muse for profitable ideas? Or should the writer request purposeful ideas and then dress them up to sell?

That's confidential information shared between writer and muse, isn't it? I would not decree my choice as guidance for anyone else, and I have no way of knowing if any given literary masterpiece started out as the gleam of dollar signs.

All I know is that when I try to talk myself into believing something, the contrived conviction eventually wears off. And when I try to talk myself out of something I believe inherently, the self inhibition eventually wears off. I've lived a long life experiencing a lot of wearing off.

What is true to me is what entertains me as it appears on the page, what is fun, even thrilling, to see in my mind's eye and describe with words. The muse doesn't hand me stuff ready made, it stands over my shoulder, kibitzing my own choices. 

Fun, even thrilling, for me likely would be absurd and depressing for most others. I analyze others, deriving a calculated version of how they function, which deprives me of true feelings for them but creates true feelings for the characterizations I have made of them. The best approximation I can make of Hope is to admit that I might be wrong. The best imitation of compassion I can manage is to identify with my characters' fallibility.

Thus, my Muse is not a Marketeer. It does not spoon me ideas with a promising return on investment. The ideas are born naked; it is up to me to dress them up for the party. I will sell myself through appearance, but that is packaging, not product.

If you want modernized Samuel Johnson, read any advice about script writing. The message is always about money: where to find it, how to deliver it by pandering to audiences. When I contrive to please others, I displease myself, which means that the essential talent in writing for hire is to repress one's well earned self loathing. 

And a screenwriter is a writer for hire, not necessarily starting out that way but any sale will make it that way. Either the script will be changed by others or others will insist that you make the changes yourself. Money becomes the salve for the wounds of indignation.

One of my absolute convictions in life is that audiences, whether reading or viewing, seek solace, forgiveness, salvation, self justification. They will accept truth only if it is kind. 

My work will never be popular because I think the truths worth writing about are cruel, and that the task of a truth teller is to show how one can cope with cruelty by refusing to practice it, refusing to let it corrupt one's soul, refusing to allow it dominance over the oases of beauty amidst bleak desert sands, refusing it by seeing beauty even in those desert expanses, and that the way to do this is to embrace one's pain and make it a thing of wonder, a reminder that you are alive.

Saturday, September 7, 2013

Matrix Resolution

Ready for some fun?

For all of the philosophical musings written about the Matrix movie series, I've yet to encounter any comment that resolves a fundamental issue in the science fiction premise of the story arc. In other words, how does this fictional world, on its own physical terms, actually work?

To be more honest, I've yet to encounter a fanboy who understands as well as I do what is really going on in the three worlds of the epic: the virtual reality of the Matrix itself, the Machine World on the surface, and the subterranean home of humans, Zion. These three are tied together thematically at the conclusion, but I think it is pretty clear that nobody but me and the Wachowskis understands how.

I bought a Blu-ray set of all the movies in the series, a set that includes the most important "documentation" of the science fiction premise, The Animatrix, nine animated shorts commissioned by the Wachowski siblings (now brother and sister). Without information supplied by The Animatrix, the storyline in the trilogy has gaps that only a stickler for consistency would notice. (Who, me?)

The biggest consistency gaps are 1) the true nature of "The Source" and 2) the true nature of what Neo sees with his extrasensory perception (after having his eyes charred), which he and the philosophical commentators alike take as the machines being "made of light." In this essay I will explain that the true source and the aura of light are the same, something well foreshadowed by the storyline, when The Animatrix prequel episode, "The Second Renaissance" is included.

Here's the backstory presented by "The Second Renaissance": In humanity's desperate struggle against the machines, people seed the sky with nanobots or nanites, nano-scale particle-like constructions that envelop the planet in artificial clouds, thus cutting off the machines' source of power, the sun.

This works physically but not as a strategy. The machines adapt by using capsulized human bodies as their source of power and they control human psyches by placing human consciousness in the simulated world of The Matrix.

The nanites are the central actors in this model, even though they are seldom mentioned in either the movies or the animated short that explains the "scorching of the sky".

My contention (I'm being modest about what I intuitively know to be fundamental truth) is that over the centuries since their introduction, the nanites evolved to pervade the planet. They are the true source of connectivity, the means by which Neo can exert his virtual reality powers in the physical world. They are what Neo sees in Machine City as a golden aura, a flow of nanites enveloping all machinery and programs. Their power is as great as the sunlight bathing the outer edges of the atmosphere, and they can connect all partitions that the machines have made of the planet.

When Neo zaps the attacking sentinels in the physical world, falling into a coma while his psyche awakens in a limbo way station (anagrammed as Mobil Station) between the Machine World and the Matrix, he taps into this nanite ether for his zap power and for his psychic transport to Mobil Station, which is not actually in the Matrix proper and thus not reachable by jacking in. The Trainman engineered this underground railway by programming nanite presence without realizing the underpinnings of what he was doing.

It is clear throughout that neither the humans nor the machines (even in program form as uber artificial intelligences, such as the Oracle or the Architect) understand what is really going on at the highest level. Both species are equal in their unawareness of nanite influence. They both think their reality is all there is.

I guess the ultimate revelation that they are connected by a nanite ether will have to come for both in the fourth installment, which I would title,

Matrix: Resolution

I'd write it, but the Wachowski siblings own the franchise.

Tuesday, September 3, 2013

Nearly Ideal Immersion

A movie maker should desire, first and foremost, to tell a good story. Immersive technology may be a means to that end, or it may be a distraction, or it might require a very different approach to storytelling.

The ideal immersion described in the earlier post took as the measure of perfection the achievement of total presence, the viewer's sense of being physically in the scene. The trade-off for achieving total presence, however, is the loss of every cinema device currently used to tell a story visually.

I experienced a state-of-the-art demonstration of total presence immersion, Nonny de la Peña's Immersive Journalism project, "Hunger In Los Angeles", at Sundance 2012. The Virtual Reality head tracking goggles, an early precursor to Palmer Luckey's Oculus Rift device now nearing consumer release, provided a convincing, slightly pixelated stereoscopic view without peripheral vision. The virtual world was a recreation of an actual event, using an actual audio recording of it, situated in a computer graphics scene recreating the actual physical location, populated by motion capture avatars of actors portraying the actual people who were there. The whole idea was to simulate an eyewitness experience that allowed the user to bear witness from any point of view.

The demo was quite effective at showing the promise of the technology. I felt simultaneously immersed and reflective, imagining myself as a news cameraman trying to shoot the best view of the action, feeling complete personal detachment from the events unfolding. I'm not sure what the demo hoped to evoke in viewers, but what it brought out in me was the desire to shoot the scene well. I would NOT have reacted this way to the actual event had I been there when it happened; I would have improvised a pillow under the head of the man who fell to the sidewalk during a diabetic seizure caused by hunger while waiting in a food line. But these were virtual actors, not people, and so I responded cinematically.

I reflected upon that occasion a long while afterward. It wasn't storytelling, it was event simulation, the story of which was up to the person vicariously experiencing it. I had created a story as recorded by a news cameraman looking for good coverage and cinematic composition. Other people created stories of themselves as powerless and invisible, unable to influence the events unfolding. The story as perceived was entirely in the augmented mind's eye of the viewer.

"Ideal Immersion", defined as the achievement of total user presence in the scene, is not a storytelling medium, it is an experience medium. There are no aspect ratio choices, no lens focal lengths, no camera dollies, no post production effects, no edits. The cinema storyteller is the eliminated middleman, making way for user stories.

By contrast, "Nearly Ideal Immersion" isn't an oxymoron, it isn't a lessening of an absolute. Rather, it requires a shift in emphasis from a simulation of physical presence in the scene, to the creation of an emotional presence within the story, using immersive technology. This means that the cinematic storyteller retains traditional control, with constraints, not only over the action but also over how it is shown, and that the viewer has to make a personal commitment in order to experience this immersively.

In the next installment in this series of posts on immersive cinema, I'll discuss the trade-offs for achieving balance between physical and emotional immersive presence, and the technologies for doing so.

Sunday, September 1, 2013

Visual Argumentation

We all know how to argue with words. It could be argued that argument is the most common form of communication.

Is it possible to make an argument with images so skillfully that the viewer is unaware an argument is even being made?

Yes. The name for visual argumentation in cinema is montage, and it works so well because images in sequence create an incontestable logic of their own.

Any verbal assertion, no matter how well it is backed by fact and logic, is subject to the contentiousness inherent in language. We can define our terms in opposition, we can interpret "facts" differently, we can twist what others say contrarily as if they were proving our own point.

Images are different. They are even processed separately, in the half brain opposite the verbal half. Images in succession don't follow logic, they create logic by establishing an associative chain that invites careless inference and defies careful analysis. Rebuttal of a montage with words is always a belabored point, and therefore weak. A succession of images goes by fast enough to reach its conclusion before we have time to think verbally about what we are seeing.

We filter words by what we wish to hear. To some degree we filter images by what we want to see, but visual preconceptions are vague and easily superseded by fresh images. With words, to immediately discard what you previously believed in favor of what is now being said is a sure sign of mental illness. And yet, this is exactly the way we process images, tossing out provisional expectations to make way for current sights. If someone doesn't look as we imagined from previously hearing only their voice, the sight of them is immediately definitive and our expectations are forgotten. Seeing is believing; our visual thinking is structured to alter our beliefs in the blink of a film edit.

We have no visual language to rebut what we see, we can only choose to doubt it or to not see it at all. We cannot readily challenge pictures we view with contrary pictures of our own, and even if we took the trouble, one selective truth told by a camera does not necessarily rebut another selective truth told by another camera. More likely, the audience accepts both as alternate views.

All this is grist for making movies by editing sequences as montages. Using my own terminology, let me distinguish an action montage from an association montage. The difference between the two is not always a sharp boundary, even though they are distinct forms.

An action montage simply jumps from one point of view to another as the action unfolds continuously. The action can be as ordinary as a conversation or as frantic as a high speed car chase. In both cases, the action montage leaps from one camera position to the next as the action maintains space-time continuity. This succession of compositional framings can, in the hands of a master, manipulate the mood of the audience as they experience the scene. Viewed with detachment, an action montage is an argument that the diverse shots pieced together chronicle continuity from multiple vantage points. This contention is, of course, a bald-faced lie.

An association montage, what Eisenstein created for his movie Battleship Potemkin when he established the method, shows in succession disparate images that the audience melds into an integral scene. There need not be a sense of jumping from viewpoint to viewpoint, but rather, an impression of connection between what the images show or symbolize. The holistic effect of an association montage is psychological, a cognitive and emotional summation. Again, from a detached perspective, an association montage is an argument that a visual experience consists of many facets with many simultaneous levels of meaning. This contention is, in general, true and, in particular, highly subjective.

In summary, visual argumentation gains its force from logic defined on its own terms, through associations created in sequence. It can lie, of course, but perhaps its ultimate power is that the audience knowingly submits to the lie, accepting its influence over them.

Natural Scenes and Digital Fakes

The eye that knows the land notices nature's marks in the patterns of topography mantled with vegetation. 

A landscape is like a rumpled bedspread midway through a fitful night. The knowledgeable observer sees in cliffs and swales, forests and meadows, canyons and bajadas, not a creation frozen in final fulfillment, but rather, a process with much tossing and turning yet in store.

Digital forgeries of natural landscapes luxuriate in cloned and fractal and L-system complexity, bearing only the mark of iterative algorithms. Avatar's Pandora was seeded by zeros and ones in a nursery of RAM. There is no story to be read in the marks of nature where nature had no hand.


When the Wieslander Survey crews clambered over the California landscape in the 1930s, their trained eyes observed not only the vegetation types but also the telltale signs of old burns healing. The Yellowstone fires of 1988 revealed fire patterns and subsequent regrowth as a complex mosaic of natural history. A hiker ascending a Sierra trail scarcely notices lodgepole pines in thickets where John Muir had remarked upon the open glades tended by frequent low intensity fires.



Wieslander Vegetative Type Map for South Lake Tahoe, 1930s

Muir particularly noted the signs of glaciers. In the grooves and polish of granite, in the U-shaped profile of valleys, in the isolated boulders perched upon ridges, he saw a landscape hewn by massive moving ice. Muir was a world class noticer of things in their place and things out of place.

Movie audiences are not so particular. They do not object to pretty waterfalls with no watersheds to feed them (The Hobbit), or to mountain ranges with no faults or folding (LOTR). Even with photography of real places, few viewers object to a Monument Valley drive-in theater just down the road from a California suburb (Back to the Future III) or to slaves shuffling in chains over the Eastern Sierra on their way to Texas (Django Unchained). They accept shallow verisimilitude and impossible juxtapositions as plausible because they are used to scenery as backdrop, not as story.

To the lover of living landscapes, scenery is a tangled story inviting unravelment. Was there ever a fiction movie that showed place authentically? Better still, was there ever a movie where Place served as a main character, in the manner of Thomas Hardy's heath in his novel, The Return of the Native?

We like to think in our vanity that a landscape is merely a stage upon which humans perform unbound fancies. But in truth, we are so bound to the land that its fate is our own. What is to become of a society whose entertainments are estranged from its realities?

Saturday, August 17, 2013

Ideal Immersion

What will immersive technology look like in the future?

Before considering the options, let me define immersive in terms of technical presentation rather than subjective perception. This caveat bypasses the issue of user commitment to immersion by assuming that if the technical parameters are adequate, an immersive experience is assured.

With this in mind, the ideal, if unattainable, immersive technology would present an experience identical to physical presence.  We have two fictional versions of this, the Holodeck and the Matrix. 

What are the audio-visual technical parameters for these two imaginary immersive technologies that practical technologies can strive to approximate?
  1. Images and sound are completely surrounding. There is no sense of looking through a window or at a screen.
  2. The resolution limit is set by the user's own visual acuity.
  3. The viewing position is user defined and continuous. A shift or tilt of the head presents a different point of view and an accompanying change in depth perception.
  4. Audio visual directionality is continuously correlated with where the user is facing and where the user moves. A head turn or body repositioning reorients the sensory input accordingly.
The current technology that comes closest to these four ideal indices is Virtual Reality (VR) viewed through a Head Mounted Display (HMD).  Wearing an HMD device introduces tactile and kinesthetic sensory distractions, and the field of view of any optical design will restrict peripheral vision. The content must be provided as a volumetric database visualized by the device with no lag time. Nevertheless, VR HMD is a very active commercial research topic, driven by the large market for first person Point-of-View (POV) computer games.

The next closest technology exists only as a dream, but is theoretically attainable: a cyclorama consisting of an exceedingly fine micro lens array. To picture how this would work, imagine standing in a space surrounded by door screening or window screening material, the kind of mesh that keeps out bugs, configured into a dome or a cylindrical wall. Now imagine each one of the spaces in the mesh is actually a tiny lens that projects only a part of its particular image according to the position of the viewer. This would simulate the effect of light from objects beyond the mesh screen passing through the spaces in the mesh. Taken together, this micro lens array could create a light field, which would appear much like a hologram. 

The computational complexity for this approach would be daunting, in effect, requiring the production of a tiny but high-resolution image corresponding to a hemispheric view for each micro lens in the array. There is no question this would work when the component technologies catch up to the dream, but it will always be inefficient, producing at any moment far more imagery than the eyes are perceiving.
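To give a feel for just how daunting, here is a rough back-of-envelope estimate. Every parameter below is an illustrative assumption invented for the sketch (lens pitch, wall dimensions, per-lens resolution, refresh rate), not a figure from any real design:

```python
# Back-of-envelope estimate of the rendering load for a hypothetical
# Massive Micro Lens Array (MMLA) cyclorama. All parameters are
# illustrative assumptions, not drawn from any actual system.
import math

lens_pitch_mm = 1.0          # assumed spacing between micro lenses (window-screen mesh scale)
wall_radius_m = 2.0          # assumed radius of the cylindrical viewing wall
wall_height_m = 2.5          # assumed height of the wall
per_lens_pixels = 512 * 512  # assumed resolution of each lens's hemispheric micro-image
frame_rate_hz = 60           # assumed refresh rate

# Number of lenses tiling the cylindrical wall surface
wall_area_mm2 = (2 * math.pi * wall_radius_m * wall_height_m) * 1e6
num_lenses = wall_area_mm2 / (lens_pitch_mm ** 2)

# Pixels that must be computed per frame and per second
pixels_per_frame = num_lenses * per_lens_pixels
pixels_per_second = pixels_per_frame * frame_rate_hz

print(f"lenses:            {num_lenses:,.0f}")
print(f"pixels per frame:  {pixels_per_frame:,.0f}")
print(f"pixels per second: {pixels_per_second:,.0f}")
```

Even with these modest assumptions, the wall carries tens of millions of lenses and demands trillions of pixels per frame, while human vision resolves only a few megapixels at any instant: the inefficiency is built in.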

It does, however, pose the potential for live action capture, with the "camera" being a micro lens array with the same configuration as the viewing array. Incident light would be captured at the recording side of the camera array surface, and its path regenerated on the projection side of the matching viewing array surface. This is synthetic holography, using sensors and digital encoding to simulate what coherent light does with wave phase interference and reconstruction. 

Apple has a patent for a clever solution that would reduce the computational requirement for a Massive Micro Lens Array (MMLA). In Apple's design, the lenses are convex micro mirrors that redirect light from a projection laser according to where the viewer's eyes are located, as tracked remotely through face recognition. This is somewhat like Digital Light Projection (DLP) except that the micro mirrors are stationary and the laser movements make all the positioning adjustments. Again, an anticipatory idea that current optics, mechanics and sensors can implement only crudely.

After VR HMD devices and the highly theoretical Massive Micro Lens Array technology, the contenders for the immersive technology of the future all utilize matched image pairs for Stereo 3D (S3D).  I call them "nearly ideal" because they lock the viewer's eyes into the POV choices made during content creation, affording the audience less of the exploratory interaction that contributes to an immersive experience. In other words, a viewer can't look around things, can't move relative to things, but rather, the viewer must passively accept camera and editing choices.

This constraint is not so much a loss as it is a trade-off. In a later post I'll cover Nearly Ideal Immersion and its advantages for storytelling.





Thursday, August 15, 2013

Easter Egg Ironies

In some movies there's the ending everybody wants to believe, and there's the ending the movie makers insert like an inconspicuous Easter egg, in the tradition of computer games. I'll bet you missed the concealed ironic twists below.

The Prestige: As the Christian Bale character walks away from the scene after murdering the Hugh Jackman character, another Hugh Jackman character in the shadows is shown briefly, hiding in one of the water chambers. He had duplicated himself in expectation of being killed.

Terminator 2: The "good" terminator battles the liquid metal terminator, who immobilizes the good terminator by trapping his arm in machinery. When Sarah Connor is about to be killed by the liquid metal terminator, the good terminator appears, a stump where his arm used to be and a grenade launcher in his other arm. After he shoots the liquid metal terminator and it falls into the vat of molten metal, destroying it, the good terminator takes the cyborg hand the previous terminator in movie #1 had left behind, and lowers himself with the hand into the molten metal. However, his own arm, the one he had just torn off, is not with him. Thus the second movie ends just as the first one did, with terminator technology left behind to reverse engineer and bring about the rise of the machines. Later movies in the series were not directed by James Cameron, and did not capitalize on this concealed irony.

12 Monkeys: At the conclusion, when the mad scientist finds his seat on the plane, one of the scientists from the future is already in the adjacent seat. The mad scientist asks her what her business is, and she says, "Insurance." It is made clear at that point that the Bruce Willis character, at that very moment shot dead by airport security, has been set up from the beginning to instigate the gun scene in the airport as a diversion, thus facilitating the mad scientist's dash to the plane when security was distracted. The scientists from the future were presumed to be averting the global pandemic that created their timeline, caused by the engineered virus sown by the mad scientist in the journey around the world upon which he was embarking. The ending made clear, however, that the scientists from the future were "insuring" that the mad scientist carried out his mission, thus setting into motion the events that resulted in their world.

I have never come across any references to these three covert endings. The movie critics certainly missed them entirely. A movie's structure and momentum carry it to the conclusion people prefer, and the audience willfully fails to notice anything contradictory.

Ron Howard's commentary on the "A Beautiful Mind" disc remarks repeatedly how test audiences resisted any reveal that the Russell Crowe character, based on John Nash, was suffering schizophrenic delusions. There were several reveals in succession, but many in the audience still held out for the possibility that he really wasn't nuts and that the government really had masterminded a coverup. The final reveal was beyond anyone's power of denial, when the character says to his wife that she can't see his friend because he is wearing an invisibility cloak. (The movie was released in 2001, the same year as the first Harry Potter movie, which introduced Harry's invisibility cloak.)

Many people with me in the audience for a showing of Shutter Island exited the theater looking dazed. In a very rare display of public post-movie discourse, they gathered in little groups trying to figure out what happened at the end. The ending isn't confusing at all as story logic, but the audience could not accept what they had witnessed, and they had no alternative explanation. Leonardo DiCaprio's character clearly decided that without his mental illness to protect him from the truth, he would rather have a lobotomy than suffer horrible memories. The audience when I saw the movie could not accept this, and thus chose to be confused rather than suffer a horrible truth. I thought Scorsese very clever to create for the audience a fate parallel to the fate of the main character.

Stage magic uses diversion to fool the eye, but movies can use clear logic and self evident visuals to inform the eye, and the audience will still insist on fooling itself. The ultimate irony is that audiences see what they want to see, not necessarily what the storyteller presents to them.

Monday, August 12, 2013

Movies as Lucid Dreams

The critical, business and celebrity buzz about movies obscures their deep psychological influence over the way we think after seeing them.

For all of my own analytical detachment, I can be extremely influenced for minutes, hours or even days afterward. This is not so much an emotional effect as a cognitive one. The experience of the movie creates a temporary lens through which I interpret the world, a lot like the minutes after waking up from a lucid dream.

The fuzzy pseudo-logic of lucid dreams can pervade one's waking moments all the way to morning coffee. The after effect of certain movies, not all of them, lingers similarly.

I just watched "A Beautiful Mind" on Blu-ray for the first time. When movies are viewed into the wee hours in a dark room when one is sleepy, they can become very much like dreams, evoking similar reactions. I found myself wondering at the end how much of my own life is purely imagined, and I even reread my email and blog posts as a reality check. No, I hadn't created imaginary characters or events, but for some minutes the prospect of this worried me.

Movies gain their power, and suffer their limitations, from their similarity to the dream state. They can influence us greatly but the effect fades. If that fading happens rapidly, the most appropriate response is to laugh at oneself.

For some people and some movies, the effect lingers as a personal, recurrent reference point. Do they risk, like Inception's Cobb, becoming stuck in the dream, a moment that lasts forever, a sleep from which they never awake?

Climate Change: The Movie

A fictional movie about climate change greatly exaggerated the science, but the ploy didn't work; society wasn't scared by special effects into doing the right thing.

A documentary movie about climate change used reason and data charts, to much acclaim but for little result. The truth of climate change is still inconvenient.

What a movie about climate change has not tried is reverse psychology, which just might do the job, not on believers but on the deniers.

Here's the formula: 1) Choose which audience you want to scare; 2) Appeal to the fear they hold most dear.

The people who understand that climate change is real—that the disappearance of summer arctic ice, the acidification of the oceans and the release of methane from melting permafrost are all world class catastrophes—these rational, informed people are not the audience a fictional movie should address. They are plenty scared already, for good reason.

By contrast, the smug confidence of climate deniers will never be shaken by the evidence and logic of a careful documentary. Nor will a fictional movie ever make them feel afraid of an actual threat.

No, conservative climate deniers would much rather be afraid of things that don't exist, and they reveal these considerable fears in their attacks on climate science. That vulnerability, the conservative bubble of delusionary paranoia, should be the target of a climate change movie.

Deniers say they are afraid that the scientific community is joined in a conspiracy so vast that all the mounting evidence from field expeditions, satellite monitoring and direct measurements are completely confabulated, a lie breathtaking in scope being perpetrated by the smartest people in society. There's plot point number 1.

Deniers say they are afraid that liberals wish to use climate change as an excuse to expand government power into every aspect of our lives, controlling our behavior and restricting our marketplace freedoms. There's plot point number 2.

So, here's the story premise that exploits these fears:

Conservative opposition successfully forestalls effective action on climate change when a seemingly liberal senator flips his position on the issue and joins them, mouthing the rhetoric of plot points 1 and 2.

Soon, major disaster strikes. Frozen methane deposits in the deep ocean suddenly belch immense quantities of the gas into the atmosphere. The Ross Ice Shelf breaks up, thus making way for rapid glacier movement from the Antarctic highlands into the ocean, raising sea levels dramatically. Ocean acidification wipes out entire food chains, devastating global fisheries. Major ocean current circulation systems simply shut down, radically altering local climates.

The once liberal senator now running for president suddenly flips again. He says these catastrophes can all be blamed on the conservative climate deniers, and that the nation needs to punish them for betraying the national interest.

As conservatives see their most treasured fear coming true, they realize they've been tricked into serving as scapegoats. The liberal senator was setting them up to take the fall. Climate change was never in doubt and the denial enablers would most certainly be blamed for making it worse.

The conservatives, from religious right to neocon, realize too late that they were on the wrong side of an issue that should have been theirs. They could have used climate change as an excuse for asserting global power aggressively to suit American business interests. They could have justified, in the name of saving God's Creation, the conquest of all other nations.

They witness a liberal president living out the conservative wet dream, preaching and implementing the gospel of green imperialism, and realize their own traditional strong hand has been taken away from them, even as they face trials for treason and calls for the death penalty.

Yes, a terribly paranoid scenario, in which climate change plays a secondary part. What conservatives fear most is their oppression by liberal policies, and their foolish opposition to dealing with climate change dooms them to that very fate.

If a movie about climate change weren't really about climate change, but instead about the conservative fear of liberal power reinforced by climate change consequences, then conservatives might start thinking about doing the right thing, albeit for the wrong reason.

Reverse psychology doesn't try to make people afraid, it leverages the fear that already exists by showing how people can bring their own worst nightmares upon themselves.


Sunday, August 11, 2013

Claws, Jaws, & Maws: Cinema Monsters

Specific attributes seem obligatory in the design of modern cinema monsters, particularly, the accoutrements signifying a desire to devour human flesh.

In this modern era, when large predators like tigers, grizzly bears and even sharks are being driven to extinction, our apprehensions remain insistently primal. Whether the manifestation comes from outer space, the abyssal depths, or the lab of a mad scientist, the fate we prefer to fear is consistent: we are afraid of being eaten alive.

As with T. Rex, the fiercest will hunt us down in our nightmares long after they have vanished from Earth.

The danger of becoming a meal is no longer instilled in us by the fables we are told as children. The persistence of this visceral fear despite the absence of everyday cultural reinforcement suggests it is hard wired into our being, a genetic trait that once served us well because it kept us alert.

Other movie monsters exploit a considerable fear we each possess: the fear of our own kind. Vampires and zombies and faceless serial killers are but stand-ins for the mindless masses and malevolent loners living in our midst, who could turn on us without provocation. The primal fear in this case stems from a human predisposition toward paranoia. Instinctive xenophobia is particularly rampant in an anonymous society where anybody, even a family member, can become a malevolent stranger who was only pretending to be a friend.

Of course, we have tamed vampires and zombies with soap opera characterization, because we are not truly afraid of the mindless masses and malevolent loners, we just resent having to accommodate them during commute hour.

Despite the pacification of the planet, there are yet things roving the real world that wrap around our spines with an icy grip. The one true mortal threat that we fear most is so scary that it has never, to my knowledge, been portrayed as a movie monster. This is the horror of our own cells commandeered to kill us, the inscrutable threat of Cancer.

I thought about what Cancer might look like as a movie villain. The claw and fang cliches are boring. I am not professionally skilled at creature creation with 3D modeling programs, so I offer a sketch combining description with a conceptual image.


Imagine the Cancer Sisters, Maligna and Metasta. Maligna is a faceless, amorphous medusa head extending long tendrils outward in all directions. Metasta is a volumetric Mandelbrot beetle swarm hovering to Maligna's side. They are both glossy black, reflecting everything around them, showing no color of their own save a malevolent aura. When they move in for the kill, the victim is unaware, but soon the victim suffers the full range of cancer symptoms either compressed into the span of moments, or drawn out over months, at the Sisters' cruel discretion.

If you don't get a chill imagining this, if you don't fear superstitiously that even thinking about such things runs the risk of making them true, then you are made of stronger stuff than most of the human race.

Maligna and Metasta are not likely to appear at the multiplex. They are so scary no one would want to face them as phantoms on the screen. It is frightful enough that we face them when loved ones die, or the doctor brings us bad news.

We reserve our tame cinema fears for beasts with the predator characteristics we are vanquishing from the planet, as a kind of justification for our ignorant, thoughtless, relentless, selfish behavior: extermination as self defense.

The truly frightening threats in real life, all of our own creation now, are too scary for entertainment.

Friday, August 9, 2013

The Surface of the Thing is the Thing?

Nearly everything we call a thing has a surface that bounds its existence. The surface is what we see, the surface is what we touch and hold. The surface contains the essence of the thing. Beyond the surface a thing can have a certain range of action, impinging on other things with defining surfaces of their own.

If there is no surface, as with an evanescent cloud or transient musical sounds, we call these not things, but phenomena.

So what are we to make of this computer graphic rendering I created six years ago?


There are three things shown at different scales, the thing in the lower right being a part of the thing in the upper left, and the thing in the middle being a part of the thing in the lower right.

The thing in the upper left is a model I made of Bacteriophage T4, a virus of E. coli bacteria made famous by its use in many laboratories for many purposes.

The thing in the lower right is the baseplate of Bacteriophage T4, a mechanism that grabs a bacterium cell when the long fibers seen in the upper left are triggered by touching it.

The thing in the middle, looking evil and potent, is the needle structure that penetrates the cell wall of the bacterium, injecting into the bacterium the virus DNA, thus commandeering the bacterium cell to make hundreds of new virus copies.

A portrait of a weapon of mass destruction at the nanometer scale, disassembled into components like a field artillery piece. 

The Bacteriophage T4 is about 200 nanometers long. The shortest wavelength of visible light is 400 nanometers. Therefore, this picture could not be taken with a camera. In fact, it is not a picture at all, it is a rendering of data.

You could say that electron beams were used to gather the data, but not in any photographic sense. Thousands of frozen samples of Bacteriophage T4 were exposed to electron bombardment, being destroyed in the process. The electrons were deflected according to the density of electrons in the atoms of the proteins making up the virus. Sensors surrounding the sub-microscopic virus registered these deflections.

Each of the individual T4 viruses was too small to arrange with any particular orientation. But the overall symmetry was known, and the top was distinguishable from the bottom. So the deflection data was converted by computation into a density volume fitted to be congruent with the expected symmetry.

In truth, the virus in its entirety was never examined this way. Instead, each major component, with five-fold or six-fold symmetry, was turned into data using this method, Cryo Electron Microscopy. These components were fitted together by scale and structure to make a model of the whole virus. The volumetric density data was produced by the Rossmann Lab at Purdue University. The long tail fibers were imaged by transmission electron microscopy.

I took that data, all publicly available, and used various software applications to turn the volumetric densities into geometric meshes. The meshes represent values where there is a sharp drop off in density, in other words, they represent surfaces.

But a surface of what? Think of what you see in this rendering as a shrink wrap of a complex compound of protein molecules. The atoms of the molecules are only hinted at in the surface shown, because the resolution of the method does not tease out distinct atoms. Instead, the structure of a given protein is synthesized in computer graphic programs and fitted into this shrink wrap to make sure it is accurate.
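The step of turning a density drop-off into a mesh is an isosurface extraction. Real pipelines use an algorithm like marching cubes (available, for example, in scikit-image) to produce a triangle mesh; the toy sketch below, with a made-up spherical density field, only locates the shell of voxels where the density crosses a chosen iso level, which is the conceptual core of the "shrink wrap."

```python
import numpy as np

# Synthetic density volume: a fuzzy blob, densest at the center,
# standing in for a cryo-EM density map. All values here are
# illustrative, not real T4 data.
n = 32
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
density = np.exp(-4.0 * (x**2 + y**2 + z**2))  # falls off with radius

iso = 0.5                  # the "sharp drop off" level for the surface
inside = density >= iso

# A voxel is on the surface if it is inside but touches an outside
# neighbor along any axis; real mesh extraction would interpolate
# triangles here instead of collecting voxels.
surface = np.zeros_like(inside)
for axis in range(3):
    for shift in (-1, 1):
        surface |= inside & ~np.roll(inside, shift, axis=axis)

pts = np.argwhere(surface)            # (N, 3) voxel indices of the shell
coords = (pts / (n - 1)) * 2.0 - 1.0  # back to the [-1, 1] grid
r = np.linalg.norm(coords, axis=1)
print(pts.shape[0], "surface voxels, mean radius", round(float(r.mean()), 2))
```

Since exp(-4r²) = 0.5 at r ≈ 0.42, the extracted shell hugs that radius, just as the rendered mesh hugs the density drop-off around the protein assembly.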

The rendering does not show these molecules themselves, but it does show the structural arrangement of the molecules in the needle. The ribbons are symbols of molecules with their atoms arranged in chains. The surface portrayal of the ribbons is purely arbitrary; they could as well be shown flat rather than plump.

Mostly absent from the data are the water molecules embedded in the structure. The water molecules might play essential roles in the adhesion of component parts and the mechanics of the components when the virus is triggered to penetrate the bacterium cell wall, but this is not known and not well studied.

What we don't see at all are the surrounding water molecules. A water molecule is not spherical, but it would just fit into a sphere two tenths of a nanometer in diameter. This means that a 200 nanometer high Bacteriophage T4 is to a 0.2 nanometer wide water molecule as a 12 foot high model of T4 is to a BB, or small shotgun pellet. A twelve foot high model would be about eighteen million times life size.
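A quick back-of-the-envelope check of that analogy (the virus and water molecule sizes are from the text; treating a BB as a few millimeters across is my assumption):

```python
# Scale check: a 200 nm virus next to a 0.2 nm water molecule has the
# same 1000:1 size ratio as a 12 ft model next to a BB-sized pellet.
nm_per_m = 1e9
virus_nm = 200.0
water_nm = 0.2
model_m = 12 * 0.3048              # 12 feet in meters

ratio = virus_nm / water_nm        # 1000:1
scale = (model_m * nm_per_m) / virus_nm   # magnification of the model
pellet_m = model_m / ratio         # what the water molecule scales up to

print(f"virus:water ratio = {ratio:.0f}")
print(f"model magnification ~ {scale / 1e6:.0f} million times life size")
print(f"scaled-up water molecule ~ {pellet_m * 1000:.1f} mm across")
```

The scaled-up water molecule comes out at about 3.7 mm, close to a standard 4.5 mm BB, and the 12 foot model works out to roughly eighteen million times life size.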


Water molecules vibrate and move violently. The effect of this pummeling on microscopic things in the water is called Brownian Motion. A T4 virus doesn't float around serenely in a placid fluid, rather, it is whacked around vigorously and constantly. Imagine yourself in a vat of violently moving shotgun pellets.

So look at the image again and think about what it shows. The surface of the thing is not the thing at all, it is a representation of data constructed not by optical imaging but by computation. The most significant aspects of the thing, like its structure, can only be shown symbolically. The environment of the thing isn't shown at all, and yet the whole apparatus that we call a virus must function within that environment.

And for all of the exquisite detail revealed by scientific visualization, we still don't know very much about how this thing, not so much a creature as a protein robot, makes its way in the wild. 

Bacteriophages. There are more of their kind, by individual count, than any other biological entity on earth.

Thursday, August 8, 2013

Keeping Things In Perspective

Movies in 2D manipulate perspective, not depth per se.

The latest Star Trek movie 3D version was converted from the 2D master, so I watched it to see what marvelous or abhorrent transformation would be wrought by a high quality synthetic conversion.

Turns out, the retrofitting of 3D onto a 2D conceptualization doesn't do much of anything. J.J. Abrams is a 2D director, that is his cinematic eye. The 3D from the conversion is a presence without significance, in most cases depth without roundness, adding nothing at all and in some scenes, detracting.

This is a well crafted movie, with many examples of creative lens choices. A private tête-à-tête between Kirk and Spock, conducted on the command deck with crew all around, is made confidential by the use of a long lens that foreshortens the distance between the two heads in a tight shot. The camera had to have been far away to make the two actors seem so close together. Their faces are flattened, enhancing the guardedness of their expressions. But in 3D we are looking at the back of one head as a blur on a plane, and the front of the other head as a picture on a billboard.

Depth in the action montages is scarcely noticeable, although realistic. Quick cutting unavoidably slices and dices any perceivable volume into nothingness.

I am a devotee of stereo 3D used to tell a story, but this movie demonstrates that adding stereo 3D as a special effect afterthought is a wasted effort.

What is the heart of the problem? I don't think the issue is conversion technology, which can be quite good with adequate time and budget. After viewing many converted 2D movies, as well as numerous CG features originated as stereo 3D, I've concluded that the use of perspective in 2D movies, and the use of Z depth in stereo 3D movies, are antithetical to each other. They clash when employed together.

In 2D photography the combination of different lenses is a combination of different perspective croppings. The longer lenses employ foreshortening for certain dramatic effects. The wider lenses utilize vanishing point perspective for other dramatic effects. The projection of perspective onto a flat image is the artistic medium for 2D, geometrically.

Perspective does not show depth so much as it creates an impression of distances between objects as measured away from the camera. There are many other depth cues in a 2D image, such as the play of light across surfaces. Painting and cinema have established conventions that allow us to interpret the use of perspective and lighting as indicators of depth, and many movie sets since Casablanca have used these conventions in order to "cheat" a shot so that objects look more distant than they really were on set.

You could not fool the eye with that kind of cheat in stereo 3D. Why not? True depth perception is the product of parallax, not perspective. You can see depth in real life with only one eye open if you bob your head side to side, which is what some cats do to better gauge depth with a wider de facto eye separation. The side to side shift differentiates the depth of objects according to how much they move relative to each other, revealing the parallax differences between adjacent points of view.

Parallax shift is more apparent when seen from wide angle vanishing point perspectives. It is scarcely noticeable at practical lens separations for telephoto lenses. Look through binoculars at the tree trunks in a forest. You know that tree trunks are cylindrically round but through the binoculars they appear flat. The depth from nearest to farthest in view is apparent, but not the roundness of shapes.

Take your binoculars to the nearest railroad line and stand on the tracks, looking along them. This is a good demonstration of the difference between depth and perspective, for with 8x power binoculars, perspective lines scarcely converge to a vanishing point, and yet, you can see the flattened depth planes that are characteristic of binoculars. You could see more roundness if the binocular lenses were very widely separated, in the manner of old style gunnery binoculars.
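The flattening can be put in numbers. The parallax angle a stereo baseline subtends at a given distance is 2·atan(b/2z), and it is the difference in that angle between near and far objects that registers as depth. The baseline and distances below are illustrative values I chose, not figures from the text:

```python
import math

def parallax_deg(baseline_m: float, distance_m: float) -> float:
    """Angle (degrees) the stereo baseline subtends at a given distance."""
    return math.degrees(2 * math.atan(baseline_m / (2 * distance_m)))

b = 0.065  # roughly a human interocular distance, in meters

# Depth signal between an object at 2 m and one at 4 m:
delta = parallax_deg(b, 2.0) - parallax_deg(b, 4.0)
print(f"parallax difference, 2 m vs 4 m: {delta:.2f} degrees")

# The same pair of objects, ten times farther away, as a long lens
# framing invites; the depth signal shrinks about tenfold:
delta_far = parallax_deg(b, 20.0) - parallax_deg(b, 40.0)
print(f"parallax difference, 20 m vs 40 m: {delta_far:.3f} degrees")
```

This is why distant subjects shot through long lenses look like flat cutouts at different depths: the relative parallax between their surfaces has collapsed, and only a wider baseline, like the gunnery binoculars, restores roundness.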

However, unless one is simulating the view through binoculars, for good stereo 3D, long lenses should be avoided. It would be possible to make an excellent stereo 3D movie with only a matched pair of fixed focal length wide angle lenses, about 53 to 58 degrees horizontal angle of view (about 24 mm to 28 mm on a super-35 sized sensor). What a 2D director of photography does by varying focal length and camera placement in a motif set up for quick cuts, a 3D director of photography should do by varying camera separation (the stereo baseline) and camera rig placement in a motif set up for long takes on a mobile platform. This is the difference between manipulating perspective and manipulating depth.
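The relation between focal length and horizontal angle of view is hfov = 2·atan(w/2f), where w is the sensor width. The 24.9 mm gate below is an assumed figure for super-35; actual gate widths vary by a millimeter or two between cameras, which shifts the computed angles by a few degrees:

```python
import math

def hfov_deg(focal_mm: float, sensor_width_mm: float = 24.9) -> float:
    """Horizontal angle of view, in degrees, for a given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (24, 28):
    print(f"{f} mm lens -> {hfov_deg(f):.0f} degrees horizontal")
```

With this gate the 24 mm to 28 mm range works out to roughly 48 to 55 degrees; published angle tables land in the same neighborhood, differing with the exact gate assumed.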

Good 2D movie makers should just stay away from stereo 3D, and continue to do what they do, without dilution. Those who aspire to be good stereo 3D movie makers should abandon most of what is written in books about 2D photography. They should also study established stereo 3D techniques with detachment, feeling free to discover new variations on the old verities through brave experimentation.

The great stereo 3D productions are yet to come, and they will be created by young people who, because they were never 2D lensmen, have no set of best practices to forget in order to invent a new set of best practices for a fundamentally different medium.

Manipulate perspective or manipulate depth, make your choice. Used together, each detracts from the other.