Posts Tagged 'Christopher Nolan'

Some thoughts on the Justice League rumors

Both of my regular readers might have some inkling that Christopher Nolan has been one of my favorite filmmakers of the last twelve years, that on the whole I’ve loved his Batman movies, and that Batman has been one of my favorite literary characters since I was probably six or seven.

A Justice League movie is an idea that people have been circling around for several years. There was the TV pilot in 1997 that a Google Image search shows to have been pretty ridiculous looking; the animated series from Bruce Timm and Paul Dini was awesome, but I’m guessing that Cartoon Network in the long run decided it was backwards-looking and chasing after an audience made up of the wrong age group. I never watched Smallville, but the pictures that I saw made their Justice League look low-rent to say the least. After Batman Begins and Superman Returns there was talk of George Miller making a standalone film with a totally different cast (like Armie Hammer as Batman and Common as Green Lantern); obviously that never happened, and since then, non-Batman standalone films seem to have been the plan, but I can’t say that there has seemed to be an overabundance of confidence in those projects. Neither Captain Marvel (I refuse to call the character “Shazam”) nor The Flash has really gotten anywhere. Green Arrow was supposed to be the hero in a villain-centric prison-escape film called Supermax, but that went nowhere. Superman Returns showed that there was still something of an audience somewhere for Superman movies, but it wasn’t a solid enough hit to maintain confidence in Bryan Singer’s vision. I didn’t hate Green Lantern, but for a movie that had as its fundamental premise somebody with a ring that they could use to build whatever they could imagine, it seemed to be pretty unimaginative. Wonder Woman has had an infamously troubled path to the silver screen, with even Joss Whedon not being able to put together a package that could convince studio execs to pull the trigger (and then there was a TV pilot a couple of years later about which, it seems, the less said the better).

After the success of The Avengers last year, Warner Bros. predictably announced that they would be making a Justice League movie their priority after The Dark Knight Rises was done, but whatever idea seemed to be pushing that forward fell apart a couple of weeks ago. We’ve been left with Batman being apparently done for now, Man of Steel still being something of a question mark, and a consolation prize of a version of Green Arrow presently on TV who is clearly the poor man’s Batman but still pretty scrappy and reasonably enjoyable to watch.

Then, last night, a rumor started circulating that even got picked up by Forbes: after the demise of the most recent iteration of the Justice League idea, Warner Bros. has handed the reins of the DC film universe over to Christopher Nolan and David Goyer, with Christian Bale likely in the mix to come back as Batman and Zack Snyder (director of Man of Steel, 300, and Watchmen) also reportedly involved. Nolan is producing Man of Steel, and a version of the Justice League rumor was going around about a year ago, but Nolan seemed to intentionally distance himself from the concept in interviews.

Is it true? I have no idea. My guess is that nobody wants to miss out on the money that Marvel Studios is raking in with their cinematic universe, and that if Christopher Nolan plays his cards right, he’s got guaranteed budgets and creative control for all of his personal movies going forward. How do you reconcile such a move with the end of The Dark Knight Rises? I’m not overly concerned about that; surely that’s an opportunity for creative storytelling. It’s entirely possible that it’s true right now in the sense that it’s the idea they’re trying to make the deals for; a denial down the road may mean only that they couldn’t get everybody to sign on the dotted line, not that it wasn’t what they were trying to do. (My plan B: Bruce Timm produces, Christopher Nolan directs, Paul Dini writes. It’ll never happen, but that would be my dream team.)

I’m somewhat less interested in whether or not it’s true than a couple of other dimensions to the story. First, it’s evident to me reading comments on the various re-postings of the story that, bluntly, geeks have short memories. It’s no longer a novelty that an A-list auteur is directing a film based on a comic book with a big budget and an Oscar-winning cast, so it’s time to rewrite history so that the auteur in question is an overrated hack whom everybody has always hated (going all the way back to that second-rate piece of celluloid Memento) and whose contributions to the comic book genre of films have been miscast and mediocre at best, self-important trash at worst, and, really, even The Dark Knight was a second-rate Heat knockoff that mostly sold tickets because of the death of one of its stars. The Dark Knight Rises went off in a different direction than they’d hoped (tying off the arc of the cinematic character rather than opening up ways to tell more of the comic book stories), so now the guy everybody was drooling over when he was announced as the director of Batman Begins is persona non grata. Like I said, short memories, and I can’t really say that I get it.

The other thing that I find intriguing is the apparent article of faith in some circles that a Justice League film can’t work, that these characters fundamentally will look silly next to each other on the silver screen, that there are too many storytelling problems introduced by having Superman and Batman in the same world, etc. etc. Somehow these concerns are a non-issue when you’re talking about Marvel characters (The Avengers, after all, includes a Norse god, a chemically-enhanced supersoldier, and a genius gajillionaire in a wearable energy source that makes a nuclear reactor look like a 9-volt battery), but when it’s Batman, Superman, and Wonder Woman, it’s irredeemably silly somehow. Nobody’s really been able to explain why the DC characters are different; they just are, apparently.

Thinking about it, I’d like to toss out a possible explanation, and that’s one of generation. The DC characters, as the prototypical superheroes, are inevitably archetypes of a sort first and characters second. For the Golden Age characters, the basic point of reference is the circus, a common enough cultural experience in the 1930s. The costumes are all more or less based on the strongman/acrobat model; Batman’s not wearing body armor in Detective Comics #27, he’s wearing a leotard. The types of characters are all basically that, types — Superman’s origin is all of half a page in Action Comics #1, and the point isn’t to give him a psychology, the point is to explain why he’s got super-strength. He’s a strongman; Batman’s a detective and an acrobat, a mixture of Sherlock Holmes and Zorro; Wonder Woman is a goddess, again in a circus performer’s costume; the Jay Garrick Flash is a combination scientist and college athlete, dressed up as Mercury; the Alan Scott Green Lantern is basically a wizard-type of character. The alter-egos are also essentially types; Clark Kent is a reporter (the trappings of which very much date the character and are not easily transferable to the popular imagery of journalism of 2013 — it’s more Matt Drudge than Cary Grant), Bruce Wayne is an aristocrat, Diana Prince a nurse — and the 1950s revamps of Flash and Green Lantern keep this going, with Barry Allen as a police scientist and Hal Jordan as the ultimate manly man of the 1950s, a test pilot.

By contrast, being a generation later, the methodology has developed somewhat, and while the Marvel characters all certainly have some basis in types — mostly the “scientist” type — from the get-go they’re made into more than types by flaws and deformity. Peter Parker is a geeky high school kid whose powers convince him just long enough that he’s better than everybody else for it to cost his Uncle Ben his life. Tony Stark is a genius weapons engineer and industrialist whose talents are turned against him. Bruce Banner set free his own inner demon. And so on. If, as William Faulkner once said, drama is the human heart in conflict with itself, then one can argue that the Marvel characters are fundamentally more dramatic.

From this perspective, the problem with the DC characters maybe becomes a bit more evident. The whole premise of the Hal Jordan Green Lantern is that he is supposed to be without fear; that rather makes inner conflict a tougher nut to crack, dramatically. (And the film was hampered by this problem — the cinematic Hal Jordan’s inner self-doubt, while perhaps more “cinematic”, completely undermined the foundations of the character. Ryan Reynolds did what he could, but the result was, rather than the human heart in conflict with itself, a movie plot in conflict with itself.) It also makes more sense why Batman has been the most successful of the various attempts, and in more than just one medium — of the Golden Age characters, he’s the one who actually has a personal internal conflict to resolve, and it’s an incredibly effective, primal one at that. Superman is much harder to make interesting in that regard; the 1978 film’s use of Jor-El and Pa Kent was a good storytelling move in terms of giving him an inner conflict, moral poles to bounce off of, and the trailers make it look as though Man of Steel will draw some of its drama from Philip Wylie’s 1931 novel Gladiator, one of Siegel and Shuster’s key sources, so we’ll see how that turns out — but at the same time, there’s simply no reason for Superman to be wearing body armor.

To me, however, none of that says “won’t work on film”, it just says, “You’ve got to do it with the best filmmakers possible” — filmmakers who understand the archetypes they’re dealing with, understand what it is about those archetypes that people connect with, and don’t use artificial and false storytelling techniques to try to re-engineer the characters. None of it says to me “Batman, Superman, and Wonder Woman can’t all be in the same movie”, either; again, it just says to me that you need a filmmaker who knows exactly how each character fits into the story you’re telling. Zack Snyder seems to have a reasonable grasp of how ensemble casts in comic book movies need to work; again, we’ll see how things look after Man of Steel comes out.

Anyway, to me it seems like a good day when the big thing you have to complain about is that Christopher Nolan might produce a Nolan/David Goyer-written, Zack Snyder-directed Justice League movie. I guess we’ll see.


“Do you feel in charge?”

Spoilers for The Dark Knight Rises, Skyfall, and so on below.

The UPS truck came yesterday. Thus it was that once Theodore was put down, I took a break from Cicero’s letters, and I told Flesh of My Flesh, “I’m making dinner, and then we’re watching The Dark Knight Rises.” The issue being, you see, that she hadn’t seen it yet. Skyfall was the first movie we had been able to see together in a theatre since The Avengers, and that was just because for my birthday, some friends offered to watch Theodorus Rex while we went out by ourselves.

I very much enjoyed Skyfall, incidentally, and have seen it twice now; the transformation of Daniel Craig as the new James Bond into, effectively, the old James Bond, is complete. If one knows something about the history of actors who might have been Bond, the presence of Ralph Fiennes (as well as the reveal of him as the “new” M, who’s really the “old” M) is clever; he was one of the actors being considered back in 1994 to take over the role after Dalton declined to participate in what became GoldenEye. If I’m remembering a particular issue of Premiere sufficiently, the list before they finally circled back around to Brosnan was Mel Gibson, Liam Neeson, Ralph Fiennes, Sam Neill, and Hugh Grant. Fiennes, then, as a former field agent whom we see take a bullet and fire off some of his own shots, takes on some real-life resonance, given that part of the idea with Gareth Mallory seems to be that he’s what Bond would have been if he’d just been a bit more respectable.

I liked the approach with Skyfall of “James-Bond-as-art-film”; we get a lot of Bond fighting in silhouette, and this can get a bit trippy, particularly when he’s silhouetted against lots of moving neon lights. Sam Mendes also seems to be trying to tell you something with how he frames shots; Bond is frequently centered in the frame, as is Javier Bardem’s Silva, but then, at the very end, when Bond is looking out over the city from MI6’s roof, it starts out with him centered only to have the camera nudge just a little bit right so that Big Ben is now centered. The point seems to be that Bond has finally decided once and for all that he’s doing what he does, not out of anger for Vesper’s betrayal, not out of the repressed trauma of his parents’ death, but for queen and country.

But I’m supposed to be talking about The Dark Knight Rises.

So, four and a half years ago, The Dark Knight took up a big chunk of my summer. I saw it, I think, nine times on the big screen all told, six of those in IMAX, and really what I did during July and August of 2008 was call up my friends and say, “Hey, want to go see The Dark Knight in IMAX with me?” I didn’t have much else to do; I had a French reading knowledge class I was taking, and I was working, but so far as I knew at that point, I was definitely not going to be working on a grad program anytime soon, and my wife was in Germany, followed almost immediately by her spending close to a month in Seattle after her dad was diagnosed with cancer. The Dark Knight seemed as good a way as any to kill time.

Obviously, my life has changed a lot in the intervening years, and I have other ways my time is occupied these days, thank God. Still, I managed to see The Dark Knight Rises four times on the big screen, two of those in IMAX. So, draw whatever conclusions you will about that — such as, for example, baby or no baby, PhD program or no PhD program, I’m still an unrepentant nerd.

The Dark Knight Rises picks up eight years after the Batman took the rap for Harvey Dent’s murders rather than have Gotham lose their knight in shining armor. The storytelling reason for this seems to be so that Batman takes on some qualities of being an urban legend, almost a Spring-heeled Jack kind of figure; I suspect that the practical, real-world reason is because it had been seven years since they started shooting Batman Begins, which itself required Christian Bale to play Bruce Wayne over a seven to eight year span, and The Dark Knight was supposed to take place something like six months after Batman Begins while being made three years later. Everybody looks noticeably older than they did in 2005, in other words, and this gives them an in-story way to explain that.

In the intervening years, Bruce Wayne has become a recluse, letting the physical and mental damage he suffered as Batman settle in. He tries to become a philanthropist, but he finds out that projects which could save the world could just as easily destroy it, a realization which just drives him further into the shadows. Commissioner Gordon has been able to use the memory of Harvey Dent to lock up a lot of people, but the lie is eating away at him. It’s a status quo that’s, for all intents and purposes, a well-polished rotten apple.

Enter four characters who each stir up the hornet’s nest in their own way. Selina Kyle is a jewel thief who has taken a curious interest in Bruce’s fingerprints; John Blake is a young cop who sees that everybody is pretty much just going through the motions to keep everything looking nice; Miranda Tate is an investment partner of Bruce’s who wants to show him that the world is worth trusting enough to save it… and then there’s Bane, who is manipulating much of this (or is he?) specifically to force Bruce to get back into the ring as Batman.

A running theme of the movie — which goes all the way back to Batman Begins — is that of the “clean slate”. The express purpose of the League of Shadows, after all, was to “wipe the slate clean”, as it were, by destroying cities when they got too big and corrupt, which is the objective that Bane has inherited; Selina Kyle wants to wipe out her own past (there’s a MacGuffin of the “gangland myth” of a computer program that can do this, which it turns out Bruce acquired to keep it from being used; since the Joker had no traceable record in The Dark Knight, it’s possible that this is intended to be an oblique reference to him); Gordon, Bruce, and Blake are all living with accounts needing to be settled, and so on. Is a revolution how society ultimately has to pay its bill? As Ra’s al Ghul suggests in Begins (and as Bane trumpets in Rises), is burning away the brush always the solution, with the only question being one of scale? Can one man show a better way? One might not be entirely wrong to detect a Christ allegory in where Nolan and company end up with this question (albeit thankfully not in the somewhat ham-fisted way that Superman Returns did), but it seems to me that Plato’s cave is far more explicitly referenced.

Rises does a nice job of bringing things full circle back to Begins; the story keys off of Ra’s al Ghul’s conviction in Begins that “Gotham must be destroyed”, and shows that just because Ra’s al Ghul died, it didn’t mean the idea died. Ra’s al Ghul left a legacy, and it’s a legacy Bruce has to deal with.

Incidentally, for anybody familiar with the comics, there’s only one way the character of Miranda Tate was going to make any sense at all, and if you’re paying attention, they telegraph her identity as Talia from the get-go. She talks like Ra’s (she has a line about “restoring balance” early on), and there are visual callbacks as well, like showing her good at building a fire, much as Ra’s was in Begins. A reward of repeated viewings, too, is noticing that the little girl in the prison stabs somebody in the back in one of the first flashbacks, much as the adult Talia does to Bruce.

There are other interesting references to Begins; there’s the explicit comparison of the prison to the well young Bruce falls into in Begins, but there’s also Batman walking on ice (apparently having learned to mind his surroundings after all), and the memorial statue of Batman unveiled at the end bears a striking resemblance to the nightmare Batman that Scarecrow hallucinated in Begins. Both Begins and The Dark Knight end with conversations about thanking Batman — Begins: “I never said thank you.” “And you’ll never have to.” TDK: “Thank you.” “You don’t have to thank me.” — and while Rises has its own ending, it has its own version of this conversation: “Thanks.” “Don’t thank me yet.” It’s a little grace note that seems to say a lot about where Bruce sees what he’s achieved relative to his own ideals — in Begins the story ends with his ideals having won the day, in TDK his ideals have cost him dearly, and at this point in Rises it’s unclear whether or not he will even survive his own ideals.

The point of Rises, and I suppose of Nolan’s whole trilogy, is that in this story, Bruce Wayne isn’t trying to become a superhero, he’s trying to build a myth. As Ra’s says in Begins, “You have to become an idea.” The idea that there might be a big unstoppable demon that comes after you if you’re a bad guy, and you’ll never know who it is, is much more powerful than there being a human being who can be taken down with a bullet. In Begins he takes on the initial trappings of the legend, in TDK he sees a human being who he thinks could become the idea without having to pretend to be somebody else, only to see that person corrupted, and in Rises he has to complete the mythmaking by, effectively, dying and coming back from the dead — both so he can just be Bruce Wayne once and for all, and so that Batman can be that much more powerful. Along these lines, Bane is presented as sort of a counter-symbol to Batman; “No one cared who I was till I put on the mask,” he says.

Do Nolan’s films constitute a “definitive” Batman? Well, what do you mean by that? “Definitive” according to whose take on the character? Frank Miller’s? Neal Adams and Denny O’Neil’s? Bob Kane’s? Paul Dini’s? Adam West’s? Just by the nature of film, where you’re basically blowing up a short story into two or three hours and only being able to do it once every few years, you’re never going to be able to bring out all the nuances that one can play with in an ongoing serial like a comic book or a TV show. Like any film adaptation, the best you’re going to be able to do is honor the spirit of the source material while doing your own thing as well as you can. Batman, by virtue of being a symbol, is open to a lot of interpretation, and I think Nolan has made three pretty great movies with his interpretation. Somebody else will probably come along and do their own interpretation in a few years, and there may very well be things I like about that one better, perhaps things I don’t like as well; who knows? The comics themselves will still be sitting on the shelf right next to these movies, as well as Tim Burton’s 1989 film. What I think one can say is that this trilogy is a long Batman story that incorporates a lot of ideas and images from the comics as well as the animated series (there’s a decent amount of Batman Beyond in Bruce’s solution), and while it’s something you could never do with Batman as he appears in the comics (unless it were an Elseworlds mini-series or something of that nature), it’s a great way, from where I sit, to encapsulate the character onscreen in a self-contained story.

OK. Back to reading some Latin.

A few additional thoughts on Inception and The Prisoner

My treadmill viewing this summer has included a string of movies that I felt had given me a very warped view of history by the end of it — The Queen, Frost/Nixon, W., and Kingdom of Heaven — but also The Prisoner. The Blu-ray set was on sale on Amazon very briefly for something like $40 (marked waaaaaaay down from something like $120), and I snapped it up. I had seen all of the key episodes a number of times — “Arrival”, “The Chimes of Big Ben”, “Free for All”, “Fall Out”, and “Once Upon a Time” — but had never caught the whole thing, so I started working my way through all seventeen.

At the end of “Do Not Forsake Me, Oh My Darling”, in which Number Six curiously looks like an older, fatter actor in close-ups but like Patrick McGoohan in long shots, as opposed to the rest of the series where he looks like Patrick McGoohan in closeups and a younger, trimmer actor in long shots, I noticed something out of the corner of my eye during the credits — an actor named John Nolan credited as “Young Guest”.

Wait, what? Christopher Nolan’s uncle is an actor named John Nolan. He’s cast him in a couple of small roles — the Policeman in Following, and Wayne Enterprises board member George Fredericks in Batman Begins (who memorably tells Bruce Wayne that “the apple has fallen very far from the tree”). He actually looks a good bit like John Hurt (and in fact I assumed it was John Hurt the first time I saw Batman Begins, and I remember wondering, “Why does John Hurt have such a small role?”), although the irony here is that apparently there are people who think that John Hurt looks a lot like Ian McKellen, and I don’t think that John Nolan looks anything like Ian McKellen. Maybe John Nolan looks more like John Hurt than John Hurt does.

Anyway, I skipped back to the party scene that presumably included the Young Guest, and sure enough, it was pretty unmistakably the same guy, just a few decades younger (and still looking a LOT like John Hurt and with maybe a teensy bit more of a resemblance to Ian McKellen).

My point with all of this is to establish that there’s actually a personal connection between Christopher Nolan and The Prisoner at a familial level, not just a professional level. He said back in 2006 that The Prisoner had been an interest of his for a long time, and perhaps that actually had more weight than was apparent at the time.

I mused earlier that perhaps Nolan’s re-use of the name Cobb was a reference to The Prisoner. This obviously is not proof of that, but it does suggest that that’s maybe not a terribly crazy or random idea, and it provides some interesting food for thought about other matters as well. The Prestige and “The Girl Who Was Death” both have near-identical gags involving pint glasses. “A, B, and C” is very Inception-esque, involving Number Two trying to extract information from Number Six by means of a constructed and controlled dream environment. The Joker’s gleeful escalation of weapons while attacking the armored van from the semi trailer in The Dark Knight is very reminiscent of a scene in “The Girl Who Was Death.”

Maybe someday we’ll get Christopher Nolan’s take on The Prisoner, but at the very least I think it’s interesting to consider the possibility that, with or without it, many of the series’ ideas have taken root in his head and have a visible influence — and as far as I’m concerned, that’s a great thing.

“My face is craggier than yours, Mr. Hurt.”

My wife knew what she married

The website NolanFans held a giveaway for Inception promo items, and I came away with “the Point Man Bundle,” consisting of a t-shirt, a cap, and a top keychain.

Yes, I’m 33 years old, and I’m a geek.

Thank you, NolanFans!

Thoughts on Inception, or, Christopher Nolan and Cobb salad

I’m not going to write a conventional review of Inception; I think the movie is stunning, and I strongly encourage everybody to go see it. That’s about as much of a “review” as I want to write; what I’d rather do is discuss what thoughts were provoked by it.

I will say this once:

DO NOT READ THIS IF YOU HAVE NOT YET SEEN INCEPTION, SINCE A VERY THOROUGH DISCUSSION OF SPOILERS IS TO FOLLOW. I WILL ALSO BE TALKING ABOUT THE PRISONER, THE USUAL SUSPECTS, SHUTTER ISLAND, THE PRESTIGE, MEMENTO, FOLLOWING, INSOMNIA, BATMAN BEGINS, AND THE DARK KNIGHT, SO READ AT YOUR OWN RISK.

We clear?

I’ve seen Inception three times now; I saw it at the midnight showing last Friday morning, again Friday night, and again Tuesday morning. It’s one of those movies where you want to take people to see it and be there with them while they experience it for the first time. I think maybe The Usual Suspects was the first movie like that for me; I took different people to see that, all because I wanted to be there when they saw the falling coffee cup and realized what it meant. Christopher Nolan has become the guy who makes that kind of movie for me these days; Memento and The Prestige were both movies I happily did repeated viewings of with different people, and two summers ago, as reported to some extent here, I did the same thing with The Dark Knight.

Now, one of these things — The Dark Knight — is not like the other, at least at first glance. Suspects, Memento, The Prestige — these are just all “twist ending” movies, right? The whole point of the movie is the ending you aren’t expecting, and there’s not really anything to them beyond that? Well, there are those who might argue that, sure. The question becomes, how do these movies stand up to repeated viewings? I have never bothered with The Blair Witch Project since the one time I saw it in theatres, because that’s the kind of movie that, for my money, really is just a magic trick that would probably show its strings upon seeing it again. The twists of Suspects, Memento, and The Prestige are such that you have a fundamentally different sense of what the story is actually about the second time around, and it’s a question of whether or not that different story is interesting. Is the story of Verbal Kint/Keyser Soze (depending on how you look at the story) conning Agent Kujan as interesting as the story of Agent Kujan trying to figure out what happened at the pier? Is the story of Leonard Shelby setting himself up to murder John Gammell, both as revenge for being used by him and as a way to give himself closure, however briefly, over the death of his wife, as interesting as the story of Leonard trying to solve a murder mystery in an incapacitated state? Is the story of three magicians essentially living out large-scale versions of their own tricks in pursuit of their craft as interesting as the story of the rivalry between two magicians leading to a mysterious death?

And for me, the answer has always been, unequivocally, yes. Verbal/Keyser becomes a fascinating character on subsequent viewings — little gestures and facial expressions take on new meaning, and while you realize that he’s taking Kujan on something of a ride, you also come to the conclusion that some of it has to be true. It’s particularly unsettling if you conclude, as I do, that he’s telling the truth about having killed his own family. The ways in which both John Gammell and Natalie manipulate Leonard to their own ends, but also in which Leonard consistently manipulates himself, suggest that what Leonard really is in his damaged state is a loaded gun, and it’s just a question of who’s going to get to pull the trigger. One of the rewards of multiple viewings of The Prestige is understanding exactly why Borden figures out the goldfish bowl trick so quickly and why Angier doesn’t get it, to say nothing of seeing just how clever Christian Bale’s performance actually is, and that he very clearly differentiates between the two twins.

Particularly given my experience with Memento and The Prestige, I did my best to stay as ignorant as possible about Inception from the time I heard it announced until when I walked into the theatre for the first time — but it didn’t escape me that Leonardo DiCaprio’s character was named “Cobb,” the same name given to the antagonist in Following. What would it come to mean, if anything? Would “Cobb” be nothing more than Nolan’s “Spota”? Or was there more to it?

The opening shot of Cobb sprawled and sputtering on the beach suggested even more of a connection to Following, since a very similar image plays an important role in the opening of that film as well (and as it works out, in both movies you don’t find out what these images actually mean until much later). About an hour into seeing it the first time, I started to become convinced that where the story was leading us to was the revelation that Mal was right, that she had escaped the dreamworld by throwing herself off the building and Cobb was still stuck, and that all of her manifestations were actually her entering Cobb’s dream to try to rescue him. Perhaps she was a “forger” as well, and that was why Saito echoed her line about a “leap of faith”. I prepared myself for this ending, expecting a montage of clips at the end that would replay some of Mal’s appearances with additional “behind the scenes” information presented, showing how they meant something else than what we, the audience, thought they meant at the time. I figured that even though I had figured it out, Nolan would be able to present it in a way that would make it work and that would be up to par with the rest of his work.

Can I tell you that I was really happy to be wrong, and that I was completely unprepared for the last five minutes of the film, much less the cut to black on the spinning, but wobbling, top? The “twist,” insofar as there was one, was really about Cobb’s soul and less about plot mechanics or where amongst the various levels of reality he actually was, and the final bit of ambiguity — the top’s losing stability, so it has to fall, right? Or does it? — is just enough to leave the audience with closure on Cobb’s emotional journey (the real story in the first place) even if you can argue until the cows come home whether or not he’s in the “real” world. It’s like the origami unicorn at the end of Blade Runner, except that by the time audiences could see a cut of Blade Runner where the origami unicorn meant what Ridley Scott intended it to mean, they were already prepared for it to mean that. I’m not sure anybody was expecting the top.

An assertion that some reviews I’ve read have made is that, with DiCaprio in the leading role, there are uncomfortable similarities with Shutter Island. I don’t disagree necessarily that there are parallels, but I also think the claim is misleading. With Shutter Island, I knew from reading the reviews that Teddy Daniels would turn out to be crazy; the only question I had watching it was just how this would unfold. Dom Cobb’s issues have to do with his dead wife, much as with Teddy Daniels, and Shutter Island makes you question the “reality” of what you’re seeing, but that’s just about the extent of the similarities. Cobb isn’t crazy, and there are far more levels of reality-bending at play in Inception than in Shutter Island. Shutter Island really is a “twist ending” thriller, whereas Inception is an emotional and psychological drama playing out in the framework of a caper movie. If you go into Inception expecting it to be Shutter Island meets Dark City, you will be expecting a much different movie than what you actually get.

Something the two definitely have in common, however, is that Leonardo DiCaprio delivers a knockout performance in the lead role. I hadn’t been all that interested in him before a couple of years ago — I noticed Russell Crowe in The Quick and the Dead but really couldn’t care less about Leo; Romeo + Juliet and Titanic were neither fantastic nor offensive performances, as far as I was concerned; I remember Gabriel Byrne a lot more than I remember him in The Man in the Iron Mask, and I hated hated hated The Beach. Then I saw The Departed, Blood Diamond, Shutter Island, and Body of Lies in reasonably rapid succession, and realized he had developed into a fantastic adult dramatic actor. As a friend of mine put it, he’s no longer an ingenue. He captures Cobb’s guilt and regret and makes them compelling, while still being able to sell us a “master extractor” at the top of his game.

I might suggest that DiCaprio is somewhat unconventional for Christopher Nolan as a leading man; Hugh Jackman and Aaron Eckhart are both classically good-looking man’s men; Guy Pearce, Christian Bale, and even Jeremy Theobald (at least in spots) all have kind of a drawn, chiseled quality the way he photographs them — not necessarily meaning that they’re ripped (although certainly Pearce, Bale, and Hugh Jackman are), but rather that there’s a particular air of cultivated masculinity about each of them, especially in the face. DiCaprio doesn’t really fit into either category (although Cillian Murphy’s Fischer fits in with Pearce and Bale), and he definitely doesn’t have the rippling muscles that Nolan takes pains to show us with Pearce, Bale, and Jackman. He starts to assume an air of something like the latter category when he’s “Mr. Charles” in level 2 of the dreamworld, but it’s clear that it’s an act, or a “gambit” as the movie explains. In any event, even if DiCaprio isn’t exactly doughy, he is more of a physical Everyman than Nolan has given us before (except maybe with Al Pacino, although even then, c’mon, it’s Al Pacino).

Thematically, Inception is very much a development of what Nolan has done before; as I’ve noted in earlier musings, there are recurring motifs in his work, and they’re all here. Fischer’s need to resolve his feelings of letting his father down mirrors Bruce Wayne’s struggle in Batman Begins. Domestic tragedy, time being messed with, people leading multiple lives with multiple identities (or even multiple people sharing an identity), the overwhelming desire to simply go home to one’s family, a hidden place where one is hiding the truth from everybody, often including themselves — and, curiously enough, agonizing leg injuries keep popping up, starting with Angier falling through the trapdoor in The Prestige, Batman dropping Maroni in The Dark Knight, and now Mal shooting Arthur in the kneecap. In fact, in a lot of ways, Inception is a reworking of some of Memento’s story, with even a repeated visual quote (the view of the wife lying down but shot so that she’s oriented vertically, since she’d be parallel to the person whose point of view is providing the shot), except that it’s the wife with the damaged mind, and Cobb is aware of his own role in Mal’s death, making revenge a non-starter, and the Cobbs have children, whereas Leonard and his wife did not — giving Cobb something else to live for, a meaning to his life beyond Mal’s death that Leonard didn’t have. Leonard describes his condition “like waking, like you just woke up”, and goes to great lengths at one point in the movie to construct a scenario where he will wake up and think he’s still in his own home — essentially the same idea as Inception’s “dream within a dream”. Not only that, but John Gammell’s constant insistence that Leonard doesn’t know what reality is, that he’s “wandering around, playing detective” prefigures Mal’s speech to Cobb that “you don’t believe in one reality anymore”. Of course, a key difference here is that Mal’s wrong… right?

There are also interesting similarities to Following, beyond the name “Cobb”. Both movies are about a long con, but one can also draw lines of connection between Inception’s Cobb and Ariadne, at least at the outset, and Following’s Cobb and the Young Man. In both cases, Cobb is the master taking the apprentice under his wing, and the first dreamshare training sequence with Ariadne, with Cobb explaining how people populate their dreams with their subconscious, has a curious parallel to Cobb in Following breaking into the first apartment with the Young Man, and explaining how people’s things in their apartment reflect who they are. Cobb tells Ariadne that if they design a safe, the dreamer will automatically fill it with their secrets; Cobb tells the Young Man that “everybody has a box… that’s sort of an unconscious collection… [that] tells something very intimate about the people.”

It also seems to me that the concept of a “totem”, something by which one keeps track of reality, is everywhere in Nolan’s films, even if he hasn’t named it before. In Memento Leonard has his “system,” his photos and his tattoos. In Batman Begins there are the arrowhead and his father’s stethoscope. In The Prestige it is Borden’s ball. In The Dark Knight it’s Harvey’s coin. They also appear to have varying degrees of efficacy — Leonard’s “system” doesn’t work at all, for example, but Borden, at least one of him, seems to keep it together pretty well. (By the way, on the third viewing I noticed that Ariadne is madly fiddling with her totem on the plane at the end, as the camera pans from Arthur across to her. It’s a nice touch.)

Something else that strikes me about Inception is that it is a surprisingly low-tech movie. The technological conceit of dreamsharing is pulled off through compounds fed into the bloodstream via tubes in the arms leading from a gadget in a suitcase, rather than slick-looking headpieces that jack into the brain. There is one computer in the whole film, a rather chunky looking laptop; we only see two cell phones, and they’re barely used at all. Beyond that, tech isn’t really a factor, like at all. Professor Miles writes in what look like Moleskine notebooks, for heaven’s sake.

To be honest, I’d argue that Inception isn’t even science fiction, any more than The Prestige is. Philip K. Dick’s definition of science fiction is that the technology, or what he called the “conceptual dislocation” of the world in the story from the real world, must “result [in] a new society… generated in the author’s mind” (from a 1981 PKD letter printed as the preface to Paycheck and Other Classic Stories). There’s not really anything of the kind in Inception; you have a particular technological conceit that facilitates the story (Cobb dealing with Mal’s death) but is not itself what the story is about. Dick’s stories are usually all about how the “conceptual dislocation” creates a new world, with that “conceptual dislocation” being what drives the story forward. PKD’s version of the story would ask the question, “What would the world be like if this were possible?” and use Cobb’s emotional journey as the way of answering that question (if we’d even get Cobb’s emotional journey — Cobb would probably be named Wheaton or something like that and be an unhappy minor bureaucratic functionary who just happened to accidentally press the button on the machine at the wrong time); Nolan, by contrast, uses Cobb’s emotional journey to drive the story forward in Inception, not the technology. We don’t really see how this technology changes the world. The same applies to The Prestige, at least Nolan’s film of it.

A criticism I hear of Nolan that baffles me is that his work is technically brilliant but cold and emotionally uninvolving. I just don’t get that at all. I find his movies highly emotionally involving; I fail to understand how anybody could see the vertical-lying-down shot of the wife in either Memento or Inception, or Angier’s attempt to drown himself in the sink in The Prestige, or the memory of Thomas Wayne with young Bruce and the stethoscope, or Harvey Dent waking up in the hospital and finding the scarred coin, and be left cold. Perhaps those are losses that one must be able to reasonably fear suffering oneself in order to relate. I suspect that the familial losses experienced in Nolan’s movies are the very ones by which he himself would be devastated; certainly there is an allusion to his own personal situation in Inception when Cobb says that he and Mal “were working together” (Nolan’s wife, Emma Thomas, is also his producer), and we see another co-worker couple destroyed by their professional association in The Dark Knight (Harvey Dent and Rachel Dawes). I wonder if there will be one down the road where the protagonist loses his child (present, but handled rather indirectly, in Insomnia).

So what’s the deal with the name “Cobb,” anyway? In Following, Cobb is a smartly-dressed, violent thief who is ultimately long-conning the unnamed protagonist, and who disappears at the end leaving no trace of his existence. He’s nothing like Inception’s Cobb… well, except for the part about the smartly-dressed thief pulling a long con, and I guess Inception’s Cobb is violent at times, although only either in the context of a dream or when his life is in danger. Maybe Nolan is pulling some pieces from his early work and reforging them based on the artist he is now. Maybe “Cobb” is just a name; maybe it’s a reference to The Prisoner, a work that strikes me as likely having had an influence on Nolan (particularly since he was supposed to do a big-screen adaptation up until about a year ago). Cobb was a character in “Arrival,” the very first episode, a colleague of Number Six’s who had been brought to the Village only to commit suicide. At the end of the episode, it is revealed that his death was faked, and he was working with the Village all along. Anyway, it’s hard to say. Maybe it’s just one more thing to talk about endlessly.

So far, Nolan seems to be going onward and upward. He’s the most exciting and interesting Hollywood filmmaker working right now, as far as I’m concerned, and while I can’t wait to see what he does next with Batman, there’s part of me that is even more interested to see what his next original story is like. (Although — I’ve said it before, but while I like the work Hans Zimmer and James Newton Howard have done with him, I really hope he finds a way to work with David Julyan again as the composer.) Is he the next Kubrick? You know, I really don’t care — so far I’m plenty happy with him just being Nolan.

“Why are you here?” as a research paper

The mandatory class for first-semester History graduate students was an interesting exercise. It was, as I’ve said before, largely the opportunity to read a number of things I wouldn’t have otherwise read, and to get a sense of where certain ideas originate. Benedict Anderson’s Imagined Communities was worth thinking about, and popped up a couple of times in interesting contexts; one of the books I read for my Readings in Ancient Greek Forensic Oratory course referenced it, and it rather slapped me across the face when I volunteered for the Indianapolis International Fair last month (which should eventually be its own blog post). Foucault I still have more to say about, as I keep threatening.

For the final paper, there was a temptation to write a detailed response to Foucault, expanding on some of the ideas I discussed earlier. However, my final response paper, along with watching all of Christopher Nolan’s movies in chronological order (which should also eventually be its own blog post), suggested a different avenue that would be more interesting.

A rubric for the final paper which the professor offered as an experiment was to answer the question “Why are you here?” as a historical research paper. Using the last response paper as a jumping-off point, as well as drawing from the readings for the last week of the course, I decided to try to formulate the answer in a way that would examine the nature of my own memories. Foucault still wound up making an appearance, but it’s really only a cameo, and played for laughs.

It’s long; the assigned length was 15-20 pages, and with notes and bibliography I turned in something that was 32 pages long. (I turned it in five days early, however, so hopefully that gave the professor time to deal with it.) The title and the structure definitely reflect the influence of Christopher Nolan, but I really hope that it comes off as more than a party trick, because I don’t mean it as such.

Something I found out that surprised me was the direct role that the oil industry played in some of the circumstances of my life; I suppose, given my Alaskan origins, this should not have been a total shock, but I truly had no idea.

Anyway — here’s that up with which I came. (Or something. Sometimes not dangling one’s prepositions is awkward.) I can’t imagine I would have much of a venue for it otherwise.

(By the way — even if you don’t normally read notes, read notes 4 and 38.)

Memento Mori:

The Question “Why Am I Here?” and the Unintentionally Unreliable Narrator

By

Richard Barrett

Historicizing one’s own memories is a tricky proposition, for we are too often our own “unreliable narrator.” We may very well make our own history according to Marx,[1] but as Margaret MacMillan observes, “Being there does not necessarily give greater insight into events; indeed, sometimes the opposite is true.”[2] For one thing, if one is working with their own memories, then by definition one is already dealing with an unfinished, ever-changing product – “a perpetually active phenomenon, a bond tying us to the eternal present[.]”[3] To put it another way, if you have memories, you do not have all of the memories you will ever have – but if you lack the ability to make new memories, then you are either dead or there is something else wrong with you that likely renders you unable to communicate your own memories in a sustained, systematic fashion. History is the story of something that has already happened, but one’s memory is something that is still happening. A concrete example is this very essay; I do not yet remember having finished writing this paper, or turning it in, or getting it back with a grade. I cannot thus incorporate this essay into its own subject matter, at least not in full – but without the ability to make new memories, I would not be able to write the paper. This, combined with an inherent lack of objectivity dealing with personal memory, should give significant pause to the historian considering such a method.

This is not the only problem, however – how does one document their memory in a truly reliable fashion? At best, a historian can make the argument that somebody has claimed to remember something, but there is no empirical method by which one can actually verify the truth of that claim. I can claim that I remember what I was doing on 23 June 1989,[4] and perhaps somebody can even verify that I was doing what I say I remember doing, but nobody can prove that I remember what I claim to remember – the flipside to the problem of somebody claiming to not remember something, which is equally unverifiable.

Greater still is the problem of making individual memory historical in and of itself. Is an individual’s set of memories a history? Or is real history a sorting through of the memories of a collective? Even then, what if one is the sole survivor of a particular community, and their memories are the only possible source of a particular kind of data – such as Jussi Huovinen, the only remaining “rune singer” of the Kalevala, the mammoth collection of Finnish poetry that represents the Finns’ collective cultural memory? While it is true that the text remains in print, he is the one man left alive who remembers this body of work incorporatively and not only inscriptively.[5] When he is gone, what will be lost?

What I aim to do with this essay is to examine the role of memory in the formation of an individual’s personal narratives. Personal histories, by definition, can only be constructed after the fact – we cannot remember what has not yet happened, and trying to do so is perhaps best called “conjecture,” or depending on how one spins that conjecture, “fear” or “hope,” which may well often (but not necessarily) be at odds with history. These personal narratives must also be reconciled with the histories of the communities with which the individual interacts – but how to best do this? As Elazar Barkan asks in the issue of American Historical Review current as of this writing, “Does constructing a ‘shared’ narrative mean giving equal time to all sides?”[6] How does the historian engaging in a self-reflexive historical study “[maintain] credibility and the appearance of historical impartiality[,]” particularly given the problem of memory and community?[7] Is it possible to “preserve the goal of not distorting the data to fit one’s conviction” when one is both subject and object of the study?[8] To put it another way, how can I, the individual historian, explore how I use my own memories to negotiate a place within the various communities I have had to exist in over the years, and in doing so “put the subjectivity of history not in the service of controlling or reversing the past, but rather to the delicate task of narrating the past in a way that enriches the present”?[9] How can I answer the question Why am I here? and know that I am in fact giving a truthful and complete answer and minimize the possibility of being self-serving, self-pitying, self-congratulating, and self-deceiving? What are the broader implications for the methodology of any historian of any period and any subject?

I seek to do this by constructing a narrative out of my memories that asks exactly the opposite question asked by most narratives. If “history binds itself strictly to temporal continuities, to progressions and to relations between things,”[10] then by unhooking memories from that continuity, perhaps it will create a space in which memory may be examined as memory rather than as a point along a progression. Therefore, rather than providing a series of events that prompts the reader to ask, “What happened next?” I will arrange the chronology of the account so that the reader instead should ask, “What happened before that?” I argue that narratives are in fact initially constructed by the narrator looking backwards in the first place; that is to say, for the historian, causality may only be seen in reverse. Foreshadowing is a literary device, not a historical method. We remember an event and muse about why it happened, prompting the recollection of a previous event to contextualize that one. The tapestry must be unraveled before it can be woven back into one piece; thus, the goal here is to examine the threads as they are pulled out – that is, before they are re-synthesized into a bigger picture.

Where possible, I will refer to primary sources – letters, diaries, blog posts, and other pieces of evidence from the period of my existence. Perhaps this will lead to an experience such as Timothy Garton Ash’s, where what I claim to remember now is different from what I claimed to remember then.[11] Where appropriate, I will also aim to provide a greater historical context, both in terms of the greater world as well as the state of the historical field contemporary with the events being described, seeking commentary and context from an issue of American Historical Review contemporary with the events being narrated, as well as other literature as necessary.

If I am answering the question Why am I here? then it is necessary to define what the question means, which to some extent involves an inventory of current memories and ways of constructing my identity. “Why” is a question that for present purposes will assume the current state of things as a telos, subsuming the question of “how” but also assuming the existence of some kind of impetus forward. “I” means a thirty-three year old man, married to another full-time graduate student, no children yet, living nearly three-quarters of the way across the country from where I grew up. “Here” means at the end of my first semester of graduate school as a matriculated, full-time student in the Department of History at Indiana University.

Prior to this semester, the memories closest at hand that appear relevant center on 20 February 2009, when I returned after lunch to my then-day job on campus as Office Services Assistant at the Archives of Traditional Music. Checking my e-mail, I discovered a message from Edward Watts titled “Re: Good news from the History department.” “Dear Richard,” Professor Watts wrote. “Congratulations! I am very happy that this has come to pass…”[12] Congratulations? Why? Wait – this was a response to something else, but what? I scrolled down, to find the original e-mail “Good news from the History department” from Wendy Gamber. “Dear Mr. Barrett,” the e-mail began. The key information was in the very first line:

Congratulations! I’m delighted to inform you that you have been admitted for graduate study to the History department with a multi-year funding package.[13]

Good news, indeed – my wife Megan was perhaps even more thrilled than I was, crying tears of joy when I told her – and it was only the beginning. I had also been admitted to the West European Studies M. A. program starting that semester, but I was still a part-timer, and WEST was more of a way to put a Master’s degree together out of the thirty-plus graduate credits I already had so as to not leave IU with a jumble of hours that could never transfer anyplace. Nonetheless, within a couple of weeks of History’s offer, WEST also awarded me a Foreign Language and Area Studies fellowship for both the summer and the next academic year, meaning I now had a funding package with two fellowship years, and I would be spending my summer in Greece. With all four of these possibilities having come to fruition – WEST, History, and both FLAS awards — I had an undeniable embarrassment of riches. No longer was I to be “de-territorialized,” a “diasporic [person] [rooted] physically in [his] ‘hostlands,’ but… [being] refused assimilation to [it].”[14] No, I now had unambiguous permission to make myself at home at Indiana University. My blog from March reads:

One way or the other, this has all been a rather stunning turn of events for me. Although my path has remained less-than-linear, to say the least, it’s been a real game-changer of a year, let me tell you. Δόξα τῷ θεῷ πάντων ἕνεκεν![15]

Four eggs, four hatched chickens. Ricardus est insufficiens petitor neque enim, Deo gratias.[16]

My employers were thrilled for me, and the sadness for everybody was that it was a position in which I had jelled nicely during the year I had been there – for me, a singular occurrence at Indiana University. On the 5th of June I left the Archives of Traditional Music for the last time, and I posted the following:

I’ve enjoyed what I’ve done and with whom I’ve worked, I leave on good terms with all of those people, I leave not having counted down the seconds till I could quit, and without anybody saying to not let the door hit me where the good Lord split me. To put it in show business terms, I’ve been able to leave ‘em wanting more, and in a good way. It’s a really nice feeling. I close this chapter excited to see what happens next, but sad to be leaving this behind. I am moving on to the next thing without desperation for perhaps the first time in my life.[17]

On 10 June 2009 I got on a plane and flew to Greece, returning on 5 August; orientation for the fall semester started on 24 August.

Teasing out the thread of memory a little further, I come to Thursday, 16 October 2008.

Before I explain the significance of this particular date, I must note that my original schedule for the fall semester of the 2008/2009 school year had me taking second-year Syriac and first-year Coptic. However, in a fit of despair over the apparent improbability that I would ever find a path that would make use of those languages, I traded those two courses for a single one: first-year Modern Greek.

J. B. Shank’s assertion in the October 2008 issue of American Historical Review that “approaching the notion of historical change through the notion of crisis is not entirely misguided”[18] does not exactly inspire confidence, but he nonetheless concludes the following:

Accepting that historians are not empirical natural scientists but practitioners of a particular kind of hermeneutical science, one with deep connections to storytelling, the question, then, is not whether they are warranted in deploying the concept of crisis at all, but rather the kind of deployment that is appropriate.[19]

Certainly, the outcome of my own crisis was a marked historical change for me. I discovered quickly that my study of Ancient Greek greatly accelerated my absorption of the modern vernacular, and that the coursework I already had could easily be applied to a Master’s degree in West European Studies. I would need perhaps two more classes and a thesis to finish the program. The Greek instructor, an earnest, supportive man looking for graduate students to help build a program, was more than encouraging of my application. I began to contact professors for recommendation letters.

Professor Watts’ response took me rather by surprise. He said yes, that he was happy to write me another letter, but had I considered re-applying to History? We made an appointment to meet and discuss the matter further, and so I found myself in his office the morning of 16 October.

I was up front with Professor Watts; I had not considered re-applying to History, since the faculty member who had spoken with me when I was rejected the first time had said rather unambiguously that I need not consider that an option. “Well, I know you now, Richard,” he replied. “I’ve taught you, and I know how you think. You’re far more sophisticated than you were when you first came to see me three years ago, and you’ve got a lot more that you can prove you have to offer. You’re plenty competitive now, and I will advocate for you as much as I can. I can’t promise anything, but I think it’s worth the fifty bucks for you to throw your hat into the ring.” He suggested that I talk with Professor Deborah Deliyannis about what we had discussed, so that she could know what to say in her letter of recommendation as well. When I met with her, she was very much on board; on the other hand, as accustomed as she was to having those conversations with me by this point, she teasingly referred to me as a “professional applicant.” I had to admit I knew what she meant.

A blog post from the end of that month makes the following reference:

I’ll wrap this up for now by saying that my application for West European Studies has been submitted, and that now it’s just a matter of my letters of recommendation rolling in. Hopefully I’ll know something soon. In the meantime, another option has come up in terms of a departmental home, and the person who suggested it did so unprompted. I don’t want to say much more about it for the time being. For right now I’ll just say that I’m flipping two coins, West European Studies and this other possibility, and we’ll see what comes up. Maybe both will come up heads, in which case I’m decidedly not opposed to leaving IU with more rather than less. Maybe both will come up tails, and I really will have to leave here with 30+ worthless graduate credits. We’ll see. Meanwhile, a near-annual conversation with a particular faculty member about said options has led to this person dubbing me a “professional applicant.” I suppose he/she isn’t wrong.[20]

The next thread of memory picks up seven months earlier, on 3 March 2008. Work was miserable, as was now the daily norm, with my support staff position in one of the campus recruitment offices having grown unbearably precarious over the previous year. I had not started looking for other jobs because I hoped to be a full-time student in the fall anyway. Still, e-mail had brought no good news yet, which meant that every day I checked the postal mailbox when I got home to see if bad news had come instead.

On this particular Monday, I flipped up the lid of the box on my front porch, and saw an envelope from the Indiana University Department of Religious Studies. I knew what it contained before I even opened it, and I almost threw it away still sealed rather than force myself to read the words.

“Thank you for your application…due to a high number of strong applicants…” I stopped there, crumpled it up, and tossed it in the trash. I sent a confused e-mail to the faculty member in Religious Studies who had encouraged me to apply, called in sick the next day, and started applying for other support staff positions.

I posted the following to my blog a week later:

So, Cheesefare Week, as noted earlier, started off with some bad news. I had been obliquely informed about a month ago that good news would come via e-mail, and bad news would come via postal mail; therefore, when I saw the envelope in my mailbox on Monday, I knew exactly what it contained before I even opened it. Bottom line: I will not be a matriculated graduate student this fall. Ricardus est insufficiens petitor.

Exactly what is next for me is unclear. I was instructed to thank God for keeping me from going down this path since He obviously has something better in mind for me, so I’ll start there. There are some well-placed people who have told me they absolutely believe I can do this and want to talk about what happened and what they think I can do from here; I’m more than happy to listen, but in the meantime, I am beginning to consider what my other options are, up to and including the possibility that, being 31, perhaps my window of opportunity just isn’t open anymore.[21]

The next month was a series of very understanding nods and deep sighs from the well-meaning people who had written my letters for this application. What I tended to hear, including from the faculty member who had suggested that I would be welcomed with open arms in the first place, was that whatever impression I might make in class, whatever my grades and test scores were, whatever my letters might say, the details of how I looked on paper were problematic, at least as far as an admissions committee for a humanities program at a big liberal arts university was concerned. “If you spoke to our Director of Graduate Studies right now, she’d probably sound a lot like History did a couple of years ago,” one person told me. “You’re just going to have to go someplace where they aren’t freaked out by a music degree,” said another. I recount one of these conversations in my blog:

So, I had a conversation a couple of days ago with one of the people who wrote letters of recommendation for me. This person wasn’t directly involved with the admission process, but had knowledge of what had happened, and was pretty up front with me about it. I wasn’t told anything I hadn’t already figured out, but this person remained encouraging, and had some concrete suggestions about better paths for me.

The bottom line seems to be this — there’s not really a way to make me look like a conventional applicant on paper… It’s one thing for faculty members to say, “Well, he doesn’t fit in this particular box, but he’s very capable, he’s a known quantity and has proven himself,” but when it comes down to having to make hard decisions, admissions committees have to look at me and say, “He may be capable and a known quantity, but he doesn’t fit into the same box as everybody else we’re admitting.” Without a liberal arts undergraduate degree, my application goes into a different pile than those who do, and that’s not the pile which makes it to the next round of cuts, regardless of my other qualifications. There was the hope on the part of those who supported me that I would be able to transcend these limitations, but sheer numbers did not allow for that.

As I said, this wasn’t anything I hadn’t already figured out. Two years ago I was told what ducks I needed to get in a row for grad school, but the person giving me this advice also said, quite bluntly, “Even then, if it’s somebody like me reading your application, you’re not going to have a lot of luck.” With a non-liberal arts background, plus the fact that within five seconds it becomes clear that it took me eleven years to finish a four year degree (i.e., I was a dropout), I was told, my letters of recommendation appear to be talking about a totally different person and can’t be seen as reliable. The person I was talking to on Tuesday told me that, unfortunately, all of that may be harsh, but it is not necessarily wrong, particularly when a humanities department is faced with more graduate applications than they’ve ever had before. “The reality is, we’re admitting people who have the option to turn us down to go to Princeton, Yale, Duke, and Columbia,” I was told. There is also the issue that my particular academic interests are generally more specifically addressed at religiously affiliated institutions, not big liberal arts universities. Being a “non-traditional applicant” combined with my interests being, in the long run, not the greatest fit in the world for how things are done here, and the work I’ve done over the last couple of years simply does not level the paper playing field.

So what will? In an ideal world, my interests would have been identified, encouraged, and fostered during my early teens, I suppose, but this isn’t what happened, and in the woeful absence of a Time-Turner, I must find a different path.[22]

My employment situation reached its nadir towards the end of the same month; thankfully, I was offered another position just as that crisis peaked, and I started at the Archives of Traditional Music on 21 April 2008.

Among my duties at the Archives was scheduling use of a meeting facility in Morrison Hall known as the Hoagy Carmichael Room. On 23 April, I received an e-mail from Debra Melsheimer, graduate secretary in Religious Studies, cancelling one of the two reservations they had for the room during the coming Fall Orientation. “Since we will have no ‘new’ incoming graduate students for the AY 2008-09 we will only need to hold one (1) meeting time…”[23] I politely confirmed the cancellation and angrily forwarded the e-mail to friends of mine in the department, asking if they knew what was going on. In short, everyone to whom they had made offers was, as I was told, a prospect who could turn them down for schools such as Yale and Columbia, and that is exactly what all of them did. Unfortunately, nobody turned them down in time for the department to be able to make other offers.

Thursday, 19 January 2006 is the next point along the timeline to which my memory turns. I had graduated from the Indiana University School of Music with my B. Mus. the month before at the age of twenty-nine, having taken eleven years to finish a four-year degree. Throughout my final undergraduate semester, my focus took a wild turn away from the operatic career I had come to Indiana University in 2003 to pursue. I spent the term embracing my new identity as a scholar who happened to sing rather than a singer who liked to read, and my course on Early Music History gave me plenty of opportunities for this – as did an undergraduate survey course on Medieval History taught by Professor Deborah Deliyannis. Much of the personal reading I had done over the past three years came in handy in both classes, to say nothing of the experience of the Eastern Orthodox liturgical cycle (enhanced by taking on choir directing duties the previous summer). Perhaps my areas of interest and the approach I found myself taking meant that I was complicit in “failing to break the grip of a history that roots humanity’s origins in Mesopotamia some 6,000 years ago” at a moment when there was “an intellectual and moral imperative”[24] not to fall into that trap, but so be it.

A conversation one day with Prof. Deliyannis led both of us to the conclusion that if I was looking for a post-opera path, perhaps History was the way to go. She said she was willing to write a recommendation, and she thought that it would probably be no particular trouble to admit me as a terminal Masters student, given what she had seen in class. She suggested I talk to Professor Ed Watts, and also said it would help if I could find a summer Latin program somewhere, but encouraged me to go ahead with the particulars of the application.

I took the GRE. I asked for letters of recommendation from the instructors for my more academic courses in the School of Music. I met with Prof. Watts. I found a summer Latin program at University College Cork in Ireland. I submitted my application to the Department of History.

Graduation came and went, as December graduations do. In January, I took a job as a bank teller, figuring I wasn’t going to be there past June if I was going to Ireland for the summer.

Then, an e-mail from Prof. Deliyannis came, strongly suggesting that I set up a meeting with a particular professor regarding my graduate application. I entered this person’s office on 19 January with knots in my stomach, knowing that this likely was not a promising development.

In short, what I heard was, “I don’t think you can get there from here.” Prof. Deliyannis had meant well, I was told, but was unfamiliar with the particulars of how the History department handled graduate applications. In the first place, History did not offer a terminal Masters. In the second place, History did not admit anybody they did not fund. In the third place, whatever my letters might have said about me and whatever my grades and test scores were, a B. Mus. simply could not be given the same weight as a B. A. and thus my letters and my grades could not be taken as seriously as they might be otherwise. In the fourth place, I needed at least some Greek and Latin before I could be admitted.

So what do I do? I asked.

“If I were you, I’d take classes as a non-matriculated student for a couple of years, and then apply elsewhere,” this person told me, stressing the word. “If it’s somebody like me reading your application, there’s very little you’re going to be able to do to make yourself competitive here.”

I left that office devastated (to say nothing of late for work). I had no idea that History would be so fundamentally different from the School of Music, where essentially the non-funded students paid for the funded students. Well, there was nothing for it; if I had to make myself a better applicant on paper, then that was exactly what I would do. By June I had found an on-campus job that had a tuition benefit, and in the fall of 2006 I started first-year Ancient Greek.

A letter I wrote to a friend at the end of February 2006 provides this account:

I graduated in December. It only took me eleven years to finish a four year degree, and I am now sufficiently B.Mus’d (bemused). I don’t know exactly what’s happening with me next; I’m not doing another music degree here, and it frankly seems unlikely that I will be doing another degree at Indiana University, period. Megan’s program is opening all kinds of doors for her; she’s spending seven (paid) weeks in Germany this summer, she starts her PhD in the fall, and so on, but all of my attempts to figure out something useful to do in this environment have failed miserably. Medieval History seemed like a quite likely candidate (and it still does, just not here); I made a wonderful impression on a professor in a non-School of Music class last semester, and she started recruiting me. It seemed like a good fit (and still does), given my natural interests and proclivities, and I was able to get some strong letters of recommendation. Well, I can’t say that I know exactly what happened, except that in January, I was suddenly whisked into the office of somebody higher up in the food chain of the History department, who in no uncertain terms told me that the department’s interest in me had been vastly oversold, and that I needed to look at ways that I could make myself an attractive candidate “someplace else.” Like I say, I don’t know exactly what happened; the most I could get out of this person was that my recommendations didn’t really match the background my transcript showed, and that the recommendations aside, I’m just not competitive “on paper” as far as they’re concerned, coming from a music background. It rather came across as, “On paper, you look like an intellectual lightweight trying to change fields on somebody else’s dime.” What the professor who had been recruiting me said was, “I know what you’re capable of, I know your abilities, I know how you think and how you work, and I think you’re plenty competitive—but it’s not up to me.” I don’t know if, at the end of the day, my research interests…just didn’t match up well enough with those who actually had power to make decisions, or if this was more of an internal political conflict, or what. The plan of action from here, insofar as there is one at present, is to take a class or two a semester as a non-matriculated student for the next two or three years, and then when Megan is done with her coursework and exams, we can try to find a program where I can do my graduate studies and she can do post-doctoral work. I have to say, after the humiliating disappointment of my three years at the School of Music, this whole thing really took out of me whatever wind I had left in my sails.[25]

As we get farther away from the immediacy of the present, however, my memory is increasingly, but less intentionally, elided. The same letter also contains this section:

To briefly sum up the various happenings of the last nine months… I am not at St. Vladimir’s. The idea was always that it would be fall of 2006 anyway, not fall of 2005, but that is not likely at this point. Perhaps fall of 2009 or 2010. In short, I visited there in October and loved it. Everything about the place impressed me—the location, the faculty, the campus life, the educational environment, the pastoral approach, and so on. Most especially, the centrality of the chapel in the rhythm of campus life just blew me away. However, two things happened—first, Megan, quite correctly and justly, decided that she was enjoying teaching and did not want to walk away from the remaining three years of her funding. Second, every person I talked to at St. Vlad’s gave me the same advice: wait as long as you can before coming. The answer was motivated in different ways by different people—the liturgical music professor said that they’re revamping the program so that it is aimed more towards people with a solid musical background, but that it’s going to be another five years or so before they get there. The dean of students said that spiritual maturity was going to be vital to one’s survival and education in that environment, and that a few years’ worth of time for things to settle would only help me. A student told me, “They will challenge everything you think you know, and your faith will need to be solid as a rock to withstand it. Let as much water run under the bridge as you can manage.” Excellent advice, all of it. I took it to heart, and combined with my wife’s circumstances, I hope to wind up there at some point in some capacity, but it won’t be this next year.[26]

Reading this section of the letter, I remember an entire series of events surrounding a campus visit to Saint Vladimir’s Orthodox Theological Seminary in October of 2005 and the exploration of the possibility of the priesthood. It was a trip that seemed seminal and exciting, one that pointed an unmistakable way forward but that rather spectacularly came to nothing in the end. Even so, one would think I would at least remember it without prompting, if only for the purposes of a footnote; yet it does not occur to me to remember it until I am faced with its record.

Tracing backwards from there, I am led to 13 February 2005. It was my fourth semester at the School of Music, and my penultimate term as an undergraduate. I had auditioned for the Masters program in Vocal Performance and was admitted, and I was still waiting for word on my funding to come through. My audition was good; it showed clear improvement during my time here in terms of range and musicality, and there was the matter of my article in The Journal of Singing making me the first School of Music person in some years to publish in the professional publication for voice teachers. At the beginning of March I was traveling to New York for the first time, having been invited to audition for the Metropolitan Opera’s Lindemann Young Artist Development Program, and I felt like I was singing well enough to feel good about such an opportunity.

It was in the midst of these hopeful circumstances that my wife and I converted to Eastern Orthodox Christianity on the second Sunday of the month, the day before Valentine’s Day. Our first confessions were heard, we were anointed with oil, and we received Communion for the first time. With some irony, the only family either of us had in attendance was my decidedly atheist father.

Larry A. Braskamp suggests that students’ interest in religion represents a “[search] for meaning and community… [which] often leads them away from the organized religious practices and beliefs of their past, [but] is… [nonetheless] a journey toward a more complex spiritual and religious identity.”[27] I can agree with him, but only partially; the “more complex spiritual and religious identity” in this case was far more organized and equipped with beliefs and practices than those of our past. It was amidst the collective memory of the Christian East, a memory both inscribed and incorporated, that I was faced with the “essential historicity of Christian religion,”[28] and I saw – or was shown? – that I had no other option than to find that compelling.

A handwritten diary records the following:

He [the priest] very nearly forgot to anoint our ears. Somewhat ironic, given what we do.

I am fighting a cold and sore throat, so I wound up not singing at all. However, several parts of the homily stuck with me. Deacon Lawrence preached, and he quoted several Church Fathers on the matter of choosing God’s will over one’s own. There are three options, one wrote. God’s way, our way, and the Devil’s way. The man who has not chosen God’s way is somebody who will clearly be ill at ease, who will find everything to be not right, who will not truly be at peace with anyone. That simply describes Dad to a “T”, I’m sorry to say.

Communion was very nearly over before it began; I carried my chrismation candle up to the Chalice with me, which was a touch awkward, and the spoon was in and out of my mouth before I really realized what had happened. No neon signs flashed in the sky, and truth be told… I didn’t need them to.

[…] While describing to Dad the night before just what he’d be seeing, it occurred to me that in many ways it would look like our wedding—we’d process to the front of the church, we’d answer questions, take some vows, and have jewelry put on us. As it worked out, gifts were also another similarity. We both now wear crosses that were given to us by our sponsors; my mother gave us a lovely pewter candle-snuffer; several of our friends made donations to All Saints’ building fund in our honor; the parents of our friend Benjamin also gave us a large ceramic pigeon we’ve named “Melvin”.[29]

It was after this affirmation and proclamation of faith that everything fell apart. The School of Music offered me less financial support than I had received as an undergraduate. My voice teacher pleaded with whomever he could, but the most they would do was restore it to my undergraduate level, and they indicated to him that I should feel grateful for that. My New York trip was a fun first visit to the Big Apple, but that is all it ended up being – the audition yielded nothing. The final nail in the coffin was the audition for the fall’s operas, where, inexplicably, I simply had no high notes anymore. If I were to take my newly professed faith seriously, it would appear very much that God was closing the doors through which I was not supposed to venture.

Thinking that perhaps I could still stay in Music, I spoke with faculty members I knew in Musicology and Choral Conducting. Perhaps these would both be disciplines where my faith and how I practiced it could inform what I did without needing to be fundamentally challenged. As I would be studying specific practices rather than institutions or development of particular beliefs, I hoped that somehow I could be free of questions of “What is ‘religious’? How do we align our definitions with those of the persons we study? Where do we draw the disciplinary boundaries of ‘religious history’?”[30] Both departments told me the same thing, however – we would love to have you, they said. You would be a natural fit in either program. Unfortunately, we have no money at the Masters level, and if you come in as an unfunded student, it would hurt your chances of getting funding at the doctoral level.

Another letter from me to a friend reports the following:

Given how the graduate funding issue shook out, I decided to not accept the slot in the Master’s program here… [B]eing on the cusp of my thirties (having turned 28 this last November)… I do not feel like I can responsibly continue going into an indefinite amount of debt for an indefinite amount of time, while having no solid career prospects on the table. That raised the question, however, what are the implications of that for my career path? The blunt reality is that I don’t really have a career path at this stage of the game. By March of this year, in every respect, it had become quite plain to me that I could not, realistically, get “there” from here.[31]

The last (or is it the first?) of my memories to be strung along this thread is Saturday, 11 June 1994, the day of my high school graduation, when I was seventeen years old. There was so much to do, and the plan for the weekend had been formulated along very strict lines. Dad would fly in from Anchorage on Friday, the ceremony would take place on Saturday, a family celebration would follow, and then he and Mom would fly to Anchorage together on Sunday, leaving Seattle for good. On Monday, I would supervise the movers as they packed up our house. A couple of weeks after that, I would fly to Anchorage myself, returning in the fall to start college at Western Washington University.

Dad had returned to Anchorage in the fall of 1993 to see if his luck might be better in the place where he had made his fortune to begin with; the idea of things getting any worse in Seattle was terrifying. His money had been too tied to Alaska for it to survive the so-called 1986 Oil Price Collapse,[32] and he had never sufficiently planted professional or financial roots in the Pacific Northwest to ride out the crisis. Since taking a $100,000 loss on the house in 1988 – to say nothing of having to sell virtually everything else that was not nailed down – we had bounced more or less annually from rental to rental, each one less expensive than the previous, hoping that somehow things would turn around. Unfortunately, the rise of big-box stores like Office Club, Office Depot, and Costco[33] was making it very difficult for the small office supply retailer to be competitive. Ironically, if we had just been able to hold on to the house for another six months, we would have caught the beginning of the suburban real estate boom in Seattle.[34] After five years of struggling unsuccessfully to make it work, returning to Alaska seemed to him to be the only option. My mother stayed behind so that I could finish high school where I had started, and since I was going to be starting college anyway, it seemed like a natural break.

The plan was executed neatly and efficiently, point by point. Dad flew in on Friday, I walked on Saturday, and they left together on Sunday. I drove my parents’ car from the airport back to our townhouse, which in its strewn-with-boxes state was no longer really “ours” except that I still lived there for one more day, and went to sleep that night as its sole occupant. The next day, the movers came. In the late afternoon I watched them drive the truck away with everything we owned in it, including my parents’ car. With it went any sense I had of any particular place being “home”; my parents now lived someplace I did not, and while it did not follow that I now had a new home as well, my old home (which itself had only been “ours” for nine months) was no longer mine to occupy. While this was surely not exactly what Steven Ruggles had in mind when he made his argument, I could nonetheless have independently confirmed his thesis that “a rise in economic resources of the elderly” – to use the term broadly – “…would have resulted in an increase of residence with kin[.]”[35] His larger argument that “the past century has witnessed a radical transformation of residential preferences [of families]”[36] was surely something to which I could also attest, having experienced the bizarre reversal of growing up, graduating high school – and having my parents move away.

Much as I began by observing that I cannot remember this project’s completion and outcome while still working on it, I must end with something else of which I can have no memory and must rely on the memories of others – or, rather, my memory of their memories. (Or, even more to the point, my claim to remember what they claim to remember.) My parents met in Anchorage, Alaska, in 1974. In the same year, Lynn White, Jr. wrote that “[p]eople are organized… by the basic presuppositions – often unverbalized – that they share: their axioms.”[37] If this is true, then perhaps it is no real surprise that my parents were always disorganized. Both had been born in Alaska, but there the similarities effectively ended. Dick, my father, was the second-youngest of a large merchant-class family; his own father, Jack, had owned the first Piggly Wiggly grocery stores in Alaska.[38] My mother was from a working-class family; her father was a truck driver for a dairy (ironically, a supplier to my other grandfather’s stores). The family business, of which my father was president, was Barrett Office Supply, a thriving supplier of office furniture in an economic environment fueled, as it were, by the 1968 discovery of oil in Prudhoe Bay on the state’s North Slope.[39] My mother had started working as a receptionist for Barrett Office Supply at nineteen, shortly after her first divorce; my father was also recovering from his own first divorce.

The ferment of post-1960s sexual mores being what it was, it seemed like a good idea to the office to try to cheer Dick up by sending him to Hawaii – with Shirley, with whom he had never exchanged more than a few intimidating (by her recollection) words. It is tempting to digress here into an examination of how the supposed liberation from the bourgeois repression of sexuality, in reality, set up an environment in which it was acceptable for a man and a woman to cede sexual agency and be “drafted,” more or less, into a sexual relationship which not only resulted in consequences not intended by the “drafters” (such as this author), but also in which were located many axes of power – an eight-year age difference, inequality of family status, a status/power difference at a mutual place of employment, income disparity, and so on.[40] However, this would not further the discussion of the main point at hand. Suffice it to say that shortly after their return from Hawaii, they moved in together. In 1976, the year the French-language edition of Michel Foucault’s The History of Sexuality: An Introduction was first published, a second trip to Hawaii in February led to my birth on 21 November. They were married on 13 May 1977 – a Friday the 13th, incidentally, and just over a month before the first barrel of oil would be pumped from Prudhoe Bay into the newly completed Trans-Alaska Pipeline on 20 June 1977.[41]

Such is the chain of memories upon which I draw to answer the question “Why am I here?” – but do any of them really answer the question? Can these disparate pieces actually be synthesized into a historical argument, or do they represent a draft of Richard Barrett, The Early Years: A Reader? Can I actually answer that question myself, or will it require a later historian to assemble the fragments into a mosaic? Would the picture that historian might assemble look like anything I would recognize as my own life? Is it my responsibility to remember in a way convenient for the historian? If the references to American Historical Review as a sort of historian’s Greek chorus show anything, it is that how I remember things and what the discipline of history would like me to do with those memories are not always the same thing – for example, whether or not historians are entirely comfortable with the word “crisis” does not impact my experience of an event as a crisis. Good historians analyze memories, better historians synthesize them, but it does not follow that what they (or we, as I must remind myself) need to accomplish those tasks will instill a sense of obligation in the individual recounting their own memories to remember the way the historian would find ideal.

What forward-looking narratives might be assembled from these pieces, anyway? Am I here because God ordained it? Am I here because of how fluctuations in oil prices in the mid-1980s interacted with suburban expansion? Am I here because of how the Sexual Revolution manifested itself in Anchorage, Alaska? Am I here because intense feelings of abandonment led me to seek out community and identity in a highly structured religious environment, rich in traditions and practices that lend themselves to study? Am I here because my great-great-grandfather won at Palmetto Ranch? Am I here because I just plain was too dumb to know when to give up? All of these things? None of them? Even if I, as a Christian, lean towards the first of those explanations, it is incumbent upon me to remember that “the purpose of a historical understanding is not so much to detect the Divine action in history as to understand the human action, that is, human activities, in the bewildering variety and confusion in which they appear to a human observer.”[42]

Can I be trusted to be reliable with my own memories? I already know that some of the other parties involved recount some of the same events differently from how I do, but that does not change how I remember those events. In any case, it should be clear that I have elided, compressed, omitted, selectively emphasized, and otherwise edited my memories for public consumption, even if I have not done so intending to mislead. There is the matter of the abandoned pursuit of the priesthood, which I had entirely forgotten to remember until I saw my own words describing it. Did Professor Deliyannis really “recruit” me in 2005, as I told my correspondent? I thought that word appropriate at the time, but now I am not certain. It is unambiguously fitting for Professor Watts, who suggested of his own volition that I apply, and there is the not insignificant matter that I was actually admitted this time around. As well, the memories presented here certainly do not answer all questions about everything, and substantial gaps are left. The simple fact is that to “tell everything” can only be a pretty-sounding fiction.

Even within the convention I have attempted to follow of presenting memories in the order in which I access them, it is still necessary to contextualize and construct and narrate, to tell things in the order of before and after at least to some extent, in order for them to make sense. To the extent that there is such a thing as “purity” of memory, it would seem that it might only be preserved as long as the memory does not need to be communicated to anybody else. That this problem begins to touch upon and intertwine questions of both an epistemological and an ontological nature greatly concerns me, but what is to be done about it? It suggests that I can only truly know what I think I know as long as I have no need to pass it on, in which case it becomes what I claim to know and must be held in suspicion by, above all, myself. However, if I cannot exist without some need to communicate with others, then what I think I know is constantly in tension with what I am, or perhaps what I need to be. The problem here is that I am neither philosopher nor theologian; I cannot dwell on such questions for too long without my head starting to hurt. I know what I know, and I remember what I remember, or at least I think I do. What can I say except “That’s my story, and I’m sticking to it”?

“The power of memory is great, exceedingly great, O God, a large and limitless inner hall,” writes St. Augustine. “Who has come to its foundation? Yet it is a power of this my soul, and it belongs to my nature, but I myself do not grasp all that I am.”[43] Maybe I am unable to answer the question “Why am I here?” I can produce my memories of what I think are the relevant events that led up to my being here, but I cannot myself yet see beyond the present moment sufficiently to synthesize those events into a meaning. Perhaps it is also telling that the greater the distance from the event being remembered, the easier time I have putting that event into a historical context – thus, again, I am too close to now to be able to see it in perspective. If I am so unfortunate as to draw the attention of another historian – or worse, a biographer – down the road, then perhaps that person will be able to construct a forest for the trees.

Which is all to say – that’s my story, and I’m sticking to it.

Works Cited

Ames, Christine Caldwell. “Does Inquisition Belong to Religious History?” American Historical Review 110, no. 1 (2005): 11-37.

Ash, Timothy Garton. The File. New York, NY: Random House, 1997.

Augustine of Hippo. Confessions.

Barkan, Elazar. “A. H. R. Forum: Truth and Reconciliation in History. Introduction: Historians and Historical Reconciliation.” American Historical Review 114, no. 4 (2009): 899-913.

Barrett, Richard. Letter, 26 February 2006.

———. Letter, 15 May 2005.

———. “13 February 2005.” Personal diary. Bloomington, Indiana, 2005.

———. “Counting Hatched Chicken #4.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

———. “Counting Hatched Chickens, Nos. 1-3.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

———. “In Which the Author Finds Himself Intentionally, Joyfully, and yet with a Tinge of Sadness, Unemployed.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

———. “More on the Alleged Plurality of Means by Which One May Remove Flesh from a Feline.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

———. “On Forgiveness Sunday, the Alleged Plurality of Methods by Which One May Relieve a Feline of Its Flesh, and Other Musings.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

———. “Things You Think About When You’re Trying Not to Fall.” In Leitourgeia kai Qurbana: Contra den Zeitgeist. Bloomington, Indiana: WordPress, 2009.

Braskamp, Larry A. “The Religious and Spiritual Journeys of College Students.” In The American University in a Post-Secular Age, edited by Douglas Jacobsen and Rhonda Hustedt Jacobsen, 117-34. New York, NY: Oxford University Press, 2008.

Connerton, Paul. How Societies Remember. Cambridge, England: Cambridge University Press, 1989.

Florovsky, Georges. “The Predicament of the Christian Historian.” In Religion and Culture: Essays in Honor of Paul Tillich, edited by Walter Leibrecht, 140-66. New York, NY: Ayer Publishing, 1959. Reprint, 1972.

Foucault, Michel. The History of Sexuality: An Introduction. Vol. 1. New York: Vintage Books, 1978. Reprint, 1990.

Gamber, Wendy. Electronic mail, 20 February 2009.

Hunt, Jeffrey. The Last Battle of the Civil War: Palmetto Ranch. Austin, Texas: University of Texas Press, 2002.

Koepp, Stephen. “Cheap Oil!” TIME Magazine, 14 April 1986.

Lee, In. “Office Depot’s E-Commerce Evolution.” International Journal of Cases on Electronic Commerce 1, no. 2 (2005): 44-56.

MacMillan, Margaret. Dangerous Games: The Uses and Abuses of History. 2nd ed. New York, NY: Modern Library, 2008. Reprint, 2009.

Marx, Karl. “The Eighteenth Brumaire of Louis Bonaparte.” In The Marx-Engels Reader, edited by Robert C. Tucker, 594-617. New York: W. W. Norton and Company, Inc., 1978.

Melsheimer, Debra. Electronic mail, 23 April 2008.

Naske, Claus-M., and Herman E. Slotnick. Alaska: A History of the 49th State. 2nd ed. Norman, OK: University of Oklahoma Press, 1994.

Nora, Pierre. “Between Memory and History: Les Lieux De Mémoire.” Representations no. 26 (1989): 7-24.

Parra, Francisco R. Oil Politics: A Modern History of Petroleum. London: I. B. Tauris, 2004.

Peirce, Neal, Curtis W. Johnson, and Betty Jane Narver. “The Peirce Report: 1. Congestion and Sprawl: A Thousand and One Delayed Decisions Are Taking Their Toll, and Environmental Time Is Running out Fast in Puget Paradise.” The Seattle Times, 1 October 1989.

Ruggles, Steven. “The Transformation of the American Family Structure.” American Historical Review 99, no. 1 (1994): 103-28.

Shank, J. B. “A. H. R. Forum. Crisis: A Useful Category of Post-Social Scientific Historical Analysis?” American Historical Review 113, no. 4 (2008): 1090-9.

Smail, Dan. “In the Grip of Sacred History.” American Historical Review 110, no. 5 (2005): 1337-61.

Spiegel, Gabrielle M. “Presidential Address: The Task of the Historian.” American Historical Review 114, no. 1 (2009): 1-15.

Watts, Edward. Electronic mail, 20 February 2009.

White, Lynn, Jr. “Technology Assessment from the Stance of a Medieval Historian.” American Historical Review 79, no. 1 (1974): 1-13.


[1] Karl Marx, “The Eighteenth Brumaire of Louis Bonaparte,” in The Marx-Engels Reader, ed. Robert C. Tucker (New York: W. W. Norton and Company, Inc., 1978), 595.

[2] Margaret MacMillan, Dangerous Games: The Uses and Abuses of History, 2nd ed. (New York, NY: Modern Library, 2008; reprint, 2009), 44.

[3] Pierre Nora, “Between Memory and History: Les Lieux De Mémoire,” Representations, no. 26 (1989), 8.

[4] It is not relevant to the discussion – or is it? – but I was at the Luxury Alderwood Theater in Lynnwood, Washington, seeing the movie Batman on its opening day.

[5] Paul Connerton, How Societies Remember (Cambridge, England: Cambridge University Press, 1989), 72-104.

[6] Elazar Barkan, “A. H. R. Forum: Truth and Reconciliation in History. Introduction: Historians and Historical Reconciliation,” American Historical Review 114, no. 4 (2009), 903.

[7] Ibid., 908.

[8] Ibid.

[9] Ibid., 913.

[10] Nora, “Between Memory and History: Les Lieux De Mémoire,” 9.

[11] Timothy Garton Ash, The File (New York, NY: Random House, 1997), 9-11.

[12] Edward Watts, Electronic mail, 20 February 2009.

[13] Wendy Gamber, Electronic mail, 20 February 2009.

[14] Gabrielle M. Spiegel, “Presidential Address: The Task of the Historian,” American Historical Review 114, no. 1 (2009), 12.

[15] Richard Barrett, “Counting Hatched Chickens, Nos. 1-3,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009). The Greek means, “Glory to God for all things!”

[16] ———, “Counting Hatched Chicken #4,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009). The Latin means, “Richard is no longer an unworthy applicant, thanks to God.”

[17] ———, “In Which the Author Finds Himself Intentionally, Joyfully, and yet with a Tinge of Sadness, Unemployed,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009).

[18] J. B. Shank, “A. H. R. Forum. Crisis: A Useful Category of Post-Social Scientific Historical Analysis?,” American Historical Review 113, no. 4 (2008), 1096.

[19] Ibid., 1097.

[20] Richard Barrett, “Things You Think About When You’re Trying Not to Fall,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009).

[21] ———, “On Forgiveness Sunday, the Alleged Plurality of Methods by Which One May Relieve a Feline of Its Flesh, and Other Musings,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009). The Latin means, “Richard is an unworthy applicant.”

[22] ———, “More on the Alleged Plurality of Means by Which One May Remove Flesh from a Feline,” in Leitourgeia kai Qurbana: Contra den Zeitgeist (Bloomington, Indiana: WordPress, 2009).

[23] Debra Melsheimer, Electronic mail, 23 April 2008.

[24] Dan Smail, “In the Grip of Sacred History,” American Historical Review 110, no. 5 (2005), 1361.

[25] Richard Barrett, Letter, 26 February 2006.

[26] Ibid.

[27] Larry A. Braskamp, “The Religious and Spiritual Journeys of College Students,” in The American University in a Post-Secular Age, ed. Douglas Jacobsen and Rhonda Hustedt Jacobsen (New York, NY: Oxford University Press, 2008), 133.

[28] Georges Florovsky, “The Predicament of the Christian Historian,” in Religion and Culture: Essays in Honor of Paul Tillich, ed. Walter Leibrecht (New York, NY: Ayer Publishing, 1959; reprint, 1972), 141.

[29] Richard Barrett, “13 February 2005” (Bloomington, Indiana, 2005).

[30] Christine Caldwell Ames, “Does Inquisition Belong to Religious History?,” American Historical Review 110, no. 1 (2005), 13.

[31] Richard Barrett, Letter, 15 May 2005.

[32] For a contemporary account of the issue, see Stephen Koepp, “Cheap Oil!,” TIME Magazine, 14 April 1986.

[33] For background on Office Depot and Office Club as an example, see In Lee, “Office Depot’s E-Commerce Evolution,” International Journal of Cases on Electronic Commerce 1, no. 2 (2005), 45.

[34] For a contemporary account of the economic situation in the greater Seattle area in the late 1980s, see Neal Peirce, Curtis W. Johnson, and Betty Jane Narver, “The Peirce Report: 1. Congestion and Sprawl: A Thousand and One Delayed Decisions Are Taking Their Toll, and Environmental Time Is Running out Fast in Puget Paradise,” The Seattle Times, 1 October 1989.

[35] Steven Ruggles, “The Transformation of the American Family Structure,” American Historical Review 99, no. 1 (1994), 126.

[36] Ibid., 127.

[37] Lynn White, Jr., “Technology Assessment from the Stance of a Medieval Historian,” American Historical Review 79, no. 1 (1974), 1.

[38] Jack’s own grandfather (my great-great-grandfather) was something of a footnote in Civil War history, being Colonel Theodore H. Barrett of the Union, commander of the 62nd U. S. Colored Infantry Regiment, winner of the Battle of Palmetto Ranch, the final conflict of the War Between the States – over a month after Appomattox. See Jeffrey Hunt, The Last Battle of the Civil War: Palmetto Ranch (Austin, Texas: University of Texas Press, 2002).

[39] For a brief overview, see Francisco R. Parra, Oil Politics: A Modern History of Petroleum (London: I. B. Tauris, 2004), 269.

[40] Michel Foucault, The History of Sexuality: An Introduction, vol. 1 (New York: Vintage Books, 1978; reprint, 1990), 120-7.

[41] Claus-M. Naske and Herman E. Slotnick, Alaska: A History of the 49th State, 2nd ed. (Norman, OK: University of Oklahoma Press, 1994), 265.

[42] Florovsky, “The Predicament of the Christian Historian,” 166.

[43] Augustine of Hippo, Confessions, Book 10, ch. 8. Translation mine.

Embracing paleostructuralism

It is late afternoon on Wednesday, and I have somehow managed to accomplish everything I needed to accomplish by this time. On Friday, this seemed like a goal that was unattainable, so I am reasonably pleased.

Somebody mentioned to me this last Saturday, “I occasionally read your rants against post-structuralism.” It had not been explicitly discussed in class that Foucault and company actually constitute an “-ism”, so I’m sure I was a deer in the headlights for a second while I figured out what my friend meant. Flesh of My Flesh has been explicitly exposed to more theory than I have, so I’ve been hearing about the supposed difference between signifier and signified for some time, but again, that this movement had a name was new information for me. A couple of things clicked once I understood the label; this is the same friend who a few years ago overheard me saying that it made no sense to me to read modern ideas of sexual equality and identity into texts for which those ideas would be anachronistic, and consequently chided me for “not believing in gender theory,” adding, “Applying theory is not ‘reading something into’ anything. That’s just you having an ideological problem.”

For all I know, maybe he’s right. He’s in the English department, and maybe there’s a way these things actually make sense from the standpoint of literature. Maybe, too, this is the difference between a “scholar” and an “intellectual” — I do not give a fat, furry, flying rat’s hindquarters about theory. I have not entered an academic discipline because I am interested in the “isms” which seem to plague the humanities right now. (I am told that “thing theory” was rather well-represented at last week’s Byzantine Studies Association of North America conference, which makes me want to tear out my own teeth with a rusty screwdriver.) I have entered an academic discipline because, funny and naïve and idealistic as it may sound, I am actually interested in, and even like, my subject of study.

What does that make me? A paleostructuralist? If so, then so be it. (“Paleostructuralist” sounds cooler and more dignified than “anti-post-structuralist” anyway.)

I still have more to write on Foucault in this space, but it’s going to have to wait a bit yet while I finish some other things. In the meantime, my most recent (and last) response paper for my “Introduction to the Professional Study of History” course starts to sketch out some of the thoughts that will show up there. Certain elements will be no surprise to those who visit here somewhat regularly; there are a couple of moments where it will be evident that I just got through watching all of Christopher Nolan’s movies in chronological order (which merits its own post); and the couple of somewhat coy suggestions that certain things should be discussed elsewhere will be developed in my final paper for this course.

The Safe Retreat into Omniscient Third-Person:

The Problem of Historicizing Oneself

Or

A Response to Kate Brown’s “A Place in Biography for Oneself”

(As Well as a Number of Other Bits and Pieces from the Fall 2009 H601 Course)

“Historians,” writes Kate Brown in her essay “A Place in Biography for Oneself,” “expose other people’s biographies, not their own.”[1] How can this be, however, when according to Marx, “[m]en make their own history” [2]? How, ultimately, may historians be their own agents of history while being true to their own profession? How might historians assume the first person voice in their own work, that is to say, our own work, or still more to the point, my own work – honestly?

To expand Marx’s quote, men make their own history, “but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly found, given and transmitted from the past.” Brown certainly did not choose her circumstances. She is from a small Midwestern town whose economic history could have stepped out of the pages of The Marx-Engels Reader; in her home town of Elgin, Illinois, as she tells it, the beginning of her life intersected with a narrative of Western expansion, labor strife, industry flight, economic redevelopment, and gentrification.[3] Her own retelling of the story gives significant credibility to Marx’s claim that “[t]he tradition of all the dead generations weighs like a nightmare on the brain of the living”[4]:

From Elgin… I came to understand how closely one’s biography is linked to one’s place… I recognized the impulse to bulldoze and start over, to push on toward a brighter, cleaned-up destiny, which meant abandoning some places and people and losers of an unannounced contest.[5]

The past – that is to say, one’s history – and its relationship to location are a weight that one must learn to carry or learn to jettison. Perhaps this can be understood as an inversion of the opening line of Pat Conroy’s novel The Prince of Tides – rather than the wound being geography, the anchorage, the port of call, it is geography, and the confluence of circumstances that one encounters in that geography, that is the wound.

All well and good — but how real is this confluence of circumstances? How objectively may its existence be assumed? Per Benedict Anderson and his analysis of how seemingly disconnected events make up the front page of a newspaper, perhaps not much:

Why are these events so juxtaposed? What connects them to each other? Not sheer caprice. Yet obviously most of them happen independently, without the actors being aware of each other or of what the others are up to. The arbitrariness of their inclusion and juxtaposition… shows that the linkage between them is imagined.[6]

What, then, is the difference between one’s life and the front page of a newspaper? Do they both represent a constructed – that is to say, not objectively real – and affected way of arranging events? For the historian, how do that construction and that affectation influence how they read history, view history, and write history? How does understanding how one’s life interacts with one’s work impact either, for better or for worse?

As a scholar, I have been carefully trained to avoid using the first person in my work. “Don’t ever say things like ‘We can see the following…’ in your research,” I remember being told in one undergraduate course. “This is not a journey ‘we’re’ going on together. It’s a research paper.” My training in languages also tends to inform how I view texts – “Read what it says, not what you think it means,” my first Greek instructor repeatedly told our class. My research goal, therefore, is typically to state a clear, impersonal thesis and then get the hell out of the way of my own argument, simply letting the facts and the observations speak for themselves as much as possible. If I present it as something that “I” think, then I will have fundamentally devalued and undermined my argument – why should anybody care what I think?

Naturally, there is far more to it than a hope to rest comfortably on objectivity. Why should anybody care what I think, indeed. I’m a nobody, a college dropout from nowhere, a first-generation college graduate at the age of 29, having taken eleven years to finish a four-year degree (a B. Mus. at that, not a liberal arts degree), who then, even with good grades and test scores, still had to do three years of coursework as an unmatriculated student before there was any way to be competitive for graduate schools, all the while hearing from a chorus of professors, “I’m more than happy to write you a letter of recommendation, but I’m not sure you’re going to be able to get there from here.” Why should anybody care what I think? Good heavens, I will need to make sure I publish under a pseudonym just to be taken at all seriously. Better yet, I should somehow indicate on my C. V. that I simply sprang forth fully grown from the head of Zeus with my PhD already in hand.

But there is still more to it than that, surely. I’ve been at Indiana University in one capacity or another since 2003, somewhat ironically making it the longest I’ve ever lived anywhere. My family bounced around a lot for reasons best recounted elsewhere, and even now, they live, quite dispersed, in places I have never lived, in houses I never called home, in zip codes I never visited until they moved there. Brown can rely on her connection with the place of Elgin, Illinois as an anchor for where she is now, but I am literally from nowhere, in the sense that I have had to construct my notion of “home” from different raw materials than place and family, and I find it very difficult to relate to concepts of home that do center around place and family. If my family had moved around for reasons having to do with the military or career development, then I might be able to legitimately claim – as a friend of mine, the son of a prominent Russian History scholar, does – to be a “citizen of the world,” to be from everywhere. Alas, I can claim nothing quite so romantic or interesting. Robert Frost once said that home is where, if you have to go there, they have to take you in, but the places where that is even marginally true are places that have never actually been a part of my life. If Brown is correct that one’s biography is closely linked to place, then I truly am the Nowhere Man – so again, why should anybody care what I think?

But, of course, there is still more to it than that.

“In my quest to explore the human condition,” writes Brown, “I have hidden behind my subjects, using them as a scrim to project my own sentiments and feelings.”[7] There is an undeniable connection between who somebody is and what interests them; for her own part, Brown describes this connection by saying, “I believe that I was able to see stories that had not yet taken shape for other historians because of the sensitivities I acquired in my past.”[8] My advisor, Professor Edward Watts, is potentially an example: he is an academic raised in a family of academics, with two academic parents and an academic sister. What was the subject of his dissertation? Rhetorical education in Late Antique Alexandria and Athens. As I told him after I read the book, it is difficult not to see his work as having an aspect of meta-commentary on the academic life. He chuckled and said, “You wouldn’t necessarily be wrong.”

Beyond that example, I saw with my own eyes how the personal connection between historian and subject might manifest with my colleagues during orientation and initial class meetings:

“Hi, I’m Roberto Arroyo, and I’m interested in Latin American history.”

“My name is Isaac Rosenbaum, and I do Holocaust history.”

“I’m Lakshmi Patel, and I’m studying the history of relations between India and Pakistan.”

The Late Antique Byzantinist whose last name is not “Ioannides” or “Sotiriou” is left at something of a disadvantage in such company. Yes, there is, in fact, a personal reason that connects me to my subject of inquiry, a personal reason that should not be too hard to surmise for the careful observer (but one that is best discussed in another setting), but a personal reason that is nonetheless internal, abstract, and conceptual rather than immediately and concretely constructed by place or family – that is to say, by the circumstances which I did not choose. I have personal stakes that led me to my areas of interest, but because they are of my own choosing I must be circumspect in how I speak in terms of “I”, “we”, and “our” if I am to be seen as having sufficient distance from my subject to be credible as a scholar. Edward Said and Dipesh Chakrabarty appear adamant that cultures and societies must define themselves, that to not allow such self-definition is cultural imperialism,[9] and yet this mandate of courtesy with respect to communal identity does not appear to extend to those who have embraced certain communities voluntarily.

Of course, I also have the problem that I am not interested in my subject from a critical point of view; I find it anachronistic to explicitly read whatever my own political beliefs and values may be – and, for today’s purposes, we may broadly describe them as uncomfortably conservative as Russell Kirk defined the word, which according to contemporary definitions probably makes me liberal – into my historical subject, but per Elizabeth Blackmar as quoted by Ted Steinberg, we historians are not supposed to evade the question of politics.[10] According to Steinberg, the role of the historian in the present day is evidently to explore “the history of oppression,”[11] and this attitude is one I see largely borne out in my cohort. Nonetheless, the reality is that such a history is not the history of the Late Antique Eastern Roman Empire I have any desire to write. I have better things to do than study something with the express purpose of tearing it down. I fundamentally believe it is possible to be more productive and constructive – but do I only believe that because of my other beliefs in the first place? Is my choice of the word “constructive” itself telling, possibly signifying that I would rather buy into the social constructions that historians are supposed to deconstruct? The third-person voice of objectivity keeps me from having to mess with such potentially treacherous questions.

If men make their own history, but not under circumstances they choose for themselves, and history is supposed to be the history of oppression, then must a historian writing their own history engage in self-hatred by definition? Brown does not appear to write a piece of self-hatred, but it is clear that she is uncomfortable with the implications of her own essay – “My palms sweat as I write this… The intimacy of the first person takes down borders between author and subject, borders that are considered by many to be healthy in a profession situated between the social sciences and the humanities.”[12] Chakrabarty suggests one possible way out, explicitly referencing autobiography and history as two separate and distinct genres[13] – so not only is autobiography, the history of oneself, not history, but history isn’t a discipline anyway, it’s a genre. But here is the rub – if history is a genre somewhere “between” the social sciences and the humanities, and a historian writing their own history must find a methodologically honest way to not engage themselves at the level of self-hatred, which then in fact moves the work into a different genre altogether, then the historian can never actually engage in a real work of self-historicization that is not self-mutilatory.

At any rate, can we claim objectivity anyway by avoiding biographical detail or the first person? In a post-structuralist world where we must assume a fundamental disconnect between signifier and signified, does it really matter to begin with? Or is a research paper written in the omniscient third person much like Bruno Latour’s depiction of the laboratory[14] or Bonnie Smith’s history seminar and archive[15] – a socially constructed, that is to say false, space of knowledge-based privilege that can assert authority it does not actually have simply because a particular group of people have become convinced that it does?

I do not have answers to my own questions, posed at the outset of this musing. I am not certain where to go with them. My inclination is to say the various circumstances of my own life may appear as arbitrary as Anderson insists the front page of the newspaper actually is, but by virtue of the fact that I experience those circumstances in chronological order, I nonetheless perceive them as my own narrative. My inclination is to say that I cannot be forced to historicize my own life as a history of oppression any more than I can legally be required to self-incriminate in a court of law. My inclination is to say that nonetheless, I am better off keeping my arguments in the third person and keeping my “self” out of the voice of my own work, that regardless of what I think, we all know what a coffee table will feel like if we rap it with our knuckles, and that in saying that I am not privileging people who have hands or who do not have nerve damage. My inclination is to say that there must be a world outside of our own minds, and that there must be a way we can discuss it, even if our own minds tell us how we’re going to organize our perceptions of that world. Are these words and ideas too strong, too dangerous, too naïve, too uninformed? I do not know, but I do not know where else to start.

And perhaps that is why it is good I work in a period many people find irrelevant. It keeps me from becoming a danger to myself or to others.

Works Cited

Anderson, Benedict. Imagined Communities. 2nd ed. New York: Verso, 2006.

Blackmar, Elizabeth. “Contemplating the Force of Nature.” Radical Historians Newsletter no. 70 (1994).

Brown, Kate. “A Place in Biography for Oneself.” American Historical Review no. 114 (2009): 596-605.

Chakrabarty, Dipesh. “Postcoloniality and the Artifice of History: Who Speaks for ‘Indian’ Pasts?” Representations no. 37 (1992): 1-26.

Latour, Bruno. “Give Me a Laboratory and I Will Raise the World.” In Science Observed: Perspectives on the Social Study of Science, edited by Karin Knorr-Cetina and Michael Mulkay, 141-70. London: Sage, 1983.

Marx, Karl. “The Eighteenth Brumaire of Louis Bonaparte.” In The Marx-Engels Reader, edited by Robert C. Tucker, 594-617. New York: W. W. Norton and Company, Inc., 1978.

Said, Edward. Orientalism. New York: Vintage Books, 1994. Reprint, 2003.

Smith, Bonnie. “Gender and the Practices of Scientific History: The Seminar and Archival Research.” American Historical Review 100, no. 4 (1995): 1150-76.

Steinberg, Ted. “Down to Earth: Nature, Agency, and Power in History.” American Historical Review 107, no. 3 (2002): 798-820.


[1] Kate Brown, “A Place in Biography for Oneself,” American Historical Review, no. 114 (2009), 603.

[2] Karl Marx, “The Eighteenth Brumaire of Louis Bonaparte,” in The Marx-Engels Reader, ed. Robert C. Tucker (New York: W. W. Norton and Company, Inc., 1978), 595.

[3] Brown, “A Place in Biography for Oneself,” 600-3.

[4] Marx, “The Eighteenth Brumaire of Louis Bonaparte,” 595.

[5] Brown, “A Place in Biography for Oneself,” 604.

[6] Benedict Anderson, Imagined Communities, 2nd ed. (New York: Verso, 2006), 33.

[7] Brown, “A Place in Biography for Oneself,” 603.

[8] Ibid., 605.

[9] Edward Said, Orientalism (New York: Vintage Books, 1994; reprint, 2003). Dipesh Chakrabarty, “Postcoloniality and the Artifice of History: Who Speaks for ‘Indian’ Pasts?,” Representations, no. 37 (1992).

[10] Elizabeth Blackmar, “Contemplating the Force of Nature,” Radical Historians Newsletter, no. 70 (1994), 4. Quoted in Ted Steinberg, “Down to Earth: Nature, Agency, and Power in History,” American Historical Review 107, no. 3 (2002), 804.

[11] Steinberg, “Down to Earth: Nature, Agency, and Power in History,” 802.

[12] Brown, “A Place in Biography for Oneself,” 603.

[13] Chakrabarty, “Postcoloniality and the Artifice of History: Who Speaks for ‘Indian’ Pasts?,” 8.

[14] Bruno Latour, “Give Me a Laboratory and I Will Raise the World,” in Science Observed: Perspectives on the Social Study of Science, ed. Karin Knorr-Cetina and Michael Mulkay (London: Sage, 1983). Accessed online at http://www.stanford.edu/dept/HPS/Latour_GiveMeALab.html on 9 November 2009.

[15] Bonnie Smith, “Gender and the Practices of Scientific History: The Seminar and Archival Research,” American Historical Review 100, no. 4 (1995).

Something I don’t usually do…

I’ve kept my comments here restricted to a few general categories — more to the point, there are a few things I’ve avoided talking about. I don’t talk about politics here (although perhaps some stances are indirectly discernible), I don’t generally blog about what I do at work (mostly because that would generate an even smaller number of readers; I work a desk job, and there’s not much to say), I don’t say much (with some exceptions) about my personal life, and I don’t for the most part do things like movie reviews.

Well, this isn’t going to be a movie review, exactly, but it’s going to be about movies, and it’s going to deal with, in large part, a movie you had a statistical likelihood of seeing this last weekend if you bothered darkening the door of your local movie theatre at all between midnight last Thursday and Sunday evening. (No, shocked as you may be, I’m not talking about Mamma Mia!)

I’ve been a Batman guy for a loooooooooong time. There was a little Brave and the Bold digest my mom brought me home once when I was probably about five and home sick; it collected this neat little Batman vs. Deadman story which was drawn by Neal Adams — “You Can’t Hide From a Deadman,” originally printed in Brave and the Bold #86, Oct/Nov 1969 — and I’ve been hooked ever since. It was still a few years yet before I started reading comics regularly, but I remember coming in on the tail end of Batman: Year 2, reading Ten Nights of the Beast, The Killing Joke, and A Death in the Family when they were first printed (The Dark Knight Returns I came to about three years late or so just ’cause it looked kinda ugly to this ten-year-old when I first saw it on the shelves — let’s face it, I was just too young to get it), and I was definitely at the movie theatre on 23 June 1989 for the first night the Burton/Keaton Batman was playing, which I saw three times that summer while it was still in the theatres. A tick over nineteen years later, maybe not much has changed.

I’ve also been a Christopher Nolan guy for a good bit. I saw Memento three times when it was in theatres, and keep in mind that meant, at least for the first couple of times I saw it, driving about half an hour to the arthouse which was playing it. I talked it up to whoever would listen, took friends to see it, bought David Julyan’s score CD, bought the DVD (both versions), and so on. I bought (and quite liked) Following when it came out on DVD, and while for various reasons I had to miss Insomnia when it was in theatres, I’ve enjoyed it on DVD since (but, truthfully, it remains the one I’ve seen the fewest times — maybe that’s nothing to lose sleep over?).

For me, getting to know a director’s work is getting to know what kinds of themes interest them, what kinds of images they consistently use, what they like to do with structure, which actors they re-use, who scores their movies, what kind of pattern they’re continuing (or establishing) with new works, and so on. To put it another way, a director is always infinitely more interesting to me when there is something more I can get out of one of their movies by placing it in the context of the rest of their work.

Which brings me to The Dark Knight. There are major spoilers to follow from The Dark Knight, Batman Begins, Memento, Following, Insomnia, and The Prestige, so do read at your own risk.

I’ve seen it twice now; I was lucky enough to get in on the viral marketing IMAX ticket giveaway, only because I live in a market where two hours after the site went live there were still tickets available (although only just — I was 164/170), and thus saw it 50.5 hours before most of the rest of the world had a chance. I also saw it again (in IMAX, natch) last Friday night. I’ll get the easy stuff out of the way first: some movies are overhyped. A couple of well-edited trailers, a flashy marketing campaign, and a couple of glowing early reviews can all add up to a disappointing experience in the actual cinema because in the end, it’s “just a movie” and doesn’t cure cancer. The Dark Knight is not this movie — it is every bit as good as you’ve heard, and gets better on repeat viewings. Everybody brings their A-game, be they in front of the camera or behind the camera, be they in a big role or a small role, and they craft a crime epic for the ages along the lines of Heat or The Departed or L. A. Confidential. This isn’t Tim Burton’s self-aware, dark-but-comic freakshow; this isn’t Joel Schumacher tweaking your nose. And — I say this as somebody who saw Iron Man twice and who thinks it was absolutely terrific — this also definitely isn’t Jon Favreau’s bright, flashy, action-adventure story in which A Flawed But Ultimately Good Man Learns Something While Wearing A Costume. The Dark Knight takes the idea of a “comic book movie” and elevates it to a whole new genre and a whole new level of filmmaking; you’ll either like that or you won’t (I’ve read some reviews that say it’s pointless to try to elevate source material which is absurd to the core to the level of Hamlet), but if your reasons for not liking a movie like this have to do with the title character being dressed as a bat rather than wearing a shirt and tie, that’s really your problem, and not that of the filmmakers.

One thing about the story – two years ago, when the title was revealed and Nolan said it was “quite important to the film” (http://www.batman-on-film.com/batmovienewsarchives48.html), I immediately understood it to be a double entendre with “the dark night”. Putting that together with the recasting of Rachel Dawes rather than creating a new character (seeming to point to the need for somebody in whom we were already invested emotionally), I believed from that moment on that Rachel was going to be dead as a doornail by the end of the film. Sure enough. If the rumors are true that Rachel was originally supposed to be Harvey Dent in Batman Begins until there was a studio note saying “We need a chick in the pic, stat”, then that was a great way for Nolan to take it and run with it.

Okay — I haven’t said anything new there. You’ve probably read a dozen reviews which have said something similar, so I’ll move on now to what I really want to write about.

The Dark Knight keys off of several important ideas set up in Batman Begins; definitely that of “escalation,” established in the closing conversation in BB between Gordon and Batman, but also very much the repeated line, “It’s not who I am underneath, but what I do that defines me,” as well as the notion that Batman isn’t just Bruce Wayne in a mask; it is who Bruce Wayne actually is.

And as bleak as people are saying it is, the two important things with which you’re left at the end of the movie are a) nobody on either boat pressed the button and b) Batman did not let the Joker fall to his death — a correction of a huge misunderstanding on Tim Burton’s part nineteen years ago, and a clear indicator that Batman has decided that “I won’t kill you, but I don’t have to save you” is no longer a clear enough line in the moral sand, not when your enemy has no rules whatsoever.

All well and good. However, looking at it through the lens of Nolan’s other work, bigger themes start to emerge. Much has been made of how much Nolan likes to play with time, but that’s really window dressing, a story-telling tool rather than a theme (and it’s telling how, as he has matured as a filmmaker, he’s done it less and less). Identity, on the other hand, has been a key question in his movies from the get-go; Following and The Prestige study characters who deliberately take on other identities but then find that they can’t simply go back to normal when they’re done. The narrator of Following turns himself into somebody who seems slick and clever and affluent, emulating Cobb as much as he can at Cobb’s own encouragement, but in doing so he makes it impossible to prove that Cobb ever existed and/or that he’s not, in fact, Cobb — which was, of course, “all part of the plan” in the first place. Angier and Borden in The Prestige are both leading double lives (I’ve always thought it was particularly clever of Nolan to cast as his leads actors most famous for playing superheroes, and that it added a fascinating subtext to the film), and must maintain the “act” at all costs, to the extent that their lives are quite dependent on it. In trying to figure out each other’s magic tricks, Angier and Borden are really solving the problem of who the other person actually is.

Memento‘s Leonard Shelby, on the other hand, is an examination of how our natural identities depend on how we experience time and form memories — if we can’t, we’re slaves to external sources of information, never able to trust our own instincts about who we are. As a result, Leonard can never stop mourning the death of his wife, because his own perception of time (or lack thereof) can never provide any distance from it. It will always have just happened for him.

Batman Begins and The Dark Knight bring many of these concepts together. Bruce Wayne, as depicted by Nolan, his brother Jonathan, and David Goyer, has assumed at least one identity (if not two, counting the “playboy Bruce” persona) which changes him and those with whom he interacts permanently. At the same time, the whole reason he does so is because, to a certain extent, he can never get past the trauma of losing his parents, because the one way he knows to heal, revenge, has been taken from him — and he realizes it won’t necessarily do what he hopes for anyway. One way or the other, he’s permanently emotionally stuck, much like Leonard Shelby, and this drives him to take on the dual persona (which is itself an interesting inversion of Bale’s character/characters in The Prestige, Borden — two men having to lead one life). To apply Memento‘s questions, then — can Bruce trust his own sense of who he is? Is the Batsuit really a message he sends to criminals, or a message he’s sending to himself, much like Leonard Shelby’s tattoos? To apply The Prestige‘s questions, can he stop being Batman without causing himself, to say nothing of others, harm?

Several of the films deal with familial loss; Memento and The Prestige both deal with the loss of a wife under circumstances which are left ambiguous, Batman Begins obviously has the murder of parents as a pivotal point (but also has Ra’s Al Ghul talking about taking vengeance for a murdered wife), and The Dark Knight has the murder of Harvey Dent’s fiancée as a major engine of the plot (while also showing Gordon’s wife mourning his apparent death in the line of duty). All of these cases are motivators for vengeance, but in The Prestige in particular it’s clear that Angier’s lust for revenge eventually wanes, and he continues out of the sheer momentum of hatred — “I don’t care about my wife, I care about his secret,” he spits out halfway through the movie. Looking forward, perhaps we can surmise that Bruce Wayne eventually faces the danger of being Batman just to be Batman, with no particular purpose driving him.

One can draw a line of connection from Dormer’s corruption in the name of catching the bad guys in Insomnia to what Gordon and Batman choose to do at the end of The Dark Knight; if it can be proven Dormer planted evidence even once, then it calls all of the convictions to which he’s contributed into question. In TDK, if Harvey’s spree as Two-Face is made public, their efforts to fight the mob will have been entirely in vain. Bending the rules to make the criminals pay is a slippery slope in Nolan’s universe, one with very real consequences. Even if everything ostensibly turns out all right, somebody will have to pay the piper — and sometimes it’s the wrong person.

(By the way, Nolan has mentioned that Aaron Eckhart was looked at for Memento. I think one can look at Eckhart’s Two-Face for a glimpse of what he might have been like as Leonard Shelby. I think that was a career-defining performance for Guy Pearce, so I wouldn’t have it any other way, but Eckhart would have been really interesting in the role. Different from what we got, but definitely interesting.)

In terms of recurring visual motifs — there is always a “lair” in Nolan’s films, a location where the main character’s dark secrets are hidden and their true identity may be found. This goes all the way back to Following, where The Young Man takes Cobb to his apartment under the ruse that it’s somebody else’s (and of course Cobb knows what’s going on immediately). In Memento it is the basement of the abandoned building where Jimmy Gantz’s body and Leonard’s real clothes have been left. In Insomnia it is Finch’s cabin. In Batman Begins and The Dark Knight it is the Batcave (or its temporary replacement). In The Prestige it is the theatre basement where all of Angier’s “prestige materials” are kept. More often than not it involves “going down” someplace; I will leave others more qualified to speak about Jungian psychology to discuss the implications of the “descent” into the place where one keeps secrets, potentially even from oneself.

The breaking of legs has shown up twice in a row now — Angier falls through a trapdoor from which the cushion has been removed, shattering the entire limb from the looks of the brace he’s put in, and Batman drops Sal Maroni from a height which is just enough to break his ankle (although Nolan is far more subtle about showing Maroni with a cane later than he is with Angier). Being a year and a half out from a broken ankle myself and still recovering to some degree, I squirm at both of those moments every time.

Casting is an interesting point — Nolan doesn’t quite seem to have the “stock company” of David Mamet or, to a lesser degree, of David Lynch or Tim Burton, but three films in a row now have starred Christian Bale and Michael Caine. To some extent, Bale, Guy Pearce, and maybe Following’s Jeremy Theobald too, have a visual continuity I can’t quite explain — distinctive faces which wear woundedness well, maybe. Bale and Pearce definitely have a carved-out-of-wood quality — and I don’t mean that in a bad way — to their facial and bodily structures (harder to say with Theobald). Hugh Jackman and Aaron Eckhart are somewhat the opposite — classically good-looking men who you want to like and have be the ostensible good guys (which, naturally, is what makes their characters more interesting when they turn). Mark Boone Junior has had significant supporting roles a couple of times, Theobald showed up in a bit role in Batman Begins, and Larry Holden is quite the chameleon, going from L. A. lowlife Jimmy Gantz in Memento to the dapper D. A. Finch in Batman Begins (as well as having a bit in Insomnia). Nolan gets really interesting performances out of people, be it well-established character actors like Joe Pantoliano or Andy Serkis or people harder to define, such as David Bowie — who, by the way, I really wish had gotten a Best Supporting Actor nod for Tesla. He was onscreen for all of ten minutes, maybe, but he owned every frame in which he appeared. I’d love for Nolan to find a reason to cast Carrie-Anne Moss again — her appearance in Memento is still the most interesting thing she’s ever done, in black vinyl or out of it, as far as I’m concerned. (Probably, since she’s being cast in “mom” roles now — at all of 40! — she would be out of the running for a future cinematic appearance of Selina Kyle/Catwoman. Maybe she could play Selina’s mother, Cougarwoman?) Gary Oldman disappears so completely into the role of Gordon that it’s hard to tell if Nolan is actually giving him any direction or if he’s just getting out of the way.

So — back to The Dark Knight for a moment. What’s ahead? Where can a third Batman movie potentially go? Or, to put it another way, what in the world do they do to top this one? And I think the answer is, they need to not bother trying. They need to go in the opposite direction and do something smaller, grittier, more – dare I say it – intimate. Maybe even something which feels more like a stage play than a movie – I don’t know, something more like Memento (and maybe even with David Julyan scoring rather than the Zimmer/Howard team, good as it is). They’ve set it up well to go in that direction – Batman is on the run (although presumably Bruce Wayne is not, since his being outed was prevented), his relationship with Lucius is strained (if not severed entirely) so that he won’t be able to just go to him with gadget requests anymore, and so on. Somehow he’ll have to repair what the events of The Dark Knight have broken, but that’s going to require operating on a smaller scale for a while, I expect. Robin could work within this thematic framework, but it seems unlikely they’d go that way. Presumably in a third movie we’d be able to return to Wayne Manor, which itself could thematically represent some sort of shift back to the status quo.

In terms of what this means for a villain – hard to say. I would say that the Riddler could work on the thematic level – possibly something like Saw, only, y’know, worth watching – but I’d be really surprised if the people at the helm wanted to revisit the character. I wonder if maybe somebody like Talia wouldn’t work, perhaps continuing her father’s efforts? It would certainly tie into what’s come before, and with Batman as a reluctant outlaw, there would be the element of temptation for him to join her on several levels. Catwoman could be done easily within the rules of Nolan’s universe, but I doubt Warner Bros. wants to go anywhere near that character for a while.

(Here’s an idea – what about a Gotham Central TV series set in the Nolanverse between TDK and film #3 (The Dark Knight Returns???)? If Batman is having to be in hiding to some extent, then he can just be a vague, undefined presence who doesn’t have to directly appear.)

(Which reminds me – having watched the Gotham Knight DVD a few times now, too, I’ll say that presumably, Anna Ramirez was used instead of Renee Montoya because the powers that be didn’t want Renee Montoya to wind up as a dirty cop. That said, “Crossfire” now makes less sense as a result, not more – unless the point was to make her fall even tougher for those who figured she was just a generic Montoya stand-in. Also, the Goyer segment doesn’t exactly jibe with what we see of Crane at the beginning of The Dark Knight, but at the same time, Gotham Knight appears to also forget that Wayne Manor burned down during Batman Begins, so it’s not an exact match anyway.)

(I will also note that I think the Joker could be recast. Likely? No. Possible? Yes. Nolan would just have to do what he did this time, which is find the best actor for the job.)

(Okay, enough with the parentheticals.)

(I mean it.)

Maybe the biggest clue is in the final scene. It’s clear that Nolan and co. regard the Batman/Gordon relationship as the moral foundation of the particular stories they’re telling. That may seem obvious, but I think it’s very much underscored in the last scene of TDK, which is a clear counterpoint to the last scene of BB. For example – Gordon telling Batman “thank you” and him replying, “You don’t have to thank me” seems to be very much a reference to the last line of BB, and in general both scenes function to wrap up the moral point of the preceding story and establish what the guiding principle will be for the next one. Given that, as much as escalation was the driving force of TDK, can we surmise that Gotham learning to accept the hero they need will be the plot engine of the third film?

Enough for now. I have plenty to say about all of this, but this is already quite long. I’ll come back to it another day, maybe.

