Eleonore Stump's Dewey Lecture

I am grateful to the APA for the invitation to give this lecture, and to the APA staff who helped with the arrangements for it. Like everybody else who has given a Dewey lecture, I am sensible of the honor; but also like everybody else I am somewhat overwhelmed by the assignment. For some people, the main challenge has been the requirement to sketch the history of the discipline. In my case, the most daunting part of the assignment is the requirement to make the lecture autobiographical.

When I was a graduate student, it was a commonly accepted view that the essence of a human being is the origination of that human being in an egg contributed by her mother and fertilized by the sperm of her father. This is a view that now gets a mildly perplexed dismissal by some philosophers of biology. But since it was orthodoxy in the time when I was trained, I will begin explaining myself and philosophy in my time with a brief word about my parents.

My father’s parents were immigrants from Romania, and they were practicing, Yiddish-speaking Orthodox Jews. As might perhaps have been expected, their son, my father, rejected his parents’ Orthodox Judaism entirely. He was a zealous atheist, a fervent Marxist, and an unstoppable political activist. He was energetic in all the causes of his day: in the NAACP, in union organizing, and many other things besides. I grew up singing labor union songs, some of which I still remember: “They say in Harlan County there are no neutrals there; you’ll either be a union man or a thug for J.H. Blair. Which side are you on? Which side are you on?” It still seems to me that this is not a bad question to ask.

My mother, on the other hand, was a thoroughly traditional monocultural German. She was six years old when Hitler came to power and eighteen years old when Germany lost the war. She met my father at that point because he was in the occupying American army. And so I was born in a hospital in Frankfurt, Germany, two years after the end of the Second World War.

When I was three years old, the US decreed that all the children born to American soldiers stationed in Germany were retroactively American citizens from birth. At that point, my father brought my mother and me across the ocean to the US. But we didn’t stay long. When my mother was very young, before the end of the war, there had been wealth in her family and some striving for a kind of fin-de-siècle cultural aristocracy. In her whole long life, she never made her peace with American culture or even with the English language. When I was four, her father died; and she took the opportunity and me and headed back across the ocean to Frankfurt. And there we stayed for well over a year. My earliest memories are of bombed-out buildings, traumatized people, and severe poverty among most of those familiar to me then. Eventually, my mother was persuaded to return to the US; and so, at age six, I made my third transatlantic voyage to begin school in a very small town in the north of Minnesota. With no understanding of English, I was dropped off in a first-grade classroom, surrounded by people with no understanding of German and no love for German people either.

My father was harassed during the McCarthy era; but he went westward and wound up in Wyoming, where they were slower to figure things out. By the time he was fired in Wyoming for being a subversive, the McCarthy era was over. He sued and won and was reinstated with back pay. Consequently, I lived eight non-contiguous years, not counting all the summers, in Wyoming, where I never understood the locals. Those were non-contiguous years because I also spent some of my school time in Frankfurt. And those years didn’t include all the summers because my father was an intellectual nomad who spent summers at universities around the country; and, as I got older, he took me with him. I was generally left to my own devices during those summers, and I occupied myself by sitting in on courses at whatever university my father had settled into for that summer.

When I finished high school, my grades were lackluster because I didn’t pay attention at school; but my test scores opened doors for me. My father was disappointed at the result, because, he said, if I hadn’t gotten scholarships, I would have found my way to the barricades instead. One of those scholarships was to Radcliffe; but I turned Radcliffe down to go to Grinnell College. The Radcliffe handbook of that time said something like this: “When our young ladies go into Boston, we require that they wear skirts.” As I remember it, the Grinnell handbook said something like this: “We don’t care what you do as long as you learn; and we will do everything we can to help you learn what you want to learn.”

If I had had to say then what it was that I wanted to learn, I would probably have said that it was everything; but, of that everything, I had fixed on neurobiology as the discipline most likely to help me understand those things that make life worth living. 

It is hard, I think, for people who were not raised in that time to imagine what intellectual life was like then. At Grinnell, I was given free rein with regard to biology courses; and I began with an upper-level neurobiology class. The professor introduced the course by writing on the board ‘THE MIND-BODY PROBLEM’. And then he crossed out the word ‘MIND’. That is how we solve the problem, he said. It seemed like an unpromising beginning, but I persisted through more neurobiology at Grinnell and then also at Rutgers in the summer, where some generous faculty in the medical school let me sit in on their courses. But, in the end, I thought that the Grinnell lecturer had been right: there was something mindless in the neurobiology of that time; it seemed unlikely that in my lifetime neurobiology would be able to teach me what I most wanted to learn.

Actually, in the end, it wasn’t just neurobiology that seemed unpromising then; it was all of biology. The revolutionizing discovery of Watson and Crick was only twelve years old when I started college, but it had already given rise to an orthodoxy. Chromosomes in the cell nucleus consisted of some genes and even more junk DNA. The genes coded for proteins, and the proteins were responsible for things that happened outside the cell nucleus. Nothing in the cell outside the nucleus had any influence on the magisterial genes in the nucleus. And Lamarck, who thought that characteristics acquired from the environment could be inherited, was a heretic, that is, a person whose views were officially both totally false and also pernicious.

The view that was standard then has since been characterized as the theory of the monogenomic differentiated cell lineage, and it rested not only on incomplete biological understanding but also on philosophical assumptions that seem to have been largely invisible to those who held them then. They were pervasive, however, not just in the sciences but throughout the humanities as well. One of the most important of these was the assumption of reductionism, both metaphysical reductionism with regard to composite things, and theoretical reductionism with regard to the relation among scientific theories. There was also an unreflective privileging of an isolated individualism, if the relevant attitude can be characterized in that way. Knowledge was taken to be something that an individual acquired by himself through following certain patterns of thought carefully, in the way specified by foundationalism, for example, or coherentism. And of course there was skepticism, which seemed to dominate or at least to threaten epistemology: maybe no patterns of thought could actually be sufficient to yield knowledge. Then there was language learning: to some thinkers, it seemed as if language learning should in theory be impossible, because there seemed to be no way for a developing child to specify a reference for the vocalized sounds of her caregiver.

What these positions had in common was a failure to notice the social character of human cognition and human learning. Children do learn language, of course; and we now have considerable understanding of the brain mechanisms that allow a developing child in effect to mind-read her caregiver in shared attention. It is the social side of language learning that makes it possible. And it seems obvious now that much of our knowledge is transmitted to us through testimony. Second-personal connection through shared attention and the social character of knowledge acquisition are currently widely discussed topics; but they were not much in evidence during my undergraduate years. Mention of mind-reading, the intuitive cognition of another person’s mental state, would have been taken as a sign of mental disorder. It is true that Wittgenstein’s work was still popular then; and Wittgenstein wrote, “‘We see emotion.’—As opposed to what?—We do not see facial contortions and make the inference that he is feeling joy, grief, boredom.” But that view was no doubt among the many things taken by the immediately following post-Wittgensteinian generation of philosophers as evidence that Wittgenstein’s work was a collection of aphorisms with little explanation and less argument.

A blinkered individualism was common in ethics too. When I gave up on biology, I turned to philosophy. My ethics professor was a storyteller, and the story that ended my brief undergraduate flirtation with philosophy had to do with his daughter. She had been arrested by the police for shoplifting at a local store. She said to her philosopher father, “But, Dad, I see something that I like. Why can’t I take it?” And her philosopher father said to us, “I just didn’t know what to tell her.” It was the heyday of moral relativism; and the notion that something had for a person only the moral value which that person assigned to it seemed then to be a kind of chic progressivist position. 

I experimented with the religion classes too. The existentialism class was held right after lunch; and the instructor, who plainly always had a good lunch and enjoyed it, came into the classroom, smacked his full belly with his open palm, and told us that nothing had any meaning.

I couldn’t see what my teachers thought they were seeing, even in biology. It seemed so implausible that most of the DNA in the chromosomes was junk that had no function or that things in the extranuclear environment of a cell could have no influence on the genes. And, of course, we now have overwhelming evidence against both those views, from the rapidly expanding field of epigenetics, for example. Now it seems that a gene functions as it does because it is part of a whole symphony of mutually influential, contingent interactions with many things in its environment within the cell and outside it. There is, as it were, a social side even to genetics, if one can speak this way. The point is more obvious when one thinks about the moral relativism common then. That view imagined the ethical value of actions to be conferred for an individual largely or solely by that individual’s own beliefs and desires. But if the ethical value of something is determined for a person just by his own beliefs and desires, then how could one support, for example, the powerful push for social justice on the part of the labor union movement, or any other resistance to injustice? 

Without doubt, Grinnell was a wonderful place to get a broad-based education, and I have never regretted my decision to go there. But nonetheless what struck me about many of my teachers there was that they seemed not to have noticed that they didn’t believe what they were teaching. If I had said after class to one of the faculty promoting skepticism, “Excuse me, do you know where the ladies’ room is in this building?”, he would simply have told me. And yet they were so confident in their skeptical views and so dismissive of those who disagreed with them.

One prevailing current of thought in philosophy then was that one should not study medieval philosophy because the medievals based their work on dogma and authority. In retrospect, that view seems to me funny. What I didn’t understand as an undergraduate was that a lot of my education was based on relatively local dogma and authority and in fact had a strong undercurrent of bullying about it. Later, when I had become a graduate student at Cornell, one of my teachers came to my adviser to complain about me. He said, “She’s completely unteachable. She won’t just learn what I’m telling her; she keeps asking me questions.” In philosophy, taxonomy was (and still is) one of the weapons enforcing orthodoxy in the profession. Often enough when there was some topic that seemed interesting, it was dismissed with condescending authority by the question, “Yes, but is it philosophy?”

It took me years to see the bullying in such questions and to understand how to respond to it. Once, much later, when I was lecturing in Warsaw on Thomistic metaphysics, an angry professor in the audience confronted me by saying, “Is it possible to do metaphysics after Kant?” But by then I had a different attitude toward such challenges. “Ab esse ad posse valet consequentia,” I told him: the inference from what is actual to what is possible is valid. I just did metaphysics, and so it must be possible. And the right response to the question, “Yes, but is it philosophy?” is “Well, I’m a philosopher, and as a philosopher I’m interested in it; so it is philosophy.” The kind of taxonomizing that devalued feminist philosophy, or any other kind of philosophizing that deviated from the narrowly focused pattern-processing favored by a small group of established philosophers talking largely only to each other, was a gate-keeping that impoverished the discipline. Norman Malcolm was famous for reducing graduate students in conversation with him to a sweaty mess by progressively reiterating, with increasing Wittgensteinian agony, the line, “I don’t understand.” It took me years to realize the right response in that case also. Malcolm could be kind, and he was kind to me; but I should have said to him, “Really? Gosh! I wonder why not. Do you have any insight into why you don’t understand?”

But I am getting ahead of my story here. By the time I was into my sophomore year at Grinnell, I had tried three different disciplines – biology, philosophy, and religious studies – and had abandoned them all. Somewhere in the spring of that year, a classics professor, John Crossett, who was annoyed that there were no courses on biblical studies in the religion department, offered to teach an informal course on biblical studies over dinner in the student union. At that point, in consequence of the differing anti-religious attitudes of my parents, I was like someone from Mars. I had never read or heard anyone else read out loud any biblical text. I had never been in a synagogue; and I had been only once in any church, when my father was organizing something at a local AME church and took me with him. I had the settled conviction that although there were some benighted people somewhere who believed in God, all educated people, all reasonable people, all people who were not complete Neanderthals, were atheists. For me, the hypothesis that there is a God was as dead as the hypothesis that there is a real Santa Claus. But John Crossett was a Samuel Johnson sort of man, difficult to deal with but intellectually powerful and admirable. Some of the basic furniture of my mind, mediated by the poetry of Homer and Dante, for example, came through his teaching. As far as I know, he never published anything; but there is a small coterie of people who are now publishing scholars and who are connected to each other because we once were his students. We all still remember him with a mixture of love, fear, and great respect. So since it was John Crossett who proposed to teach us the Bible, I went to the evening class.

It was just like him that the biblical book he picked to teach was Ecclesiastes. It was in many respects clearly a perverse choice, but at least in my own case it also proved to be an inspired one. In a sense, Ecclesiastes was perfect for the Zeitgeist on college campuses of that time. Here is one overriding theme of Ecclesiastes: nothing human beings care about lasts; every human achievement comes to nothing; inescapable heartbreaking suffering and irremediable injustice are everywhere; and everybody dies. That theme in all its particulars seemed so true to me, and to the students around me then too. It summarized the sorrowful human condition that anyone could and should see.

But, paradoxically, Ecclesiastes also has a kind of counterpoint to that theme, which is encapsulated in the line, “God has made everything beautiful in its time.” I had no idea what Ecclesiastes meant with that line, but I was blown away by whatever it was that made that text seem so powerful, so captivating, to me. From that time onwards, I never lost the desire to understand what I saw in the book of Ecclesiastes: the acknowledgement of unspeakable human suffering paradoxically coupled with a countervailing reverence for the beauty in human lives.

So I declared a classics major, without ever having had a classics course; and I spent the remaining part of my undergraduate career working feverishly to make up for lost time. In those days, there were still comprehensive exams for majors. To graduate with a classics major, among other things one had to translate some English text into grammatically correct and even elegant Latin. The wit who set that part of the exam in my year gave us a text of Washington Irving’s to translate. Trust me, getting his lush prose into Latin is almost impossible. But I passed anyway. 

I was the valedictorian of my class – I had a straight-A average, something that was rare in the school’s history; and I won a Danforth fellowship for graduate school. I mention these facts because I think they explain how it was that I was admitted to the graduate program in biblical studies at Harvard without ever having taken for credit a single biblical studies class in my entire undergraduate career.

By the time I got to Harvard, I was a Christian – a new, theologically illiterate Christian, but still a Christian. When I confessed to my father that I was going to be baptized, he was shocked and dismayed. My mother responded to the news with anger that I would embarrass her in that way. She told me that if I got married in a church, she certainly would not come. They both eventually got used to it, of course; and they were both there for my wedding. I married Donald Stump; and in the fifty and more years of that marriage, it has been the pillar of my life and work. The children that came, the two marvelous people that they married, the small troupe of smart and adorable grandchildren that they produced, all these people are central to my life. But this is a Dewey lecture; and so these much-loved family people are not going to come into the story after this.

My undergraduate and graduate years were the time of the Vietnam protests. On one of my first days in Boston, on the way to the obligatory orientation meeting at Harvard, my path was blocked by crowds of people, some of them bleeding, and police, and police cars with blaring sirens. When I finally got to the orientation meeting, I said excitedly to the leaders, “There are police and hurt people everywhere!”, and they said, “You’re new here, aren’t you?” 

Like many students at that time, I hated the war, and I despised Henry Kissinger; but I despaired when the TV news reported that protesting students had trashed Kissinger’s Center for International Affairs at Harvard. There was a big sign in front of the building that announced the Center for International Affairs; but what the protesting students never figured out is that Kissinger’s Center was only at the top of the building. On the ground floor of the building, with no sign identifying it, was the Department of Near Eastern Languages and Literature. The protesting, trashing students never went up the stairs. It was a sign of the times that when a crowd of Harvard students went with gas cans to burn down the ROTC building, they couldn’t find it. It was a tiny structure next to the great and marvelous Divinity School Library, which would have burned with it.

I don’t know what the Harvard faculty expected of me; but for my part I expected to have problems with them because of my faith, which I knew would put me at odds with at least some of them. What took me by surprise was that reason, not faith, was the main source of my problems.

I was in trouble at Harvard right from the beginning. At the start of the first graduate seminar, mandatory for new students, I asked with some perplexity whether the seminar was aiming at a historical examination of a certain period in biblical history or whether it meant to engage in a construction of historical fiction, imagining without too much excavation of historical data what human events then might have been like. That question got me labeled ‘a logic chopper’ in the notes on new graduate students distributed to everyone in the department. For one of my first papers, I wrote an essay demonstrating that if one used Wellhausen’s argument forms about the structure and history of the Pentateuch, one could conclude that Abraham Lincoln’s Gettysburg Address was modeled on a Hittite suzerainty treaty. That paper got an A and only one comment, which said, “I hope for your sake that you never review any of your own work.”  

In the spring, there was a mandatory seminar in which faculty and grad students in the program had to participate; and every week three different students had to distribute their seminar papers for the whole seminar to discuss. With what has to be acknowledged as a non-compliant attitude, I wrote a paper giving a philosophical and literary analysis of the story of Mary Magdalene at the empty tomb. The faculty saw it as their opportunity finally to do something about me. Somehow word got around, and the seminar room included significantly more visitors than were usually present. One of the faculty opened the session on my paper by asking, very incautiously, whether I identified with Mary Magdalene. He had momentarily forgotten that in the early Christian period she was believed to be a whore. I put my hand on my heart, looked entirely shocked, and said, “Certainly not!” And the room erupted in involuntary laughter. The session on my paper didn’t actually get better after that.

Many years later, the seed of that paper became woven into my Gifford book, which has since received much favorable attention. But what that paper got me then was a demand that for some time I meet weekly with my adviser to determine whether it was that I could not or that I would not accommodate myself to the program. 

Meetings with my adviser were always difficult. Some of us used to discuss in the student lunch room how a woman could keep sufficient space between herself and that faculty member while hiding from him the fact that she was doing so. It never, simply never, crossed my mind that we could report him. In a generation of students sometime after mine, one of the women students did report him; and she discovered that Harvard already knew about him. In fact, they had a funny nickname for him. But that seems to have been the end of the story. In that period, it doesn’t seem to have occurred to Harvard either that they could do anything about a famous professor who was behaving in such ways towards his women students. 

It was a time when sexual promiscuity was seen as chic and progressivist, too, and when there was a kind of treatment of women by those in power over them that now seems incredible. But then it was so pervasive as to be unremarkable. Once, when I had become a graduate student at Cornell, one of the few women graduate students then in the Cornell Philosophy Department came back from an APA meeting distressed because, as she said, she had been propositioned numerous times but no one had given her a job interview. One of the Cornell faculty said to her, with a completely straight face meant to make his remark seem funnier, “Has it ever occurred to you that you are in the wrong profession?” Those who heard him thought he was witty and smart; nobody seemed to notice that he was cruel.

In my first years as a faculty member, I was sometimes asked whether I had ever met with any sexism; and I always said fiercely, “Never!” I think I interpreted the questioner as trying to believe that my accomplishments were only the result of favoritism for minorities, as women were then thought to be in philosophy. But now I would say that for something to be seen as sexism, for example, or any other kind of unwarrantable discrimination, at least two things are needed: first, one has to recognize the discrimination as an injustice, and then one has to believe that such an injustice is unacceptable. If an injustice is pervasive and entrenched in a community, then even people who feel its injustice can come to accept it with a shrug as just one more of the things that make the world seem weird. 

During my graduate student years, it was simply expected that graduate students would go to bars or parties with faculty. Before the uproar over the fairly recent events in one American philosophy department, and before the prohibition against faculty and students socializing in bars and at off-campus after-hours parties which the APA committee sent to investigate imposed on that department, it had never occurred to me to think of that almost ubiquitous practice as optional. But as a graduate student, I hated going to bars with faculty. They didn’t pay for the drinks or the food, of course; and the expense was problematic for a person trying to live on a small graduate stipend. Much worse was the disinhibiting effect of alcohol on some faculty. At that time, I simply assumed that the consequences of their disinhibition were my problem, not theirs. Much later, when the profession was revolted by one scandal after another involving the treatment of women by senior faculty, Helen de Cruz and I started a petition asking the APA to frame a code of conduct for the field. And I argued strenuously with my own Department that we too should prohibit faculty from socializing with graduate students in bars. The APA did eventually produce a code of conduct; but on the matter of socializing with students in bars, I lost the fight with my colleagues decisively.

In any event, I got through the weekly meetings with my Harvard adviser, although they were tense. At one point, he explained to me with considerable emotion that while he lived and had power in the field, no one would ever be able to study biblical texts in the way I wanted to do or in any other non-historical way either. I believed him, although, as it turned out, he was wrong. While he lived and had power in the field, Robert Alter published his narrative analyses of biblical texts, Jon Levenson published a book attacking historical biblical studies for its roots in German antisemitism, feminist biblical studies emerged as an energetic part of the field, and so on. 

Another one of the Harvard faculty said to me then, in a way that was meant kindly, “You don’t belong in the field of biblical studies because you care about the truth of propositions. You should try philosophy.” There were many things in my two years of Harvard education which I loved and for which I am still grateful. I read the Greek orators, for example, and I learned a considerable amount of Hebrew. But I believed this Harvard faculty member also. And so while I was having weekly meetings with my Harvard adviser, I was also negotiating with the Danforth foundation to change fields entirely and still keep my fellowship.

I thought that if I picked a medieval studies program and concentrated on medieval biblical exegesis, I might still find some way to study what I had fallen in love with when I first read Ecclesiastes. And so I applied to Cornell’s medieval studies program. By the time my Harvard adviser determined that I had to stay at Harvard so that the faculty there would not count as intellectually tyrannical, I had my acceptance from Cornell and Danforth’s permission to take my fellowship there. 

At that time, in an article titled “Omniscience and Immutability”, Norman Kretzmann had just published what seemed to be an elegant short proof of the non-existence of God; and it was getting significant attention from Anglophone philosophers. He was listed as being part of the Cornell medieval studies program. So I wrote an essay for my Cornell application in which I explained that his proof failed because it assumed that God is temporal. Many of the great Jewish and Christian philosophers and theologians rejected that assumption in favor of the claim that God is eternal; with that claim substituted for the assumption that God is temporal, Kretzmann’s proof was invalid. I think I might also have said something to the effect that if Cornell admitted me, I would explain Kretzmann’s error to him. In retrospect, I assume that Cornell admitted me largely because I had a Danforth fellowship. 

As it turned out, there was in effect no medieval studies program at Cornell at that time. There were faculty whose research had something medieval about it and who therefore could be listed as members of the program, but basically that was it. There were no rules, no mandatory courses, no exams, no policies. I was simply handed over to Kretzmann to educate, and his own interests at that time were solely in medieval logic.

It was expected that I take every seminar taught by Kretzmann, and I also had reading courses with him. I insisted that I be allowed to take a symbolic logic course, but Kretzmann felt that there was no way to put it into my program. So Carl Ginet graciously offered to give me an informal reading course working through Kalish and Montague, which was at that time a standard logic text for philosophy graduate students. In truth, except for Kretzmann’s seminars, I found most of what I was meant to learn in my Cornell graduate career pretty close to unintelligible or boring if intelligible. As far as symbolic logic went, I couldn’t see why anyone cared about it. I couldn’t fathom why anyone would suppose that the depth and richness of philosophical understanding could be captured in the pencil sketches of formal logic. But philosophers then most certainly did suppose so. At that time, it was imperative to fill one’s papers with logical symbols, and a paper which lacked such formalism was taken to be weak or flimsy or just not smart enough. So I dutifully tried to understand what I was supposed to learn from Kalish and Montague. There was a point at which I couldn’t, and I concluded that there was an error in the logic in the text. Ginet assured me that that was impossible. But then he looked at the text with me and agreed, incredibly enough, that there was an error in the text, not a glitch in the printing, but an error in the logic. I mention this small triumph on my part as my bona fides: I wasn’t just defiant. I was trying to learn what I believed I was required to know. 

And I readily concede it: Kretzmann had much to teach. He was a ruthless critic, and one had to be constantly on the qui vive in talking with him or writing for him. I learned a great deal from him, including the value of care and precision in thought and words. On the other hand, it also has to be said that I gave him more trouble than he could have imagined when he accepted me as his student. Our jointly authored retraction of his argument in “Omniscience and Immutability” was eventually published in The Journal of Philosophy. 

When it came time for me to pick a dissertation topic, I told Kretzmann that I wanted to work on Augustine. He said that topic was out of the question; I would work on fourteenth-century logic. I said that I hated fourteenth-century logic. And he said, OK, then we would compromise; I would work on sixth-century logic. I would translate a logical treatise by Boethius and annotate it. And that was the end of the discussion.

Because of Kretzmann, I did learn a lot of medieval logic, and I came to be grateful that I did. The Boethian treatise I worked on for my dissertation is part of what was called ‘the old logic’, which was the backbone of the medieval curriculum; failure to understand the old logic properly can lead an unsuspecting historian to seriously misinterpret a significant bit of scholastic theology, for example. And the fourteenth-century logic I didn’t want to work on is actually wide-ranging in its topics and brilliant in its treatment of them. So it was good for me to have learned as much as I did of the history of logic. It just wasn’t what I wanted to study or write about.

After I finished my dissertation, I had a one-year post-doc at Cornell and then a two-year post-doc when the first one ran out. During the second post-doc, as I remember these events, the Dean got funding for a tenure-track position in the history of Christian thought; and he invited the Classics Department and the Philosophy Department to compete for it. I was a candidate for the position from the Philosophy Department, and I gave a paper on philosophical puzzles about petitionary prayer in the history of Christian thought. On reviewing the competing candidates in the two Departments, the Dean decided to award the line to the Philosophy Department, which then offered me the position; and I accepted it. Two days later, I was told that the Philosophy Department had changed its mind and the offer was retracted. 

Clearly, there is a back story here, to which I was not then privy and which I don’t know now either. But the explanation then given to me was that in the end the Philosophy Department had found itself unable to stomach the idea of having a position in the history of Christian thought in a philosophy department, even if the Dean was giving it to them as a gift. Christian thought just wasn’t philosophy, and a fortiori neither was its history. It consoles me now to say that the paper I gave as the job talk for that position was published in American Philosophical Quarterly and has since then been reprinted seven times in various philosophical anthologies. In his excellent volume in the Cambridge Elements series on the philosophy of religion, Scott Davison calls it “the landmark article that started the contemporary debate about petitionary prayer.” 

In the circumstances, I resigned the last year of my post-doc at Cornell and went with my husband to Colgate University, where he had a visiting professorship. By that time I had already published a number of papers and had a book in press; so I sent my CV to the Colgate Philosophy Department and asked if they might be willing to accord me visitor status. In particular, I asked to be on their mailing list for colloquia, to have library privileges, and to be allowed to have a mailbox in the Department. In response, the Chair of the Department sent me a note explaining that if they put every faculty wife on their mailing list, it would be too much trouble for them. In addition, he said, faculty wives already had library privileges; they just had to bring a signed note from their husbands with them each time they needed something from the library. And, finally, he said, the Department would have nothing to do with my attempt fraudulently to present myself as a member of the Colgate Philosophy Department by having a mailbox there. 

In my new status as faculty wife at Colgate, I began to see that Kretzmann had been right about my dissertation topic, at least in some respects. I had learned a great deal about the history of logic through that work, and it was professionally fruitful. Cornell University Press published my dissertation, and I went on to translate a second Boethian logic treatise, twinned with the one in my dissertation, which Cornell University Press also published. Then the raft of my papers on the history of logic was also published by Cornell as a volume of my collected essays. And things kept going on in that way. I began to think that there was no way I could ever stop doing the history of logic.

It should be said that at that time, although the history of philosophy was actually burgeoning, many philosophers in the Anglophone tradition thought that the history of philosophy simply wasn’t philosophy, which is to say that it wasn’t worth doing. The history of medieval logic had become the new excitement in medieval philosophy; and, in the wider philosophical profession, medieval logic had this going for it, that it was a study of something that somebody sometime believed to be logic. What it had going against it was that the medieval idea of logic was so far removed from the symbolic logic then standard among Anglophone philosophers that, in general, medieval logic seemed as discountable as any other part of the history of philosophy.

There was also a dispute over the right way to do history of philosophy. Should one investigate a historical philosophical text only as a historical product of its times, or should one try to learn something philosophically significant from it? The contention over this issue was sharp at that time. Once in those early years, when I was at Columbia to give a lecture, one of the eminent faculty there took both my hands in his and said to me earnestly, “Let me give you some advice, young lady. Just stay totally away from Norman Kretzmann and his method of doing medieval philosophy.” On a later occasion when I was lecturing in Bonn on Aquinas’s meta-ethics, which rests on the correlation of being and goodness as Aquinas saw it, the senior professor in the audience said sternly to me, “Young woman, I have spent years going over the texts of Aquinas; and I can assure you that in his entire corpus there is no Latin word equivalent to ‘meta-ethics’.”

In my view, these disputes have been laid to rest now, not because we now all see things the same way but because those older disputes have been replaced by newer and more localized versions of the same kind of controversy. We now fight about whether the approach to the study of Aquinas which a senior scholar unwisely baptized ‘analytical Thomism’ is true to the thought of Aquinas or a travesty of it. We argue about whether the new developments in philosophical investigation of theological thought, which another senior scholar wisely and correctly labeled ‘analytical theology’, constitute a bridge between philosophy and theology or are simply an invasion of theology by aggressive, historically uninformed analytic philosophers. And so on.

In the spring of that year at Colgate, my husband got a job at Virginia Tech; and, as luck would have it, that university was also advertising a position in medieval philosophy, for which I applied and which I was offered. It isn’t possible to tell everything in one lecture, and so in this lecture I am largely leaving out stories about service and travel and teaching. These things are important to me, but I don’t know how to fit everything important to me into this lecture. I do want to say, though, that the students I taught at Virginia Tech stay in my mind as especially admirable and dear. As part of my position there, I had to teach humanities courses required of every student at the University. Those courses were hard to teach, not least because they were required; many students thought of them as an unwarranted intrusion into their much more valuable professional education. One engineering student whom I remember with fondness would come into my Hellenistic and Early Christian Humanities course carrying the New York Times in his hand. He would sit down in the front row, spread the paper out across his lap, and read it ostentatiously while I was trying to teach. I thought about throwing him out, of course, but I didn’t want to. I liked his defiance, and I wanted to see if he couldn’t learn to love what was worth caring about in what I was teaching. And finally he did. At one point in the course, he said to me, “You know, we are discussing Augustine over dinner, we are discussing Augustine in the dorms; heck, we are discussing Augustine in the showers.”

Somewhere in that same period, I found the rescue from the history of logic that I had been looking for. Routledge was in the process of commissioning their great series The Arguments of the Philosophers, and they had asked a well-known philosopher to do the Aquinas volume. I won’t comment on that invitation except to say that he turned them down and suggested that they ask me instead; and they did. At that point, I had no more credentials to do such a book than he did. But I asked Kretzmann if he would do the book with me, and he agreed; and Routledge gave us both a contract. From that point on, I turned down every invitation to write on the history of logic on the grounds that I was busy writing this Aquinas book. 

In the subsequent years, both Kretzmann and I tried to tailor our own teaching and research to studies of Aquinas; and we did some joint papers on various aspects of Aquinas’s thought also. I don’t know how many years elapsed between our signing the contract and the manuscript’s being sent to Routledge; but it was a lot. When I finally sent the manuscript in, I thought Routledge would surely reject it. It was years overdue, and it was more than double the word length specified in the contract. I think that the only reason Routledge accepted it anyway is that they had just recently rejected Marilyn Adams’s Ockham volume on the same grounds: it was very late and way over the word limit. They discovered their mistake when her book was published by a different press, and the entire scholarly world lauded it as the definitive study of Ockham, certain to be a classic in the field. Routledge didn’t want to take any chance on a repetition of such a mistake, and so they accepted my Aquinas book even in those circumstances.

And by the time I sent the manuscript in, it had become just my Aquinas book. Kretzmann was diagnosed with multiple myeloma in 1991, and we decided together that his remaining energies ought to go into trying to finish the projected three-volume study of Aquinas’s Summa contra Gentiles which he had in progress. He was finally bested by the cancer in 1998, still several years before the Aquinas volume was done. He was my teacher and mentor, and over the years he became my friend. I have no words to express what a loss his death was for me.

In all that time, I never lost my desire to write something on human suffering, to try to understand and somehow share with others what I thought I had glimpsed in that early study of Ecclesiastes. When I was invited to give the Gifford lectures, I felt as if I had finally been given the chance to write such a book. By then, I had accepted a full professorship at Notre Dame and then resigned it for an endowed Chair at Saint Louis University. It seemed to me that a person in my position who did not use the opportunity of the Gifford lectures to overstep the culturally imposed boundaries of the field was simply a wimp. So I made my Gifford lectures turn on a philosophically reflective study of four biblical narratives in order to show something significant about the nature of human suffering and the possible remedies for it. 

Because that work was likely to be seen as controversial by scholars in more than one discipline, I gathered together at the Ecumenical Institute at St. John’s in Minnesota a group of scholars to work over the lectures with me in painstaking detail before I published them. There were analytic philosophers, theologians with competence in historical biblical studies, and composers and musicians who also were theologians. The weeklong workshop in that great Benedictine monastery has remained for me one of the highlights of my career. The expertise and erudition of those scholars were matched only by their generosity. The memory of the excitement of working through the ideas of the book with them is still vivid to me, and so is the blessing of their friendship.

As I was in the process of finishing the revisions prompted by that workshop, I was asked to give the Wilde lectures and then also the Stewart lectures. I had my heart in the Gifford project as in nothing else I had ever done before, and I didn’t want anything to distract me. But then I saw that both the Wilde and the Stewart lectures could be used to build an infrastructure for the Gifford lectures. I could try to explain to my peers what narrative has to contribute to philosophical reflection, which is impoverished without it, and more fundamentally what kinds of human cognitive mechanisms explain the fact that narratives have such power. Those and other related topics became my Wilde and Stewart lectures, and then I labored to weave them into the already polished Gifford lectures. The effort slowed down the production of the book but also greatly improved it. 

In writing that book, Wandering in Darkness: Narrative and the Problem of Suffering, I relied heavily on my Aquinas book. It gave me the exposition and defense of the Thomistic worldview which I used in the Wandering book to sketch the possible world of a defense or theodicy. Nonetheless, the Wandering book itself eventually grew so big that I found I could not put into it everything I had originally intended and still thought needed to be added.

And so I set out to finish the reflection on human suffering by writing another book on atonement and the problems of human guilt and shame. Because of the religiously fraught topic and the controversial nature of the basically Thomist views I meant to explain and develop, I wanted to hear every possible objection before the work was done. And so, besides presenting the heart of the book as the Stanton lectures, I gave lectures based on the book at various schools, in various countries. Also, reading groups at Baylor, St. Andrews, and York worked through the manuscript; and a number of other philosophers and theologians commented extensively on one or more chapters. I tried to learn from all the comments and objections given me by these generous and accomplished people. Even so, as I had expected, the book met with opposition from some quarters when it appeared. But my favorite compliment on the book came from a reviewer who acknowledged that he had had to struggle to review the book charitably. He said that I reminded him of a person who climbs mountains without ropes. I don’t deserve that compliment, but it is my favorite one anyway.

By the end of the Atonement book, which was large also, I found that I still had not been able to stuff into it everything which I had originally intended to put into the Wandering book. I had not so much as touched on what is, in my view, one of the toughest parts of the problem of human suffering. 

To see this problem, suppose, just for the sake of argument, that there is a successful theodicy or defense against the argument from evil and an acceptable interpretation of the doctrine of the atonement, which shows how to remedy human guilt and shame. That is a lot to suppose, of course. But here is the point of the exercise. A theodicy or defense purports to show a morally sufficient reason for God to allow a person’s suffering, and that reason will generally include a description of a benefit to the sufferer that outweighs the suffering and couldn’t be gotten without it. But why, even so, is it not the case, as the book of Ecclesiastes seems to be claiming, that every human life is still a distressing or depressing variation on what it might have been? Even if there is a morally sufficient reason for God to allow a person’s suffering, why isn’t there nonetheless something for that person to mourn over in his suffering? Or think about the matter this way. Aquinas makes a distinction between God’s antecedent and God’s consequent will. The antecedent will is what God would have willed if everything had been up to God alone. God’s consequent will is what God actually wills given what human beings will. But why does this distinction not imply that God has to settle for plan B with regard to creation, so that God has reason to mourn over what his creation turned out to be? In my view, these and other reflections raise the problem of evil again in a new form, the problem of evil regrouped as a problem of mourning. 

And so I wrote one more book to deal with this remaining part of the problem of suffering and some of its correlative issues, including the notion of the true self, the relational character of human nature, truth in narratives, and a host of others. That book, The Image of God: The Problem of Evil and the Problem of Mourning, appeared last year. With that book, I have finally completed what I had originally set out to do in writing a book on human suffering. For sure and certain, that is the last book I will write on this topic.

If I were forced to try to convey in a single sentence the thesis of all these books, I would say it is the whole Christian worldview in all its complexity, the view that at the ultimate, irreducible foundation of all reality there is love in powerful, welcoming personal relationship. And if I had to sketch the solution in the Christian tradition to the problem of mourning in particular, I would put it this way. In one place, the Psalmist says to God, “In the presence of the angels, I will sing your praises.” It is one thing for a human being to sing praises to God; it is another thing entirely for a human being to sing those praises in the presence of the angels. How would the Psalmist dare? The answer lies in suffering love. Although on medieval angelology the angels in heaven are metaphysically greater than human beings, they do not suffer; and so they do not mirror the love of God manifest in Christ’s passion in their own love of God either. And so, on the Christian tradition, when the Psalmist or any sufferer sings the praises of God in the presence of the angels, the angels will want to listen because the song that is the life of the sufferer is more glorious than any song the angels sing.

At various times while I was writing these books at Saint Louis University, it was proposed to me that I ought to accept a position elsewhere. On one of those times, an eminent philosopher who was a friend of mine argued earnestly with me that I should accept an endowed chair being offered me at a different university; he said I owed it to God and the church to do so. But, I told him, maybe I am called to write books; and I can write better if I stay where I love and am loved. Well, he said impatiently, you would love and be loved at the other school too. It seemed to me, however, that loves are not as readily transferable as he supposed; and I turned down that offer too. Saint Louis University is a Jesuit school, and it attracts people whose love for something greater than themselves drives them. The same thing can be said about Aquinas Institute, the Dominican school in St. Louis.

Furthermore, unlike Benedictines, Jesuits and Dominicans are not bound to a particular locality; they wander freely, in effect. And all universities encourage travel. So over the decades of my career at Saint Louis University I have lived and worked in a community of people many of whom have been local with me, but some of whom have their primary homes in other places, not only in the US but also, for example, in Oxford, St. Andrews, York, Munich, Cracow, Melbourne, Buenos Aires, Jerusalem, Wuhan and elsewhere as well. I have no idea how to express gratitude adequately for all that I have been given by this community, although I have tried to do so in the preface to one book after another. And something analogous needs to be said about the students. In my years at Saint Louis University, I have had many superb and admirable students; and I couldn’t be more proud of what they have accomplished in their own work or more fond of them as the people they are. I don’t accept the mantra I was raised with, Selbstdisziplin ist alles (self-discipline is everything); and I don’t accept either the notion of intelligence as it is explained by those who construct intelligence tests. But I do believe that love can accomplish great things, and I have been blessed to live and work with many great and loving people who have been part of my local or extended community.

So here is the end of my story. The practice of philosophy is much deeper and richer now than it was when I was young in the discipline. And however great the remaining injustices against those marginalized in the profession are, there has nonetheless been much welcome progress in recognizing and combating them. I don’t want to add here a list of such remaining injustices. I mean to refer to every significant injustice currently still staining our profession, and I don’t see why I should suppose that I myself am capable now of recognizing all of them, or even just the most important of them. For my part, the books on the problem of suffering are finished, the current crop of students are on their way, and I am going into phased retirement at the end of this year. I have done what I gave my heart to, in an inchoate way, as an undergraduate. There is still a lot of work waiting for me, of one sort or another, and I am looking forward to being immersed in it. But central to it is just one remaining paper on Ecclesiastes, whose paradoxical saying about beauty in the lives of suffering human beings I think I have finally understood.