
The past is never dead. It's not even past.

Not Even Past

Honorable Mention of 2013 Essay Contest: Covered with Glory: The 26th North Carolina Infantry at Gettysburg by Rod Gragg (2000)

by Adrienne Morea

Harry Burgwyn was twenty-one years old when he led more than eight hundred soldiers of the 26th North Carolina Infantry into battle at Gettysburg on July 1, 1863. Two and a half days later, after two bloody assaults, fewer than one hundred remained fit for duty. According to some calculations, the 26th North Carolina “incurred the greatest casualties of any regiment at Gettysburg” (Gragg 210). Despite these losses, the 26th rebuilt itself and continued fighting for an additional twenty-one months.

This fascinating regiment is the subject of Rod Gragg’s Covered with Glory: The 26th North Carolina Infantry at Gettysburg. As the subtitle indicates, the majority of the book covers the Gettysburg campaign, but it is also an admirable history of the 26th North Carolina and its role in the American Civil War, from the regiment’s establishment in the summer of 1861 to its surrender at Appomattox and the postwar lives of its survivors.

This book is the story of the men and their regiment. By and large, it is not about politics, nor is it an argument about the causes or broad issues of the war. It is a narrative of the experiences of men and boys, in camp, on the march, and on the battlefield. Such a detailed, personal view can enhance anyone’s understanding of the monumental history involved.

Readers make the acquaintance of many Tar Heels, from privates to generals, who fought in or were closely associated with the 26th North Carolina. This regiment was remarkable for the youthfulness of its commanders, several of whom were college students before the war. Colonel Henry King Burgwyn Jr. had graduated from two institutions of higher education before he was twenty. Major John Thomas Jones, twenty-two, had been a schoolmate of Burgwyn. Lieutenant Colonel John Randolph Lane turned twenty-eight the day after the fighting at Gettysburg ended. At thirty-four, Brigadier General James Johnston Pettigrew, who commanded the brigade that included the 26th, was already an accomplished scholar in several disciplines. The officers are important and engaging characters, but they are not the entire story. Readers also meet lowlier fellows such as Private Jimmie Moore, a farmer’s son who was fifteen when he enlisted and seventeen when he was wounded at Gettysburg, and Julius Lineback, a slight, observant musician of twenty-eight.

Gragg tells the tale with eloquence, with great affection for the men of the 26th, and with respect for their opponents in blue. Covered with Glory is a work of nonfiction, but it is also a fine piece of storytelling. Sixteen pages of images help to put faces on the people in the text.

We are now in the sesquicentennial year of the Gettysburg campaign. This is a fitting time to study the events and people of the Civil War. As Lane said in a postwar speech, the story of the men of the 26th does not belong only to North Carolina or to the South, but rather it is “the common heritage of the American nation” and represents “the high-water mark of what Americans have done and can do” (Gragg 245). If you are interested in the American Civil War, in nineteenth-century life, or in military history, you should read this book. If you are or ever have been a college student in your twenties, you should read this book.

Photo credits:

Unidentified Union soldier, 1860-1870 (Image courtesy of Wikimedia Commons)

And be sure to check out Kristopher Yingling’s winning submission to Not Even Past’s Spring Essay Contest.

Filed Under: 1800s, Periods, Regions, Reviews, Slavery/Emancipation, Topics, United States, War Tagged With: Civil War, Gettysburg, Undergraduate Essay Contest, United States

Winner of Spring 2013 Essay Contest: Survival in Auschwitz by Primo Levi (1956)

by Kristopher Yingling

In Survival in Auschwitz, Primo Levi depicts a life in which, under the severe conditions of hunger, cold, illness, and constant fear, men are transformed into beasts, and justice and morality become insignificant in the fight for survival. Upon entering Auschwitz, families are separated and hundreds are immediately sent to their deaths. Tattooed and given serial numbers as their new identities, many forget their own pasts and their names.


Initially, Levi accepts his imminent death as everyone emphasizes that the only exit from Auschwitz “is by way of the chimney.” But for the next year, Levi learns to live pragmatically and efficiently under the Nazis’ continuous brutality. In doing so, he discovers two very different reactions to life in Auschwitz: those who are “saved” and those who are “drowned.” The drowned rarely exist in normal society, but in Auschwitz, they are everywhere. For many, the constant cruelty dehumanizes them to the level of animals; these men accept their fate and work themselves to certain death. Conversely, a few manage to push back and use their strength, intelligence, and patience to fight relentlessly within themselves against Nazi enslavement. It is only this perspective that gives men the determination and mental fortitude to survive.

Eventually, the prisoners’ bodies were worn down to such desperate condition that every man fended for himself. Friendships were based on pragmatism and selfish interest. Faced with utter solitude, many prisoners would lose all motivation for survival. Of the thousands who entered Auschwitz weekly, only a few hundred would survive. These usually included the most valuable of prisoners, such as doctors, tailors, and shoemakers. Levi and many other camp veterans understood this and worked to make themselves appear useful and not to drown in their detestable conditions. In doing so, the saved depended on the “underground art of economizing on everything, on breath, movements, even thoughts.” In the process, Levi witnesses how this strategy often leads the saved to social savagery and how “the struggle for life is reduced to its primordial mechanism.”

Jews from the Carpathian Ruthenia region arrive at Auschwitz, May 1944 (Image courtesy of United States Holocaust Memorial Museum)

Auschwitz represented a period of incomprehensible extremes, where the terms hunger, “tiredness, fear, pain,” and winter do not carry their normal, societal meanings. Levi “entered the camp like all others: naked, alone, and unknown.” However, he quickly learned the importance of defiance. By maintaining his humanity and individualism, he avoided becoming part of the drowned, anonymous mass of beings marched into the gas chambers. Shipment after shipment, men worked themselves to death in silent desolation. But for Levi, these men had already died. Their names and memories left this world the moment they arrived. Through the crematorium’s smoky black ash their existence was forever forgotten, never even known by the few who witnessed their fate. Fortunately for Levi, rescue came before his breaking point. However, towards the end, even he showed signs of the “drowned.” As French prisoners tell Levi of the Germans’ retreat, he “no longer felt any pain, joy, or fear.” By then, it was merely a matter of fact.

And be sure to check out Adrienne Morea’s honorable mention submission to Not Even Past’s Spring Essay Contest. 

Filed Under: 1900s, Europe, Periods, Regions, Reviews, Topics, War

Einstein, Relativity and Myths

by Alberto A. Martínez

We’ve all heard of the theory of relativity, but what factors really led Einstein to that famous work? In this fascinating talk, Professor Al Martínez discusses how young Einstein formulated relativity, focusing on debunking several historical myths. His talk is based on his books Science Secrets: The Truth About Darwin’s Finches, Einstein’s Wife, and Other Myths (2011) and Kinematics: The Lost Origins of Einstein’s Relativity (2009).

Al Martínez’s piece about Einstein’s religious beliefs

Michael Stoff’s piece about the evocative “Einstein Letter”

Filed Under: 1900s, Discover, Features, Science/Medicine/Technology, United States Tagged With: digital history, history, History of Science, US History

Papal Resignation: What the News Media Left Out

by L. J. Andrew Villalon

When Pope Benedict XVI announced his resignation, effective February 28, 2013, he caught almost everyone by surprise. No sooner was the announcement made than the media began casting about for how long it had been since a pope had resigned rather than die in office. The morning after the announcement, one TV show host stated confidently that it had been 719 years, a number that takes us back to the reign of Celestine V, who resigned in 1294. Later, however, a consensus emerged among the various news shows that the most recent resignation of a pope had actually come just under six centuries ago (1415) and involved Pope Gregory XII.

Having determined a date (or rather, two dates), the media now seems content to move on without conducting even the most perfunctory investigation into these medieval precedents or any consideration of how they might relate to the latest instance of what is an exceedingly rare event in Christian history.  Were the resignations of Celestine V and Gregory XII in any way comparable to that of Benedict XVI?  Were the “resignations” of these two medieval popes actually resignations? And which resignation serves as a better precedent for the current pope’s decision?

Pope Celestine V

Let’s start with Celestine V.  Born in Sicily of humble origins, Pietro da Morrone, Pope Celestine V, was a hermit and member of the Benedictine Order whose life spanned most of the thirteenth century. He is known primarily for having founded the Celestine Order, a highly ascetic branch of the Benedictines, and for having allowed himself to be elected to the papacy at age 79, against his better judgment and despite being temperamentally unfit for the job. He surrendered to the entreaties of others, including two contemporary kings, only after the papacy had been vacant for two years!  After serving only five months, however, he clearly had had enough. He resigned, having first issued a papal decree justifying such a course of action for a pope.  Among other reasons, he cited both his health and a desire to lead a purer and more tranquil life.

Despite his wishes, Celestine was not permitted to resume his former lifestyle. Fearing that he might become a focal point of opposition, his successor and one-time friend, Benedetto Caetani, the formidable Boniface VIII, had him imprisoned. Celestine died ten months later, perhaps with some help from Boniface. In 1313, less than two decades after his death, he was canonized at the instance of the French king, Philip IV “the Fair.”

Just after this period, in the fourteenth and early fifteenth centuries, the western church was rocked by two related events known as the Avignon Papacy (1305-1378) and the Great Schism (1378-1417). During the first of these, a series of French popes left Rome and took up residency in the southern French town of Avignon on the Rhone River. In 1377, the last of these French popes, Gregory XI, returned to the Holy See, but died within a year, setting the stage for the first papal conclave to be held in Rome in seven decades.  In April 1378, the cardinals who had accompanied Gregory back to Rome elected an abrasive Italian churchman who took the title Urban VI.  Five months later, those same cardinals, highly disillusioned with their original choice, declared the first election invalid due to its having been conducted under threat from the Roman crowd, which was demanding the election of an Italian pope. They then selected another pope, a relative of the French king, who took the papal name Clement VII and reestablished himself in Avignon. For nearly forty years, this division endured as the west was treated to a vision of two and eventually three squabbling popes, hurling anathemas and even preaching crusades against one another.  Meanwhile, European nations chose sides, with about half supporting Rome and half supporting Avignon. The original division became self-perpetuating when, upon the death of each rival pope, his supporters elected a successor.

Gregory XII was the last of the Roman popes to reign during the Great Schism. Although his career is nowhere near as interesting as Celestine’s, it does shed useful light on the whole issue of resignation.  Gregory was chosen pope in 1406 on the condition that if his rival at Avignon, Benedict XIII, were willing to resign he would do the same, thus clearing the way for an end to the schism. Ultimately, neither man proved willing to take this step.

Pope Gregory XII

As a result, disgusted cardinals from both camps met and called for a council of the church at which they planned to depose both popes and elect a single successor.  The council met in 1409 in the Italian city of Pisa.  Both popes were invited, and when both failed to show up, the cardinals deposed them in absentia as heretical schismatics who were scandalizing the church. Afterwards, the council selected a new pope, Alexander V, who soon died and was replaced by a worldly soldier-turned-churchman, Baldassare Cossa, Pope John XXIII, whose only real virtue seems to have been that he could defend the council.  Since neither of the other two popes accepted dismissal, for the next six years the western church had three popes.

In October 1413, another church council was called at the instance of King Sigismund of Germany and Hungary (later the Holy Roman Emperor); it began to meet north of the Alps, in the city of Constance. By 1417, the Council of Constance had reunited the Church under a single pope, Martin V, thus putting an end to the Great Schism. As part of the process, on July 4, 1415, Gregory XII’s representative at the council tendered his resignation in absentia.

Council of Constance

Technically, Gregory resigned, but not with any great willingness. The resignation itself was of questionable status given that six years earlier the Council of Pisa had already deposed Gregory and declared him both a heretic and a schismatic. For such a figure to now be allowed to resign is, to say the least, highly ironic.  In effect, the Council of Constance was sweetening the deal in order to get Gregory’s blessing on any election that would follow. It did so by cancelling the earlier deposition, expunging his conviction as a heretic and schismatic, recognizing him as the Dean of the College of Cardinals, and allowing the family members he had appointed as cardinals (despite his earlier commitment not to undertake any such appointments) to retain their positions.

While the council would undoubtedly have made similar concessions to Gregory’s rivals, neither man proved willing to renounce his position.  As a result, John XXIII was deposed, tried and found guilty of heresy, schism, and a whole range of more mundane crimes, and imprisoned.  Even he, however, was eventually forgiven and made Cardinal Bishop of Tusculum.

The third pope, Benedict XIII, living safely outside the reach of the council in his native Aragon, continued to claim that he was the only legitimate pontiff.  As his last act on earth, he appointed four new cardinals and charged them with electing his successor. In a complex election, they produced two more popes, Clement VIII and Benedict XIV, both of whom later abdicated. Although the resignation of both of these men postdated by a decade that of Gregory XII, neither is recognized as the most recent to resign since neither is recognized by the church as a true pope.

Despite the international media consensus, there is no comparison between the resignation of Gregory XII and that of the current pope. At the time of his departure, Gregory’s legitimacy was already very much in question. He was under great pressure to resign and would have faced removal and possible imprisonment had he refused. Instead, for his willingness to go along, he became dean of the College of Cardinals. By contrast, no one would question the legitimacy of Benedict XVI. No pressure to resign accounted for his decision; it was completely voluntary. In short, anyone who looks to Gregory XII as a precedent for the present situation is ignoring the very different historical context that surrounds the two resignations.

On the other hand, the resignation of Celestine V is much more in line with Benedict’s.  Both men willingly (in Celestine’s case, eagerly) chose to leave the papacy and its strenuous demands.  Both cited their health as one reason for quitting a job they were no longer physically able to perform. If there is a major difference, it might lie in Celestine’s wish to lead a purer life by escaping the corrupting influences of church government; given Benedict’s history as an “organization man,” it seems unlikely that he would concur in these sentiments.

Even with that difference, it is fairly certain that Benedict would point to Celestine rather than Gregory as an inspiration for his decision to retire. Perhaps he was already thinking along those lines when he proclaimed the year from August 2009 to August 2010 as the Year of Saint Celestine.

Images via Wikimedia Commons

Filed Under: 1400s to 1700s, Europe, Features, Film/Media, Religion Tagged With: Papal resignation

State of Virginity: Gender, Religion, and Politics in an Early Modern Catholic State by Ulrike Strasser (2004)

by Julia M. Gossard

Munich’s central square, Marienplatz, is best known today for its magnificent Rathaus-Glockenspiel that delights tourists and townspeople alike with its melodies. But until the nineteenth century, the square’s main attraction was a golden pillar adorned with the Virgin Mary known as the Mariensäule.  Still standing today, the Mariensäule is a reminder of the religious reformations Bavaria endured as well as the Bavarian state’s early attempts at centralization and modernization in the seventeenth and eighteenth centuries.

Erected in 1638 by order of Elector Maximilian I of Bavaria to thank the Virgin for protecting the city from an attack by Protestant Swedes during the Thirty Years War, the Mariensäule not only represented Maximilian’s fervor for Catholicism but also, as Ulrike Strasser writes, his use of “virginity as a master metaphor to elaborate ideas about good governance and a functioning society.” Usually used to imply innocence, purity, and occasionally frailty, images of virgins and virginity were among Maximilian’s strongest metaphorical tools.  State of Virginity explores how Maximilian employed female virginity to increase patriarchal power, limit female agency, and facilitate Bavaria’s centralization.

Drawing on a wide variety of archival documents including Bavarian laws, civil court records, ecclesiastical court documents, and select convents’ records, Strasser investigates the ways in which marriage, family organization, and female religious life changed as a result of the new emphasis placed on virginity as the female moral and political ideal.   Strasser explains that judicial records are useful to her study because they show how individuals explained their own behavior, emotions, and identities under the eye of powerful institutions. These records permit her to observe the state or the church at work, and to see how people reacted to mandates from above.

Starting with an examination of Bavarian marriage, Strasser notes that people explained their attitudes toward marriage and sexuality in the context of competing religious and secular judicial discourses.  The Catholic Church wished to have all couples marry, regardless of social status, in order to affirm their respect for the sacrament of marriage and avoid licentious behavior.  The state, on the other hand, took a rather paradoxical approach to marriage with its establishment of Munich’s marriage bureau.  Of the utmost importance to the marriage bureau was a bride’s virginal status.  If a woman was not a virgin, the union was unlikely to be approved by the marriage bureau.  The state saw this virginal prerequisite to marriage as a way to prevent poor people from procreating outside of marriage, and reduce sexually licentious unions. However, in addition to virginal status, the marriage bureau also scrutinized the financial stability of couples.  On top of remaining chaste, the prospective spouses also had to prove they were capable of providing for a family.  For poor couples, this was often difficult to achieve.  Therefore, the creation of the bureau resulted in marriage becoming a type of social status reserved for the upper echelons of society.

By making the prerequisites of marriage so strict, Bavarian authorities required women to “uphold the boundaries of a new social and sexual order” that made virginity a moral obligation, among both upper and lower classes.  When wealthy women remained chaste, their families’ economic interests and possible alliances with other wealthy families remained intact, benefitting both the families and the state, which relied on these families for money and support.  When women from the lower sorts remained chaste, the state believed the number of illegitimate children and single mothers would greatly decrease. This would also further strengthen the patriarchal household that the Catholic state viewed as being essential to an orderly and stable society.  Although virginity became the female moral and political ideal, as Strasser argues, that was often difficult for women of the lower sorts to achieve.  With marriage being denied to poor couples, these couples entered into nonmarital sexual relationships that were not sanctioned by the state.  Strasser hints that the “perpetual state of virginity” that the state advocated for women who were denied marriage by the bureau was simply an unrealistic goal.  One of the only institutions that guaranteed a perpetual state of virginity for women was a convent. However, just like marriage, in the seventeenth century, Bavarian cloisters turned away poorer women and increasingly became depositories for elite, unmarried women. Though groups of unmarried, uncloistered virgins, like the English Ladies, were established, they too consisted of “honorable women,” meaning those from the upper-middling classes or the elite.  Although poor women may have remained chaste, the Bavarian state began to view unmarried and uncloistered poor women, regardless of their individual virginal status, as a “social and sexual threat” to the Bavarian state.


With marriage, family, and the convent all becoming elite institutions, what happened to the unmarried, poor, virginal woman?  Are we to believe that she merely succumbed to “the sins” of the lower sorts and entered into profligate relationships?  Strasser suggests, without much evidence, that the new marriage regulations and convent restrictions may have strengthened the state’s control over noble society but actually led to more relationships outside of marriage among the lower classes. Despite this lack of evidence, State of Virginity is an innovative piece of scholarship. Other studies have focused solely on the impact that this new “virginity” had on women’s experiences, but while Strasser does include the effects on women, her most poignant arguments explain how the state’s regulation of virginity brought about changes in societal structure, specifically the centralization of the Bavarian state. State of Virginity successfully repositions the role of the female sexualized body as a factor in the strengthening of Bavarian patriarchy and the process of state building under Maximilian I.

Photo Credits:

Maximilian I, Elector of Bavaria, with his wife Elisabeth Renée of Lorraine, 1610 (Image courtesy of Wikimedia Commons)

Hand-colored illustration of Maximilian I at the age of 11 (Image courtesy of Penn Provenance Project)

Munich’s Rathaus-Glockenspiel (Image courtesy of Wikimedia Commons)

 

Filed Under: 1400s to 1700s, Europe, Periods, Regions, Religion, Reviews, Topics Tagged With: catholicism, european history, gender, religious history

A Rare Phone Call from One President to Another

by Jonathan C. Brown

“Señor Presidente,” Lyndon Baines Johnson said via a long-distance telephone call from the Oval Office.  “We are very sorry over the violence which you have had down there but gratified that you have appealed to the Panamanian people to remain calm.”  President Johnson often talked politics on the phone but seldom with foreign leaders.  Johnson, who had just succeeded to the presidency of the world’s most powerful country, was speaking to the head of state of one of the smaller nations of the Western Hemisphere.  The call marked the only time that Johnson spoke to a Latin American counterpart by telephone during his presidency—a fact that demonstrates how serious he considered the situation.  This unique president-to-president phone conversation occurred on January 10, 1964, following the first full day of riots by Panamanian youths along the fence line between Panama City and the U.S.-occupied Canal Zone. It was the first foreign crisis of the Johnson presidency.  Johnson’s call was translated by a Spanish-speaking U.S. Army colonel, transcribed by the White House staff, and preserved in the archives of the LBJ Presidential Library and Museum.


Remarkably, Panamanian President Roberto F. Chiari more than held his own in this conversation between unequal powers.  “Fine, Mr. President,” responded Chiari, “the only way that we can remove the causes of friction is through a prompt and thorough revision of the treaties between our countries.”  Johnson answered that he understood Chiari’s concern and said that the United States also had vital interests connected to this matter.

But Chiari did not relent.  “This situation has been building up for a long time, Mr. President,” said the Panamanian head of state, “and it can only be solved through a complete review and adjustment of all agreements. . . .  I went to Washington in [1962] and discussed this with President Kennedy in the hope that we could resolve the issues,” the Panamanian chief executive explained.  “Two years have gone by and practically nothing has been accomplished.  I am convinced that the intransigence and even indifference of the U. S. are responsible for what is happening here now.”

Anti-American protests and violence occurred frequently in the decade of the 1960s.  Why did President Johnson consider that this riot in Panama amounted to an international crisis that he had to handle personally?

A number of factors explain the importance of Panama to American foreign policy during the early days of the Johnson Administration.  Johnson had just assumed office following the assassination of the popular John F. Kennedy, who had successfully faced down the Russians in the October 1962 missile crisis.  Johnson undoubtedly felt that he also needed to prove his toughness in foreign affairs.  His presidential legitimacy was at stake.


Moreover, the Cuban Revolution of 1959 and its challenge to American hegemony in the hemisphere posed a threat that Communism might take over another Latin American nation.  No sitting president would win reelection if a “second Cuba” occurred during his watch, and exaggerated reports from Panama were pouring into the White House warning of Communist agents active in the violence.  President Johnson already had his eye on the 1964 presidential election coming in just ten months.

Finally, the existence of the U.S.-controlled Canal Zone was becoming a prominent issue in Inter-American relations.  The zone itself consisted of a ten-by-fifty-mile swath of land surrounding the inter-oceanic canal in which about five thousand English-speaking administrators, operators, and military personnel lived.   It divided the Spanish-speaking nations of Mexico and Central America from those of South America.  To many, the Panama Canal symbolized U.S. domination over the entire hemisphere.

The Zone also nurtured a colonial mentality among its civilian workers, many of whom had spent most of their adult years there. Surrounded by impoverished Panamanians, the three thousand American citizens operating the Panama Canal tended to be exceptionally patriotic, even jingoistic.  Some had never ventured into Panama City.  Time magazine once called the Zonians “more American than America.”  Many households had Panamanian or West Indian maids and gardeners.  Yet the Zonians disdained the Panamanians and refused to fly their national banner.  According to the canal treaty dating from 1903, the United States occupied the Canal Zone but the Republic of Panama retained sovereignty over the strip of land that split the nation into two parts.  Panamanians were demanding that the Americans raise the flag of Panama too.  Presidents Dwight D. Eisenhower and Kennedy agreed, and each had ordered the joint display of both national banners in the Canal Zone.

However, the Zonians and their school kids disobeyed the presidential mandates.  American residents of the Canal Zone, who voted absentee in US elections, enjoyed strong support on Capitol Hill, and Senators and Congressmen encouraged their opposition.  Congress refused to increase the payments to the government of Panama for the lease of the Canal Zone lands, and the Senate stymied the renegotiation of the 1903 treaty.


After six decades of American intransigence, Panamanian students had had enough.  On January 9, 1964, they entered the Canal Zone throwing bricks and smashing windows.  Arsonists set fire to automobiles and buildings.  Fidel Castro’s revolution in Cuba and his anti-American speeches inspired some of the rioters.  Others reacted to the nation’s shame about its dependency on foreigners and the presence of the U.S. Armed Forces and the civilian Zonians.

Under these circumstances, American troops stationed in Panama were called out to defend the Canal Zone with small arms fire.  During the several days of rioting, twenty-one Panamanians and four American soldiers lost their lives.  The wounded numbered in the hundreds.  In the final analysis, President Lyndon Johnson’s extraordinary phone call to the Panamanian head of state marked the beginning of a long process of negotiations that culminated, thirteen years later, in the treaty ceding control of the inter-oceanic canal to Panama.  President Jimmy Carter and the popular Panamanian dictator General Omar Torrijos signed this agreement at the White House in 1977, and the final transfer came in 1999.

Permit me to add a personal postscript.  This research forms part of one chapter of my book manuscript on United States-Latin American relations in the turbulent decade of the 1960s.  I myself played a minor role in the drama.  During the 1964 Panamanian flag riots, I was an undergraduate student and cadet in the Reserve Officer Training Corps at the University of Wisconsin, Madison.  Later, as a Second Lieutenant in the U.S. Army, I took up my first foreign assignment at Fort Amador on the Pacific side of the Panama Canal.  I arrived in December of 1968, just two months after the coup d’état by which Lieutenant Colonel Torrijos had seized power.  Now I am writing the history through which I have lived.


You can read a full transcript of the conversation here and listen to the audio in the video below (it begins after a brief intro):

 

You may also like:

Mark Atwood Lawrence’s piece about LBJ’s 1964 conversation with McGeorge Bundy on Vietnam.

Photo Credits:

Secretary of State Dean Rusk, President Lyndon B. Johnson, and Secretary of Defense Robert McNamara at a meeting in the Cabinet Room of the White House, 1968 (Image courtesy of the United States Federal Government)

Panamanian President Roberto F. Chiari (Image courtesy of La Estrella)

The U.S. battleship Missouri traveling through the Panama Canal, October, 1945 (Image courtesy of the United States Navy)

Images used under Fair Use Guidelines

Filed Under: 1900s, Discover, Features, Latin America and the Caribbean, Politics, United States Tagged With: LBJ, Lyndon Johnson, Panama, Panama Canal, Roberto F. Chiari

The Founders and Finance by Thomas K. McCraw (2012)

by Mark Eaker

Thomas McCraw argues that there was something in the background of immigrants to the United States that distinguished them from native-born Americans and contributed to their suitability to become Secretaries of the Treasury. Including those born in Africa, less than 8 percent of the population was foreign born, and yet four of the first six Treasury Secretaries were immigrants. They served in that capacity for 78 percent of the period from 1789 through 1816. McCraw makes his case based almost entirely on the two most important of the Treasury Secretaries, Alexander Hamilton and Albert Gallatin.

Most of the Founders, like Washington, Jefferson, and Madison, under whom Hamilton and Gallatin served, were raised as wealthy members of the planter class. Their experiences and lifestyles revolved around agriculture and large landholdings. In contrast, Hamilton and Gallatin both had early exposure to merchant activities in which they developed knowledge of markets and finance. Hamilton had little interest in land and agriculture, and although Gallatin had a romantic notion of land and the West, he was not successful as a landowner.

Although plausible, the argument is not very convincing. First, McCraw provides no evidence to connect immigrants in general with a merchant background. He does not even make that connection with the two other Treasury Secretaries who were immigrants. Second, Gallatin shared the Republican view that land and agriculture were of paramount importance to the future of the country even though he came from an urban background.


A daguerreotype of Albert Gallatin, taken sometime between 1844 and 1860 (Image courtesy of the Library of Congress)

Fortunately, the linkage between Hamilton and Gallatin’s service to the country and their immigration status is not very important in assessing the contributions that the two men made. McCraw makes the case that they were the most dominant cabinet members in the administrations in which they served. The two of them, along with another immigrant, Robert Morris, who served as Superintendent of Finance under the Articles of Confederation, were largely responsible for establishing the foundation of the country’s economic policy.  They made the new nation creditworthy by implementing a national tax system that reduced the Revolutionary War debt of the states and by establishing the Bank of the United States, which provided a stable supply of currency.


A 1791 draft of Alexander Hamilton’s “Report on Manufactures,” a treatise on American manufacturing (Image courtesy of Library of Congress)

Historians often emphasize the policy differences between Hamilton and Gallatin, but the similarities are much more important. Both men understood markets and the importance of national creditworthiness. Hamilton was instrumental in the first battle to establish the Bank of the United States and Gallatin convinced Jefferson of the need to renew the charter. The policies that each supported were less a function of their views than the views of the principals for whom they worked. Washington was a committed nationalist who believed that the Federal government should take the lead in fiscal matters. Jefferson and Madison believed in a minimal role for the Federal government and more authority for the states. Hamilton and Gallatin provided policy recommendations consistent with the beliefs of their Presidents and the functioning of the market.  Both men were pragmatists who placed an emphasis on what would work rather than on ideology.


Statue of Alexander Hamilton in front of the United States Treasury Building, Washington, DC. (Image courtesy of Wikimedia Commons)

It is one of the great ironies of the era that had Jefferson and Madison prevented the establishment of the Bank of the United States, the Louisiana Purchase would likely not have been possible and the United States would have had difficulty fighting the War of 1812.

The Founders and Finance provides a valuable historical perspective on our current fiscal problems. The nation has confronted from its earliest days questions about our fiscal policies and the potential answers to them.  McCraw died within three weeks of the book’s publication, but not before he wrote an op-ed piece for The Wall Street Journal.  In that essay he applied the lessons of his book to our current fiscal crisis. McCraw did not offer a specific plan, but he argued the need for the type of leadership that Hamilton and Gallatin provided the nation in its first three decades.

You may also like: 

Mark Eaker’s review of Lords of Finance, a history of the most influential central bankers of the 1920s

Filed Under: 1400s to 1700s, 1800s, Business/Commerce, Capitalism, Periods, Regions, Reviews, Topics, United States Tagged With: economic history, immigration, United States

The Sapphires (2012)

By Kristie Flannery

Wayne Blair’s The Sapphires is the best new historical film that you most likely have not seen yet.  It is based on Tony Briggs’ 2004 play of the same name and premiered at the Cannes Film Festival in 2012.  It tells the story of four Aboriginal women singers from the Cummeragunja Mission in rural New South Wales, Australia, who travelled as “The Sapphires” to Vietnam in 1968 to entertain US troops there.  The film is based on a true story that Briggs knows intimately: his mother, Laurel Robinson, and her sister, Lois Peeler, were members of the band.


Music is the highlight of the film. Deborah Mailman, Jessica Mauboy, Shari Sebbens, and Miranda Tapsell have amazing voices and create beautiful harmonies.  It is a pleasure to watch The Sapphires wearing brilliant costumes and performing covers of American soul hits from the late 1960s including “What a Man” and “I Heard It Through the Grapevine” in front of rowdy soldiers in darkly-lit Saigon bars and rural military camps.  Off stage we hear The Sapphires sing the gospel song “Ngarra Burra Ferra” in Yorta Yorta, the native tongue of the original band-members.

In Australia, critics have welcomed the film as a feel-good story about Indigenous people that offers relief from the harsh themes of poverty, violence and drug and alcohol abuse that have been prominent in other recent films about Aboriginal communities, such as Samson and Delilah.

This does not mean that Blair has ignored issues that still shape the lives of Aboriginal people and make many Australians uncomfortable.  Racism is an important theme in The Sapphires.  We are introduced to the band at a talent quest in a country pub – a place where Aboriginal patrons are not welcome.  We watch white audience members leave rather than listen to the sisters sing.  The film also explores how the war-torn Vietnam of the 1960s was a place where Aboriginal people – in this case the members of a band – interacted with African Americans for the first time in their lives, and suggests that such encounters initiated the formation of transnational conceptions of black identity.

RAAF_TFV_HD-SN-99-02052

Members of the Royal Australian Air Force arrive at Tan Son Nhut Airport, Saigon, August 10, 1964 (Image courtesy of the U.S. Government)

The problems of “the stolen generation” are also addressed.  Approximately 100,000 Indigenous children in Australia were removed from their families and sent to be raised in white families before the 1960s. One member of the band, Kay, is criticised for ostensibly abandoning her Aboriginal identity and living as a white woman, which is facilitated by her light skin tone.  It is eventually revealed that the government forcibly removed Kay from her family as a child and placed her with a white family in a city far away from home. Kay struggles to make sense of her indigeneity after a long separation from her family members.  The Sapphires’ experience in Vietnam was perhaps unique, but the wounds created by the forced removal of children are still felt by many people.


The four Aboriginal singers–sisters Laurel Robinson, Lois Peeler and their cousins Beverley Briggs and Naomi Mayers–who inspired the film (Image courtesy of Hopscotch Films)

It would be unfair to say that The Sapphires romanticizes war, but it is disappointing that the film makes no attempt to delve into the complex politics surrounding the Vietnam War in Australia.  Certainly the film addresses the dangers that entertainers confronted in a war zone – for example, on one occasion the camp where the band is performing is bombed. But the version of The Sapphires’ story told on the big screen conveniently erases opposition to the Vietnam War that manifested in a vibrant culture of protest.  Back in 1968, two original members of the band, Naomi and Beverly, refused to join the Vietnam tour because they were staunchly opposed to the conflict and US imperialism in South-East Asia.  They had participated in Melbourne’s large moratorium movement, and could not bring themselves to entertain soldiers fighting a war they understood as fundamentally immoral. This is a glaring hole in the story; such opposition should not have been excluded.  Films about our past are as valuable for what they exclude as for what they include.

You may also like:

This oral history of Aboriginal Australians’ experiences as “The Stolen Generation”

Filed Under: 1900s, Australia and Pacific Islands, Biography, Fiction, Music, Pacific World, Race/Ethnicity, Reviews, Transnational, United States, War, Watch Tagged With: 1968, Australian Aborigines, The Sapphires, Vietnam

Pinching and Swiping, or How I Won the Digital War


Battle of the Bulge
Shenandoah Studio
iPad, version 1.0.3

I have been refighting the Second World War my entire life. My campaign began with the board game Axis & Allies and continued on the computer with Panzer General and Close Combat. I spent hours as a teenager designing scenarios for the war in Civilization II, with a computer mouse in one hand and my history textbook in the other. I particularly enjoyed creating scenarios in which the player had to run an airlift over enemy territory in order to resupply a beleaguered city – allowing me to relive my grandfather’s stories about flying over the Hump. Perhaps this pastime reeks of warmongering, but I’ve always looked at it as glorified puzzle solving with a dash of history. I didn’t know it at the time, but these gaming experiences represented my first foray into historical research: I checked out books from the public library on particular campaigns so that I could provide the proper context and I studied my father’s world atlas to make sure I had the topography correct. I didn’t just want to have fun. I also wanted to get it right.

Shenandoah Studio is also interested in getting it right and having fun at the same time. Their game, Battle of the Bulge, is a strategic simulation of Germany’s surprise counteroffensive in 1944, which was called the “Battle of the Bulge” because of the bulge the campaign created in maps of the frontline at the time.

The battle represented Germany’s last attempt to salvage the war before the country was completely encircled. The second half of 1944 found German troops in steady retreat as American and British forces broke out from the Normandy beaches, while the Soviet army completed successful campaigns into the Baltic States, Poland, and Romania. The German high command, however, had yet to give up hope. They believed that a negotiated peace with America and Britain could still be achieved with one decisive victory on the Western Front. While Allied forces rested for a new campaign in spring 1945, the German army quietly collected troops and supplies for a surprise attack.


Screenshot of gameplay (Image courtesy of the author)

The plan for the German offensive was twofold. First, the German high command hoped to capture the Belgian city of Antwerp – a major supply port that the Allies intended to use during the invasion of Germany. The Allied army possessed a significant advantage in men and matériel by the winter of 1944, but they still relied on resupply from ports in western France. This left their supply lines dangerously long and vulnerable to attack. Second, in the process of taking Antwerp, Germany hoped to drive a wedge between American and British forces on the Western Front. The path of the offensive would take German forces through the Ardennes Forest, which lay roughly at the point where American and British forces met on the other side of the lines. German generals believed they could encircle one or both of these forces during the offensive, a move that would encourage division between Allied leaders and lead to a peace settlement.

The Battle of the Bulge allows the player to relive the German counteroffensive from the perspective of either the Allies or the Axis, and through either a short or long scenario. The short scenario, “Race to the Meuse,” includes the first three days of the battle and can be completed in about half an hour. The long scenario, “Battle of the Bulge,” follows the first week or so of the campaign and can take up to two or three hours. In the game, the player takes command of division-sized units (e.g. the 101st Airborne, the 116th Panzer division, etc.) and determines their movement around the battlefield. The game is broken into days that last from 6 am to 6 pm. Within each day, players are given a random number of turns that are determined by behind-the-scenes dice rolls.
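For the curious, this turn mechanic is easy to picture in code. The sketch below, in Python, is purely illustrative – Shenandoah Studio has not published its implementation, and the dice formula, function names, and numbers here are my own assumptions – but it shows how a hidden, dice-driven turn count produces days of unpredictable length.

```python
import random

def turns_in_day(num_dice: int = 2, sides: int = 6) -> int:
    """Hypothetical: roll dice behind the scenes to decide how many
    turns the current game day (6 am to 6 pm) will contain."""
    return sum(random.randint(1, sides) for _ in range(num_dice))

def play_day(date: str) -> None:
    total = turns_in_day()  # hidden from both players
    for turn in range(1, total + 1):
        # Players alternate orders here without knowing `total`,
        # which is what makes the end of each day so tense.
        print(f"{date}: turn {turn}")

play_day("December 16, 1944")
```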


Screenshots of gameplay (Images courtesy of the author)

The game board may look imposing, but the gameplay is easily accessible to anyone who has played a game of Risk. Combat is determined by under-the-hood dice rolls, which allow players to barrel into the game without having to learn a complicated set of rules. The iPad is particularly suited to this style of game because it gives players a bit of the tactile feel of playing an actual board game without having to clear a coffee table or clean up after aggressive dice rollers.
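Those “under-the-hood dice rolls” can likewise be made concrete. The following Python fragment is my own sketch of a Risk-style resolution, not Shenandoah Studio’s actual rules; the function name and the convention that ties favor the defender are invented for the example.

```python
import random

def resolve_combat(attacker_strength: int, defender_strength: int) -> str:
    """Hypothetical Risk-style resolution: each side rolls a die and
    adds its unit strength; the higher total wins, with ties going
    to the defender, as in many hex-and-counter designs."""
    attack = random.randint(1, 6) + attacker_strength
    defense = random.randint(1, 6) + defender_strength
    return "attacker" if attack > defense else "defender"

# e.g. a panzer division (strength 4) hitting dug-in infantry (strength 3)
print(resolve_combat(4, 3))
```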


Generally, the goal of the Axis player is to drive their forces as quickly as possible to the Meuse River, and then protect their advances from attack by Allied reinforcements. The Allied player’s goal is to play a spoiling role by delaying, or, if possible, halting the Axis advance before they are able to achieve victory. While the game allows the players to make their own decisions regarding the movement of units, these decisions are couched in the historical realities of the actual campaign. These realities can either help or hinder the player’s cause. For instance, the game begins with the Axis surprise attack on December 16, 1944, but on the morning of the attack Axis armor was delayed by a traffic jam. This means that Axis players in Battle of the Bulge cannot use tank units in their first three turns. Additionally, as the campaign enters its later stages, Axis players must contend with gas shortages that limit the movement of their armor. Axis players are helped, however, by the presence of commandos behind enemy lines (Otto Skorzeny’s famous English-speaking German soldiers), which they can use to delay the movement of Allied units. On the Allied side, players are helped by extra reinforcements and air support, but like their historical counterparts they must wait several days for new forces to arrive and for the skies to clear.


Screenshot of gameplay (Image courtesy of Shenandoah Studios)

The presence of these accurate limitations means that players must pay attention to the game’s historical narrative, which is delivered through a “Daily Briefing” before each day. These briefings include information regarding reinforcements and supplies as well as a short history about the real-life events for that particular day. The short histories provide players with an opportunity not only to learn about the actual event, but also to compare their strategy with the strategies pursued by generals on both sides of the conflict. In my multiple playthroughs of the short and long campaigns, I learned that one of the keys to victory is keeping a close watch on the in-game calendar, which provides a shorthand description of the details listed in the “Daily Briefing.” This information is particularly useful for Allied players, as they can use the schedule of reinforcements and resupply to plan out the path of their retreat and determine the timing of their counterattack. Thus, the history in Battle of the Bulge is not merely window dressing. It can mean the difference between victory and defeat, and the players who ignore it do so at their own peril – or at least the peril of their digital army.


Screenshot of gameplay (Image courtesy of Shenandoah Studios)

One might say that this level of attachment to the historical narrative would predetermine the outcome of most matches, but in fact this potential problem is mitigated by two variables: the decision making of individual players and the game’s turn mechanic. The game offers two levels of artificial intelligence for single players to face off against, and these computer opponents can put up quite a fight. I never felt that I played against the same strategy twice. In addition, players can face off against friends in “face to face” matches (what old fogeys like me call “hot seat” matches) where they pass the iPad back and forth, or challenge each other online through Apple’s Game Center. The turn mechanic adds an extra layer of variability because players are not guaranteed a certain number of turns each day. This feature adds to the excitement, particularly near the end of each day, when players are not exactly sure how many turns they have left. This leads to a bit of brinksmanship, where opponents attempt to delay their final moves so that their enemy will not be able to respond.


Screenshot of gameplay (Image courtesy of Shenandoah Studios)

Only a few minor flaws mar an otherwise stellar production. The game’s historical narratives, particularly the background history available in the main menu, contain several typos, including mistakes in punctuation and in pronoun agreement. The teacher in me also wanted to see a short recommended-reading section or bibliography for players who wanted to learn more. On the point of replay value, the game does not yet include additional scenarios, or the means to easily modify the game’s preconditions and rules. The addition of this sort of feature may be too much to ask of a small studio like Shenandoah, but it should be considered for their upcoming game on El Alamein.

In conclusion, Battle of the Bulge is a game that matches a challenging tactical simulation with an excellent historical narrative. This sort of package would be considered a bargain on a console or computer for $30 or more. The fact that the game is available on a tablet, and for a scant $10, adds a great deal to Shenandoah Studio’s achievement. It is perhaps the only mobile game that I would forgive students for playing in class.

You may also like:

“You Have Died of Dysentery,” Robert Whitaker’s look at American history in video games

Filed Under: 1900s, Europe, Fiction, Reviews, Transnational, United States, War Tagged With: battle of the bulge, video game, video game history, World War II

Philippa Levine on Eugenics Around the World

By Philippa Levine

Early in the twentieth century governments all over the world thought they had found a rational, efficient, and scientific solution to the related problems of poverty, crime, and hereditary illness. Scientists hoped they might be able to help societies control the social problems that arose from these phenomena. From Mexico to Maine, from Switzerland and Scandinavia to South Carolina, from India to Indiana, the science-turned-social-policy known as eugenics became a baseline around which social services and welfare legislation were organized. In laboratories and college classrooms eugenic research and teachings were advanced and enthusiastically funded by institutions such as the Rockefeller Foundation. In many countries – Germany, the Soviet Union, Denmark, Belgium, and Britain, among others – scientific institutes devoted to eugenic research emerged.

Eugenics, in its intellectual as well as political forms, was founded on an optimistic belief that controlling heredity would improve society by eliminating diseases, anti-social tendencies, mental illness, and crippling afflictions. 

The term eugenics had been coined in 1883 by Francis Galton, a British writer who, like many of his generation, was fascinated by the promise of the new developments in science. Interested in statistical probability and heredity, he published in 1869 a book entitled Hereditary Genius. Here he set out to demonstrate that genius was hereditary by showing that certain families produced more eminent men than would be found in the general population. In the lab, he transfused blood between different breeds of rabbits looking for evidence that characteristics could be passed through blood. But where Galton’s experimental work on humans was limited to observation, in the next generation eugenics sometimes became more invasive.

As this new science was picked up both in the lab and in the corridors of politics, a whole slew of social policies emerged based on assumptions about hereditary traits. In Indiana, for example, in 1907, a new law provided for the involuntary sterilization of “confirmed criminals, idiots, imbeciles and rapists.” This was the beginning of a globally widespread sterilization program, which has become the most notorious and well known of the many eugenic practices that took off in the early twentieth century. But sterilization was by no means the only influence eugenics had on societies around the world. The teaching of good parenting, the promotion of family planning, and the organizing of prenatal care were eugenic ideas that many believed would mold better, healthier citizens in the next generation. In the United States, fitter family competitions promoted eugenic motherhood while in the Soviet Union women who had large families were awarded motherhood medals at special ceremonies honoring their loyalty to the nation.  In one way or another, the family was almost always central to eugenic thinking.

The problem with much of this was that someone had to decide what constituted fitness, mental capacity, and the like.  And here was where things frequently went awry. Nazi Germany is routinely regarded as the place where eugenics got completely out of hand. Hitler introduced a widespread sterilization program almost immediately after he came to power, and in the concentration camps doctors and scientific researchers made cruel use of helpless prisoners upon whom they experimented without regard for pain and suffering. Yet it is worth remembering that many of the practices that Germany took up so enthusiastically between the two world wars were not only consistent with research that had been going on there earlier in the century, but were widespread in many parts of the world. We tend to focus on Germany in part because of the enormity of the Nazi war crimes, and because the Nuremberg trials immediately after the war brought to light many of the atrocities committed in the name both of science and of racial purity.  But each of the four countries that prosecuted the Germans at the Nuremberg trials – Britain, France, the Soviet Union, and the United States – had adopted eugenic practices of one kind or another. In the United States, most states required blood tests before marriage to check for hereditary and sexually transmitted diseases, and involuntary sterilization was legal in twenty-seven states by the early 1930s. In 1911 Winston Churchill called for compulsory labor camps to house “mental defectives.” Those same people could, under Britain’s 1913 Mental Deficiency Act, be compulsorily institutionalized. In France, a National Office of Social Hygiene promoted more and healthier babies. And though the Central Committee of the Communist Party outlawed work on eugenics in the USSR in 1930, under Lenin a Bureau of Eugenics had flourished with state funding, and as in France the care of new-borns was of major importance. Soviet law prohibited marriages between mentally ill people as a way of preventing the inheritance of undesirable traits.

In all these cases, just as in Hitler’s Germany, definitions of mental incapacity, of inferiority, and of difference from the norm could have massive, sometimes fatal, and often lifelong consequences for those caught in the system. It was not just a family history of hereditary disease such as Huntington’s or cystic fibrosis that triggered the state’s interest in a family but just as often a history of criminal offenses, prostitution, drunkenness, deafness, and much more.  Indeed the landmark Supreme Court case of 1927, which definitively legalized compulsory sterilization in the US – Buck v. Bell – hinged on three generations of white working-class women who were regarded as both mentally and morally inadequate. Emma Buck was suspected of working as a prostitute while her daughter Carrie gave birth to a daughter out of wedlock. Vivian, conceived as a result of rape, died too young to be branded immoral, but a nurse who knew her as a newborn had found her “peculiar.” Her school reports suggest nothing of the sort, revealing an average child with no outstanding problems. Still, the famed jurist, Oliver Wendell Holmes, spoke for the court when he declared in the now famous decision: “Three generations of imbeciles are enough.”

Everywhere it was people from the wrong side of the tracks who were most likely to capture the attention of eugenic reformers. In the Scandinavian nations where sterilization was voluntary, the poor were often pressured and those in mental institutions frequently found that sterilization was a precondition for release. Under Nazism, many were vulnerable: Jews, people of mixed-race heritage, gays, Roma, and many more. Propaganda excoriated “useless eaters” for wasting precious German resources. The message was loud and clear: those who were a burden on the state would be better off dead. But while it’s easy to point to the extremes of Nazi Germany as somehow exceptional, consider the tactics of Harry Haiselden, Chief Surgeon and President of Chicago’s German-American Hospital. In 1915 Haiselden, who was a committed eugenicist, publicized his practice of denying treatment to impaired newborn babies. Haiselden was investigated three times and always acquitted of wrongdoing.  In one case the state, in the other a prominent physician – both authority figures – made decisions that had a critical impact on the lives of others.  Germany looks a lot less exceptional when we consider the history of eugenics around the world.

Eugenics is important not only because the topic is itself a fascinating window into the relationship between science and society and into a broad variety of complex welfare and social issues, but because it is so central to major ethical decisions.  It is literally about life and death, and the power over life and death.  It doesn’t get more important than that.

Further Reading

The best overview of the history of eugenics is the classic study by Daniel J. Kevles, In the Name of Eugenics: Genetics and the Uses of Human Heredity (1998).

Nancy Stepan, The Hour of Eugenics: Race, Gender, and Nation in Latin America (1991) offers a fascinating analysis of the eugenic practices found throughout Latin America. Latin American eugenics was often markedly different from the policies found not only in its neighbor to the north but in much of northern Europe as well.

Martin S. Pernick, The Black Stork: Eugenics and the Death of ‘Defective’ Babies in American Medicine and Motion Pictures since 1915 (2000) looks at the Harry Haiselden case. After he was expelled from medical circles, Haiselden made a film about his crusade, the ‘Black Stork’ of Pernick’s title.

Alexandra Minna Stern, Eugenic Nation: Faults and Frontiers of Better Breeding in Modern America (2005). Its focus is mostly on the west, and especially on California, which was one of the most active eugenic states in the nation.

Paul A. Lombardo’s Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell (2010).

Stefan Kühl, The Nazi Connection: Eugenics, American Racism, and German National Socialism (1994) lays out the ties between American and German eugenics in the inter-war years.

Robert N. Proctor’s Racial Hygiene: Medicine Under the Nazis (1988) details the work carried out in the camps and beyond.

Alison Bashford and Philippa Levine, eds., The Oxford Handbook of the History of Eugenics (2010).

You may also like:

Philippa Levine, “Bad Blood: Newly Discovered Documents on Syphilis Experiments”

Download video transcript

Filed Under: 1800s, 1900s, Features, Politics, Race/Ethnicity, Science/Medicine/Technology, Transnational Tagged With: Eugenics, forced sterilization
