
It’s in Their Blood

By Ted Banks

(This article is reposted from Fourth Part of the World.)

The Progressive-Era white press and its audience had a fascination with Indians, judging from the amount of ink devoted to musings on their place and progress in society. One component of that fascination, and the basis for much speculation on how successfully (or not) Indians were integrating into white America, was how much “Indianness” could be attributed to Indian blood.

Many observers have noted that white notions of blood and “mixing” varied depending on whose blood was being considered. While the “one-drop” rule dictated that a single drop of black blood could overwhelm generations of otherwise Anglo (or Indian) infusion, Indian blood offered no such absolute outcome. At times commenters noted the tenacity of Indian blood, as demonstrated by its ability to preserve Indian physical characteristics across generations. At other times, white observers painted Indian blood as unstable, susceptible to dilution through intermarriage and, seemingly at times, through social contact or cultural proximity.

In a 1907 article titled “Indians Increasing in America,” Frederic J. Haskins cites several examples of the persistence of “Indian” traits, which he ties to a rough accounting of blood quantum. He notes that the “strength of Indian racial traits is shown by the fact that the 700 persons now in Virginia who can prove their descent from Pocahontas and her English husband, John Rolfe, still have the Indian hair and high cheek bones.” Commenting on a handful of Indian politicians, Haskins introduces “Adam Monroe Byrd, a Representative from Mississippi, [who] is also of Indian blood.” Haskins reports that Byrd “traces his ancestry through a long line of distinguished Cherokee chieftains,” and that “He has the high cheek bones, copper skin and straight hair which indicate the blood of the original American.” Haskins’s article reveals the casual ambivalence with which settlers framed the racial makeup of Indians, and their desire to monitor the relative progress of Indians in America accordingly.

Four years before Haskins’s piece, an article on the upcoming Indian exhibition at the St. Louis World’s Fair played the other side of the ambivalence spectrum while employing much the same rhetoric regarding Indian racial traits.  Titled “Pageant of a Dying Race,” the feature dramatically promised the “last live chapter of the red man in American history is to be read by millions of pale faces at the Universal Exposition.”  Like Haskins, the author of “Pageant of a Dying Race,” T. R. MacMechen, describes the persistence of Indian racial traits, observing that “(the) blood of Pontiac, of Black Hawk, of Tecumseh and his wily brother, The Prophet, flows in the veins of the descendants who will be at the exposition,” and that “(no) student of American history will view the five physical types of the Ogalalla Sioux without memories of Red Cloud, nor regard the (word unclear) without recalling the crafty face of that Richelieu of Medicine Men, Sitting Bull.”  However, MacMechen argues that despite the seeming durability of Indian traits, “the savage is being fast fused by marriage and custom into a dominant race, so that this meeting of warriors becomes the greatest and probably the last opportunity for the world to behold the primitive Indian.”  In MacMechen’s account, marriage and custom function as ways to counterbalance, or perhaps mask, the otherwise durable Indian blood.

Festival Hall at the World’s Fair (via Wikipedia)

White supremacy dictated the ways in which whites interacted with racial “others,” but not in such a way that all of these interactions were uniform across groups.  That is to say that while intermarriage between blacks and whites was prohibited throughout much of the country on either a de facto or de jure basis, intermarriage between settlers and Indians was, at least at times, encouraged.  A 1906 Dallas Morning News piece reported that “Quanah Parker is advocating the intermarriage of whites with the Indians for a better citizenship among the Indians.”  The piece noted that “Quanah’s mother was a white woman and several of his daughters have married into white families.”  The item quoted Parker as saying “Mix the blood, put white man’s blood in Indians, then in a few years you will have a better class of Indians,” and noted that “(Parker) hopes to live to see the time that his tribe will be on the level with those of pure anglo-saxon blood.”  Another DMN article from two years later seems to reveal a gendered wrinkle to such unions, reporting that “(with) the coming of Yuletide Chief Quanah Parker of the Comanche Indians realized one of the greatest ambitions of his life when his young son, Quanah Jr., a Carlisle graduate, was married to Miss Laura Clark, a graduate of the Lawton High School last year,” and that “(this) is the first time in the history of Indians of this section where an Indian has been married to a girl of white blood.”

If persistent racial traits were attributed to Indian blood, but Indians were being “fast fused by marriage and custom” into white society, the result might be some Indians in unexpected places, or at least circumstances.  Haskins, in his piece, noted that at the 1904 St. Louis Exposition, “. . . the strong voice at the entrance of the Indian Building calling through a megaphone, . . . (the) barker who thus hailed the passing throng in the merry, jocular fashion of the professional showman was a full-blooded Indian boy, a product of the new dispensation of things, just as Geronimo was of the old.”  Of Charles Curtis, a US Senator from Kansas, Haskins observed that “(he) is not of pure Indian heritage, but his mother belonged to the Kaw tribe.  . . . He has the hair and color of an Indian, but in politics does not play an Indian game.”  A Dallas Morning News correspondent reported in 1906 that Quanah Parker had been elected a delegate to the Republican convention, but that he had declined, stating that he had no interest in politics.  The anonymous scribe went on to comment that

Quanah is a half-breed, his mother having been Cynthiana (sic) Parker. Having white blood in his veins, his conduct is absolutely incomprehensible. For who ever heard before of a white man, or any kind of a man with white blood in his veins, who did not want the honors or the salary of office? Still we must remember that Quanah is King of the Comanches, and that is a pretty good position itself.

The writer’s tone indicates he was speaking somewhat in jest, but the gist of his comment was that Quanah, although possessing “half” white blood by his estimation, was “playing the Indian game” by staying out of politics, and, in doing so, positioned himself a world away from “any kind” of white man.


This is all to say that if one wanted to track the uses of “blood” in white America’s Progressive-Era discourse on Indians, the results would be—excuse the pun—mixed, to say the least. Like white Americans’ feelings about Indians in general, their habitual deployment of blood as an explanatory concept exhibited a remarkable ambivalence; they seemed to believe both that “Indian blood” was of immense importance and that it could mean nearly anything they needed it to. This ambivalence stands out even more starkly when compared to the same period’s belief, noted above, in the absolute potency of African blood. A cynical reading might well deduce that white Americans said anything and everything about blood that would help to fortify white supremacy. A devil’s advocate might counter that the rise of the eugenics movement indicates that white Americans of the time indeed believed at least some of what they said. And still another reading would remind us that both could be, and probably were, the case.


Narcotic Culture: A History of Drugs in China, by Frank Dikötter, Lars Peter Laamann, and Zhou Xun (2004)

By Horus T’an

The opium myth is one of the most important pillars of the conventional narrative of modern Chinese history. According to the myth, opium is a highly addictive narcotic, gravely harmful to its users’ health, and Great Britain used its military superiority to impose the shameful opium trade on China, turning it into a nation of opium addicts who were “smoking themselves to death while their civilization descended into chaos.” In the opium myth, opium symbolizes the imperialists’ pernicious intention to dominate China and the tragedies suffered by all nations facing imperialist aggression. In Narcotic Culture: A History of Drugs in China, Frank Dikötter, Lars Laamann, and Zhou Xun debunk the opium myth through an exploration of the history of opium in China from the sixteenth century to the mid-twentieth century. They point out that the opium myth was invented by nationalist reformers and never reflected the reality of opium in Chinese society during the late imperial period. The authors also argue that the miseries experienced by Chinese opium smokers from the end of the nineteenth century were brought on by the anti-opium campaigns launched by the Chinese authorities rather than by the chemical properties of opium. These campaigns degraded opium smokers to a morally depraved status and forced them to use more harmful semi-synthetic opiates like morphine and heroin.

The opium myth treated opium-smoking practices in China and India in isolation from the cultural and social factors sustaining them. In contrast, this book shows that opium in China served as an essential lubricant in male social activities. Opium was prepared and appreciated in highly sophisticated ceremonies by male social elites. It also served as a panacea for many ailments. Quite contrary to the incurable addicts of the opium myth, the authors argue that opium consumption in both China and India was relatively moderate and had few harmful effects on either health or longevity. Most opium smokers were able to control the quantity of opium they consumed, and an irresistible compulsion toward ever-increasing doses was not common among them.

The highlight of this discussion of the history of opium before the end of the nineteenth century is the comparison between tobacco and opium. The authors demonstrate that tobacco and opium played relatively similar roles in social activities and that people showed similar attitudes toward them. In the 1830s and 1840s a few Han officials raised alarms over moral decay and the breakdown in social order caused by the prevalence of opium. The opium myth interpreted these critiques as the Chinese people’s unyielding resistance to the imperialists’ attempt to turn China into a nation of opium addicts. Nevertheless, the authors show that these alarms were rooted in Confucian asceticism rather than in any understanding of opium’s addictive chemistry, since some officials expressed similar concern about the popularity of tobacco. In addition, the authors emphasize that the Han officials’ critique of opium was related to their desire to restore the scholar-official class to the position of moral authority it had possessed during the Ming dynasty.

The authors suggest that the opium myth, which emerged at the end of the nineteenth century, was a confluence of two trends. The first was the spread of opium prohibition in Europe from the 1870s. Opium prohibition was “part of the medical profession’s search for moral authority, legal control and statutory power over pharmaceutical substances in their fight against a popular culture of self-medication.” The second was Chinese nationalists’ effort to defend their country from the encroachment of imperialism. The nationalists were eager to figure out why China was repeatedly defeated by imperial powers. The authors suggest that Chinese nationalists viewed opium smoking as the origin of national weakness rather than a personal behavior, and that they saw anti-opium campaigns as a useful tool to save China from a world dominated by imperial powers.

The authors’ second conclusion is that the anti-opium campaigns, rather than opium itself, brought miseries to opium smokers. The campaigns transformed the public image of opium smokers from gentlemen into thieves, swindlers, and beggars enslaved by powerful chemicals. They also transformed opium houses from culturally sanctioned venues for male sociability into sites of perdition, markers of uncivilized behavior and barbarism where vulgar and despicable addicts were leading the country to complete extinction. The prohibition laws passed in these campaigns gave authorities the right to arrest, punish, and kill opium smokers. Besides creating a criminal underclass, the campaigns also pushed smokers from moderate opium use to more addictive and more harmful semi-synthetic opiates like morphine and heroin. Even worse, these semi-synthetic opiates were consumed in a much more harmful pattern: heroin and morphine were usually mixed with other unknown compounds and snorted, chewed, or injected with dirty needles shared by many addicts without any protection.

There are some omissions in this book. The first is the process by which the opium myth gained its concrete shape. The authors do a great job of deconstructing the opium myth but do not dedicate enough attention to how it formed, which weakens the credibility of their argument. The second is the role of racism in the anti-opium campaigns. Opium smoking was mainly a habit practiced by Chinese and Indians, and racism against Chinese immigrants in the United States was responsible for recasting opium smoking, as a Chinese practice, into a barbarian one. Some Chinese intellectuals may have accepted anti-opium ideas without any awareness of the racism behind them. The absence of a discussion of racism makes this book less useful than it could be for understanding how Chinese intellectuals changed their thinking through their interactions with the Western world. Furthermore, the authors’ conclusion that the anti-opium campaigns facilitated the spread of semi-synthetic narcotics is also questionable. After the collapse of the Ch’ing Dynasty, some parts of China witnessed the flourishing of both opium and semi-synthetic narcotics, a prosperity that cannot be explained by the pressure of the anti-opium campaigns alone.

Despite these omissions, Narcotic Culture: A History of Drugs in China is essential scholarship for researchers of modern Chinese history. It reinterprets opium use in Chinese society from the sixteenth century to the mid-twentieth century and shatters one of the most important pillars of the conventional narrative of modern Chinese history. It reveals the complexity of that history and exposes the conventional narrative’s failure to address it. The book throws light on the miseries the anti-opium campaigns inflicted on opium smokers and reminds readers that some important stories have been crushed and abandoned in the writing of modern Chinese history. It also underscores the significance of culture in shaping public opinion about narcotics and encourages readers to reconsider the effectiveness of restrictive prohibition laws in dealing with the spread of narcotics.


Fandangos, Intemperance, and Debauchery

“Can any good come out of San Antonio?” This was the question at the heart of an 1846 letter penned by the Rev. John McCullough. He was writing to his Presbyterian superiors on the East Coast, who had assigned him the task of conducting missionary work on the new American frontier in Texas.

McCullough’s letter, housed on the UT Austin campus at the Briscoe Center for American History, is colorful, detailed, and dour, providing a rare first-hand account of a fledgling Texas community caught in the crossfire of the Mexican-American War.

McCullough’s letter, housed at the Briscoe Center for American History at the University of Texas at Austin.

McCullough describes San Antonio as a cosmopolitan merchant town of 4,000 people, the majority being Mexican, with Anglos, Germans, and French making up the remainder. He notes that the city was filled with “traders from the Rio Grande,” as well as medical tourists — “travelers” there for health reasons. In addition, the town was “thronged with strangers” — a testament to the presence of 2,000–3,000 newly arrived U.S. troops. The mix of troops, tourists, merchants and locals created a moral landscape that made McCullough recoil.

For the Reverend, San Antonio was a place full of “people exhibiting intemperance and uttering blasphemy.” Gambling was the “prevailing vice,” the sabbath was ignored and locals engaged in a “species of night frolics called fandangos.” It was also a place where priests kept cockerels “shod for fighting” in the church annex. Such men-of-the-cloth also had “a respectable posterity” of children “scattered throughout town.”

“Sketches in San Antonio--The Fandango,” from a sketch by our own correspondent. Fandangos were a source of revenue for San Antonio, raising $560 in 1847, 10 years after a licensing scheme had been passed (by a council consisting of Anglos and Mexicans).

McCullough obviously experienced a significant degree of culture shock on the frontier. Of the other surviving accounts of San Antonio during the period, most are morally neutral, even celebratory. For example, in 1828, José María Sánchez and the botanist Jean Louis Berlandier passed through, Sánchez noting without prejudice that the “care-free” people were “enthusiastic dancers” while Berlandier spoke of dancing as “the chief amusement among the lower classes.” In 1845, the traveler Frederic Benjamin Page described San Antonians as a people for whom “music and dancing, hunting and the chase, cards and love make up their whole existence.” In 1857, Frederick Law Olmsted cheerily recalled a “jumble of races, costumes, languages and buildings,” a “free and easy, loloppy sort of life,” populated by women whose dresses “seemed lazily reluctant to cover their plump persons.”

A nineteenth-century print featuring a large map of San Antonio.

Undoubtedly, McCullough’s spiky moralism was influenced by personal convictions and a desire to secure funding for his missionary endeavors. Nevertheless, life on the frontier was precarious and often tragic — factors which may have fueled his preachy intensity. According to R. F. Bunting, McCullough’s successor, the San Antonio of 1846 was a “miserable and dilapidated place,” wrecked by war and preyed upon by “desperados” and “undesirables.” Indeed, McCullough survived several attempts on his life by those who took umbrage at his use of the pulpit to rail against gambling and saloons. He had some success setting up a local school, but by 1849 his mental health was failing. The same year, his wife died in a cholera outbreak and he moved to Galveston to recuperate with family members. After recovering his faculties he founded a seminary for women there with his two sisters. However, in 1853 Galveston endured a severe outbreak of yellow fever. The school closed down, and McCullough lost both his sisters as well as a nephew and a niece to the outbreak. Dejected and defeated, he left for Ohio.

A portrait of Rev. McCullough

Despite his moral indignation, nervous disposition and chaotic life, McCullough ultimately waxed optimistic in his account of Texas. “Can any good come out of San Antonio?” His question paraphrased John’s gospel; his answer was equally scriptural: “with God all things are possible.” But his faith in Texas was material as well as spiritual: “no doubt . . . this will, in a few years be a large town.” For McCullough, the area around San Antonio had enough rivers (with enough girth and fall) to build “manufactories” that could “surpass Lowell,” the Massachusetts town that had grown rapidly into a manufacturing powerhouse in the first half of the 19th century. He also mused that central Texas might one day be the “best cotton growing region in the world,” a comment that underlined his ambivalence toward slavery as much as his penchant for speculation. (McCullough was from a staunchly abolitionist family and preached to black congregations throughout his life. However, one early 20th-century account of him adds — rather euphemistically — that he “accepted southern culture.”)

Perhaps it was his optimism about Texas that led to his return later in the decade. During the 1850s McCullough had married again (to a woman whose extended family owned several slaves) and apparently settled for a quiet life in Ohio as a salaried minister. But at some point in 1859, he decided to mess with Texas once more, moving to Burnet County in a wagon carrying his family and grand piano, with plans, according to the Southwestern Presbyterian, to “preach in that destitute region” and found another school. It turned out to be a disastrous decision. The Civil War disrupted his fundraising and left him bankrupt. He died suddenly of apoplexy in 1870, leaving a widow and nine children.

Obituaries remembered McCullough as a pioneer preacher and a kind man, despite the fact that his “attachment to principle [was] inflexible.” The adobe-walled huts in which he used to teach English to street children had long since vanished from San Antonio’s streets. Today, he is commemorated by a five-mile-long stretch of tarmac north of Interstate 35: “McCullough Street.”


Monumental Reinterpretation

On the west side of the Colorado State Capitol in Denver stands a soldier atop a stone monument. He is easily recognizable as a Civil War soldier, with his rifle ready, sword at his side, distinctive hat, and the gaze of a vigilant fighter saddened to be facing his brothers and countrymen. Ari Kelman dedicates portions of his book, A Misplaced Massacre: Struggling Over the Memory of Sand Creek, to a discussion of the history of this Civil War monument. Erected and dedicated in 1909, the monument belongs to a period of United States history that saw the rapid erection of monuments across the landscape. Americans had emerged from the smoke and haze of the Civil War into a brave new world of freed slaves, Indian wars, and reform movements. Memorialization allowed for the reinterpretation of the racially motivated fratricide and the cleansing of the West; memorializers could instead reforge the familial bonds of the Union in stone. Denver memorialized this glory with its Union Soldier statue and a plaque that proudly displays a list of all the battles and engagements of the Civil War in which Coloradans participated. Notably listed among the battles is Sand Creek.

On November 29, 1864, Colonel John Chivington, with 700 men, attacked the Cheyenne and Arapahos camped peacefully along Sand Creek. Within the encampment was Black Kettle, a well-known peace negotiator between white settlers and Indians. Black Kettle had recently returned to the Sand Creek camp after concluding peace negotiations at Fort Weld, where Chivington happened to be stationed. Upon realizing Chivington’s betrayal, Black Kettle immediately raised the American flag and a white flag of surrender above his tipi, desperate to prove that those in the camp were friends of the Americans and therefore peaceful. Chivington and his men took no heed of the raised flags and continued the attack, killing not only men but women and children who begged for mercy on their knees. Many ran to the sand beds along the creek, where they burrowed into the sand seeking cover from the sea of bullets. As the dust settled, 150 Cheyenne and Arapahos were dead. Chivington suffered the loss of ten men. His remaining 690 men proceeded to mutilate and desecrate the bodies of the dead, many keeping various body parts as grisly mementos.

Colonel John Chivington (via wikimedia)

As Kelman shows, immediately following the events at Sand Creek the public memory became cloudy and convoluted. For Chivington, the Union soldiers, and the American nation, Sand Creek was a glorious battle in the story of westward expansion and the expulsion of rebellious, violent Indians from the landscape. For the Cheyenne and Arapahos, Sand Creek was a brutal slaughter, a massacre. One of Chivington’s men saw it the same way: Silas Soule was uneasy as he marched out on the day of the attack, and when they arrived at Sand Creek he refused to order his men to fire, watching from the sidelines as the rain of bullets poured down on Black Kettle’s camp. Soule recorded the event in his letters, agonizing over his memories of that day.

A battle is often defined as an extended struggle between two organized armies. A massacre, on the other hand, is understood as the brutal and violent killing of multiple victims. Both terms carry heavy and violent meanings, but the pictures they evoke are not the same. This difference in how to view the history and memory of Sand Creek coalesced around the Civil War monument in Denver in the late 1990s and early 2000s. The inclusion of Sand Creek in the monument’s list of battles and engagements projects an authority over the definition of the event and downplays its injustice by suggesting an even playing field between two equally engaged opponents. It leads the general public to believe that the band of Cheyenne and Arapahos provoked Chivington’s attack.

The Sand Creek Massacre as imagined by Harper’s Weekly in 1868 (via wikimedia)

The debate over the monument was strikingly similar to the many debates we have seen in the past year over Confederate monuments across the American landscape. The central question is what to do with monuments that valorize highly politicized motivations but also provide a glimpse into the people, culture, and history of those who erected them. For historical preservationists, this question creates a crucial internal battle. Preservationists recognize the veil that hides the ugly truth of a monument’s history and purpose; yet their desire to preserve pulls against removal. Ultimately, preservationists cannot come to a consensus on what should be done, though many advocate at least reinterpreting the monuments.

Reinterpretation was the path Colorado ultimately chose. A small plaque was attached, not to the monument itself, but to the knee-high brick wall around it. The plaque provides a small nugget of insight into the controversy over the memory of Sand Creek, yet it leaves just enough ambiguity for a visitor to interpret Sand Creek as a battle.

Monuments have authority. They are literally etched in stone. They influence the way the public perceives and remembers history. After all, how do you argue with a giant bronze plaque attached to a monumental piece of stone, holding up a heroic citizen soldier who fought to preserve our Union?

Further Reading:

Ari Kelman, A Misplaced Massacre: Struggling Over the Memory of Sand Creek (Cambridge: Harvard University Press, 2013).

Thomas J. Brown, The Public Art of Civil War Commemoration: A Brief History with Documents (Boston: Bedford/St. Martin’s, 2004).

Stephanie Meeks, “Statement on Confederate Memorials: Confronting Difficult History.”


Did the British Empire Depend on Separating Parents and Children?

Empires ancient and modern are large, hierarchical organizations, structurally founded on deep inequalities of risk and reward. The British Empire in Asia was no exception. At the front lines of imperial power were, all too often, common men (and some women) who were tricked, cozened, misled, coerced, and whipped into serving as the cannon-fodder of Empire. The temptation to desert was often present and the thought of mutiny cannot have been absent. These plebeian men were ‘kept in line’ by men of status who served as commercial agents and military officers. But even among the latter, kickbacks and commissions were omnipresent and could grow into serious leakages of revenue or foment major acts of treason. Furthermore, the wholesale desertion of a dynasty by its elite subjects was not unknown: in Britain in both 1660 and 1688, the political establishment and key army units deserted their established government to side with an invader sponsored by a foreign power. We could multiply such examples.

Transoceanic empires built by corporations like the British and Dutch East India Companies faced even greater problems because they lacked the sacred aura that surrounded kings and helped maintain nominal loyalties. It took nearly half a year for an inquiry or command to reach a functionary in Asia and it took many more months before a report or an excuse would come back. The military, commercial, or political situation could change dramatically in the interim. Many readers will be aware, for example, that the British and Americans continued to fight for six weeks in 1815 after the peace treaty was signed between the two powers. One of these peace-time battles cemented Andrew Jackson’s reputation and propelled him to the presidency. Asia was much further away and across more dangerous waters.

Corporations growing into empires, such as the Dutch East India Company and the English East India Company, were keenly aware of what modern organization theorists, such as Oliver Williamson, have termed the “agency problem.” This is simply the difficulty of monitoring subordinates and ensuring that they act mainly in the interest of those (“the principals”) whose “agents” they have been hired to be. In 1613, a vexed East India Company merchant, Nicholas Withington, reported that the many Portuguese “renegades” were already being joined by a trickle of Englishmen, like one Robert Claxon. Claxon converted to Islam for money but, dissatisfied with something, returned and appeared penitent. He was then trusted with Company funds and absconded for good. As Withington recounted it, Claxon:

had also turned Mahometan in the Decan, with a good allowance at [the Sultan’s] court; but, not being contented, he came to Surat, where he was pitied by us for his seeming penitence; but being entrusted with upwards of forty pounds [sterling: a considerable sum at the time] under pretense of making purchases, he gave us the slip and returned to the Decan. Thus there are at present four English renegadoes in the Decan, besides many Portuguese.

This was an example of the problems of large organizations: how can you ensure compliance and loyalty when agents are far removed and have sanctuaries beyond your control?

Even in 1787, the reforming Governor-General Cornwallis, newly arrived in India from America, often inveighed against the East India Company’s English employees for their incapacity or corruption. The editor of his letters wrote plainly of how the Company had been cheated by its senior employees in, for example, the purchase of silk.

The East India Company was an established ruling power in large parts of India after 1757. But the indiscipline and venality of even its most senior civil and military officials, once they realized how quickly they could grow wealthy, brought the Company to the edge of ruin in less than twenty years and forced it to seek a “bail-out” from the Royal Treasury. The illustration below depicts an East India Company official in regal guise, lording it over the “natives.”

An East India Company Grandee (via Getty Images)

This resulted (after some years of partisan gridlock) in the dispatch of a new governor-general with sweeping powers. This was Lord Cornwallis, who came out to “reform” British India fresh from his bruising surrender at Yorktown, Virginia, in 1781. He realized that neither a common language nor a common religious affiliation could guarantee either honesty or loyalty. From the time of his tenure as chief executive in India (1786-1793), therefore, the British regime carefully managed the social reproduction of European officers and soldiers. This was done to prevent the formation of a dangerous Creole settler class. The Company had long sought to limit the numbers and control the conduct of private Europeans in India in order to maintain its commercial monopoly against “private trade.” Under Cornwallis, political prudence provided another rationale. The value of this strategy was made obvious when Governor-General John Shore faced a mutiny among the East India Company Army’s European officers in 1795-1796. Reflecting on the episode a year or so later, Shore’s successor in office, Richard Wellesley, wrote that:

If Europeans had been settled with their families in India; or if these men had, or could have had, their homes in that country, the Company would have lost it, [their Indian empire] and nothing could ever have regained it.

It was from awareness of this danger that, as Indrani Chatterjee was the first to show, the Company assiduously sought to limit the development of a local power elite with any genealogical depth. This was intended to preempt any consequent claim to the “rights of Englishmen” that had just been so forcefully raised in North America. The children of mixed European and Indian parentage were therefore turned into a socially inferior class of Eurasians, excluded from power. As early as 1786, the Company forbade the children of “native women” from traveling to England, after discovering that the Indian-born John Turing, “dark as his mother,” had done so and secured a cadet’s appointment in the Army. Two decades later, a “mulatto” candidate secured an appointment only by paying a young Englishman to impersonate him at the interview. Although disavowed progeny were increasingly excluded from the Army and the higher civil service, some Eurasians still found jobs in other branches of state employment into the 1830s. But official policies toward them derived from a well-established, generalized contempt for those of mixed descent. As early as 1786, Surgeon Richard Wilson, proposing the creation of a charity school to raise such children as loyal Protestants, remarked that it “hath long been a severe and unanswerable Reproach from the Natives of this Country that Britons, above all other Nations, have neglected and despised their progeny.”

If efforts at social integration had succeeded despite such attitudes, British India might have developed into a casta-ranked society like the Spanish Americas. But the need to win the support of the indigenous clerical classes, as well as the fear of promoting a Creole elite like the treacherous Americans, led the East India Company onto a different track. In the last few decades of its rule, before the revolt of 1857, Eurasian clerks were gradually displaced in state service by Indians from the traditional clerical classes, both Hindu and Muslim and, around Bombay, also Parsi and Goan Catholic. A greater regard by the British for their own “blood” returned after 1857, when Anglo-Indians were extensively recruited into the developing railway system in order to ensure imperial control of this strategic asset.  Eurasians, however, could not compete with the indigenous clerical classes in subordinate employment, that is to say, clerical work.

The government continued to follow the logic of Wellesley’s argument against allowing Europeans to set down familial roots within India, and sought to ensure that the affective ties and personal aspirations of key cadres, such as Covenanted Service and Army officers, were directed toward England. The disciplinary value of this policy for the East India Company’s government is shown by Sleeman’s dedication of his Rambles and Recollections (1844) to his sister. He observed how nine out of ten Englishmen in India found their greatest pleasure in letters from their sisters at home, which “filled the landscapes so dear to our recollections, with ever varying groups of the family circles, among whom our infancy and our boyhood have been passed; and among whom we still hope to spend the winter of our days.”

He added that the approbation of the circles represented in these letters was an important restraint on Englishmen in India, so that the sisters should be considered “a valuable species of unpaid magistracy to the Government of India.” The psychic isolation of young men well indoctrinated in this system and left among Indians without their families was described to Emily Eden in 1837 as a “horrible solitude” that produced depression. One such officer told her of “the horror of being three months without seeing an European, or hearing an English word …” Indirectly, therefore, we may see patterns of marriage and family formation being managed by the British imperial regime to bolster the loyalty of key elements of its governing apparatus. The political and military efficacy of that apparatus thus depended on constant policing of the boundaries of ethnicity.

Legitimate reproduction was now focused on Europe-born women. Lord Cornwallis had raised official salaries both to ensure fidelity and to allow mature civil servants to make “suitable” marriages and sustain the establishments needed for them. Licit sex and open conjugality were now limited to English-born women. By the 1850s it was said of the junior-most Indian Civil Service officer that he was worth “three hundred [pounds sterling] a year, dead or alive.” Numbers of young women sailed out to India as part of what was archly termed “the Fishing Fleet,” and many did marry officials there. But it was soon discovered that their infants died in alarming numbers, doubtless aided by Victorian medicine and its therapeutic use of opium, alcohol, mercury, and blood-letting for all ages and sexes. Furthermore, the still prevalent climatic theories of “racial qualities” suggested that children raised in hot climates deteriorated from the parental stock. From the mid-nineteenth century, therefore, young children were usually sent back to Britain while fathers worked in India and mothers sought to monopolize all legitimate conjugality. The result was that generations of children were torn away from their parents and, if boys, certainly introduced to that staple of Victorian education, the rattan cane. Two such children, initially too young for school (six and three), were Rudyard Kipling and his sister, who were left at Lorne Lodge, Southsea.

Rudyard Kipling heritage site at 43 Villiers Street, Charing Cross, London (via Wikimedia)

Kipling later wrote of himself and his sister that, when he was told his parents had left him “for ever,” he “went out and wept bitterly with Judy, into whose fair head he had driven some ideas of the meaning of separation.” He also invoked the desolation he had felt in a later poem:

A Well-a-day for we are souls bereaved!

Of all the creatures under Heaven’s wide scope

We are most hopeless, who had once most hope

And most beliefless, who had once believed.

There are doubtless children in the USA today with good reason to echo that.

Arresting Dress: Cross-Dressing, Law, and Fascination in Nineteenth-Century San Francisco by Clare Sears (2015)

by John Carranza

Clare Sears, associate professor of sociology at San Francisco State University, explores cross-dressing and its place in the formation of San Francisco as an urban center in the mid- to late-nineteenth century. Sears uses codebooks, arrest records, and court reports to reconstruct the history of cross-dressing in an urban setting. In her analysis, Sears determines that cross-dressing laws expanded to include the policing of race, sex, citizenship, and city space. The result is a multifaceted work that examines how law and fascination with other people’s bodies create marginalized individuals. Sears pioneers what she calls “trans-ing analysis” as a mode of inquiry that encompasses the historical understanding and production of a boundary between what is considered normative and nonnormative gender.

The first two chapters of Arresting Dress construct for the reader a San Francisco shaped by the instability at the end of the Mexican-American War in 1848 and the Gold Rush of 1849. These two events created a space in which a distinctly “American” urban landscape was never established, and they fostered a community composed predominantly of men. In this society where women were largely absent, cross-dressing became a form of entertainment and the subject of desire for other men. While some women did live in early San Francisco, only a small segment engaged in cross-dressing. These women usually either sought to subvert gender norms or wove the social evils of prostitution and cross-dressing together by dressing as men to signal their availability to customers.

One of the more interesting discussions in the book occurs in the third and fourth chapters, as Sears recounts how the San Francisco Board of Supervisors moved to outlaw what it considered indecency and vice. Of the several activities deemed vices in nineteenth-century San Francisco, the association of cross-dressing with prostitution was one reason cross-dressing was outlawed. Furthermore, the cross-dressing laws were applied to other marginal groups of San Francisco society, such as Chinese immigrants, who were viewed as threats to white Americans. Chinese immigrants were accused of perpetuating prostitution and taking jobs from European-American workers, and it was often presumed that they entered the ports by cross-dressing, which was cast as a threat to the United States and its citizens. In many ways, applying cross-dressing laws to the Chinese was a way for white Americans to regain control.

Omar Kingsley performing as Ella Zoyara. Kingsley performed in San Francisco throughout the 1860s and 1870s (via Wikimedia)

In the final two chapters of Arresting Dress, Sears outlines how enforcing anti-cross-dressing laws depended on looking at others’ bodies and deciding whether they could be clearly labeled man or woman. Prescriptive dress and mannerisms were how law enforcement determined whether a person fit the gender roles deemed “normal.” As a result, law enforcement resorted to pulling wigs off women believed to be men, jailhouse medical examinations, and more invasive “mysterious” examinations by jail matrons. Enforcing such laws was an invasive means of preserving order, and it directed the public’s attention to other people’s bodies, which were subject to examination and held to be inferior if they did not adhere to expected norms.

While laws were established to regulate public cross-dressing, Sears also elucidates instances where the public exhibition of cross-dressing was allowed. Using freak show documents, Sears shows the reader that, despite its illegality, cross-dressing was permitted as entertainment. Where cross-dressing in public would have been grounds for arrest, the freak show offered entertainment for audiences and a livelihood for those who cross-dressed before them. Sears also discusses slumming tours on which the wealthy could view attractions such as Chinese opium dens and other supposedly indecent spaces. As part of the tour, some women could dress in men’s attire to get the full experience. If caught cross-dressing, these tourists, frequently white and wealthy women, could often escape punishment.

Ella Wesner around 1873 (via Wikimedia)

Finally, Sears attempts to make far-reaching connections between cross-dressing and national identity by analyzing the intersection of cross-dressing and immigration laws in San Francisco. Sears rightly notes that federal laws such as the Chinese Exclusion Act of 1882 and local laws could not operate on the same level, but they were similar in their goal of making conformity to desired gender norms a requirement for belonging in society. In judging who could enter the United States, those who cross-dressed were viewed as lacking moral integrity and unworthy of inclusion in the national community. Positioned alongside a list of enumerated undesirables, such as prostitutes, the disabled, and vagrants, cross-dressers, with their perceived connection to vice and their subversion of gender norms, became a new category to discriminate against.

Sears makes extensive use of archival sources, though the viewpoint of the people who engaged in cross-dressing practices was unavailable. The sources she does use, however, paint a portrait of how the law can regulate behavior that was previously acceptable. Sears, a sociologist, is also careful not to impose the present on her reading of the past, and she respects the people who cross-dressed by not assigning sex or gender to them without their permission, so to speak. Arresting Dress is a necessary read for a time when transgender individuals are frequently the subjects of laws that dictate acceptable gender expression. The history of cross-dressing shows the reader how the law can be used to discriminate against those who are different, and how, in theory, it could be used to move away from such discrimination.


The American “Empire” Reconsidered

by A. G. Hopkins

Whether commentators assert that the United States is resurgent or in decline, it is evident that the dominant mood today is one of considerable uncertainty about the standing and role of the “indispensable nation” in the world. The triumphalism of the 1990s has long faded; geopolitical strategy, lacking coherence and purpose, is in a state of flux. Not Even Past, or perhaps Not Ever Past, because the continuously unfolding present prompts a re-examination of approaches to history that fail to respond to the needs of the moment, as inevitably they all do.

This is as good a moment as any to consider how we got “from there to here” by stepping back from the present and taking a long view of the evolution of U.S. international relations. The first reaction to this prospect might be to say that it has already been done – many times. Fortunately (or not), the evidence suggests otherwise. The subject has been studied in an episodic fashion, largely devoid of continuity between 1783 and 1914, and becomes systematic and substantial only after 1941.

There are several ways of approaching this task. The one I have chosen places the United States in an evolving Western imperial system from the time of colonial rule to the present. To set this purpose in motion, I have identified three phases of globalisation and given empires a starring role in the process. The argument holds that the transition from one phase to another generated the three crises that form the turning points the book identifies. Each crisis was driven by a dialectic, whereby successful expansion generated forces that overthrew or transformed one phase and created its successor.

The first phase, proto-globalisation, was one of mercantilist expansion propelled by Europe’s leading military-fiscal states. Colonising the New World stretched the resources of the colonial powers, produced a European-wide fiscal crisis at the close of the eighteenth century, and gave colonists in the British, French, and Spanish empires the ability, and eventually the desire, to claim independence. At this point, studies of colonial history give way to specialists on the new republic, who focus mainly on internal considerations of state-building and the ensuing struggle for liberty and democracy. Historians of empire look at the transition from colonial rule rather differently by focussing on the distinction between formal and effective independence. The U.S. became formally independent in 1783, but remained exposed to Britain’s informal political, economic and cultural influences. The competition between different visions of an independent polity that followed mirrored the debate between conservatives and reformers in Europe after 1789, and ended, as it did in much of Europe, in civil war.

“A Rival Who Has Come to Stay. John Bull – Good ‘evins! – wotever ‘ll become of my ship-building monopoly, if that there Yankee is going to turn out boats like that right along?” Puck magazine, July 24, 1895 (via Library of Congress)

The second phase, modern globalisation, which began around the mid-nineteenth century, was characterised by nation-building and industrialisation. Agrarian elites lost their authority; power shifted to urban centres; dynasties wavered or crumbled. The United States entered this phase after the Civil War, at the same time as new and renovated states in Europe. The renewed state developed industries, towns, and an urban labor force, and experienced the same stresses of unemployment, social instability, and militant protest in the 1880s and 1890s as Britain, France, Germany, and other developing industrial nation-states. At the close of the century, too, the U.S. joined the European states in contributing to imperialism, which can be seen as the compulsory globalisation of the world. The war with Spain in 1898 not only delivered a ready-made insular empire, but also marked the achievement of effective independence. By 1900, Britain’s influence had receded. The United States could now pull the lion’s tail; its manufactures swamped the British market; its culture had shed its long-standing deference. After 1898, too, Washington picked up the white man’s burden and entered on a period of colonial rule that is one of the most neglected features of the study of U.S. history.

Columbia’s Easter Bonnet: In the wake of gainful victory in the Spanish–American War, Columbia—the National personification of the U.S.—preens herself with an Easter bonnet in the form of a warship bearing the words “World Power” and the word “Expansion” on the smoke coming out of its stack on a 1901 edition of Puck (via Library of Congress)

The third phase, post-colonial globalisation, manifested itself after World War II in the process of decolonisation. The world economy departed from the classical colonial model; advocacy of human rights eroded the moral basis of colonial rule; international organisations provided a platform for colonial nationalism. The United States decolonised its insular empire between 1946 and 1959 at the same time as the European powers brought their own empires to a close. Thereafter, the U.S. struggled to manage a world that rejected techniques of dominance that had become either unworkable or inapplicable. The status of the United States was not that of an empire, unless the term is applied with excessive generality, but that of an aspiring hegemon. Yet, Captain America continues to defend ‘freedom’ as if the techniques of the imperial era remained appropriate to conditions pertaining in the twenty-first century.

The signing of the NATO Treaty, 1949 (via Wikimedia Commons)

This interpretation inverts the idea of “exceptionalism” by showing that the U.S. was fully part of the great international developments of the last three centuries. At the same time, it identifies examples of distinctiveness that have been neglected: the U.S. was the first major decolonising state to make independence effective; the only colonial power to acquire most of its territorial empire from another imperial state; the only one to face a significant problem of internal decolonisation after 1945. The discussion of colonial rule between 1898 and 1959 puts a discarded subject on the agenda of research; the claim that the U.S. was not an empire after that point departs from conventional wisdom.

The book is aimed at U.S. historians who are unfamiliar with the history of Western empires, at historians of European empires who abandon the study of the U.S. between 1783 and 1941, and at policy-makers who appeal to the ‘lessons of history’ to shape the strategy of the future.

A.G. Hopkins, American Empire: A Global History

Mapping the Country of Regions: The Chorographic Commission of Nineteenth-Century Colombia, by Nancy P. Appelbaum (2016)

By Madeleine Olson

What occurs when elite-driven narratives about national identity differ dramatically from the realities people experienced? During the nineteenth century throughout Latin America, when national boundaries were just beginning to become coherent, the upper echelons of society constructed tales about their nations that often vastly differed from lived experience.

Between 1850 and 1859, the Chorographic Commission traveled the territory of present-day Colombia in an attempt to map the land and the people who lived there, using chorography: detailed representations of particular regions. Sponsored by the government of New Granada (an older name for Colombia), the commission produced a wealth of maps, texts, and illustrations, as well as travel journals and diaries, in order to construct the image of a unified nation. Implicit in the commission’s initial mandate was the assumption that it would justify the existing administrative order by making that order appear natural.

The visual culture it produced, however, depicted a nation that was far from cohesive, marked by regional individuality and diversity. Instead of portraying a unified nation, the commission presented the country as fragmented into different, often opposing regions inhabited by racially and culturally distinct peoples, a portrayal that reinforced assumptions of Andean and white mestizo superiority. In this new book, Nancy Appelbaum expands our understanding of this central paradox, demonstrating that the commission’s materials reveal some of the ways that Colombian elites grappled with the challenges posed by varied topographies and diverse inhabitants.

The leaders of the Chorographic Commission included both foreign members and others born and bred in New Granada. Two of the key figures on the commission, whose writings play an important role throughout the book, were Agustín Codazzi and Manuel Ancízar. Born in Italy’s papal states and a veteran of the Napoleonic wars, Codazzi was in fact first contracted to map the Venezuelan provinces, which had seceded from Gran Colombia in 1830, a secession that precipitated Gran Colombia’s dissolution into Venezuela, Ecuador, and New Granada. Manuel Ancízar, a Colombian lawyer, writer, and journalist, joined Codazzi on the commission in 1850.

Gran Colombia and modern countries (via Wikimedia Commons).

Using personal correspondence between Codazzi and Ancízar, Appelbaum argues that although the creation of the commission reflected the nationalist aspirations of the government, it was fundamentally shaped by its leaders’ exposure to foreign culture. The geographic writings of the Prussian Alexander von Humboldt, as well as the Italian Adriano Balbi, strongly influenced Codazzi’s and Ancízar’s initial overviews, as they “draped themselves in the ‘mantle of Humboldtianism’ to emphasize their own scientific legitimacy.” Inspired by Humboldt, Codazzi divided the terrain of New Granada according to the differing altitudes, winds, and vegetation that he encountered. The ideological influence of Humboldt, together with Balbi’s schematic list methodology, helped the commission create a novel and distinct approach to chorography that was more affordable than the fashionable trigonometric survey.

The detailed accounts produced by Codazzi and Ancízar on the commission’s initial expeditions to the highland region of Antioquia and the Pacific lowlands comprised not only field reports but also detailed watercolors created by the commission’s first illustrators to depict the populations they encountered. Comparing the perceptions recorded in the field reports with these pictorial representations brings out the tension between the inclination to portray the regions as homogeneous and the great heterogeneity of people and customs that the commission clearly experienced.

William Price, Typical Inhabitants of the Province of Medellín (via World Digital Library).

This visual culture reflected a literary and artistic current in nineteenth-century Latin America called costumbrismo, the use of descriptive prose and dialogue to verbally paint a local scene, emphasizing the customs and particularities of that locale. Within these works, tipos (“types”) organized the population into component parts defined by race, occupation, and place, providing both a visual and a discursive way to manage the heterogeneity that the commission encountered. In William Price’s Tipos de Medellin, the commission’s artists displayed idealized images of the people one would encounter when visiting these places. The images of racial types that the commission produced updated the eighteenth-century genre of casta paintings for the republican era.

As the commission moved into the tropical lowlands and the eastern plains, Appelbaum further elaborates how the illustrations were largely aimed at an external audience in order to attract immigrants and economic investment to the region. The commission’s efforts were not meant simply to represent the landscape; they were also meant to transform it in the service of economic advancement. Codazzi believed that these areas, rich in agriculture and livestock, could support many more people than their sparse midcentury population. Elites defined the tropics, for instance, as ridden with disease, poverty, and backwardness, and thus as a desirable region for colonization and improved methods of production. The commission took on an ethnographic dimension, as studying the population became an integral part of determining the economic capabilities of the land. The reports, maps, and paintings that Codazzi, Ancízar, and others produced supplied abundant information aimed at facilitating the conquest of the regions they mapped and studied.

Manuel María Paz, Provincia del Chocó: Aspecto esterior de las casas de Nóvita (via World Digital Library).

Appelbaum goes on to discuss the methods that mid-nineteenth-century intellectuals living in the highlands around Bogotá employed to rationalize their claim over the local populations. By emphasizing the glorious origins and civilization of the Andean region around Bogotá, at the expense of lower “savage” climates, intellectuals reinforced Bogotá’s claim over this topographically disparate territory. Codazzi projected national history into the ancient past by weaving geology, archeology, and history together and referencing the past as “history” rather than prehistory. These intellectuals placed themselves, not the local indigenous populations who ultimately served as the Commission’s guides, at the top of the intellectual scale to read the cataclysmic past.

Carmelo Fernández, Piedra grabada de Gámesa. Provincia de Tundama (via World Digital Library).

After Codazzi died in 1859, elites left behind tried to make sense of his project and battled each other over meanings and representations of the nation. Although members of the commission had high hopes for the mass reproduction and circulation of the materials they produced, that did not occur.  Chorography and the work of the Chorographic Commission died with Codazzi, supplanted later in the nineteenth century by newer forms of mapping which are still common today, such as topography.

Although the work that the Chorographic Commission created between 1850 and 1859 was not as widely received as hoped, the spatialized and racialized regional hierarchy inherent in its visual materials would be reproduced and refined within Colombian scholarly and popular discourse. Though by no means the originator of this fragmented discourse, the commission produced a cartographic project that formed the basis for most maps of Colombia into the early twentieth century.

Gracefully written and integrating over thirty images and maps, Mapping the Country of Regions offers a fascinating window into both the visual culture produced during the nineteenth century in Colombia and the ways that territories, boundaries, and state lines are constructed. Appelbaum’s explicit contextualization of her source base strengthens her claims about the use of geographic, ethnographic, and visual methods to secure territory. This theme of the racialization of geographic hierarchy is not limited to Colombia, as ideas about race and region have historically informed each other throughout Latin America. The blending of analysis with visual representation makes this book useful not only for those interested in Latin American nation-state building; its methodology of combining visual and textual analysis will be of value to anyone incorporating visual culture into their own work.

Nancy P. Appelbaum, Mapping the Country of Regions: The Chorographic Commission of Nineteenth-Century Colombia (Chapel Hill: The University of North Carolina Press, 2016).


Also by Madeleine Olson on Not Even Past:
A Texas Historian’s Perspective on Mexican State Anticlericalism.

You may also like:
Antonio de Ulloa’s Relación Histórica del Viage a la America Meridional, by Haley Schroer.
Casta Paintings, by Susan Deans-Smith.


Why I Ban the Word “Feminism” from My Classes

by Jacqueline Jones

In each of my graduate seminars, at the beginning of the semester, I caution students not to use certain words I consider problematic; these words can actually hinder our understanding of a complex past.  Commonly used—or rather, overused—in everyday conversation as well as academic discourse, the banned words include “power,” “freedom,” and “race.”  I tell my students that these words are imprecise—they had different meanings depending upon the times and places in which they were used—and that today we tend to invoke them too casually and even thoughtlessly.

Oh yes, and there is another word I ask my students to avoid—“feminism.”  Students often greet this particular injunction with surprise and dismay. Does it mean that their instructor believes that women should stay at home and not venture into the paid labor force?  If so, why is she standing in front of a classroom now?  So I have to be sure to make a case about the pitfalls related to the use of the word.  Even the broadest possible definition is problematic, as we shall see.

Protesters at the 2017 Women’s March (via Wikimedia Commons).

The purpose of the massive march on Washington held on January 21, the day after President Trump’s inauguration, was to protest his election.  It was called the “Women’s March,” and as we all know, sister marches took place all over the country and the world the same day.  A group of women initiated the idea of the protest, and took care of all the logistics; many participants wore pink “pussy hats” to call attention to the President’s demeaning remarks about grabbing women’s genitals captured on the infamous Access Hollywood videotape.  The hand-held signs at the rally covered a whole range of issues, including abortion and reproductive rights, equal pay, sexual harassment, Black Lives Matter, protection for undocumented immigrants, public education, and women’s struggles for fair treatment and equality generally.   Presumably, Trump’s election had prompted an historic level of anger and frustration among women. Many news outlets, participants, and observers suggested that the march represented a remarkable display of re-energized, twenty-first century feminism, with the word itself suggesting a kind of transcendent womanhood bringing together women of various ages, races, classes, and ethnicity.

Protesters at a sister rally in 2017 (via Pixabay).

Well, not exactly.  Although only 6 percent of African American women voted for Trump, 53 percent of white women did.  We can safely assume, then, that many white women not only stayed away from the march, but also objected to it in principle: the pink-pussy-hat contingent did not speak for them.  So we might ask, which groups of women did not march?  Here is a possible, partial list: devout Catholic women who believe that birth control, abortion, and gay marriage are sins against God; former factory workers who were fired from their jobs when their plants were shipped overseas; the wives and daughters and mothers of unemployed coal miners; anti-immigrant activists; women of color who saw the march as dominated by white women; and pro-gun rights supporters. Missing too were probably women who found Mr. Trump’s video sex-talk disgusting but chose not to see this as the defining issue in the 2016 Presidential campaign–just as some liberal women might have disapproved of Bill Clinton’s affair with Monica Lewinsky but did not let that affair diminish their support for him when he was president.  In both these cases, the pro-Trump and pro-Clinton supporters expressed less solidarity with the men’s victims and more support for other elements of the men’s politics.  In other words, these women eschewed any putative “sisterhood” in favor of other political issues.

Suffragists parade down Fifth Avenue, 1917 (via Wikimedia Commons).

Another way of looking at this issue is to challenge the view that feminists had as their greatest priority a woman president.  How many self-identified feminists were eager to see Sarah Palin run for president in 2012?  Again, for many women, their overriding concern is not womanhood per se but a wide range of political beliefs and commitments. As we learned soon after U.S. women got the right to vote in 1920, different groups of women have different politics; in the 1920s, the suffragists were astonished to find that women tended to vote the way their husbands did, according to a matrix of ethnic and class factors.

Delegation of officers of the National American Woman Suffrage Association, 1917 (US National Archives via Flickr).

The example of the Women’s March suggests that, for all the talk today of “intersectionality” (the interconnectedness of certain social signifiers such as class, religion, “race,” and gender), “feminism” promotes a very specific political agenda, one that does not necessarily reflect the priorities and lived experience of a substantial portion of the female population. In essence, the word “feminism” is too vague to have much meaning within a society where women have multiple forms of identity, and gender might or might not be the defining one at any particular time.  Even the broadest possible definition—feminists are people who seek to advance the interests or the equal rights of women—has its limitations.

As an historian, I would suggest several reasons why students should avoid the use of the word “feminism”—unless they encounter the word in a primary text; then they should try to figure out what the user meant by it.

  • The word itself did not appear in common usage until the 1920s. Therefore it would be a mistake to apply it to people before that time, or to people since who themselves have not embraced the label; otherwise we risk imposing a term on historical actors who might or might not have used it to describe themselves.
  • Throughout history, various waves of the so-called “women’s” or “feminist” movement were actually riven by intense conflicts among women. Around the turn of the twentieth century, leading white suffragists went out of their way to denigrate their black counterparts and express contempt for immigrant and working-class men and women. The early organizers of the National Organization for Women feared that association with lesbians and militant black women would taint their drive for respectability.  Organizers of the 2017 Women’s March debated whether or not anti-abortion women could or should be included in the protest: could one be a feminist and at the same time oppose reproductive rights for women?

Two Lowell mill workers, ca. 1840 (via Wikimedia Commons).

  • Often in history when we find solidarity among women it is not because these groups of women sought to advocate better working conditions or the right to vote for all women; rather, their reference group consisted of women like themselves. In the 1840s, Lowell textile mill workers walked off the job and went on strike not as “feminists,” but as young white Protestant women from middling households—in other words, as women who had much in common with each other.  Religion, ethnicity, lineage, and “race” have all been significant sources of identity for women; when a particular group of women advocates for itself, it is not necessarily advocating for all other women.
  • Similarly, we are often tempted to label those strong women we find in history as “feminists,” on the assumption that they spoke and acted on behalf of all women. Yet they might have believed they had more in common with their male counterparts than with other groups of women.  Female labor-union organizers probably felt more affinity with their male co-workers than with wealthy women who had no experience with wage work.  In other words, the transcendent sisterhood that feminism presupposes is often a myth, a chimera.
  • The word not only lacks a precise definition, it also carries with it a great deal of baggage. Indeed, some people have a visceral, negative reaction to the sound of it. It is difficult to use a term with such varied and fluid meanings.  And feminism meant something different to women of the 1960s, when they could not open a credit-card account in their own name or aspire to certain “men’s jobs,” when they debated the social division of labor in the paid workplace and in the home, compared to young women today, who at times see feminism through the prism of music lyrics, movies, fashion, and celebrity culture:  Is the talented, fabulously wealthy Taylor Swift a feminist?
  • Finally, a personal note: In the 1960s, I was a college student and caught up in what was then called the “feminist movement” as shaped by Betty Friedan’s book The Feminine Mystique and the newly formed National Organization for Women.  My mother disapproved of my emerging priorities in life; she had gotten married right after World War II, and she believed (rightly, as it turned out) that the movement denigrated her choice to stay home full-time with her children.  I was puzzled and distressed that my mother could not appreciate my choices; but now I am also puzzled and distressed that the movement could not appreciate her choices.  Coming of age during the war, she feared that she would never marry and have a family, and when she finally had that opportunity, she was happy—for the most part—to embrace it, despite the considerable financial sacrifice for the household that her choice entailed.

Women’s March 2017 (Backbone Campaign via Flickr).

Perhaps, with very few exceptions—equal pay for equal work?—there are few issues on which all women everywhere can agree.  My own view is that we can pursue social justice in ways that advance the interests of large numbers of men as well as women, without having to defend the dubious proposition that “feminism” as constructed today speaks to and for all women.  It doesn’t.  For the historian, that fact means that we have to come up with other, more creative ways of discussing forms of women’s activism and personal self-advancement that took place in the past, and, in altered form, continue today.

Also by Jacqueline Jones on Not Even Past:

The Works of Stephen Hahn.
On the Myth of Race in America.
History in a “Post-Truth” Era.

Modern Islamic Thought in a Radical Age, by Muhammad Qasim Zaman (2012)

by David Rahimi

Starting with the encounter with European colonialism and modernity in the eighteenth century, Muslims increasingly began to worry that Islam was beset by existential crises as Muslim countries slowly fell under colonial domination. Some thought Islam had stagnated and made Muslims weak; others said true Islam already had the answers to modernity. Consequently, many prominent Muslim intellectuals from the Middle East and South Asia, like Rashid Rida, Shah Wali Allah, Muhammad Iqbal, ‘Ubayd Allah Sindhi, and Jamal al-Din al-Afghani, insisted over the course of the next two centuries that Islam must in some way rediscover, renew, or reform itself to address the challenges of a changing world. This, of course, raised a host of questions. What needed to be reformed? How should reform be enacted? Who or what had the authority to decide such matters? Were these crises even real?


Muhammad Iqbal was knighted in 1922 by King George V (via Wikimedia Commons).

Muhammad Qasim Zaman takes these concerns as his starting point to examine Muslim contestations of religious authority and “evolving conceptions of [Sunni] Islam” from the nineteenth century to the present day. At its core, this is a story of inconclusive debates, ambiguity, and cyclical tension as old wounds reopen and close, as lay and traditional religious scholars (the ‘ulama) contest how Islam should be understood and lived. By tracing the contextualized debates of the modern ‘ulama in a comparative, transnational framework, Zaman shows the multifaceted dimensions of internal debate and how this fosters ongoing fragmentation of religious authority in Islam, despite efforts to the contrary. Dispensing with an overall chronology or single narrative, Zaman divides his book into the following key thematic issues: religious consensus, ijtihad (i.e., independent analogical reasoning), the common good, religious education, the place of women in law and society, socioeconomic justice, and violence. The problems surrounding these issues have continuously resurfaced within Muslim intellectual and religious circles since the nineteenth century. What links these hot-button, yet seemingly disparate, topics together are the fundamental issues of religious authority, that “aspiration, effort, and ability to shape people’s belief and practice on recognizably ‘religious’ grounds,” and internal criticism among Muslims. Each chapter topic, then, serves as a vehicle through which to explore the interplay between authority and criticism, and what the consequences and implications are for Islamic thinkers and Muslims more broadly.


Countries with Muslim-majority populations (via Wikimedia Commons).

The real-world consequences of this battle over religious authority through internal criticism come across strongly in the chapter on violence. For example, the moderate Yusuf al-Qaradawi, the head of the European Council for Fatwa and Research (ECFR), wrote in his 2008 magnum opus Jurisprudence of Jihad that jihad was only permissible in cases of defense. Zaman shows, however, that this opinion does not align with the majority of classical Islamic jurisprudence, to which Qaradawi claims to faithfully adhere. Furthermore, another prominent moderate, Taqi ‘Uthmani of the Pakistani branch of the Deobandi school, had previously rejected purely defensive jihad in the 1970s and 1980s. In 2009, ‘Uthmani reaffirmed this theory of offensive jihad, adding, however, that it applied only to “formal” Islamic states and not to individuals. This disagreement about jihad conveys Zaman’s central point, one that is replicated across the other chapters as well. The ‘ulama are active in articulating their views, but who or what holds ultimate authority to resolve these religious problems remains unclear, since even the theory of authoritative scholarly consensus is hotly contested. Ultimately, Zaman argues that greater attention must be given to religious authority as a relational concept, formed by the specificities of the context in which this authority is performed. Abstract authority not only comes into tension with authority as it is practiced in real life, but historical circumstances and individual beliefs shape how Muslims respond to or recognize religious authority. The ECFR, founded in 1997, exemplifies this tension, since on the one hand, it seeks to create a new authoritative consensus around a particular set of ‘ulama, yet on the other hand, it claims not to compete with the authority of the many non-affiliated ‘ulama. The unsettled nature of these debates, Zaman insists, results in an “authority deficit” and persistent fragmentation within Muslim intellectual and religious circles.


Yusuf al-Qaradawi (center) in 2013 (via Wikimedia Commons).

Modern Islamic Thought in a Radical Age is a work of tremendous insight and compelling vignettes. The weakest portions are its introduction and conclusion, which both tend to be verbose and slightly disorganized. It is also unclear at moments whether the author intends to offer a strong overarching argument or merely to “open a new window onto the Muslim religious and public sphere”—one that foregrounds debates among the ‘ulama. Thankfully, these are minor problems. While the book is not meant for readers looking for an introduction to Islam, those hoping for a meticulously researched study of the internal religious dynamics of Sunni Islamic thought will find their expectations well met.

Muhammad Qasim Zaman, Modern Islamic Thought in a Radical Age: Religious Authority and Internal Criticism (New York: Cambridge University Press, 2012).

You may also like:
Listen to 15 Minute History Episode 58: Islam’s First Civil War 
See our suggestions for Great Books on Islam in American Politics & History
Lior Sternfeld recommends Making Islam Democratic: Social Movements and the Post-Islamist Turn by Asef Bayat (2007)
