Harry Jaffa's Crisis at 65: A Symposium
Bridging the gap between historical, empirical, and theoretical research, American Political Thought (APT) is the only journal dedicated exclusively to the study of the American political tradition. Interdisciplinary in scope, APT features research by political scientists, historians, literary scholars, economists, and philosophers who study the foundation and political tradition of concepts such as democracy, constitutionalism, equality, liberty, citizenship, political identity, and the role of the state.
Next year marks the sixty-fifth anniversary of Harry V. Jaffa's Crisis of the House Divided: An Interpretation of the Issues in the Lincoln-Douglas Debates, published by the University of Chicago Press in 1959. Jaffa completed the book in 1958, "on the eve of the centennial" of the debates between Abraham Lincoln and Stephen Douglas in that year's Illinois Senate contest.
In recognition of the sixty-fifth anniversary of the publication of Crisis, George Thomas and the Henry Salvatori Center for the Study of Individual Freedom in the Modern World at Claremont McKenna College commissioned essays to assess the legacy of Jaffa and his famous book. The essays that follow were submitted to an external reader for the journal, an expert on the political thought of Lincoln.
Editors Jeremy D. Bailey and Susan McWilliams Barndt would like to take this occasion to offer their thanks to the Salvatori Center for not only its support of this symposium but also its support for three manuscript workshops that took place in 2019, 2021, and 2022. These workshops have led to peer-reviewed research articles in this journal, but, more broadly, they have seeded the field of American political thought, and for that we are grateful.
Competency on the Court
The average age of a Supreme Court Justice serving on the Court today is 67. Justices commonly remain on the Court well into their eighties because the Constitution allows them to hold their positions during good behavior, effectively for life. Even this stipulation is negligible: no Supreme Court Justice has ever been removed through impeachment, a process that requires clearing a near-impossible bar of approval by a majority of the House and conviction by a two-thirds supermajority of the Senate. Moreover, as people enter the later decades of life, their competency becomes a valid question. Allowing Justices well into their eighties to make decisions that carry major implications for the nation is both irresponsible and senseless given their inevitably declining mental faculties.
Justices' ages clearly affect their conduct: Ruth Bader Ginsburg, who is currently 86 years old, fell asleep during two State of the Union addresses. Such a lack of professionalism from so powerful an individual is troubling. If a Justice cannot dedicate full attention to an hour-long speech, her ability to remain attentive during oral arguments of roughly the same length is called into question. Justices have made decisions that determine who can vote, who can enter the country, who can spend money on presidential elections, and who receives health care, among other things. Each of these decisions can affect the everyday lives and well-being of millions of people at home and abroad. Supreme Court Justices write history, and their influence is not to be taken lightly. Why wouldn't we want the most mentally capable Justices presiding over these cases? That one mentally incapable Justice could sway the laws, norms, and entire future of our nation is unacceptable.
Further, life appointments lead to a problematic lack of turnover on the Court. Justice William O. Douglas, for instance, refused to retire after a stroke severely impaired his faculties. Upon returning from medical leave, he served one final year as a Justice, during which he frequently addressed people by the wrong names, arrived at illogical and incoherent conclusions, and fell asleep during oral arguments. According to Jeff Jacoby's Boston Globe article, after retiring, Douglas would often show up at the Court and furiously claim to be a sitting Justice. This is a perfect example of why a life appointment is unreasonable. Justices approaching dementia should be not only encouraged but expected to retire. Yet Justices are not infallible; like most people, they cannot objectively evaluate their own mental state and recognize the necessity of retirement.
Some argue that older Supreme Court Justices possess valuable knowledge and experience that younger Justices lack, and that mandatory retirement ages would inevitably lower the experience level of the Court, perhaps to the detriment of the quality of its decisions. However, while younger Justices may lack experience, they are better versed in the current state of affairs than many of their older colleagues. They possess, for instance, far more knowledge of the ever-changing world of technology than judges born in the 1930s. As modern legal questions develop around issues like data privacy, Supreme Court Justices need to understand the technology they are dealing with. Furthermore, younger Justices were educated more recently, with the most developed legal knowledge and the most current science and technology. Justices born well before World War II should not be making decisions that will weigh most heavily on the lives of those born eight decades later. If we care about the impact of the Supreme Court, we should ensure that the Justices with the best knowledge of the current climate, norms, and culture are making these significant rulings.
Although the Supreme Court lacks preemptive measures to deal with the declining mental state of its members, lower courts have enacted solutions. The Economist reported that the Ninth Circuit Court of Appeals has been particularly proactive in establishing prudent practices. It holds regular seminars to teach its judges about the indications of cognitive impairment and encourages them to resign if they identify these signs.
It also encourages judges to undergo regular cognitive assessments and to specify which friends or family members they would most trust to intervene if concerns about their competency arise. The circuit even provides a hotline where judges and court staff can discuss signs of cognitive decline they have identified in their colleagues. The Supreme Court should follow suit.
Many have already called on Chief Justice Roberts to implement mandatory psychological screenings for federal judges. However, the Constitution is vague on the responsibilities and powers of the Chief Justice, which may explain his lack of action on the matter. Others have proposed recommended judicial retirement ages, while more radical reformers want a constitutional amendment imposing eighteen-year term limits for Justices.
The problem ultimately lies in the political nature of the Supreme Court. Even when a Justice recognizes their incompetence, they will refuse to step down until they are sure they will be able to endorse their successor. This is likely the reason Ruth Bader Ginsburg refuses to leave the Supreme Court at the age of 86 after a number of spells of ill health; she cannot allow President Trump to appoint another conservative Justice. In many ways, it is the same reason any Justice stays on longer than they ought to: a form of political protest against the newcomer. And as long as we continue to appoint Justices for life, this behavior is inevitable. In the meantime, Ginsburg and all other Justices will continue to make decisions that affect over 330 million lives. Given the gravity of their positions and their demonstrated reluctance to retire, we must limit the terms of Supreme Court Justices. While Justices deserve to be immune from political pressures, the negative ramifications of life appointments greatly outweigh their benefits. If we truly value the sanctity of the American government, we must ensure each branch is operating at its fullest potential. We need to prevent incompetence within the Supreme Court. We need to end life appointments.
The Importance of Cake
"I can't believe there's all this fuss over a cake," a peer of mine lamented after I had spent the past hour rattling on about the recent Supreme Court case Masterpiece Cakeshop v. Colorado Civil Rights Commission. The Masterpiece case, decided in 2018, originated when the owner of a cakeshop, Mr. Jack Phillips, refused to make a custom cake for a same-sex couple's wedding because of his religious objection to same-sex marriage. In Colorado, discrimination based on sexual orientation is prohibited by the Colorado Anti-Discrimination Act (CADA). When the couple, Mr. Craig and Mr. Mullins, were refused service, they turned to the Colorado Civil Rights Commission to address this discrimination under the Act. Both the Colorado Civil Rights Commission and the Colorado Court of Appeals ruled in favor of the couple, but the United States Supreme Court reversed the decision. On hearing the facts behind Masterpiece, many people, including my peer, contend that the couple could easily have frequented another bakery to get their wedding cake. They claim that the freedom consumers enjoy in the marketplace should allow them to circumvent disputes like the one presented in Masterpiece. Those who make such an argument fundamentally misunderstand the case; it's about equality, not cake.
Unwittingly, the critics of Craig and Mullins who insist that they should have gone to another bakery reiterate the widely abhorred arguments put forth by the majority in Plessy v. Ferguson. At stake in Plessy was a Louisiana law that required railway companies to provide separate but equal accommodations for passengers of different races. Plessy was argued just 28 years after the ratification of the Fourteenth Amendment, which guarantees the “equal protection of the laws.”
In his majority opinion upholding the constitutionality of the law, Justice Brown argued that laws requiring separation based on race were not only constitutional, but in no way implied racial inferiority:
We consider the underlying fallacy of the plaintiff’s argument to consist in the assumption that the enforced separation of the two races stamps the colored race with a badge of inferiority. If this be so, it is not by reason of anything found in the act, but solely because the colored race chooses to put that construction upon it (Plessy v. Ferguson, 1161).
Justice Brown refused to acknowledge the very real impact of discrimination, insinuating that its harms are merely "constructions" devised by Black individuals. But those harms are not imagined; they are the damaging effects of discrimination and the failed promise of equal protection.
Justice Harlan captured this sentiment in his dissenting opinion in Plessy. In contrast to Justice Brown, he acknowledged that discrimination and the separation of races necessarily imply inferiority. Justice Harlan relied on the Fourteenth Amendment to argue that the Louisiana legislation at issue in the case is unconstitutional. He asserted that the Fourteenth Amendment includes a positive right guaranteeing that all persons be exempted “from legal discriminations, implying inferiority in civil society, lessening the security of their enjoyment of the rights which others enjoy, and discriminations which are steps towards reducing them to the condition of a subject race” (Plessy v. Ferguson, 1162). Throughout his opinion, Justice Harlan emphasized what Justice Brown failed to understand: separate is not equal. Discriminating on the basis of race condemns Black citizens to an inferior status in a manner that is inconsistent with the Fourteenth Amendment’s promises of “equal protection of the laws.”
Though Justice Harlan wrote his dissent in 1896, many of the sentiments captured in his opinion reappear in Brown v. Board of Education, argued over fifty years later. At issue in Brown was the segregation of public schools. Cases in Kansas, South Carolina, Virginia, Delaware, and Washington, D.C. had arisen over state and local laws that permitted or required public schools to be segregated on the basis of race. The plaintiffs in these cases contended that segregation was a clear violation of the Equal Protection Clause of the Fourteenth Amendment, as states were undermining the "equal protection of the laws" by actively enforcing racial separation in schools. The Court unanimously ruled that the state-sanctioned segregation at issue was indeed a violation of the Fourteenth Amendment. Writing for the Court, Chief Justice Warren struck down the "separate but equal" doctrine that had been crafted in Plessy, declaring that separating students "solely because of their race generates a feeling of inferiority as to their status in the community that may affect their hearts and minds in a way unlikely ever to be undone" (Brown v. Board of Education, 1169). Chief Justice Warren put particular emphasis on "tangible" and "intangible" factors. While he conceded that some of the schools were indeed equalized in terms of tangible factors such as buildings, curricula, and teachers, the fact that they were segregated made them inherently unequal in terms of intangible factors, including social standing and equality in civil society.
Justice Harlan's dissent in Plessy and Chief Justice Warren's opinion in Brown teach us valuable lessons about the intangible harms of discrimination. Those who argue that the couple in Masterpiece should have looked elsewhere for cake contend that the tangible factors would be the same at another bakery; the cake would likely be just as beautiful and delectable. Yet such arguments miss the intangible harms of discrimination. They overlook the inherent feelings of inferiority and pain that come from being told that some element of one's identity makes one unworthy of service and equal treatment in public accommodations. They also miss the significance of public accommodations themselves. Public accommodations are fundamentally characterized as places open to all comers. Closing the door on certain customers, on the basis of their identities and for religious reasons, undermines fundamental equality in the commercial arena.
Cases such as Bell v. Maryland address discrimination in public accommodations with regard to race, providing pertinent precedent for Masterpiece. In Bell, a group of Black students was arrested after conducting a sit-in at a restaurant that had refused them service. Justice Douglas's concurring opinion powerfully captures the significance of discrimination in public accommodations. He wrote, "why should we refuse to let state courts enforce apartheid in residential areas of our cities, but let state courts enforce apartheid in restaurants?" (Bell v. Maryland, Douglas, concurring). To Justice Douglas, even though the discrimination had been carried out by a private entity, a restaurant, it was still repugnant to the established protections of equal civil rights. Equality is something that must be seen and protected in civil society.
Justice Douglas identified the cascading effects of permitting discrimination in public accommodations. He wrote, "here it is a restaurant refusing service to a Negro. But so far as principle and law are concerned, it might just as well be a hospital refusing admission to a sick or injured Negro" (Bell v. Maryland, Douglas, concurring). While the Masterpiece case may be "just a cake," it opens a gateway through which the denial of many more services may become permissible. Would we feel comfortable if a supermarket denied a same-sex couple access to food on the basis of their sexual orientation? Discrimination in public accommodations, which is so clearly implicated in the Masterpiece case, has far-reaching consequences that should concern us all.
The prolific scholarly and legal debate surrounding Masterpiece is evidence that the case raises issues that are both nuanced and ripe for disagreement. Certainly, there is room for legitimate discourse about what to do when First Amendment protections of religious exercise and free speech come into conflict with anti-discrimination legislation and the Fourteenth Amendment. Those who support the baker, Mr. Phillips, claim that forcing him to make a custom cake would constitute compelled speech. Critics of Mr. Phillips, on the other hand, note that the specifics of what would appear on the cake were never discussed before the denial of service, and claim that baking a cake does not imply moral approval; the couple was refused the purchase of a cake before any design was even proposed. If we fail to recognize the impact of discrimination in public accommodations and allow for the denial of service, we threaten to undermine the very basis of equal citizenship, which requires toleration in America's commercial republic.
Race & Felony Disenfranchisement
In contrast to most other democracies, many U.S. states apply a policy of felony disenfranchisement to individuals convicted of a crime. An estimated 6.1 million Americans are barred from casting a vote due to felony disenfranchisement. These policies disproportionately affect African Americans and contribute to the systemic oppression of black people in the United States: one in every thirteen voting-age African Americans cannot vote due to a felony conviction, a rate four times greater than that of all other Americans. As the U.S. approaches conversations about race, discrimination, and privilege with new vigor in 2020, the enfranchisement of African Americans is of critical importance in the enduring fight for racial, social, and economic equality.
The use of disenfranchisement as a punitive measure traces back to a feature of English common law ominously dubbed "civil death." Historically, however, depriving a person of the right to vote applied only in individual cases of serious or election-related crimes, not as an all-encompassing nationwide policy. The United States' use of civil death as a broad form of social control is without precedent, and it endures today.
Felony disenfranchisement did not act as a barrier to voting until the end of the Civil War, when the expansion of suffrage to black men transformed the electorate's racial demography. The 13th Amendment formally abolished slavery in 1865. It included an exception, however, that allowed states to impose involuntary servitude as punishment for a crime. That exception allowed Southern states to lease prisoners to mines, plantations, and private railways, with the profits pocketed exclusively by states and private companies. Prisoners earned no pay for their work while facing deadly conditions and inhumane treatment. This system, known as "convict leasing," created a financial incentive for Southern governments to increase the prison population and produce cheap labor for the South's floundering economy. Lawmakers, especially in the South, implemented a number of laws that strategically targeted black citizens. These new criminal laws, known as the Black Codes, applied only to black people and were "essentially intended to criminalize black life."
South Carolina adopted some of the harshest Black Codes. In addition to establishing a racially separate court system, it sanctioned drastically different punishments for white people and black people accused of the same crime. The code also prohibited black people from "possessing most firearms, making or selling liquor, and coming into the state without first posting a bond for 'good behavior.' Also, blacks could not practice any occupation, except farmer or servant under contract, without getting an annual license from a judge." The Black Codes once again linked the oppression of black people to the prosperity of white southerners.
The implementation of Black Codes was coupled with sweeping disenfranchisement laws. Within a 15-year period after the Civil War, over a third of U.S. states enacted felony disenfranchisement laws. To ensure that disenfranchisement mainly affected African Americans, officials in the South imposed the laws as a punishment specifically for crimes they perceived as being committed more often by black people. The author of Alabama’s disenfranchisement provision “‘estimated the crime of wife-beating alone would disqualify sixty percent of Negroes,’ resulting in a policy that would disenfranchise a man for beating his wife, but not for killing her.” In time, states imposed disenfranchisement as a punishment for all felonies, rather than only select crimes. African Americans were stripped of their rights and therefore deprived of their ability to use the democratic process to generate change.
Selective enforcement by a white-controlled criminal justice system continued to ensure that "race-neutral laws" applied almost exclusively to black people. The Black Codes revived and legalized the exploitation of black people by binding them directly to the Southern economy. At least 90 percent of prisoners forced into convict leasing to provide cheap labor for Southern businesses were black. These factors quickly produced a massive disparity in incarceration rates. In Alabama, nonwhite prisoners comprised 2 percent of the prison population in 1850; by 1870, the proportion had risen to 74 percent.
Researchers Christopher Uggen, Jeff Manza, and Angela Behrens determined that “the higher the proportion of nonwhite inmates in a given state’s prison population, the more likely that state was to adopt restrictive felon disenfranchisement measures.” These new laws enabled de facto slavery and allowed the government to continue enforcing racial hierarchies. With no rights and limited job opportunities in a corrupt system, black people were essentially forced to remain in a constant, oppressive cycle and stripped of any prospects for upward social and economic mobility.
The origins of felony disenfranchisement provisions lie in post-Civil War efforts to deny black people equal citizenship. These efforts are the basis for the mass incarceration and disenfranchisement that African Americans experience in the U.S. today. The war on drugs and the tough-on-crime approach adopted in the 1970s aggravated the existing racial disparities and extended them into the 20th and 21st centuries. By taking away felons' political voice even after they rejoin civil society, felony disenfranchisement laws perpetuate an endless cycle of racism and inequality in the United States. The United States will never be truly equal until each citizen's vote is counted.
Publius Part V: Conclusion
During the Constitutional ratification debates, Federalists and Anti-Federalists alike appealed to the ancient Roman Republic, a government that succeeded in ruling without a monarch. The opposing sides, however, pointed to the Roman Republic for different reasons: one highlighted the glory and freedom that can come with a republic, while the other warned of the dangers of tyranny.
The Federalists viewed the Roman Republic through a positive lens. Hamilton, Madison, and Jay invoked its successes with the pseudonym Publius, a name that symbolized the hope of the newly founded Roman Republic. Reflecting on the Republic's positive traits, Harriet Flower writes, "The importance of Rome's republican model for… American revolutionaries lay in the courage it gave them to contemplate government without a king by providing politicians with a rival set of political institutions opposed to the hereditary principle."1 The Roman Republic provided Americans with an example of a republican government operating without a monarchy. Yet, as the Anti-Federalists pointed out, the Roman Republic was inherently flawed because it ultimately collapsed. Plutarch acknowledged this flaw in his biography of Julius Caesar: "Some were so bold as to declare openly, that the government was incurable but by a monarchy…."
After decades of civil war, the Republic ceased to exist at the end of the first century BCE, giving way first to the dictatorship of Julius Caesar and then ultimately to the Roman Empire. Thus, by using the pseudonym "Brutus," the Anti-Federalists recalled the last days of the Republic, fanning concerns about the extended powers the Constitution gave to the national government. They also recalled one of the Republic's last defenders: a man committed to standing up against tyranny at a momentous point in Roman history.
The Anti-Federalists made a compelling point about the downsides of the Roman Republic. The Republic did not even remotely represent an equal political system; politics and government were exclusively the domain of the educated, wealthy elite. Running for office was a highly competitive and cutthroat process: to succeed, one needed to come from wealth and to have established connections in Rome, and bribery was common. Since political influence was consolidated in a small portion of the Roman population, the term "oligarchy" may best suit the Roman republican system. The Roman Republic was deeply flawed, but the Federalists and the post-Revolutionary generation looked instead to its honorable principles. Hence they alluded to the virtues that Publius exhibited: living free from a tyrant's rule, providing liberty to the people, and representing the best interests of society at large. Ultimately, the Federalists made a strong case for looking toward the ideals on which the Roman Republic was built rather than the corruption and flaws that ended it.
Publius Part IV: Bruti, Part II
The Rome in which Marcus Brutus lived, the late Republic, was vastly different from the Republic Lucius Brutus established centuries prior. Late Republican Rome governed a much more extensive territory.
Despite living under dissimilar circumstances, Marcus Brutus commanded respect in late-Republican politics because of his ancestors. On his father's side was Lucius Brutus. On his mother's side was Servius Ahala, "who, when Spurius Maelius… designed to make himself king… struck him with his dagger and slew him."1 Upholding and defending the Republic, even to the point of killing those who threatened it, was in Marcus Brutus's blood. His uncle, too, shaped his ultimate convictions: Marcus Brutus's uncle was Cato the Younger, a senator famous for opposing Julius Caesar. Marcus Brutus aspired to be like Cato, who "was whom of all the Romans his nephew most admired and studied to imitate."2 Under the wing of Cato the Younger, Marcus Brutus became skilled at recognizing tyranny. It is thus no surprise that he devoted much of his military career to salvaging the Republic.
During the civil war between Caesar and Pompey (49–45 BCE), Brutus judged his commitment to the Republic more important than his friendship with Julius Caesar. At first, Marcus Brutus was conflicted. But then, "thinking it his duty to prefer the interest of the public to his own private feelings, and judging Pompey's to be the better cause, [Brutus] took part with him."3 Setting his private feelings aside, Brutus sided with Pompey in order to prevent a lifelong dictatorship. At the end of Caesar's civil war, Brutus was on the losing side; Caesar was victorious and Pompey was dead. Nonetheless, Brutus's allegiance remained with the Republic, though it was in a state of utter chaos.
After the civil war ended, Brutus committed the act for which he is best known: assassinating Caesar and sparking a second round of civil war. Brutus heard a rumor that "Caesar's friends intended… to move that he might be made king,"4 confirming his worst fears. Thus, when Brutus's comrade Cassius asked him to join the plot against Caesar, Brutus agreed. He was willing "to stand up boldly, and die for the liberty of [his] country."5 He did not take the decision lightly, however; the idea of assassinating his friend tormented him, filling him with "unusual trouble."6 Though he loved Caesar, Brutus could not sit back and allow him to destroy the remnants of the Republic.
Maintaining his honor to the very end, Brutus committed suicide when he realized that he was going to lose the Battle of Philippi. His loss at Philippi and subsequent suicide marked the Second Triumvirate as victors, leaving Rome to dictatorship.
Both Bruti were fiercely committed to ensuring that no Roman king would rise again, which drew the Anti-Federalists to the pseudonym "Brutus." While the Federalist Publius stirred up images of hope for a new republic, the Anti-Federalist Brutus recalled the difficulties of maintaining one. For, as the Romans experienced, a republic could crumble into tyranny.
Voting Rights
Although the majority of Americans consider voting a guaranteed right in the United States, this right has a long and contentious history. Enfranchisement has grown steadily since the first election in 1788, but that growth was repeatedly slowed by individuals and groups with strongly held ideologies opposed to a broader franchise. According to historian Alexander Keyssar, the majority "of these forces or factors have long been recognized: racist and sexist beliefs and attitudes, ethnic antagonism, partisan interests, and political theories and ideological convictions that linked the health of the state to a narrow franchise." Many opponents of universal suffrage used, and continue to use, unfair and subtle rules to bar Americans from political participation, despite the numerous laws and amendments enacted to the opposite effect. In coming blog posts I will take up these specific voting-rights controversies; the aim of the series is to provide a better understanding of voting rights in the United States. But to start, let me give a brief history.
A Brief History of Voting Rights
Voting rights have been steadily expanding since the 1770s. At the time of the Founding, in 1776, only land-owning white men could vote in most states. It wasn’t until 1856, eighty years later, that the right to vote was extended to all white men regardless of property or religion in all states. In the general election that same year, approximately four million people, still only 17% of the total U.S. population at the time, cast their vote. Although this expansion was small, it marked the first of many acts and amendments which made the American electorate what it is today.
Only a few years later, the 15th Amendment, ratified in 1870, prohibited federal and state governments from denying men the right to vote based on race. Initially, African American participation in elections grew quickly, and many African Americans were elected to political office. Over time, however, many states developed loopholes to continue disenfranchising Black Americans through hefty poll taxes and nearly impossible literacy tests. African Americans who passed the tests and managed to pay the poll taxes still faced harassment and violence when they tried to vote. In theory, the second section of the 14th Amendment, which stipulated that a state's representation in Congress would be reduced if it abridged the right to vote, should have provided an incentive to protect voter access. In practice, that section was disregarded as the voting rights of Black Americans were narrowed over the following decades.
Literacy tests and other bureaucratic measures were consistently used to deny Black Americans the right to vote and to evade the terms of the 15th Amendment. Black Americans waited nearly 100 years from the ratification of the 15th Amendment to have their right to vote clearly secured, when Congress passed the Voting Rights Act in 1965. The Voting Rights Act required that new state voting practices and procedures be approved by the federal government, a measure meant to remove the barriers that prevented African Americans from voting. The reforms immediately affected turnout: by the end of 1965, 250,000 new black voters had registered, and over 73 million people, more than a third of the country's population, participated in the 1968 presidential election.
Women's suffrage was another slow process that spanned decades. In some states, such as New Jersey, women were able to vote in and around the founding. That right was nullified early in the nineteenth century when competing political parties "concluded that it was no longer to their advantage to have all 'inhabitants' – including women, aliens, and African Americans – in the electorate". New Jersey then reverted to allowing only white men to vote. This struggle persisted throughout the nineteenth century.
Finally, in 1920, the 19th Amendment granted suffrage to all women except Native American women, who were not yet recognized as citizens. This reform was the result of a decades-long movement of lobbying, petitioning, and picketing by women across the country. The amendment gave political representation to the 26 million women it enfranchised, fundamentally changing the place of women in society by empowering them to express their political preferences. It also dramatically changed the eligible electorate, which spurred the growth of government and influenced the use of public resources. A paper by Yale Professor John R. Lott and University of Florida Professor Lawrence W. Kenny found that women's suffrage led to growth in government expenditures and more liberal voting patterns in the U.S. House and Senate. Still, the amendment came under fire for failing to effectively enfranchise the Native American and African American women who had supported and lobbied for its approval.
The question of whether to enfranchise Native Americans remained controversial into the 1920s. The legality remained murky due to the sovereignty of Native American tribes. At the time, only 8% of Native Americans were taxed and therefore eligible to become American citizens and exercise their vote. Yet in 1924, the Indian Citizenship Act granted citizenship to all Native Americans, including the nearly 50% of Native Americans who were not citizens. Although the Indian Citizenship Act was revolutionary, it too did not offer full protection of Native Americans' voting rights. Some states, most prominently Arizona and New Mexico, adopted laws banning Native American citizens from voting. More subtly, Native Americans were also subject to many of the same poll taxes and literacy tests as African Americans until the Voting Rights Act of 1965 outlawed their use.
Over the course of American history, constitutional amendments and acts have incrementally expanded voting rights to men, women, African Americans, and Native Americans. Yet even today, more than 50 years after the Voting Rights Act, the right to vote is all too often narrowed. Some Americans continue to be disenfranchised, an issue that the proposed John Lewis Voting Rights Act aims to address and that I will delve into in future blog posts.
Publius Part I: The Culture of Classicism & Classical Dissent
Alexander Hamilton, James Madison, and John Jay signed the eighty-five Federalist Papers with the pseudonym Publius. By choosing this name, the Federalists associated themselves with Publius Valerius Poplicola, one of the founders of the Roman Republic.
Determining which Publius these founding fathers were referencing is relatively straightforward. Publius was a fairly common Roman praenomen, or first name, with Publius Valerius Poplicola and Publius Clodius Pulcher as the most well-known. Since Plutarch, a favorite classical author of Alexander Hamilton’s1, wrote a biography of Poplicola, the Federalists were most likely referring to him rather than Clodius, whom Plutarch never mentioned. As Plutarch and Livy described him, Publius embodied Republican virtue because he empathized with the people and preserved their newfound liberties.
To provide some context, Publius became influential in the formation of the Roman Republic as it was arising from the old Kingdom of Rome. The Romans, especially the Senate,2 were unhappy with Rome's last king, Tarquinius Superbus, because he ruled by fear and violence.3 Led by nobles like Publius Publicola and Lucius Brutus, the Romans deposed Tarquinius Superbus and sent him into exile. In the aftermath of Tarquinius Superbus's exile, the Roman Republic was born, and the first consuls, who shared authority over Rome, were elected. Unfortunately for Publius, he was not one of them.
Reacting to the disappointing results of the election, Publius did what he perceived as best for the new Republic: he quit politics entirely. He feared that if he continued to be involved in Roman government, he would aspire to be king and ruin the new republic.4 But when one of the consuls was exiled, Publius took his place. He and Lucius Brutus, along with the Senate, shared the responsibility of ruling Rome.
Once Publius assumed his role as consul, he demonstrated his commitment to the people with public displays. When entering the assembly of the people, Publius would bow his fasces "to show, in the strongest way, the republican foundation of the government."5 The fasces was the ultimate visual symbol of magisterial authority for the Romans. Composed of sticks and an axe blade, it was a legitimate weapon and was used for beheadings in early Rome.6 Thus, when Publius dipped his fasces, he showed "deference to the source of magisterial authority, the populus in assembly."7 Plutarch also recounted how the people called him by the name Publicola, meaning "the People's Friend."8 Publius expressed his support for the Roman people and their assembly not only outwardly, but also through his legislation.
When elected consul, Publius used his time in office to establish a political framework focused on the Roman people. According to Plutarch, Publius legislated "one [policy] granting offenders the liberty of appealing to the people from the judgement of the consuls" and "relief of poor citizens, which, t[ook] off their taxes…"9 This legislation was particularly important because Tarquinius had taxed the Roman people relentlessly during his reign. Through these laws, Publius made the Roman populus his focus, boosting his popularity among them. He remained in favor through the succeeding consulships and was honored by the Roman people as a figure "full of all that is good and honorable"10 upon his death.
Publius was a sign of a new age for the Roman people. As one of the first elected consuls of Rome, Publius helped depose the last Roman king, made long strides in legislation, and gave the people a voice. It was fitting, then, for the Federalists to write under his name. Publius, in the Federalists' eyes, was the epitome of what it meant to govern without a king while embracing a republican structure of government.
Publius Part II: The Federalists & Publius
Publius Part III: Brutii, Part I
Just as the Federalists used the pseudonym Publius, the Anti-Federalists conveyed a message with their own pseudonym, Brutus. Determining which Brutus the Anti-Federalists were referring to is a tougher question, since there were multiple “Brutuses,” or “Bruti,” in Roman memory. The most well-known Brutus was Marcus Iunius Brutus, Julius Caesar’s friend who ultimately took part in assassinating him. At the same time, the Anti-Federalists could have also been referring to Lucius Iunius Brutus, one of Marcus Brutus’s ancestors. Lucius Brutus was a leader in overthrowing the Roman monarchy and establishing the Republic. He was also a colleague of Publius Poplicola.
Regardless of which Brutus the Anti-Federalists were referring to, both Bruti were defined by their fight against those who sought unlimited political power, a strikingly Anti-Federalist sentiment.
Politically, Lucius Brutus emphasized squashing the possibility of another Roman king. After helping to overthrow Tarquinius Superbus alongside Publius, Lucius Brutus was elected as one of Rome's first consuls. Before setting up the framework of the Roman Republic, one of Brutus's first acts as consul was to ask the Roman populus to "swear an oath that they would suffer no man to be king in Rome."1 Preventing tyranny was Lucius Brutus's priority as consul, as his acts reflected. Rather than merely commit himself publicly to maintaining the Republic, Lucius Brutus's opposition to tyranny prompted him to go to extraordinary lengths for its betterment, including convicting his own sons.
Lucius Brutus's commitment to the Republic superseded his familial ties. Plutarch viewed Brutus as austere and perhaps too committed to his principles. As Plutarch described, when Lucius Brutus discovered that his sons were traitors to the Republic, he convicted them and sentenced them to death.
From a legal perspective, a Roman father had the authority to sentence his children to death, no matter how old they were. By the time Plutarch was writing in the 2nd century CE, however, the practice had become archaic and was seldom used. As Andrew Riggsby writes, the father's power extended to "his right to execute them at will, though actual instances are so rare that some have questioned the rule itself."2 So when Plutarch described Lucius Brutus watching his sons die without "the least glance of pity to soften and smooth his aspect of rigor and austerity," noting that he "sternly watched his children suffer,"3 the portrait was not complimentary. By watching his children suffer, Lucius Brutus (in Plutarch's eyes) was perhaps too zealous in his maintenance of the Roman Republic.
No matter how extreme Lucius Brutus was in applying the principles of the Roman Republic, he ultimately sacrificed himself for its preservation. When Tarquinius Superbus marched on Rome to reclaim his throne, Lucius Brutus died in the fighting. Everything Lucius Brutus did was for the benefit of the Republic, above all his focus on preventing any king from rising again. He protected the Republic even when doing so meant dying for it. His descendants, especially Marcus Brutus, showed no less zeal.