The messy reality of religious liberty in America

The wedding cake on display at Masterpiece Cakeshop.
AP Photo/Brennan Linsley

David Mislin, Temple University

On Tuesday, Dec. 5, a visibly divided U.S. Supreme Court tackled the contentious issue of religious freedom when it heard oral arguments in “Masterpiece Cakeshop, Ltd. v. Colorado Civil Rights Commission.” The arguments appeared to evenly split the four conservative justices from the four liberals. Justice Anthony Kennedy, who is often a swing vote, seemed to side with the baker.

The case involves a Denver bakery owner who refused to make a wedding cake for a gay couple, citing his religious belief that marriage can be between only a man and woman. The couple sued, and a lower court ruled the baker violated Colorado’s public accommodations law. The statute forbids discrimination by businesses serving the public, including on the basis of sexual orientation.

In their appeal to the Supreme Court, the bakery’s lawyers have emphasized free speech issues by presenting the baker as an artist who has a right to choose how he expresses himself. But religious freedom remains central to the case. A key question is whether a business owner must provide services that conflict with his or her religious beliefs.

This divisive case highlights the vast difference between the rhetoric of religious freedom, often idealized as a guarantor of harmony and equality, and its reality. History suggests that it has frequently led to more conflict, not less.

The rhetoric: Equality and goodwill

It is true that throughout U.S. history, Americans have idealized religious freedom and imagined that it brings harmony.

Text of the First Amendment.
Jack Mayer, CC BY-NC-SA

The First Amendment’s clauses guaranteeing religious free exercise and preventing establishment of an official church seemed to promise less discord to the Founding Fathers. In an 1802 letter, Thomas Jefferson, for example, wrote that “religion is a matter which lies solely between Man & his God.” As the nation’s third president, he argued that a “wall of separation between Church & State” would give all people equally the right to free conscience.

Later presidents echoed the view that religious freedom brings equality and unity by preventing government from favoring particular faiths.

Before his election in 1960, John F. Kennedy tried to ease fears about his Catholicism by affirming religious liberty. Kennedy believed this freedom kept one group from oppressing another. It formed the basis of a society, he declared, where people would “refrain from those attitudes of disdain and division which have so often marred their works in the past, and promote instead the American ideal of brotherhood.”

In the early 1990s, George H.W. Bush identified religious liberty as the basis for other rights. He credited it as a major reason for the vibrancy of American society.

The reality: Conflict and debate

But the promised harmony has proved elusive. Scholars such as Steven K. Green and Tisa Wenger have documented arguments about religious freedom throughout U.S. history.

Minority communities, ranging from Catholics to Mormons, have fought to have their traditions and customs recognized as religious. As I show in my work on pluralism, Americans have debated what constitutes a religious expression rather than a cultural practice. People have also argued whether religious expression can extend into political, social and business interactions.

These debates have required the intervention of the courts and have often ended at the Supreme Court. Thus, a right intended to free Americans from government has instead necessitated frequent involvement of a major government institution.

Further complicating matters, the Supreme Court has changed its position over time. Its evolving interpretations show how religious freedom debates create shifting categories of winners and losers.

To the courts

Like Masterpiece Cakeshop, one of the Supreme Court’s first religious liberty cases involved marriage. In 1878, a Mormon resident of the Utah territory sued the federal government after he was charged with bigamy. He argued that the law violated his religious liberty by criminalizing his polygamous marriage. The Supreme Court disagreed. In Reynolds v. United States, the court ruled that the First Amendment guaranteed only freedom of belief, not freedom of practice.

In the 20th century, the Supreme Court showed greater sympathy to religious liberty claims. In several cases – including one brought by Jehovah’s Witnesses challenging a statute requiring a permit for public evangelizing and another by an Amish community that objected to Wisconsin’s compulsory public school law – justices sided with those who claimed their freedom was violated.

That changed in 1990. The court ruled against two men who lost their jobs after using peyote, a cactus with hallucinogenic properties that has long been used in Native American religious practices. The men, members of a Native American church, said they had used the drug for religious purposes. But because they were fired for drug use, they were denied unemployment benefits.

United States Supreme Court.
Josh, CC BY-NC-ND

Moving away from earlier decisions, justices ruled that religious belief was not a ground for refusing to obey laws “prohibiting conduct that the State is free to regulate.”

New century, new conflicts

The peyote case set the stage for Masterpiece Cakeshop. It was in response to that decision that Congress passed the Religious Freedom Restoration Act (RFRA) of 1993, which required that laws substantially burdening religious exercise serve a compelling government interest.

RFRA was central to the Supreme Court’s 2014 decision in Burwell v. Hobby Lobby. That contentious split ruling granted closely held companies the right to deny contraceptive benefits mandated by the Affordable Care Act on the grounds of protecting their owners’ religious liberty.

Similarly, in October 2017, the Trump administration invoked freedom of religion when it allowed all employers a religious exemption to the contraception coverage requirement in the Affordable Care Act.

Critics saw that policy change as an attack on women’s rights. Reaction to it on both sides again showed that government involvement in debates about religious freedom invariably produces winners and losers.

Given our polarized society and the division among the Supreme Court justices today, this pattern will continue, whatever the ruling may be.

This is an updated version of an article first published on Nov. 28, 2017.

David Mislin, Assistant Professor, Intellectual Heritage Program, Temple University

This article was originally published on The Conversation. Read the original article.

Colleges need affirmative action – but it can be expanded

Race-neutral affirmative action can help identify first-generation students like Blanca Diaz and LaQuintah Garrett.
AP Photo/Amy Anthony

Eboni Nelson, University of South Carolina

In 2003, Justice Antonin Scalia predicted that the Supreme Court’s sanctioning of race-conscious affirmative action in higher education would spark future litigation for years to come. And right he was. From defeated claims of discrimination against the University of Texas at Austin to an ongoing lawsuit against Harvard, colleges continue to come under attack for considering race as a factor in admissions decisions.

The recent report of the Department of Justice’s possible investigation of “intentional race-based discrimination in college and university admissions” demonstrates that the assaults aren’t likely to end anytime soon.

As a professor of law and scholar dedicated to ensuring equal educational opportunities for students of color, I believe now is an important time to earnestly consider other methods for diversifying student bodies. Race-neutral alternatives could effectively consider such factors as socioeconomic status and educational background, while supplementing more traditional affirmative action.

Lawyer Bert Rein and his client, Abigail Fisher, failed in their discrimination case against UT Austin’s affirmative action policies.
AP Photo/J. Scott Applewhite

‘Race-based’ vs. ‘race-conscious’

When thinking about affirmative action, it’s important to first define (and debunk) a few key terms, starting with “race-based” and “race-conscious” affirmative action.

“Race-based affirmative action” is a misnomer often used to describe some college admissions policies. “Race-based” implies that an admissions decision is made solely because of or based upon an applicant’s race or ethnicity, which could not be farther from the truth. A university’s decision to admit, deny or waitlist an applicant is based upon myriad criteria, ranging from standardized test scores to state of residency. Race is just one of many admissions factors a university may consider.

This approach is more appropriately termed “race-conscious.”

Schools that employ race-conscious admissions policies do so in order to achieve the educational, social and democratic benefits of a diverse student body.

As the Supreme Court held in Gratz v. Bollinger, race is not and cannot be the determining factor under a constitutional race-conscious plan. Therefore, when people claim that an African-American or Hispanic student was admitted because of race, they’re often not only inaccurate but also dismissive of the student’s other numerous attributes that played a role in the university’s decision.

Race-neutral alternatives

Opponents of race-conscious affirmative action often assert that such policies are racist or disproportionately benefit privileged minority students from middle- and upper-class backgrounds.

Justice Sandra Day O’Connor delivered the majority opinion in Grutter v. Bollinger, which asserted that schools must consider ‘workable race-neutral alternatives.’
AP Photo/Susan Walsh

For its part, the Supreme Court is also skeptical of using racial classifications in governmental decision-making. As a result, it has held that institutions of higher education must afford serious consideration to “workable race-neutral alternatives” before implementing a race-conscious policy.

Importantly, the court’s use of the term “race-neutral” does not mean “race-blind.” That is, universities are permitted to think about how alternative admissions criteria could help them achieve their diversity goals. Race-neutral criteria could include socioeconomic background, high school or undergraduate institution, or class rank. In other words, these are factors that may contribute to a school’s racial diversity, but applicants themselves are not considered based on race.

In some cases, it’s proven difficult for race-neutral admissions policies to achieve the same levels of racial diversity as those achieved through direct consideration of race. However, such measures have been useful in helping to diversify student bodies when used in conjunction with or in lieu of race-conscious affirmative action.

The viability of race-neutral alternatives

When coupled with the stark racial disparities that continue to plague some professions, the uncertain future of race-conscious affirmative action calls for a renewed focus on alternatives that look beyond race alone.

TV isn’t the only place where the legal profession remains one of the whitest.
USA Network

My co-researchers, Dr. Ronald Pitner and Professor Carla D. Pratt, and I recently took a look at one particular aspect of higher education diversity: law school admissions.

Law schools play a unique role in training our country’s next generation of leaders. It is, in fact, vital to the future of our democracy that we continue to provide students from historically underrepresented racial groups with access to legal education. And yet, the legal profession was recently determined to be “one of the least racially diverse professions in the nation.”

To help law schools improve their diversity, we examined the relationship between race and race-neutral identity factors in law school admissions. The project, which was funded in part by a grant from AccessLex Institute, surveyed over a thousand first-year law students at schools throughout the country and asked about various aspects of their identity, such as socioeconomic status and educational background.

Our findings indicated that African-American and Hispanic students were significantly more likely than both white and Asian/Pacific Islander students to have qualified for free or reduced lunch programs in elementary or secondary school, to have had a parent or guardian who received public assistance while the student was a dependent minor, and to have received a Pell Grant during their undergraduate studies – all race-neutral factors that schools could consider in admissions decisions.

Race-neutral affirmative action can help identify first-generation students and students from low-income families.
AP Photo/Pat Sullivan

How admissions could change

Based on the sample of participants in our study, it’s clear that privilege did not catapult all students of color to law school. Many of them had to overcome the structural inequalities of poverty, race and public education to embark on a legal career. Expanding opportunities for these and other minority students will benefit not only legal education and the legal profession, but also society more broadly.

Race-neutral admissions policies could help identify and create opportunities for these students.

To be clear, I do not advocate for the wholesale substitution of traditional race-conscious admissions measures with the factors we studied. Race-conscious policies continue to be the most effective means by which to create diverse student bodies.

However, we encourage law schools and other institutions of higher education to utilize these and other race-neutral admissions factors as a means of complying with the Supreme Court’s affirmative action mandates and testing the viability of policies that take such factors into account.

Doing so will help ensure that traditionally underrepresented students of color will continue to have access to colleges and universities that serve as gateways to career, financial and life opportunities.

Eboni Nelson, Professor of Law, University of South Carolina

This article was originally published on The Conversation. Read the original article.

What’s with Catherine Zeta-Jones playing Colombian drug lord Griselda Blanco?

Over the Thanksgiving holiday, you might have seen the high-budget trailer for Lifetime’s Griselda Blanco biopic, Cocaine Godmother. If not, here you go:

If you’re attuned to representation issues, you probably know what I’m going to point out as the problem. Catherine Zeta-Jones, a Welsh woman, is playing Blanco, a Colombian woman. Why is she, though?

There are plenty of Latina actresses who could have played this role, and one in particular has been lobbying for it for a very long time–Jennifer Lopez. Lopez has wanted to play Blanco for years and has struck a deal with HBO to bring her own TV movie to life (when that movie is coming remains to be seen).

Surprisingly, it’s also not the first time Zeta-Jones has been tapped to play Blanco; she was initially slated to play the Queen of Cocaine in a biopic called The Godmother. According to W Magazine, Zeta-Jones won that role over…Jennifer Lopez. A source told The Sunday Times in 2016 that, despite Lopez’s hard lobbying, she didn’t win out because “she doesn’t have the acting quality to pull it off.”

Today, neither woman is in that role–it now belongs to Oscar-nominated actress Catalina Sandino Moreno (Maria Full of Grace). But both women are gunning to have the last word on Blanco’s life, and right now Zeta-Jones’ vanity project is in the lead.

This gets back to the main point of this article–why is a non-Latina actress playing a Latina figure? From where I’m sitting, it seems like another case of Hollywood (and maybe even Zeta-Jones herself) believing in casting white actors in non-white roles because they have an ethnic “look.” It’s another, subtler kind of whitewashing.

There’s a reason Zeta-Jones has been able to play Latina on more than one occasion–she played a Latina character in The Mask of Zorro opposite Antonio Banderas–and that’s because she’s a white woman who has ethnically-ambiguous looks. Casting-wise, Zeta-Jones fits the model Hollywood looks for when casting a stereotypical non-black “Latina” role; she’s, as Hollywood would describe her, “exotic” thanks to her olive skin and curvy features. But casting her also comes with the added bonus of whiteness, which adds “credibility,” and “name recognition” to the role. In this way, Zeta-Jones can play both sides, having her cake and eating it, too.

But in the stills and trailer for Cocaine Godmother, you can still see Zeta-Jones exaggerating her already ethnically-ambiguous features to the point where it starts becoming character makeup. Her naturally olive skin is bronzed even further to bring it closer to Blanco’s, making it look unnaturally tanned. Her nose is somehow contoured and highlighted to look even more bulbous in an effort to match Blanco’s real-life nose. The overall look is meant to make her appear less like a Welsh woman and more like a woman of color–the makeup treatment doesn’t want you to equate Zeta-Jones’ performance with brownface, but let’s face it: it’s brownface.

This is also not the first time a white actress has used ethnic ambiguity to her advantage. Shirley MacLaine, who has naturally hooded eyes, did it in My Geisha, a 1962 film that basically plays a white woman stealing a role from a Japanese woman as comedy, and again in 1966’s Gambit, in which she plays opposite Michael Caine as “exotic Eurasian showgirl” Nicole Chang. More recently, Floriana Lima, an Italian-American actress, used her looks to play the Latina Supergirl character Maggie Sawyer. Many more examples exist beyond these two.

Zeta-Jones is looking to have her cake and eat it too again with Cocaine Godmother. But this time, there’s a little bit of pushback.

The noise around this film is only going to grow as we approach its 2018 TV premiere. We’ll see how the film handles the whitewashing debate it will inevitably face.

How doctors’ bias leads to unfair and unsound medical triage

Photo by Piron Guillaume on Unsplash

Philip Rosoff

When someone is sick or needs the help of a physician, who should decide what is appropriate – what blood tests and imaging studies to order, what medicines to prescribe, what surgeries to perform? Should it be the doctor, the patient or some combination of the two? Most people nowadays (even most physicians) support what is called ‘shared decision-making’, in which the doctor and patient (and often her family or friends) discuss the situation and come up with a joint plan. The doctor’s role is that of experienced guide, whose medical knowledge, skill and expertise help to shape the conversation and whose understanding of the priorities, values and goals of the patients steers the plan in a given direction to the satisfaction of all.

Unfortunately, in the real world, things don’t always work this way. Doctors and patients have a number of masters, both welcomed and uninvited. Insurance companies or other third-party payers often intrude into the decision-making process, limiting the choices of what services and products might be available: a sick patient often must wait for pre-authorisation for expensive diagnostic tests and procedures; pharmacy formularies restrict the kinds of drugs available for prescriptions, and so on. Furthermore, some doctors have personal interests in the interventions they recommend. Many surgeons make more money if they do more surgery, cardiologists earn more if they put in more cardiac stents and pacemakers, and drug companies have better profits if they sell drugs for chronic conditions that never get better and require lifelong medication (such as high cholesterol, hypertension and diabetes). Such practices contribute to the seeming inexorable rise in healthcare costs (and a host of adverse outcomes) in the United States.

Yet controlling cost without sacrificing quality has been a daunting task. One strategy might be to pay more attention to what patients need, and less to what they want, in cases where the two don’t overlap. Another is limiting overprescribing by doctors driven by conflicts of interest or acts of ‘defensive medicine’ – interventions meant to protect themselves from lawsuits, not aid the patient.

How does one go about rationing care? Will faceless bureaucrats be denying granny her medication or access to an intensive-care unit solely because she’s old, or saying that Billy can’t get his conditions treated because he is disabled? Indeed, dread of rationing – as well as a healthy dose of old-fashioned fear-mongering by crafty politicians – is what inspired the meme of ‘death panels’, an unfounded canard based upon a misinterpretation of a proposed federal rule for Medicare. Nevertheless, the concept of rationing is still of concern because it implies restriction of a resource that could be beneficial.

Therefore, rationing doesn’t apply to interventions that can’t help anyone at any time – for instance, antibacterial antibiotics that won’t work because the patient has a viral infection. A better example of true rationing is the allocation of organs – such as livers, hearts and lungs – for transplantation. Organ transplantation requires rationing because the supply never keeps up with demand. We also ration drugs that can suddenly become scarce (a distressingly common problem).

But there are other forms of rationing that are problematic, too. The most common one, intrinsic to the US healthcare system, involves limiting the kind and amount of healthcare one can obtain based on one’s financial situation. Poorer people get less and worse healthcare than wealthy people. While the most offensive aspects of this arrangement have been mitigated to some extent in those states that expanded Medicaid under the auspices of the Affordable Care Act, there are still alarming numbers of Americans who have limited access to effective medical care. This is one of the chief reasons why the US population as a whole doesn’t get as much bang per buck as citizens of many other nations, and this form of rationing is blatantly unfair.

But there is another form of rationing that is more insidious still. This is the so-called bedside rationing, in which doctors decide, on an individual per-patient basis, what should be available to them, regardless of the range of services that their insurance or finances might otherwise allow. The problem with this is that it is readily susceptible to prejudice and discrimination, both overt and hidden. It is well-known that doctors, like pretty much everyone else, harbour so-called implicit biases that are readily revealed on the implicit-association test (available online).

This does not mean that physicians express overt sexism, racism or other forms of bigotry – but rather that these unconscious beliefs about others can influence the kinds of treatments that they offer. Thus, bedside rationing can violate one of the cardinal principles of fairness – that clinically similar situations be treated similarly. So doctors could offer one patient (say, a well-off white person) with unstable angina and blocked coronary arteries the standard of care with cardiac catheterisation and stents, while offering just medical therapy to an African-American patient with comparable disease. And there is ample evidence that such differential treatment occurs.

So how does one ‘choose wisely’ and escape the moral pitfalls of bedside rationing? It turns out that this is extraordinarily difficult to do, especially in a system such as ours, where physicians have broad discretionary power over what diagnostic and treatment interventions appear on the ‘menu’ for each patient. This can readily lead to patients being offered too much or too little for reasons that cannot be easily justified.

I think that the solution, at least in the US, might require a wholesale re-engineering of our healthcare system to minimise the financial incentives to overprescribe, and to protect or immunise against the biases that lead to inappropriate rationing at the bedside. The only way to reduce the frequency of these behaviours is to have a single-payer system that controls (to a certain extent) the availability of certain interventions, analogous to the way in which the organ-transplant system regulates who gets transplanted and under what circumstances.

Of course, unlike livers and hearts, what needs to be rationed in the US is money and what it can buy. We could save money through efficiencies of scale and by cutting the waste and administrative costs that account for at least 25 per cent of what we now spend. Can we totally eliminate ‘bad’ rationing? No, of course not. But Americans should do all they can to avoid the moral tragedy of being the wealthiest nation on Earth that chooses dumbly, not wisely, about healthcare.

Philip Rosoff is professor of paediatrics and director of the clinical ethics programme at Duke University Hospital in North Carolina. He is the author of Drawing the Line: Healthcare Rationing and the Cutoff Problem (2017).

This article was originally published at Aeon and has been republished under Creative Commons.

Gun violence in the US kills more black people and urban dwellers

A man changes a flag to half-staff near the First Baptist Church of Sutherland Springs.
AP Photo/Eric Gay

Molly Pahn, Boston University; Anita Knopov, Boston University, and Michael Siegel, Boston University

On Nov. 5, just 35 days after the deadly Las Vegas shooting, a man walked into a church in a small Texas town and murdered 26 people with an assault rifle. The coverage dominated the news.

But the day before, even more people – 43 – were shot to death in cities and towns around the country. And nobody really seemed to notice.

Shootings kill more than 36,000 Americans each year. Every day, 90 deaths and 200 injuries are caused by gun violence. Unlike terrorist acts, the everyday gun violence that impacts our communities is accepted as a way of life.

Of all firearm homicides in the world, 82 percent occur in the United States. An American is 25 times more likely to be fatally shot than a resident of other high-income nations.

As public health scholars who study firearm violence, we believe that our country is unique in its acceptance of gun violence. Although death by firearms in America is a public health crisis, it is a crisis that legislators accept as a societal norm. Some have suggested this is because blacks, not whites, are the predominant victims – and our data support this striking disparity.

Urban and racial disparities

Within the United States, the odds of dying from firearm homicide are much higher for Americans who reside in cities. Twenty percent of all firearm homicides in the U.S. occur in the country’s 25 largest cities, even though they contain just over one-tenth of the U.S. population. Data from the Centers for Disease Control and Prevention show that of the 12,979 firearm homicides in 2015, 81 percent occurred in urban areas.

There is even more to the story: CDC data also show that within our nation’s cities, black Americans are, on average, eight times more likely to be killed by firearms than those who are white. The rate of death by gun homicide for black people exceeds that among whites in all 50 states, but there is tremendous variation in the magnitude of this disparity. In 2015, a black person living in Wisconsin was 26 times more likely to be fatally shot than a white person in that state. At the same time, a black person in Arizona was “only” 3.2 times more likely than a white person to be killed by a gun. The combination of being black and living in an urban area is even more deadly. In 2015, the black homicide rate for urban areas in Missouri was higher than the total death rate from any cause in New York state.

These differences across states occur primarily because the gap between levels of disadvantage among white and black Americans differs sharply by state. For example, Wisconsin – the state with the highest disparity between black and white firearm homicide rates – has the second-highest gap of any state between black and white incarceration rates, and the second-highest gap between black and white unemployment rates. Racial disparities in advantage translate into racial disparities in firearm violence victimization.

Americans are 128 times more likely to be killed in everyday gun violence than by any act of international terrorism. And a black person living in an urban area is almost 500 times more likely to be killed by everyday gun violence than by terrorism. From a public health perspective, efforts to combat firearm violence need to be every bit as strong as those to fight terrorism.

The first step in treating the epidemic of firearm violence is declaring that the everyday gun violence devastating the nation is unacceptable. Mass shootings and terrorist attacks should not be the only incidents of violence that awaken Americans to the threats to our freedom and spur politicians to action.

Molly Pahn, Research Manager, Boston University; Anita Knopov, Research fellow, Boston University, and Michael Siegel, Professor of Community Health Sciences, Boston University

This article was originally published on The Conversation. Read the original article.

I’m a librarian in Puerto Rico, and this is my Hurricane Maria survival story

Condado, San Juan, Puerto Rico.(Photo by Sgt. Jose Ahiram Diaz-Ramos/PRNG-PAO)

Evelyn Milagros Rodriguez, University of Puerto Rico – Humacao

I’ve always been fascinated by storms, particularly Puerto Rico’s own history of them. I think it’s because I was born in September 1960 during Hurricane Donna. In its wake, that storm left more than 100 dead in Humacao, the city where I am now a special collections librarian at the University of Puerto Rico.

In 1990, Israel Matos, the National Weather Service Forecast Officer in San Juan, told me that, “The tropics are unpredictable.” That comment only increased my interest in storms. Now, with the people of Puerto Rico still reeling from Hurricane Maria more than a month after it hit the island, his words seem prescient.

Today I have – if not the honor, then the duty – to describe, firsthand, what it is to live through the aftermath of the worst storm of this brutal hurricane season.

Academic crisis

Since the storm I haven’t been able to go to work at the library on the Humacao campus. At 88,000 square feet and three stories, the biblioteca is the biggest building on campus, and it’s among the worst damaged by Maria.

The library at the University of Puerto Rico, Humacao.
Evelyn M. Rodríguez, Author provided

It’s mold-infested and the roof is leaking, so there’s a lot of work to be done in both repairs and cleaning before students can use it. The mold has gotten into our collection – from books and papers to magazines – and most of the furniture and computers will have to be replaced.

According to the general damage report for the University of Puerto Rico, the infrastructure in all 11 campuses of the university system suffered severe losses.

Héctor Rios Maury, chancellor of the University of Puerto Rico’s Humacao campus, speaks to staff and students after the storm.
Evelyn M. Rodríguez, Author provided

The Humacao campus, located on the island’s eastern side, was the hardest hit, with damages calculated at more than US$35 million. Classes will start again on Oct. 31.

Students at the Río Piedras campus, which has been partially closed since Hurricane Irma skirted Puerto Rico on Sept. 6, have had only a week of class so far this year.

Five weeks after Hurricane Maria, all the campuses have now reissued their academic calendars and classes are resuming, though in some places the first semester will run through January to make up for lost time.

A culture of catastrophe

Starting on Sept. 20, 2017, Hurricane Maria swamped Puerto Rico with 20 inches of rain and battered it with 150 mph winds for over 30 hours.

The resulting humanitarian crisis has been widely reported worldwide: 80 percent of the island is still without electricity and there is not enough drinking water.

Communications – radio, television, telephones and internet – are now recovering slowly, after weeks of near nonexistence. Even so, it took me more than two weeks just to write this article, between finding somewhere to charge my laptop and locating an internet connection strong enough to research the data and send a file by email. Eventually I discovered a Starbucks near my house with both electricity and Wi-Fi. Nothing is easy.

What outsiders are unable to see, perhaps, is that an entire culture has arisen around the catastrophe caused by Hurricane Maria – one with typically catastrophic traits: material scarcity, emotional trauma, economic ruin, environmental devastation.

Puerto Ricans are now facing a dramatically different way of life, which means our relatives and friends in the diaspora are, too.

Nothing about life resembles anything close to normal. An estimated 100,000 homes and buildings were destroyed in the storm, and 90 percent of the island’s infrastructure is damaged or destroyed. Not only are there shortages of water, electricity and food, but highways, bridges, security forces and medical facilities are also diminished or gone.

It’s dangerous to venture outside at night. An island-wide curfew was lifted last week, but without streetlights, stoplights or police, driving and walking are dangerous after dark.

The official tally of missing people varies, with police tallies ranging from 60 to 80 right now. Considering Puerto Rico’s hazardous conditions and limited health care services, that number is sure to rise. We are well aware that epidemic diseases, including leptospirosis and cholera, could come next. Health concerns are further stoked by the delays and disarray of the various federal agencies tasked with handling this emergency.

A deep uncertainty looms over our futures. There is post-traumatic stress involved in surviving an overwhelming situation like this, and as a people we’re now waking up to that psychological pain, too.

The outlook from here

In short, Hurricane Maria has changed the modern history of Puerto Rico. For those who, like me, are curious about such things, the last storm of this caliber was San Felipe II, in 1928.

Known in the U.S. as the Great Okeechobee Hurricane, that massive storm was so destructive that it basically plunged Puerto Rico and Florida into the Great Depression a year before the rest of the country.

The aftermath of the Great Okeechobee Hurricane in Florida, 1928.
NOAA

In some ways, though, Puerto Ricans are well prepared for these challenges, for the history of the island is one of uncertainty and trouble.

Puerto Rico has never had a sovereign government. Instead, it has always been bound to some other larger and more powerful state. First it was Spain, which colonized our territory in 1508.

Then, since the 1898 invasion, it’s been the United States, a country with which Puerto Rico enjoys a tricky political relationship. That’s very clear right now, as the Trump administration wavers in coming to our aid.

Mired in uncertainty

Even before the hurricane arrived, Puerto Rico was facing uncertainty around another major challenge: bankruptcy. Considering lost pensions, jobs and savings, the real financial costs surely exceed by billions the official sum of $123 billion in unpaid government debt.

Hurricane Maria has deepened this economic crisis, creating a ripple effect that touches everyone across all levels of society.

Everyone is mired in uncertainty. What is the solution to this cascading set of problems? How long will recovery take? What could actually make life better for us? What will we miss? Will anything ever be the same?

Among all this concern and confusion, though, some things have become clearer since the storm. On this once-green island, the hurricane blew down and shredded thousands and thousands of trees.

The sky is more visible now. Houses once hidden are exposed, and we discern entire communities that we rarely saw before.

A graffiti-scrawled reminder in Puerto Rico: ‘Behind the trees live a lot of people.’
Evelyn M. Rodríguez, Author provided

There’s graffiti popping up across the island, written by someone identified as “JC,” who reminds Puerto Ricans, as a kind of consolation, that “Behind the trees live a lot of people.”

Just as new environments are created in areas opened up by the hurricane, with trees and plants sprouting afresh, over time we’ll find that our current uncertainties also fade and transform. A brand new way of life is emerging among all Puerto Ricans – those who stayed, those who left, their relatives and their friends.


Evelyn Milagros Rodriguez, Research, Reference and Special Collections Librarian, University of Puerto Rico – Humacao

This article was originally published on The Conversation. Read the original article.