When someone is sick or needs the help of a physician, who should decide what is appropriate – what blood tests and imaging studies to order, what medicines to prescribe, what surgeries to perform? Should it be the doctor, the patient or some combination of the two? Most people nowadays (even most physicians) support what is called ‘shared decision-making’, in which the doctor and patient (and often the patient’s family or friends) discuss the situation and come up with a joint plan. The doctor’s role is that of experienced guide, whose medical knowledge, skill and expertise help to shape the conversation, and whose understanding of the patient’s priorities, values and goals steers the plan in a given direction to the satisfaction of all.
Unfortunately, in the real world, things don’t always work this way. Doctors and patients have a number of masters, both welcome and uninvited. Insurance companies or other third-party payers often intrude into the decision-making process, limiting the choices of services and products available: a sick patient often must wait for pre-authorisation for expensive diagnostic tests and procedures; pharmacy formularies restrict the kinds of drugs available for prescription, and so on. Furthermore, some doctors have personal interests in the interventions they recommend. Many surgeons make more money if they do more surgery, cardiologists earn more if they put in more cardiac stents and pacemakers, and drug companies enjoy higher profits if they sell drugs for chronic conditions that never get better and require lifelong medication (such as high cholesterol, hypertension and diabetes). Such practices contribute to the seemingly inexorable rise in healthcare costs (and a host of adverse outcomes) in the United States.
Yet controlling cost without sacrificing quality has been a daunting task. One strategy might be to pay more attention to what patients need, and less to what they want, when the two don’t overlap. Another is to curb the excesses of doctors who prescribe because of conflicts of interest or as acts of ‘defensive medicine’ – in other words, to protect themselves from lawsuits, not to aid the patient.
How does one go about rationing care? Will faceless bureaucrats deny granny her medication or access to an intensive-care unit solely because she’s old, or rule that Billy can’t get his conditions treated because he is disabled? Indeed, dread of rationing – as well as a healthy dose of old-fashioned fear-mongering by crafty politicians – is what inspired the ‘death panels’ meme, a canard based upon a misinterpretation of a proposed federal rule for Medicare. Nevertheless, the concept of rationing is still of concern because it implies restriction of a resource that could be beneficial.
Therefore, rationing doesn’t apply to interventions that can’t help anyone at any time – for instance, antibacterial antibiotics that won’t work because the patient has a viral infection. A better example of true rationing is the allocation of organs – such as livers, hearts and lungs – for transplantation. Organ transplantation requires rationing because the supply never keeps up with demand. We also ration drugs that can suddenly become scarce (a distressingly common problem).
But there are other forms of rationing that are problematic, too. The most common one, intrinsic to the US healthcare system, involves limiting the kind and amount of healthcare one can obtain based on one’s financial situation. Poorer people get less, and worse, healthcare than wealthy people. While the most offensive aspects of this arrangement have been mitigated to some extent in those states that expanded Medicaid under the auspices of the Affordable Care Act, there are still alarming numbers of Americans who have limited access to effective medical care. This is one of the chief reasons why the US population as a whole doesn’t get as much bang for its buck as the citizens of many other nations do, and this form of rationing is blatantly unfair.
But there is another form of rationing that is more insidious still. This is so-called bedside rationing, in which doctors decide, patient by patient, what should be available to each one, regardless of the range of services that insurance or finances might otherwise allow. The problem is that such decisions are readily susceptible to prejudice and discrimination, both overt and hidden. It is well known that doctors, like pretty much everyone else, harbour so-called implicit biases that are readily revealed by the implicit-association test (available online).
This does not mean that physicians express overt sexism, racism or other forms of bigotry – but rather that these unconscious beliefs about others can influence the kinds of treatments they offer. Thus, bedside rationing can violate one of the cardinal principles of fairness – that clinically similar situations be treated similarly. Doctors could offer one patient (say, a well-off white person) with unstable angina and blocked coronary arteries the standard of care – cardiac catheterisation and stents – while offering only medical therapy to an African-American patient with comparable disease. And there is ample evidence that such differential treatment occurs.
So how does one ‘choose wisely’ and escape the moral pitfalls of bedside rationing? It turns out that this is extraordinarily difficult to do, especially in a system such as ours, where physicians have such discretionary power over which diagnostic and treatment interventions appear on the ‘menu’ for each patient. This can readily lead to both too much and too little being offered to patients, for reasons that cannot be easily justified.
I think that the solution, at least in the US, might require a wholesale re-engineering of our healthcare system to minimise the financial incentives to overprescribe, and to protect or immunise against the biases that lead to inappropriate rationing at the bedside. The only way to reduce the frequency of these behaviours is to have a single-payer system that controls (to a certain extent) the availability of certain interventions, analogous to the way in which the organ-transplant system regulates who gets transplanted and under what circumstances.
Of course, unlike livers and hearts, what needs to be rationed in the US is money and what it can buy. We could save money through efficiencies of scale and by cutting the waste and administrative costs that account for at least 25 per cent of what we now spend. Can we totally eliminate ‘bad’ rationing? No, of course not. But Americans should do all they can to avoid the moral tragedy of being the wealthiest nation on Earth that chooses dumbly, not wisely, about healthcare.
But the day before, even more people – 43 – were shot to death in cities and towns around the country. And nobody really seemed to notice.
Shootings kill more than 36,000 Americans each year. Every day, gun violence causes some 90 deaths and 200 injuries. Unlike terrorist acts, the everyday gun violence that afflicts our communities is accepted as a way of life.
As public health scholars who study firearm violence, we believe that our country is unique in its acceptance of gun violence. Although death by firearms in America is a public health crisis, it is a crisis that legislators accept as a societal norm. Some have suggested that this acceptance persists because the predominant victims are black, not white – and our data support this striking disparity.
Urban and racial disparities
Within the United States, the odds of dying from firearm homicide are much higher for Americans who reside in cities. Twenty percent of all firearm homicides in the U.S. occur in the country’s 25 largest cities, even though they contain just over one-tenth of the U.S. population. Data from the Centers for Disease Control and Prevention show that of the 12,979 firearm homicides in 2015, 81 percent occurred in urban areas.
There is even more to the story: CDC data also show that within our nation’s cities, black Americans are, on average, eight times more likely to be killed by firearms than those who are white. The rate of death by gun homicide for black people exceeds that among whites in all 50 states, but there is tremendous variation in the magnitude of this disparity. In 2015, a black person living in Wisconsin was 26 times more likely to be fatally shot than a white person in that state. At the same time, a black person in Arizona was “only” 3.2 times more likely than a white person to be killed by a gun. The combination of being black and living in an urban area is even more deadly. In 2015, the black homicide rate for urban areas in Missouri was higher than the total death rate from any cause in New York state.
These differences across states occur primarily because the gap between levels of disadvantage among white and black Americans differs sharply by state. For example, Wisconsin – the state with the highest disparity between black and white firearm homicide rates – has the second-highest gap of any state between black and white incarceration rates, and the second-highest gap between black and white unemployment rates. Racial disparities in advantage translate into racial disparities in firearm violence victimization.
Americans are 128 times more likely to be killed in everyday gun violence than by any act of international terrorism. And a black person living in an urban area is almost 500 times more likely to be killed by everyday gun violence than by terrorism. From a public health perspective, efforts to combat firearm violence need to be every bit as strong as those to fight terrorism.
The first step in treating the epidemic of firearm violence is declaring that the everyday gun violence that is devastating the nation is unacceptable. Mass shootings and terrorist attacks should not be the only incidents of violence that awaken Americans to the threats to our freedom and spur politicians to action.
I’ve always been fascinated by storms, particularly Puerto Rico’s own history of them. I think it’s because I was born in September 1960 during Hurricane Donna. In its wake, that storm left more than 100 dead in Humacao, the city where I am now a special collections librarian at the University of Puerto Rico.
In 1990, Israel Matos, the National Weather Service Forecast Officer in San Juan, told me, “The tropics are unpredictable.” That comment only increased my interest in storms. Now, with the people of Puerto Rico still reeling from Hurricane Maria more than a month after it hit the island, his words seem prescient.
Today I have – if not the honor, then the duty – to describe, firsthand, what it is to live through the aftermath of the worst storm of this brutal hurricane season.
Since the storm I haven’t been able to go to work at the library on the Humacao campus. At 88,000 square feet and three stories, the biblioteca is the biggest building on campus, and it’s among the worst damaged by Maria.
It’s mold-infested and the roof is leaking, so there’s a lot of work to be done in both repairs and cleaning before students can use it. The mold has gotten into our collection – from books and papers to magazines – and most of the furniture and computers will have to be replaced.
According to the general damage report for the University of Puerto Rico, the infrastructure in all 11 campuses of the university system suffered severe losses.
The Humacao campus, located on the island’s eastern side, was the hardest hit, with damages calculated at more than US$35 million. Classes will start again on Oct. 31.
Five weeks after Hurricane Maria, all the campuses have now reissued their academic calendars and classes are resuming, though in some places the first semester will run through January to make up for lost time.
A culture of catastrophe
Starting on Sept. 20, 2017, Hurricane Maria swamped Puerto Rico with 20 inches of rain and battered it with 150 mph winds for over 30 hours.
The resulting humanitarian crisis has been widely reported worldwide: 80 percent of the island is still without electricity and there is not enough drinking water.
Communications – radio, television, telephones and internet – are now recovering slowly, after weeks of near nonexistence. Even so, it took me more than two weeks just to write this article, between finding somewhere to charge my laptop and locating an internet connection strong enough to research the data and send a file by email. Eventually I discovered a Starbucks near my house with both electricity and Wi-Fi. Nothing is easy.
What outsiders are unable to see, perhaps, is that an entire culture has arisen around the catastrophe caused by Hurricane Maria – one with typically catastrophic traits: material scarcity, emotional trauma, economic catastrophe, environmental devastation.
Puerto Ricans are now facing a dramatically different way of life, which means our relatives and friends in the diaspora are, too.
Nothing about life resembles anything close to normal. An estimated 100,000 homes and buildings were demolished by the storm, and 90 percent of the island’s infrastructure is damaged or destroyed. Not only are there shortages of water and electricity but also of food, highways, bridges, security forces and medical facilities.
It’s dangerous to venture outside at night. An island-wide curfew was lifted last week, but without streetlights, stoplights or police, driving and walking are dangerous after dark.
The official tally of missing people varies, with police counts currently ranging from 60 to 80. Considering Puerto Rico’s hazardous conditions and limited health care services, that number is sure to rise. We are well aware that epidemic diseases, including leptospirosis and cholera, could come next. Health concerns are further stoked by the delays and disarray of the various federal agencies tasked with handling this emergency.
A deep uncertainty looms over our futures. There is post-traumatic stress involved in surviving an overwhelming situation like this, and as a people we’re now waking up to that psychological pain, too.
The outlook from here
In short, Hurricane Maria has changed the modern history of Puerto Rico. For those who, like me, are curious about such things, the last storm of this caliber was San Felipe II, in 1928.
Known in the U.S. as the Great Okeechobee Hurricane, that massive storm was so destructive that it basically plunged Puerto Rico and Florida into the Great Depression a year before the rest of the country.
In some ways, though, Puerto Ricans are well prepared for these challenges, for the history of the island is one of uncertainty and trouble.
Puerto Rico has never had a sovereign government. Instead, it has always been bound to some other larger and more powerful state. First it was Spain, which colonized our territory in 1508.
Then, since the 1898 invasion, it’s been the United States, a country with which Puerto Rico enjoys a tricky political relationship. That’s very clear right now, as the Trump administration wavers in coming to our aid.
Mired in uncertainty
Even before the hurricane arrived, Puerto Rico was facing uncertainty around another major challenge: bankruptcy. Considering lost pensions, jobs and savings, the real financial costs surely exceed by billions the official sum of $123 billion in unpaid government debt.
Hurricane Maria has deepened this economic crisis, creating a ripple effect that touches everyone across all levels of society.
Everyone is mired in uncertainty. What is the solution to this cascading set of problems? How long will recovery take? What could actually make life better for us? What will we miss? Will anything ever be the same?
The sky is more visible now. Houses once hidden are exposed, and we discern entire communities that we rarely saw before.
There’s graffiti popping up across the island, written by someone identified as “JC,” who reminds Puerto Ricans, as a kind of consolation, that “Behind the trees live a lot of people.”
Just as new environments are created in areas opened up by the hurricane, with trees and plants sprouting afresh, over time we’ll find that our current uncertainties also fade and transform. A brand new way of life is emerging among all Puerto Ricans – those who stayed, those who left, their relatives and their friends.
In the face of these threats, which Marvel superhero might be best equipped to defend the people, ideals and institutions under attack? Some comic fans and critics are pointing to Kamala Khan, the new Ms. Marvel.
Khan, the brainchild of comic writer G. Willow Wilson and editor Sana Amanat, is a revamp of the classic Ms. Marvel character (originally named Carol Danvers and created in 1968). First introduced in early 2014, Khan is a Muslim, Pakistani-American teenager who fights crime in Jersey City and occasionally teams up with the Avengers.
Since Donald Trump’s inauguration, fans have created images of Khan tearing up a photo of the president, punching him (evoking a famous 1941 cover of Captain America punching Hitler) and grieving in her room. But the new Ms. Marvel’s significance extends beyond symbolism.
In Kamala Khan, Wilson and Amanat have created a superhero whose patriotism and contributions to Jersey City emerge because of her Muslim heritage, not despite it. She challenges the assumptions many Americans have about Muslims and is a radical departure from how the media tend to depict Muslim-Americans. She shows how Muslim-Americans and immigrants are not forces that threaten communities – as some would argue – but are people who can strengthen and preserve them.
After inhaling a mysterious gas, Kamala Khan discovers she can stretch, enlarge, shrink and otherwise manipulate her body. Like many superheroes, she chooses to keep her identity a secret. She selects the Ms. Marvel moniker in homage to the first Ms. Marvel, Carol Danvers, who has since given up the name in favor of becoming Captain Marvel. Khan cites her family’s safety and her desire to lead a normal life, while also fearing that “the NSA will wiretap our mosque or something.”
As she wrestles with her newfound powers, her parents grow concerned about broken curfews and send her to the local imam for counseling. Rather than reinforcing her parents’ curfew or prying the truth from Khan, though, Sheikh Abdullah says, “I am asking you for something more difficult. If you insist on pursuing this thing you will not tell me about, do it with the qualities befitting an upright young woman: courage, strength, honesty, compassion and self-respect.”
Her experience at the mosque becomes an important step on her journey to superheroism. Sheikh Abdullah contributes to her education, as does Wolverine. Islam is not a restrictive force in her story. Instead, the religion models for Khan many of the traits she needs in order to become an effective superhero. When her mother learns the truth about why her daughter is sneaking out, she “thank[s] God for having raised a righteous child.”
The comics paint an accurate portrait of Jersey City. Her brother Aamir is a committed Salafi (a conservative and sometimes controversial branch of Sunni Islam) and member of his university’s Muslim Student Association. Her best friend and occasional love interest, Bruno, works at a corner store and comes from Italian roots. The city’s diversity helps Kamala as she learns to be a more effective superhero. But it also rescues her from being a stand-in for all Muslim-American or Jersey City experiences.
Fighting a ‘war on terror culture’
Kamala’s brown skin and costume – self-fashioned from an old burkini – point to Marvel Comics’ desire to diversify its roster of superheroes (as well as writers and artists). As creator Sana Amanat explained on “Late Night With Seth Meyers” last month, representation is a powerful thing, especially in comics. It matters when readers who feel marginalized can see people like themselves performing heroic acts.
As one of 3.3 million Muslim-Americans, Khan flips the script on what Moustafa Bayoumi, author of “This Muslim American Life,” calls a “war on terror culture” that sees Muslim-Americans “not as complex human being[s] but only as purveyor[s] of possible future violence.”
Bayoumi’s book echoes other studies that detail the heightened suspicion and racial profiling Muslim-Americans have faced since 9/11, whether it’s in the workplace or interactions with the police. Each time there’s been a high-profile terrorist attack, these experiences, coupled with hate crimes and speech, intensify. Political rhetoric – like Donald Trump’s proposal to have a Muslim registry or his lie that thousands of Muslims cheered from Jersey City rooftops after the Twin Towers fell – only fans the flames.
Scholars of media psychology see this suspicion fostered, in part, by negative representations of Muslims in both news media outlets and popular culture, where they are depicted as bloodthirsty terrorists or slavish informants to a non-Muslim hero.
These stereotypes are so entrenched that a single positive Muslim character cannot counteract their effects. In fact, some point to the dangers of “balanced” representations, arguing that confronting stereotypes with wholly positive images only enforces a simplistic division between “good” and “bad” Muslims.
Kamala Khan, however, signals an important development in cultural representations of Muslim-Americans. It’s not just because she is a powerful superhero instead of a terrorist. It’s because she is, at the same time, a clumsy teenager who makes a mountain of mistakes while trying to balance her abilities, school, friends and family. And it’s because Wilson surrounds Kamala with a diverse assortment of characters who demonstrate the array of heroic (and not-so-heroic) actions people can take.
For example, in one of Ms. Marvel’s most powerful narrative arcs, a planet attacks New York, leading to destruction eerily reminiscent of 9/11. Kamala works to protect Jersey City while realizing that her world has changed – and will change – irrevocably.
Carol Danvers appears to fill Kamala in on the gravity of the situation, telling her, “The fate of the world is out of your hands. It always was. But your fate – what you decide to do right now – is still up to you … Today is the day you stand up.” Kamala connects the talk with Sheikh Abdullah’s lectures about the value of one’s deeds, once again linking her superhero and religious training to rise to the occasion. In both cases, the lectures teach Kamala to take a stand to protect her community.
Arriving at the high school gym now serving as a safe haven for Jersey City residents, Kamala realizes her friends and classmates have been inspired by her heroism. They safely transport their neighbors to the gym while outfitting the space with water, food, dance parties and even a “non-denominational, non-judgmental prayer area.” The community response prompts Kamala to realize that “even if things are profoundly not okay, at least we’re not okay together. And even if we don’t always get along, we’re still connected by something you can’t break. Something there isn’t even a word for. Something … beautiful.”
Kamala Khan is precisely the hero America needs today, but not because of a Bat-Signal in the sky or any single definitive image. She is, above all, committed to the idea that every member of her faith, her generation, and her city has value and that their lives should be respected and protected. She demonstrates that the most heroic action is to face even the most despair-inducing challenges of the world head-on while standing up for – and empowering – every vulnerable neighbor, classmate or stranger. She shows us how diverse representation can transform into action and organization that connect whole communities “by something you can’t break.”
I’m an anthropologist who grew up in Japan and has lived there, off and on, for 22 years. Yet every visit to Tokyo’s Harajuku District still surprises me. In the eye-catching styles modeled by fashion-conscious young adults, there’s a kind of street theater, with crowded alleyways serving as catwalks for teenagers peacocking colorful, inventive outfits.
Boutiques are filled with cosmetics and beauty products intended for both males and females, and it’s often difficult to discern the gender of passersby. Since a gendered appearance (“feminine” or “masculine”) often (but not always) denotes the sex of a person, Japan’s recent “genderless” fashion styles might confuse some visitors – was that person who just walked by a woman or a man?
Although the gender-bending look appeals equally to young Japanese women and men, the media have tended to focus on the young men who wear makeup, color and coif their hair and model androgynous outfits. In interviews, these genderless males insist that they are neither trying to pass as women nor are they (necessarily) gay.
Some who document today’s genderless look in Japan treat it as if it were an entirely contemporary phenomenon, conveniently ignoring Japan’s long history of blurred sexualities and gender-bending practices.
Sex without sexuality
In premodern Japan, aristocrats often pursued male and female lovers; their sexual trysts were the stuff of classical literature. To them, the biological sex of those they pursued was often less important than the objective: transcendent beauty. And while many samurai and shoguns had a primary wife for the purposes of procreation and political alliances, they enjoyed numerous liaisons with younger male lovers.
Only after the formation of a modern army in the late-19th century were the sort of same-sex acts central to the samurai ethos discouraged. For a decade, from 1872 to 1882, sodomy among men was even criminalized. However, since then, there have been no laws in Japan banning homosexual relations.
It’s important to note that, until very recently, sexual acts in Japan were not linked to sexual identity. In other words, men who had sex with men and women who had sex with women did not consider themselves gay or lesbian. Sexual orientation was neither political nor politicized in Japan until recently, when a gay identity emerged in the context of HIV/AIDS activism in the 1990s. Today, there are annual gay pride parades in major cities like Tokyo and Osaka.
In Japan, same-sex relations among children and adolescents have long been thought of as a normal phase of development, even today. From a cultural standpoint, it’s frowned upon only when it interferes with marriage and preserving a family’s lineage. For this reason, many people will have same-sex relationships while they’re young, then get married and have kids. And some even later resume having same-sex relationships after fulfilling these social obligations.
Like same-sex relationships, cross-dressing has a long history in Japan. The earliest written records date to the eighth century and include stories about women who dressed as warriors. In premodern Japan, there were also cases of women passing as men either to reject the prescribed confines of femininity or to find employment in trades dominated by men.
A century ago, “modern girls” (moga) were young women who sported short hair and trousers. They attracted media attention – mostly negative – although artists depicted them as fashion icons. Some hecklers called them “garçons” (garuson), an insult implying they were unfeminine and unattractive.
Gender, at that time, was thought of in zero-sum terms: If females were becoming more masculine, it meant that males were becoming feminized.
These concerns made their way into the theater. For example, the all-female Takarazuka Revue was an avant-garde theater founded in 1913 (and is still very popular today). Females play the parts of men, which, in the early 20th century, sparked heated debates (that continue today) about “masculinized” women on stage – and how this might influence women off the stage.
However, today’s genderless males aren’t simply weekend cross-dressers. Instead, they want to shatter the existing norms that say men must dress and present themselves a certain way.
They ask: Why should only girls and women be able to wear skirts and dresses? Why should only women be able to wear lipstick and eye shadow? If women can wear pants, why shouldn’t men be able to wear skirts?
Actually, the adjective “genderless” is misleading, since these young men aren’t genderless at all; rather, they’re claiming both femininity and masculinity as styles they wear in their daily lives.
In this regard, these so-called genderless men have historical counterparts: In the late 19th and early 20th centuries, cosmopolitan “high collar” men (haikara) wore facial powder and carried scented handkerchiefs, paying meticulous attention to their Westernized appearances. One critic – invoking the zero-sum gender attitudes of the era – complained that “some men toil over their makeup more than women.” Conservative pundits derided the haikara as “effeminate” by virtue of their “un-Japanese” style.
On the other end of the masculinity spectrum were the nationalistic “primitive” men (bankara) who wore wooden clogs (geta) to complement their military-style school uniforms. Ironically, like their samurai predecessors – and unlike the foppish haikara – the macho bankara would engage in same-sex acts.
Japan’s ‘beautiful youths’
Probably the biggest contemporary inspiration for today’s genderless males is a spate of popular androgynous boy bands. Cultivated and promoted by Johnny & Associates Entertainment Company, Japan’s largest male talent agency, they include boy bands like SMAP, Johnny’s West and Sexy Zone.
There’s a term for the type of teenage boy that Johnny & Associates cultivates: “beautiful youths” (bishōnen), which was coined a century ago to describe a young man whose ambiguous gender and sexual orientation appealed to females and males of all ages.
Similarly, Visual Kei is a 1980s glam-rock and punk music genre that features bishōnen performers who don flamboyant, gender-bending costumes and hairdos. In its new, 21st-century incarnation as Neo-Visual Kei, the emphasis on androgyny is even more pronounced, as epitomized by the prolific career of the androgynous Neo-Visual Kei pop star Gackt, who enjoys an international fan following.
Since the word “genderless” is misleading, a better term might be “gender-more,” in the sense that young men – especially in Tokyo – are insisting on the right to present and express themselves in ways that contradict and exceed traditional masculinity. In the long span of Japanese cultural history, there have been many things that were – and are – new under the sun. But genderless males aren’t among them.
On a damp October day in 2006, I followed Kazuo Ishiguro and my 10-year-old daughter Grace to a back table at a bustling cafe in London for an interview. As Ishiguro answered my questions, he explained how he “auditions” his characters’ voices and personalities in his head before they appear in his fiction. He spoke candidly about a writer’s messy work.
Now he is the Nobel laureate in literature, recognized for what the Swedish Academy praised as his unapologetic portrayals of “the abyss beneath our illusory sense of connection with the world.”
It’s a nod to the self-delusion that many of Ishiguro’s characters possess. One, for example, rationalizes his service to a fascist loyalist. Others see their past through the cloudy lens of trauma. If we were to peel back the warped self-deception, we might find a bottomless pit of despair.
At that interview years ago, Ishiguro talked about his characters’ painful chasms, the way they protected themselves by concealing their mistakes. But when everything seems hopeless, his characters often courageously turn to their imagination to forge a connection to life and meaning.
In doing so, they beckon readers to imagine something better, too.
When I asked Ishiguro about his 2005 dystopic novel “Never Let Me Go,” his tone shifted. He lowered his voice when he told me about the students in that novel, and how they eventually perish. But he was surprised when I said that I found the novel sorrowful.
“There is an inevitable sadness,” he admitted. “On the other hand, it’s not a bleak view of human nature.”
I could sense Ishiguro’s concern for how my daughter might take his observations about death and despair.
He continued: “The question, ‘What are we useful for?’ is the question that your daughter Grace asks, and one Tommy and Kathy ask in ‘Never Let Me Go.’ Some cold system says to Tommy and Kathy that they will be useful [to the world], and it’s the same as another system saying to Grace that someday she will be useful to the world economy.”
Human systems figure in all of Ishiguro’s novels, whether these are governments, communities or families. Often, these systems are damaged, and humans still must move through them. They try to repair them or save themselves. Ishiguro has examined many facets of what it means to live among and within countless systems.
The first-person narrators of Ishiguro’s first three novels, “A Pale View of Hills,” “An Artist of the Floating World” and “The Remains of the Day,” reflect on personal losses in the context of world events: friends and families dead from atomic bombings in Japan, unrealized romances, wrong choices and lives founded on delusion. These characters long for clarity, retribution or forgiveness.
The narrators of his next three novels are, variously, a pianist (“The Unconsoled”), a London detective (“When We Were Orphans”) and a roving hospice-type worker (“Never Let Me Go”). Whether they’re situated in Japan, Great Britain, some unnamed European city or even a medieval village, Ishiguro’s characters beguile his readers with their disclosures. His eloquent prose expresses their anguish or their repressed longings. We sense time passing darkly for these characters. We see how they face disappointments and ache for dignity.
Ishiguro explained that to probe the emotional force of his novels, we must understand that the characters are set within “an internal world [and] it’s an emotional logic that is being played out.”
In narrating their sorrows and their fruitless optimism, Ishiguro gives his readers a way to empathize with his characters’ situations.
Ishiguro’s capacity for compassion was cultivated during his university gap year, when he worked with the homeless. He also studied piano and guitar and dreamed of a career in music before he detoured to the creative writing program at the University of East Anglia. He still writes musical lyrics and works with musicians as an avocation.
By his own admission, Ishiguro is a slow writer; he produces a novel every few years. In 2015, when he came to Denver’s Lighthouse Writers Workshop to promote his latest novel, I was able to catch up with him. He remarked that he may have only a couple more books forthcoming.
“We’re not immortal,” he said. “We’re here for a limited time. There is a countdown.”
The Swedish Academy honors a laureate for a lifetime of achievement. To date, Ishiguro has published eight books as well as many short stories, television and film scripts. His career may seem disjointed when focusing on only the best-known novels, “The Remains of the Day” and “Never Let Me Go.”
But few contemporary authors have dared to take as many risks as Ishiguro. The more complicated, Kafka-esque novel “The Unconsoled” is a book some critics called disappointing. A different sort of writer might have quit, but Ishiguro persisted.
Similarly, even though some readers responded coolly to “The Buried Giant,” Ishiguro had taken yet another literary leap: The highly metaphorical story is set in an early English era that predated historical records. Memory, repression of pain and the resolve to protect oneself and loved ones return as themes, but in unusual, allegorical ways.
Each novel is a singular achievement; each successive undertaking enriches a broader canvas of Ishiguro’s portraits of alienated lives.
During that 2006 London interview, I watched Ishiguro banter with my daughter during a break. They were laughing about what it means to “snarf” food, and they were picking up some biscuits and spooning melted ice cream to demonstrate. Ishiguro’s ease and humor when speaking with my child captivated me.
In spite of the sadness in his books, Ishiguro is a gracious guardian of humanity. He is a fine curator of emotions and a skilled storyteller.
We don’t know how many more books Ishiguro will publish. But we can be certain that in his literary explorations, he will remain undaunted.
As an Argentinean woman who studies gender in the media, I find it hard to be surprised by Weinstein’s misdeeds. Machismo remains deeply ingrained in Latin American society, yes, but even female political leaders in supposedly gender-equal paradises like Holland and Sweden have told me that they are criticized more in the press and held to a higher standard than their male counterparts.
How could they not be? Across the world, the film and TV industry – Weinstein’s domain – continues to foist outdated gender roles upon viewers.
Television commercials are particularly guilty, frequently casting women in subservient domestic roles.
In one commercial for the cleaning brand Cif, a princess eager to receive her prince remembers that – gasp – the floors in her castle tower are a total mess. Thanks to Cif’s magic scouring fluid, she has time not only to clean but also to get dolled up for the prince – who, in case you were wondering, has no physical challenges preventing him from helping her tidy up.
But why should he, when it’s a woman’s job to be both housekeeper and pretty princess?
Somewhat paradoxically, advertisements may also cast men as domestic superheroes. Often, characters like Mr. Muscle will mansplain to women about the best product and how to use it – though they don’t actually do any cleaning themselves.
More recently, there’s been a shift – perhaps an awkward attempt at political correctness – in which women are still the masters of the home, but their partners are shown “helping out” with the chores. In exchange, the men earn sex object status.
We’ve come a little way, baby
Various studies on gender stereotypes in commercials indicate that although the advertising industry is slowly changing for the better, marketing continues to target specific products to certain customers based on traditional gender roles.
This year, U.N. Women teamed up with Unilever and other industry leaders like Facebook, Google, Mars and Microsoft to launch the Unstereotype Alliance. The aim of this global campaign is to end stereotypical and sexist portrayals of gender in advertising.
As part of the #Unstereotype campaign, Unilever also undertook research on gender in advertising. It found that only 3 percent of advertising shows women as leaders and just 2 percent portrays them as intelligent. In ads, women come off as interesting people just 1 percent of the time.
Britain paves a path
Even before it was forced to reckon with allegations that Harvey Weinstein had also harassed women in London, the United Kingdom was making political progress on the issue of women’s portrayal in the media.
In July, the United Kingdom’s Advertising Standards Authority announced that the U.K. will soon prohibit commercials that promote gender stereotypes.
“While advertising is only one of many factors that contribute to unequal gender outcomes,” its press release stated, “tougher advertising standards can play an important role in tackling inequalities and improving outcomes for individuals, the economy and society as a whole.”
As of 2018, the agency says, advertisements that show women as solely responsible for household cleaning, or men as useless around kitchen appliances and unable to care for their children and dependents, will not pass muster in the U.K. Commercials that differentiate between girls’ and boys’ toys based on gender stereotypes will be banned as well.
The U.K.‘s move is a heartening public recognition that gender stereotypes in the media both reflect and further the very real inequalities women face at home and at work.
Worldwide, the International Labor Organization reports, women still bear the burden of household chores and caretaking responsibilities, which often either excludes them from paid work or relegates them to ill-paid part-time jobs.
In the U.K., men spend on average 16 hours per week on domestic tasks, while women spend 26. The European Union average is worse, with women dedicating an average of 26 weekly hours to men’s nine hours on caretaking and household tasks.
In Argentina, my home country, fully 40 percent of men report doing no household work at all, even if they’re unemployed. Among those who do pitch in, it’s 24 hours a week on caretaking and domestic chores for men. Argentinean women put in 45 hours.
You can do the math: 45 hours a week comes to nearly two full days of every week and, over 52 weeks, some 2,340 hours – roughly 100 round-the-clock days, or more than a quarter of the year – spent on unpaid household labor.
These inequalities, combined with advertising that reinforces them, generate what’s called the “sticky floors” problem. Women – whether would-be investment bankers or, I dare say, aspiring Hollywood stars – don’t just face glass ceilings to advancement; they are also “stuck” to domestic life by endless chores.
The cultural powers that be produce content that represents private spaces as “naturally” imbued with female qualities, gluing women to traditional caregiving roles.
This hampers their professional development and helps keep them at the bottom of the economic pyramid, because women must pull off a balancing act between their jobs inside and outside the domestic sphere. And they must excel at both, all while competing against male colleagues who likely confront no such challenges.
Former U.S. president Barack Obama once pointed out this double standard in homage to his then-competitor Hillary Clinton. She, he reminded an audience in 2008, “was doing everything I was doing, but just like Ginger Rogers, it was backwards in heels.”
The sticky floor problem puts women in a position to be exploited by men like Weinstein, who tout their ability to help female aspirants get unstuck. Until society – and, with it, the media we create – comprehends that neither professional success nor domesticity has a gender, these pernicious power dynamics will endure.