Harvard Business Review
Amid the political uncertainties that continue to cloud the future of U.S. health care, one thing hasn’t changed: Patients, clinicians, health plans, payers, and policy makers are still striving to achieve better outcomes at lower costs. Given the heavy financial burden that health care is imposing on the country, the top priority should be fundamentally changing the way we care for high-cost, high-risk patients. Redesigning their care is a major way we can improve lives and sustainably reduce overall health care costs.
Almost half of the nation’s health care spending is driven by the top 5% of the population with the highest spending, while the top 1% account for more than 20% of total health care costs. These high-cost patients bear the highest economic risk because they bear the highest clinical risk. They are our sickest and most injured. They are often at the end of their lives, facing frequent hospitalizations for multiple chronic conditions. They are often in intensive care units with ill-defined goals of care. They may undergo tests and procedures that are designed to better characterize and manage complications of chronic conditions rather than prevent or treat them.
To build a better model of care, physicians and leaders of care organizations must begin by asking where we should care for these patients, how we should care for them, and, ultimately, why we care for them.
Where. The good news is that there have been declines in readmission rates under the Hospital Readmissions Reduction Program, which was established under the Affordable Care Act to financially penalize hospitals with higher-than-expected readmission rates for Medicare beneficiaries with select conditions. Nevertheless, hospital spending continues to increase due to the severity of illnesses and prices. In 2014 the Medicare fee-for-service program spent $173 billion on 9.7 million Medicare hospitalizations.
In addition to representing a key driver of health care costs in the United States, these hospitalizations are associated with risk for our patients. Each hospitalization depletes physiologic reserves, leaving patients at a higher risk of early readmission. Each additional readmission is associated with a higher risk of mortality.
Many hospitalizations may be preventable for patients with chronic diseases, particularly with timely access to alternative venues of care that can act quickly to address the signs and symptoms of decompensation. The Ambulatory Cardiac Triage, Intervention, and Education Unit at Brigham and Women’s Hospital and the Duke Heart Failure Same-Day Access Clinic are two examples of how on-demand access to intensified outpatient management can decentralize care from the hospital, thereby improving outcomes and lowering costs through fewer hospitalizations.
The CareMore Health System, of which we are both leaders, has pioneered early intensive chronic-disease management in neighborhood-based care centers, along with the use of "extensivists": physicians who follow high-risk patients across multiple settings (hospitals, skilled nursing facilities, and post-discharge clinics) to ensure appropriate continuity of care.
How. Novel outpatient alternatives are only effective if they are accessible. Transportation barriers are common among patients with chronic diseases and represent a significant impediment to improving patient outcomes. Among patients receiving dialysis, for example, those who rely on a transportation service are at an increased risk of missing hemodialysis treatments. Intensive chronic disease management programs only work when patients can get to their appointments in a timely manner. Seeking to further improve our patients’ ability to access preventive and chronic care services, CareMore has partnered with Lyft and National MedTrans to test digital approaches to nonemergency medical transportation. The pilot program has shown lower wait times, lower costs, and higher patient satisfaction.
Better transportation to better outpatient care models may lead to better outcomes. But we also need better means of identifying at-risk patients. There is significant potential to triage patients with subclinical and preclinical decompensation of chronic diseases through high-value approaches to remote monitoring. But the value of these technologies remains poorly understood.
The relentless pace of technology has produced numerous products and services to monitor patients at home, but few have been tightly integrated with alternative care models. One exception may be CareMore’s Neighborhood Care Centers, where a team not only delivers the care but also remotely monitors data collected by patients in their homes to determine when that care might be needed for conditions such as heart failure, diabetes, and hypertension. The care team, which has gained the trust of its patients through longstanding relationships, can intervene when it spots worrisome trends through remote monitoring by either modifying a patient’s self-care regimen or scheduling an on-demand appointment for in-person evaluation and multidisciplinary management in a Neighborhood Care Center.
Without the widespread use of such care models that manage patients in the community rather than in the hospital, the use of remote monitoring technology may paradoxically produce earlier and higher rates of emergency department use and hospitalization. As a result, the evidence base to date remains mixed as to whether remote monitoring improves the quality of care.
In the traditional fee-for-service reimbursement model, it’s often difficult to demonstrate that these technologies are justifiable on a return-on-investment basis. Remote monitoring in the context of alternative payment models, such as fully capitated medical groups or health systems, can improve the value of care through increasing quality, lowering costs, or both — because the cost savings produced by these technologies are accrued to the organizations that invest in and implement them. The population-based payment models of Medicare Advantage and Medicaid managed care are examples of a reimbursement environment in which proven technologies could be widely adopted. Capitated payments enable organizations like CareMore to invest in the care management capabilities and digital infrastructure needed to validate and implement technologies that can help patients stay out of the hospital.
Why. In uncertain and changing times, we physicians often ask ourselves why we entered medicine in the first place. Ever-growing demands on physicians’ time for administrative, rather than patient-facing, tasks have led physicians to suffer more burnout than other workers in recent years. We often revisit our original motivations when our children and students ask us why we do what we do, but the most frequent and intense reminders occur at the patient’s bedside. We are reminded of the joy of delivering health care. It goes beyond simply meeting measures of health and survival to deliver and redesign care in a way that better meets the goals and needs of our individual patients.
Patients participating in Patient-Centered Outcomes Research Institute studies have identified being alive at home without hospitalization as a top goal. Redesigning our care delivery solutions with this patient-centered goal in mind can create a virtuous cycle in which better patient engagement leads to better outcomes at lower costs.
Changing the care paradigm is an opportunity to better serve the interests of our patients and the public good by thoughtfully challenging the status quo. The suboptimal value of legacy care models, especially for those highest-cost, highest-risk patients, and the potential for patient-centered care outside the hospital present a fertile opportunity if we have the will to develop, evaluate, and implement a new way of care.
In 2015 Doreetha Daniels received her associate degree in social sciences from College of the Canyons, in Santa Clarita, California. But Daniels wasn’t a typical student: She was 99 years old. In the COC press release about her graduation, Daniels indicated that she wanted to get her degree simply to better herself; her six years of school during that pursuit were a testament to her will, determination, and commitment to learning.
Few of us will pursue college degrees as nonagenarians, or even as mid-career professionals (though recent statistics indicate that increasing numbers of people are pursuing college degrees at advanced ages). Some people never really liked school in the first place, whether because of sitting still at a desk for hours on end or suffering through what seemed to be impractical courses. And almost all of us have limits on our time and finances — due to kids, social organizations, work, and more — that make additional formal education impractical or impossible.
As we age, though, learning isn’t simply about earning degrees or attending storied institutions. Books, online courses, MOOCs, professional development programs, podcasts, and other resources have never been more abundant or accessible, making it easier than ever to make a habit of lifelong learning. Every day, each of us is offered the opportunity to pursue intellectual development in ways that are tailored to our learning style.
So why don’t more of us seize that opportunity? We know it’s worth the time, and yet we find it so hard to make the time. The next time you’re tempted to put learning on the back burner, remember a few points:
Educational investments are an economic imperative. The links between formal education and lifetime earnings are well-studied and substantial. In 2015 Christopher Tamborini, ChangHwan Kim, and Arthur Sakamoto found that, controlling for other factors, men and women can expect to earn $655,000 and $445,000 more, respectively, during their careers with a bachelor’s degree than with a high school degree, and graduate degrees yield further gains. Outside of universities, ongoing learning and skill development is essential to surviving economic and technological disruption. The Economist recently detailed the ways in which our rapidly shifting professional landscape — the disruptive power of automation, the increasing number of jobs requiring expertise in coding — necessitates that workers focus continually on mastering new technologies and skills. In 2014 a CBRE report estimated that 50% of jobs would be redundant by 2025 due to technological innovation. Even if that figure proves to be exaggerated, it’s intuitively true that the economic landscape of 2017 is evolving more rapidly than in the past. Trends including AI, robotics, and offshoring mean constant shifts in the nature of work. And navigating this ever-changing landscape requires continual learning and personal growth.
Learning is positive for health. As I’ve noted previously, reading, even for short periods of time, can dramatically reduce your stress levels. A recent report in Neurology noted that while cognitive activity can’t change the biology of Alzheimer’s, learning activities can help delay symptoms, preserving people’s quality of life. Other research indicates that learning to play a new instrument can offset cognitive decline, and learning difficult new skills in older age is associated with improved memory.
What’s more, while the causation is inconclusive, there’s a well-studied relationship between longevity and education. A 2006 paper by David Cutler and Adriana Lleras-Muney found that “the better educated have healthier behaviors along virtually every margin, although some of these behaviors may also reflect differential access to care.” Their research suggests that a year of formal education can add more than half a year to a person’s life span. Perhaps Doreetha Daniels, at 99, knows something many of us have missed.
Being open and curious has profound personal and professional benefits. While few studies validate this observation, I’ve noticed in my own interactions that those who dedicate themselves to learning and who exhibit curiosity are almost always happier and more socially and professionally engaging than those who don’t. I have a friend, Duncan, for example, who is almost universally admired by people he interacts with. There are many reasons for this admiration, but chief among them are his plainly exhibited intellectual curiosity and his ability to touch, if only briefly, on almost any topic of interest to others and to speak deeply on those he knows best. Think of the best conversationalist you know. Do they ask good questions? Are they well-informed? Now picture the colleague you most respect for their professional acumen. Do they seem literate, open-minded, and intellectually vibrant? Perhaps your experiences will differ, but if you’re like me, I suspect those you admire most, both personally and professionally, are those who seem most dedicated to learning and growth.
Our capacity for learning is a cornerstone of human flourishing and motivation. We are uniquely endowed with the capacity for learning, creation, and intellectual advancement. Have you ever sat in a quiet place and finished a great novel in one sitting? Do you remember the fulfillment you felt when you last settled into a difficult task — whether a math problem or a foreign language course — and found yourself making breakthrough progress? Have you ever worked with a team of friends or colleagues to master difficult material or create something new? These experiences can be electrifying. And even if education had no impact on health, prosperity, or social standing, it would be entirely worthwhile as an expression of what makes every person so special and unique.
The reasons to continue learning are many, and the weight of the evidence would indicate that lifelong learning isn’t simply an economic imperative but a social, emotional, and physical one as well. We live in an age of abundant opportunity for learning and development. Capturing that opportunity — maintaining our curiosity and intellectual humility — can be one of life’s most rewarding pursuits.
Although women make up 40% of the global workforce, they hold only 24% of senior management roles around the world — a figure that has not changed significantly over the past decade. Of chief executive officers of S&P 500 firms, only about 5% are women. Why aren’t more talented women moving up? Researchers have pointed to an array of reasons, from explicit discrimination to promotion processes that quietly favor men, but one of the most perplexing is that women themselves aren’t as likely as men to put themselves forward for leadership roles through promotions, job transfers, and high-profile assignments.
Women begin their careers with ambitions that are just as high as their male peers', but before long they scale back their goals and shy away from competing for these jobs. The reason, many assume, is that women are risk averse or lack confidence, or maybe that they have different career preferences than their male colleagues do. But our research suggests another reason.
We recently conducted a study of more than 10,000 senior executives who were competing for top management jobs in the UK. We found that women were indeed less likely than men to apply for these jobs, but here’s the interesting part: We found that women were much less likely to apply for a job if they had been rejected for a similar job in the past. Of course, men were also less likely to apply if they had been rejected, but the effect was much stronger for women — more than 1.5 times as strong.
The implications here are not trivial, because rejection is a routine part of corporate life. Employees regularly get rejected for promotions, job transfers, important project assignments, and so on. To reach the top of the organization, people need to keep playing the game, over and over again, even after repeated disappointments. So even small differences between how men and women respond to rejection could lead to big differences over time.
To investigate this effect further, we interviewed top women executives about their experiences in recruitment processes and found a common complaint: dissatisfaction and frustration with how those processes were managed. For example, the CFO of a biotech company recalled that she had been considered for a CEO position. After failing to get the job after many rounds of interviews, she had been left with the impression that she was asked to apply merely because she was female and the firm needed a woman on the shortlist — not because the company was serious about hiring her. This may or may not have been true, but that’s the impression she had, and as a result she said she would be unlikely to put herself through a similar process in the future.
This was not an isolated anecdote. We heard many similar stories in our interviews, and results from a survey and randomized experiment conducted with executives confirmed that female managers weren’t dropping out after being rejected because of risk aversion or a lack of confidence. It’s not that they didn’t think they were good enough; they were withdrawing from the corporate race because of concerns that they would not be valued or truly accepted at the highest levels in the organization. Often that feeling was a result of the way hiring and promotion processes were being managed (or mismanaged), sending women subtle (and sometimes overt) signals that the highest rungs of the corporate ladder were intended only for men.
In line with these findings, we discovered that women tend to place greater weight than men do on the fairness of the recruitment and selection processes. This is because fair treatment is interpreted by female managers as a signal that they belong and are accepted in the executive community. Moreover, women who are rejected tend to perceive their treatment as less fair than men do.
While this could sound to some like sour grapes (e.g., “I didn’t win, therefore the contest was rigged”), our findings suggest something subtler. Women’s decisions to remove themselves from competition after having been rejected are driven partly by their experience of being a negatively stereotyped minority in the executive labor market. Think about it — women executives were coming to the table with past experiences of being in the minority, and they may have been in situations in which they felt like outsiders or felt that their leadership ability wasn’t recognized. Because most men had generally not been subject to these same situations, men were less likely to take rejection as a signal that they did not belong in the corner offices, and therefore such disappointments had less of a negative impact on their willingness to apply again.
And, by the way, this same underlying mechanism should apply to any underrepresented group. In other words, what we found is not that there’s something unique about women; it’s that women are a minority, and minorities are often not perceived as legitimate leaders. Indeed, we would expect that men would behave in the same way in contexts where they were seen as illegitimate or outsiders.
These results have important managerial implications. For any company wanting to improve its gender diversity at the senior levels, the most important thing is to avoid the temptation to solely focus on encouraging more women to throw their hat into the ring. That approach misses the mark because it doesn’t address the underlying problem that female executives may feel that the company doesn’t truly believe that they belong in top management. This can be true whether or not the organization is actually contributing to that feeling. In fact, issuing blanket encouragements to women to apply for leadership positions could even backfire if it means the company ends up rejecting more women. Our research suggests this will make those women less likely to apply for similar jobs in the future, compounding the company’s gender problem.
Companies must take a hard look at their recruiting and promotion processes to assess whether they are indeed fair — and, just as important, whether those processes are perceived to be fair, especially by women and other minorities. A series of questions can help make that determination: Does the company have the right procedures in place to manage rejection in recruitment and promotion processes? For example, does it give appropriate feedback to candidates who are rejected? What signals is it sending to both men and women who are rejected? Companies need to look beyond recruitment and promotion and ask themselves whether they foster a sense of belonging and how they can ensure that underrepresented groups don’t feel overlooked or slighted. By addressing any deficiencies in the above, firms can begin to chip away at their glass ceilings. After all, when it comes to gender diversity, it’s not so much a matter of getting women to lean in; it’s more a matter of preventing them from leaning out.
President Trump has just appointed two men to head up his women in the workplace initiative. The reactions are predictable: How can men appropriately represent women?
But that is the typical misframing of the gender issue. Gender equality is not a “women’s issue” — it’s a huge political, economic, and social opportunity. It is a massive business issue that more than 75% of corporate CEOs currently put on their agenda of top 10 issues.
Research shows that gender balance happens in companies only if it is personally and forcefully led by the CEO. The reality is that many of the companies starting to look truly balanced are or were led by men. Successful gender balancing requires convincing the majority of your employees that it’s a good idea. Smart CEOs of male-dominated companies know that the real push on gender balance (especially in leadership) is getting leaders, most of whom are male, to own the accountability for balancing. And they know that the best person to convince them of this isn’t a woman. It’s one of their own.
So getting men to lead the charge is a smart choice, as more and more organizations are recognizing. But how can you tell whether a CEO is a good leader on gender? It’s pretty easy: See who’s on the company’s executive team. How many women are there? Are they in strategic roles, or staff functions?
Judging this way, are Trump’s choices, the CEOs of Walmart and EY, a good pick? Take a quick look at their top teams: EY has three women out of 17, most of them in staff jobs, such as HR. Walmart has five men who sit at the top of its pyramid and probably run the show, but its website lists another 35 people under “Executive Management.” Of the 35, 12 are women, with at least half of them again in staff jobs (legal, HR, communications), a classic strategy for first-stage balancers.
Many companies and CEOs are voicing support for gender issues. But while most companies compete to say how much they care, fewer actually deliver more than tokenism at the top. Companies have gotten much more skilled at what we call pinkwashing. They sound good on gender issues, their websites trumpet their dedication, and they sponsor great women’s conferences, but the core jobs on their leadership teams remain male-dominated. They may manage to get a few women up into the latest pink ghetto: the staff jobs of the executive team.
On gender issues, there are three kinds of companies today:
- The Progressive. Companies that are truly balanced, with a mix of genders in nonstereotypical roles on their leadership teams (especially across both line and staff responsibilities).
- The Pretending. Those that say all the right things, run a lot of women-branded initiatives, but still have women only in noncore P&L roles on their executive team. Better than nothing, some of you may be thinking. Perhaps, but it doesn’t promise much for the future.
- The Plodding. Those that ignore the issue completely and stick unapologetically to their all-male status quo.
The first category here is still rare, unfortunately. Even McKinsey & Co., which each year publishes convincing “Women Matter” reports on the importance of strategic, CEO-led gender initiatives that deliver consistent outperformance — reports that are widely cited in discussions about women in business — can muster only 11% women in its senior leadership.
Trump’s own cabinet sits somewhere between the last two categories. It makes a nod to a handful of women (four out of 17, the least balanced team in decades), but the posts they hold are clearly not seen as core functions in this administration. One of the characteristics of unconvinced leaders is that they tend not to pick very strong women, almost as though they don’t believe such women can be found. So Betsy DeVos, a major Republican donor, has proved to be the most imperiled cabinet pick, widely criticized as the most glaringly unqualified.
Pinkwashing is increasingly easy to identify. For example, no sooner had Audi aired its gender-equality-extolling Super Bowl ad than viewers were quick to point out that its board is 100% male and its executive team contains only two women (in the pink ghetto functions of HR and communications).
The organizations that really care enough to attain gender balance have been working at the issue for decades, and it shows, very publicly, in their boards and executive teams. The others are looking more transparently traditional. This transparency has been hard to measure until now; very few companies had managed any form of balance up to the top of their corporate pyramids. Gender statistics are a tightly guarded secret, because they can paint an embarrassing picture. In the competitive war for talent, companies that can prove their balancing success will become vastly more powerful magnets for top female talent. And since women now represent 55% of American university graduates, this is likely to be a huge lever for continued competitive advantage. This, of course, has long been predicted, but the next decade may finally separate the wheat from the chaff.
Careful observers will be able to distinguish which efforts are for show and which actually show conviction. Beware pinkwashing, but recognize and celebrate the men who push for the real thing. Their numbers are growing.
Esther is a well-liked manager of a small team. Kind and respectful, she is sensitive to the needs of others. She is a problem solver; she tends to see setbacks as opportunities. She’s always engaged and is a source of calm to her colleagues. Her manager feels lucky to have such an easy direct report to work with and often compliments Esther on her high levels of emotional intelligence, or EI. And Esther indeed counts EI as one of her strengths; she’s grateful for at least one thing she doesn’t have to work on as part of her leadership development. It’s strange, though — even with her positive outlook, Esther is starting to feel stuck in her career. She just hasn’t been able to demonstrate the kind of performance her company is looking for. So much for emotional intelligence, she’s starting to think.
The trap that has ensnared Esther and her manager is a common one: They are defining emotional intelligence much too narrowly. Because they’re focusing only on Esther’s sociability, sensitivity, and likability, they’re missing critical elements of emotional intelligence that could make her a stronger, more effective leader. A recent HBR article highlights the skills that a kind, positive manager like Esther might lack: the ability to deliver difficult feedback to employees, the courage to ruffle feathers and drive change, the creativity to think outside the box. But these gaps aren’t a result of Esther’s emotional intelligence; they’re simply evidence that her EI skills are uneven. In the model of EI and leadership excellence that we have developed over 30 years of studying the strengths of outstanding leaders, we’ve found that having a well-balanced array of specific EI capabilities actually prepares a leader for exactly these kinds of tough challenges.
There are many models of emotional intelligence, each with its own set of abilities; they are often lumped together as “EQ” in the popular vernacular. We prefer “EI,” which we define as comprising four domains: self-awareness, self-management, social awareness, and relationship management. Nested within each domain are 12 EI competencies, learned and learnable capabilities that allow outstanding performance at work or as a leader (see the image below). These include areas in which Esther is clearly strong: empathy, positive outlook, and self-control. But they also include crucial abilities such as achievement, influence, conflict management, teamwork, and inspirational leadership. These skills require just as much engagement with emotions as the first set, and should be just as much a part of any aspiring leader’s development priorities.
For example, if Esther had strength in conflict management, she would be skilled in giving people unpleasant feedback. And if she were more inclined to influence, she would want to provide that difficult feedback as a way to lead her direct reports and help them grow. Say, for example, that Esther has a peer who is overbearing and abrasive. Rather than smoothing over every interaction, with a broader balance of EI skills she could bring up the issue to her colleague directly, drawing on emotional self-control to keep her own reactivity at bay while telling him what, specifically, does not work in his style. Bringing simmering issues to the surface goes to the core of conflict management. Esther could also draw on influence strategy to explain to her colleague that she wants to see him succeed, and that if he monitored how his style impacted those around him he would understand how a change would help everyone.
Similarly, if Esther had developed her inspirational leadership competence, she would be more successful at driving change. A leader with this strength can articulate a vision or mission that resonates emotionally with both themselves and those they lead, which is a key ingredient in marshaling the motivation essential for going in a new direction. Indeed, several studies have found a strong association between EI, driving change, and visionary leadership.
In order to excel, leaders need to develop a balance of strengths across the suite of EI competencies. When they do that, excellent business results follow.
How can you tell where your EI needs improvement — especially if you feel that it’s strong in some areas?
Simply reviewing the 12 competencies in your mind can give you a sense of where you might need some development. There are a number of formal models of EI, and many of them come with their own assessment tools. When choosing a tool to use, consider how well it predicts leadership outcomes. Some assess how you see yourself; these correlate highly with personality tests, which also tap into a person’s “self-schema.” Others, like that of Yale University president Peter Salovey and his colleagues, define EI as an ability; their test, the MSCEIT (a commercially available product), correlates more highly with IQ than any other EI test.
We recommend comprehensive 360-degree assessments, which collect both self-ratings and the views of others who know you well. This external feedback is particularly helpful for evaluating all areas of EI, including self-awareness (how would you know that you are not self-aware?). You can get a rough gauge of where your strengths and weaknesses lie by asking those who work with you to give you feedback. The more people you ask, the better a picture you get.
Formal 360-degree assessments, which incorporate systematic, anonymous observations of your behavior by people who work with you, have been found not to correlate well with IQ or personality, but they are the best predictors of a leader’s effectiveness, actual business performance, engagement, and job (and life) satisfaction. Into this category fall our own model and the Emotional and Social Competency Inventory (ESCI 360), a commercially available assessment we developed with Korn Ferry Hay Group; both gauge the 12 EI competencies by having others rate a leader’s observable behaviors. The larger the gap between a leader’s self-ratings and how others see them, research finds, the fewer EI strengths the leader actually shows, and the poorer the business results.
These assessments are critical to a full evaluation of your EI, but even understanding that these 12 competencies are all a part of your emotional intelligence is an important first step in addressing areas where your EI is at its weakest. Coaching is the most effective method for improving in areas of EI deficit. Having expert support during your ups and downs as you practice operating in a new way is invaluable.
Even people with many apparent leadership strengths can stand to better understand the areas of EI where they have room to grow. Don’t shortchange your development as a leader by assuming that EI is all about being sweet and chipper, or that your EI is perfect if you are — or, even worse, by assuming that EI can’t help you excel in your career.
DJ Khaled, the one-man internet meme, is known for warning his tens of millions of social media followers about a group of villains that he calls “they.”
“They don’t want you motivated. They don’t want you inspired,” he blares on camera. “They don’t want you to win,” he warns. On Ellen DeGeneres’s talk show, Khaled urged the host, “Please, Ellen, stay away from them!”
The “they” Khaled invokes are clearly a sinister force. But who are they? Khaled offered clues when he told DeGeneres, “They are the people who don’t believe in you.…They is the person that told you you would never have an Ellen show.”
Although Khaled’s claims may seem outlandish, he is in fact leveraging a powerful psychological hack: scapegoating. The practice of imagining a villain that’s conspiring against us can be an effective way to motivate ourselves and change our behaviors. Of course, as history has shown, terrible things can happen when people act on baseless conspiracy theories. But sometimes the antidote is in the venom.
Khaled isn’t the first to use the technique. In The War of Art, Steven Pressfield uses an entity he calls “Resistance” to describe the force conspiring against creative output. “Most of us have two lives,” Pressfield writes. “The life we live, and the unlived life within us. Between the two stands Resistance.” Throughout his book Pressfield reminds readers, “Resistance is always plotting against you.”
The author and game designer Jane McGonigal described a similar conspiracy of bad guys in her book SuperBetter. McGonigal blames villains like “Mrs. Volcano” and “Snuff the Tragic Dragon” when she loses her temper with her kids or feels self-pity.
Khaled, Pressfield, and McGonigal know that “they,” “Resistance,” and the “bad guys” don’t actually exist. For Khaled, that’s the joke that powers the meme. If Khaled were to point a finger at a real group of people intent on sabotaging him, such as an ethnic group or a particular corporate entity, his scapegoating wouldn’t be funny — it would be malicious or dangerous.
In order for scapegoating to work, it’s important not to assign blame to something or someone specific (say, a boss) from the start; if we do so, we’ll shirk our responsibilities and won’t change our actions.
Instead, we need to find the underlying causes of our behaviors and understand the source of our problems, which requires asking difficult questions — especially since our intuition is frequently wrong. Maybe we don’t binge on junk food or YouTube videos because of the pleasure in what we’re consuming, but because of deeper problems consuming us. Perhaps the true reason we allow our phones to interrupt dinner is not that we’re addicted to our phones, but that we’re addicted to work.
Once we’ve identified the cause, the next challenge is to implement a change, which can be difficult if we think what’s happening to us is beyond our control. In these situations it’s easy to feel powerless and to give up. It’s here that scapegoating can be used to our advantage. By directing our anger and anxieties at an invisible they, the forces working against us seem more tangible, so we feel like we have more power to fight them.
Several recent studies have observed a strong connection between the way we think about our ability to act and our follow-through. For example, to determine how in control people feel regarding their cravings for cigarettes, drugs, or alcohol, researchers administer a standard survey called the Craving Beliefs Questionnaire (CBQ). The assessment is modified for the participant’s drug of choice and presents statements like “Once the craving starts…I have no control over my behavior” and the cravings “are stronger than my willpower.” How people rate these statements tells researchers how powerful or powerless they feel in the face of temptation. Lower scores reveal that subjects believe they are more in control, while higher scores correlate with people who believe the drugs control them.
A study of methamphetamine users that appeared in the Journal of Substance Abuse Treatment in 2010 concluded that people with low CBQ scores were more likely to stay sober and that participants whose scores decreased over time — indicating that they felt more powerful as time passed — had increased odds of abstinence. A study of cigarette smokers published in 2014 found similar results: The smokers most likely to fall off the wagon after quitting were the ones who believed they were powerless to resist.
Though the logic isn’t surprising — if we believe we’re powerless, we don’t even try not to fail — the extent of the effect is remarkable. A 2015 study published in the Journal of Studies on Alcohol and Drugs found that individuals who believed they were powerless to fight their cravings were much more likely to drink again. In fact, beliefs of powerlessness determined whether someone would relapse after treatment as much as the level of physical dependency itself did.
Besides making us feel more powerful, scapegoating can harness our instincts to resist threats to our freedom and autonomy, a phenomenon that psychologists call “reactance.” For example, when your boss micromanages you and tells you what to do in a patronizing way, you may feel crummy and decide to do the opposite, to “stick it to the man.” Scapegoating uses the power of reactance toward productive ends. If we feel that someone or something is conspiring against us, we’re more likely to work harder to prove them wrong.
Eliciting reactance has been used successfully in public health efforts, such as the antismoking Truth campaign, which tried to appeal to rebellious high schoolers (who feel reactance toward just about everyone). Instead of showcasing far-off consequences like emphysema and black lungs, the Truth campaign did away with the gore and instead painted the tobacco industry as a bunch of scheming jerks. In one ad activists attempt to deliver a case marked “lie detector” to the headquarters of a tobacco company and are promptly kicked out. In another spot, cartoon characters interrupt smokers at a party by shouting “It’s a trap!”
We can apply the same methods to use careful scapegoating to increase our own motivation. If we imagine a force working against us, we’re more likely to get fired up, resist our temptations, and work harder to achieve our goals.
Of course, it’s actually just us against ourselves. But for the times when we don’t want to admit that, providing a clear enemy to rebel against — a “they” who doesn’t want you to leave that extra cookie on the plate or get back to writing that email — can help us summon the tenacity we need to succeed. Even if, in reality, that “they” resides in each of us.
When is a project finished? For most of us, it seems pretty simple: when we ship the product or launch the service. But we need to take a step back and consider what “done” really means.
Most teams in business work to create a defined output. But just because we’ve finished making a thing doesn’t mean that thing is going to create economic value for us. If we want to talk about success, we need to talk about outcomes, not just outputs. And as the world continues to digitize and almost every product and service becomes more driven by (or at least integrated with) software, this need grows even stronger.
For example, we may ask a vendor to create a website for us. Our goal might be to sell more of our products online. The vendor can make the website, deliver it on time and on budget, and even make it beautiful to look at and easy to use, but it may not achieve our goal, which is to sell more of our products online. The website is the output. The project may be “done.” But if the outcome — selling more products — hasn’t been achieved, then we have not been successful.
Most companies manage projects in terms of outputs, not outcomes. This means that most companies are settling for “done” rather than doing the hard work of targeting success.

Defining Done as Successful
In some situations these ideas are the same thing or have such a clear, well-understood relationship that they might as well be the same thing. This is frequently the case in industrial production. Because of the way industrial products are designed and engineered, you know that when your production line is spitting out Model T cars, you can be reasonably certain they will work as designed. And because of years of sales history, you can be reasonably certain that you will be successful: You will sell roughly the number of cars you expected to. Managers working in this context can be forgiven for thinking that their job is simply to finish making something.
With software, however, the relationship between “we’ve finished building it” and “it has the effect we intended” is much less clear. Will our newly redesigned website actually encourage sharing, for example, or will the redesign have unintended consequences? It’s very difficult to know without building and testing the system. And, in contrast to industrial production, we’re not making many instances of one product. Instead, we’re creating a single system — or a set of interconnected systems that behave as one system — and we are often in the position of not knowing whether the thing we’re making will work as planned until we’re done.
This problem of uncertainty, combined with the nature of software, means that managing our projects in terms of outputs is simply not an effective strategy in the digital world. And yet our management culture and tools are set up to work in terms of outputs.

Using the Alternative to Output: Outcomes
The old cliché in marketing is true: Customers don’t want a quarter-inch drill. They want a quarter-inch hole. In other words, they care about the end result, and don’t really care about the means. The same is true of managers: They don’t care how they achieve their business goals; they just want to achieve them.
In the world of digital products and services, uncertainty becomes an important player and breaks the link between the quarter-inch drill and the quarter-inch hole. Some managers try to overcome the problems caused by uncertainty by planning in increasingly greater detail. This is the impulse that leads to detailed requirements and specification documents, but, as we’ve come to understand, this tactic rarely works in software.
It turns out that this problem — the way our plans are disrupted by uncertainty, and the fallacy of responding with ever-more-detailed plans — is something that military commanders have understood for hundreds (if not thousands) of years. They’ve developed a system of military leadership called mission command, an alternative to rigid systems of leadership that specify in great detail what troops should do in battle. Mission command is a flexible system that allows leaders to set goals and objectives and leave detailed decision making to the people doing the fighting. Writing in The Art of Action, Stephen Bungay traces these ideas as they were developed in the Prussian military in the 1800s and describes the system that those leaders developed to deal with the uncertainty of the battlefield.
Mission command is built on three important principles that guide the way leaders direct their people.
- Do not command more than necessary or plan beyond foreseeable circumstances.
- Communicate to every unit as much of the higher intent as is necessary to achieve the purpose.
- Ensure that everyone retains freedom of decision within bounds.
For our purposes, this means that we would direct our teams by specifying the outcome we seek (our intent), allowing our teams to pursue this outcome with a great deal of (but not unlimited) discretion, and expecting that our plans will need to be adjusted as we pursue them.

Case Study: Putting This into Practice
In 2014 the Taproot Foundation wanted to create a digital service that would connect nonprofit organizations with skilled professionals who wanted to donate their services. Think of it as a matchmaking service for volunteers. Taproot had to work with vendors, and ended up choosing our firm for the project.
In our early conversations, Taproot leaders described the system that they wanted to build in terms of its features: It would have a way for volunteers to sign up, a way for volunteers to list their skills, a way for nonprofit organizations to look up volunteers based on these skills, and so on. We were concerned about this feature list. It was a long list, and although each item seemed reasonable, we thought we might be able to deliver more value faster with a smaller set of features.
To shift the conversation away from features, we asked, “What will a successful system accomplish? If we had to prove to ourselves that the system was worth the investment, what data would we use?” This conversation led to some clear, concrete answers. First of all, the system needed to be up and running by a specific date, about four months away. The foundation participates in an annual event to celebrate the industry, and executives wanted to have a demonstrated success that they could show off to funders at that event. We asked, “What does up and running mean?” Again, the answers were concrete: We need to have X participants active on the volunteer side, and Y participants active on the organization side. Because the point of the service would be to match volunteers with organizations so that they could work on projects together, we should have made Z matches, and a certain percentage of those matches should have yielded successful, completed projects.
This was our success metric: X and Y participants; Z matches; percentage of completed projects. (We actually set specific numerical targets, but we’re using variables here.)
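An outcome-based success check like the one described above is easy to make concrete. The sketch below is purely illustrative: the metric names and target numbers are hypothetical stand-ins for the X, Y, Z variables in the text, not Taproot’s actual figures.

```python
# Illustrative sketch of an outcome-based success check.
# Metric names and target values are hypothetical, not Taproot's real numbers.

def is_successful(actuals, targets):
    """Return True only if every outcome target is met or exceeded."""
    return all(actuals.get(metric, 0) >= minimum
               for metric, minimum in targets.items())

targets = {
    "active_volunteers": 100,        # X participants on the volunteer side
    "active_organizations": 40,      # Y participants on the organization side
    "matches_made": 25,              # Z volunteer-organization matches
    "completed_project_rate": 0.30,  # share of matches yielding finished projects
}

actuals = {
    "active_volunteers": 120,
    "active_organizations": 45,
    "matches_made": 30,
    "completed_project_rate": 0.35,
}

print(is_successful(actuals, targets))  # True: every target met or exceeded
```

The point of writing the check this way is that it says nothing about features: any system that moves these numbers past their thresholds counts as “done.”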
Next, we asked, “If we can create this system and achieve these targets without building any of the features in your wish list, is that OK?” This was a harder conversation.
The executives signing the contract were understandably concerned. What guarantee did they have that we would complete the project?
This is the bind that executives and managers face. As they negotiate with partners, they are bound to protect their organizations. They need to find contractual language that ensures the partners will deliver. The problem with contracts, though, is that to make them work, managers are forced to settle for the protection they find in the concrete language of features: You build feature A, and we will pay you amount B. But this linguistic certainty is a false hope. It guarantees only that your vendor will get to “done,” as in, “The feature is done.” It does not guarantee that the set of features you can describe in a contract will make you successful. On the other side, vendors are understandably hesitant to sign up to achieve an outcome, mostly because vendors rarely control all of the variables that contribute to project success or failure. Thus both sides settle for a compromise that offers the safety of “done” while at the same time creating constraints that tend to predict failure rather than create the freedom that breeds success.
Our contract with Taproot, then, contained not only a list of desired features but also a list of desired outcomes. It included: The system will connect volunteers to organizations [at the following rate]; it will allow these parties to find each other, communicate well with each other, and report on the success of their projects; it will do so at [the following rates] and by [the following date]; etc. Of course, there was also some legalese. But this compromise — listing the features we thought were important, but being clear about outcomes and agreeing in advance that outcomes are more important — is the key to managing with outcomes instead of output.
The team decided that the most important milestone was to get the system up and running. Rather than wait four months, the length of the project, they decided to launch as quickly as possible, going live to a pilot audience within one month. They launched a radically simplified version of the service, one with very few automated features. The Taproot team knew it would need more automation if it wanted the system to scale, but it also knew automation could come later. Launching early achieved two goals. First, it ensured that the team would have something to show to funders at the annual event. This was a hugely important marketing and sales goal. But launching early addressed an even more important goal: It allowed the team to learn what features it would actually need in order to operate the system at scale. In other words, it allowed the team to establish a sense-and-respond loop — a two-way conversation with the market that would guide the growth of the service.
The project planners had imagined, for example, that the skilled volunteers would need to be able to create profiles on the service. Organizations would then browse the profiles to find volunteers they liked. This turned out to be exactly wrong. When the team tried to get volunteers to make profiles, they responded with indifference. The team realized that, in order to make the system work, volunteers had to be motivated to participate; they needed to find projects that they were passionate about. In order to do this, the system needed project listings, not volunteer listings. In other words, the team had to reverse the mechanics of the system, because the initial plans were wrong.
By the second month of the project, the team had built the system with the revised mechanics. Then they concentrated on tuning the system, identifying the details of the business processes needed and building software to support those processes. How would the team make it easy for organizations to list their projects? How would team members make sure the listings were motivating to volunteers? How simple could they make the contact system? How simple could they make the meeting scheduler? At the end of the four-month project, the team had a system that had been up and running for three months and that far exceeded the performance goals written into the contract.
This project worked because the team followed the principles of mission command, which is based on outcomes, not outputs. Give teams a strategy and a set of outcomes to achieve, along with a set of constraints, and then give them the freedom to use their firsthand knowledge of the situation to solve the problem. This approach to project leadership is not common, but we see it more frequently on startup teams and in smaller organizations. Scaling the approach to multiple teams and to larger organizations can be a difficult, subtle challenge, requiring careful balance between central planning and decentralized authority. But it is quickly becoming a necessary shift in our software-driven world.
This post is adapted from the Harvard Business Review Press book Sense and Respond: How Successful Organizations Listen to Customers and Create New Products Continuously.
Given the avalanche of email we receive each year — 121 messages per day, on average — it’s no wonder that we have become somewhat desensitized to its impact on our professional brand. We’ll spend hours polishing our LinkedIn profiles and revising our résumés, but hastily hit send on an unintelligible missive simply because we’re in a rush. “Sent from my device, please overlook typos” is not a get-out-of-jail-free card for shoddy communications.
Have you ever thought about the brand you’re conveying through your emails? You should. Every email you send affects your professional reputation, or brand. Don’t make these all-too-common mistakes in your communication:
Your emails are too long for anyone to digest. Are your messages typically the length of all 12 installments of Crime and Punishment? Do you include all the backstory a reader could ever want to know? While context is critical to guiding the reader’s interpretations, remember that what they need to know is inevitably a subset of everything you could tell them. Given that the adult attention span is a mere eight seconds, it’s important to make every moment count. Get to the point.
You’re including way too many people. Do your Cc habits ensure that a cast of thousands is in the loop? If so, ask yourself who is truly the essential audience for the message. In many organizations, overuse of Cc reflects a political culture in which people cover their tracks by overinclusion. Remember that each message you send contributes to everyone’s inbox, including your own, especially when one of your recipients decides to Reply All.
You’re dashing off incomplete thoughts. While there’s a lot to be said for brevity, there’s a big difference between being concise and being terse. Do you find yourself shooting off one-liners that pick up in the middle of a thought without considering whether the reader can follow the thread? Do you end up with a high volume of clarifying questions in response to your messages? If so, that’s a clue that your emails need more composition and more context.
You’re burying the lede. It shouldn’t take a symbologist to find the important message hidden in your email. Make sure your readers know what the ask is and why they should care about responding. Despite our compulsive relationship with it, responding to email is not a sacred duty. If you want your readers to digest your message, and perhaps even take action on it, make it easy for them to do so.
When it comes to composing an email, I think we could all take a cue from Mark Twain’s writing style: He developed a unique and memorable voice, relentlessly edited himself, and was easy to understand. As he said, “Anybody can have ideas — the difficulty is to express them without squandering a quire of paper on an idea that ought to be reduced to one glittering paragraph.” Take the time to truly craft your messages, and you’ll find that your results improve accordingly. Sacrifice quantity for quality. Not every email merits your attention.
However, the one characteristic of Twain’s brand that I wouldn’t emulate is his being a curmudgeon. We already have a negativity bias toward email messages. As has been demonstrated in the emerging field of social neuroscience, without the social cues — voice tone, facial expression, and physical gestures — that we rely on to interpret communication, we are prone to conclude the worst. Don’t skip the niceties, or your audience may assume a message that wasn’t intended, and you’ll be forced to do damage control.
The next time you start to write an email, follow a few rules:
- Use an intuitive subject line that clearly states the purpose of the message. Bonus points if you include a header, e.g., [ACTION] or [INFORM], that helps the reader understand the expected response.
- Provide a clearly stated request right at the beginning of your email in case your audience fails to read beyond the preview pane. At least you’ll increase the chances that people will understand the essence of your message.
- Bold the names of anyone who’s been assigned a task or asked a question in the body of the email to increase the likelihood of it getting the needed attention.
- Take the time to be nice. It will help your audience truly hear what you intended to say.
The next time you’re in your email account, take a closer look at your sent folder. Everything you need to know about your email brand is contained within. If you don’t like what you see, tomorrow is another day. There’s always another chance to shape your email reputation.
Several years ago I sat down with the CEO of a fast-growing retail business. The company started as a single store, but about a decade later it was a national chain on the heels of filing an IPO. I asked the CEO, whom I’ll call Mike, about the secret of his company’s rapid growth. His answer blew me away.
“Say no!” He told me he regularly said no — to more staff, to bigger marketing budgets, to additional equipment.
Most of us don’t like to be told no. We consider it a rejection of our ideas and of ourselves. It’s a sign that our projects aren’t valued and our careers are stalling out. But, as Mike’s employees learned, hearing “no” can help boost us toward our goals.
We have been conditioned to believe that the more resources we have, the better results we’ll achieve. While this belief is true at times, it leads us to underuse our creativity and our determination to work with what we have. The belief that what we have isn’t enough to reach our goals raises our anxiety, delays us from taking action, and makes us lose sight of what we want to accomplish.
The next time you hear a “no” at work, instead of hitting the panic button, try following these steps:
Expect more. When the boss turns down a request, we usually have two immediate reactions. First, we think that he or she doesn’t understand the magnitude of the problem — otherwise, we reason, we’d be given the necessary resources for a resolution. Second, we resign ourselves to failure: Without more time, the quality of work will suffer. Without extra headcount, we’ll need to limit the scope of a project. Without a larger marketing budget, sales will drop.
When we become defeated, we start to reduce our effort, which leads to a negative self-fulfilling prophecy. We act as if our projects can’t be completed with the highest standard with what we already have — and that’s the future we end up making.
Research has found that people work to meet their own and others’ expectations. When we misinterpret a “no” from the boss as an indication that we are undervalued, we end up sinking to those expectations.
Instead, set higher expectations. Think about how hard work, the creative use of existing resources, and collaboration with others will enable you to meet project deadlines, sales targets, or any other objectives. A “no” gives you the opportunity to prove to others that you can find creative solutions to deliver quality work with less.
Try something new. One of my childhood television heroes was MacGyver, the secret agent who could solve virtually any problem with a pocket knife, duct tape, or ordinary household goods. He lacked the high-tech gear or superpowers of typical action heroes. But he had one important thing going for him: He was resourceful.
We have become accustomed to needing more to do more. When we have a lot of resources, there’s no need to get creative with how to use or maximize them. But when those resources disappear, we struggle. We haven’t developed skills in resourcefulness.
So the more experience we have with scarcity — the more times our bosses have told us no — the better our chances of learning to use our ingenuity to invent solutions. Research finds that when we’re denied resources, we give ourselves a license to try new ways of using the resources we already have. Without a hammer, we’re more likely to think of a shoe as a good tool to get a nail pounded into the wall. Every time the boss says no and we successfully adapt, we not only solve a problem but also break our dependence on needing more to do more.
In 2010 I spent an afternoon with one of CEO Mike’s highest-performing store managers, a person I will call Ethan. Ethan talked about all the times he’d been told no in his career: on plans to merchandise products, sophisticated inventory control systems, and basic training handbooks.
One time, Ethan’s store received a big shipment of poorly made dresses. They were so flimsy that they wouldn’t even stay on hangers, much less be purchased by customers. He asked if he could return them to the warehouse but was told no.
So he went to work with what he had. He started mentally breaking down the product: It wasn’t a dress, but some cloth in a nice pattern. He took a pair of scissors and cut off the straps of the dress. He rolled up the garment, tied it with a pretty ribbon, and labeled it a “beach cover-up.” The “dress” turned into a best seller, and other stores adopted his solution.
Get moving (in any direction). Every minute we spend worrying about what we don’t have is one less minute we spend actually doing something. When we take a no personally, believing that it’s a judgment on our work or on ourselves, we feel diminished and struggle to tap our existing resources. After all, if we’re valued and doing a good job, we’d get a yes, right?
Researchers call this experience “threat rigidity,” which means that in times of threat (e.g., if we think we’ve done something wrong or are no longer valued) we fall into the trap of thinking less creatively about our resources. We find it difficult to become resourceful precisely when the situation calls for it. Hampered by threat rigidity, we squander opportunities to meet our objectives.
There’s a simple way to overcome feeling threatened by a no: Think about what you do have. Put your resources in motion by experimenting. As you start moving, it will become easier to start meeting goals without a complete plan, an ideal team, or a bigger budget.
Don’t let the boss’s no stop you from achieving your goals. Forge ahead and take it as an opportunity to do more with less. You’ll realize you have an opportunity to enhance the value of what you already have.
President Trump entered the White House with the lowest approval ratings of any president taking office, and they aren’t likely to go up for a sustained period of time. Even if Trump doesn’t believe the polls, as he has claimed, such low approval ratings are likely to have real consequences for him.
On the day he took the oath of office, Gallup’s tracker showed that only 45% of the American public said that they approved of the job Trump was doing, with an equal number disapproving. Since then, his approval has declined slightly, and his disapproval has risen to historically high levels.
To put these figures in context, George W. Bush, who also won the Electoral College while losing the popular vote, came into office with a 57% approval rating. It’s more typical for presidents to start with an approval rating over 60%, as Obama, Carter, and Eisenhower did, even if no one expects a modern president to start over 70%, as Johnson, Ford, and Kennedy did. What all of these presidents have in common is that they retained their high approval ratings for only about three months — what presidency scholars call “the honeymoon period” — before the numbers began to dip significantly. These drops aren’t inevitable — national crises can cause people to rally around the president, as they did after Reagan was shot — but they’re the general rule. So if Trump’s approval numbers are middling-to-poor now, they’re likely to get worse soon.
The fact that Gallup and other polling houses have been measuring presidential approval since Harry Truman means that political scientists understand a great deal about how it works. In general, approval is highest when a president takes office and declines fairly slowly throughout the first term. Wars can temporarily boost approval, sometimes to stratospheric heights, but such boosts are generally short-lived. Presidential approval can be thought of as somewhat elastic: In statistical terms, we say that it has a strong memory function. Events can push approval up or down, but as my work with Matthew Lebo on presidential approval has shown, it reverts fairly quickly to its natural level.
There are two factors that seem able to keep approval above or below its natural level for longer periods: the economy and media coverage. In the same line of research, Lebo and I have shown that unemployment and inflation serve to push approval ratings up or down, at least among some voters, and work I’ve carried out on the Obama and Bush years shows that sustained media coverage, positive or negative, can move a president’s approval above or below where it’s expected to be. Trump faces challenges on both these points: Unemployment and inflation are historically low right now, making it unlikely that he’ll be able to improve on them much, and media coverage of every president since Clinton has been predominantly negative, a trend that Trump doesn’t seem likely to change.
Any efforts Trump may take to increase his approval ratings are further hampered by the increasing partisan divide in American politics. Up until Reagan, approval of the president among members of his party was 30–40 points higher than among members of the other party. Under Reagan, averaging over the course of his time in office, the gap was 52 points. Under George W. Bush, 58. Under Obama, 66. In Trump’s first week, he had 89% approval among Republicans and just 13% approval among Democrats, making for a 76-point gap. In essence, with his support among Republicans already near its ceiling, the only way Trump can increase his approval is by appealing to independents and Democrats, something he has shown little interest in doing thus far.
These low approval ratings, which are likely to only get worse, are a problem for Trump even if he doesn’t give them any credence, because any major policy initiative requires that he work with people who do pay attention to them. High approval ratings give a president a great deal of leverage over members of Congress.
When Eisenhower took office, for instance, many members of Congress, even in the other party, faced a situation in which the president was more popular in their districts than they were. Legislators are nothing if not sensitive to factors that could hurt their reelection bids, so they fell over themselves to support the president’s initiatives, even ones that they might otherwise have resisted. Southern Democrats, for instance, had long resisted a federal highway system, due to their distrust of any federal intervention in their states.
Popular presidents can offer appearances at signing ceremonies, host fundraisers, and do all sorts of things to reward cooperative legislators: Eisenhower’s highway program passed through committees controlled by Southern Democrats who could have stopped it.
Unpopular presidents, on the other hand, quickly find that they have very little leverage. Big policy changes — tax reform, immigration reform, health care reform — often force legislators to choose between what their district wants them to do and what the president or Congress wants them to do. A popular president can help to ease the burden of an unpopular vote; an unpopular president has to accept what Congress wants to pass if he wants to sign any bills at all.
Of course, there are some powers inherent to the presidency — control over the military, executive orders, and the like — and a clever president can make changes on the margins of the law with these powers. Such powers, however, are conditional: A president who doesn’t have the support of Congress can see his executive orders overturned, his cabinet members turn against him, his policies wither for lack of funding, and his press conferences ignored.
A president can have real power to shape the future of the country, but that only comes with popularity. An unpopular president is more likely to find himself hemmed in by protests, like Johnson and Nixon in their later years — or like Carter, so ignored by Congress and his own administration that he spent his time approving the White House tennis schedules.
If Trump wants to avoid their fate, he’ll need to change something dramatically. We’ll soon find out whether he can.
On January 27 President Trump signed an executive order barring citizens of seven Muslim-majority countries, including doctors, from entering the U.S. for 90 days. This may have a measurable impact on the U.S. health care system. Many doctors may be blocked from returning to the U.S. after leaving the country. According to 2010 data, of the approximately 850,000 doctors providing direct patient care in the U.S., 4,180 were Iranian citizens and 3,412 were Syrian citizens.
There are currently 260 people from the seven countries who are applying for residency slots in U.S. hospitals but are now banned from coming to the U.S. Match day, when students learn whether they have been accepted into a program, is on March 17, just over a month away. If the U.S. loses these applicants and cannot find candidates to take their spots, a simple calculation shows that this could affect 400,000 patients over the next year alone (estimated with assumptions that 50% of them successfully match to residency programs, become primary care doctors, and see 3,000 patients over the next year).
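Written out, the article’s back-of-envelope estimate, using its own stated assumptions of a 50% match rate and 3,000 patients per new primary care doctor, is:

```python
# The article's back-of-envelope estimate, using its own stated assumptions.
applicants = 260             # applicants from the seven countries now barred
match_rate = 0.50            # assumed share who would have matched
patients_per_doctor = 3_000  # assumed patients seen in the first year

patients_affected = applicants * match_rate * patients_per_doctor
# -> 390,000 patients, which the article rounds to roughly 400,000
```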
Approximately one in four practicing physicians in the U.S. completed medical school abroad. Many international medical graduates come to the U.S. for residency and then stay, agreeing to work in areas where there’s a shortage of health professionals. Studies have shown that internationally trained doctors are more likely to practice medicine in rural and underserved areas. In many instances, the doctors end up working in these areas for long periods of time.
Policy makers and the public have expressed concerns over the quality of care provided by immigrant doctors, as compared to U.S.-trained doctors, despite the fact that internationally trained doctors have to pass three exams, complete residency in the U.S., and be licensed to practice in the U.S. While the issue has been debated for decades, there has been sparse evidence on whether patient outcomes differ based on where a physician went to medical school. Prior studies have produced mixed results, but they have usually examined small sample sizes or focused on narrow geographic areas.
In a recent paper published in The BMJ, we found that when Medicare patients were admitted to U.S. hospitals with general medical conditions, their probability of dying within 30 days of admission was 5% lower if they were treated by international medical graduates than if they were treated by U.S. medical graduates. We found no difference in the likelihood that patients were readmitted to the hospital within 30 days of discharge. We also saw that the cost of care was somewhat higher with foreign medical graduates than with U.S. medical graduates, though the difference was very small.
To arrive at these findings, we studied outcomes of Medicare patients treated by internationally trained internists and domestically trained ones. We measured across hospitals and within the same hospital to avoid hospital effects confounding our results. It is well-known that foreign medical graduates are more likely to practice in rural, underserved areas, where hospital resources may be lower and illness severity of the patient population may be higher. Both factors would lead to higher mortality among patients treated by foreign medical graduates, even if these physicians’ practice patterns and quality of care were identical to domestic medical graduates. So, by comparing patients treated by international versus domestic medical graduates within the same hospital, we effectively eliminated the effects of hospital quality and population characteristics on patient mortality. We also adjusted for several patient characteristics (e.g., age, gender, race, severity of illness, and socioeconomic status) and physician characteristics (age, gender, and how many Medicare patients they treated per year) to isolate the effects of medical school on patient outcomes.
We looked at care provided by more than 44,000 internists for over 1.2 million hospitalizations. In general, patients treated by international medical graduates were more likely to be a racial minority, have lower socioeconomic status, and have more comorbidities. After adjusting for patient, physician, and hospital factors, we found that patients treated by international medical graduates were less likely to die (11.2%) than patients treated by U.S. medical graduates (11.6%) in the 30 days following hospitalization. The difference was statistically significant, and we observed the same patterns when we compared physicians across hospitals or within the same hospital.
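To see why a 0.4-percentage-point gap can be statistically significant at this scale, here is a rough two-proportion z-test. The even split of the 1.2 million hospitalizations between the two groups is my assumption for illustration, not a figure reported in the study:

```python
from math import sqrt

# Illustrative two-proportion z-test. The 50/50 split of hospitalizations
# between the groups is an assumption, not a number reported in the study.
n_img, n_usmg = 600_000, 600_000   # assumed group sizes
p_img, p_usmg = 0.112, 0.116       # adjusted 30-day mortality rates

p_pool = (p_img * n_img + p_usmg * n_usmg) / (n_img + n_usmg)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_img + 1 / n_usmg))
z = (p_usmg - p_img) / se
# z comes out near 7, far beyond the 1.96 threshold for p < 0.05
```

The point of the sketch is that with samples this large, even small absolute differences in mortality are very unlikely to be noise.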
We also found that, 30 days following a hospitalization, patients treated by U.S. medical graduates and international ones (within the same hospital) were equally likely to have been readmitted to the hospital. Compared to U.S. medical graduates, international graduates spent slightly more on treating patients (e.g., through more tests and imaging studies), but the difference was very small ($47 per hospitalization). We also performed a number of sensitivity analyses to test whether our findings were affected by different assumptions that included how long patients stayed in the hospital, whether the patient was sent home or to a rehabilitation facility, and whether the hospital was a teaching hospital. Our findings were consistent across all analyses.
There are a number of possible explanations for why patients treated by international medical graduates are less likely to die. One is that in order to work as a physician in the U.S., international graduates must go through an extraordinarily tough selection process that admits only the strongest physicians from each country. For instance, only 49.4% of international medical graduates who apply end up with spots in U.S. residency programs. The tendency for international graduates to score higher on tests than U.S. graduates also supports this hypothesis. International graduates typically undergo more training — initially in their home country, and then again in the U.S. Yet we don’t know the degree to which any of these factors explains our results. Future work should explore exactly why international medical graduates deliver higher-quality care.
Our findings suggest that internationally trained doctors provide slightly higher quality of care than domestic medical graduates, at least in general treatment of Medicare patients. International graduates are vital to providing health care in the U.S., and policies that discourage doctors from other countries from wanting to practice in the U.S. are likely to have unintended consequences for the health of American people, especially for those who live in traditionally underserved areas.
A Super Bowl Ad Is the Equivalent of Lighting Money on Fire (Which Can Be More Strategic Than It Sounds)
Everyone knows that whispering “I love you” to someone you just met over some bangin’ beats in the club at 3 AM stands no chance of success. Similarly, suspect claims of honesty or authenticity in business will likely be heavily discounted as cheap talk by a skeptical counterpart. So why make these overtures? Because while the upside is limited, the cost isn’t just cheap, it’s nil; give it a try and hope there’s someone out there who’ll believe you. The challenge facing sellers of some genuine product — be it true late-night love or a Tiffany necklace on eBay — and the buyers in search of them is to prove that they’re not just full of empty words.
This is where Super Bowl ads come in. Airtime during the game is, of course, fantastically expensive. So why do companies bother buying it? For the same reason that gang members get face tattoos: to prove that they’re in it for the long haul.
The classic formulation of this idea comes from Michael Spence’s PhD thesis, published in 1973, which helped win him a Nobel prize in 2001. The thesis focused not on clubs or ads, but on how good workers reveal themselves as smart or dependable.
In Spence’s formulation, hiring a worker is essentially a spin of the roulette wheel: Depending on what comes up, you might get a conscientious, productive worker or an incompetent, shirking one. There are plenty of steps that employers can and do take to make the hiring decision less of a gamble. They may consider, consciously or not, whether the applicant is short or tall, black or white, male or female. Based on prior experiences or stereotypes they’ve formed, recruiters may make a judgment of what the applicant would probably be like once he starts showing up for work. Whether it’s legal or moral (or even correct), we judge people based on all sorts of characteristics that a person is essentially born into.
Then there are the choices we make in presenting ourselves to a prospective employer: Short or tall, black or white, male or female, you need to decide whether to wear a suit or jeans, or to show up clean-cut or shaggy, or covered in tattoos. On your résumé you can choose to offer up your undergraduate GPA, and mention that it was from an Ivy League institution, or you can avoid saying much of anything about your education.
You might counter that lots of these aren’t exactly choices: Harvard chooses you, not the other way around. That’s correct, sort of. But think of it this way: The cost of adding a credential to your résumé or presenting yourself in a particular way at an interview is higher for some applicants than for others. In a sense, the “cost” for many of getting into Harvard and making it to graduation is just not within the realm of possibility. But for the candidate with the right characteristics – drive, conscientiousness, intelligence — it may not be that difficult to apply, get in, and coast through whatever the Harvard curriculum might throw at them.
In Spence’s model, the attributes that make it relatively easy to get a Harvard degree are the same characteristics that make for a productive employee. If that’s the case, then companies will do well to hire Harvard math majors, even if no one learns anything of practical use, or even any math, at Harvard. In contrast to telling a recruiter, “I’m smart,” a Harvard math degree is not something that less-able applicants can mimic or would even want to, because the cost is too great. It is a “signal,” in economics parlance.
Stripped to its essentials, the Spence signaling model simply requires that there be a link between a desirable yet hidden attribute and the cost of doing something. And that something can be anything, as long as everyone knows it’s cheap for the smart and virtuous to do it, and difficult for anyone else.
For companies, one way of sending a strong signal of commitment to the long run would be to convert the company’s savings into hard currency, cart it out to the street, and put a lit match to it. Only a company that expects to do repeat business with lots of customers is going to be willing to pay this up front “money-burning” cost.
If we don’t see many companies lighting cash-fueled bonfires, economists have argued they do the equivalent — both more credibly and more publicly — through advertising. In a classic article on advertising as money burning, “Price and Advertising Signals of Product Quality,” economists Paul Milgrom and John Roberts describe a 1983 ad announcing the introduction of Diet Coke: “a large concert hall full of people, a long chorus line kicking, a remarkable number of (high-priced) celebrities over whom the camera pans, and a simple announcement that Diet Coke is the reason for this assemblage.”
They also give the example of an even more literal take on advertising-as-pointless-destruction in a Ford Ranger ad from 1984 that “features these trucks being thrown out of airplanes (followed by a half dozen skydivers) or driven off high cliffs.” In both cases the ad is a deliberate act of pointless excess and waste — money burning.
Milgrom and Roberts, in substantiating their analysis of product launch ads as money burning, observe, “These ads carry little or no direct information other than that the product in question exists. But if that is the message being sent, these ads seem an inordinately expensive way to transmit the information. Indeed, the clearest message they carry is, ‘We are spending an astronomical amount of money on this ad campaign.’” The lavish destruction of value is the only way the signal can’t be copied by someone who doesn’t really have the money to burn.
Signaling can produce perverse incentives that can cause those without the hidden quality to try to fake it. That might help explain why, in the year 2000, 19 internet startups spent millions buying advertising time during the Super Bowl. It’s also telling that eight of the 19 — including, famously, Pets.com, with its sock puppet mascot — no longer exist. Ironically, their efforts to signal they had the deep pockets and quality offerings that would allow them to survive the internet’s winner-take-all economy may have helped to drive these big spenders into bankruptcy. You might think that such a failure rate would discourage a repeat, but the trend continues: During the 2015 Super Bowl, companies including Wix.com (a startup that helps users build websites) and Loctite (a glue maker) spent $4.5 million each for a 30-second spot. This year, the price has gone up to $5 million.
Signaling is a useful theory for helping us understand why getting a face tattoo is a great decision if you want to show everyone — your rivals, the police, potential employers — that you’ve thrown your lot in with your gang forever. But it’s harder to come up with a signal of your own to show some hidden, but valuable, quality. And even if you have the ingenuity to figure out a strong signal, like Ford burning money on an ad buy, you have to have customers who are smart enough to understand what, by driving a bunch of trucks off a cliff, you’re trying to tell them.
How much do you read?
For most of my adult life I read maybe five books a year — if I was lucky. I’d read a couple on vacation and I’d always have a few slow burners hanging around the bedside table for months.
And then last year I surprised myself by reading 50 books. This year I’m on pace for 100. I’ve never felt more creatively alive in all areas of my life. I feel more interesting, I feel like a better father, and my writing output has dramatically increased. Amplifying my reading rate has been the domino that’s tipped over a slew of others.
I’m disappointed that I didn’t do it sooner.
Why did I wait 20 years?
Well, our world today is designed for shallow skimming rather than deep diving, so it took me some time to identify the specific changes that skyrocketed my reading rate. None of them had to do with how fast I read. I’m actually a pretty slow reader.
Here’s my advice for fitting more reading into your own life, based on the behaviors that I changed:
Centralize reading in your home. Back in 1998, psychologist Roy Baumeister and his colleagues performed their famous “chocolate chip cookie and radish” experiment. They split test subjects into three groups and asked them not to eat anything for three hours before the experiment. Group 1 was given chocolate chip cookies and radishes, and were told they could eat only the radishes. Group 2 was given chocolate chip cookies and radishes, and were told they could eat anything they liked. Group 3 was given no food at all. Afterward, the researchers had all three groups attempt to solve an impossible puzzle, to see how long they would last. It’s not surprising that group 1, those who had spent all their willpower staying away from the cookies, caved the soonest.
What does this have to do with reading? I think of having a TV in your main living area as a plate of chocolate chip cookies. So many delicious TV shows tempt us, reducing our willpower to tackle the books.
Roald Dahl’s poem “Television” says it all: “So please, oh please, we beg, we pray / go throw your TV set away / and in its place, you can install / a lovely bookshelf on the wall.”
Last year my wife and I moved our sole TV into our dark, unfinished basement and got a bookshelf installed on the wall beside our front door. Now we see it, walk by it, and touch it dozens of times a day. And the TV sits dormant unless the Toronto Blue Jays are in the playoffs or Netflix drops a new season of House of Cards.
Make a public commitment. In his seminal book Influence: The Psychology of Persuasion, Robert Cialdini shares a psychology study showing that once people place their bets at the racetrack, they are much more confident about their horse’s chances than they were just before laying down the bet. He goes on to explain how commitment is one of the big six weapons of social influence. So why can’t we think of ourselves as the racehorses? Make the bet on reading by opening an account at Goodreads or Reco, friending a few coworkers or friends, and then updating your profile every time you read a book. Or put together an email list to send out short reviews of the books you read. I do exactly that each month, with my Monthly Book Club Email. I stole the idea from bestselling author Ryan Holiday, who has a great reading list.
Find a few trusted, curated lists. Related to the above, the publishing industry puts out more than 50,000 books a year. Do you have time to sift through 1,000 new books a week? Nobody does, so we use proxies like Amazon reviews. But should we get our reading lists from retailers? If you’re like me, and you love the “staff picks” wall in independent bookstores, there’s nothing as nice as getting one person’s favorite books. Finding a few trusted, curated lists can be as simple as the email lists I mentioned, but with a bit of digging you can likely find the one that totally aligns with your tastes. Some of the lists that I personally like are: Bill Gates’s reading list; Derek Sivers’s reading list; and Tim Ferriss’s list, where he has collected the recommendations of many of his podcast guests.
Change your mindset about quitting. It’s one thing to quit reading a book and feel bad about it. It’s another to quit a book and feel proud of it. All you have to do is change your mindset. Just say, “Phew! Now I’ve finally ditched this brick to make room for that gem I’m about to read next.” An article that can help enable this mindset is “The Tail End,” by Tim Urban, which paints a striking picture of how many books you have left to read in your lifetime. Once you fully digest that number, you’ll want to hack the vines away to reveal the oases ahead.
I quit three or four books for every book I read to the end. I do the “first five pages test” before I buy any book (checking for tone, pace, and language) and then let myself off the hook if I need to stop halfway through.
Take a “news fast” and channel your reading dollars. I subscribed to the New York Times and five magazines for years. I rotated subscriptions to keep them fresh, and always loved getting a crisp new issue in the mail. After returning from a long vacation where I finally had some time to lose myself in books, I realized that the shorter, choppier nature of that kind of reading was preventing me from going deeper. So I canceled all my subscriptions.
Besides freeing up mindshare, what does canceling all news inputs do? For me, it saved more than $500 per year. That can pay for about 50 books per year. What would I rather have 10 or 20 years later — a prized book collection which I’ve read and learned from over the years…or a pile of old newspapers? And let’s not forget your local library. If you download Library Extension for your browser, you can see what books and e-books are available for free right around the corner.
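The budget swap is simple arithmetic. The $10 average book price below is my assumption; the article gives only the $500 savings figure:

```python
# Redirecting subscription money into books. The $10-per-book average is an
# assumed figure; the article states only the ~$500 in annual savings.
savings_per_year = 500
price_per_book = 10

books_per_year = savings_per_year // price_per_book  # -> 50 books a year
```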
Triple your churn rate. I realized that for years I’d thought of my bookshelf as a fixed and somewhat artistic object: There it is, sitting by the flower vases! Now I think of it as a dynamic organism. Always moving. Always changing. In a given week I probably add about five books to the shelf and get rid of three or four. Books come in through lending libraries in our neighborhood, a fantastic used bookstore, local indie and chain stores, and, of course, online outlets. Books go out when we pass them to friends, sell them to the used bookstore, or drop them off at the lending library. This dynamism means I’m always walking over to the shelf, never just walking by it. As a result, I read more.
Read physical books. You may be wondering why I don’t just read e-books on a mobile device, saving myself all the time and effort required to bring books in and out of the house. In an era when our film and photography collections are all going digital, there is something grounding about having an organically growing collection of books in the home. If you want to get deep, perhaps it’s a nice physical representation of the evolution and changes in your mind while you’re reading. (Maybe this is why my wife refuses to allow my Far Side collections on her shelf.) And since many of us look at screens all day, it can be a welcome change of pace to hold an actual book in your hands.
Reapply the 10,000 steps rule. A good friend once told me a story that really stuck with me. He said Stephen King had advised people to read something like five hours a day. My friend said, “You know, that’s baloney. Who can do that?” But then, years later, he found himself in Maine on vacation. He was waiting in line outside a movie theater with his girlfriend, and who should be waiting in front of him? Stephen King! His nose was in a book the whole time in line. When they got into the theater, Stephen King was still reading as the lights dimmed. When the lights came up, he pulled his book open right away. He even read as he was leaving. Now, I have not confirmed this story with Stephen King. But I think the message this story imparts is an important one. Basically, you can read a lot more. There are minutes hidden in all the corners of the day, and they add up to a lot of minutes.
In a way, it’s like the 10,000 steps rule. Walk around the grocery store, park at the back of the lot, chase your kids around the house, and bam — 10,000 steps.
It’s the same with reading.
When did I read those five books a year for most of my life? On holidays or during long flights. “Oh! A lot of downtime coming,” I’d think. “Better grab a few books.”
When do I read now? All the time. A few pages here. A few pages there. I have a book in my bag at all times. In general I read nonfiction in the mornings, when my mind is in active learning mode, and fiction at night before bed, when my mind needs an escape. Slipping pages into all the corners of the day adds up.
The business model of research-based pharmaceutical companies is under significant pressure. Their return on R&D investment has dropped to its lowest levels in decades, and their public reputation in the United States and abroad is worse than ever.
One antidote to these problems is to transform “access to medicine” from a relentless activist slogan to a full-fledged business strategy. By that I mean that pharma companies should develop innovative treatments for pervasive unmet medical needs; avoid corruption, collusion, and other unethical marketing practices; and make sure that their products reach as many patients around the world as possible. This strategy will tap potential growth in emerging markets, limit the risks of misconduct, and improve public trust in the industry.
It’s a fact that the current business model of pharma companies is not working efficiently. For each $1 billion spent on R&D, the number of new medicines approved has halved roughly every nine years since 1950. The estimated return on these (fewer) products has itself declined substantially since 2010, from 10.1% to 3.7%.
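The halving pattern is a compounding decline: over the 63 years from 1950 to 2013, a nine-year halving time implies seven halvings, or a 128-fold drop in R&D productivity. A quick sketch, in which the 1950 baseline figure is purely illustrative:

```python
# Drugs approved per $1 billion of R&D under a constant nine-year halving
# time (a version of "Eroom's law"). The 1950 baseline value is illustrative;
# only the nine-year halving rate comes from the article.
def drugs_per_billion(year, baseline=30.0, base_year=1950, halving_years=9):
    return baseline * 0.5 ** ((year - base_year) / halving_years)

# Seven halvings between 1950 and 2013 -> a 128-fold decline
decline = drugs_per_billion(1950) / drugs_per_billion(2013)
```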
This decline can be partly explained by the transition from one-size-fits-all blockbuster drugs to niche therapies (which have smaller patient groups). However, it also reflects stronger pressures to lower medicine costs in traditional pharmaceutical markets. In just the last few months, President Trump made a commitment to bring down drug prices, high-ranking government ministers in the Netherlands published a strong call to develop alternative pharmaceutical business models, and the OECD released a report that recognized the need to rebalance the negotiating powers of payers and pharma companies.
This represents a disquieting trend for companies whose profit growth heavily depends on price increases. According to a Credit Suisse analysis of 20 leading global pharma companies, 80% of their growth in net profits in 2014 stemmed from price increases in the United States.
Undoubtedly, these findings (and related controversies over drug prices) further undermined trust in the industry. According to the 2016 Harris Corporate Reputation Poll, only one-third of U.S. citizens have a positive opinion of big pharma. An August 2016 Gallup Poll found that no industry is held in lower esteem by U.S. citizens than pharmaceuticals (the sector’s worst showing in 16 years).
This worrisome mix of little growth potential and low reputation is the main explanation for why investors are increasingly interested in how pharma companies manage access-to-medicine opportunities and risks, which range from developing new treatments for neglected populations and pricing existing products at affordable levels to avoiding corruption and price collusion.
For instance, 60 institutional investors, collectively managing more than $5.5 trillion in assets, have committed to taking into account the findings of the Access to Medicine Index while conducting their investment analyses and running their engagement meetings. (The Access to Medicine Index, which my organization produces, assesses 20 of the world’s largest pharma companies according to their efforts to reach the 2 billion people who still lack access to medicine in low- and middle-income countries.)
Improving access to medicine is also promoted by BlackRock and Ceres, a nonprofit advocate of sustainability, in their guide for institutional investors seeking to engage companies on sustainability issues, and by Morgan Stanley in a report outlining a framework for incorporating sustainability performance data into the investment-analysis process. And it is the first topic of the provisional standard for the pharmaceutical sector produced by the Sustainability Accounting Standards Board. The standard states that “a strategic approach to access to medicines can yield opportunities for growth, innovation, and unique partnerships, which can enhance shareholder value.”
Expanding access to medicines will help pharma companies enhance shareholder value in several ways:
Unlock growth potential in emerging markets. These markets are already responsible for about one-quarter of the revenues of several research-based pharma companies, and are expected to contribute 50% to 75% of the growth in global spending on pharmaceuticals in the next four years. In order to fully benefit from the growth of these countries, pharma companies should help reduce barriers to access to medicine and participate directly in the development of sustainable markets.
Mitigate the risk of unethical conduct. Companies need strict policies and strong compliance systems to avoid unethical practices (from corruption to anticompetitive measures). This prevents fines and settlements, damage to their reputations, and more burdensome regulation. Ethical conduct is particularly important in emerging markets where companies rely heavily on governments’ goodwill for market access and health care investments.
Enhance corporate reputations. A bad reputation is obviously not good for business. Restoring public trust in pharma companies would enhance their capacity to attract the best talent, encourage patients to participate in clinical trials, and obtain premium prices for truly innovative products. Also important, it could help them retain the strong patent protection that their products now enjoy.
The success of new business models depends on both the willingness and the ability of pharmaceutical companies to fully integrate access to medicine into their business strategies. Companies should take a patient-centric approach, where barriers to access are first fully understood and then proactively addressed. Moreover, they should partner with other actors, including governments, NGOs, and private foundations, to build capacities into the pharma value chain while avoiding conflicts of interest.
The message is clear: Pharma companies should treat underserved populations as a growth opportunity, not as a lost cause.
Talent is what separates the best from the rest. The best-performing companies simply have better people. Right?
That’s certainly what we thought before Bain & Company launched its in-depth investigation of workforce productivity. After assessing the practices of global companies and surveying senior executives, we discovered that the best companies have roughly the same percentage of star talent as the rest — no more, no less. It turns out that what separates the best-performing companies from others is the way they deploy talent.
Bain performed detailed organizational audits on 25 global companies. We benchmarked the practices of these organizations relative to companies widely regarded as best-in-class. To complement this research, we collaborated with the Economist Intelligence Unit to survey more than 300 senior executives from large companies worldwide. We asked them to assess their workforce and to describe their people management practices, all with an eye toward understanding the drivers of workforce productivity. What we found surprised us, at least with respect to star talent:
- On average, 15% of a company’s workforce — roughly one in seven employees — are A players, or “stars.”
- The share of star talent does not differ dramatically between the best-performing companies in our sample (the top quartile) and the rest (the average of the remaining three quartiles). Stars made up 16% of the workforce at the best companies and 14% at the rest.
What does differ between the best and the rest is how each group deploys its star talent. We found two distinct deployment models at work:
The best companies used intentional nonegalitarianism. The best-performing companies deploy their star talent in an intentionally nonegalitarian way. That is, they focus their stars on areas where these individuals can have the biggest impact on company performance. As a result, the vast majority of business-critical roles — upward of 95% — are filled by A-quality talent. In some technology companies, for example, software development is critical to business success. So the best-performing companies in this industry make sure that software development roles are filled with star talent. In other industries brand management matters more, so the A players tend to be clustered there. Stars are concentrated where they can make the biggest difference, which of course means that less A talent is available for other positions.
The rest used unintentional egalitarianism. The remaining companies in our sample deploy star talent in an unintentionally egalitarian way. In other words, these companies attempt to spread their A players more or less evenly across all roles, so that one in seven professionals in every role is a star player, and the remaining six are average players. No team has more stars than any other; no roles are seen as more important than the rest.
The egalitarian approach may seem fair, even admirable, but it does not produce superior results.
Our research suggests that people-deployment practices account for a significant portion of the difference in productivity and performance between the best and the rest. To be sure, many other practices are at play as well. But people deployment is vitally important.
What steps should organizations take to make the most of their star talent? Our research highlights five best practices:
Know who the stars are in your organization. It is difficult to deploy scarce talent effectively without first identifying your company’s A players. Most companies employ some form of assessment based on performance and potential, typically as a vehicle for determining compensation and career progression. Following this approach, A players are employees who score highly on both dimensions.
Know where your A players are (and could be) deployed. Knowing who your stars are is just the beginning. You also need to know how effectively they are being deployed. For each star in your company, ask two important and related questions:
- Where are they currently deployed? What role is each star currently playing in the organization? This information will help you assess how effectively you are deploying scarce star talent.
- How fungible are they? Could they perform some other role with the same (or similar) performance? Your most valuable people are both highly proficient in their current roles and highly versatile. If you find that you have underinvested scarce talent in a number of critical roles, versatile stars can help to fill these roles.
Identify the business-critical roles in your company. Not all roles are created equal. Some are inherently more important than others in successfully executing a company’s strategy and delivering superior performance. The best companies identify these roles explicitly. They ask themselves: “Which roles benefit the most from star talent?” and, by implication, “Which roles can we afford to fill with ‘good enough’ talent?” Having the best software programmer in the world makes little difference if your business is consumer packaged goods. But having the very best brand managers and marketers may make a big difference. The best-performing companies put their talent where the money is.
Treat star talent as a company-wide resource. Organizations commonly struggle with moving great talent from one part of the company to another. Your star talent can quickly become the property of a single business unit or function unless you have the processes and practices to ensure that these scarce resources are invested on behalf of your entire company, not just the division, business, geography, or function where they currently reside. Organizations that put these practices in place make better use of their existing talent and avoid the artificial shortages of talent that can be created by parochial hoarding of A players.
Ensure that business-critical roles get first dibs on star talent. Once your leadership has the information it needs to determine who and where the stars are in your organization, it must be ruthlessly nonegalitarian in the way it assigns talent. It must make sure that business-critical roles are filled with A players first, and then turn its attention to roles that are important but less business-critical. Only then can you be assured that your star talent is being deployed as well as possible.
Ever since the start of the “war for talent,” companies have invested billions to attract, develop, and retain the very best. Now that war looks like a stalemate: Most companies, on average, have roughly the same share of stars. The companies that perform the best are the ones that treat star talent as the scarce, hard-won resource that it is.
Here in the United States, we’re just days away from Super Bowl Sunday. The buzz around the biggest game in America’s biggest sport is, as always, about more than football. It’s also about business and leadership. Does the Patriots’ consistent excellence over the last 15 years offer insights on teamwork that transcend football? Does Bill Belichick’s unrivaled record speak to his skills not just as a coach but also as a leader from whom others can learn? Even as high-minded a publication as The Economist gets caught up every so often in the connections between sports and business. A few years back, writing about a team that was dominating a different kind of football, the magazine claimed that FC Barcelona, the renowned soccer club, “has provided a distinctive solution to some of the most contentious problems in management theory.” Wow!
So the question becomes: What can sports in general, and football in particular, teach us about competition and success, talent and teamwork, value and values? My answer, I’m afraid, is “not very much.” Sports, it turns out, are a terrible metaphor for business, and leaders who look to the gridiron or the soccer pitch for ideas about their work will be sorely disappointed.
Here’s what’s wrong with making analogies between sports and business.
The logic of competition and success is completely different. What makes football or basketball so exhilarating is that only one team wins at the end of a season. In the case of the Super Bowl, there is one world champion, and 31 NFL teams with crushed dreams and dispirited fans. For one team to win, every other team must lose. The logic of business competition is nothing like this. The most successful companies, those that win big and create the most economic value, worry less about crushing the competition than about delighting and amazing their customers. The very idea of zero-sum competition (for me to win, you must lose) feels like a relic from a long-ago era of business. Virtually every industry has room for plenty of different winners, each of which is great at serving a distinct piece of the market or a certain set of customers.
A few years ago, during the research for our book Mavericks at Work, Polly LaBarre and I spent time with Mike McCue, one of the great entrepreneurs in Silicon Valley. Here’s how he explained his approach to strategy and success: “Even in the face of massive competition, don’t think about the competition. Literally don’t think about them. Every time you’re in a meeting and you’re tempted to talk about a competitor, replace that thought with one about user feedback or surveys. Just think about the customer.”
The dynamics of talent and teamwork are completely different. You’d think business organizations would have lots to learn from high-performing sports teams such as the New England Patriots, but there are huge weaknesses in the comparisons, which make the analogy virtually useless. Most important, “teamwork” in the NFL means teamwork among players whose careers are absurdly short and whose loyalties to any one team last only as long as their contracts. According to The Wall Street Journal, the average length of an NFL career is 2.66 years.
So the job of an NFL coach is to yell, threaten, and otherwise cajole maximum effort from players who have almost no expectation of sticking around for very long. What sane company would take that approach? Organizations that are building for the long term, that hope to attract, grow, and retain the best people in their fields, that wish to create an environment where great people do their best work year after year, have little to learn from the short-term, utterly disposable mentality that defines life in the NFL. Most football teams, to be brutally honest, are a collection of mercenaries ruled by a tyrant. That’s not how great business organizations work.
The creation of economic value is completely different. Even the most ardent sports fans are quick to agree with the idea that sports is a business. And the business of sports, it turns out, may offer even fewer lessons for business leaders than what happens on the field. Unlike most billion-dollar businesses, which are owned by shareholders and governed by a board of directors, nearly every NFL team is owned by a single individual, who is accountable to virtually no one besides the other billionaire owners. The one notable exception is the Green Bay Packers, which are structured as a nonprofit organization and are run to benefit the community.
NFL owners have reaped vast riches over the last 20 years, negotiating huge television contracts, demanding big subsidies from taxpayers, and devising new ways to profit from the internet. Their hardball tactics have made them very wealthy — but very unpopular with fans. Remember that old expression “Don’t hate the player, hate the game”? Well, NFL fans (and fans of most sports, truth be told) love the players, but hate the owners. Sure, there are plenty of unpopular CEOs out there, but would any publicly traded company put up with a CEO who is as unpopular with its customers (fans) as, say, Chargers owner Alex Spanos is with the residents of San Diego, or as Rams owner Stan Kroenke is with the people of St. Louis?
And don’t even get fans started on NFL commissioner Roger Goodell, who may be the single most unpopular executive in all of sports. It’s hard to square the unprecedented popularity of football with the universal unpopularity of NFL owners, but that’s the business of sports — and another reason why sports are a lousy metaphor for business. It’s hard to learn many leadership lessons from an industry whose leaders are burned in effigy or booed at huge public gatherings.
So I hope everyone has a great time watching the Super Bowl. But the idea that what happens on the field (or in the team executives’ offices) teaches us anything about what should happen inside other organizations is misguided. It’s fun to be a student of the game, but let’s not kid ourselves that any lessons we learn from sports apply to our roles as company builders or business leaders.
Soon after taking office, the new president created a national commission to examine the impact of automation. No family should pay an unjust price for progress, he announced, yet automation should not be viewed as an enemy. “If we understand it, if we plan for it, if we apply it well, automation will not be a job destroyer or a family displacer. Instead, it can remove dullness from the work of man and provide him with more than man has ever had before.”
The U.S. president who spoke those words was Lyndon B. Johnson, and the year was 1964.
A half-century later, technology has advanced at breakneck speed. Who back then, other than science fiction writers, could have imagined Amazon’s drone shipments, the legions of robots at work today in manufacturing, or the algorithms now being used to detect cancers? Yet anxiety about automation is still with us. Today there is intense debate about the impact of technology on the economy and especially on the future of work.
It’s instructive to note how the economy has continued to prosper, and people have continued to work, since the 1960s, even as the workplace itself has been reshaped by technology. New jobs that could not have been imagined at the time, such as app developer or MRI technician, have replaced obsolete ones, such as switchboard operator. That’s a pattern we have seen since the beginning of the Industrial Revolution, two centuries ago, when more than 60% of Americans worked on the land; today it’s less than 2%. Still, we cannot help but wonder: Could this time be different?
We have just published new research about automation’s potential effects, based on an in-depth analysis of more than 2,000 workplace activities across 800 occupations. We focused on activities because all occupations consist of numerous activities, each of which can be automated to a varying degree. Within marketing, for example, some tasks can be automated easily, but others cannot.
We found that half of the activities people are paid to do in the global economy have the potential to be automated using current technology. The most automatable activities involve data collection, data processing, and physical work in predictable environments like factories, which make up 51% of employment activities (not jobs) and $2.7 trillion of wages in the U.S. These activities are most prevalent in sectors such as manufacturing, food services, transportation and warehousing, and retail.
More occupations will change than will be automated in the short to medium term. Only a small proportion of all occupations (about 5%) can be entirely automated using these demonstrated technologies over the coming decade, though the proportion is likely to be higher in middle-skill job categories. But we found that about 30% of the activities in 60% of all occupations could be automated — and that will affect everyone from welders to landscape gardeners to mortgage brokers to CEOs. We estimate that about 25% of CEOs’ time is currently spent on activities that machines could do, such as analyzing reports and data to inform decisions.
Automation’s potential is broader than it historically has been because technologies including robotics, artificial intelligence, and machine learning are increasingly able to accomplish not just physical activities but also ones that involve cognitive capabilities, from lip reading to driving. As companies deploy automation, we need to think more about mass redeployment than unemployment, and we need to equip people with the skills they will need for the workforce of tomorrow — including being able to interact much more closely with machines in the workplace. People, for their part, retain capabilities that are inherently human, including managing and developing others, along with social and emotional reasoning.
Like President Johnson in the 1960s, we see that automation could make a major contribution to productivity and prosperity. Our research suggests that future automation could raise productivity growth globally by 0.8% to 1.4% annually, which could make a meaningful contribution to global economic growth and compensate for the demographic headwinds of aging populations. For companies around the world, automation will offer the potential to capture substantial value — and not just from labor substitution. These technologies enable higher throughput, enhanced quality, better outcomes, greater safety, and the opportunity to scale up or adopt new business models.
However, just because the technological potential to automate a workplace activity exists does not mean that it will happen anytime soon. The pace and extent of automation will depend on a range of factors, of which technical feasibility is only one; there are still important barriers to overcome, including the ability of computers to generate and understand natural language. Other factors include the dynamics of labor supply and demand. If there is no shortage in the labor market for lower-wage cooks, for example, it may not make business sense to replace them with an expensive machine.
The benefits for business are relatively clear: better, smarter, error-free outcomes, along with innovation, productivity, and growth. For policy makers, the issues are more complicated. They should embrace the opportunity for their economies to benefit from automation’s productivity growth potential, and put in place policies and incentives to encourage investment in continued progress and innovation. At the same time, they must enact policies that help workers and institutions adapt to the changes in employment. This will likely include rethinking education and training, income support and safety nets, and transition support for those dislocated. Above all, a focus on the skills needed to thrive in this new era will be paramount. The lesson from history is that innovation, investment, and growth create demand and jobs that may once have seemed like science fiction.