Harvard Business Review
Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there’s still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. We can find a similar number of skeptics of AI-enabled medical diagnosis systems, such as IBM’s Watson. The public’s lack of trust in AI applications may cause us to collectively neglect the possible advantages we could gain from them.
In order to understand trust in the relationship between humans and automation, we have to explore trust in two dimensions: trust in the technology and trust in the innovating firm.
In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it can be created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome involved. Perceived risk in the context of AI stems from giving up control to a machine. Trust in automation can only evolve from predictability, dependability, and faith.
Three factors will be crucial to gaining this trust: 1) performance — that is, the application performs as expected; 2) process — that is, we have an understanding of the underlying logic of the technology; and 3) purpose — that is, we have faith in the design’s intentions. Additionally, trust in the company designing the AI, and the way the firm communicates with customers, will influence whether the technology is adopted by customers. Too many high-tech companies wrongly assume that the quality of the technology alone will influence people to use it.
In order to understand how firms have systematically enhanced trust in applied AI, my colleagues Monika Hengstler and Selina Duelli and I conducted nine case studies in the transportation and medical device industries. By comparing BMW’s semi-autonomous and fully autonomous cars, Daimler’s Future Truck project, ZF Friedrichshafen’s driving assistance system, as well as Deutsche Bahn’s semi-autonomous and fully autonomous trains and VAG Nürnberg’s fully automated underground train, we gained a deeper understanding of how those companies foster trust in their AI applications. We also analyzed four cases in the medical technology industry, including IBM’s Watson as an AI-empowered diagnosis system, HP’s data analytics system for automated fraud detection in the healthcare sector, AiCure’s medical adherence app that reminds patients to take their medication, and the Care-O-bot 3 of Fraunhofer IPA, a research platform for upcoming commercial service robot solutions. Our semi-structured interviews, follow-ups, and archival data analysis were guided by a theoretical discussion of how trust in the technology and in the innovating firm and its communication is facilitated.
Based on this cross-case analysis, we found that operational safety and data security are decisive factors in getting people to trust technology. Since AI-empowered technology is based on the delegation of control, it will not be trusted if it is flawed. And since negative events are more visible than positive events, operational safety alone is not sufficient for building trust. Additionally, cognitive compatibility, trialability, and usability are needed:
Cognitive compatibility describes what people feel or think about an innovation as it pertains to their values. Users tend to trust automation if the algorithms are understandable and guide them toward achieving their goals. This understandability of algorithms and the motives in AI applications directly affect the perceived predictability of the system, which, in turn, is one of the foundations of trust.
Trialability refers to the fact that people who could see the concrete benefits of a new technology through a trial run perceived less risk, and therefore resisted the technology less.
Usability is influenced by both the intuitiveness of the technology and the perceived ease of use. An intuitive interface can reduce initial resistance and make the technology more accessible, particularly for less tech-savvy people. Usability testing with the target user group is an important first step toward creating this ease of use.
But even more important is the balance between control and autonomy in the technology. For efficient collaboration between humans and machines, the appropriate level of automation must be carefully defined. This is even more important in intelligent applications that are designed to change human behaviors (such as medical devices that incentivize humans to take their medications on time). The interaction should not make people feel like they’re being monitored, but rather, assisted. Appropriate incentives are important to keep people engaged with an application, ultimately motivating them to use it as intended. Our cases showed that technologies with high visibility — e.g., autonomous cars in the transportation industry, or AiCure and Care-O-bot in the healthcare industry — require more intensive efforts to foster trust in all three trust dimensions.
Our results also showed that stakeholder alignment, transparency about the development process, and gradual introduction of the technology are crucial strategies for fostering trust. Introducing innovations in a stepwise fashion can lead to more gradual social learning, which in turn builds trust. Accordingly, the established firms in our sample tended to pursue a more gradual introduction of their AI applications to allow for social learning, while younger companies such as AiCure tended to choose a more revolutionary introduction approach in order to position themselves as a technology leader. The latter approach has a high risk of rejection and the potential to cause a scandal if the underlying algorithms turn out to be flawed.
If you’re trying to get consumers to trust a new AI-enabled application, communication should be proactive and open in the early stages of introducing the public to the technology, as it will influence the company’s perceived credibility and trustworthiness, which will in turn influence attitude formation. In the cases we studied, when firms effectively communicated the benefits of an AI application, users’ perceived risk fell, which resulted in greater trust and a higher likelihood of adopting the new technology.
In a traditional team structure, conflicts can be escalated to the boss to resolve. Can’t agree on how to prioritize projects, or on which deadlines need to shift? Ask the team leader to step in and make a call. Think a coworker is acting snarky, or that their work is too sloppy? Advise the manager to give them some feedback. But for flat or self-managed teams, that’s not an option. Self-managed teams must identify different ways to find and address day-to-day conflicts.
Self-managed teams can focus on three things to help them successfully resolve conflicts. (Traditionally hierarchical teams may benefit from them too.)
Encourage openness to productive conflict. First and foremost, self-managed teams must commit to openly discussing their differences. Conflict should be seen not as an annoyance that leads to anxiety and alienation, but as an opportunity for growth and strong working relationships.
To create this culture of open communication, try turning conflict resolution into an organized group activity. A technique called Planning Poker has opened my team’s eyes to just how productive having dissenting viewpoints can be. Using a point-based system, the technique encourages all team members to raise their opinions, weigh every option, and collectively vote on the best plan. Planning Poker is predominantly used by software developers, but it can facilitate virtually any business decision.
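The mechanics of a Planning Poker round are simple enough to sketch in a few lines of code. The sketch below is illustrative only (the threshold rule and the names are my assumptions, not part of any official Planning Poker specification): everyone votes in secret, votes are revealed simultaneously, and a wide spread triggers discussion and a re-vote rather than averaging dissent away.

```python
# Minimal sketch of one Planning Poker round (illustrative; real teams use
# cards or tools and repeat the discuss-and-revote loop until they converge).
FIB_DECK = [1, 2, 3, 5, 8, 13, 21]  # typical point values on the cards

def planning_poker_round(estimates):
    """Given each member's hidden estimate, decide whether the team has
    converged. Returns (consensus_value, needs_discussion)."""
    lowest, highest = min(estimates.values()), max(estimates.values())
    if lowest == highest:
        return lowest, False  # unanimous: done
    # Wide spread: the outliers explain their reasoning and the team re-votes.
    if FIB_DECK.index(highest) - FIB_DECK.index(lowest) > 1:
        return None, True
    # Adjacent cards: take the higher one as a conservative consensus.
    return highest, False

votes = {"ana": 5, "ben": 5, "chris": 8}
print(planning_poker_round(votes))  # → (8, False)
```

The point of the structure is not the arithmetic but the ritual: dissenting votes are surfaced before anyone knows the majority view, which is what makes the disagreement productive.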
Come to a common understanding about which conflicts can be resolved without the involvement of others. For example, you might develop norms about what constitutes a low-risk decision (for example, it affects few people, or the related costs fall below a certain threshold), and encourage the team to resolve low-risk conflicts without group intervention.
Prioritize accountability over blame. Autonomous teams should win and lose as a group. When shortcomings occur, teams shouldn’t assign blame to the contributors closest to the debacle. Rather than looking at who was responsible (the people involved are only a symptom), they should investigate why the issue occurred.
This mode of conflict resolution is akin to the “blameless postmortem” approach much of the technology world takes to understand why products and endeavors don’t reach their full potential. If a team is comfortable speaking openly about conflict and hardships, asking “How did this happen?” when conducting a postmortem won’t lead to the blame game; it will yield the root cause. As Etsy CTO John Allspaw says, people are “the most expert in their own error. They ought to be heavily involved in coming up with remediation items.” Punishing them for contributing to conflict discourages this productive dialogue.
To further enhance the blameless approach, a team can discuss the situation with several other teams at the company and gather multiple unbiased opinions regarding the conflict’s root cause and how it could be addressed. Even if this doesn’t result in a unanimous opinion or a clear plan of action, it shifts the focus from the responsible parties and opens the remediation process to many diverse, productive ideas.
Quantify the impact of the problem. A team at my organization was recently at odds because a developer preferred to work at night — which was inconvenient because everyone else worked during the day. This employee was absent from nearly every important meeting, and his teammates constantly found themselves taking extra time to fill him in on everything he missed.
The tension continued until the team quantified the impact of his absence. Each meeting the employee missed took 60 minutes, and the team would spend 30 more minutes recapping for him and hearing his thoughts. With six members on the team, that’s a combined three hours of unnecessary discussion. To top it off, the employee missed about 10 meetings each month, so his team was devoting more than 350 hours per year to these conversations. Instead of focusing on the symptomatic conflict and requiring the employee to work during the day every day, the team decided to develop a flexible schedule that worked for everyone. On meeting days, the night owl could arrive in the afternoon, share a few hours of overlap with everyone else, and then burn the midnight oil as he pleased.
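The team's back-of-the-envelope math above can be reproduced in a few lines; this is just the arithmetic from the story, with the figures as stated:

```python
# Cost of one missed meeting: 30 extra minutes of recap, sat through by
# the whole six-person team, is 3 person-hours of duplicated discussion.
recap_minutes = 30
team_size = 6
missed_per_month = 10
months_per_year = 12

hours_per_recap = recap_minutes * team_size / 60          # 3.0 person-hours
hours_per_year = hours_per_recap * missed_per_month * months_per_year
print(hours_per_year)  # → 360.0, i.e. "more than 350 hours per year"
```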
Quantifying the impact of conflict provides several benefits. It encourages productive conversations, creates alignment around the gravity of the issue, and unlocks creative solutions as people identify both the source and the impact of their conflicts. Assigning a numeric value to waste helps teams find better ways to reduce it.
The world is undergoing a transformation in how it gets its power. In Germany, we have a word for it: Energiewende. It means energy turning point. (We use the same word Wende to describe the fall of the Berlin Wall and all the dramatic changes that came with it.)
In this transformation, we are witnessing the decarbonization of power consumption, thanks to the large-scale deployment of renewable energy sources such as wind and solar. Earlier this year, the European Union announced that its climate and renewable energy targets—a 20% cut in greenhouse gas emissions, 20% of EU energy from renewable sources, and a 20% improvement in energy efficiency—are actually on track to realization by the year 2020.
At the same time, we’re also seeing the decentralization of power production. For example, in Germany, more than 1.5 million households supply their own electricity, either for self-consumption or directly to the central grid. In 2015, around 40% of new PV installations were accompanied by a battery. In the nation’s rural areas, more than 180 bioenergy villages have taken responsibility for their own electricity generation. Similarly, in cities, energy and housing associations are installing PV panels on multi-unit buildings, and the German ministry of economics and energy estimates around 3.8 million apartments could be supplied with PV panels placed on their rooftops. Industry players have realized the marketing and cost-saving potential, too: automaker BMW powers the plant where it manufactures the i3 and i8 electric vehicles with a 10 MW wind park, and discount retailer Aldi Süd has installed photovoltaic panels on 1,000 supermarkets. In 2016, renewable, intermittent energy sources contributed more than 30% to gross electricity generation.
Besides the environmental benefits, there are huge implications for the manufacturing sector and for national competitiveness. Countries that manage to transition effectively to low-carbon generation technologies will be home to competitive energy solutions and manufacturing firms that are more resilient to energy shocks and weather disruptions.
That’s why so many countries are moving ahead with ambitious plans in this sector. In 2016, China installed 34 gigawatts (GW) of solar photovoltaic (PV) capacity. In January, the country’s energy agency announced that it will invest $361 billion to shift from smog-generating coal power to renewables. India plans to install 100 GW by 2022, up from 4.9 GW of new installations in 2016. The United Arab Emirates is investing $163 billion in renewable energy projects, with a target of meeting nearly half of its power needs with renewables by 2050. Morocco aims to do the same by 2030. In two regions of Australia, rooftop PV penetration has already reached 30%. Around the world in 2015, additions of renewable power capacity outpaced other forms of electricity generation—coal, gas, oil, and nuclear—combined.
While regulatory policy, implementation, and rollout may differ from country to country, decentralization typically encompasses three phases. Each brings its own challenges.
Countries in the first phase, which we call “Energiewende 1.0,” focus on promoting renewable energies, such as solar, wind, biomass, or geothermal energy. Regulatory incentives include instruments like requiring utilities to source a small portion of their generation from renewable sources. Countries with a strong manufacturing base, such as China or Germany, may have a secondary objective: establishing a domestic manufacturing base for the respective renewable technology.
During this first phase of development, the total contribution of renewable power generation hovers below critical thresholds. The electricity infrastructure can cope with the additional, intermittent strain on the distribution network. Supply and demand remain largely unaffected.
Some countries such as Denmark and Germany have already entered the second phase, “Energiewende 2.0,” which is characterized by a large share of intermittent, weather-dependent power sources. In Germany, we have a word for the cloudiest days when the wind is not blowing very hard: Dunkelflaute. It means “dark doldrums.” Dealing with days like these (when both wind and solar power generation is very low) must be part of the equation as regulators and industry introduce more renewable power into a system originally designed for more flexible electric power generators such as gas-fired plants.
During this second phase, grid operators frequently have to intervene to keep the electricity grid in balance. For example, interventions in Germany’s largest transmission grid operated by private company TenneT increased from fewer than 10 interventions per year in 2003 to almost 1,400 interventions in 2015.
In the third phase, which is yet to come for any country, we predict that the electricity supply industry will be forced to leave its roots as a public infrastructure service and become truly private businesses, with customized solutions for each producer and consumer. This seems like the natural end-game for the broader decentralization patterns we’re observing. Thus markets entering “Energiewende 3.0” will have to answer two major questions. Who will bear the costs of expensive high-voltage transmission infrastructure if most supply is organized on a local or individual level? And how can governments steer the transition from a public to a private infrastructure, in particular the co-existence of both a central network and decentralized solutions?
Many governments still hesitate to foster the transition to decentralized power generation structures. It’s not easy, as the financial turmoil of major European power companies demonstrates. But electric utilities have been learning to adapt to these new realities of decentralized supply. They’re beginning to offer bundled services and package solutions instead of simply selling electrons by the kilowatt hour. We believe it is only a matter of time until flat rates for electricity become the standard.
Private-sector solutions are stepping up to meet market needs, too. So-called aggregators are now bundling the energy input of individual households to sell on wholesale markets. And demand-response providers identify companies that can temporarily switch off part of their electricity consumption—increasing the elasticity of demand to keep the grid balanced.
REstore, the European market leader in demand response, has already attracted more than 125 large industrial and commercial consumers, including heavyweights such as petrochemical company Total, steel producer ArcelorMittal, and cement manufacturer Holcim. Compensation paid to these manufacturers can amount to more than 100,000 euros per year per megawatt of avoided energy consumption.
Countries in the developing world that have historically struggled to electrify their rural areas may be able to jump ahead to the third phase more quickly. In these markets, entrepreneurs recognize opportunities in the absence of public sector solutions. For example, Bangladeshi startup SOLshare establishes peer-to-peer microgrids that deliver solar power to households and businesses. That enables people to become solar entrepreneurs, because they can trade excess electricity for profit.
Whether via community initiative, entrepreneurial disruption, or traditional supplier adaptation, the global energy transformation is underway. Inevitably, it will affect national and industry competitiveness. Manufacturers and businesses have a large stake in managing this transition effectively, whether they’re driving the changes—or simply benefitting from the flexible, decentralized system.
Innovation has always required a constant iteration of trial and error as companies use data about current performance to improve future performance. So it should come as no surprise that companies in the information age want to use ever more data to hone their products. But there is an emerging debate over the competitive implications of big data. Some observers argue that companies amassing too much data might inhibit competition, so antitrust regulators should preemptively take action to cut “big data” down to “medium data.” Others say there is nothing new here, and existing competition law is more than capable of dealing with any problems.
Among those advocating for an expansion of antitrust reviews around data are law professor Maurice Stucke and antitrust attorney Allen Grunes, who voice three interrelated concerns in Big Data and Competition Policy. First, they argue that allowing companies to control large amounts of data raises barriers to entry for potential rivals that lack enough data to develop competitive products. By this logic, deals like Facebook’s acquisition of WhatsApp should be fought, because allowing a dominant company to acquire even more data will increase its market power.
Second, proponents of this view assert that existing antitrust law is inadequate for the competitive threats stemming from large collections of data. One reason why is that much of traditional antitrust analysis focuses on the prices of goods and services, because companies with market power face incentives to limit supplies and charge more. With the profusion of “free” services, authorities may have a much tougher time adequately evaluating the implications of competition other than price, such as degradations in product quality or privacy protection.
Finally, some who worry about big data from an antitrust perspective claim that consumer protection laws are inadequate, because privacy protections are themselves a function of how much competition companies face, so antitrust regulators must step in to protect privacy.
But other antitrust scholars are much less worried, arguing that a company’s possession of large amounts of data does not automatically confer market power. For example, economists Anja Lambrecht and Catherine Tucker recently examined the use of data and found “little evidence that the mere possession of big data is sufficient protection for an incumbent against a superior product offering.” This is because there is a vibrant market in the collection and sale of all sorts of data, and new technologies have made it easier for market entrants to gather, store, and analyze the data they need. Moreover, even if the possession of large amounts of data were necessary for an entrant to compete successfully, that would not necessarily constitute an unfair competitive advantage. Many industries have high entry costs; we do not say that Ford and Daimler have an unfair advantage just because companies must build an expensive factory before they can sell a single car.
With regard to free services, while companies such as Facebook, Google, and Twitter may have a very large share of the consumer markets for their narrow service offerings, the markets themselves are two-sided — and the side where they earn most of their revenue is advertising, which is characterized by fierce competition, powerful counterparties, and constant evaluation of the relative performance of different advertising outlets. So in this case traditional concerns of abuse, such as pricing below marginal cost and product tying, don’t really apply, and can actually benefit competition and consumers.
When it comes to privacy, those who don’t believe that merely possessing lots of data is anticompetitive suggest that antitrust regulators should leave that to privacy and consumer protection regulators. In the United States, that principally means the Federal Trade Commission, which to date has largely acted on a case-by-case basis to deal with bad conduct stemming from the use of data. There is no evidence that the mere possession of more data poses any greater risk to privacy; indeed, data drives many of our most important emerging technologies, including autonomous cars, language translation, and other artificial intelligence–based innovations. Nor is there evidence that consumers are demanding more privacy protection in the products they use. Most consumers are willing to share large amounts of personal data in return for free services they value. Consumers tend to object only when their data is actually misused, something regulators already take action to address.
There is no question that diligent antitrust enforcement remains critical to ensuring competitive markets. Data-rich companies, like all companies, are capable of engaging in anticompetitive behavior. They are also capable of trying to use mergers to amass enough market power to affect prices and squeeze out competitors. Wherever this happens, antitrust agencies need to take action — and existing law gives them adequate power to address these threats. However, regulators must demonstrate a clear threat to competition to justify their actions.
Antitrust law is not meant to protect weaker companies from the consequences of fair competition or to pursue noncompetitive goals, such as privacy. Moreover, the mere possession of large amounts of data is never a cause for concern. And, in most cases, neither is using this data to produce a better product. Data-rich companies are not an economic threat, but rather are an important source of innovation. If the simple possession of data were to become a new factor in antitrust analysis, it would depress innovation when policy makers should be encouraging it.
The Great Recession of 2008 was a watershed moment in American society for many reasons — but one of the underappreciated effects may be how the recession changed gender relations in America. While the unemployment rates for men and women normally move together fairly tightly, between December 2008 and March 2010, the unemployment rate for men aged 20 and above was, on average, more than two points higher than the unemployment rate among women. Even using the more forgiving seasonally adjusted rates, in January 2010, unemployment among men was 2.3 points higher than among women (10.2 percent versus 7.9 percent). In the aggregate, this translates to millions of households in which wives earned more than their husbands, and while the number of households in which women are the chief breadwinners has been rising slowly in the US since the 1980s, the number of households in which women earned more than men underwent a sudden and unprecedented spike. By following how the social views of individual men changed over that time, we can see how important breadwinner status is for men — and how they adapt when it’s threatened.
In previous research discussed in HBR, I’ve described how concern about women earning more than their husbands decreased support for Hillary Clinton (but not Bernie Sanders) in the 2016 election, but this is only part of a long line of research on how men try to compensate when their masculinity is threatened in some way.
In a clear-cut example, researchers had men carry out what was described as either a “rope-reinforcing” or a “hair braiding” task and then gave the men the opportunity to publicly or privately indicate their sexuality. Men who were told that the task was “hair braiding” were much more likely to publicly declare their heterosexuality than those who had been doing the “masculine” task. Masculinity, in essence, is something that men earn, rather than something they naturally have, and it therefore exists in a permanently tenuous state. The man card can be revoked at any time. That means that men have to find some way to reinforce their gender role in response to anything that might be seen to threaten it. Loss of income relative to a spouse seems like an especially potent threat to masculinity: earning less than their wives has been linked to men needing erectile dysfunction medication, as well as an increased likelihood of sexual infidelity.
However, figuring out how relative income impacts political and social views is difficult, as men aren’t randomly assigned to marry someone who makes more or less than they do. If views differ between men who earn more than their wives and those who earn less, it could be because relative earnings make a difference — or it could be that the two groups were different to begin with. The best way to control for this is to make use of panel data: talking to the same men repeatedly over time to see how their attitudes have shifted. Here, I’m looking at 854 men interviewed up to three times, as part of the General Social Survey, at two-year intervals between 2006 and 2010. Each time, they were asked some of the same questions, meaning that the data can show how their views shifted in response to changes in their lives. My analysis examines how the tumultuous economy of the time affected their views on two issues: support for abortion and support for government aid to African-Americans.
I found that Republican men who contributed less to their household income than they did two years prior became significantly less supportive of abortion rights, and the more income that they lost relative to their spouses, the more their support for abortion dropped. About a quarter of men lost 38% or more in relative income (dropping, for instance, from 70% of the family’s income to 32%), and men who saw an income drop of that magnitude dropped by an average of 0.3 points on the eight-point abortion scale (respondents were asked about seven situations in which a woman might want an abortion, and their score went up by one point each time they responded that they would support a woman’s right to have an abortion under those circumstances). Men who lost more in relative income — those in the top 10% of income losses — saw bigger decreases: up to 0.8 points on the scale.
Among Democratic men, losing income relative to their spouse led them to be, on average, about 0.5 points more supportive of abortion rights, while men who gained income relative to their spouses actually became less supportive. When faced with gender role threat, liberal men come to hold more liberal views on abortion, while conservative men come to hold more conservative views. To put these changes in perspective, the average difference between a liberal and a conservative on this scale is about two points, so moving even a tenth of a point represents a significant shift.
Similar effects hold for views on government aid to African Americans. It would normally be expected that individuals expressing extreme views would be a little less extreme two years later, and Democratic men do generally become a little less supportive, and Republican men generally become a little more supportive. However, Democratic men who lost income relative to their spouses became more supportive of aid to African Americans, while Republican men became less so.
Results like this lead to a few conclusions. First, while it has been decades since most married women were homemakers, being the primary breadwinner in a household is still a big part of the gender identity of many American men. Money and work are about more than finances — they’re tied deeply to how people view themselves. While men with strong political views seem to react to a threat to their gender role by doubling down on their political and social attitudes, it seems that not all men have the same conceptions of masculinity; perhaps for liberal men, the shock of losing relative income has led these men to conclude that they can’t define themselves through traditionally masculine roles, and led them to rebel against them as best they can. (Registered Independents would presumably react differently, though I didn’t examine them in my analysis.) Whatever the mechanism, more and more men will find themselves dealing with losing relative income as technological innovations increasingly displace male-dominated jobs, and men will need to find a way to define their identity in a way that doesn’t require that they earn more than their wives.
We are living in the age of the superstar firm. Companies like Samsung, Google, or BMW—the top players in their respective industries—are prospering. Yet economic growth remains sluggish in many parts of the world. The reason for that paradox, as the OECD has warned, is that the productivity gap between firms at the global frontier and those lagging behind has widened. Frontier firms are able to employ the most advanced technologies, which in turn allow them to win market share at the expense of their less productive competitors. And the globalized markets that frontier firms operate in disproportionately reward their knowledge advantage, setting them even further apart from the rest.
In a recent Harvard Business Review article, Nicholas Bloom of Stanford University argued that this type of “winner takes most” competition is an important driver of rising income inequality. The Googles of the world, in their global hunt for talent, are extremely generous when it comes to employees’ salaries. Meanwhile, wages are stagnating for many workers at less successful firms.
Several explanations have been proposed for the emergence of this “winner takes most” competition: a drop in search and transaction costs because of the Internet; network effects; the ability to scale up quickly due to IT and automation.
My analysis suggests another driver: R&D investment is increasingly concentrated in a few top firms. Some firms are investing heavily in R&D to expand their technological capabilities, while others don’t make that investment and so fall further behind. I believe this could be one of the main reasons for the widening productivity gap we observe.
Take the example of Germany: Between 2003 and 2015, R&D expenditure in the business sector increased by 59%, reaching a record high of 157.4 billion euro. Over the same period, however, the share of firms in the economy investing in R&D fell from 47% to 35%. In particular, small and medium-sized enterprises reduced their innovation efforts. So even as R&D expenditure has risen, it has become more and more concentrated within a smaller share of firms. The Gini coefficient—a commonly used measure of inequality—has been increasing steadily in Germany since the mid-1990s.
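For readers unfamiliar with the measure, the Gini coefficient can be computed in a few lines. The firm-level R&D figures below are made up for illustration and are not the German data cited above:

```python
def gini(values):
    """Gini coefficient: 0 means perfectly equal spending,
    values approaching 1 mean spending is fully concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard discrete formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Hypothetical R&D budgets (millions of euros) for ten firms
equal_spread = [10] * 10
concentrated = [1, 1, 1, 1, 1, 1, 1, 1, 1, 91]
print(round(gini(equal_spread), 2))   # → 0.0
print(round(gini(concentrated), 2))   # → 0.81
```

When one firm accounts for 91% of total spending, the coefficient jumps toward 1, which is the kind of steady rise the German business-sector data show.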
Whether the same thing is happening in other countries remains an open research question. More often than not, researchers are constrained by the lack of good data sources. Nonetheless, U.S. data show something similar. Overall, business R&D increased by 67% between 2003 and 2014. And the increase was largest for the firms investing the most. In 2014, the hundred U.S. companies with the largest R&D budgets invested 92% more in innovation than in 2003. And the gap between how much large firms spend on R&D and how much smaller ones spend has widened noticeably since 2009. Moreover, recent research suggests that basic research activities are now more concentrated in specialized firms than was the case several decades ago.
It’s unrealistic to expect every firm to invest in R&D. Yet, the concentration of this crucial activity is quite concerning. A higher concentration of innovation efforts can be a major source of productivity differences between firms, and economists, policy makers, and business leaders should pay close attention to these trends. Competition at the global research frontier is getting more and more fierce. At the same time, many firms seem to be unable to keep up with the pace at which this development is unfolding. Those left standing become the superstar firms. The rest get left behind.
United Airlines is pledging to train its workers better to ensure that “employees are prepared and empowered to put our customers first” in the wake of a video showing a passenger being dragged from a plane. It’s a new turn in what has been a PR disaster for the company.
The public reacted to the video with horror. Those flight attendants must have been appalled, too, as they watched the customer — who just a few minutes earlier was supposed to have been greeted on the plane with smiles and welcomes — being dragged, face bleeding, past other customers. What must they be thinking now? We were powerless to intervene, they might say. Civility was no longer an option. We called security. That was what management told us to do.
In customer service settings, it used to be that there were two modes: a friendly face of service delivery and a grim face, saying “call security.” It’s an outdated way of thinking, in the U.S. in particular, which practically invented the art of at-scale customer service design. And it fails to take into account the power of social media to punish companies whose employees go against their instincts as decent people because they felt bound to follow orders.
It took two days to get an apology from United Airlines CEO Oscar Muñoz after the video surfaced, but the company’s management now appears to recognize that its frontline service providers set in motion something with awful consequences. Of all the U.S. air carriers, United should have known the power of social media and public outrage in the face of rigid company policy. In 2010 I wrote the case study “United Breaks Guitars” about Dave Carroll, a musician whose guitar was damaged at Chicago’s O’Hare International Airport (the same one as in this latest furor) during transfer between planes, in full view of passengers. The musician tried for 15 months to obtain compensation. A customer service representative eventually told him that the damage was Carroll’s responsibility.
The case wasn’t closed for Carroll. He wrote a song about it and posted it to YouTube. He also blogged about the ordeal and tweeted the link. United saw it by noon on the first day and reached out to Carroll within an hour. But it was already too late – what wasn’t working for United was working for his career. Within a week, his video “United Breaks Guitars” had been viewed 3 million times.
The 2009 case highlights the ability of consumers to talk back to corporations and just how fast viral content spreads. MBA classes have debated why it was so difficult to quash a social media firestorm, and how corporations should organize rapid responses. The years since have seen the spread not only of smartphones but of publishing platforms for streaming video, such as Facebook Live, Periscope, Snapchat, and Instagram Stories, and of platforms for memes and photographs such as Imgur and Reddit. In this week’s incident, once people had shared their videos of the passenger being dragged, mainstream media outlets had all they needed to spread the word further, creating a furor.
But the moral of this story is not how to do crisis management faster and better in a lightning-fast digital world. It’s that even the nimblest and deftest crisis management response cannot contain the damage of going straight to “call security” and crossing the last line of defense in customer service design. Meanwhile, an audience of customers standing by with their smartphones to record any spectacle seems, until now, to have had no discernible impact on corporate policy guidelines for dealing with uncooperative customers. Company leaders need to offer better employee training on all these points.
Consider also that today’s customer service systems are implemented with the help of computer technology intended to give workers better parameters for their decision making and actions. Computer systems and algorithms are now part of customer service decision making. If an airline like United chooses to invoke its power to deny boarding to some passengers, it will likely run an algorithm to select those unlucky people. That algorithm will choose customers based on price paid for the ticket, frequent flyer status, and many other factors, just as it does thousands of times a year. That and many other decisions are made more reliably by a machine than by a human. But the machine’s output should be a suggestion, not a command backed up by a threat to call security. Human judgment is still one of the most important customer service tools we have.
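To make that point concrete, here is a hypothetical sketch of what such a selection algorithm might look like. Every factor name and weight below is invented for illustration; no airline’s actual criteria are public in this form.

```python
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float        # ticket price in dollars
    elite_tier: int         # 0 = no frequent-flyer status, 3 = top tier
    checked_in_early: bool

def bump_score(p: Passenger) -> float:
    """Higher score = more likely to be suggested for denied boarding.
    Weights are illustrative, not any airline's real policy."""
    score = 100.0 - min(p.fare_paid / 10.0, 50.0)  # cheaper tickets score higher
    score -= 20.0 * p.elite_tier                    # status protects passengers
    if p.checked_in_early:
        score -= 10.0
    return score

def suggest_for_denied_boarding(passengers, seats_needed):
    """Return a *suggestion* list; a human agent makes the final call."""
    ranked = sorted(passengers, key=bump_score, reverse=True)
    return ranked[:seats_needed]
```

The design point the article makes sits in the last function: the ranked list is returned as a suggestion for a human agent to review, not executed automatically as a command.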
Machines follow orders. People use discretion. Learning the importance of that truism is the lesson of this awful situation, and it will be a lesson of growing relevance and application as algorithms and machines play ever larger roles in service delivery.
The best teachers all have at least one thing in common: they ask great questions. They ask questions that force students to move beyond simple answers, that test their reasoning, that spark curiosity, and that generate new insights. They ask questions that inspire students to think, and to think deeply.
As a business leader, you might have years of experience and the confidence of your organization behind you, so it may be tempting to think that your job is to always have the right answers. But great leaders have to inspire the same curiosity, creativity, and deeper thinking in their employees that great teachers inspire in their students – and that starts with asking the right questions. Any answer is only as good as the question asked.
As a dean, I find it useful to remember the statement often (perhaps spuriously) attributed to Albert Einstein that if he had an hour to solve a problem, and his life depended on it, he would spend the first fifty-five minutes determining the proper question to ask.
Yet asking a good question is not an easy task. It requires us to look beyond simple solutions and to encourage colleagues to do the same. It requires courage and tact to raise hard questions without sparking defensiveness, as well as openness to new ideas and a willingness to question untested assumptions. And it requires being willing to listen and follow up.
I believe there are some essential questions that are useful across a variety of contexts, including, and perhaps especially, the workplace. In fact, I gave a commencement speech last year on this topic, suggesting to students from the Harvard Graduate School of Education that there are really only five essential questions in life. Although the audience was future educators, I believe these questions are equally valuable for anyone in a position to lead or influence others.
“Wait, what?”
Too often, we jump to conclusions without having enough information. We listen just long enough to form a quick opinion, and then we either endorse or oppose what has been said. This puts us at risk of making faulty judgments, leaving key assumptions untested, and missing out on potential opportunities.
Leaders (as well as their employees) need to be able to ask colleagues and direct reports to slow down and explain in more detail what is being proposed, especially if something doesn’t quite sound right or seems too easy to be a lasting solution. Asking “Wait, what?” is an exercise in understanding, which is critical to making informed judgments and decisions—whether in the office or the boardroom.
“I wonder why …?” or “I wonder if …?”
Children are far better than adults at questioning the world around them – nothing is beyond interrogation. When children wonder why the sky is blue, they prompt others to think, reason, and explain things anew. Similarly, leaders have to remain curious about their organizations in order to bring new ideas to bear on longstanding challenges.
Wondering why something is the way it is will sometimes lead to an unsatisfactory answer—as in, we do it this way because it’s easier and that’s the way we have always done it. But asking “I wonder why…” is the first step in overcoming the inertia that can stifle growth and opportunity for leaders and employees alike. That’s because it inevitably leads to the perfect follow-up: “I wonder if things could be done differently?” This can begin the process of creating change by sparking the interest and curiosity of those with whom you work.
“Couldn’t we at least…?”
Most of us have had the experience of sitting through a contentious meeting, where stakeholders are polarized, progress is stalled, and consensus feels like a pipe dream. Asking “couldn’t we at least?” is the question that can help you and your colleagues get unstuck on an issue. It can get you started on a first step, even if you are not entirely sure where you will end. Perhaps you might first find some common ground by asking: “Couldn’t we at least agree on some basic principles?” or “Couldn’t we at least begin, and re-evaluate at a later time?”
“How can I help?”
The instinct to lend a hand to someone in need is one of our most admirable traits as human beings, but we often don’t stop to think about the best way to help. Instead, we swoop in and try to save the day. This frequently does more harm than good: it can unintentionally disempower, or even insult, those who need to take charge.
So when a colleague or direct report is complaining about an issue or expressing frustration, rather than jumping to offer solutions, try asking, “How can I help?” This forces your colleague to think clearly about the problem to be solved, and whether and how you can actually help. It helps your colleagues define the problem, which is the first step toward owning and solving it.
“What truly matters?”
This question might seem obvious, but I don’t think any of us ask it often enough. “What truly matters?” is not a question that you should wait to ask when you are on vacation or are retired. It should be a regular conversation, externally and internally. For example, it’s a useful way to simplify complicated situations, like sensitive personnel issues. It can also help you stay grounded when you have grand ambitions, like an organizational restructuring. And it can make even your weekly meetings more efficient and productive, by keeping people focused on the right priorities. Asking this often will not only make your work life smoother, but also help you find balance in the broader context of your life.
Leaders should ask these questions both on a daily basis and during critical moments. Of course, these aren’t the only questions to ask; context certainly matters. But I have found these five to be a very practical and useful way to ensure understanding, generate new ideas, inspire progress, encourage responsibility, and remain focused on what is genuinely important.
The cost of child care in the United States is high. This is not news. The 2016 Care Index, released by the New America Foundation and Care.com, paints an increasingly troubling picture of what child care costs right now. The national average for at-home care is $28,354 per year, while in-center care is $8,589 per year. Some of the most expensive metro regions include private-sector-rich NYC, Boston, Atlanta, Los Angeles, and San Francisco. These are the very same cities where employers struggle to recruit and retain skilled talent.
Here’s how Lauren Smith Brody describes the problem in The Fifth Trimester: The Working Mom’s Guide to Style, Sanity, and Big Success After Baby: “Many of the women I interviewed for my book talked about a terrifying moment of new-mom math that they did in their heads: How much of my net salary after taxes will go towards paying for child care? Often there was a very slim margin or none at all…. In that moment, when they’re already heading back before a too-short, unpaid maternity leave, before they’re emotionally or physically ready, this is a particularly cruel realization.”
My work with the It’s Working Project reveals a very candid view from the parental perspective. The project has asked hundreds of parents to share their personal experiences of going back to work after baby. These first-person narratives expose not only trends but also consistent concerns and challenges affecting parents as they return to work. Child care and genuine support (or lack thereof) from the workplace are consistently at the top of the list. Support from employers makes or breaks the deal, creating either a manageable new reality or the need for a backup plan (or even an exit).
Some organizations are helping employees make the most of existing, subpar child care realities, or going further and actively helping employees figure out better, more-affordable child care options.
Making the Most of Existing Child Care Realities
Set regular start and end times for meetings. Some organizations have implemented a policy that no meetings will start prior to 9:30 AM or end later than 4:30 PM. This simple move cuts down on the anxiety surrounding timely daycare pick-up and drop-off, and the expense related to daycare overtime charges. When parents aren’t worried about running late, they can keep their mental energies focused on the business.
Make schedules predictable. Dina Bakst, founder of New York–based A Better Balance, is an advocate of making schedules predictable and avoiding telling employees at the last minute that they need to stay late, come in early, or travel on short notice. “This can be hugely stressful when you have to arrange daycare,” she says. During my recent conversation with Dina, she shared a simple and effective strategy: “Schedule your work in a way that allows your employees to predict when they need to be available.”
Offer flexibility. I spoke with BirchBox’s VP of People and Culture, Melissa Enbar. She explained how her organization allows employees to set schedules that work for them, by providing flexibility about the hours they work and the location they work from. Investing in conference room technology “so you can dial in from anywhere and still feel like you are in the room with your colleagues” is an important enabler of flexible work, she says. “We ask that new parents put their availability (time in and out) onto their calendars and [that] colleagues schedule meetings around their needs,” she continues. “We are respectful of their set work hours, and they always have the option to dial in to a meeting that may be earlier or later than usual.”
Offer access to flexible spending accounts, and educate employees on how to use them. FSA accounts are a generous benefit. But it takes more than just access to this type of pre-tax account to create the highest level of benefit for new parents. As Enbar explained, “Most companies offer the option for child care Flex Spending Accounts to cover child care costs pre-tax. We spend the time to educate employees during onboarding or after a life change event so they understand how to use these accounts for significant savings.” On a larger scale, a Bank of America spokesperson shared in recent emails how the bank offers a full range of financial counseling services with a focus on child-related expenses that are both continuous and overwhelming to parents. The Benefits Education & Planning Center is staffed with licensed counselors who specialize in Bank of America’s benefits programs, products, and employee discounts. They can offer guidance on the costs of child care, budgeting for expenses, and FSAs, among other topics. The service is confidential and available at no cost to employees.
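The tax mechanics behind that advice are simple. A rough sketch, assuming the $5,000 dependent-care FSA contribution limit and an illustrative 25% federal marginal rate (state taxes and phase-outs are ignored):

```python
def fsa_tax_savings(contribution: float, marginal_rate: float,
                    fica_rate: float = 0.0765) -> float:
    """Approximate annual tax saved by paying child care expenses pre-tax.
    FSA dollars avoid both income tax and FICA payroll tax.
    Illustrative only; real savings depend on individual tax situations."""
    return contribution * (marginal_rate + fica_rate)

# A household contributing the $5,000 dependent-care FSA maximum,
# with an assumed 25% federal marginal rate:
print(round(fsa_tax_savings(5000, 0.25), 2))  # → 1632.5
```

Over $1,600 a year is real money for a family doing the “new-mom math” described above, which is why onboarding-time education on these accounts pays off.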
Help with back-up care. Sometimes the best-laid plans fall apart. Sometimes that’s due to a sick child, or a sick child care provider. Sometimes, work travel produces the need for more or different coverage. This is where back-up care comes in. Many employers offer back-up care, which comes in many shapes and sizes. According to Jon, an employee of Brigham and Women’s Hospital, in Boston, the institution offers a reduced rate for six days of emergency care at home through Care.com, plus access to a backup child care center. This service can be activated with as little as two hours’ notice. Talk about crisis averted!
Offer on-site care. While this is not the most inexpensive benefit, having an on-site family center addresses many working-parent child care concerns and expresses a genuine interest in supporting parents in the workplace. At Campbell’s Soup, in addition to a 10-week, gender-blind parental leave policy and more than 12 family lounges at company headquarters, the company offers a Family Center at its headquarters in Camden, N.J., for infants through kindergarten. In my recent correspondence with Kerrin Donnelly, Director of Global Integrated Facilities, I learned, “This is subsidized, and we provide food, with a focus on healthy, nutritious options. Backup care is also available at the Family Center.”
Child care can be expensive and hard for working parents to navigate, even in the best of circumstances. Smart, compassionate companies help their employees through this minefield, recognizing that it could be the benefit that matters most for employee retention. Policies and programs can help with the practicalities of care and express an authentic desire to do well by your firm’s working parents.
Every few months it seems another study warns that a big slice of the workforce is about to lose their jobs because of artificial intelligence. Four years ago, an Oxford University study predicted 47% of jobs could be automated by 2033. Even the near-term outlook has been quite negative: A 2016 report by the Organization for Economic Cooperation and Development (OECD) said 9% of jobs in the 21 countries that make up its membership could be automated. And in January 2017, McKinsey’s research arm estimated AI-driven job losses at 5%. My own firm released a survey recently of 835 large companies (with an average revenue of $20 billion) that predicts a net job loss of between 4% and 7% in key business functions by the year 2020 due to AI.
Yet our research also found that, in the shorter term, these fears may be overblown. The companies we surveyed – in 13 manufacturing and service industries in North America, Europe, Asia-Pacific, and Latin America – are using AI much more frequently in computer-to-computer activities and much less often to automate human activities. “Machine-to-machine” transactions are the low-hanging fruit of AI, not people-displacement.
For example, our survey, which asked managers of 13 functions, from sales and marketing to procurement and finance, to indicate whether their departments were using AI in 63 core areas, found AI was used most frequently in detecting and fending off computer security intrusions in the IT department. This task was mentioned by 44% of our respondents. Yet even in this case, we doubt AI is automating the jobs of IT security people out of existence. In fact, we find it’s helping such often severely overloaded IT professionals deal with geometrically increasing hacking attempts. AI is making IT security professionals more valuable to their employers, not less.
In fact, although we saw examples of companies using AI in computer-to-computer transactions such as recommendation engines that suggest what a customer should buy next, online securities trading, and media buying, we saw that IT was one of the largest adopters of AI. And it wasn’t just to detect a hacker’s moves in the data center. IT was using AI to resolve employees’ tech support problems, automate the work of putting new systems or enhancements into production, and make sure employees used technology from approved vendors. Between 34% and 44% of global companies surveyed are using AI in their IT departments in these four ways, monitoring huge volumes of machine-to-machine activities.
In stark contrast, very few of the companies we surveyed were using AI to eliminate jobs altogether. For example, only 2% are using artificial intelligence to monitor internal legal compliance, and only 3% to detect procurement fraud (e.g., bribes and kickbacks).
What about the automation of the production line? Whether assembling automobiles or insurance policies, only 7% of manufacturing and service companies are using AI to automate production activities. Similarly, only 8% are using AI to allocate budgets across the company. Just 6% are using AI in pricing.
Where to Find the Low-Hanging Fruit
So where should your company look to find such low-hanging fruit – applications of AI that won’t kill jobs yet could bestow big benefits? From our survey and best-practice research on companies that have already generated significant returns on their AI investments, we identified three patterns that separate the best from the rest when it comes to AI. All three are about using AI first to improve computer-to-computer (or machine-to-machine) activities before using it to eliminate jobs:
Put AI to work on activities that have an immediate impact on revenue and cost. When Joseph Sirosh joined Amazon.com in 2004, he began seeing the value of AI to reduce fraud, bad debt, and the number of customers who didn’t get their goods and suppliers who didn’t get their money. By the time he left Amazon in 2013, his group had grown from 35 to more than 1,000 people who used machine learning to make Amazon more operationally efficient and effective. Over the same time period, the company saw a 10-fold increase in revenue.
After joining Microsoft Corporation in 2013 as corporate vice president of the Data Group, Sirosh led the charge in using AI in the company’s database, big data, and machine learning offerings. AI wasn’t new at Microsoft. For example, the company had brought in a data scientist in 2008 to develop machine learning tools that would improve its search engine, Bing, in a market dominated by Google. Since then, AI has helped Bing more than double its share of the search engine market (to 20%); as of 2015, Bing generated more than $1 billion in revenue every quarter. (That was the year Bing became a profitable business for Microsoft.) Microsoft’s use of AI now extends far beyond that, including to its Azure cloud computing service, which puts the company’s AI tools in the hands of Azure customers. (Disclosure: Microsoft is a TCS client.)
Look for opportunities in which AI could help you produce more products with the same number of people you have today. The AI experience of the 170-year-old news service Associated Press is a great case in point. In 2013, AP found a seemingly insatiable demand for quarterly earnings stories, but its staff of 65 business reporters could write only 6% of the earnings stories possible, given America’s 5,300 publicly held companies. The earnings news of many small companies thus went unreported on AP’s wire services (other than the automatically published tabular data). So that year, AP began working with an AI firm to train software to automatically write short earnings news stories. By 2015, AP’s AI system was writing 3,700 quarterly earnings stories – 12 times the number written by its business reporters. This is a machine-to-machine application of AI. The AI software is one machine; the other is the digital data feed that AP gets from a financial information provider (Zacks Investment Research). No AP business journalist lost a job. In fact, AI has freed up the staff to write more in-depth stories on business trends.
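AP’s actual system is proprietary, but the machine-to-machine pattern it illustrates (structured earnings data in, templated prose out) can be sketched in a few lines. The field names, sample record, and wording below are invented for illustration:

```python
def earnings_story(record: dict) -> str:
    """Turn one structured earnings record (as a data feed might supply it)
    into a short templated news brief. Fields and phrasing are hypothetical,
    not AP's actual format."""
    eps = record["eps"]
    est = record["eps_estimate"]
    verdict = "beat" if eps > est else "missed" if eps < est else "met"
    return (
        f"{record['company']} reported quarterly earnings of "
        f"${eps:.2f} per share, which {verdict} analyst expectations "
        f"of ${est:.2f}. Revenue came in at ${record['revenue_m']:.0f} million."
    )

# One item from a hypothetical financial data feed
feed_item = {"company": "Acme Corp", "eps": 1.42,
             "eps_estimate": 1.35, "revenue_m": 512}
print(earnings_story(feed_item))
```

The feed supplies the facts and the software supplies the prose; no reporter sits between the two machines, which is exactly why this kind of application scales from hundreds of stories to thousands.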
Start in the back office, not the front office. You might think companies will get the greatest returns on AI in business functions that touch customers every day (like marketing, sales, and service) or by embedding it in the products they sell to customers (e.g., the self-driving car, the self-cleaning barbeque grill, the self-replenishing refrigerator, etc.). Our research says otherwise. We asked survey participants to estimate their returns on AI in revenue and cost improvements, and then we compared the survey answers of the companies with the greatest improvements (call them “AI leaders”) to the answers of companies with the smallest improvements (“AI followers”). Some 51% of our AI leaders predicted that by 2020 AI will have its biggest internal impact on their back-office functions of IT and finance/accounting; only 34% of AI followers said the same thing. Conversely, 43% of AI followers said AI’s impact would be greatest in the front-office areas of marketing, sales, and services, yet only 26% of the AI leaders felt it would be there. We believe the leaders have the right idea: Focus your AI initiatives in the back-office, particularly where there are lots of computer-to-computer interactions in IT and finance/accounting.
Computers today are far better at managing other computers and, in general, inanimate objects or digital information than they are at managing human interactions. When companies use AI in this sphere, they don’t have to eliminate jobs. Yet the job-destroying applications of AI are what command the headlines: driverless cars and trucks, robotic restaurant order-takers and food preparers, and more.
Make no mistake: Automation and artificial intelligence will eliminate some jobs. Chatbots for customer service have proliferated; robots on the factory floor are real. But we believe companies would be wise to use AI first where their computers already interact. There’s plenty of low-hanging fruit there to keep them busy for years.
Maybe it’s your performance review. Or a 360-degree feedback report. Or (unsolicited) advice from a colleague. Maybe you got a dressing-down from an angry client. Or overheard the nickname your team has for you. Whatever it was, it was wrong. Off-base. Unfair. They don’t understand the situation. They don’t even really know what you do. And besides, their advice wouldn’t even work.
Getting feedback that seems just plain wrong can be isolating, painful, and maddening. What should you do when this happens to you?
The first thing to do is nothing. Don’t decide whether or not you agree with the feedback…yet. This isn’t easy. But you need to give yourself time to more clearly understand the feedback before you accept or reject it.
Take Nira (not her real name), a highly regarded creative director in the digital media industry who was three months into a new job when her CEO sat her down to say: “I need you to be more creative.”
Nira’s thoughts raced with all the reasons this feedback was ridiculous: I am the creative director of this company. Cre-a-tive is in my job title! This is contrary to every piece of feedback I’ve received my entire career. You wouldn’t know creative if it smacked you in the face. I’m nothing if not creative.
Outwardly, Nira smiled tightly, thanked her boss and walked out, searching her phone for the headhunter’s number.
Nira’s reaction is natural. In order to decide whether to accept or reject the feedback, we automatically scan for what’s wrong with it: who gave it to us, why we suspect they gave it to us, when, where, or how they gave it to us, why it isn’t true or wouldn’t work. There are two problems with “wrong-spotting”: first, you will always be able to find something wrong with the feedback; and second, you’ll dismiss it too quickly — before you actually understand what the feedback giver is trying to tell you.
So dig deeper. Most feedback arrives in the form of a vague label: “You need to step it up,” “Show more leadership,” “Think more strategically,” or “Be more creative.” It’s easy to jump to what these labels mean to us, and assume we know what they mean to the feedback giver. Yet these labels are — at best — loose approximations of what they are trying to say.
The feedback we get always has a past: looking back, Nira’s boss was trying to describe a set of observations, examples, and expectations of what she would or should do, or perhaps opportunities he felt she missed that bore on her creativity. The feedback also has a future: the CEO must have some specific ways he would like her to approach things differently. So before Nira can decide what’s wrong — or right — about his feedback, she needs to understand where his feedback is coming from, and where it is going to. She needs to ask questions like:
- When you say “creative,” can you say more about what you mean?
- Can you be a bit more specific about particular times or instances I wasn’t creative?
- Can you give examples of what “creative” would feel like to you? What specifically are you suggesting I do differently?
After some deep breaths and a pep talk in the bathroom mirror, Nira did go back to ask some questions. She learned that the CEO wasn’t referring to her client work at all. He meant that he wanted her to rethink how she was managing her team meetings. He observed that she did a lot of the talking, leaving little space for some of her quieter — but wildly talented — team members. He had surprisingly good ideas about how to get some of the more hesitant voices into the room. There was more value in his feedback than she initially assumed.
It’s easy to criticize the CEO for being so unclear — the gap between what he meant by “creative” and what anyone might imagine he meant is large. But any label he used would need some exploration in order to clarify what he was worried about, or what he was recommending she do. So always assume givers will need help articulating what they mean. And that the way to help them — and yourself — is by asking clear and curious questions without a defensive tone.
Check blind spots by asking two questions: “What’s wrong?” and “What might be right?” Sometimes feedback doesn’t feel “true” to us because we’re simply unaware of it. It sits squarely in a blind spot. To get a clearer idea of what you might be missing, ask a friend.
Take Jake (also not his real name). He received feedback that he needed to “watch his attitude.” In Jake’s mind, this was preposterous; his attitude was great. He was tireless and devoted.
He went to vent to a colleague down the hall, who was quick to be supportive: “That’s crazy! No one works as many hours a week as you do. You are always here.” She provided what Jake was implicitly asking for — a friend to support him by validating what was “wrong” with the feedback.
Most of us stop there, reassured. But if you want to check your blind spots, ask your friend a second question: Okay, is there anything that might be right about the feedback?
Jake’s colleague — after a pause — offered this: “You do work a lot of hours, but every time you are asked to stay late, you sort of sigh and complain that you have no life. You actually do give off serious attitude.”
While there will always be something wrong with the feedback you get — maybe even 90% — there will also almost always be something right that you can learn from. Our friends and colleagues are well placed to help us see that last 10%. But they won’t do so unless we explicitly ask, and demonstrate that we won’t shoot the messenger.
Receiving feedback well doesn’t mean you have to take the feedback. Being good at receiving feedback means just that: that you receive it. That you hear it. That you work to understand it. That you share your perspective on it. That you reflect on it. That you sit with it. That you look for that (even tiny) bit that might be right and of value. Then you get to decide whether or not to act on it.
Whatever you decide, circle back to your feedback giver to share your thinking. If you don’t, they’ll think you didn’t hear them, or didn’t care. Letting them know you took their input seriously will strengthen the relationship even if you ultimately go in a different direction.
Scott D. Anthony, Innosight managing partner, discusses why established corporations should be better at handling disruptive threats. He lays out a practical approach to transform a company’s existing business while creating future business. It hinges on a “capabilities link,” which means using corporate assets—that startups don’t have—to fight unfairly. He also discusses the leadership qualities of executives who effectively navigate their companies’ imminent disruption. Anthony is the coauthor of the new book, Dual Transformation: How to Reposition Today’s Business While Creating the Future.
A recent New York Times article on how Uber is using various insights from behavioral economics to push, or nudge, its drivers to pick up more fares — sometimes with little benefit to them — has generated quite a bit of criticism of Uber. It’s just one of several stories of late that have cast the company in a poor light.
When I read the piece, it reminded me of a question executives often ask me when I talk to them about the benefits of behavioral economics or give them examples of how they could use it in their own organizations: “Aren’t you afraid it will be used with ill intent?”
I always respond that, like many tools, it can be used in good and bad ways. Before I delve into the differences between the two, I should first make sure you are familiar with the somewhat new field of behavioral economics.
According to the traditional view in economics, we are rational agents, well informed with stable preferences, self-controlled, self-interested, and optimizing. The behavioral perspective takes issue with this view and suggests that we are characterized by fallible judgment and malleable preferences and behaviors, can make mistakes calculating risks, can be impulsive or myopic, and are driven by social desires (e.g., looking good in the eyes of others). In other words, we are simply human.
Behavioral economics starts with this latter assumption. It is a discipline that combines insights from psychology, economics, judgment and decision making, and neuroscience to understand, predict, and ultimately change human behavior in ways more powerful than any one of those fields could achieve on its own. Over the last few years, organizations in both the private and public sectors have applied insights from behavioral economics to address a wide range of problems — from reducing tax cheating, work stress, and turnover to encouraging healthy habits, increasing retirement savings, and boosting voter turnout (as I wrote previously).
Uber has been using similar insights to influence drivers’ behavior. As Noam Scheiber writes in the Times article, “Employing hundreds of social scientists and data scientists, Uber has experimented with video game techniques, graphics and noncash rewards of little value that can prod drivers into working longer and harder — and sometimes at hours and locations that are less lucrative for them.”
One such approach, according to Scheiber, nudges drivers toward collecting more fares, based on the behavioral-science insight that people are strongly influenced by goals. According to the article, Uber alerts drivers that they are very close to hitting a precious target when they try to log off. And it also sends drivers their next fare opportunity before their current ride is over.
Now let’s return to the question of when nudges are good and when they are bad. In discussing this topic with executives, I first provide a couple of examples. One of my favorites is the use of checklists in surgery to reduce patient complications. Checklists describe several standard critical processes of care that many operating rooms typically implement from memory. In a paper published in 2009, Alex Haynes and colleagues examined the use and effectiveness of checklists in eight hospitals in eight cities around the world. They found the rate of death for patients undergoing surgery fell from 1.6% to 0.8% following the introduction of checklists. Inpatient complications also fell from 11% to 7%.
In a related paper published in 2013, Alexander Arriaga and colleagues had 17 operating-room teams participate in 106 simulated surgical-crisis scenarios. Each team was randomly assigned to work with or without a checklist and instructed to implement the critical processes of care.
The results were striking: Checklists reduced missed steps in the processes of care from 23% to 6%. Every team performed better when checklists were available. Remarkably, 97% of those who participated in the study reported that if one of these crises occurred while they were undergoing an operation, they would want the checklist used.
Another example I often give concerns the use of fuel- and carbon-efficient flight practices in the airline industry. In a recent paper, using data from more than 40,000 unique flights, John List and colleagues found significant savings in carbon emissions and monetary costs when airline captains received tailored monthly information on fuel efficiency, along with targets and individualized feedback. In the field study, captains were randomly assigned to one of four groups, including one “business as usual” control group and three intervention groups, and were provided with monthly letters from February 2014 through September 2014. The letters included one or more of the following: personalized feedback on the previous month’s fuel-efficiency practices; targets and feedback on fuel efficiency in the upcoming month; and a £10 donation to a charity of the captain’s choosing for each of three behavior targets met.
The result? All four groups increased their implementation of fuel-efficient behaviors. Thus, informing captains of their involvement in a study significantly changed their actions. (It’s a well-documented social-science finding called the Hawthorne effect.) Tailored information with targets and feedback was the most cost-effective intervention, improving fueling precision, in-flight efficiency measures, and efficient taxiing practices by 9% to 20%. The intervention, it appears, encourages a new habit, as fuel efficiency measures remained in use after the study ended. The implication? An estimated cost savings of $5.37 million in fuel costs for the airline and reduced emissions of more than 21,500 metric tons of CO2 over the eight-month period of the study.
In both the case of surgeons using checklists and that of captains receiving feedback about fuel efficiency, one of the main goals of the intervention was to motivate the participants to act in a certain way. So, in a sense, the researchers were trying to encourage a change in behavior the same way managers at Uber were trying to bring about a change in their drivers’ behavior.
But there is an important difference across these three examples. Are the nudges used to benefit both parties involved in the interaction, or do they create benefits for one side and costs for the other? If the former, then (as Richard Thaler and Cass Sunstein argue in their influential book Nudge) we are “nudging for good.” Thaler and Sunstein identify three guiding principles that should be top of mind when designing nudges: Nudges should be transparent and never misleading, easily opted out of, and driven by the strong belief that the behavior being encouraged will improve the welfare of those being nudged.
That’s where the line between encouraging certain behaviors and manipulating people lies. And that’s also where I see little difference between applying behavioral economics and applying any of the other strategies or frameworks for leadership, talent management, and negotiations that I teach in my classes. We always have the opportunity to use them for either good or bad.
If the interests of a company and its employees differ, the organization can exploit its own members as Uber appears to have done. But there are plenty of situations where the interests are, in fact, aligned — the company certainly benefits from higher levels of performance and motivation, but the workers do, too, because they feel more satisfied with their work.
And that is where I see great potential in applying behavioral economics in organizations: to create real win-wins.
The idea of “work-life balance” is an invention of the mid-19th century. The notion of cultivating awareness of one’s work versus one’s pleasure emerged when the word “leisure” caught on in Europe in the Industrial Era. Work became separate from “life” (at least for a certain class of men) and we’ve been struggling to juggle them ever since.
Today, when so much work and leisure time involve staring at screens, I see a different struggle arising: a struggle to find a healthy balance between technology and the physical world, or, for short, “tech/body balance.” A 2016 survey from Deloitte found that Americans collectively check their phones 8 billion times per day. The average for individual Americans was 46 checks per day, including during leisure time—watching TV, spending time with friends, eating dinner.
So attached are we to our devices that having a phone within reach at all times is now the norm. We carry our phones around everywhere as if they are epi-pens and we all have fatal allergies. Consider: two weeks ago, as I was beginning a consulting project at a midtown Manhattan corporate office, I found myself making a U-turn on the way to the restroom. I needed to go back to my office to pick up my cellphone, which I had inadvertently left behind. It was an unconscious decision to go back and get it, but my assumption was clear: I needed to take the phone with me to the bathroom. Was I going to make a clandestine call from a bathroom stall? No. Was I dealing with an urgent business matter? Fortunately not. So why did I need my phone with me while I took care of a basic physical need? I didn’t really know. But apparently 90% of us use our phones in the bathroom.
According to recent data from Nielsen, 87% of Singapore’s 5.4 million population reports owning a smartphone, while a smaller but still substantial 68% of Americans own smartphones. A hefty 89% of American workers have reported feeling chronic body pain as a result of the posture they’ve developed using these devices, and 82% of this same group also say that the presence of phones “deteriorated” their most recent conversations. Pew Global recently released a report about the correlation between smartphone use and economic growth, noting that technology use is climbing steadily not only in advanced economies but also in emerging ones. As additional reference points, 39% of the Japanese population reports owning a smartphone, while 59% of people in Turkey report relying on mobile internet use. These numbers decrease in developing countries, given the relationship between a person’s educational background, socioeconomic status, and their access to technology.
But whether we are among those who use our devices to work remotely, or we are just obsessed with them because of the culture we live in regardless of how much time we are spending on “work,” it’s time to shift our attention to what tech-body balance could look like.
I decided to launch a two-week, informal experiment to explore what tech-body balance might look like, even as I failed to embody it. I divided my experiments into three categories, based on three basic bodily needs.

Sleeping
For me and for many, the time in bed before sleep is a time to finally stop focusing on tasks to do and bask in feeling unfocused and empty-headed. For me, this means mindlessly scrolling through Instagram or Twitter to tire out my eyes until I am ready for sleep. Sometimes, I’ll mindlessly scroll for as much as an hour. So one night, I decided to impose a time limit. I gave myself five minutes, and they went by in one second. At the end of them, I felt annoyed by my self-imposed discipline and wanted to keep scrolling, even as I realized I had not learned anything new or even been entertained by the activity.
Sure, my work-life balance is fine in those moments, as I’m not writing work emails in bed (though yes, I have done that too). But what about my tech-body balance? My neck is strained while looking at my phone, my wrists tire from scrolling, and my attention is fully dedicated to my brightly lit device, rather than winding down for sleep.
Since imposing a time limit didn’t work very well, I decided a more drastic experiment was needed. I tried using a real, old-fashioned alarm clock to wake myself up (rather than the alarm on my phone), and left my phone on the charger a short walk from my bed. Embarrassingly, this felt like a radical decision to make — and you know what? It was. I didn’t look at my phone before bed, and instead let myself think in the dark, and let my eyes tire on their own.

Eating
Our bodies and minds need fuel to function properly, and eating food is what gives us fuel. Of course, eating can introduce complications like digestive malaise when stress is in the picture (at least that’s true for me), or when I, like so many of us, inhale my food while sitting at my computer writing emails, thinking about a million things at once.
I tried to stop staring at screens while I was eating, but honestly, it was hard. I was not able to make this a regular habit due to pragmatic concerns like a busy day or not enough time to eat lunch. But I tried it on several occasions, and that in itself felt illuminating.
What if you chose, once a week, to eat one meal alone without your phone or a computer nearby? It might feel unsettling, but you will feel your body, and you may find you are even able to eat more slowly, chew more carefully, and enjoy your food a lot more.

Moving
Personally, I love talking on the phone while walking, and find that my ideas are more organic and free to arrive at my mind when I am on the move. I decided that my first experiment here would just be to walk during more of my phone calls, rather than take them seated at a desk, staring at a screen. Sure, you may be distracted by your surroundings while you are walking, but it is dynamic distraction that prevents you from looking at another device. (I don’t know about you, but I have the awful habit of writing emails while on calls).
To try out something more radical, even scary (as much as I am embarrassed to admit it), I decided to take a walk the other afternoon during the work day, and very deliberately left my phone behind. More than usual, I felt little reminders pop into my head, tempting me to get my phone to jot them down in G-cal or in my Notes app. But instead, I had to experience the discomfort of knowing that I’d either remember what I needed to remember organically, or simply forget and accept the consequences. It was uncomfortable to take this walk, particularly as I did it during a day when I felt stressed and busy at work. But of course, the counterintuitive wisdom I hoped for did arrive: the break from the stressors of my phone and computer gave me a sense of spaciousness and freedom, even though there were distinct moments of panic and disorientation. At one point, I reached into my pocket and felt the cortisol rush as I genuinely thought I had lost my phone.
As you can tell, I didn’t have an easy time with this experiment, and it was certainly not a strict “digital detox.” But I think that tech-body balance shouldn’t be extreme. Extreme behavioral shifts strike me as unsustainable and unproductive. Like work-life balance, finding tech-body balance is a constant experiment, and one that is different for everyone. Tech, like “work,” is mostly a positive thing for each of us, and for the world we live in. But it is important to remember that we often do not need our phones with us, regardless of how much it may feel like we do.
Cognitive computing and artificial intelligence (AI) are spawning what many are calling a new type of industrial revolution. The two terms are often used interchangeably, but there is a nuance to each. To be specific, cognitive computing uses a suite of technologies designed to augment the cognitive capabilities of the human mind. A cognitive system can perceive and infer, reason and learn. We’re defining AI here as a broad term that loosely refers to computers that can perform tasks that once required human intelligence. Because these systems can be trained to analyze and understand natural language, mimic human reasoning processes, and make decisions, businesses are increasingly deploying them to automate routine activities. From self-driving cars to drones to automated business operations, this technology has the potential to enhance productivity, focus human talent on critical issues, accelerate innovation, and lower operating costs.
Yet, like any technology that is not properly managed and protected, cognitive systems that rely on humanoid robots and avatars — and less human labor — can also pose immense cybersecurity vulnerabilities for businesses, compromising their operations. The criminal underground has been leveraging this capability for years through “botnets”: tiny pieces of code distributed across thousands of computers and programmed to mimic the actions of hundreds of thousands of users. The results include mass cyberattacks, email and text spam, and denial-of-service attacks that make major websites unavailable for long periods of time.
In a digital world where there is greater reliance on business data analytics and electronic consumer interactions, the C-suite cannot afford to ignore these existing security risks. In addition, there are unique and new cyber risks associated with cognitive and AI technology. Businesses must be thoughtful about adopting new information technologies, employing multiple layers of cyber defense, and planning security measures to reduce the growing threat. As with any innovative new technology, there are positive and negative implications. Businesses must recognize that a technology powerful enough to benefit them is equally capable of hurting them.
First of all, there’s no guarantee of reliability with cognitive technology. It is only as good as the information fed into the system, and the training and context that a human expert provides. In an ideal state, systems are designed to simulate and scale the reasoning, judgment, and decision-making capabilities of the most competent and expertly trained human minds. But bad actors — say, a disgruntled employee or a rogue outsider — could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by “teaching” the computer to process data inappropriately.
Second, cognitive and artificial intelligence systems are trained to mimic analytical processes of the human brain — not always through clear, step-by-step programming instructions like a traditional system, but through example, repetition, observation, and inference.
But if the system is sabotaged or purposely fed inaccurate information, it could infer an incorrect correlation as “correct” or “learn” a bad behavior. And because most cognitive systems are designed to act autonomously, as humans do, they often authenticate with non-expiring, hard-coded credentials. A malicious hacker who obtains the same login credentials as the bot can gain access to far more data than any single individual is allowed. Security monitoring systems are sometimes configured to ignore “bot” or “machine access” logs to reduce the large volume of systemic access entries. But this can allow a malicious intruder, masquerading as a bot, to access systems for long periods of time — and go largely undetected.
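The monitoring blind spot described above can be sketched in a few lines of Python. Everything here is hypothetical — the account names (`svc-bot-01`), log shape, and "baseline" idea are invented for illustration, not taken from any real product — but it shows why silently dropping bot traffic is riskier than baselining it:

```python
# Hypothetical log events; a real SIEM record would carry many more fields.
BOT_BASELINE = {"orders_db", "inventory_db"}  # resources the bot normally touches

events = [
    {"user": "alice",      "resource": "orders_db"},    # ordinary human access
    {"user": "svc-bot-01", "resource": "orders_db"},    # normal bot activity
    {"user": "svc-bot-01", "resource": "payroll_db"},   # intruder using bot credentials
]

def naive_monitor(events):
    """Risky config: drop all service-account traffic to cut log volume."""
    return [e for e in events if not e["user"].startswith("svc-")]

def baseline_monitor(events):
    """Keep bot events, but surface any access outside the bot's known baseline."""
    return [e for e in events
            if e["user"].startswith("svc-") and e["resource"] not in BOT_BASELINE]

print(naive_monitor(events))     # the payroll_db access vanishes from review entirely
print(baseline_monitor(events))  # only the anomalous payroll_db access is flagged
```

The design point is that the second filter reduces log volume just as effectively as the first, but by subtraction against expected behavior rather than by ignoring the account class an intruder is most likely to impersonate.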
In some cases, attempts to leverage new technology can have unintended consequences, and an entire organization can become a victim. In a now-classic example, Microsoft’s Twitter bot, Tay, which was designed to learn how to communicate naturally with young people on social media, was compromised shortly after going live when internet trolls figured out the vulnerabilities of its learning algorithms and began feeding it racist, sexist, and homophobic content. The result was that Tay began to spew hateful and inappropriate answers and commentary on social media to millions of followers.
Finally, contrary to popular thinking, cognitive systems are not protected from hacks just because a process is automated. Chatbots are increasingly becoming commonplace in every type of setting, including enterprise and customer call centers. By collecting personal information about users and responding to their inquiries, some bots are designed to keep learning over time how to do their jobs better. This plays a critical role in ensuring accuracy, particularly in regulated industries like healthcare and finance that possess a high volume of confidential membership and customer information.
But like any technology, these automated chatbots can also be used by malicious hackers to scale up fraudulent transactions, mislead people, steal personally identifiable information, and penetrate systems. We have already seen evidence of advanced AI tools being used to penetrate websites and steal compromising and embarrassing information on individuals, with high-profile examples such as Ashley Madison, Yahoo, and the DNC. As bad actors continue to develop advanced AI for malicious purposes, organizations will need to deploy equally advanced AI to prevent, detect, and counter these attacks.
But, risks aside, there is tremendous upside for cyber security professionals to leverage AI and cognitive techniques. Routine tasks such as analyzing large volumes of security event logs can be automated by using digital labor and machine learning to increase accuracy. As systems become more effective at identifying malicious and unauthorized access, cybersecurity systems can become “self-healing” — actually updating controls and patching systems in real time — as a direct result of learning and understanding how hackers exploit new approaches.
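As a toy illustration of the routine log triage described above, even a simple statistical baseline can flag days whose security-event counts spike far above history. The data, the function name, and the threshold are invented for this sketch; a production system would use far richer features and real learning models:

```python
# Minimal sketch (illustrative data and an arbitrary threshold, not a real tool):
# flag days whose failed-login count sits far above the historical baseline.
from statistics import mean, stdev

def flag_anomalies(daily_failed_logins, z_threshold=2.0):
    """Return indices of days whose count exceeds the baseline by z_threshold sigmas."""
    mu, sigma = mean(daily_failed_logins), stdev(daily_failed_logins)
    return [i for i, count in enumerate(daily_failed_logins)
            if sigma > 0 and (count - mu) / sigma > z_threshold]

history = [12, 9, 14, 11, 10, 13, 97]  # day 6 shows a suspicious spike
print(flag_anomalies(history))          # prints [6]
```

A human analyst then reviews only the flagged days, which is the productivity gain the article points to; the "self-healing" step would wire such alerts into automated control updates rather than a print statement.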
Cognitive and AI technologies are a certainty of our future. While they have the power to bring immense potential to our productivity and quality of life, we must also be mindful of potential vulnerabilities on an equally large scale. With humans, a security breach can often be localized back to the source and sealed. With cognitive and AI breaches, the damage can become massive in seconds. Balancing the demands between automation and information security should be about making cybersecurity integral — not an afterthought — to an organization’s information infrastructure.
Turkish people love to boast about their hospitality. In our culture, people consider it the height of ill manners not to offer tea to guests. When civil war broke out in Syria and refugees started crossing the border, Turkish officials proclaimed that the country was welcoming Muslim brothers fleeing the brutal Bashar al-Assad regime.
This approach was built upon the premise that the al-Assad regime would collapse relatively quickly, allowing Syrians to return home. In 2014, the “Foreigners under Temporary Protection” regulation granted refugees free access to public services such as education and health care. It did not include a vision for long-term integration; nobody in Turkey foresaw the duration and severity of Syria’s humanitarian crisis. Moreover, a March 2016 deal between the EU and Turkey closed the EU’s borders to many refugees and consigned Turkey to the task of handling millions of displaced Syrians on its own.
Now, nearly 3 million Syrians reside in Turkey, making up 3.5% of Turkey’s population. Of these, approximately 1.8 million are of working age. The majority possess low skill sets and face language barriers. One survey showed 80% of refugees residing outside of refugee camps have eight years of education or less. Only 10% have a university degree. Unemployment rates have risen in regions with more refugees, perhaps indicating that Syrian refugees are displacing less-educated Turkish citizens as a source of cheap labor. This may be one of the causes of increasing civil unrest; as elsewhere in the world, xenophobia is on the rise in Turkey. Everyday rhetoric has shifted from “Syrians are our guests” to vitriol and suspicion. When President Recep Tayyip Erdoğan suggested citizenship for some Syrians, especially doctors and engineers, there was a backlash. Some people complain that Turkish soldiers are dying on Syrian soil in battles with ISIS while refugees relax and collect social aid.
A long-term framework for integrating Syrian refugees is a pressing domestic policy need, both to head off tensions like these and to prevent vulnerable Syrians from being economically exploited. A 2013 survey shows the median earnings for men outside of refugee camps (where they’re not allowed to work) were $160 per month, much less than the Turkish minimum monthly wage of around $400. Child labor is on the rise. (There are 1 million Syrian children under the age of 15 in Turkey.) One study suggests that, in 2015, most Syrian children worked for more than eight hours per day nearly every day. The average daily earnings were less than $12. Raiding a workshop in 2016, police seized thousands of fake life jackets that were to be sold to refugees attempting to ply the illegal sea route to Greece. Even more tragic, the workshop employed Syrian children.
It doesn’t have to be this way. There is growing evidence that, despite the challenges and economic costs of integration, Syrians are contributing to the Turkish economy. Some are creating jobs through entrepreneurship; one in three newly established foreign firms in Turkey is owned by Syrians, reaching 2.4% of all new companies in 2015. Syrians are also boosting exports to the Middle East. The share of Syrian firms is much higher in Turkey’s southeast region, especially in Gaziantep, the main export hub to the Middle East and North Africa. Many economists also believe Syrians boosted consumption growth in an economy that relies heavily on consumption. That’s one reason why Turkey’s GDP grew by 2.9% in 2016, despite a failed coup attempt, terror attacks, political turbulence, and a halt to international capital inflows felt across emerging economies.
New research shows that when Syrian businessmen moved to Turkey, they also transferred their commercial networks. My own forthcoming research shows that average wages in formal jobs have risen slightly with the establishment of new Syrian firms. And as the capital these firms bring in has grown, unemployment insurance claims have gone down.
It’s time for the Turkish government to build on these promising signs and work toward long-term integration of refugees into the economy. My research and the work of others suggest that targeted training programs, tax incentives, and steps to ease credit constraints could offset, at least partially, the financial cost of accommodating Syrian refugees, not to mention future social costs. New policies could have a particular impact on the younger generation of refugees, most of whom are bilingual (speaking both Turkish and Arabic) and have a strong stake in creating new lives for themselves in Turkey. Specifically, government policy makers would be wise to help ease credit constraints for Syrians, because promoting entrepreneurship could increase regional exports and create jobs.
Lower-skilled Syrian refugees could also help meet the demand for caregiver services in Turkey. Just as in countries like the U.S. and Germany, Turkey is facing the economic challenge of an aging workforce and unmet demand for caregiving for the elderly and children. As in other countries, where immigrants often take such jobs, less-educated Syrian refugees could find work in this area. But it is also important to give Syrians more access to education, which undeniably increases participation in the economy. Targeted government training programs and certain tax exemption schemes might make a difference in terms of integration, employment, and cohabitation, while at the same time enlarging the overall size of the pie so that lower-skilled Turkish-born people can still find work.
Syrians no longer feel as welcome as they did when the Turkish border first opened to give them safe harbor. But with the right policies, the spirit of hospitality that reigned in those early days can result in a better economic future for all involved.
Seamus (not his real name) was having a rough time at work. An attorney at a large firm, he lost a big trial that the company had invested heavily in. He was relieved when the company still offered him the promotion he’d been working toward — but he then had to turn down the role because it would have required him to relocate.
After that, things changed in the atmosphere of the office; he could sense people acting differently toward him even though no one said anything to him directly. “I was not invited to several meetings,” he told us, “and was left out of several important decisions about the direction of the law firm.” He heard that one partner had denigrated him to a group of others; Seamus felt this approach was “very passive aggressive.” The situation came to a head over a social occasion: all of the attorneys at the firm were invited to play in a basketball game, and he was left off the invitation. By this point he had no doubt it was deliberate; he was 6’4” and a strong player.
The negative effects of bullying, harassment, and other aggressive behaviors in the workplace are becoming better known, but another, quieter form of torment is actually far more common: ostracism. Research indicates that a full 71% of professionals experience some degree of exclusion or social isolation in a six-month period (compared to the 49% who experience harassment, for example). And it’s not just more common: research has also shown that experiencing ostracism in the workplace can in fact be more psychologically hurtful than being the target of more overt aggressive behavior.
Our investigation of why that is has helped us identify a number of strategies that you can use if you feel like you’re being intentionally left out or given the silent treatment at work.
First it helps to understand why ostracism happens, and why it’s so hurtful. As a “sin of omission,” ostracism is an act that someone didn’t do: they didn’t acknowledge you or reach out to you or invite you to something. Whether the act of one person or many, it can include being left off email threads, being passed over for a committee position, or being ignored when making suggestions. You may experience conversations stopping when you try to join in, no one taking your order for a coffee run, or finding out that you haven’t been invited to a weekend outing with colleagues.
Often the person who is leaving you out may not mean any deliberate slight; they may be forgetful or distracted in the moment, or more generally socially insensitive or inept. And those who do realize they are doing it — often as a misguided way of avoiding handling conflict or otherwise seeking to protect themselves — don’t realize how hurtful it is; research shows that most managers do not view ostracism as particularly harmful or socially unacceptable.
But extensive research shows that ostracism is harmful, whether or not it is deliberate, because what it omits is very important. As human beings, we have a fundamental social need to “belong”; from an evolutionary perspective, we are dependent on belonging to a group for survival. Therefore an absence of expected social engagement is a threat to a fundamental need; it signals that we are socially worthless and a bad fit for that very community that we depend on. This — as you may know from experience — makes being on the receiving end of ostracism acutely painful.
All the more so because it can be so ambiguous. It may be unclear to you whether the person actually meant to hurt you; it may even be unclear whether you’re actually being left out of anything or whether it’s all in your head. Just ruminating over these questions in itself causes pain. And because we’re prone to interpret even minor acts of exclusion as meaningful, even when no slight was intended, you’re more likely to become convinced that you were deliberately targeted.
Ostracism can have a sharp negative effect on your work, primarily because most of us respond to this kind of treatment with psychological withdrawal. (Even in those cases in which people work hard to get re-included, they become more focused on overcoming the ostracism than on their daily tasks.) In the workplace, that can mean a waning sense of motivation and commitment to the task at hand, or to your team or company. This can result in turnover for the organization; research shows that turnover is significantly higher three years after an episode of ostracism. Being left out also has more direct effects on your ability to excel in your job as you might miss out on critical information or get skipped over for a plum project that would have put you in line for a promotion.
So what do you do if you believe you are the target of this kind of behavior?
The first step is cognitive: challenge any assumptions that might lead you to blame yourself for the situation. Understand that the extent to which you’re hurt by an episode of ostracism depends entirely on how you perceive the situation and its threat to you. In part this hinges on your own sense of how your experience differs from social norms, or from the behavior you’d generally expect in a given situation. If you never expected to be invited to a meeting, for example, you won’t feel deliberately left out when it is held without you. So better understanding the norms of a situation and the intention behind it may improve your perspective on the episode in and of itself.
Ask yourself whether you are feeling left out in a situation in which exclusion makes sense — and also who else is being left out. Perhaps you weren’t invited to a meeting with the division head, but neither was anyone else in your department, because he’s going to meet with you all separately.
Talk to other trusted confidants who know the people and the situation you’re dealing with. Perhaps there’s some other social context that you need to know about; perhaps you weren’t invited to that meeting simply because you’re low on the totem pole, and it’s up to you to make yourself more visible and to actively exert your influence.
Next, consider whether there’s anyone else that this happens to (does Joan tend to ignore Alejandro in meetings too?). Talk to them and see if your stories match up. You’ll feel validated if they do, and you may realize that the issue lies more with Joan than with you.
When you talk to someone about what you’re experiencing, though, it’s important not to assume that the person will confirm your reality. Ostracism is hard to spot from the outside, and it’s very typical for no one else to see what’s going on, even when it really is happening. That doesn’t mean you’re imagining it; if it’s beyond an isolated incident, you should trust your gut. If the person you’re talking to doesn’t see anything going on, ask them instead to imagine with you that it is: Why might it be happening? What could you do about it? You can still get good advice and support even if the person doesn’t corroborate your point of view.
Other approaches are less about a mind shift and more about behaviors that can change your experience or the situation itself.
First, seek social support. Aside from the conversations you have with colleagues trying to figure out what is going on, find the people who do value your contributions to the team — or who value you socially — and spend more time with them. This may seem frivolous, but positive social interactions like this will go a long way toward addressing your devalued self-worth. What’s more, your connections with other people can give you the confidence you need to grapple head-on with more difficult relationships.
In terms of logistics, if you’re getting left out of conversations or meetings where important information is shared, find other ways to get it. Create a broader work network so that you can go around the one difficult person and get resources in other ways.
If the situation persists, document what’s happening, as you should with any patterns of aggression, like harassment or bullying. This will give you a better opportunity to take the issue to others, or to the person themselves. It can be hard to challenge situations involving ostracism because others are less likely to appreciate how bad you might feel than if you were subject to more obvious mistreatment — even HR professionals may not be aware of how harmful ostracism can be. Documenting the situation and its effects on your work can help you make the case.
A final option is to confront the person excluding you directly. This has its risks: there’s a good chance the person wasn’t doing it on purpose, and if they were, they may well deny it, whether because it’s something they don’t want to deal with or because they really didn’t intend the slight. Before raising the issue, learn approaches to handling conflict that will help the conversation resolve the problem instead of making it worse.
In Seamus’s case, he felt relatively confident that he knew why he was being ostracized — and that it was, in fact, deliberate. We would still recommend that he discuss the situation with trusted confidants in the firm or profession to see if others had experienced similar situations and learn how they had resolved them. Because the situation was affecting his work — he was being left out of important meetings, decisions, and networking opportunities — we’d also recommend that Seamus document all instances of ostracism as he experienced them, and how they undermined his work. He could then confront the situation with those perpetrating it head on, potentially also involving HR or management.
In trying to increase awareness of ostracism in the workplace, we don’t want to imply that everyone should be included in everything all the time. We naturally form stronger social ties with some people than with others, and anyone can be inadvertently left off an email; those are normal everyday glitches. But workplaces demand a certain level of professionalism and respect between all members. If there’s a pattern where the same parties are excluding you for reasons beyond the social norms of your organization, you need to trust your gut, and use best practices of conflict resolution to address the issue with the mutual respect and professionalism that their approach is lacking.
Twenty-five years ago, Steve Hughes, now the CEO of Sunrise Strategic Partners, was walking through an orange juice plant when he had an epiphany that turned into a $500 million business.
Hughes had just become executive vice president of Tropicana, and he was touring facilities to try to learn about the business. He was in a plant and noticed some of the workers on a break. They were near a manufacturing line that separated the pulp from the orange juice. He noticed they were taking the excess pulp (normally a waste product) and putting it back into a special batch of juice for their personal consumption. “That’s funny,” Hughes said to himself, his curiosity piqued.
Some managers would have kept moving. Instead, Hughes stopped and asked them why they were creating this overly-pulpy version of orange juice. “It makes it taste like it was fresh squeezed,” the workers told him. Hughes tasted some, and he agreed. Shortly thereafter Hughes led the launch of Tropicana Grovestand, a new extra-pulpy beverage with the tagline “the taste of fresh-squeezed orange juice.” In its first year, Grovestand achieved $200 million in revenues. After four years, it was a $500 million brand—with 90% of those sales representing incremental revenue.
Grovestand wasn’t Hughes’s only innovation. Since then he’s grown sales dramatically at food-and-beverage brands like Healthy Choice and Silk, maker of almond milk and other plant-based beverages. One of the distinctive qualities of Hughes’s management style is that where others might see a strange behavior and dismiss it as “weird,” he is more likely to react with curiosity and an immediate willingness to learn more from it.
This difference between “That’s weird” and “That’s funny” is subtle but important. At its core, the reaction “That’s funny” is one of curiosity, and curious people learn more than those who aren’t. There’s a great quote from Shunryu Suzuki, a Zen Buddhist monk, who said: “In the beginner’s mind there are many possibilities, but in the expert’s there are few.” Overlooked opportunities are unearthed when leaders, despite their expertise, continue to see the world through a beginner’s mind.
Underlying curiosity are respect and empathy for other people, driven by humility. People who observe the world through a “That’s funny” lens aren’t intending to mock or poke fun at behavior they’ve never seen before. Instead of instantly registering unusual behavior as aberrant, “That’s funny” leaders assume it is an opportunity to learn.
“That’s weird” is very different. This point-of-view creates distance and repels people, instead of creating intimacy and empathy. “That’s weird” frames novel behaviors in negative and judgmental terms. It’s an attitude that inhibits learning, and it causes leaders to miss out on opportunities.
At root, the difference between these two attitudes lies in how an organization or its leader deals with variance or a deviation from average.
Consider an example. Let’s say you had three people. Person A brushes his teeth four times a day, Person B brushes twice a day, and Person C rarely brushes her teeth. The average would tell you that most people brush twice a day, but that obscures the stories behind the non-average behavior. The person who brushes four times a day might be seeking to brighten his smile, in order to get a new job that will give him the confidence to ask his girlfriend to marry him. What about the person who never brushes? The reason: Her gums and teeth are extra sensitive and bleed. Brushing makes her feel bad. She chooses to just rinse with warm water, but she knows it’s not enough. She slowly pulls away from socializing because she is ashamed of her teeth, but she doesn’t know what to do. Once you go beyond the average, you find unusual stories, in which there can be great opportunities: a new, stain-removing toothpaste or extra-soft toothbrush.
Averages are the enemies of innovators. Innovation happens at the fringes, not at the center. We can often learn far more by talking with both superconsumers and infrequent users and connecting the dots than from talking with the average user, because superconsumers speak to us through their unexpected uses, while light consumers communicate through their unexpected non-consumption and compensating behaviors.
We undervalue variance in part because it is human nature to dismiss things that are different from us. Curiosity can create complexity, and asking questions could upend what we’ve worked hard to keep easy and familiar.
We need to learn to recognize variance as an opportunity. Remind yourself and your team that if you can train yourself to react with “That’s funny” instead of “That’s weird,” you may spot some very profitable ideas.
On April 17th, Boston will host its 121st annual marathon. The Boston Marathon is the oldest long-distance road race in the United States and one of the largest marathons in the country. It is expected to attract approximately 30,000 runners, along with an estimated 500,000 spectators lining the route to cheer them on.
Large public events like the Boston Marathon require tremendous resource investment to organize and manage the event as well as to ensure the safety of those involved. In addition to the widespread road closures required to hold the marathon (which, by definition, span more than 26.2 miles), the city enlists approximately 1,900 medical professionals and 3,500 military or police personnel to be posted along the race route.
While the focus of the city and the Boston Athletic Association (which organizes the race) will be on the health and safety of those involved, we wondered whether these infrastructure disruptions could have unintended health consequences for people not participating.
In a study just published in the New England Journal of Medicine, we investigated what happens to Medicare patients who suffer an acute cardiac emergency – either a heart attack or cardiac arrest – during a major marathon and are hospitalized in an area affected by the race route.
We compared 1,145 patients hospitalized on marathon days to 11,074 patients hospitalized on identical days in the 5 weeks before and after each marathon (so as to compare Mondays to Mondays, and so on) and those hospitalized on the same dates in surrounding non-affected areas. Our goal was to determine whether the road closures and traffic disruptions that occur during marathons could adversely affect elderly patients (not participating in marathons) trying to get to the hospital. In our sample of Medicare patients, the average age was 77 years and the majority had several chronic conditions.
We studied Medicare hospitalization data and ambulance transportation data from 11 U.S. cities that held marathons from 2002 to 2012. We found that Medicare patients who were hospitalized for an acute cardiac emergency on the day of a marathon had substantially higher 30-day mortality (28.2% of them died), compared to patients admitted during the weeks before and after the race (24.9% of them died) and to patients admitted on the day of the race, but in zip codes just outside of the marathon route (24.8% of them died). We accounted for patient demographics, clinical conditions, and hospital effects (which allowed us to compare outcomes of patients admitted to the same hospital on marathon vs non-marathon days). Our findings imply that, for every 100 patients who have a heart attack or cardiac arrest, an additional three people would die within one month if the cardiac event happened on the day of a marathon.
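The three-in-one-hundred figure follows directly from the mortality rates reported above. A quick sketch of the arithmetic (averaging the two control groups into a single baseline is our own simplification, not something the study specifies):

```python
# Rough check of the excess-mortality claim, using only the rates quoted above.

marathon_mortality = 28.2         # 30-day mortality (%) on marathon days
control_surrounding_weeks = 24.9  # same weekdays, 5 weeks before/after (%)
control_nearby_zips = 24.8        # marathon day, zip codes off the route (%)

# Treat the two control groups as one baseline (a simplifying assumption).
baseline = (control_surrounding_weeks + control_nearby_zips) / 2

# The gap in percentage points is the number of extra deaths per 100 patients.
excess_per_100 = marathon_mortality - baseline
print(round(excess_per_100, 1))  # roughly 3 extra deaths per 100 patients
```

The difference of about 3.3–3.4 percentage points is consistent with the article’s statement that an additional three people per 100 would die within a month.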
What could drive these results?
Perhaps road closures and immense crowds led patients to be hospitalized in different, lower-quality hospitals outside the marathon city. However, we found no evidence of this; the hospitals that treated patients on marathon days were similar in quality to the hospitals that treated patients on non-marathon days. Similarly, Medicare patients tended to receive the same treatments when hospitalized on marathon and non-marathon days, including percutaneous coronary intervention (or stenting of the heart). Both findings suggest that the mortality differences we observed were due to differences in the care provided to patients prior to hospital arrival, including delays in ambulance transport.
We considered the effects of delays in care imposed by marathons. Using a national database of ambulance transports, we studied the average time it took ambulances to transport patients from their homes to the hospital in host cities. Our results showed that during the mornings of marathons, ambulances took 4.5 minutes longer, on average, to get to the hospital, relative to the surrounding non-marathon days. While this may seem small in absolute terms, it represents an increase of more than a third in travel time (the average travel time in our data was about 12 minutes). We found no corresponding increase during the evenings of the marathon date (by which point we assume most roads have reopened), nor in neighboring areas unaffected by the marathon.
Our findings suggest that widespread road closures and other infrastructure disruptions during major marathons lead to substantive delays in care for patients. The conditions we studied – heart attack and cardiac arrest – tend to prompt patients to seek care immediately, and there is a wealth of medical literature suggesting that mere minutes can be the difference between life and death for these patients. We expect that our findings would translate to other medical emergencies which require immediate care, such as trauma and stroke.
This has important implications for organizers of all large public events, not just marathons. Primarily, it is essential that they consider and plan for the unintended side effects these events create. Municipalities and organizations already devote tremendous resources to the safety of participants, but they should account for the costs imposed on bystanders as well. Additional measures might include instructing emergency medical personnel to prepare alternative protocols on the dates of major events – rather than simply increasing the number of ambulances available – to help reduce the health costs imposed on others.
Moreover, residents of cities hosting large events would also do well to take note of major infrastructure disruptions. While our analysis of travel times focused on ambulances, 23% of the patients in our study arrived at the hospital by other means. It is reasonable to assume that these patients faced even larger transport delays, so residents should know to call emergency medical personnel at the first sign of symptoms rather than attempt to drive to the hospital themselves.
Many U.S. cities host a number of large public events, ranging from Fourth of July celebrations to professional sports games. While the disruptions they cause are felt by many, there are also substantive health implications for those who may require acute medical care. Both the organizers of these events and the residents living nearby should be aware of the delays in medical care that these events can inadvertently cause.