Barry Shore Of Global Project Strategy

Instructions:

Write a page with resources in APA format. Think back to a project that you have been involved with at work (preferably one that followed a structured process). If you have not been involved with a project at work, select a project from this list to evaluate:

  • Barry Shore of Global Project Strategy. (2009). Attached below:

Evaluate the project based on accepted effectiveness theories and describe:

  • Goal/Team/Resources/Timeline
  • Challenges in implementation
  • Successes in Implementation
  • Overall success (include how you would measure this)
  • Lessons learned/what would you do differently?


Why Do Projects Fail?

Project failure rates are certainly cause for concern, but consider that more and more organizations are adopting a project-based model of organization (PBO), and it is not surprising to find that addressing failures and learning from them has become increasingly important (Eden, Ackermann, & Williams, 2005; Gray & Larson, 2006; Hyvari, 2006; Robertson & Williams, 2006; Thiry & Deguire, 2007).

Failures occur despite the fact that we have significantly improved the process of planning, executing, and controlling projects. Two notable contributions are the Project Management Institute’s (PMI’s) A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (2004) and the literature on critical success factors (CSFs) (Cooke-Davies, 2002; Fortune & White, 2006; Hyvari, 2006; Pinto & Slevin, 1987; Sutterfield, Friday-Stroud, & Shivers-Blackwell, 2006).

To help us understand how projects fail, it may be useful to classify the approaches represented by the PMBOK® Guide, Capability Maturity Model Integration (CMMI), Earned Value Management (EVM), Critical Chain Project Management (CCPM), and CSFs as the Rational Expectation view of project management. These approaches assume that project leaders follow a rational and consistent approach to project management and strive to achieve specific organizational goals (Bazerman, 1994; Beach & Connolly, 2005). It is a view that emphasizes what “should” be done. Argyris (1999) referred to this as the “espoused” theory of individuals and organizations.

There is, however, another view, and it focuses on the way in which individuals within an organization actually behave and make decisions. Borrowing from the work of Simon (1955) and Tversky and Kahneman (1974, 1981), it can be classified as the “behavioral” view of project management. It emphasizes what individuals and groups “actually” do and how managers make decisions involving values and risk preferences (Bazerman, 1994). Argyris (1999) called this the “theory-in-practice.”

This article focuses on the behavioral view of project management and how an understanding of systematic biases—those common to the human decision-making process—can prove useful in diagnosing project failure. By studying these systematic biases, we can learn how decision makers respond to ambiguity, complexity, and uncertainty, as well as how their own particular psychological processes influence project decision making (Schwenk, 1984). From this behavioral view we can learn more about why management approves an overly ambitious scope, why communication between teams is limited, why a manager might ignore signs that the project is going badly, or why a manager discourages the participation of a wider constituency in the project management process.

The article begins with a framework for analyzing project outcomes, introduces the systematic biases commonly associated with decision processes, briefly summarizes eight project failures, uses these biases as a diagnostic tool in understanding how these projects failed, and develops an approach that links these biases to the project culture of failed projects. The article concludes with two examples of how organizations have limited the damage from systematic biases.

Systematic Biases and Culture in Project Failures

Barry Shore, Whittemore School of Business and Economics, University of New Hampshire, Durham, NH, USA

ABSTRACT

Project success rates have improved, and much of the credit can be given to the knowledge, practices, and standards that have contributed to the professionalization of the field. Unfortunately, too many failures still occur. Because many of them can be traced to management and decision-making practices, it might be useful at this stage to explore a set of systematic biases to determine if understanding them can help diagnose and perhaps even prevent failures from occurring. This article begins with a framework identifying the influences on project outcomes, defines the systematic biases that may derail projects, summarizes eight project failures, uses the framework to diagnose those failures, and concludes by suggesting how organizational and project culture may contribute to these very common and natural biases.

KEYWORDS: project failure; project culture; systematic biases; project success

Project Management Journal, Vol. 39, No. 4, 5–16

© 2008 by the Project Management Institute

Published online in Wiley InterScience

(www.interscience.wiley.com)

DOI: 10.1002/pmj.20082



The Interaction of Cultural, Leadership, Project, Management, and Behavioral Factors on Project Outcomes

The outcome of a project can be related to the influence of cultural, leadership, project, management, and behavioral factors. These relationships are summarized in Figure 1. National culture can be defined as the values and belief systems held by a group of individuals, learned early in life, and difficult to change (Hofstede, 1997). Given the international reach of an increasing number of projects, a contemporary view of project management must acknowledge the influence of national culture on the management of projects (Shore & Cross, 2005; Wang & Liu, 2007).

Organizational culture develops within the context of national culture and executive leadership. It can be defined as the shared perceptions of organizational work practices within organizational units (Hofstede, 1999). It also represents the particular ways of conducting organizational business and is instrumental in establishing the competence of the organization (Belassi, Kondra, & Tukel, 2007; Schein, 1985; van den Berg & Wilderom, 2004; van Marrewijk, 2007). While executive leadership shapes the culture of the organization, project leadership shapes project culture (Turner & Müller, 2006).

Project culture is then the shared perceptions of project work practices, influenced by both the project leader and organizational culture. It is characterized by the way in which project planning, execution, and control are exercised.

The systematic biases, common to human decision processes and to be addressed in this article, influence management and team decisions, which in turn influence the planning, execution, and control of the project process.

Methodology

The first step in this study was to identify and define the systematic biases that have been studied in the decision literature (Bazerman, 1994; Beach & Connolly, 2005; Hammond, Keeney, & Raiffa, 2006; Keil, Depledge, & Rai, 2007; Tversky & Kahneman, 1974). These biases, defined in the next section, include the following:

  • available data,
  • conservatism,
  • escalation of commitment,
  • groupthink,
  • illusion of control,
  • overconfidence,
  • recency,
  • selective perception, and
  • sunk cost.

Establishing a clear distinction between them is difficult. For example, Keil et al. (2007) contended that selective perception plays an important role in escalation of commitment. Langer’s (1975) illusion of control overlaps with overconfidence (Russo & Schoemaker, 1989). While the apparent overlap in the definition of these biases is problematic, they have still proven useful in studying failures (Keil et al., 2007; Roberto, 2002).

In the second step of this study, the following project failures are briefly summarized:

  • Airbus A380,
  • Coast Guard Maritime Domain Awareness Project,
  • Columbia Shuttle,
  • Denver Baggage Handling,
  • Mars Climate Orbiter and Mars Polar Lander,
  • Merck Vioxx,
  • Microsoft Xbox 360, and
  • New York City Subway Communications System.

Figure 1: Influence of cultural, leadership, project, management, and behavioral factors on project outcome. (The figure links national culture, organizational culture, executive leadership, project leadership, project culture, project goals, budget, schedule, complexity, project standards such as the PMBOK® Guide, and systematic biases to the management and team decision processes, the project planning, execution, and control processes, and ultimately the project outcome.)

Data for these projects was obtained from public and government sources. Case studies were written for each failure (Siggelkow, 2007). Twenty-two business professionals attending a graduate program in “Management of Technology” discussed the nine systematic biases. The participants were then divided into five groups and presented with summaries of the eight cases. None of these professionals was employed by the organizations included in the study. Using a modified Delphi Method, each group was asked to read the cases and reach consensus on the systematic biases that could best explain why the projects failed (Skulmoski, Hartman, & Krahn, 2007). Finally, each of the five groups presented their results and a discussion followed, during which consensus for the group as a whole was reached.

Systematic Biases

Systematic biases represent common distortions in the human decision-making process. They reflect a particular point of view that may be contrary to rational thought. Further, they are systematic in contrast to random errors that, on average, cancel each other out (Bazerman, 1994; Beach & Connolly, 2005). These biases are summarized in Table 1.

Project Failures

This section briefly summarizes the full version of the eight case studies presented to the 22 participants. At the end of each case is a summary of the consensus reached by the entire group.

Airbus A380

Airbus was founded in 1970 as a loose consortium of 16 independent aerospace companies with facilities in France, Germany, Britain, and Spain. In 2000, Airbus started the A380 project, the goal of which was to design and manufacture a superjumbo jet capable of carrying up to 800 passengers. The aircraft was to usher in a new era of travel.

In the fall of 2006, when the aircraft was in the assembly stage at Toulouse, France, a preassembled wiring harness produced in the Hamburg, Germany, plant failed to fit into the airframe. The problem, according to several press reports, was that the wiring harness had been designed in Hamburg using an older version of CATIA, software commonly used in aircraft design. The assembly plant in Toulouse, however, used the most recent version of the software. Unfortunately, the versions were incompatible, and the ability to share design specifications between these two plants was compromised. As a result, hundreds of miles of cabin wiring failed to fit. There was no choice but to halt production, postpone deliveries of the aircraft for two years, and redesign the wiring system. Not only was the cost expected to exceed $6 billion, but it also placed the program two years behind schedule. When this delay was announced, the stock lost one-third of its value. Worse, the copresident of the company was accused in June 2008 of selling his stock before the problems were made public.


Table 1: Summary of systematic biases.

Systematic Bias: Definition

Available data: A data-collection process that is restricted to data that is readily or conveniently available (Bazerman, 1994)

Conservatism: Failure to consider new information or negative feedback (Beach & Connolly, 2005)

Escalation of commitment to a failing course of action: Additional resources allocated to a project that is increasingly unlikely to succeed (Keil & Montealegre, 2000; Keil et al., 2007; Schwenk, 1984; Staw, 1981)

Groupthink: Members of a group under pressure to think alike, and to resist evidence that may threaten their view (Haslam, 2004; Haslam et al., 2006; Janis, 1971)

Illusion of control: When decision makers conclude that they have more control over a situation than an objective evaluation of the situation would suggest (Langer, 1975; Martz, Neil, & Biscaccianti, 2003)

Overconfidence: Level of expressed confidence that is unsupported by the evidence (Bazerman, 1994; Fischoff, Slovic, & Lichtenstein, 1977; Russo & Schoemaker, 1989; Schwenk, 1984)

Recency: Disproportionate degree of emphasis placed on the most recent data (Beach & Connolly, 2005; Chan, 1995)

Selective perception: The situation where several people perceive the same circumstances differently; varies with the ambiguity of the problem or task (Dearborn & Simon, 1958; Russo & Schoemaker, 1989)

Sunk cost: The inability to accept that costs incurred earlier can no longer be recovered and should not be considered a factor in future decisions (Beach & Connolly, 2005; Staw & Ross, 1987)


What had been apparent for a long time, and confirmed in Business Week and the Wall Street Journal, was that Airbus had failed to transform itself from a balkanized organization into an integrated company, and as a result suffered from a convoluted management structure. Managers, moreover, acted to protect their former constituency and made political rather than economic decisions (Gauthier-Villars & Michaels, 2007; Matlack, 2006).

When the research groups were presented with this case, they concluded that several systematic biases could be identified, including selective perception, groupthink, illusion of control, and availability bias.

Selective perception occurs when a problem is viewed from a limited or narrow frame. This, the group concluded, explained why the organization was unable to move toward an integrated enterprise and why, as a practical example, top managers acted to protect their former constituents, unaware that incompatible software would jeopardize the project.

Second, the research groups suggested that the insular nature of the separate organizations created a project culture that limited communication among these units. It created, they maintained, a breeding ground for groupthink, where each group was isolated from others in the organization. As a result, many of the practices and procedures within these organizational units went unchallenged.

Third, while management at the central facilities in Toulouse initially envisioned a centralized organization with some control over its divisions, the inability of these divisions to use the same version of CATIA suggested that this transition was far from effective. Senior management, the research groups concluded, suffered from the illusion of control.

The fourth explanation was attributed to availability bias, in which management at each of the plants was limited to the data that was available to them, and thereby concluded that the project was meeting its local objectives. From their perspective, they were on schedule and within budget.

Coast Guard Maritime Domain Awareness Project

In 2001, the U.S. Department of Homeland Security and the U.S. Coast Guard undertook a project that promised to create the maritime equivalent of an air traffic control system. It was a project that would combine the use of long-range surveillance cameras, radar systems, and information technology to automatically identify vessels as they entered U.S. ports. It was also a project that would span 24 federal agencies responsible for the protection of waterways and coasts.

In the first phase of the project, a complete surveillance system, called Project Hawkeye, was to be developed and implemented for the port of Miami. It would track larger vessels with radar, and smaller vessels, even jet skis, with infrared cameras. Finally, a software system would process the data to determine which vessels posed a security threat and deserved closer scrutiny.

The first test of the system was declared a failure (Lipton, 2006). The cameras were ineffective in tracking the small boats, the radar system proved unreliable when it incorrectly identified waves as boats, the Automated Identification System used for large boats failed to meet its objectives, and the software systems needed to make sense of the data had yet to be installed. Although some data from the system was available to the Coast Guard, they were unable to use it. Because the test failed, the implementation of this system in 35 ports was delayed until at least 2014.

When the research groups presented their conclusions, they focused on the complexity of working with 24 agencies. They concluded that since so many components of the system had failed, project leaders succumbed to the illusion of control bias; these leaders assumed that they had control over the agencies and subcontractors, when in fact they did not. Control issues, the research groups continued, should have been resolved before the project was undertaken, not after.

The groups also suggested that selective perception contributed to the failure. Each separate agency focused only on its immediate task, with apparently little effort directed at integrating their role with that of others; there was no suggestion of a strategic relationship among vendors and agencies.

Columbia Shuttle

On February 1, 2003, seven astronauts perished when their Columbia Shuttle disintegrated as it re-entered the earth’s atmosphere. During launch, a piece of foam insulation, similar in composition to a Styrofoam cup and about the size of a briefcase, broke away from the main propellant tank. The foam struck the left wing, seriously breaching the protective panels on its leading edge (Gehman, 2003).

It was not the first time that a section of foam had broken away during launch. In fact, it had happened on every previous flight. But on each of these flights, the spacecraft reentered the earth’s atmosphere without incident and safely returned home. Management assumed that it was a problem of minor significance and that it did not increase the risk level of the flight (Starbuck & Farjoun, 2005).

Many concluded, certainly just after the 2003 tragedy occurred, that technology was to blame. But a more thorough and comprehensive investigation, undertaken by the Columbia Accident Investigation Board (CAIB), concluded differently. It maintained that management was as much to blame for the failure as was the foam strike. The Board described a culture where, at every juncture, program managers were resistant to new information. It was a culture where people were unwilling to speak up, or if they did speak up, they were convinced they would not be heard. They also concluded that the organizational failure was a product of NASA’s history, culture, and politics (Columbia Accident Investigation Board, 2003).

The study participants concluded that NASA had created a culture in which systematic biases went unchecked. First, the participants identified the recency effect. Foam insulation had broken away on previous flights and caused no harm. To the participants, this was confirmation that these recent uneventful flights had distorted management’s perception of the real danger presented by the problem.

Conservatism was also suggested, because the data from these previous flights was largely ignored by senior managers; they failed to revise their prior belief that the system was operating properly. There was also evidence of overconfidence. During the flight, engineers, concerned that the foam strike may have caused a problem, asked a manager of the Mission Management Team (MMT) to request satellite imagery of the spacecraft. Management, however, was apparently confident that there was no safety issue, and a decision was made against imagery. Had the imagery been authorized, and the damage discovered, the conjecture is that a rescue attempt would have had a reasonable chance of success.

Selective perception was suggested, since management of the shuttle program had shifted from an engineering focus to a managerial focus. This morphed the organization in such a way that engineering problems were less likely to be recognized and more likely to be dominated by schedules and budgets (Gehman, 2003).

Denver Baggage Handling

The new airport in Denver, with a budget of $4.9 billion and originally scheduled for completion in October 1993, would be one of the nation’s largest public works projects of the 1990s (Brooke, 1995). It would cover 53 square miles and include five runways, with future expansion to 12 runways. Due to its size and the necessity to move baggage quickly between flights, the airport would feature a completely automated baggage-handling system.

In April 1995, after many delays, the baggage system project was completed. Reporters were invited to attend a demonstration but instead witnessed a public disaster. Delivery carts were derailed, luggage was torn, and piles of clothes and other personal items were strewn beneath the tracks (Myerson, 1994). After scaling back the scope of the baggage system and making the necessary design changes, the airport finally opened, 16 months behind schedule and almost $2 billion over budget (Keil & Montealegre, 2000).

The baggage-handling project at Denver was more complex than anything that had been attempted before at any airport. Luggage was to be first loaded onto conveyor belts, much as it is in conventional baggage-handling systems. These moving conveyors would then deposit the luggage into moving computer-controlled carts at precisely the right moment. The luggage would then travel at 17 miles per hour to its destination, as much as one mile away. This underground rail system would be completely automated and would include 4,000 baggage carts traveling throughout the airport and under the control of 100 computers. It would be capable of processing up to 1,400 bags per minute.

After the system failed its public test, and after design changes were implemented, the system still had problems. Only United Airlines used it, and then only for outgoing flights. Other carriers turned to a hastily constructed manual system, since no contingency plans had been made should the automated system fail. Finally, in 2005, after a decade of frustrating attempts to solve its problems, the system was abandoned. Under the lease agreement, United Airlines, one of the major stakeholders in the project, would still be liable for $60 million per year for 25 years.

In two papers that raised concerns about the scope and feasibility of the project, deNeufville (1994, 2000) contended that this baggage-handling system represented an enormous technological leap over current practices. He concluded that the problem of accommodating the variable demand made on the system, characterized in the literature as a classic line-balancing problem, would be difficult to solve.

After discussing the case, the research group identified overconfidence as a major factor in the failure. They referenced a quote in the case summary taken from the New York Times: “While the airport was being designed, United insisted that the airport have the fancier baggage handling system, which it contended would sharply reduce delays” (Johnson, 1994). Overconfidence was also suggested in another quote from the New York Times (Myerson, 1994), in which Gene Di Fonso, president of BAE, the prime contractor for the project, declared, “Who would turn down a $193 million contract? You’d expect to have a little trouble for that kind of money.” With widespread support, the group concluded that no one questioned whether it could be done.

They also identified the sunk cost trap. In spite of years of disappointments, when all the airlines, with the exception of United, opted out and used a manual backup baggage-handling system, the project continued. Both the City of Denver and United Airlines had already incurred high costs and were unwilling to disregard these past expenditures, even as their problems persisted and grew worse.

The illusion of control was also identified for its role in keeping the project alive too long. It helped explain why, even after early evidence that a line-balancing problem of this magnitude would be very difficult to solve, and even after an embarrassing preview of the system to reporters, management was still confident that it could fix the problems and control the outcome.


Mars Climate Orbiter and Mars Polar Lander

As part of the NASA Mars Surveyor Program, the Mars Climate Orbiter was to orbit Mars and collect environmental and weather data. But as the spacecraft approached its destination, telemetry signals fell silent, and a $125 million mission failed.

The root cause identified by NASA was the failure to convert between metric and English units. When the fatal error was detected, Noel Hinners, vice-president for flight systems at Lockheed, the company that built the spacecraft, said in disbelief, “It can’t be something that simple that could cause this to happen” (Pollack, 1999). But it was.

Apparently, Lockheed had used pounds during the design of the engines, while NASA scientists, responsible for the operation and flight, thought the data was in metric units.

There were early signs during its flight that something was wrong with the craft’s trajectory, and an internal review later confirmed that it may have been off course for months (Oberg, 1999; Pollack, 1999). Project culture, however, required that engineers prove that something was wrong rather than “prove that everything was right.” This difference in perspective prevented the team from looking into the problem. Edward Weiler, NASA associate administrator for space science, said, “The problem here was not the error; it was the failure of NASA’s systems engineering, and the checks and balances in our processes to detect the error” (Oberg, 1999, p. 34).

The Mars Investigation Panel report identified several contributing factors to the failure: a systems engineering process that did not adequately address the transition from development (Lockheed) to operations (NASA), inadequate communications between project elements, and inadequate staffing and training.

Within a few months of the Orbiter failure, the Mars Polar Lander, a related NASA project with a price tag of $165 million, suffered the same fate. Its flight was uneventful until it began its landing approach. Then, during its descent to the rough terrain of the polar cap, telemetry signals fell silent. With no data to pinpoint the precise cause of failure, the teams investigating the accident speculated that the vehicle’s descent engines shut down prematurely when the Lander was 130 feet above the surface, leaving it unable to slow its descent and sending it crashing into the surface of Mars at about 50 miles per hour. The inappropriate response of its engines was attributed to software glitches (Leary, 2000).

The prevailing culture at NASA of “Better, Faster, and Cheaper,” which defined the period when these projects were in development, has been highlighted many times as the contributing factor behind these failures. Thomas Young, a former NASA official, said that they were trying “to do too much with too little.” He continued, “No one had a sense of how much trouble they were actually in” (Broad, 1999).

The prevailing culture was best expressed in an internal memo written by a laboratory official at the Jet Propulsion Lab: “There might have been some overconfidence, inadequate robustness in our processes, designs or operations, inadequate modeling and simulation of operations, and failure to heed early warnings” (Oberg, 1999, p. 35).

While the trajectory problem associated with the Orbiter and the engine ignition problem associated with the Lander could be characterized as technical, the Mars Climate Orbiter Failure Board Report (2000) said that management failures were also to blame. They found that these projects suffered from a lack of senior management involvement and too much reliance on inexperienced project managers. The Board also criticized the strategy where project managers in one organization (Lockheed) were responsible for development and a separate organization (NASA) was responsible for operations after launch.

The study group first identified the sunk cost trap. If the orbiter did not launch on schedule, it would have to wait several months before its next opportunity to launch. With launch windows far apart, and with budgets unable to tolerate a substantial delay, managers were under pressure to meet the deadline; it was important not to “waste” the effort put into the project to that point.

Selective perception bias was identified and used to explain why the engineers at the Jet Propulsion Lab, the design team, failed to coordinate with the operational team at NASA. In large-scale complex projects such as the Orbiter and Lander, with countless activities, contractors, and suppliers, it is very possible that teams may take a narrow view of their own activities. The risk is that the work of one team may be incompatible with the work of another.

Conservatism, the group continued, explained why engineers failed to take action when they noticed that the trajectory of the spacecraft was off. They even held a meeting in Denver to address the issue, but it was never resolved. Even as the spacecraft approached its destination and data showed that it was drifting off course, controllers largely ignored the real data and assumed it was on course (Oberg, 1999).

Merck Vioxx

In 2000, the New England Journal of Medicine published an article suggesting that Merck misrepresented clinical trial data on the risks of Vioxx, a drug used to treat arthritis pain. Suspicions were raised again when the Journal of the American Medical Association published a paper in 2001 finding that those who took Vioxx were more than five times more likely to experience a cardiac event than those taking a commonly used over-the-counter anti-inflammatory drug, Naproxen. Merck denied these claims, insisting the findings were “flawed” (Topol, 2004). Then, under increasing pressure, they revised the Vioxx label in 2002 to reflect these added risks.


During this period, Merck had undertaken a separate study focusing on the use of the drug in treating colon polyps. New data from this trial simply confirmed the risks that had been raised earlier.

Shortly thereafter, on September 30, 2004, five years after it had been introduced to the market, after 84 million people had taken the drug, and after three years of denying the drug could induce heart attacks and strokes, it was pulled from the shelves (Topol, 2004).

The legal consequences were significant. Over 27,000 claims were filed contending that certain incriminating data were withheld during the FDA review process. In an early trial, a New Jersey jury ruled unanimously in March 2007 that Merck committed consumer fraud by intentionally suppressing, concealing, or omitting information on the risks of Vioxx. Eventually, Merck proposed an out-of-court settlement to the remaining complainants at a cost of over $5 billion (Berenson, 2007).

The study group identified organization and project culture as important contributors to the Vioxx project failure. They cited a Business Week article contending that Richard Clark, CEO, had watched the company degenerate into a “collection of fiefdoms” more focused on their own agendas than on the company’s agenda (Weintraub, 2007).

Financial pressures, the group contended, also shaped the culture. Drug discovery is a costly and lengthy process, fraught with risk. The average cost to bring a drug to market exceeds $1 billion. As drug trials proceeded from animal to human trials and eventually to FDA review, it was not unreasonable to conclude that the pressure to continue with the project increased as investment increased.

These cultural problems, together with financial pressures, continued, creating a breeding ground for systematic biases to emerge. The sunk cost trap was identified as the dominant bias. After incurring nearly $1 billion to develop the drug, after generating $2.5 billion in sales during 2003, it was not difficult to understand why the company resisted pressure to remove Vioxx from the market.

While the sunk cost trap dominated, conservatism was also identified as contributing to the failure, because Merck suppressed early data suggesting that the drug could have serious and sometimes tragic side effects.

Microsoft Xbox 360

When Microsoft rushed its Xbox 360 video game console to market in November 2005, it had a one-year advantage over Sony and Nintendo. By 2007, Microsoft had sold over 11.6 million units at $279 to $479, depending on configuration.

Unresolved issues plagued the project from the beginning. When journalists and reviewers were invited to try the Xbox 360 in 2005, before it became available on store shelves, they encountered problems connecting it to the Internet (Croal, 2007). Shortly after it was introduced to the public, users complained that the console damaged game disks, so much that they could no longer be used (Cliff, 2007). In 2005, Microsoft recalled the power cords, concerned that they posed a fire hazard (Wolverton & Takahashi, 2007). Then, in December 2006, in an apparent response to these and other issues, Microsoft extended the warranty from 90 days to one year.

But problems persisted. Blogs and forums complained about the “Red Ring of Death,” referring to a string of three lights that illuminate on the console when a serious malfunction occurs. One survey found that the return rate was 33% (Cliff, 2007).

Then, in July 2007, Robbie Bach, president of Microsoft’s Entertainment and Devices Division, said that there had been an “unacceptable high number of repairs” (Taub, 2007). Shortly thereafter, Microsoft announced an extension of the warranty from one to three years at an expected cost of $1 billion. This represented about $100 for every Xbox sold since its introduction in 2005.

Later in the same month Microsoft announced that its top gaming executive, Peter Moore, was leaving the company, but denied that his departure was related to the Xbox’s engineering problems (Wingfield, 2007).

Three systematic biases were identified by the group. The first was conservatism. In the face of a continuous stream of product returns and customer complaints, those who were responsible for the project were unwilling to acknowledge that the problem was serious, that customer satisfaction and loyalty were deteriorating rapidly, that the product needed to be redesigned, and that customer satisfaction needed to be addressed.

It was also suggested that management fell prey to the sunk cost trap. Considerable investment in the product had already been made, sales were strong, and since the division had yet to turn a profit, there was pressure to continue at any cost. Returning to earlier stages of design, issuing a recall for the defective units, and replacing them with new units were apparently not realistic options.

Because Microsoft declined to comment on the exact cause of the problem, which many suspected was tied to either a power cord or component that was overheating, it was concluded that groupthink was also an issue. The only public comment, made by Robbie Bach, president of the Entertainment and Devices Division responsible for the Xbox, was that the company made manufacturing and production changes that should reduce hardware lockups (Taub, 2007). It was suggested by the group that this could be interpreted as protecting the company to prevent exposing its failures.

New York City Subway Communications System

In New York City, police officers who worked underground in the city’s extensive rapid transit subway system were routinely unable to communicate with officers working the streets aboveground. Incompatible systems were at fault. Not that this was a new problem to law enforcement and emergency organizations in New York City. On September 11, 2001, for example, it was not possible for police to communicate with firefighters and warn them that the World Trade Center towers were in jeopardy of collapsing.

As early as the 1990s, preliminary plans for an integrated communication system had been proposed. In 1999, a contract was signed with two firms. The project was scheduled for completion in 2004, with an approved budget of $115 million.

In 2001, a report warned of an interference problem that could jeopardize the ability of the systems to work together. Rather than return to the design stage and study the validity of this concern, subcontractors continued with the project. It was completed in October 2007, but during the trial of the system it became apparent that interference did indeed create serious communication problems. As a result, implementation was halted. Fixing the problem was expected to increase the cost of the project to $210 million.

The group linked the failure of the New York City Subway Communications Project to conservatism, overconfidence, and illusion of control. Conservatism was suggested when the project managers failed to take the interference warning seriously enough to change their plans early in the project. Overconfidence was also suggested to explain why they ignored the warning: project managers were apparently convinced that the proposed design would work or that all problems could eventually be solved.

Illusion of control was also identified. The project management team presumably believed that they could fix the interference problem later, that somehow they had enough control to assure a successful outcome. Another explanation is that they believed that the vendor would take responsibility to solve the problem.

Discussion

A summary of the biases identified by the 22 participants can be found in Table 2. Four biases were mentioned more frequently than the others. Conservatism, or the failure to consider new information, was mentioned for the Columbia, Merck, Microsoft, and New York City Subway projects. Illusion of control was mentioned for the Airbus, Coast Guard, Denver Baggage, and New York City Subway projects. Selective perception was mentioned for the Airbus, Coast Guard, Mars, and Merck projects. Sunk cost was mentioned for the Denver Baggage, Mars, Merck, and Microsoft projects. Both groupthink and overconfidence were mentioned somewhat less. Two biases, recency and available data, were mentioned only once, while escalation of commitment was not mentioned at all.

The results from this small sample prevent making conclusive statements about the dominant biases in project management, but the data begins to suggest that conservatism, illusion of control, selective perception, and sunk cost may be more common than the other biases. Whether they were identified more frequently in this study because they were more easily understood by the participants or whether they actually contribute more than the others to project failure is difficult to conclude at this juncture. At the other extreme, the study suggests that escalation of commitment, available data, and recency are more difficult to identify and may not contribute significantly to project failure.
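The tallies behind these observations can be reproduced directly from Table 2. The short Python sketch below is not part of the original study; it simply encodes the case-to-bias mapping reported in Table 2 and counts how often each bias was identified, which is the arithmetic underlying the frequencies discussed above.

    # A minimal sketch (not from the article): tally how often each systematic
    # bias was identified across the eight cases, using the mapping in Table 2.
    from collections import Counter

    case_biases = {
        "Airbus A380": ["available data", "groupthink", "illusion of control", "selective perception"],
        "Coast Guard Maritime": ["illusion of control", "selective perception"],
        "Columbia Shuttle": ["conservatism", "overconfidence", "recency"],
        "Denver Baggage": ["illusion of control", "overconfidence", "sunk cost"],
        "Mars Orbiter and Lander": ["selective perception", "sunk cost"],
        "Merck Vioxx": ["conservatism", "selective perception", "sunk cost"],
        "Microsoft Xbox 360": ["conservatism", "groupthink", "sunk cost"],
        "New York City Subway": ["conservatism", "illusion of control"],
    }

    # Count how many cases each bias appears in.
    counts = Counter(bias for biases in case_biases.values() for bias in biases)

    # Conservatism, illusion of control, selective perception, and sunk cost each
    # appear four times; escalation of commitment never appears.
    for bias, frequency in counts.most_common():
        print(bias, frequency)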

It is rather surprising that escalation was not mentioned at all, because the Denver Baggage, Coast Guard, and the New York City Subway projects required additional funding after evidence became available that these projects were in trouble. One possible explanation is that the 22 participants behaved somewhat like the project managers in these ill-fated projects and concluded that allocating additional funds to prevent failure was a reasonable strategy and did not constitute escalation. Another explanation is that escalation and the other biases mentioned less frequently, such as available data and recency, are very difficult to identify. Recognizing those biases may require inside information, usually difficult to obtain.

Table 2: Summary of biases affecting each case study.

Airbus A380: Available data, Groupthink, Illusion of control, Selective perception
Coast Guard Maritime: Illusion of control, Selective perception
Columbia Shuttle: Conservatism, Overconfidence, Recency
Denver Baggage: Illusion of control, Overconfidence, Sunk cost
Mars Orbiter and Lander: Selective perception, Sunk cost
Merck Vioxx: Conservatism, Selective perception, Sunk cost
Microsoft Xbox 360: Conservatism, Groupthink, Sunk cost
New York City Subway: Conservatism, Illusion of control

In addition to using systematic biases as a vocabulary for understanding failures, it is also useful to consider the role of culture, as suggested in Figure 1, in creating an environment within which these biases may emerge.

Culture does affect outcome. Hansen and Wernerfelt (1989) showed that organizational factors explain about twice as much of the variance in profit as do economic factors. Henrie and Sousa-Poza (2005), in a comprehensive review of the literature, suggested that culture may be a significant factor in project failure. They also contended that culture is not widely reported in the literature, nor have there been many attempts to measure it. Ajmal and Koskinen (2008) also concluded that the failure of many projects can be attributed to organizational culture, and that a significant role of the project manager is to merge several different organizational and professional cultures into one project culture.

To link issues of organizational and project culture to systematic biases requires that organizational culture be measured. Five dimensions were identified in van den Berg and Wilderom (2004), including autonomy, external orientation, interdepartmental coordination, human resource orientation, and improvement orientation. Livari and Huisman (2007) used the Competing Values Model to measure culture. That model includes four dimensions:

  • internal focus,
  • external focus,
  • stability, and
  • change.

Several of the dimensions used in van den Berg and Wilderom overlap with the Competing Values Model.

The nine systematic biases used in this article were mapped onto the Competing Values Model. The results are summarized in Table 3. For example, the available data bias suggests an organization and project culture characterized by an internal focus and a concern that external data may lead to unwelcomed changes. Airbus is an example of a case study in which this bias was observed. It can be concluded that the Airbus project culture, at the very least, could be characterized as having a preference for an internal focus and stability. One can hypothesize from Table 3 that failed projects, in general, can be associated with organizational and project cultures characterized by an internal focus and a preference for stability, not change.

Table 3: Cognitive biases mapped onto the Competing Values Model.

Cognitive Bias: Competing Values Model; Case in Which Cognitive Bias Was Observed

Available data: Internal focus, stability; Airbus
Conservatism: Internal focus, stability; Columbia, Merck, Microsoft, New York City Subway
Escalation of commitment: Internal focus, stability; (none observed)
Groupthink: Internal focus, stability; Airbus, Microsoft
Illusion of control: Internal focus, stability; Airbus, Coast Guard, Denver Baggage, New York City Subway
Overconfidence: Internal focus, stability; Columbia, Denver Baggage
Recency: Internal focus, stability; Columbia
Selective perception: Internal focus, stability; Airbus, Coast Guard, Mars, Merck
Sunk cost: Stability; Denver Baggage, Mars, Merck, Microsoft

Note. The results suggest that the failed projects studied in this article reflect a project culture that can be characterized as having a preference for an internal focus and stability.

Summary

There is a long tradition in the organizational psychology and decision-making literature that focuses on the study of systematic biases. There is also a long tradition in the project management literature that focuses on project failures. The objective of this article has been to determine if bringing these two traditions together could prove useful in learning more about project failures and then in understanding how culture may provide the environment within which these biases may emerge.

Twenty-two professionals participated in the study. They were introduced to systematic biases and then asked to determine which of these biases could help explain eight failed projects. Their responses suggest that the vocabulary of systematic biases could prove very useful in understanding how the rational processes of project management can be derailed by the human decision-making process.

What this result underscores is that the skills and techniques expressed in the rational view of project management, regardless of how aggressively they are pursued, may be insufficient to assure project success. If indeed, as suggested by the literature, systematic biases are common in the human decision-making process, then there are fundamental reasons why project failure should not be an unexpected result.

The study also provides some insight into the organizational and project cultures of failed projects. It suggests that when these systematic biases are overlaid on the Competing Values Model, failed projects appear to be related to organizational and project cultures characterized by an internal focus and stability. This suggests that those organizations protecting their own structures and management processes, as well as those organizations resisting change and dismissing external threats, may have created an environment in which systematic biases should not be unexpected, even when the application of the traditional tools of project management is vigorously enforced.

Although the purpose of this study was to explore the usefulness of systematic biases in understanding failed projects, additional work needs to be undertaken to validate the framework, results, and theories expressed in this article. This might include a larger number of participants, greater participant training to better understand the systematic biases, the use of survey instruments to undertake more focused empirical studies, or additional in-depth case studies.

Unfortunately, studies of project failure, including the failures summarized in this study, are limited by the extent to which organizations are willing to reveal and discuss their failures. Because most are unwilling to do so, we are often limited to public projects or those projects whose products or services are subjected to government regulation, such as the Challenger and Vioxx failures. Regardless of whether an organization engages in a postmortem analysis within the organization or whether it is an independent study of the failed project, much is still left hidden. Sometimes it is hidden to protect organizations, teams, project managers, and careers; at other times to protect brands, market share, or investments.

Because organizational culture and project culture may play an important role in creating an environment within which systematic biases emerge, and since culture, as Hofstede (1999) contended, is difficult and slow to change, a logical strategy for some organizations would be to change management practices, which in turn may set into motion events that may minimize the emergence of systematic biases. Two examples are worth mentioning.

When Boeing established a radically new approach to project management for the 777 project, it hoped that it would improve the outdated engineering and management processes that had been in place for decades. At the center of this approach was an open culture requiring teams to include representation from engineering, production, management, suppliers, and customers. It was a culture that did not discourage conflict, and if a suggestion was ignored, team members were encouraged to take it to the next highest level. It was a radically new approach to project management at Boeing and produced one of the most successful aircraft in its history (Cohen, 2000).

Another strategy is to create a culture that reduces the fear of failure (Staw & Ross, 1987). Merck, recognizing that it may have gone too far in emphasizing success and punishing failure, is now promising stock options to scientists who terminate unpromising projects. They say it is not the loss they are rewarding, but the scientist’s willingness to accept the fact that the project lacks promise and that he or she is willing to move on (Weintraub, 2007). Certainly, one advantage of this cultural shift is that managers are less likely to succumb to the sunk cost trap.

In conclusion, the real objective of this study was not to conclusively relate specific systematic biases to project failures. Rather, it was to suggest a vocabulary that could prove useful by providing insight into why projects fail, as well as understanding how project culture may inadvertently create an environment within which these very natural biases emerge. The evidence from this study suggests that this vocabulary is worth further study.

References

Ajmal, M., & Koskinen, K. (2008). Knowledge transfer in project-based organizations: An organizational culture perspective. Project Management Journal, 39(1), 7–15.

Argyris, C. (1999). On organizational learning. Malden, MA: Blackwell.

Bazerman, M. (1994). Judgment in managerial decision making. New York: Wiley.

Beach, L. R., & Connolly, T. (2005). The psychology of decision making. London: Sage.

Belassi, W., Kondra, A., & Tukel, O. (2007). New product development projects: The effects of organizational culture. Project Management Journal, 38(4), 12–24.

Berenson, A. (2007, November 9). Merck agrees to pay $4.85 billion for Vioxx claims. New York Times. Retrieved September 25, 2008, from http://www.nytimes.com/2007/11/09/business/09cnd-merck.html

Broad, W. (1999, December 8). Experts warn of Mars on the cheap. New York Times. Retrieved September 25, 2008, from http://query.nytimes.com/gst/fullpage.html?res=9401E7DC1E3EF93BA35751C1A96F958260

Brooke, J. (1995, October 22). New Denver airport up and running, but takeoff is none too soon. New York Times. Retrieved September 25, 2008, from http://query.nytimes.com/gst/fullpage.html?res=9C06E0DF1E39F931A15753C1A963958260

Chan, M. K. (1995). The moderating effects of cognitive style and recency effects on the auditor’s belief revision process. Managerial Auditing Journal, 10(9), 22–28.

Cliff, E. (2007, September 9). Microsoft’s billion-dollar fix. Business Week, p. 29.

Cohen, I. (2000). Philip Condit and the Boeing 777: From design and development to production and sales. In H. Kerzner, Project management case studies (pp. 81–104). New York: Wiley.

Columbia Accident Investigation Board. (2003). Retrieved May 11, 2008, from http://www.nasa.gov/columbia/home/CAIB_Vol1.html

Cooke-Davies, T. J. (2002). The real success factors in projects. International Journal of Project Management, 23(3), 185–190.

Croal, N. (2007, July 16). Beware the red rings of death. Newsweek. Retrieved September 25, 2008, from http://findarticles.com/p/articles/mi_kmnew/is_/ai_n19380767

Dearborn, D., & Simon, H. (1958). Selective perception: A note on the departmental identification of executives. Sociometry, 21, 140–144.

deNeufville, R. (1994). The baggage handling system at Denver: Prospects and lessons. Journal of Air Transport Management, 1(4), 229–236.

deNeufville, R. (2000, April 12). The baggage system at Denver: Prospects and lessons. Retrieved April 12, 2008, from http://ardent.mit.edu/airports/ASP_papers/Bag%20System%20at%20Denver.pdf

Eden, C., Ackermann, F., & Williams, T. (2005). The amoebic growth of project costs. Project Management Journal, 36(2), 15–27.

Fischoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology, 3, 552–564.

Fortune, J., & White, D. (2006). Framing of project critical success factors by a systems model. Journal of Project Management, 24(1), 53–65.

Gauthier-Villars, D., & Michaels, D. (2007, July 9). EADS considers a simpler management structure. Wall Street Journal. Retrieved September 25, 2008, from http://online.wsj.com/article/SB118393055196560256.html

Gehman, H. (2003). The Columbia Shuttle accident: Study in safety culture. INPO CEO Conference, Atlanta.

Gray, C., & Larson, E. (2006). Project management. New York: McGraw-Hill Irwin.

Hammond, J., Keeney, R., & Raiffa, H. (2006). The hidden traps in decision making. Harvard Business Review, 84(1), 118–126.

Hansen, G., & Wernerfelt, B. (1989). Determinants of firm performance: The relative importance of economic and organizational factors. Strategic Management Journal, 10, 399–411.

Haslam, S. (2004). Psychology in organizations. London: Sage.

Haslam, S., Ryan, M., Postmes, T., Spears, R., Jetten, J., & Webley, P. (2006). Sticking to our guns: Social identity as a basis for maintenance of commitment to faltering organizational projects. Journal of Organizational Behavior, 27, 607–628.

Henrie, M., & Sousa-Poza, A. (2005). Project management: A cultural literary review. Project Management Journal, 36(2), 5–14.

Hofstede, G. (1997). Cultures and organizations: Software of the mind. New York: McGraw-Hill.

Hofstede, G. (1999). The universal and the specific in 21st-century global management. Organizational Dynamics, 28(1), 34–44.

Hyvari, I. (2006). Success of projects in different organizational conditions. Project Management Journal, 37(4), 31–41.

Janis, I. (1971). Groupthink and group dynamics: A social psychological analysis of defective policy decisions. Policy Studies Journal, 73(2), 19–25.

Johnson, D. (1994, September 25). Late already, Denver airport faces more delays. New York Times. Retrieved September 25, 2008, from http://query.nytimes.com/gst/fullpage.html?res=9502E7DB123AF936A1575AC0A962958260

Keil, M., & Montealegre, R. (2000). Cutting your losses: Extricating your organization when a big project goes awry. Sloan Management Review, 41(3), 55–68.

Keil, M., Depledge, G., & Rai, A. (2007). Escalation: The role of problem recognition and systematic bias. Decision Sciences, 38, 391–421.

Langer, E. (1975). Illusion of control. Journal of Personality and Social Psychology, 32, 311–328.

Leary, W. (2000, March 29). Poor management by NASA is blamed for Mars failure. New York Times. Retrieved September 25, 2008, from http://partners.nytimes.com/library/national/science/032900sci-nasa-mars.html

Lipton, E. (2006, December 30). Efforts by Coast Guard for security fall short. New York Times. Retrieved September 25, 2008, from http://query.nytimes.com/gst/fullpage.html?res=9B0DE1DE1630F933A05751C1A9609C8B63

Livari, J., & Huisman, M. (2007). The relationship between organizational culture and the deployment of systems development methodologies. MIS Quarterly, 31(1), 35–58.

Mars Climate Orbiter Failure Board Report. (2000). Retrieved April 12, 2008, from http://mpfwww.jpl.nasa.gov/msp98/news/mco991110.html

Martz, B., Neil, T., & Biscaccianti, A. (2003). TradeSmith: An exercise to demonstrate the illusion of control in decision making. Decision Sciences Journal of Innovative Education, 1, 273–287.

Matlack, C. (2006, October 5). Airbus: First blame the software. Business Week Online. Retrieved September 25, 2008, from http://www.businessweek.com/globalbiz/content/oct2006/gb20061005_846432.htm?campaign_id=rss_daily

Myerson, A. (1994, April 18). Automation off course in Denver. New York Times, pp. D1–D2.


Oberg, J. (1999, December). Why the Mars probe went off course. IEEE Spectrum, 36(12), 34–39.

Pinto, J., & Slevin, D. (1987). Critical success factors in successful project implementation. IEEE Transactions on Engineering Management, 34(1), 22–28.

Pollack, A. (1999, October 1). Missing what didn’t add up, NASA subtracted an orbiter. New York Times. Retrieved September 25, 2008, from http://query.nytimes.com/gst/fullpage.html?res=9F0CE0DC133EF932A35753C1A96F958260

Project Management Institute (PMI). (2004). A guide to the project manage- ment body of knowledge—Third edition. Newtown Square, PA: Author.

Roberto, M. (2002). Lessons from Everest: The interaction of cognitive bias, psychological safety, and system complexity. California Management Review, 45(1), 136–158.

Robertson, S., & Williams, T. (2006). Understanding project failure: Using cognitive mapping in an insurance project. Project Management Journal, 37(4), 55–71.

Russo, J., & Schoemaker, P. (1989). Decision traps. New York: Doubleday.

Schein, E. (1985). Organizational culture and leadership—A dynamic view. London: Jossey-Bass.

Schwenk, C. (1984). Cognitive simplification processes in strategic decision making. Strategic Management Journal, 5(2), 111–128.

Shore, B., & Cross, B. (2005). Exploring the role of national culture in the management of large-scale international science projects. International Journal of Project Management, 23, 55–64.

Siggelkow, N. (2007). Persuasion with case studies. Academy of Management Journal, 50(1), 20–24.

Simon, H. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99–118.

Skulmoski, G., Hartman, F., & Krahn, J. (2007). The Delphi Method for graduate research. Journal of Information Technology Education, 6, 21–42.

Starbuck, W., & Farjoun, M. (2005). Organizations at the limit: Lessons from the Columbia disaster. Malden, MA: Blackwell.

Staw, B. (1981). The escalation of commitment to a course of action. Academy of Management Review, 6, 577–587.

Staw, B., & Ross, J. (1987). Knowing when to pull the plug. Harvard Business Review, 65(2), 68–74.

Sutterfield, J., Friday-Stroud, S. S., & Shivers-Blackwell, S. (2006). A case study of project and stakeholder management failures: Lessons learned. Project Management Journal, 37(5), 26–35.

Taub, E. (2007, July 6). Microsoft to spend $1.15 billion for Xbox repairs. New York Times. Retrieved September 25, 2008, from http://www.nytimes.com/2007/07/06/business/06soft.html

Thiry, M., & Deguire, M. (2007). Recent developments in project-based organizations. International Journal of Project Management, 25, 649–658.

Topol, E. (2004, October 2). Good riddance to a bad drug. New York Times. Retrieved September 25, 2008, from http://www.nytimes.com/2004/10/02/opinion/02topol.html

Turner, J., & Müller, R. (2006). Choosing appropriate project managers: Matching their leadership style to the type of project. Newtown Square, PA: Project Management Institute.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

van den Berg, P., & Wilderom, C. (2004). Defining, measuring, and comparing organisational cultures. Applied Psychology: An International Review, 53, 570–582.

van Marrewijk, A. (2007). Managing project culture: The case of Environ megaproject. International Journal of Project Management, 25, 290–299.

Wang, X., & Liu, L. (2007). Cultural barriers to the use of Western project management in Chinese enterprises: Some empirical evidence from Yunnan Province. Project Management Journal, 38(3), 61–73.

Weintraub, A. (2007, July 30). Is Merck’s medicine working? Business Week, p. 67.

Wingfield, N. (2007, July 18). Videogame shift sparks shuffle. Wall Street Journal. Retrieved September 25, 2008, from http://online.wsj.com/article/SB118470316371569255.html

Wolverton, T., & Takahashi, D. (2007, July 6). Microsoft’s costly Xbox problem. San Jose Mercury News. Retrieved September 25, 2008, from http://www.mercurynews.com

Barry Shore is a professor of decision sciences in the Whittemore School of Business and Economics at the University of New Hampshire. He holds a BS in electrical engineering from Tufts University, an MBA from the University of Massachusetts, and a PhD from the University of Wisconsin. Prior to becoming a faculty member, he worked for Boeing, General Electric, and Hewlett-Packard. He has written four books and authored over 100 peer-reviewed research articles that have appeared in such journals as the Communications of the ACM, International Journal of Project Management, International Journal of Technology Management, and Engineering Management Journal. In 2007, he was named a fellow of the Global Information Technology Management Association. He has been a consultant to a wide range of organizations and is a member of the faculty in the University of Naples Federico Secundo’s (Italy) PhD program in science and technology management.
