Why IT projects still fail...
Despite new methodologies and management techniques meant to head off spectacular failures, critical technical initiatives still fall flat at an alarming rate. Here’s how IT can learn from its mistakes.
In the age of agile development, devops and related management techniques, is IT project failure even a thing anymore? The answer, sadly, is yes.
In the past, IT failures often meant high-priced flops, with large-scale software implementations running far too long and going way over budget. Those failures still can and do happen. Case in point: IBM’s never-completed $110 million upgrade to the State of Pennsylvania’s unemployment compensation system.
But IT failure today is frequently different than it was in the past, as agile, devops, continuous delivery and the fail-fast movement have changed the nature of how IT handles projects. These iterative management methodologies and philosophies are meant to minimize the chances of projects going spectacularly awry, but the fact of the matter is that IT projects still fail, just in new and sometimes more insidious ways.
Here’s what seven IT leaders and analysts say about the state of IT project failure today.
A cautionary tale
Chris McMasters, currently the CIO for the City of Corona, Calif., cites the case of an 18-month-long implementation of a SaaS customer relationship management system a few years ago at a previous employer, where IT worked with the sales department leadership to understand business needs and define requirements.
“We thought we had all the [necessary] buy-in and knew what the outcome was supposed to be, but we got to project end and the sales force didn’t want it. There was an extreme amount of resistance. Top management was on board, but there was some distrust among the users,” he says.
The cloud-based CRM was declared a bust and scrapped — showing that even when projects are on time and on budget, they can still fail.
“Failure can take many different shapes and forms,” McMasters says. “It doesn’t matter how shiny the product is or if it does a thousand things. To me, if we’re not providing the outcome the end user expects, that’s failure.”
McMasters says success would have been more likely had IT focused more on marketing the benefits of the new system rather than on project execution. “We weren’t as engaged as we could have been. We could have teamed up better with the business,” he says.
As a failed project, that CRM implementation hardly stands alone. The Project Management Institute’s 2017 Pulse of the Profession report found that 28 percent of strategic initiatives overseen by survey respondents were deemed outright failures. Some 37 percent of the more than 3,000 project management professionals who responded cited a lack of clearly defined and/or achievable milestones and objectives to measure progress as the cause of failure, followed by poor communication (19 percent), lack of communication by senior management (18 percent), employee resistance (14 percent) and insufficient funding (9 percent).
And speaking of money, the same report found that due to poor project performance, organizations waste an average of $97 million for every $1 billion invested. That’s better than 2016’s $122 million in waste, but still a significant amount of cash lost.
Factors for failure
Despite new methodologies and management techniques meant to head off spectacular failures, many of the factors that traditionally put IT projects at risk of failure are still present in the enterprise, experts say. Inadequate resources, overly aggressive timelines, underestimated costs, overlooked requirements, unanticipated complications, poor governance and human mistakes such as bad code can all lead to project failure.
PwC’s 2017 Global Digital IQ Survey polled 2,216 business and IT leaders from 53 countries and asked them what hinders digital transformation. Some 64 percent of respondents said lack of collaboration between IT and business is to blame, 58 percent cited inflexible or slow processes, 41 percent listed lack of integration of new and existing technologies, 38 percent named outdated technologies and 37 percent put down lack of properly skilled teams.
Meanwhile, the criteria used to judge whether a project is successful — or a failure — have been expanding to reflect how critical today’s technology initiatives are, experts say. PMI’s 2017 Pulse of the Profession report states, “the definition of success is evolving. The traditional measures of scope, time, and cost are no longer sufficient in today’s competitive environment. The ability of projects to deliver what they set out to do — the expected benefits — is just as important.”
The study identified organizations with 80 percent or more of their projects being completed on time and on budget while also meeting original goals and business intent; it classified this group as “champions.” The report also highlighted the fact that these champions had invested in several common areas, including the leadership skills of project professionals, benefits realization management, project management offices, actively engaged executives, and agile project management practices.
Stephen Elliot, an analyst with research firm IDC, estimates that 30 percent to 35 percent of IT projects could be counted as failures. Elliot attributes many such failures to changes in business priorities or objectives. That means, he says, that the technology works fine but it doesn’t deliver the results currently desired by the business. In those cases, the lack of effective communication and collaboration — “where business decisions are made but aren’t passed on” — plays a pivotal role in tanking IT projects.
“In this more customer-centric world, I would define ‘failure’ as being that your company’s reputation or profits or revenue has been negatively impacted,” he says. “Failure is still real, but it is more associated with business processes than with a true technology failure because someone didn’t [for example] check a configuration on a key router.”
Others concur. “If you slap something together that’s on time and on budget, but it doesn’t do what customers want or what users need, then it doesn’t matter,” says James Stanger, senior director of products at IT certification provider CompTIA.
Agile and automation to the rescue?
Some trends, notably agile and devops methodologies, help mitigate the potential for wholesale project failures in modern IT shops.
“Theoretically, this new way of writing code, in small chunks, automating the testing of it, and iterating until it’s clean and then moving on to the next chunk, provides [a safety net]. You’re checking for errors more often and therefore the output should be higher quality — so when it’s done properly, it shouldn’t break as much. You can get out newer features faster and still reduce high failure points,” Elliot says.
The increasing use of automation in development and testing also helps mitigate the potential for failure. As Elliot says, “Most failures today are still associated with the human element — bad code, a network configuration that caused an outage, bad load balancing. This stuff is really complex, and mistakes are made. But as more and more automation comes through, there should be fewer human errors, especially in scripting, application deployments and networking.”
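The write-test-iterate loop described above can be sketched in a few lines. This is a hypothetical illustration, not code from any project cited here: a small, self-contained "chunk" of logic paired with an automated check that runs on every change, so a mistake surfaces immediately rather than at deployment time.

```python
# Minimal sketch of iterating on code in small, automatically tested chunks.
# The function and its checks are hypothetical examples for illustration.

def normalize_hostname(name: str) -> str:
    """Normalize a hostname before it is written to a network config."""
    return name.strip().lower().rstrip(".")

def test_normalize_hostname():
    # Automated checks like these would run on every commit in a CI pipeline,
    # catching the kind of human error Elliot describes before it ships.
    assert normalize_hostname("  Router-01.example.COM. ") == "router-01.example.com"
    assert normalize_hostname("core1") == "core1"

test_normalize_hostname()
print("all checks passed")
```

In practice a test runner such as pytest would discover and execute checks like `test_normalize_hostname` automatically; the point is that each small chunk is verified before the team moves on to the next one.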
Changes in organizational hierarchy help mitigate risk, too. Executives from various units are expected to partner together, to move quickly and adjust on the fly; in fact, leading organizations allow for more autonomy to course-correct to enable this culture, according to analysts and consultants.
“Today people are much more willing to say, ‘Let’s redefine it as we go.’ That’s one of the biggest changes today vs. 20 years ago,” Stanger says.
McMasters says he tries to manage risk of failure by focusing more on what a project is supposed to achieve. He employs devops principles to break down work into smaller pieces where problems can surface sooner and runs pilot programs where ideas can safely backfire (thereby allowing innovation without big impacts on the business).
He also credits “the strong project management movement that has moved through IT” for helping mitigate the risk of failure in his IT department as well as others. He says this movement has helped tech leaders and their business unit peers to better articulate what projects should accomplish — and what they won’t. That has shifted the definition of success away from on-time, on-budget criteria to meeting business objectives.
Fast failure as a tool
Meanwhile, shifting mindsets about failure in the enterprise have helped reshape organizational attitudes around risk. “Now it’s OK to fail, as long as you’re learning from it,” Elliot says. “There are some companies that really appreciate failure as long as things are getting better and people are learning from them and getting wiser about what they should or shouldn’t be doing.”
Of course, Elliot and others note that those organizations that are more accepting of failure also work hard to mitigate the risk, using sandbox environments, pilots and iterative development to limit the amount of damage that can happen if something goes amiss. “They’re mitigating the risk of something big happening at the end,” McMasters says.
Reed A. Sheard, vice president for college advancement and CIO at Westmont College, has seen how that cultural shift can have a positive result. He says he and other CIOs understand that not all projects are equal; each carries different potential benefits if it succeeds and different consequences if it fails. With that in mind, he says, “we make judgments on where it’s OK to fail and where it’s not.”
He cites two recent initiatives to illustrate his point. The first was the implementation of a new platform to manage the school’s admissions process, a critical undertaking where failure to meet user requirements and specific deadlines would have been devastating, as admissions is one of the organization’s core functions. Sheard says he was actively engaged in the project, evaluating progress and the resources that went into it. It went live, successfully, in July 2015.
The second initiative centered on delivering a platform that would allow alumni to network virtually. Sheard says his team tried to build a secure, user-friendly platform, but in the end couldn’t accomplish those objectives. Staffers struggled with securing the system, authenticating users and populating user profiles with the right information.
He says he was OK with that project’s failure because they learned a lot in the attempt. “We became expert by some of our failures. So I was comfortable walking away from two years of work because we got super-smart [about balancing security and usability],” he says, adding that his team used the newfound knowledge to work with a provider to ultimately deliver a top-notch product.
Others see that mindset — the willingness to sometimes fail — as a critical component for organizations that want to innovate and remain competitive.
“If you’re constantly learning and constantly improving, then you could be constantly failing,” says Terry Coull, a principal at WGroup, a Radnor, Pa., firm that helps organizations optimize business performance and create value. “For those in that continuum, it means you’re adjusting, adapting, you’re flexible — so you’re working in small teams and delivering. So your project failure isn’t as large as it used to be. In the old waterfall world, you could lose a year before you find a failure.”
Risks of failure remain
These new corporate cultural trends and IT methodologies certainly don’t guarantee success or fully guard against project failure. In fact, some say there are elements of the modern IT shop that could even exacerbate the kinds of problems that can take down a project.
Some experts point to potential problems with the agile and devops methodologies themselves. “You solve the smaller problems, but then you [build] these large integrated systems where the large problems aren’t visible until you hit scale,” says Marshall Van Alstyne, a professor and chair of information systems at Boston University’s Questrom School of Business.
For example, he says, IT teams working with these iterative methodologies might find that their new software features and functions work at each individual step, but then discover that the application, when fully deployed, doesn’t work well as a whole. “In some sense what you’ve done is escalate the point at which systems fail to the higher levels,” he adds, comparing the scenario to doctors “curing” individual symptoms in a sick patient while failing to treat the larger condition causing all the symptoms.
Meanwhile, the breakdown of silos between business units and IT can add to the risk of project failure as business unit executives embrace technologies and seek to capitalize on the latest and greatest, regardless of whether they fully understand or thoroughly vet their options.
Consider that more and more technology spend flows from business unit budgets rather than IT’s coffers, says Chris Curran, a PwC principal and chief technologist for the firm’s U.S. advisory practice. He notes that PwC’s 2015 Global Digital IQ Survey found that 68 percent of technology spending falls outside the IT budget.