Thursday, December 29, 2011
Monday, November 14, 2011
“If builders built buildings the way programmers write programs, then the first woodpecker to come along would destroy civilization”
– Gerald M. Weinberg, Weinberg's Second Law
At the end of the day, each depends on the other for success: executives rely on the work accomplished by project management offices, just as project management offices rely on executive support.
O'Brochta and Finch did a great job discussing what a healthy relationship between a PMO and executives should look like. It is good that, in their follow-up piece, they plan to describe specific key performance indicators that a newly established PMO can use to measure itself and ensure alignment with the needs of the organisation.
Thursday, November 10, 2011
Three common mistakes that plague IT projects
I have been involved with IT for several years now. Since my high school years I've seen many IT initiatives fail miserably (disastrous implementations, horrible IT solutions, abandoned initiatives, and so on) and I've seen others be tremendous successes.
I recently came across a blog post by Ty Kiisel about common mistakes that plague IT projects. Looking back, I can identify, in one way or another, with the mistakes he mentions.
For example, he talks about the project manager setting unrealistic deadlines for the team. The author suggests that while some projects require a hard deadline, most don't. In my experience that holds true; at the very least, you shouldn't set unrealistic deadlines, especially when you aren't expected to set them at all.

I remember a time when one of our clients wanted to implement a new 3D scanning system in his manufacturing plant as part of a new quality management control system. The system consisted of a combination of cameras, sensors, and software that took several pictures of an object and compared them against a previously defined "quality" product. The problem was that it was the first time anyone on the team had tried to "cluster" the cameras to shoot at the same time, or to automate the cameras' functions programmatically. The client was very interested in the project because it would increase the speed of quality control without hiring more people. During negotiations, my team leader committed to delivering the finished product on what we thought would be a very tight schedule even if we had known the technology. Once we found out what we were up against, we discovered the deadline was totally unrealistic. Fortunately, the client was gracious enough to accept a delay of over four times the expected delivery time. Lesson: never commit to a deadline (especially an unrealistic one) just to impress people at the start; in the end you will impress them in exactly the opposite way.
Kiisel also talks about risk not being managed, and how ignoring it does not make it go away. I worked with a partner on a significant academic project as part of our thesis: a vehicle traffic simulator running on several computers at the university. We had heard how a recent electrical failure in an adjacent computer lab had fried two research servers used in another group's thesis project, and how that group lost nearly two years of research to data damage. We were aware of the risk and agreed we should take precautionary measures against data loss. We made backups of our data the following week, and then forgot about the matter as time went by. Nearly six months later the same problem happened again, but this time the fried computer was our server. If it hadn't been for a backup made a week earlier by an automated backup system the Research Department had recently installed, we would have lost a tremendous amount of time and data critical to our final project.
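The lesson we took away is that this kind of risk mitigation has to be automated rather than left to memory. As a purely illustrative sketch (the function, paths, and retention policy are my own assumptions, not the system the Research Department actually used), a small script like this could be run nightly from a scheduler:

```python
import shutil
import time
from pathlib import Path

def make_backup(data_dir, backup_root, keep=7):
    """Copy data_dir into a timestamped snapshot under backup_root,
    then prune snapshots beyond the retention window."""
    data_dir, backup_root = Path(data_dir), Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)

    # Timestamped snapshot directory, e.g. 20111110-031500
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / stamp
    shutil.copytree(data_dir, dest)

    # Keep only the most recent `keep` snapshots.
    snapshots = sorted(p for p in backup_root.iterdir() if p.is_dir())
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return dest
```

The point is not the specific script but that the backup runs without anyone having to remember it, which is exactly what saved us.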
Finally, Kiisel talks about the mistake of not involving stakeholders in the project. I have seen this several times in third-party IT projects. For example, I remember how a supposedly high-tech emergency communication system installed in the school where I worked failed miserably. The US Department of State, through the US Embassy in my country, gave a substantial grant to install an emergency communication system in every school classroom. The sponsor unilaterally decided to outsource the project to an external company, which did the installation over a period of two months, working on weekends. The company never interviewed any of the stakeholders (faculty members, support staff, students, etc.) to gain insight into our needs, the stakeholders' technical proficiency, or other factors. The result was a high-tech system so complicated that nobody could actually operate it. The old system (walk to the nearest secretary and notify her, even if she was in a different building) was brought back, and the new system was abandoned, left installed as a symbol of failure. Personally, I think there was more than just bad project management at play here, as I suspect the sponsor had some dubious interest in the company and technology used. That suspicion grew stronger when he was fired soon after the failed implementation.
We have discussed some project failures so far in our course, which have been very telling, and even quite entertaining at times. I would like to share a brief chronicle of a highly successful IT project I have read about recently, IBM Watson.
Watson is a compute cluster built by the firm's DeepQA team, whose mandate is to perform research in artificial intelligence (AI) centring on human-like "open-domain" question answering: the task of a machine answering questions posed in natural human language.
The 90-node, 2,880-core, 80-teraFLOP Watson is among IBM's most impressive projects to date; in February 2011 it demonstrated its capabilities by beating the two top Jeopardy contestants of all time in a two-game match.
This amazing feat was made possible not only by brilliant engineers, architects, and powerful hardware, but by highly organized and effective project managers as well. One of those project managers, Jim De Piante, sourced talent for practice competitions against the supercomputer. Other key sub-projects included hardware delivery and staging, and sample question development. Meanwhile, several other technical and non-technical project managers coordinated the efforts of at least nine other groups working on linguistics, systems, software development, game strategy (for playing Jeopardy), data linking, search, and more. Project managers then had to coordinate all of these complex sub-projects in order to deliver the complete package that is Watson.
The budget was over three million dollars and the project was completed in approximately two years.
In order to ensure the project was completed on time, on specification, and on budget, project managers had to have a solid understanding of the ambitious project goals, resource constraints, software development, supercomputer hardware, human language, and artificial intelligence, all while working with a multicultural, distributed team spread across four countries on a project some thought was impossible. It was an impressive feat indeed.
I was recently involved in the preliminary stages of an IT project at a large organization that aimed to replace the central information-sharing and document-archiving system. The existing system was a ten-plus-year-old mishmash of interdependent modules developed in-house to meet the needs and demands of various stakeholders. The system no doubt needed to be replaced; the challenging question was which new system would provide the same level of functionality and integration with current business processes.
Despite the obvious challenge of finding an appropriate replacement, upper management, under financial and political pressure, committed to an off-the-shelf solution from an American vendor without conducting a functional gap analysis in coordination with the major stakeholders. The scope of the replacement project was therefore limited to getting major stakeholders to support the proposed change, working with the selected vendor to implement the system, and pre-launch training.
The omitted functional gap analysis was a sure recipe for failure. The project planning stage, of which I was part, uncovered several intractable challenges the implementation would face as a direct result of that omission. The standard answer to each newly uncovered challenge quickly became "well, we are already committed to the new system." However, as planning progressed and more challenges surfaced, an unspoken consensus slowly emerged: the planned implementation would run into significant problems, and the selected system would most likely not meet the needs and demands of its users. In my estimation, this state of affairs was a direct consequence of the failure to conduct a proper functional gap analysis before selecting the replacement system.
I suspect that the situation I encountered is not uncommon. I imagine that economic pressures, or the lure of the latest IT tool or fad (cloud computing, for example), frequently motivate managers to commit similar errors in judgement that ultimately jeopardize project success.
In the interest of exploring this suspicion further, I would be happy to hear from other members of the class who have had similar project experiences.
The failure of the License Application Mitigation Project (LAMP):
It was estimated that nearly $40 million had been wasted in this project.
What reasons could lead to this failure?
What we can learn from this failure:
- Recognize and admit the symptoms of failure.
- Accurately identify what is going wrong in the project.
- Select a suitable means of handling the situation, even if that means cancelling the project before costs grow larger still.
The first part of the post is a replication of a post at http://PMToolsThatWork.com/project-management-crisis-hurry-but-do-nothing/ . I will then offer my analysis in the second part.
- Limited IT budgets and resources. Most organizations need to improve the way they use their existing resources in order to maximize productivity. This applies to both people and tools.
- Need for better IT governance (and data for compliance with the Sarbanes-Oxley Act). Many IT organizations lack a consistent, accountable body for decision making. PPM provides a decision-making framework that helps ensure IT decisions are aligned with the overall business strategy; IT participates in setting business goals and directions, establishing standards, and prioritizing investments.
- Need to improve project success rate. According to the latest Standish Group survey, executive support and clear business objectives are among the top ten success factors for application development projects. PPM includes approaches for achieving both of these requirements.
- Closer alignment of IT with business: With an easily digestible, holistic view of their entire project portfolio, executives and managers can more readily understand where IT dollars are being spent and which projects continue to be worthwhile.
- Better IT governance: PPM helps managers monitor project progress in real time and provides detailed data to help satisfy Sarbanes-Oxley Act compliance requirements.
- Cost reductions and productivity increases: PPM helps managers identify redundancies and allocate resources appropriately; it enables them to make better IT staffing and outsourcing decisions, and to spot opportunities for asset reuse.
- Business-based decision making: By viewing projects as they would view components of an investment portfolio, managers can make decisions based not only on projected costs, but also on anticipated risks and returns in relation to other projects/initiatives. This leads to improvements in customer service and greater client loyalty.
- More predictable project outcomes: A PPM strategy bridges the gap between business managers and the practitioners who deliver the projects; it ensures consistent processes across projects and helps managers assess project status in real-time, predict project outcomes, and identify inter-project dependencies.
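To make the investment-portfolio view above concrete, here is a minimal, hypothetical sketch of ranking projects by risk-adjusted ROI. The project names, figures, and the scoring rule are all my own assumptions for illustration, not part of any particular PPM product:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    cost: float      # projected cost
    benefit: float   # anticipated return
    risk: float      # 0.0 (safe) .. 1.0 (very risky)

def prioritize(projects):
    """Rank projects by ROI discounted by the chance of failure."""
    def score(p):
        roi = (p.benefit - p.cost) / p.cost
        return roi * (1.0 - p.risk)
    return sorted(projects, key=score, reverse=True)

portfolio = [
    Project("CRM upgrade", cost=200_000, benefit=320_000, risk=0.2),
    Project("Data warehouse", cost=500_000, benefit=900_000, risk=0.5),
    Project("Intranet refresh", cost=50_000, benefit=60_000, risk=0.1),
]
ranked = prioritize(portfolio)
```

Even a toy model like this shows the point of the bullet above: the data warehouse has the highest raw ROI, but once risk is factored in the safer CRM upgrade comes out on top.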
Table 2: The PPM solution framework
- Project teams use different vocabularies.
- Team members do not understand the business objectives.
- Projects are not prioritized by ROI potential.
- Software requirements are not traceable to business objectives.
- Method management: A consistent, repeatable process, providing the means for establishing a common vocabulary, instituting a framework for assessing project health, and prioritizing initiatives.
- Idea/innovation management: Support for considering IT project requests in relation to other prospective and current projects (project pipeline management).
- Portfolio management: Ways to align and prioritize proposed initiatives and projects.
- Program management: A holistic view of multiple projects and their inter-dependencies.
- Project management: Support for planning and tracking schedules, establishing milestones and assigning tasks for individual projects, identifying project dependencies, completing Gantt charts and other reporting artifacts.
- Resource management: Ways to plan, balance, and schedule resources for IT initiatives.
- Time management: Means to allocate, track, and compare time spent on project activities.
- Financial management: Help with establishing and managing IT budgets; means for capturing expenses and obtaining approvals.
- Business process modeling: Support for managers to discover, document, and specify current business processes with metrics, and specify new goals and requirements.
- Requirements analysis: Means to analyze financials and prioritize projects according to potential business value, define and prioritize requirements, identify/prepare existing assets for reuse.
- Design and construction: Functionality for rapid integration and/or application development, visual construction and programmatic code generation, unit testing and debugging.
- Testing and deployment: Support for functional and load testing, and for managing testing requirements.
- Change management: Configuration management and change management support to deploy and monitor the solution.
- Maintenance and productivity monitoring: Support for testing and measuring system performance.
- Business metrics collection: Means for collecting and analyzing post-deployment business results. PPM also helps you track metrics for component reuse.
- Setup and monitoring of Service Level Agreements (SLA): Setup for specific IT service levels and metrics collection for response time, service availability, and other parameters.
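As a tiny illustration of the SLA item above (the function and the numbers are hypothetical), service availability is typically reported as the percentage of a period during which the service was up:

```python
def availability(total_seconds, downtime_seconds):
    """Percentage of the period during which the service was up."""
    return 100.0 * (total_seconds - downtime_seconds) / total_seconds

# A 30-day month with 43 minutes of total downtime
# works out to roughly "three nines" of availability.
month = 30 * 24 * 3600
uptime_pct = availability(month, 43 * 60)
```

Collecting these metrics continuously is what lets a PPM tool compare actual response time and availability against the levels the SLA promises.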