(Content from softwarePlanner.com)
Tips for Collecting Customer Requirements
Customer requirements are often stated vaguely, and sometimes they are not documented at all. When this happens, customers tend to interpret the requirements broadly, while developers interpret them very narrowly.
For example, a vague customer requirement may be to create a logon page for your application. The developer may be thinking that the end user will enter their email address and password, have this information authenticated, then allow the end user to log on to the system. The customer, on the other hand, may be thinking:
As you can see, the effort for creating a simple logon page (entering of email address and password and authenticating it) is much less than the effort for creating the bells and whistles the client envisions. Unless the exact requirements are documented and agreed upon, the project can slip due to the additional effort the client envisioned.
Below are the keys to successfully collecting customer requirements:
Tips for Managing Risk in Software Projects
To deliver software on time and on budget, successful project managers understand that software development is complex, and that unexpected things will happen during the project life cycle. There are 2 types of risks that may affect your project during its duration:
Risks you know about - There are many risks that you know about and can mitigate. For example, let's assume that you have assembled a team to work on the project, and one of the stellar team members has already scheduled a 3-week vacation just before testing is scheduled, which you agreed to allow. The successful project manager will identify this risk and provide contingency plans to control it.
Risks you don't know about - There are also risks that you don't know about, so a general risk assessment must be done to build time into your schedule for these types of risks. For example, your development server may crash 2 weeks into development and it may take you 3 days to get it up and running again.
The key to managing risks is to build contingency plans for the risks you know about and to build enough time into your project schedule to mitigate the risks you do not know about. Below is a list of the 5 most common scheduling risks in a software development project:
Scope and feature creep - Here is an example: Let's say the client agrees to a requirement for a Logon page. The requirement specifies that the client will enter their userid/password, it will be validated, and entry will be allowed upon successful validation. Simple enough. Then, in a meeting just before coding commences, the client says to your project manager, "I was working with another system last week and they send the client a report each day that shows how many people log in. Since you have that information already anyway, I'm sure it will only take a couple of minutes to automate a report for me that does this." Although this sounds simple to the client, it requires many different things to happen. First, the project manager has to amend the requirements document. Then the programmer has to understand the new requirement. The testing team must build test scenarios for it. The documentation team must now include this report in the documentation. The user acceptance team must plan to test it. So as you can see, a simple request can add days of additional project time, increasing risk.
Gold Plating - Similar to scope and feature creep, programmers can also incur risk by making a feature more robust than necessary. For example, the specification for the Logon page contained a screen shot that showed very few graphics; it was just a simple logon process. However, the programmer decides that it would be really cool to add a Flash-based movie on the page that fades in the names of all the programmers and a documentary on security. This new movie (while cool in the programmer's eyes) takes 4 hours of additional work, and their follow-on tasks are now in jeopardy because they are behind schedule.
Substandard Quality - The opposite of Gold Plating is substandard quality. In the gold plating example, the programmer got behind schedule and desperately needed to catch up. To catch up, the programmer decided to quickly code the next feature and not spend the time testing the feature as they should have. Once the feature went to the testing team, a lot of bugs were found, causing the testing / fix cycle to extend far beyond what was originally expected.
Unrealistic Project Schedules - Many new team members fall into this trap. Project members (project managers, developers, testers, etc.) all get pressure from customers and management to complete things in a certain time frame, within a certain budget. When the time frames are unrealistic for the dictated feature set, some unseasoned team members will bow to the pressure and base their estimates on what they think their managers want to hear, knowing that the estimates are not feasible. They would rather delay the pain until later, when the schedule spirals out of control.
Poor Designs - Many developers and architects rush the design stage in favor of getting the design behind them so the "real" work can begin. A solid design can save hundreds of programming hours: a reusable design allows changes to be made quickly and lessens testing. So the design stage should not be rushed.
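The advice above about building time into the schedule for unknown risks can be sketched in a few lines. This is a minimal illustration, not a method from the article; the task names and the 15% buffer rate are assumptions chosen for the example:

```python
# Minimal sketch: padding a base schedule with a contingency buffer
# for unknown risks. The 15% rate and task names are illustrative
# assumptions, not figures from the article.

def schedule_with_buffer(task_days, buffer_rate=0.15):
    """Return (base_days, buffered_days) for a dict of task estimates."""
    base = sum(task_days.values())
    return base, round(base * (1 + buffer_rate), 1)

tasks = {"requirements": 10, "design": 8, "coding": 25, "testing": 15}
base, buffered = schedule_with_buffer(tasks)
print(f"Base estimate: {base} days; with 15% risk buffer: {buffered} days")
# Base estimate: 58 days; with 15% risk buffer: 66.7 days
```

The buffer rate would be tuned from experience (for example, from past post mortem data); the point is that the contingency is planned explicitly rather than absorbed silently by slipping dates.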
Tips for Providing Weekly Status Reports in Software Projects
To deliver software on time and on budget, successful project managers communicate regularly with all members of the team (management, leaders, testers, programmers, clients, etc.). Creating weekly status reports is a great way to ensure that everyone is on the same page; it also benefits the team by prompting them to step back and analyze how the project is progressing.
The key to great communication is to collaborate with team members each day and to create weekly status reports to summarize your progress and to identify issues that need resolution. Below is a list of tips for making weekly status reports meaningful:
Use the Red/Yellow/Green Metaphor - Status reports are designed to show accomplishments and to identify areas that need attention. Using a Red/Yellow/Green metaphor is a great way to separate those areas of the status report:
Red - List critical issues that are keeping you from delivering on schedule and on budget. These items need management help in resolving. Example: You cannot begin testing because management has not approved the purchase of your test server.
Yellow - List issues that management should be aware of but that do not keep you from delivering on schedule and under budget. These items may not need management help in resolving. Example: Your testing team is running 2 days behind schedule, but has agreed to work the weekend to catch up.
Green - List accomplishments or progress made on deliverables for the week. Example: Provide a bulleted list of deliverables that should have been achieved this week, along with their status.
Identify Next Week's Priorities - Identify next week's tasks and priorities so that everyone knows what is expected of them in the upcoming week. Other teams can also use this to ensure that any tasks that depend on them are aligned and ready to be worked on.
Provide Metrics - Providing metrics allows your team to step back and see the bigger picture. Typical metrics include defect metrics (number of defects by status/severity/priority, etc.) and test case metrics (number of test cases run/passed/failed, etc.). They could also include metrics regarding deliverables and your risk management efforts.
Discussion Forums - Create a discussion forum for your team members. Post the weekly status reports in the forum so that they are automatically distributed via email and a history is kept of each weekly status.
Template - We have created a template we use for the weekly status report. To download a copy click here.
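The Red/Yellow/Green grouping described above is simple enough to automate once status items are collected from the team. Below is a minimal sketch; the function name and the item texts (borrowed from the examples above) are assumptions for illustration:

```python
from collections import defaultdict

# Minimal sketch: grouping weekly status items under the
# Red/Yellow/Green metaphor. The sample items are illustrative.

def build_status_report(items):
    """items: list of (color, text) pairs; returns a grouped report string."""
    groups = defaultdict(list)
    for color, text in items:
        groups[color].append(text)
    lines = []
    for color in ("Red", "Yellow", "Green"):
        lines.append(f"{color}:")
        lines.extend(f"  - {text}" for text in groups.get(color, ["(none)"]))
    return "\n".join(lines)

print(build_status_report([
    ("Green", "Logon page delivered and unit tested"),
    ("Yellow", "Testing 2 days behind; team will work the weekend"),
    ("Red", "Test server purchase still awaiting management approval"),
]))
```

Posting the generated text to the team's discussion forum each week would give the email distribution and history the tips above call for.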
Tips for Creating Solid Test Designs
To deliver software on time and on budget, project managers must be able to understand the testing effort to adequately estimate the project. Once solid customer requirements have been created (see our prior newsletter for tips on collecting solid requirements) and a solid detail design has been done (see last month's newsletter for tips on creating solid detail designs), the test team leader should create a test plan that explains the testing strategy. The most reliable way to do this is to create a "Test Design" document.
The test design document allows your testing team to thoroughly think through the testing approach, and to determine the effort involved in providing adequate test coverage for each functional specification item. Below are the keys to successfully creating test designs (see next section for a template to get you started):
Tips for Releasing Software for Customer Testing
Once your testing team has thoroughly tested your software, it is time for the customer to test it before moving the software into production. This is referred to as the "User Acceptance Test" phase of the software lifecycle. It is an important phase, as it is the first opportunity for the end clients to work with your software. A well-organized User Acceptance Test can yield many rewards:
Defect Discovery - The customers may use your software a little differently than the developers and testers did during the development phase. This can bring defects to the surface that you would not have caught until implementation.
Customer Buy In - Since the customer has an active role in testing the software, they become a champion for the software release. If done properly, they will be excited about the new release and begin telling others about its merits.
Customer Approval - By including the customers in final testing, they will be more likely to quickly approve the software for release to production once the testing phase is complete.
The key to a successful User Acceptance Test phase is to have a very organized plan for conducting the testing. Below is a list of 5 Tips for conducting successful User Acceptance Tests:
Set Expectations - Educate the customer, letting them know that the goal of User Acceptance Testing is to find defects before the software is implemented. So finding defects is a good thing and should be encouraged.
Identify Defect Resolution Procedures - As defects are found, you must have a documented strategy for allowing the client to report defects and to review the status of each defect. Using products like Defect Tracker (www.DefectTracker.com) or Software Planner (www.SoftwarePlanner.com) allows customers to submit support tickets on-line and check the status of the tickets.
Drop Schedule - As defects are fixed, you should have a "Drop Schedule" for new releases. For example, during the User Acceptance Test phase, you may release a new copy of the software each Wednesday for your customers to test. This allows the customer to rely on a specific time table for new releases so that they can re-test defects that were previously fixed.
Document Current Defects and Testing Statistics - Before beginning User Acceptance Testing, you may have some low priority defects that have not been fixed. Let the customer know what those defects are so that if they encounter them, they will not report them again. Another good approach is to supply the customer with statistics that show how many test cases were run during your testing and how many defects came out of that effort. Each week, do weekly status reports for your customer, showing how many defects have been found by their efforts and how many defects are outstanding.
Create a User Acceptance Testing Document - Prior to beginning User Acceptance Testing, create a "User Acceptance Testing Release document." This document explains the plan for User Acceptance Testing, and provides a conduit for a successful testing phase. We have created a template that you can use for the document, download it by clicking here.
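The weekly "Drop Schedule" described above can be generated programmatically so that the customer always knows the next release date. A minimal sketch, assuming weekly drops on a fixed weekday; the start date and number of drops are illustrative:

```python
from datetime import date, timedelta

# Minimal sketch: generating a weekly "drop schedule" of release
# dates on a fixed weekday. Dates below are illustrative.

def drop_schedule(start, drops, weekday=2):  # weekday=2 is Wednesday
    """Return `drops` dates on the given weekday, on or after `start`."""
    days_ahead = (weekday - start.weekday()) % 7
    first = start + timedelta(days=days_ahead)
    return [first + timedelta(weeks=i) for i in range(drops)]

for d in drop_schedule(date(2024, 6, 3), 4):  # starting from a Monday
    print(d.isoformat())
# 2024-06-05, 2024-06-12, 2024-06-19, 2024-06-26 -- all Wednesdays
```

Publishing this list in the User Acceptance Testing Release document gives customers the predictable timetable the tip above recommends for re-testing previously fixed defects.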
Tips for Conducting Project Post Mortems
Very few projects go as planned. Many projects encounter problems that must be corrected and a few lucky projects go smoother than planned. Regardless of how successful or disastrous a project is, it is important to review the project in detail once the project is over. This allows your team to figure out what things were done well and to document the things that need improvement. It also aids in building a knowledge base that teams coming behind you can review to ensure they get the most out of their upcoming projects.
The key to successful projects is to learn from past mistakes. Below is a list of 5 Tips for conducting successful Post Mortem reviews:
Plan Your Post Mortem Review - Upon completion of a project, the Project Manager should conduct a "Post Mortem" review. This is where the Project Manager invites all the major players of the team (Analysts, Lead Programmers, Quality Assurance Leaders, Production Support Leaders, etc) to a meeting to review the successes and failures of the project.
Require Team Participation - Ask the attendees to bring a list of 2 items that were done well during the project and 2 things that could be improved upon.
Hold the Post Mortem Review Meeting - Go around the table and have each person discuss the 4 items they brought to the meeting. Keep track of how many duplicate items you get from the team members. At the end of the round-table discussion, you should have a count of the most popular items that were done well and the most agreed-upon items that need improvement. Discuss the top 10 success items and the top 10 items that need improvement.
List Items Done Well and Things Needing Improvement - Once the 10 success items and 10 improvement items are listed, discuss specific things that can be done to avoid the improvement items in the next release. If some items need more investigation, assign specific individuals to find solutions.
Create a Post Mortem Report - The best way to keep this information organized is to create a "Post Mortem" report, where you document your findings. Send the Post Mortem report to all team members. Before team members embark on their next project, make sure they review the Post Mortem report from the prior project to gain insight from the prior project. We have created a template that you can use for the document, download it by clicking here.
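The round-table tally described above (counting how many attendees raised the same item) can be sketched with a simple counter. The feedback strings below are invented for illustration:

```python
from collections import Counter

# Minimal sketch: tallying duplicate feedback items across attendees
# to surface the most agreed-upon successes and improvements.
# The sample feedback is illustrative.

def tally_feedback(responses):
    """responses: one list of items per attendee; returns items by count."""
    counts = Counter(item for items in responses for item in items)
    return counts.most_common()

feedback = [
    ["good code reviews", "late requirements"],
    ["good code reviews", "test server delays"],
    ["late requirements", "good code reviews"],
]
for item, count in tally_feedback(feedback):
    print(f"{count}x {item}")
# 3x good code reviews, 2x late requirements, 1x test server delays
```

The sorted tally gives the scribe a ready-made "top 10" list for the Post Mortem report.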
Tips for Minimizing Software Defects via Inspections
Many of us have experienced projects that drag on much longer than expected and cost more than planned. Most often, this is caused either by inadequate planning (requirements collection and design) or by an inordinate number of defects found during the testing cycle.
A major ingredient to reducing development life cycle time is to eliminate defects before they happen. By reducing the number of defects that are found during your quality assurance testing cycle, your team can greatly reduce the time it takes to implement your software project.
The key to reducing software defects is to hold regular inspections that catch problems early, before they surface in testing. Below is a list of 5 Tips for Reducing Software Defects:
Conduct Requirement Walkthroughs - The best time to stop defects is before coding begins. As the project manager or requirements manager begins collecting the requirements for the software, they should hold meetings with two or more developers to ensure that the requirements are not missing information and are not flawed from a technical perspective. These meetings can bring to the surface easier ways to accomplish the requirement and can save countless hours in development if done properly. As a rule of thumb, the requirements should be fully reviewed by the developers before they are signed off.
Conduct Peer Code Reviews - Once coding begins, each programmer should be encouraged to conduct weekly code reviews with their peers. The meeting is relatively informal, where the programmer distributes source code listings to a couple of his/her peers. The peers should inspect the code for logic errors, reusability and conformance to requirements. This process should take no more than an hour and if done properly, will prevent many defects that could arise later in testing.
Conduct Formal Code Reviews - Every few weeks (or before a minor release), the chief architect or technical team leader should do a formal inspection of their team's code. This review is a little more formal, where the leader reviews the source code listings for logic errors, reusability, adherence to requirements, integration with other areas of the system, and documentation. Using a checklist will ensure that all areas of the code are inspected. This process should take no more than a couple of hours for each programmer and should provide specific feedback and ideas for making the code work per the design.
Document the Results - As inspections are held, someone (referred to as a scribe) should attend the meetings and make detailed notes about each item that is found. Once the meeting is over, the scribe will send the notes to each team member, ensuring that all items are addressed. The scribe can be one of the other programmers, an administrative assistant, or anyone on the team. The defects found should be logged in your defect tracking system and should note the phase of the life cycle in which each defect was found.
Collect Metrics - Collect statistics that show how many defects (along with severity and priority) are found in the different stages of the life cycle. The statistics will normally show over time that when more defects are resolved earlier in the life cycle, the length of the project decreases and the quality increases.
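As one way to present such metrics, defects logged with the phase in which they were found can be summarized by phase. This is a minimal sketch; the phase names, sample defects, and function shape are assumptions for illustration:

```python
from collections import Counter

# Minimal sketch: summarizing logged defects by the life-cycle phase
# in which they were found. The sample defects are illustrative.

def phase_summary(defects):
    """defects: list of (phase, severity); returns {phase: (count, percent)}."""
    counts = Counter(phase for phase, _severity in defects)
    total = sum(counts.values())
    return {phase: (n, round(100 * n / total)) for phase, n in counts.items()}

logged = [
    ("requirements walkthrough", "high"), ("peer code review", "medium"),
    ("peer code review", "low"), ("testing", "high"), ("testing", "medium"),
]
for phase, (count, pct) in phase_summary(logged).items():
    print(f"{phase}: {count} defects ({pct}%)")
```

Tracked over several releases, a report like this can show whether a growing share of defects is being caught in walkthroughs and code reviews rather than in testing.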