PSP Break-down, part 3: The Good Stuff
Welcome to part three of my PSP series. Now that the introductory material is out of the way, it's time to get to the good stuff! This time, I'll discuss the details of the PSP and what you can actually get out of it. Theoretically, that is. The final post in this series will discuss my observations, results, and conclusions.
As I alluded to in a previous post, there are several PSP phases that you go through as part of the learning process. Levels 0 and 0.1 get you used to using a defined and measured process; levels 1 and 1.1 teach planning and estimation; and levels 2 and 2.1 focus on quality management and design. There is also a "legacy" PSP 3 level which introduces a cyclical development process, but that's not covered in the book I used (though there is a template for it in Process Dashboard). The phases are progressive in terms of process maturity: PSP 0 defines your baseline process, and by the time you get to PSP 2.1 you're working with an industrial-strength process.
For purposes of this post, my discussion will be at the level of the PSP 2.1. Of course, there's no rule that says you can't use a lower level, but the book undoubtedly pushes you toward the higher ones. In addition, while the out-of-the-box PSP 2.1 is probably too heavy for most people's taste, there is definitely useful material there that you can adapt to your needs.
One of the big selling points for all that data collection that I talked about in part 1 is to use it for estimation. The PSP uses an evidence-based estimation technique called Proxy-Based Estimation, or PROBE. The idea is that by doing statistical analysis of past projects, you can project the size and duration of the current project.
The gist of the technique is that you create a "conceptual design" of the system you're building and define proxies for that functionality. The conceptual design might just be a list of the proxies such that, "if I had these, I would know how to build the system." A proxy is defined by its type/category, its general size (from very small to very large), and how many items it will contain. In general, you can use anything as a proxy, but in object-oriented languages, the most obvious proxy is a class. So, for example, you might define a proxy by saying, "I'll need class X, which will be a medium-sized I/O class and will need methods for A, B, and C."
By using historical project data, you can create relative size tables, i.e. tables that tell you how many lines of code a typical proxy should have. So in the example above, you would be able to look up that a medium-sized I/O class has, on average, 15.2 lines of code per method, which means your class X will have about 46 lines of code. You can repeat that process for all the proxies defined in your conceptual design to project the total system size. Once you have the total estimated size, PROBE uses linear regression to determine the total development time for the system. PROBE allows for several different ways to do the regression, depending on how much historical data you have and how good the correlation is.
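To make that concrete, here's a minimal sketch of the PROBE flow in Python. All the numbers are hypothetical stand-ins for the relative size tables and project history you'd accumulate yourself; the regression here is a plain least-squares fit of actual hours against estimated size, which is roughly what PROBE does when your data correlate well.

```python
# Relative size table (hypothetical): average LOC per method,
# keyed by (category, relative size). Normally derived from your
# own historical data.
relative_size = {
    ("io", "medium"): 15.2,
    ("calculation", "medium"): 11.6,
}

# Conceptual design: each proxy is (category, relative size,
# estimated number of methods).
proxies = [
    ("io", "medium", 3),           # class X: methods A, B, C
    ("calculation", "medium", 5),  # another hypothetical class
]

# Projected total size: sum over proxies of (LOC/method) * methods.
estimated_size = sum(relative_size[(cat, sz)] * n for cat, sz, n in proxies)

# Historical (estimated size, actual hours) pairs from past projects.
history = [(120, 9.5), (300, 22.0), (210, 16.5), (450, 31.0)]

# Ordinary least-squares regression of actual time on estimated size.
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
beta1 = (sum(x * y for x, y in history) - n * mean_x * mean_y) / (
    sum(x * x for x, _ in history) - n * mean_x ** 2
)
beta0 = mean_y - beta1 * mean_x

projected_hours = beta0 + beta1 * estimated_size
print(f"estimated size: {estimated_size:.0f} LOC")
print(f"projected time: {projected_hours:.1f} hours")
```

The interesting part is that the regression corrects for your personal estimating bias: if you habitually underestimate size, the fitted line absorbs that, and the time projection comes out right anyway.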
As a complement to estimation, the PSP also shows you how to use that data to do project planning. You can use your historical time data to estimate your actual hours-on-task per day and then use that to derive a schedule so that you can estimate exactly when your project will be done. It also allows you to track progress towards completion using earned value.
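The earned-value bookkeeping is simple enough to sketch in a few lines. This is a toy example with made-up tasks and hours: each task's planned value is proportional to its estimated time, you earn a task's full value only when it's completely done, and your measured hours-on-task per day turns the remaining work into a calendar projection.

```python
# Hypothetical task plan: (name, planned hours, done?).
tasks = [
    ("design",        4.0, True),
    ("design review", 1.0, True),
    ("code",          6.0, True),
    ("code review",   1.5, False),
    ("test",          3.5, False),
]

total_planned = sum(h for _, h, _ in tasks)

# Earned value: percent of planned work fully completed.
# Partial credit is deliberately not given -- a task is done or it isn't.
earned = sum(h for _, h, done in tasks if done) / total_planned * 100

# Measured average of focused task hours per day (hypothetical).
hours_per_day = 2.5
remaining = sum(h for _, h, done in tasks if not done)
days_left = remaining / hours_per_day

print(f"earned value: {earned:.0f}%")
print(f"projected days to completion: {days_left:.1f}")
```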
For the higher PSP levels, the focus is on quality management. That, in and of itself, is a concept worth thinking about, i.e. that quality is something that can be managed. All developers are familiar with assuring quality through testing, but that is by no means the only method available. The PSP goes into detail on other methods and also analyzes their efficiency compared to testing.
The main method espoused by the PSP for improving quality is review. The PSP 2.1 calls for both a design review and a code review. These are both guided by a customized checklist. The idea is that you craft custom review checklists based on the kinds of errors you tend to make, and then review your design and code for each of those items in turn. This typically means that you're making several passes through your work, which means you have that many opportunities to spot errors.
The review checklists are created based on an analysis of your defect data. Since the PSP has you capture the defect type and injection phase for each defect you find, it is relatively easy to look at your data and figure out what kind of defects you typically introduce in code and design. You can use that to prioritize the most "expensive" defect areas and develop review items to try to catch them.
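That analysis is little more than a group-by over your defect log. Here's a sketch with an invented log: summing fix time per (defect type, injection phase) surfaces the most expensive categories, which are your best candidates for checklist items.

```python
from collections import Counter

# Hypothetical defect log: (defect type, phase injected, minutes to fix).
defects = [
    ("interface", "code", 12), ("logic", "design", 35),
    ("interface", "code", 8),  ("syntax", "code", 2),
    ("logic", "design", 50),   ("interface", "code", 15),
]

# Total fix time per (type, phase): where would reviews pay off most?
cost = Counter()
for dtype, phase, minutes in defects:
    cost[(dtype, phase)] += minutes

# Most expensive categories first: candidates for checklist items.
for (dtype, phase), total in cost.most_common():
    print(f"{dtype:10s} injected in {phase:6s}: {total:3d} min")
```

With this data, logic defects injected in design would top the list, so the design review checklist gets an item targeting them before anything else.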
As part of design review, the PSP also advocates using standard design formats and using organized design verification methods. The book proposes a standard format and describes several verification methods. Using these can help standardize your review process and more easily uncover errors.
Another big win of the PSP is that it allows you to be objective about changes to your personal process. Because you're capturing detailed data on time, size, and defects, you have a basis for before and after comparisons when adopting new practices.
So, for instance, let's say you want to start doing TDD. Does it really result in more reliable code? Since you're using the PSP, you can measure whether or not it works for you. You already have historical time, size, and defect data on your old process, so all you need to do is implement your new TDD-based process and keep measuring those things. When you have sufficient data on the new process, you can look at the results and determine whether there's been any measurable improvement since you adopted TDD.
The same applies to any other possible change in your process. You can leverage the data you collect as part of the PSP to analyze the effectiveness of a change. So no more guessing or following the trends - you have a way to know if a process works for you.
One of the least touted, but (at least for me) most advantageous aspects of using the PSP is simply that it keeps you honest. You use a process support tool, configured with a defined series of steps, each of which has a script with defined entry and exit criteria. It gives you a standard place in your process to check yourself. That makes it harder to bypass the process by, say, skimping on testing or glossing over review. It gives you a reminder that you need to do X, and if you don't want to do it, you have to make a conscious choice not to do it. If you have a tendency to rush through the boring things, then this is a very good thing.
So that's a list of some of the reasons to use the PSP. There's lots of good stuff there. Some of it you can use out-of-the-box; other parts offer inspiration but will probably require customization before you can take advantage of them. Either way, I think it's at least worth learning about.
In the next and final installment in this series, I'll discuss my experiences so far using the PSP. I'll tell you what worked for me, what didn't, and what to look out for. I'll also give you some perspective on how the PSP fits in with the kind of work I do, which I suspect is very different from what Humphrey was envisioning when he wrote the PSP book.