One of the many advantages of agile development is the efficiency it produces by squeezing out the cost of interaction between the various members (roles) of a development team. For example, QA doesn't need to file a bug, wait for it to be fixed, and then verify it; they can walk up to the dev and get it fixed, or get involved early enough to avoid the bug altogether. This is achieved by getting rid of most of the process around role interactions and encouraging informal interaction instead. Getting the problem solved is the primary focus, rather than role preservation, and hence efficiency improves many-fold.
Many groups in our company follow agile methodology for software development, and we have seen these benefits for those groups; quality gets a boost too. However, this poses a problem when performance review time comes. Traditional performance review models (including the one we use) focus on individual performance, whereas agile methods encourage team performance, and it is often impossible to extract individual contributions from a team's result. In such cases, it may not be possible (or fair) to try to work out who did what in order to gather data for an individual performance review.
According to Katzenbach and Smith in The Wisdom of Teams (considered the classic book on teams; you can find my commentary on the book here: Part-I, Part-II, and Part-III), performance is the single most important driver of excellent team output, and a team needs to be measured as a single unit in order to be successful. This means that most traditional performance review models will cause problems wherever team performance is important and encouraged (almost every company worth its salt these days!). Given that performance reviews determine most rewards, raises, and perks, this mismatch can actually make a team fail. Here is one example of how: getting an "exceeds expectations" rating in my performance review requires me to make sure there is enough evidence of my good work; so if I am in QA, I had better register all the defects in the tracking system rather than get them quietly fixed by the devs.
One solution is to do a team performance review first (based on the team's results and competencies) and then make it count for a predefined percentage of each individual's review. I say some percentage, and not 100%, because there are other factors that genuinely need individual review, such as individual competencies, training, and tasks undertaken. The actual split should of course depend on the situation, but the team-performance weighting should be significantly above 50% if team performance is to be encouraged.
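The blend described above is just a weighted average, and a small sketch makes the arithmetic concrete. This is only an illustration of the idea, not any actual review system; the function name, the 0.75 default weight, and the sample scores are all hypothetical.

```python
def blended_score(team_score: float, individual_score: float,
                  team_weight: float = 0.75) -> float:
    """Blend a team review score with an individual review score.

    team_weight is the fraction of the final rating driven by the
    team's result; per the suggestion above, keep it above 0.5 when
    team performance is what you want to encourage. (The 0.75 default
    here is an arbitrary example value.)
    """
    if not 0.0 <= team_weight <= 1.0:
        raise ValueError("team_weight must be between 0 and 1")
    return team_weight * team_score + (1.0 - team_weight) * individual_score


# Team rated 4/5, individual factors rated 2/5, 75% team weighting:
# 0.75 * 4 + 0.25 * 2 = 3.5
print(blended_score(4.0, 2.0, 0.75))
```

The point of keeping the weight above 50% is visible in the example: the strong team result pulls the weaker individual component up, so contributing quietly to the team's outcome is not penalized.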
I am curious to know whether you have faced this problem, either as a reviewer or a reviewee. How have you coped with it?