Tuesday 10 February 2015

Delivering constructive review feedback to technically superior employees


End of year reviews have either recently been completed or are in the process of being finalized. As a peer, scrum-master or manager you have probably been involved in delivering some form of feedback to the technical people in your team. In software development, appraisals can quickly be dismissed as a box-ticking exercise for management, but it doesn't have to be this way. Camille Fournier wrote a blog post earlier today on this very subject, in which she emailed her team with a justification of why the process is important. The process can be even harder if you are giving feedback to someone who has more experience or more specialist technical skills than you. Here are some tips I have found help in constructing a useful, engaging appraisal process, one that achieves the two main goals of a review: highlighting achievements and identifying areas to work on in the new year. This is useful both for people providing feedback on their peers and for those managing others.

1. The review process is not one way!
Firstly, the review process should be a two-way process. If you are a manager, you are not just looking at a developer's year and commenting on it; this is also their chance to comment on your performance in relation to their role. For example, they may think you are good at dealing with impediments for the good of the team, but find that you are sometimes a blocker between the development team and the end customer. This kind of feedback is invaluable and you should actively encourage it.
2. Include anonymous peer reviews in the process.
Secondly, building on point 1, the review process should not just be a two-way process – it should include peers. One person telling another they are good or bad at a specific thing can lead to confrontation, or to the person being appraised not believing the feedback; if multiple team members independently mention the same points, however, the feedback is much harder to disagree with. A process which has worked well for me is for each team member to nominate 3-5 peers who can provide anonymous feedback on their performance. The appraiser collates the feedback to make it anonymous and provides it before the review starts, giving the person time to read it and take it in. Feedback from peers will generally resonate better than manager feedback, as people often work far more closely with their peers than with a manager. This also allows you to collate both technical and non-technical feedback from various roles within the company, removing any pressure to comment on someone's code if, for example, you are not a developer yourself.
3. Provide positive and negative feedback.
The third main point is that you need to provide both positive and negative (constructive…) feedback. It is important to celebrate the successes of the previous year, but even if someone has had an exceptional year, you still need to give them something to work on in the next one. The best employees will want constructive feedback so they can improve their performance – we are all professionals after all, and the software development industry demands continuous self-improvement.
The biggest failing I see in feedback processes is when peers only provide glowing feedback for each other. Whilst this is great for the review meeting and egos get massaged, it is not in the best interests of the person being appraised. Getting focussed feedback on areas to improve is the best thing in the long run, especially if multiple people pick up on the same issue. For junior members of the team giving feedback on someone with superior technical skills, or for managers who do not have the same level of speciality in a technology, this can be challenging and intimidating. However, in my experience, any honest feedback will be well received and will help you gain respect amongst your peers.
4. Regular reviews throughout the year.
The most common cause of uncomfortable reviews is when they only happen once a year. One-to-ones should take place throughout the year and should remove any opportunity for surprises when it comes to the review itself. Is it really fair to land negative feedback after 12 months when you could have mentioned it in January and resolved the issue immediately?
5. The outcome of the review is not to encourage someone to aim for a managerial role – there should be a technical career path!
In software development there is a tendency to encourage our best engineers to progress to a point where their technical prowess is no longer used, as they move into a management role – which is simply absurd. In reviews, the person being appraised may feel they have to say they want to work towards a management or scrum-master role, purely to show ambition, even though it may not be the best thing for them. A good company will have a technical route for employees as well as a managerial route. Just because you are the best developer in the company doesn't mean you can't improve and do more from a technical point of view. Make sure there is a career ladder that people who don't want to go into management can follow. Managers don't need to be paid more than the best developers! Keeping your best developers developing, if that is what they want to do, will ultimately lead to a better product. Erik Dietrich describes this common problem in a great blog post.

Saturday 7 February 2015

Kick-starting a development project with a solid testing approach - a 5 step process to ensure quality

Ask a developer how they feel about testing and you will elicit a huge spectrum of opinions:
  • "It's someone else's responsibility"
  • "Unit testing is the way forward"
  • "Manual testing gets the best results"
  • "You can't make the software better through testing!"
These are all things I've heard people say when talking about testing. It is such a broad subject that we tend to focus on one very specific aspect of it (e.g. unit tests or manual tests) and can miss the big picture. When kick-starting a new product or project, it's vital that you think about the various aspects of testing up front – here's my 5 step approach to ensure you have quality software right from the beginning:
 
  1. Test input early in the software lifecycle. You need to think about testing as early in the project as you can. When looking at designs, for example, you should think about negative tests and edge cases to help guide your design decisions (the first sketch after this list shows the kind of tests worth outlining at this stage). If you have testers within your team, great – get them involved in the design; if you don't have test resource, the developers need to take on this mantle (and they should do so even when testers are involved). Testing then happens as the software is developed, with close interaction between testers and developers and quick feedback loops. This is a very agile approach which works really well; leaving testing until the last minute just doesn't work. Getting testers involved early also allows them to start writing test scripts, which is important for step 3.
  2. Automated test environment setups. A huge mistake is leaving the test team to spend hours setting up test environments for every set of tests they run. This is such a waste – for every 1 hour spent testing you can easily burn 7 hours on server and data setup. Investment in automated deployments for test purposes early on is time well spent (the second sketch after this list shows one shape this can take). There are also some really good by-products of emphasising this at the start of a project: you start to think about how the software will be deployed, how it might be upgraded, how you load data into the system, and how long installation takes. You will want deployment to be as efficient and quick as possible, so any benefits your test team gain from this, your customers will gain when your product goes out the door.
  3. Manual test scripts. In modern software development people sometimes baulk at the idea not just of manual tests, but of writing test scripts for them. However, there is no substitute for getting a real person to do the testing: they find things automation can't and they pick up usability issues. By writing test scripts at the design stage, the test team can test the design or mock-ups before any software is developed, so gaps in the design get picked up early. The test scripts should be well structured and repeatable, because they can then be used as the basis for automated test scripts (see the third sketch after this list). A huge by-product of well-written test scripts is that you can scale up your test team when you need to (e.g. before release) with people who are unfamiliar with the product, as the tests are detailed enough for them to step through. You can scale up using test resource from other teams in your organization, through outsourcing, or even by getting your development team to run through the tests.
  4. A path to automation. If done correctly, the manual test scripts give you a set of repeatable steps that are exercised with every release of the software, and the automated environment setups give you a one-click way to stand up a system to run them against. Automate those manual tests and marry them up with the automated deployment, and you have a quick and easy path to good automation coverage. You don't need to automate everything – in fact I'd just automate the core 20-50% of your product; the time saved in the long term will be enormous. Every time a developer commits, you can spin up a new test environment and run your automated tests; if the tests fail, the test team don't need to touch that build at all and so waste no time on it. Efficiency starts to go through the roof.
  5. Regular bug purges. A bad habit to fall into is leaving bugs for later because you are busy adding a great new feature; this ends up with a big backlog of bugs and a lot of technical debt. If this is a long-term project, avoid it at all costs. There are various approaches to keeping your bug count low: rotate a developer onto pure bug fixing every iteration, assign 20% of your iteration capacity to bugs, or make every 3rd or 4th iteration purely a bug-fixing iteration. Do whichever of these suits, but make sure you do it. With a good continuous integration system and automated deploys and tests, the chances of bad code and major defects creeping in decrease, which should help keep the count low.
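
To make step 1 concrete, here is a minimal sketch (Python with pytest) of the kind of negative and edge-case tests you might outline while reviewing a design. The validation rule and the validate_username function are hypothetical, purely for illustration:

    import pytest

    # Hypothetical rule agreed at design time: usernames are 3-20
    # characters, alphanumeric, and must start with a letter.
    def validate_username(name):
        if not isinstance(name, str):
            raise TypeError("username must be a string")
        if not 3 <= len(name) <= 20:
            return False
        return name[0].isalpha() and name.isalnum()

    # Positive case: a typical valid username.
    def test_valid_username():
        assert validate_username("alice99")

    # Edge cases: exactly on the length boundaries.
    @pytest.mark.parametrize("name", ["abc", "a" * 20])
    def test_boundary_lengths(name):
        assert validate_username(name)

    # Negative cases: too short, too long, starts with a digit, empty.
    @pytest.mark.parametrize("name", ["ab", "a" * 21, "9lives", ""])
    def test_invalid_usernames(name):
        assert not validate_username(name)

    # Negative case: wrong type should fail loudly, not silently.
    def test_non_string_input_rejected():
        with pytest.raises(TypeError):
            validate_username(None)

Sketching tests like these against a design often surfaces questions (what about unicode? leading whitespace?) before a line of product code is written.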
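For step 2, the second sketch shows one shape the automation can take: a pytest fixture that provisions a clean environment once per test session and tears it down afterwards. The deploy.sh, load_data.sh and teardown.sh scripts are hypothetical stand-ins for whatever your real provisioning tooling is:

    import subprocess
    import pytest

    # Hypothetical wrapper around your real deployment tooling.
    def run_script(script, env_name):
        subprocess.run([script, env_name], check=True)

    @pytest.fixture(scope="session")
    def test_environment():
        """Provision one clean environment per test session, so the
        hours of server and data setup happen once, scripted, rather
        than by hand before every test run."""
        name = "nightly-test-env"
        run_script("./deploy.sh", name)
        run_script("./load_data.sh", name)
        yield name
        run_script("./teardown.sh", name)

    def test_environment_smoke(test_environment):
        # A trivial smoke test proving the automated setup completed;
        # real tests would exercise the deployed system instead.
        assert test_environment == "nightly-test-env"

Because the same scripts can drive customer installs, every minute shaved off here is shaved off your real deployments too.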
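Steps 3 and 4 fit together: a manual test script with numbered steps and explicit expected results maps almost one-to-one onto an automated test. The third sketch illustrates that mapping with a hypothetical login page object:

    # Manual test script "TC-042: user login", written at design time:
    #   Step 2: enter valid credentials, submit   -> user is logged in
    #   Step 3: enter an invalid password, submit -> error is shown
    #
    # LoginPage is a hypothetical stand-in for your real UI driver
    # (e.g. a Selenium page object); the mapping is what matters here.

    class LoginPage:
        VALID = {("alice", "s3cret")}

        def submit(self, user, password):
            return "welcome" if (user, password) in self.VALID else "error"

    def test_tc042_step2_valid_login():
        assert LoginPage().submit("alice", "s3cret") == "welcome"

    def test_tc042_step3_invalid_password():
        assert LoginPage().submit("alice", "wrong") == "error"

Because each manual step already states its input and expected result, automating it is largely a transcription exercise – which is what makes the 20-50% coverage target in step 4 cheap to hit.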
Of course there are various other things that need to be done when thinking about testing a product, and they will vary depending on project size, team size and skillsets. However, following the five steps above as part of your development process will ensure you have a quality-driven approach to your product.
How do you approach testing? Are there any other things that should be done in the initial stages of product development which are big wins? If so, please leave a comment below.

