Agile Testing and Quality Strategies: Discipline Over Rhetoric
We like to say that agile developers are "quality infected" and in many ways that's true. Agilists, at least disciplined ones, strive to validate their work to the best of their ability. As a result they are finding ways to bring testing and quality assurance techniques into their work practices as much as possible. This article overviews the techniques and philosophies which disciplined agile developers are applying in practice, putting them into the context of the agile software development lifecycle.
This article is organized into the following topics:
This section is a brief introduction to agile software development. It is organized into the following sections:
One frustration that many people new to agile have is that there is no official definition of agile software development, although many people will point to the values and principles of the Agile Manifesto. Having said that, my definition of disciplined agile software development is:
An iterative and incremental (evolutionary) approach to software development which is performed in a highly collaborative manner by self-organizing teams within an effective governance framework, with "just enough" ceremony that produces high-quality solutions in a cost effective and timely manner which meets the changing needs of its stakeholders.
These are the criteria that I look for to determine whether a team is taking a disciplined approach to agile development:
I have explored these questions via several surveys, the most recent being the 2013 How Agile Are You? survey.
To truly understand agile testing and quality strategies you must understand how they fit into the overall agile system development lifecycle (SDLC). Figure 1 depicts a high-level view of the agile lifecycle, showing that the construction phase of agile projects is organized into a series of time-boxes called iterations (in the Scrum methodology they are called "Sprints" and some people refer to them as cycles). Although many people will tell you that the agile lifecycle is iterative, this isn't completely true: as you can see, it is really serial in the large and iterative in the small. The serial aspect comes from the fact that there are at least several phases to the delivery lifecycle (Inception, Construction, and Transition) where the nature of the work that you do varies. The implication is that your approach to testing/validation also varies depending on where you are in the lifecycle. As a result it is important to understand each of the high-level activities depicted by this lifecycle:
Figure 3 depicts the V Model for software development, basically a more sophisticated form of the traditional waterfall model. With the V model the work on the left-hand side of the diagram is validated through corresponding activities later in the lifecycle (for example, requirements are validated through acceptance testing, the architecture via integration testing, and so on). Although this approach is better than not testing at all, it proves to be very expensive in practice because of several systemic problems:
Traditional testing professionals who are making the move to agile development may find the following aspects of agile development to be very different than what they are used to:
The agile approach offers many benefits over the traditional V model:
Finally, I just wanted to point out that the results depicted in Figure 4 aren't an anomaly. Various surveys over the years have found that people believe agile teams produce greater quality than traditional teams, provide better stakeholder satisfaction, and achieve greater levels of productivity.
This section provides an overview to agile approaches to requirement elicitation and management. This is important because your approach to requirements goes hand-in-hand with your approach to validating those requirements, therefore to understand how disciplined agile teams approach testing and quality you first need to understand how agile teams approach requirements. Figure 5 depicts a process map of the best practices of Agile Modeling (AM) which address agile strategies for modeling and documentation, and in the case of TDD and executable specifications arguably strays into testing. This section is organized into the following topics:
Agile Modeling’s practice of Active Stakeholder Participation says that stakeholders should provide information in a timely manner, make decisions in a timely manner, and be as actively involved in the development process as possible through the use of inclusive tools and techniques. When stakeholders work closely with the development team it increases the chance of project success by increasing the:
The traditional approach of having stakeholders participate in a requirements elicitation phase early in the project and then go away until the end of the project for an acceptance testing effort at the end of the lifecycle proves to be very risky in practice. People are not very good at defining their requirements up front and as a result with a serial approach to development a significant effort is invested in building and testing software which is never even used once the system is in production. To avoid these problems agilists prefer an evolutionary approach where stakeholders are actively involved, an approach which proves more effective at delivering software that people actually want.
A fundamental agile practice is Prioritized Requirements Stack, called Product Backlog in Scrum. The basic ideas, shown in Figure 6, are that you should implement requirements in prioritized order and let your stakeholders evolve their requirements throughout the project as they learn. The diagram also indicates several advanced agile concepts. First, it's really a stack of work items and not just functional requirements (defect reports also appear on the stack, as you can see in Figure 2, more on this later, and you also need to plan for work such as reviewing artifacts from other teams and taking vacations). Second, to reduce the risks associated with complex work items (not all work items are created equal, after all), you will want to consider modeling a bit ahead whenever a complex work item is an iteration or two away.

Figure 6. Agile requirements change management process.
Figure 7 depicts the project lifecycle of Agile Model Driven Development (AMDD). As you see in Figure 7, during Inception agilists will do some initial requirements modeling with their stakeholders to identify the initial, albeit high-level, requirements for the system. The goal of initial requirements envisioning is to do just enough modeling to identify the scope of the system and to produce the initial stack of requirements which form the basis of your prioritized work item list (it just doesn't magically appear one day, after all). The goal is not to create a detailed requirements specification as that strategy actually increases your project risk in practice.

Figure 7. The Agile Model Driven Development (AMDD) Lifecycle.
Depending on logistics issues (it can be difficult to get all the right people together at roughly the same time) and your organization's ability to make decisions within a reasonable timeframe, Inception may last for a period of several days to several months of calendar time. However, your initial requirements modeling effort should only take up several days of effort during that period. Also, note that there is a bit more to Inception than initial modeling -- the AMDD lifecycle of Figure 7 only depicts modeling activities. An important activity during Inception is garnering initial support and funding for the project, something which requires an understanding of the initial scope. You may have already garnered initial support via your pre-project planning efforts (part of portfolio management), but realistically at some point somebody is going to ask what are we going to get, how much is it going to cost, and how long is it going to take. You need to be able to provide reasonable, although potentially evolving, answers to these questions if you're going to get permission to work on the project. In many organizations you may need to take it one step further and justify your project via a feasibility study.
As you see in Figure 6, agile teams implement requirements in priority order by pulling an iteration's worth of work off the top of the stack. To do this successfully you must be able to accurately estimate the work required for each requirement, then based on your previous iteration's velocity (a measure of how much work you accomplished) you pick that much work off the stack. For example, if last iteration you accomplished 15 points worth of work then the assumption is that, all things being equal, you'll be able to accomplish that much work this iteration. The implication is that at the beginning of each Construction iteration an agile team must estimate and schedule the work that they will do that iteration. To estimate each requirement accurately you must understand the work required to implement it, and this is where modeling comes in. You discuss how you're going to implement each requirement, modeling where appropriate to explore or communicate ideas. This modeling in effect is the analysis and design of the requirements being implemented that iteration. My experience is that a two-week iteration will have roughly half a day of iteration planning, including modeling, whereas for a four-week iteration this effort will typically take a day. The goal is to accurately plan the work for the iteration, identifying the highest-priority work items to be addressed and how you will do so. In other words, to think things through in the short term. The goal isn't to produce a comprehensive Gantt chart, or detailed specifications for the work to be done. The bottom line is that an often neglected aspect of Mike Cohn’s planning poker is the required modeling activities implied by the technique.
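As a rough sketch, the velocity-driven planning described above can be expressed in a few lines of Python. The work item names, point values, and function names here are illustrative assumptions, not from the article:

```python
# Sketch of velocity-based iteration planning: pull work items off the top of
# a prioritized stack until the estimated points reach last iteration's velocity.

from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    points: int  # relative-size estimate, e.g. from planning poker

def plan_iteration(stack, velocity):
    """Pull work off the top of the prioritized stack, up to the velocity."""
    planned, remaining, budget = [], list(stack), velocity
    # Stop at the first item that doesn't fit the remaining budget, since we
    # only ever take work from the top of the stack.
    while remaining and remaining[0].points <= budget:
        item = remaining.pop(0)
        planned.append(item)
        budget -= item.points
    return planned, remaining

stack = [WorkItem("Edit student", 8), WorkItem("Fix defect #42", 3),
         WorkItem("Print transcript", 5), WorkItem("Archive records", 13)]
planned, remaining = plan_iteration(stack, velocity=15)
print([w.name for w in planned])  # the top-priority items totalling <= 15 points
```

The deferred items stay on the stack in priority order, ready for re-estimation and reprioritization at the next iteration planning session.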
The details of these requirements are modeled on a just-in-time (JIT) basis in model storming sessions during the development iterations. Model storming works like this: you identify an issue which you need to resolve, you quickly grab a few teammates who can help you, the group explores the issue, and then everyone continues on as before. One of the reasons why you model storm is to analyze the details of a requirement. For example, you may be implementing a user story which indicates that the system you’re building must be able to edit student information. The challenge is that the user story doesn't include any details as to what the screen should look like -- in the agile world we like to say that user stories are "reminders to have a conversation with your stakeholders", which in other words says to do some detailed requirements modeling. So, to gather the details you call your product owner over and together you create a sketch of what the screen will look like, drawing several examples until you come to a common understanding of what needs to be built. In other words, you model storm the details.
Non-functional requirements, also known as "technical requirements" or "quality of service" (QoS) requirements, focus on aspects that typically cross-cut functional requirements. Common non-functionals include accuracy, availability, concurrency, consumability/usability, environmental/green concerns, internationalization, operations issues, performance, regulatory concerns, reliability, security, serviceability, and supportability. Constraints, which for the sake of simplicity I will lump in with non-functionals, define restrictions on your solution, such as being required to store all corporate data in DB2 per your enterprise architecture, or only being allowed to use open source software (OSS) which conforms to a certain type of OSS license. Constraints can often impact your technical choices by restricting specific aspects of your architecture, defining suggested opportunities for reuse, and even architectural customization points. Although many developers will bridle at this, the reality is that constraints often make things much easier for your team because some technical decisions have already been made for you. I like to think of it like this: agilists will have the courage to make tomorrow's decisions tomorrow; disciplined agilists have the humility to respect yesterday's decisions as well.
Although agile teams have pretty much figured out how to effectively address functional requirements, most are still struggling with non-functionals. Some teams create technical stories to capture non-functionals in a simple manner, just as they capture functional requirements via user stories. This is great for documentation purposes but quickly falls apart from a management and implementation point of view. The agile requirements management strategy described earlier assumes that requirements are self-contained and can be addressed in a finite period of time, an assumption that doesn't always hold true for non-functionals.
There are several fundamental strategies, all of which should be applied, for addressing non-functional requirements on an agile project:
Figure 8 summarizes some results from Ambysoft’s 2008 Agile Practice and Principles Survey. As you can see, it is quite common for agile teams to do some up-front requirements envisioning, with requirements details emerging over time (via iteration modeling and model storming). Approaches to modeling initial requirements are shown in Figure 9, which summarizes some results from the 2013 Agile Project Initiation Survey. The November 2012 Agile Testing Survey found that although there is a lot of rhetoric around acceptance test-driven development (ATDD), the fact is that it is still just being adopted within organizations. The implication is that requirements are explored via several techniques on agile teams, and rightfully so, because one single strategy is rarely sufficient for enterprise-class situations.

Figure 8. Requirements practices on agile projects.
There are several important implications that agile requirements strategies have for agile testing:
The good news is that agile testing techniques exist which reflect these implications. The challenge is that you need to be willing to adopt them.
To understand how testing activities fit into agile system development it is useful to look at it from the point of view of the system delivery lifecycle (SDLC). Figure 10 is a high-level view of the agile SDLC, indicating the testing activities at various SDLC phases. This section is organized into the following topics:
During Inception, often called "Sprint 0" in Scrum or "Iteration 0" in other agile methods, your goal is to get your team going in the right direction. Although the mainstream agile community doesn't like talking about this much, the reality is that this phase can last anywhere from several hours to several weeks depending on the nature of the project and the culture of your organization. From the point of view of testing the main tasks are to organize how you will approach testing and start setting up your testing environment if it doesn't already exist. During this phase of your project you will be doing initial requirements envisioning (as described earlier) and architecture envisioning. As the result of that effort you should gain a better understanding of the scope, whether your project must comply with external regulations such as the Sarbanes-Oxley act or the FDA's CFR 21 Part 11 guidelines, and potentially some high-level acceptance criteria for your system -- all of this is important information which should help you to decide how much testing you will need to do. It is important to remember that one process size does not fit all, and that different project teams will have different approaches to testing because they find themselves in different situations -- the more complex the situation, the more complex the approach to testing (amongst other things). Teams finding themselves in simple situations may find that a "whole team" approach to testing will be sufficient, whereas teams in more complex situations will also find that they need an independent test team working in parallel to the development team. Regardless, there's always going to be some effort setting up your test environment.
An organizational strategy common in the agile community, popularized by Kent Beck in Extreme Programming Explained (2nd Edition), is for the team to include the right people, so that they have the skills and perspectives required for the team to succeed. To successfully deliver a working system on a regular basis, the team will need to include people with analysis skills, design skills, programming skills, leadership skills, and yes, even people with testing skills. Obviously this isn't a complete list of skills required by the team, nor does it imply that everyone on the team has all of these skills. Furthermore, everyone on an agile team contributes in any way that they can, thereby increasing the overall productivity of the team. This strategy is called "whole team".
With a whole team approach testers are “embedded” in the development team and actively participate in all aspects of the project. Agile teams are moving away from the traditional approach where someone has a single specialty that they focus on -- for example Sally just does programming, Sanjiv just does architecture, and John just does testing -- to an approach where people strive to become generalizing specialists with a wider range of skills. So, Sally, Sanjiv, and John will all be willing to be involved with programming, architecture, and testing activities and more importantly will be willing to work together and to learn from one another to become better over time. Sally's strengths may still lie in programming, Sanjiv's in architecture, and John's in testing, but that won't be the only things that they'll do on the agile team. If Sally, Sanjiv, and John are new to agile and are currently only specialists that's ok, by adopting non-solo development practices and working in short feedback cycles they will quickly pick up new skills from their teammates (and transfer their existing skills to their teammates too).
This approach can be significantly different than what traditional teams are used to. On traditional teams it is common for programmers (specialists) to write code and then "throw it over the wall" to testers (also specialists) who then test it and report suspected defects back to the programmers. Although better than no testing at all, this often proves to be a costly and time-consuming strategy due to the hand-offs between the two groups of specialists. On agile teams programmers and testers work side-by-side, and over time the distinction between these two roles blurs into the single role of developer. An interesting philosophy in the agile community is that real IT professionals should validate their own work to the best of their ability, and strive to get better at doing so over time.
The whole team strategy isn't perfect, and there are several potential problems:
Luckily the benefits of the whole team approach tend to far outweigh the potential problems. First, whole team appears to increase overall productivity by reducing, and often eliminating, the wait time between activities. Second, there is less need for paperwork such as detailed test plans due to the lack of hand-offs between separate teams. Third, programmers quickly start to learn testing and quality skills from the testers and as a result do better work to begin with -- when developers know that they'll be actively involved with the testing effort they are more motivated to write high-quality, testable code in the first place.
The whole team approach works well in practice when agile development teams find themselves in reasonably straightforward situations. However, teams working at scale in complex environments will find that a whole team approach to testing proves insufficient and that an independent test team is also needed. In such situations this test team will perform parallel independent testing throughout the project and will typically be responsible for the end-of-lifecycle testing performed during the release/transition phase of the project. The goal of these efforts is to find out where the system breaks (whole team testing often focuses on confirmatory testing, which shows that the system works) and report such breakages to the development team so that they can fix them. This independent test team will focus on more complex forms of testing which are typically beyond the ability of the "whole team" to perform on their own; more on this later.
Your independent test team will support multiple project teams. Most organizations have many development teams working in parallel, often dozens of teams and sometimes even hundreds, so you can achieve economies of scale by having an independent test team support many development teams. This allows you to minimize the number of testing tool licenses that you need, share expensive hardware environments, and enable testing specialists (such as people experienced in usability testing or investigative testing) to support many teams.
It's important to note that an agile independent test team works significantly differently than a traditional independent test team. The agile independent test team focuses on a small minority of the testing effort, the hardest part of it, while the development team does the majority of the testing grunt work. With a traditional approach the test team would often do both the grunt work as well as the complex forms of testing. To put this in perspective, the ratio of people on agile developer teams (including anyone involved in whole team testing) to people on the agile independent test team will often be 15:1 or 20:1 whereas in the traditional world these ratios are often closer to 3:1 or 1:1 (and in regulatory environments may be 1:2 or more).
At the beginning of your project you will need to start setting up your environment, including setting up your work area, your hardware, and your development tools (to name a few things). You will naturally need to set up your testing environment, from scratch if you don't currently have such an environment available, or by tailoring an existing environment to meet your needs. There are several strategies which I typically suggest when it comes to organizing your testing environment:
Agile development teams generally follow a whole team strategy where people with testing skills are effectively embedded into the development team and the team is responsible for the majority of the testing. This strategy works well for the majority of situations, but when your environment is more complex you'll find that you also need an independent test team working in parallel to the development team and potentially performing end-of-lifecycle testing as well. Regardless of the situation, agile development teams will adopt practices such as continuous integration, which enables them to do continuous regression testing, either with a test-driven development (TDD) or a test-immediately-after approach.
Continuous integration (CI) is a practice where at least once every few hours, preferably more often, you should:
Your integration job could run at specific times, perhaps once an hour, or every time that someone checks in a new version of a component (such as source code) which is part of the build.
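The CI cycle described above can be sketched as a simple job runner. This is an illustrative sketch, not a real CI server: each step here is a callable returning True on success, whereas in practice each step would invoke your version control, build, and xUnit-style regression-test tools, triggered on a schedule or on check-in:

```python
# Minimal sketch of a continuous integration job: run the steps in order and
# stop at the first failure so the breakage is reported immediately.

def run_ci_job(steps):
    """Run (name, step) pairs in order; report the first failing step."""
    for name, step in steps:
        if not step():
            return f"FAILED: {name}"
    return "SUCCESS"

# Stand-in steps; real ones would shell out to VCS, compiler, and test runner.
steps = [
    ("update workspace from version control", lambda: True),
    ("build the system",                      lambda: True),
    ("run regression test suite",             lambda: True),
]
print(run_ci_job(steps))  # SUCCESS
```

Stopping at the first failing step is the key behavior: a broken build is reported within minutes of the check-in that caused it, while the change is still fresh in the author's mind.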
Advanced teams, particularly those in an agility at scale situation, will find that they also need to consider continuous deployment. The basic idea is that you automate the deployment of your working build, some organizations refer to this as promotion of their working build, into other environments on a regular basis. For example, if you have a successful build at the end of the week you might want to automatically deploy it to a staging area so that it can be picked up for parallel independent testing. Or if there's a working build at the end of the day you might want to deploy it to a demo environment so that people outside of your team can see the progress that your team is making.
There are two levels of TDD:
With ATDD you are not required to also take a developer TDD approach to implementing the production code, although the vast majority of teams doing ATDD also do developer TDD. As you see in Figure 12, when you combine ATDD and developer TDD the creation of a single acceptance test in turn requires you to iterate several times through the "write a test, write production code, get it working" cycle at the developer TDD level. Clearly to make TDD work you need to have one or more testing frameworks available to you. For acceptance TDD people will use tools such as Cucumber, FitNesse, or RSpec, and for developer TDD agile software developers often use the xUnit family of open source tools, such as JUnit or VBUnit. Without such tools TDD is virtually impossible. The greatest challenge with adopting ATDD is lack of skills amongst existing requirements practitioners, yet another reason to promote generalizing specialists within your organization over narrowly focused specialists.
Although many agilists talk about TDD, the reality is that far more people seem to be doing "test after" development, where they write some code and then write one or more tests to validate it. TDD requires significant discipline, in fact it requires a level of discipline found in few coders, particularly coders who follow solo approaches to development instead of non-solo approaches such as pair programming. Without a pair keeping you honest, it's pretty easy to fall back into the habit of writing production code before writing testing code. If you write the tests very soon after you write the production code, in other words "test immediately after", it's pretty much as good as TDD; the problem occurs when you write the tests days or weeks later, if at all.
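To make the developer TDD rhythm concrete, here is a minimal red-green cycle sketched with Python's unittest, a member of the xUnit family mentioned above. The `full_name` function and its tests are illustrative assumptions, not an example from the article:

```python
# One red-green cycle of developer TDD: the tests below were written first
# (and initially failed); the production code is "just enough" to pass them.

import unittest

# Production code, written after the tests were in place:
def full_name(first, last):
    return f"{first} {last}".strip()

class TestFullName(unittest.TestCase):
    def test_combines_first_and_last(self):
        self.assertEqual(full_name("Sally", "Smith"), "Sally Smith")

    def test_handles_missing_last_name(self):
        # This test drove the .strip() call in the production code.
        self.assertEqual(full_name("Sanjiv", ""), "Sanjiv")

# Run the suite programmatically; a CI job would do this on every check-in.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestFullName))
print("tests passed:", result.wasSuccessful())
```

Because each test is written before the code that satisfies it, the suite doubles as an executable specification, and it becomes the regression suite that continuous integration runs on every build.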
The popularity of code coverage tools such as Clover and Jester amongst agile programmers is a clear sign that many of them really are taking a "test after" approach. These tools warn you when you've written code that isn't covered by tests, prodding you to write the tests that you would hopefully have written first via TDD.
The whole team approach to development where agile teams test to the best of their ability is a great start, but it isn't sufficient in some situations. In these situations, described below, you need to consider instituting a parallel independent test team which performs some of the more difficult (or perhaps advanced is a better term) forms of testing. As you can see in Figure 13, the basic idea is that on a regular basis the development team makes their working build available to the independent test team, or perhaps they automatically deploy it via their continuous deployment tools, so that they can test it. The goal of this testing effort is not to redo the confirmatory testing which is already being done by the development team, but instead to identify the defects which have fallen through the cracks. The implication is that this independent test team does not need a detailed requirements specification, although they may need architecture diagrams, a scope overview, and a list of changes since the last time the development team sent them a build. Instead of testing against the specification, the independent testing effort may focus on:
The independent test team reports defects back to the development team, as you see in Figure 13. These defects are treated as a type of requirement by the development team in that they're prioritized, estimated, and put on the work item stack.
Figure A depicts the scaling factors of the Software Development Context Framework (SDCF) and indicates when independent testing is likely to be required. There are several good reasons why you should consider parallel independent testing:
Many development teams may not have the resources required to perform effective system integration testing, resources which from an economic point of view must be shared across multiple teams. System integration tests, for example, often require an expensive environment that goes beyond what an individual project team will have. The implication is that you will need an independent test team, working in parallel to the development team(s), which addresses these sorts of issues.
A poor excuse for adopting independent testing is that your existing quality/testing staff still think, and often prefer to work, in a traditional manner. The real solution is to overcome these cultural challenges and help them to gain the skills and mindset required to work in an agile manner.
Some agilists will claim that you don't need parallel independent testing, and in simple situations this is clearly true. The good news is that it's incredibly easy to determine whether or not your independent testing effort is providing value: simply compare the likely impact of the defects/change stories being reported with the cost of doing the independent testing. In short, whole team testing works well for agile in the small, but for more complex systems and agile at scale you need to be more sophisticated.
Defect management is often much simpler on agile projects when compared to classical/traditional projects for two reasons. First, with a whole team approach to testing when a defect is found it's typically fixed on the spot, often by the person(s) who injected it in the first place. In this case the entire defect management process is at most a conversation between a few people. Second, when an independent test team is working in parallel with the development team to validate their work they typically use a defect reporting tool to inform the development team of what they found. Disciplined agile delivery teams combine their requirements management and defect management strategies to simplify their overall change management process. Figure 14 summarizes this (yes, it's the same as Figure 6) showing how work items are worked on in priority order. Both requirements and defect reports are types of work items and are treated equally -- they're estimated, prioritized, and put on the work item stack.

Figure 14. Agile defect change management process.
This works because defects are just another type of requirement. Defect X can be reworded into the requirement "please fix X". Requirements are also a type of defect, in effect a requirement is simply missing functionality. In fact, some agile teams will even capture requirements using a defect management tool.
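To illustrate this unified change management strategy, here is a minimal sketch in Python. The work item fields, names, and priority values are illustrative assumptions, not from the article:

```python
# Sketch of a unified work item stack where requirements and defect reports
# are the same kind of item: a defect "X" is reworded as "Please fix: X",
# estimated, prioritized, and merged into the single prioritized stack.

from dataclasses import dataclass

@dataclass
class WorkItem:
    description: str
    kind: str      # "requirement" or "defect" -- managed identically
    priority: int  # lower number = higher priority
    points: int    # relative-size estimate

def add_defect(stack, description, priority, points):
    """Reword a defect as a requirement-style work item and re-sort the stack."""
    stack.append(WorkItem(f"Please fix: {description}", "defect", priority, points))
    stack.sort(key=lambda w: w.priority)
    return stack

stack = [WorkItem("Edit student info", "requirement", 2, 8),
         WorkItem("Print transcript", "requirement", 5, 5)]
add_defect(stack, "grade totals off by one", priority=1, points=3)
print([w.description for w in stack])  # the defect now sits at the top
```

Because the defect ends up on the same stack as the requirements, the team's normal iteration planning (pull the highest-priority items off the top) handles it with no separate defect-tracking process.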
The primary impediment to adopting this strategy, that of treating requirements and defects as the same thing, occurs when the agile delivery team finds itself in a fixed-price or fixed-estimate situation. In such situations the customer typically needs to pay for new requirements that weren't agreed to at the beginning of the project but should not pay for fixing defects. The bureaucratic approach would be to have two separate change management processes; the pragmatic approach would be to simply mark the work item as something that needs to be paid extra for (or not). Naturally I favor the pragmatic approach. If you find yourself in a fixed-price situation you might be interested to know that I've written a fair bit about this and, more importantly, about alternatives for funding agile projects. To be blunt, I vacillate between considering fixed-price strategies unethical and considering them simply a sign of grotesque incompetence on the part of the organization insisting on them. Merging your requirements and defect management processes into a single, simple change management process is a wonderful opportunity for process improvement. Exceptionally questionable project funding strategies shouldn't prevent you from taking advantage of it.
An important part of the release effort for many agile teams is end-of-lifecycle testing where an independent test team validates that the system is ready to go into production. If the independent parallel testing practice has been adopted then end-of-lifecycle testing can be very short as the issues have already been substantially covered. As you see in Figure 15, the independent testing efforts stretch into the transition phase in Disciplined Agile Delivery because the independent test team will still need to test the complete system once it's available.

Figure 15. Independent testing throughout the lifecycle.
There are several reasons why you still need to do end-of-lifecycle testing:
End-of-lifecycle testing isn't discussed enough in the mainstream agile community; many people following mainstream agile methods such as Scrum assume that techniques such as TDD are sufficient. This may be because much of the mainstream agile literature focuses on small, co-located agile development teams working on fairly straightforward systems. When one or more scaling factors (such as large team size, geographically distributed teams, regulatory compliance, or complex domains) apply, however, you need more sophisticated testing strategies. Regardless of the rhetoric you may have heard in public, as we see in the next section a fair number of TDD practitioners are indicating otherwise in private.
Figure 16 summarizes the results of one of the questions from Ambysoft's 2008 Test Driven Development (TDD) Survey, which asked the TDD community which testing techniques they were using in practice. Because this survey was sent to the TDD community it doesn't accurately represent the overall adoption rate of TDD, but what is interesting is that respondents clearly indicated they weren't only doing TDD (nor, surprisingly, was everyone doing TDD). Many were also doing reviews and inspections, end-of-lifecycle testing, and parallel independent testing, activities which agile purists rarely seem to discuss.

Figure 16. Testing/Validation practices on agile teams.
Furthermore, Figure 17, which summarizes results from the 2010 How Agile Are You? survey, provides insight into which validation strategies are being followed by the teams claiming to be agile. I suspect that the adoption rates reported for developer TDD and acceptance TDD, 53% and 44% respectively, are much more realistic than those reported in Figure 16.

Figure 17. How agile teams validate their own work.
Figure 18 and Figure 19 summarize results from the Agile Testing Survey 2012. These charts indicate the PRIMARY approach to acceptance testing and developer testing respectively. On the surface there are discrepancies between the results shown in Figure 17 and those in Figure 18 and Figure 19. For example, Figure 17 shows an adoption rate of 44% for ATDD but Figure 18 only shows a 9% rate. This is because the questions were different. The 2010 survey asked whether a team was following the practice whereas the 2012 survey asked whether it was the primary approach. So, a team may have been taking a test-first approach to acceptance testing while other approaches were more common on that team, hence ATDD wasn't the team's primary strategy for acceptance testing. When it comes to test-first approaches it's clear that we still have a long way to go before they dominate.

Figure 18. Primary approach to acceptance testing.
There are several critical implications for existing test professionals:
In addition to agile testing strategies, there are also agile quality strategies. These strategies include:
Refactoring is a disciplined way to restructure your code to improve its quality. The basic idea is that you make small changes to your code to improve your design, making it easier to understand and to modify. Refactoring enables you to evolve your code slowly over time, taking an iterative and incremental approach to programming. A critical aspect of a refactoring is that it retains the behavioral semantics of your code, at least from a black-box point of view. For example, there is a very simple refactoring called Rename Method, perhaps from getPersons() to getPeople(). Although this change looks easy on the surface, you need to do more than make this single change: you must also change every invocation of this operation throughout your application code to use the new name. Only once you've made these changes can you say you've truly refactored your code, because it still works as before. It is important to understand that you do not add functionality when you are refactoring. When you refactor you improve existing code; when you add functionality you add new code. Yes, you may need to refactor your existing code before you can add new functionality. Yes, you may discover later on that you need to refactor the new code that you just added. The point is that refactoring and adding new functionality are two different but complementary tasks.
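The Rename Method example above can be sketched in a few lines of Java; the Company class and its caller are hypothetical:

```java
import java.util.List;

class Company {
    private final List<String> people;

    Company(List<String> people) { this.people = people; }

    // Before the refactoring this method was: List<String> getPersons()
    // After the rename the behavior is identical; only the name changes.
    List<String> getPeople() { return people; }
}

public class RenameMethodDemo {
    public static void main(String[] args) {
        Company company = new Company(List.of("Ada", "Grace"));
        // Every caller of getPersons() must also be updated to getPeople(),
        // otherwise the code no longer compiles -- the compiler helps you
        // verify that the refactoring is complete.
        System.out.println(company.getPeople());
    }
}
```

Note that no behavior changed: a regression test suite written against the old name, once updated to the new name, should pass unmodified.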
Refactoring applies not only to code but to your database schema as well. A database refactoring is a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. For the sake of this discussion a database schema includes both structural aspects, such as table and view definitions, and functional aspects, such as stored procedures and triggers. An interesting thing to note is that a database refactoring is conceptually more difficult than a code refactoring; code refactorings only need to maintain behavioral semantics while database refactorings must also maintain informational semantics. There is a database refactoring named Split Column, one of many described in A Catalog of Database Refactorings, where you replace a single table column with two or more other columns. For example, suppose you are working on the Person table in your database and discover that the FirstDate column is being used for two distinct purposes: when the person is a customer the column stores their birth date, and when the person is an employee it stores their hire date. Your application now needs to support people who can be both a customer and an employee, so you've got a problem. Before you can implement this new requirement you need to fix your database schema by replacing the FirstDate column with BirthDate and HireDate columns. To maintain the behavioral semantics of your database schema you need to update all source code that accesses the FirstDate column to work with the two new columns. To maintain the informational semantics you need to write a migration script that loops through the table, determines each person's type, and then copies the existing date into the appropriate column. Although this sounds easy, and sometimes it is, my experience is that database refactoring is incredibly difficult in practice when cultural issues are taken into account.
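The data-migration half of this Split Column refactoring can be sketched in-memory in Java. In practice this loop would be a migration script run against the actual Person table; the PersonRow type and its field names here are illustrative:

```java
import java.time.LocalDate;
import java.util.List;

public class SplitColumnMigration {
    enum PersonType { CUSTOMER, EMPLOYEE }

    // A row of the hypothetical Person table during the transition period.
    static class PersonRow {
        PersonType type;
        LocalDate firstDate;  // the overloaded legacy column
        LocalDate birthDate;  // new column, null until migrated
        LocalDate hireDate;   // new column, null until migrated

        PersonRow(PersonType type, LocalDate firstDate) {
            this.type = type;
            this.firstDate = firstDate;
        }
    }

    // Preserve informational semantics: copy each legacy value into the
    // new column that matches its actual meaning for that person's type.
    static void migrate(List<PersonRow> table) {
        for (PersonRow row : table) {
            if (row.type == PersonType.CUSTOMER) {
                row.birthDate = row.firstDate;
            } else {
                row.hireDate = row.firstDate;
            }
        }
    }
}
```

Only once the migration has run, and all code accessing FirstDate has been updated, can the legacy column safely be dropped.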
Refactoring also applies to your user interface, as Elliotte Rusty Harold aptly shows in Refactoring HTML. Simple changes to your user interface, such as Align Fields and Apply Common Size, can improve the quality of the look and feel of your user interface.
With non-solo development approaches, such as XP’s pair programming or Agile Modeling’s Model With Others, two or more people work together on a single activity. In many ways non-solo development is the agile implementation of the old axiom "two heads are better than one". With pair programming two people literally sit together at a single workstation, one person writing the code while the other looks over their shoulder providing ideas and ensuring that the coder follows development conventions and common quality practices such as test-driven development (TDD). The pair programmers will shift roles on a regular basis, keeping a steady pace. When modeling with others, two or more people will gather around a shared modeling environment such as a whiteboard and work together to explore a requirement or to think through a portion of the design.
Of course, non-solo development isn't just limited to programming and modeling, you can and should work collaboratively on all aspects of an IT project. For example, agile teams will often do detailed project planning as a team, typically on a just-in-time (JIT) basis at the beginning of an iteration/sprint.
There are several key benefits to non-solo development:
There are two unfortunate misperceptions about non-solo development:
I've run into many organizations where philosophical debates rage about the benefits and potential drawbacks of non-solo development techniques, yet it is often the case that these debates are theoretical in nature because the people involved really haven't tried them in practice. Here's my advice:
Static code analysis tools check for defects in the code, often looking for types of problems such as security defects which are commonly introduced by developers, or code style issues. Static code analysis enhances project visibility by quickly providing an assessment of the quality of your code. Dynamic code analysis is a bit more complicated in that it examines the executing code for problems. Both forms of code analysis are particularly important for large teams where significant amounts of code are written, geographically distributed teams where code is potentially written in isolation, organizationally distributed teams where code is written by people working for other organizations, and any organization where IT governance is an important issue.
Static code analysis should be used:
As you saw in Figure 17, which summarizes results from the 2010 How Agile Are You? survey, 32% of respondents claiming to be on agile teams included static code analysis in their builds and 21% dynamic code analysis. It's a start.
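To illustrate the kind of defect that static code analysis catches, here is a small Java sketch of a classic problem pattern, comparing strings with ==, which analyzers such as SpotBugs flag, alongside the fix. The method names are illustrative:

```java
public class StaticAnalysisExample {
    // Flagged by static analysis: == compares object references, not
    // characters, and only appears to work when strings happen to be interned.
    static boolean badMatch(String candidate, String expected) {
        return candidate == expected;
    }

    // The fix: compare content with equals().
    static boolean goodMatch(String candidate, String expected) {
        return expected.equals(candidate);
    }

    public static void main(String[] args) {
        String copy = new String("agile"); // distinct object, same content
        System.out.println(badMatch(copy, "agile"));  // false: different references
        System.out.println(goodMatch(copy, "agile")); // true: same content
    }
}
```

The value of running such a tool in every build is that defects like this are reported minutes after they are written, not weeks later in testing.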
A review, specific approaches to which include walkthroughs and inspections, is a validation technique in which one or more artifacts are examined critically by a group of your peers. The basic idea is that a group of qualified people evaluate an artifact to determine whether it not only fulfills the demands of its target audience but is also of sufficient quality to be easy to develop, maintain, and enhance. Potential problems/defects are reported to the creator of the artifact so that they may be addressed.
First and foremost, I consider the holding of reviews and inspections to be a process smell which indicates that you have made a mistake earlier in the life cycle which could very likely be rectified by another, more effective strategy (non-solo development, for example) with a much shorter feedback cycle. This goes against traditional wisdom which says that reviews are an effective quality technique, an opinion which is in fact backed up by research: in the book Practical Software Metrics for Project Management and Process Improvement, Robert Grady reports that on project teams taking a serial (non-agile) approach, 50 to 75 percent of all design errors can be found through technical reviews. I have no doubt that this is true, but my point is that although reviews are effective quality techniques in non-agile situations, in agile situations there are often much better options available to you, and you should therefore strive to adopt those quality techniques instead.
There are several situations where it makes sense to hold reviews:
As you saw in Figure 17, which summarizes results from the 2010 How Agile Are You? survey, 23% of respondents claiming to be on agile teams indicated that they do external reviews of their work periodically.
As you saw in Figure 2, a common practice of agile teams is to hold a demo of their working solution to date at the end of each iteration/sprint. The goals are to show explicit progress to key stakeholders and to obtain feedback from them. This demo is in effect an informal review, although agile teams working in regulatory environments will often choose to be a bit more formal. As you saw in Figure 17, which summarizes results from the 2010 How Agile Are You? survey, 79% of respondents claiming to be on agile teams indicated that they do iteration/sprint demos.
Although we may have several project stakeholders working directly with the team, we could have hundreds or even thousands who don't know what's going on. As a result I will often run an "all-hands" demo/review early in the delivery life cycle, typically two to three iterations into the construction phase, for a much wider audience than just the stakeholder(s) we're directly working with. We do this for several reasons:
Note that the term "all-hands" is more of a target than a reality in many cases, particularly in situations with many stakeholders and/or distributed stakeholders. As you saw in Figure 17, which summarizes results from the 2010 How Agile Are You? survey, 42% of respondents claiming to be on agile teams indicated that they do "all-hands" demos.
Milestone reviews, particularly lightweight ones, are a possible option, particularly for disciplined agile teams. The Dr. Dobb's Journal 2013 project success survey found that agile teams do not have a 100% success rate. Therefore it behooves agile teams to review progress to date at key milestone points, perhaps at the end of major project phases or at critical financial investment points (such as spending X% of the budget). Milestone reviews should consider whether the project is still viable: although the team may be producing potentially shippable software each iteration, the business environment may have changed and negated the potential business value of the system.
An important agile quality strategy, and an important quality strategy in general, is to reduce the feedback cycle between producing something and validating it. The shorter the feedback cycle, the less expensive (on average) it is to address any problems (see Why Agile Testing and Quality Techniques Work later in this article) and the greater the chance that you'll be motivated to make the required change in the first place. Table 1 discusses the feedback cycles of several agile techniques to give you a better understanding.
|Agile Strategy||Feedback Cycle|
|Continuous Integration||Minutes. You check in your code, rerun your build (or have it automatically rerun for you depending on the CI tools that you're using), and from your test results (remember, agilists are at least doing developer regression testing if not TDD), you see whether or not what you just did works as expected.|
|Active Stakeholder Participation||Seconds to days. You discuss with one or more stakeholders what they want, getting feedback in real time regarding your understanding of what they're asking for. Once you think you understand their intent, you work on developing a solution which fulfills that intent. This will take you a few hours or maybe a few days, after which you show them what you did and get feedback from the stakeholder(s). You iterate as needed.|
|Non-Solo Development||Seconds. You do some work as others are looking on and providing input.|
|Test-Driven Development (TDD)||Minutes. You write a test, write just enough production code to fulfill that test, run your build (see continuous integration), and within minutes you know whether what you just did works or not.|
|Iteration/Sprint Demo||Weeks. Your team promises to deliver something at the beginning of the iteration/sprint, and at the end of the iteration/sprint you demo what you built, validating whether you fulfilled the promise(s) you made at the beginning. Because iterations are typically measured in weeks, so is this feedback cycle.|
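The minutes-long TDD feedback cycle described in the table can be sketched in plain Java; assertions stand in for a test framework such as JUnit, and the Counter class is hypothetical:

```java
public class TddCycleDemo {
    // Step 2: just enough production code to make the test pass.
    static class Counter {
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    // Step 1: the test is written first, before Counter exists, and
    // initially fails (it won't even compile until Counter is written).
    static void testIncrementAddsOne() {
        Counter counter = new Counter();
        counter.increment();
        if (counter.value() != 1) throw new AssertionError("expected 1");
    }

    public static void main(String[] args) {
        // Step 3: run the build; a passing test gives feedback within minutes,
        // after which you refactor and repeat the cycle.
        testIncrementAddsOne();
        System.out.println("all tests pass");
    }
}
```

Each pass through this write-test, write-code, run-build loop is one trip around the feedback cycle that the table measures in minutes.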
Following development standards and guidelines is an important, and relatively easy to adopt, quality technique. When IT developers follow a common set of development guidelines it leads to greater consistency within their work, which in turn improves the understandability of the work and thus its quality. The good news is that agile methodologies recognize the importance of development guidelines: Extreme Programming (XP) includes the practice Coding Standards, Agile Modeling includes the practice Apply Modeling Standards (see Elements of UML 2.0 Style for examples), and Agile Data promotes the practice of following database guidelines. The bad news, as you can see in Figure 20 which summarizes results from the DDJ State of the IT Union July 2009 survey, is that there appears to be more talk about following guidelines than actual following of said guidelines. Following personal guidelines is, hopefully, a bit better than following no guidelines at all, and following project-level guidelines is better still. Ideally, development teams should follow enterprise-level guidelines to promote ease of transfer between teams and to reduce the effort spent developing guidelines. Better yet, you should strive to adopt and tailor existing industry guidelines for your organization -- does it really make sense for you to create your own set of Java Coding Guidelines? Probably not.

Figure 20. Development team's approach to following coding conventions (not paradigm specific).
The Agile Practices and Principles Survey (July 2008) found that when agile development teams were following common development guidelines they were most likely to be following coding guidelines, and less likely to be following either database guidelines or user interface (UI) guidelines. Figure 21, which summarizes results from the 2010 How Agile Are You? survey, found that 58% of people who believed they were on agile teams reported having development guidelines identified and that 54% (93% of the 58%) were actually following them.

Figure 21. Agile criterion: Self organization.
There are several very serious implications for quality practitioners:
Figure 22 depicts several common agile strategies mapped to Barry Boehm's Cost of Change Curve. In the early 1980s Boehm discovered that the average cost to address a defect rises exponentially the longer it takes you to find it. In other words, if you inject a defect into your system and then find it a few minutes later and fix it, the cost is very likely negligible to do so. However, if you find it three months later then that defect could cost you hundreds if not thousands of dollars to address, on average, because not only will you need to fix the original problem but you'll also have to fix any work which is based on that defect. Many of the agile techniques have feedback cycles on the order of minutes or days, whereas many traditional techniques have feedback cycles on the order of weeks and often months. So, even though traditional strategies can be effective at finding defects the average cost of fixing them is much higher.

Figure 22. Mapping common techniques to the cost of change curve.
Table 2 summarizes results from the Agile Testing Survey 2012. One of the issues that the survey explored was what challenges agile teams faced when adopting agile testing and quality strategies.
Table 2. The most difficult challenges when adopting agile testing approaches.
|50%||Getting all testing done in the current iteration/sprint|
|37%||Adopting test-driven development (TDD) approaches|
|33%||Validating non-functional requirements|
|33%||Getting stakeholders/customers involved with testing|
|27%||Getting developers to test their own code|
|21%||User interface testing|
|16%||Learning to test throughout the agile lifecycle|
|13%||Adopting new agile testing tools|
|12%||Migrating existing testing and quality professionals to agile|
|8%||Using our existing testing tools to support agile development|
|8%||Remaining regulatory compliant|
|This book, Disciplined Agile Delivery: A Practitioner's Guide to Agile Software Delivery in the Enterprise describes the Disciplined Agile Delivery (DAD) process decision framework. The DAD framework is a people-first, learning-oriented hybrid agile approach to IT solution delivery. It has a risk-value delivery lifecycle, is goal-driven, is enterprise aware, and provides the foundation for scaling agile. This book is particularly important for anyone who wants to understand how agile works from end-to-end within an enterprise setting. Data professionals will find it interesting because it shows how agile modeling and agile database techniques fit into the overall solution delivery process. Enterprise professionals will find it interesting because it explicitly promotes the idea that disciplined agile teams should be enterprise aware and therefore work closely with enterprise teams. Existing agile developers will find it interesting because it shows how to extend Scrum-based and Kanban-based strategies to provide a coherent, end-to-end streamlined delivery process.|
We actively work with clients around the world to improve their information technology (IT) practices, typically in the role of mentor/coach, team lead, or trainer. A full description of what we do, and how to contact us, can be found at Scott W. Ambler + Associates.