DevOps Example

A team is in charge of the website documentation, tutorials, and new user experience for a complex web application. The application has a one-week free trial, after which users must either purchase the full version of the software or stop using it. The team's core responsibility is to maximize the rate at which users who start a free trial go on to purchase the full version.

The product owner is responsible for assessing the business value of each potential experiment, considering both the value created immediately (e.g., an increase in sales) and the potential value of what the team might learn when analyzing the results.

The product owner collaborates with the rest of the team to determine what specifically needs to be done to create the desired features, what data should be observed and analyzed, and what is required to collect and analyze that data, in order to create ready Product Backlog Items (PBIs). These PBIs are estimated with respect to how much effort is required to finish each item. During the product backlog selection discussion, led by their Scrum Master, the team collectively might combine, separate, simplify, or alter their product backlog items to come up with a specific experiment to run (i.e., an iteration).

Example Iteration

One item on their product backlog is to translate their FAQ, which is currently provided only in English, into additional languages, because they have noticed that their conversion rate in non-English-speaking European countries is substantially lower than it is in the UK. Another item on their backlog is to create a guided walkthrough that is triggered when a new user starts using the application: a large percentage of their users launch the application once and never use it again, and the team hypothesizes that a substantial number of new users are lost because they are confused by the application the first time they use it.

With respect to this iteration, the team might elect to create the automated guided walkthrough, but to make it available in both English and German, as they realize that translating the walkthrough into just one additional language is a small additional effort compared to translating the entire FAQ and might give insight into the value of translation. They would then measure the increase in how many users launch the application more than once, and how many convert to a purchase of the full version, among users from both the UK and Germany, and compare those figures to the existing baseline rates.
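
As an illustration, the assignment and tracking behind this measurement might look something like the following TypeScript sketch. All names here (assignVariant, ExposureEvent, and so on) are hypothetical; the essential points are that each user is deterministically placed in one bucket, and that every exposure is tagged with the country and trial-start data the later analysis will need.

```typescript
// Hypothetical sketch: deterministic 50/50 assignment of new users to the
// walkthrough experiment, tagged with country so cohorts can later be
// compared against baseline rates. All names are illustrative.

type Variant = "walkthrough" | "control";

// Simple string hash (djb2 variant); any stable hash works for bucketing.
function hashString(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = ((h << 5) + h + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// A given user always lands in the same bucket, so repeat launches
// see a consistent experience.
function assignVariant(userId: string): Variant {
  return hashString(userId) % 2 === 0 ? "walkthrough" : "control";
}

interface ExposureEvent {
  userId: string;
  variant: Variant;
  country: string;  // e.g., "UK" or "DE", used to split cohorts
  trialStart: Date; // conversions are judged against the one-week trial
}

function recordExposure(userId: string, country: string): ExposureEvent {
  const event: ExposureEvent = {
    userId,
    variant: assignVariant(userId),
    country,
    trialStart: new Date(),
  };
  // In practice this would be sent to the team's analytics store;
  // logging stands in for that here.
  console.log(JSON.stringify(event));
  return event;
}
```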

Analyzing the resulting data might inform the team about the potential future value of adding additional language support to their documentation, while also helping them understand whether new-user confusion results in a significant loss of sales or whether other issues are at play (e.g., once users try the application, they often feel that the product does not meet their needs). Once the elements to create, observe, and analyze are fully fleshed out, the team begins their iteration. They take the larger experiment and break it into specific tasks, which flow through the task board. This team also has regular recurring responsibilities required to “keep the lights on”, as well as bug fixes that must be handled as needed. The team adds the “keep the lights on” items to the task board as needed, and adds “bug fix” items if and when bugs are identified.

In this situation, an example set of tasks might be “Select which features the automated walkthrough will highlight and write text copy for each step in English”, “Send the English text to our German translator”, or “Create code that will allow us to easily highlight a specific element of the interface and show text in a bubble near it, with the ability to step forwards and backwards through a list of walkthrough steps”. The tasks also include the steps required for the observation and analysis, such as “Randomly show the new guided walkthrough to 50% of new users and track how many users in each sample use the application more than once, and how many buy the full version within a week, separated by country” and “Run a statistical test on the resulting data and assess the impact of the guided walkthrough overall and of the German-language version in particular.”
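
The walkthrough-stepper task above might be sketched as follows. The selectors, class names, and step text are placeholders for whatever the team's front end actually uses, and a real implementation would render a positioned bubble rather than logging text.

```typescript
// Hypothetical sketch of the walkthrough task: a list of steps, each
// highlighting one interface element and showing localized bubble text,
// with the ability to step forwards and backwards.

interface WalkthroughStep {
  selector: string;                 // CSS selector of the element to highlight
  text: { en: string; de: string }; // copy in both experiment languages
}

class Walkthrough {
  private index = 0;

  constructor(
    private steps: WalkthroughStep[],
    private lang: "en" | "de",
  ) {}

  private render(): void {
    const step = this.steps[this.index];
    const el = document.querySelector(step.selector);
    if (!el) return; // element missing; skip rather than crash
    el.classList.add("walkthrough-highlight");
    // A real implementation would position a bubble near `el`;
    // here we only log the localized text for the current step.
    console.log(`Step ${this.index + 1}/${this.steps.length}: ${step.text[this.lang]}`);
  }

  private clear(): void {
    const step = this.steps[this.index];
    document.querySelector(step.selector)?.classList.remove("walkthrough-highlight");
  }

  next(): void {
    if (this.index < this.steps.length - 1) {
      this.clear();
      this.index++;
      this.render();
    }
  }

  back(): void {
    if (this.index > 0) {
      this.clear();
      this.index--;
      this.render();
    }
  }

  start(): void {
    this.index = 0;
    this.render();
  }
}

// Usage with illustrative steps, shown in the German variant:
const tour = new Walkthrough(
  [
    { selector: "#new-project", text: { en: "Create your first project here.", de: "Erstellen Sie hier Ihr erstes Projekt." } },
    { selector: "#share-button", text: { en: "Share your work with others.", de: "Teilen Sie Ihre Arbeit mit anderen." } },
  ],
  "de",
);
tour.start();
```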

Each day, the team has their daily standup to identify issues and roadblocks. In this example, it might take 7–10 days to finish observing and analyzing the data, as the team would want to compare conversion rates among people who used the automated walkthrough on their first use of the product and whose one-week free trial had ended before drawing conclusions. Note that, in this situation, the team may have a lighter workload during the observe-and-analyze portion of the iteration. During this time, the team might work on “keep the lights on” tasks or groom their product backlog. In addition, the team might start their next iteration and work on it concurrently while observing and analyzing the data from the current iteration. Starting the next iteration while still in the final phases of the current one is similar to the more advanced pipelining of sprints found in Type C Scrum, described by Sutherland [33], but not commonly used by Scrum teams.
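
The statistical test mentioned in the tasks above could, under these assumptions, be as simple as a two-proportion z-test comparing conversion rates between the walkthrough and control cohorts. The sketch below is illustrative: the counts are made up, and the hand-rolled normal-CDF approximation (Abramowitz and Stegun) stands in for whatever statistics tooling the team actually uses.

```typescript
// Hypothetical sketch of the analysis task: a two-sided two-proportion
// z-test of H0 "the two cohorts convert at the same rate".

// Standard normal CDF via the Abramowitz–Stegun approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2); // 1/sqrt(2*pi) * exp(-z^2/2)
  const tail =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - tail : tail;
}

function twoProportionZTest(
  conversionsA: number, totalA: number,
  conversionsB: number, totalB: number,
): { z: number; pValue: number } {
  const pA = conversionsA / totalA;
  const pB = conversionsB / totalB;
  // Pooled proportion under the null hypothesis of equal rates.
  const pooled = (conversionsA + conversionsB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  const z = (pA - pB) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided
  return { z, pValue };
}

// Made-up example counts: 90 of 1200 walkthrough users converted
// versus 60 of 1180 control users.
const result = twoProportionZTest(90, 1200, 60, 1180);
console.log(`z = ${result.z.toFixed(2)}, p = ${result.pValue.toFixed(4)}`);
```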

After the Iteration has Completed

Once the data has been analyzed, grooming and prioritization are done before the next iteration starts. The team also discusses their findings with their stakeholders at the next scheduled iteration review.

To continue our example, if the team found that the conversion rate in their German user segment was significantly higher than expected, they might prioritize a smaller iteration to add support for three additional languages. If they found that the experiment resulted in no significant increase in sales conversions among either the German or the English demographic, they might instead design an experiment to see whether a specific missing feature was causing users to abandon the product after a single use. As these items were already on the backlog, the start of this new iteration (i.e., starting the next experiment) would not need to wait for the team's next iteration review; rather, the priority of these items would be adjusted through the team's grooming and prioritization effort. The iteration review might, however, uncover additional items for the product backlog, such as additional guidance for more advanced features of the application.

Finally, the team has a monthly Retrospective to discuss what is and is not working in the current process and associated technical practices, and to explore how to improve the team's process and results.