The team was composed of a product owner, a Process Expert, and four DDS team members. The Process Expert and the product owner were part-time within each team. As expected, during the project, the Process Expert helped the team adhere to the DDS framework.

The project was to analyze a large data set of customer survey responses for a client. The initial requirements for the project were very high-level. Specifically, the team had a goal of “helping the management team understand the customer surveys and what drives customer satisfaction.” Hence, the team had to refine its goals (requirements) as it incrementally understood the data and what might be possible in terms of actionable insight generated via data analytics.

To perform the analysis, the team needed to apply many typical data science techniques, such as descriptive statistics, machine learning algorithms, and geographic information analysis. The work was done in the R programming language, a popular data science tool used in both industry and academia.
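As a minimal sketch of the descriptive-statistics side of such work, the following base R example summarizes a hypothetical survey data frame; the column names (satisfaction, age, state) and values are illustrative assumptions, not the client's actual data:

```r
# Hypothetical survey sample; columns are illustrative, not from the real data set.
survey <- data.frame(
  satisfaction = c(4, 5, 3, 2, 5, 4),   # 1-5 rating
  age          = c(23, 35, 41, 52, 29, 60),
  state        = c("CA", "NY", "CA", "TX", "NY", "TX")
)

# Overall distribution of satisfaction scores (min, quartiles, mean, max).
summary(survey$satisfaction)

# Mean satisfaction per state, a simple descriptive slice of the data.
mean_by_state <- tapply(survey$satisfaction, survey$state, mean)
print(mean_by_state)
```

In practice the same pattern scales to the full survey: compute overall summaries first, then slice by candidate drivers of satisfaction.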

Example Iteration

The DDS team worked collectively to determine what needed to be done during an iteration, what data should be observed and analyzed, and what would be required to collect and analyze the information generated from that iteration. When grooming and prioritizing, the team estimated how much effort was required to run a specific experiment (i.e., to perform one cycle of create, observe, and then analyze). This estimation was done at a coarse level (high, medium, or low). Then, during product backlog selection, the team collectively reviewed the product backlog items to select a specific experiment to run.

An example item on the team’s product backlog was to explore customer satisfaction by age. This item was broken down into exploring overall customer satisfaction by age, as well as satisfaction by geography (e.g., per state in the United States). The team determined that the item required four tasks on the board: two related to data munging, one to calculate customer satisfaction across different loyalty levels by age, and one to explore customer satisfaction by age on a geographic basis.
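The munging-then-analysis flow of this experiment can be sketched in base R. This is a hedged illustration under assumed field names and age-band cut points, not the team's actual code:

```r
# Hypothetical sketch of the "satisfaction by age" experiment.
# Column names, loyalty labels, and age-band breaks are assumptions.
survey <- data.frame(
  age          = c(19, 25, 34, 47, 52, 63, 71, 38),
  loyalty      = c("low", "high", "high", "low", "medium", "high", "medium", "low"),
  satisfaction = c(3, 5, 4, 2, 4, 5, 3, 2)   # 1-5 rating
)

# Data munging: bin raw ages into bands before analysis.
survey$age_band <- cut(survey$age,
                       breaks = c(0, 30, 50, 70, Inf),
                       labels = c("under 30", "30-49", "50-69", "70+"))

# Mean satisfaction by age band and loyalty level.
result <- aggregate(satisfaction ~ age_band + loyalty, data = survey, FUN = mean)
print(result)
```

The geographic task would follow the same pattern, grouping by a state column instead of (or in addition to) the age band.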

This experiment (item) was prioritized as important because the team hypothesized that age might be an important characteristic of customer satisfaction. Furthermore, based on previous experiments (iterations), loyalty level was deemed potentially interesting. Once it was clear how the team was going to create, observe, and analyze its experiment, the team began the iteration.

During this iteration (and all other iterations), the team’s board was defined with four columns: “to do”, “in progress”, “validate”, and “done”. The team included an explicit “validate” column because it believed every task should be validated before being marked done. Each day, the team held its daily standup to identify issues and roadblocks. Note that, due to a variety of logistical issues, this was not always a face-to-face meeting. This specific iteration took 1.5 days.

Iteration Review Meeting

Since the team had agreed to hold an iteration review meeting on a weekly basis, once the iteration had been completed, the findings were discussed at the next weekly review meeting. There, the team reached consensus on some possible next experiments, which were then added to the product backlog.

Results from Retrospective

For this team, retrospectives occurred on a monthly basis. The team collectively agreed that, to make clear whether a task focused on create, observe, or analyze, task types would be explicitly color-coded in future iterations.