Social Contracts and the Agile Team – Part 1

It’s a scenario we’ve all been a part of before.  To shake things up, your Agile teams are being restructured.  After the initial shuffle, the team gets together for a first meeting to figure out how it is going to work.  Introductions are made, experiences are shared.  Maybe a team lead is named.  It’s a heady time full of expectations.  Following the cycle of Forming-Storming-Norming-Performing, phase one is off to a good start.

At the first team retro, a better understanding of what everyone brings to the team starts to take shape.  Relationships and communication within the team, as well as with other players within the organization, take root.  The team also starts to get a sense of where there are some gaps.  Maybe it’s a misunderstanding of how code reviews work, or how cards are pointed.  Storming has happened, and the team is ready to begin the transition to the Norming phase.

I’d suggest that team norms, which tend to be prescriptive in nature, fall short of what stakeholders hope they will accomplish.  Instead, a social contract is a better concept to work towards.

A social contract is a team-designed agreement on an aspirational set of values, behaviors and social norms.  It sets not only expectations but also responsibilities.  Instead of focusing on how individual team members should approach the work of the team and organization, it lays out the responsibilities of the team members to each other.  It also lays out the responsibilities and expectations between the team and the organization.

What would this type of contract look like?  It should call out both sides of a relationship.  An example of part of a social contract may look like this:

  • The Team promises to treat delivering value to the organization through deliverable software, as defined by the Product Owner, as its highest goal
  • The Team promises to immediately raise any obstacles preventing them from delivering value
  • The Organization promises to address and remove obstacles in a timely manner to the best of their ability
  • The Organization promises to maintain reasonable stability of the team so that it has the opportunity to mature and reach its highest potential

In the spirit of the social contract, it should be discussed and brainstormed with open minds and constructive dialog by both sides of the social equation.  In truly Agile fashion, it should also be treated as an iterative process and reviewed from time to time to ensure the social contract itself is providing value.

Introduction
This is the second in a series of posts about our experience using Visual Studio Team Services (VSTS) to build a deployment pipeline for an ASP.NET Core Web Application. Future posts will cover release artifacts and deployment to Azure cloud services.

Prerequisites
It’s assumed that you have an ASP.NET Core project set up in VSTS and connected to a Git repository.
See the previous blog post for details.

Goal
The goal of this post is to set up your ASP.NET Core project to automatically build and run unit tests after every commit to the source code repository (i.e. continuous integration).

Here is a video summarizing the steps described below:

Adding a New Build Definition
Log into VSTS. For this demo, the VSTS account that we will be using is https://ccpacetest.visualstudio.com, and the Microsoft user is CCPaceTest@outlook.com.

Select the project from our previous post.

In VSTS, go to Build under the Build and Release tab

  1. Select “+ New Definition”
  2. Select ASP.NET Core Template                              
  3. Enter any Name that can help you to identify this build.
  4. Select an appropriate Agent queue. In this example, we will use Hosted VS2017. Use this agent if you’re using Visual Studio 2017 and you want the VSTS service to maintain your queue.  
  5. To simplify the process, use the default value for other fields.
  6. Go to “Triggers” and enable continuous integration. This will cause a build to kick off automatically after every code commit.
  7. Save the definition.

Adding a Test Project to the Solution

  1. Open the solution (from the previous post) in Visual Studio.
  2. Add a new Test Project.  
  3. Select .NET Core > Unit Test Project. We will name this project MyFirstApp.Tests. Note: the default build definition will look for test projects under folders whose names end with the word “Tests”. So, make sure the folder created when you add your Unit Test Project follows this naming convention.
  4. For a proof of concept, we are going to write a dummy test method in UnitTest1.cs (a minimal sketch of what that might look like appears just after this list).
  5. Rebuild the project.
  6. Commit the changes locally and push it to the remote source repo.
  7. Back in VSTS, you can see that a Build has been triggered. 
  8. Click on the build number #2018XXXX.X to view the details of the build. Normally this will take a few minutes to complete.
  9. Ensure all of the steps passed. You can click on each step to view log details. 
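
As referenced in step 4, here is a minimal sketch of what UnitTest1.cs might contain, assuming the default MSTest-based Unit Test Project template; the assertion is deliberately trivial, since the only point is to give the CI build a test to discover and run.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyFirstApp.Tests
{
    [TestClass]
    public class UnitTest1
    {
        // A deliberately trivial test: it exists only so the CI build
        // has something to discover and run after each commit.
        [TestMethod]
        public void TestMethod1()
        {
            Assert.IsTrue(true);
        }
    }
}

If the build definition’s default test step picks up the new project, the test run should show up in the build results described in steps 7–9.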

What’s Next?
We’ll demonstrate how to deploy builds to different environments, either via push-button deployment or triggered automatically after each build (i.e. continuous deployment).

Stay tuned!

‘Agile’… ‘Lean’… ‘Fitnesse’… ‘Fit’… ‘(Win)Runner’… ‘Cucumber’… ‘Eggplant’… ‘Lime’… As 2018 draws near, one might hear a few of these words bantered around the water cooler at this time of year as part of the trendy discussion topic: our personal New Year’s resolutions to get back into shape and eat healthy. While many of my well-intentioned colleagues are indeed well on their way to a healthier 2018, many of these words were actually discussed during a strategy session I attended recently –  which, surprisingly, based on the fact that many of these words are truly foods – did not cover new diet and exercise trends for 2018. Instead, this planning session agenda focused on another trendy discussion topic in our office as we close out 2017 and flip the calendar over to 2018: software test automation.

“SOFTWARE TEST AUTOMATION?!?” you ask?

“SERIOUSLY – cucumbers and limes and fitness(e)?!?”

This thought came to mind after the planning session and gave me a chuckle. I thought, “If a complete stranger walked by our meeting room and heard these words thrown around, what would they think we were talking about?”

This humorous thought resonated further when recently – and rather coincidentally – a client asked me for a high-level, summary explanation as to how I would implement automated testing on a software development project. It was a broad and rather open-ended question – not meant to be technical in nature or to solicit a solution. Rather, how would I, with a background in Agile Business Analysis and Testing (i.e. I am not a developer) go about kickstarting and implementing a test automation framework for a particular software development project?

This all got me thinking. I’ve never seen an official survey, but I assume many people employed or with an interest in software development could provide a reasonable and well-informed response if ever asked to define or discuss software test automation, the many benefits of automated testing and how the practice delivers requisite business value. I believe, however, that there is a substantial dividing line between understanding the general concepts of test automation and successfully implementing a high-quality and sustainable automated testing suite. In other words, those who are considered experts in this domain are truly experts – they possess a unique and sought-after skill set and are very good at what they do. There really isn’t any middle ground, in my opinion.

My reasoning here is that getting from ‘Point A’ (simply understanding the concepts) to ‘Point B’ (implementing and maintaining an effective and sustainable test automation platform) is often an arduous and laborious effort, which unfortunately, in many cases, does not always result in success. At a fundamental level, the journey to a successful test automation practice involves the following:

  • Financial investment: Like with any software development quality assurance initiative, test automation requires a significant financial investment (in both tools and personnel). The notion here, however – like any other reasonable investment – is that an upfront financial investment should provide a solid return down the line if the venture is successful. This is not simply a two-point ‘spike’ user story assigned to someone to research the latest test automation tools. To use the poker metaphor – if you are ready to commit, then you should go all-in.
  • Time investment: How many software development project teams have you heard stating that they have extra time on their hands? Surely, not many, if any at all. Kicking off an automated testing initiative also requires a significant upfront time investment. Resources otherwise assigned to standard analysis, development or testing tasks will need to shift roles and contribute to the automated testing effort. Researching and learning the technical aspects of automated testing tools, along with the actual effort to design, build out and execute a suite of automated tests requires an exceptional team effort. Reassigning team tasks initially will reduce a team’s velocity, although similar to the financial investment concept, the hope is significant time savings and improved quality down the line in later sprints as larger deployments and major releases draw near.
  • Dedicated resources with unique, sought after skill sets: In my experience, I’ve seen that usually the highest rated employees with the most institutional/system knowledge and experience are called on to manage and drive automated testing efforts. These highly rated employees are also more than likely the most expensive, as the roles require a unique technical and analytical skill set, along with a significant knowledge of corresponding business processes. Because these organizational ‘all-stars’ initially will be focused solely on the test automation effort, other quality assurance tasks will inherently assume added risk. This risk needs to be mitigated in order to prevent a reduction in quality in other organizational efforts.

It turns out that the coincidental internal automated testing discussion and timely client question – along with the ongoing challenge in the QA domain associated with the aforementioned ‘Point A to Point B’ metaphor – led to a documented, bulleted list response to the client’s question. Let’s call it an Agile test automation best-practices checklist. This list can be found below and provides several concepts and ideas an organization could utilize in order to incorporate test automation into their current software testing/QA practice. Since I was familiar with the client’s organization, personnel and product offerings, I could provide a bit more detail than necessary. The idea here is not the ‘what’, as you will not find any specific automation tools mentioned. Instead, this list covers the ‘how’: the process-oriented concepts of test automation along with the associated benefits of each concept.

This list should provide your team with a handy starting point, or a ‘bridge’ between Point A and Point B. If your team can identify with many of the concepts in this list and relate them to your current testing process and procedures, then further pursuing an automated testing initiative should be a reasonable option for your team or project.

More importantly, this list can be used as a tool and foundation for non-technical members of a software development team (e.g. BA, Tester, ScrumMaster, etc.) to start the conversation – essentially, to decide whether automated testing fits in with your established process and procedures, whether or not it will provide a return on investment, and, if you do indeed embark down the test automation path, to ensure that you continue to progress forward as applications, personnel and teams mature, grow and, inevitably, change. Understand these concepts and when to apply them, and you can learn more about cucumbers, limes and eggplants as you progress further down the test automation path:

To successfully implement and advance an effective and sustainable automated testing initiative, I make every effort to follow the strategy below, which combines proven Agile test automation best practices with personal, hands-on project and testing experience. As such, this is not an all-inclusive list, but rather just one IT consultant’s answer to a client’s question:

For folks new to the world of test automation and for those who had absolutely no idea that ‘Cucumber’ is not only a healthy vegetable but is also the name of an automated testing tool, I hope this blog entry is a good start for your journey into the world of test automation. For the ‘experts’ out there, please respond and let me know if I missed any important steps or tasks, or how you might do things differently. After all, we’re all in this together, and the more knowledge is spread throughout the IT world, the more we can further enhance our processes.

So, if you’ll excuse me now, I’m going to go ahead and plan out my 2018 New Year’s resolution exercise regimen and diet. Any additional thoughts on test automation will have to wait until next year.

Thoughts on Agile DC 2017
In early February of 2001, a small group of software developers published the Agile Manifesto. Its principles are now familiar to countless professionals. Over the years, Agile practices seem to have gained mainstream appeal. Today it is difficult to find a software developer, or even a manager who has never heard of Agile. Having been a software developer for over 15 years, to me, Agile methodologies appear to embody a very natural way of producing software, so in a lot of ways, Agile conferences produce a “preaching to the choir” effect as well. As I attended the Agile DC 2017 conference, I wanted to remove myself from the bubble and try to objectively take a pulse of the state of Agile today. I also wanted to learn about the challenges that Agile seems to continue to face in its adoption at the enterprise level.

Basic takeaways
As I listened to various speakers and held conversations with colleagues and even one of the presenters, it became apparent that today the main challenges of adopting Agile no longer deal specifically with software engineering. That generally makes sense, given the amount of literature, combined experience and continuous improvement software development teams have been able to produce as Agile has matured from a disruptive toddler into a young adult. The real struggle appears to be in adoption of Agile across other business units, not so much IT. Why is that? I believe the struggle can be attributed to two main factors: organizational culture and, most importantly, a lack of knowledge about team dynamics and human psychology.

Culture
Agile seems to continue to clash with the way business units are organized. Some hierarchical structures may produce very siloed and stovepiped groups that have difficulty collaborating with one another. While this may appear as an insurmountable problem, there is at least one great example of how Agile organizations have addressed it. We can point to DevOps as a great strategy of integrating or streamlining traditionally separate IT units. Software development teams and IT Operations teams have traditionally existed in their respective silos, often functioning as completely separate units. There was a clear need of streamlining collaboration between the two to improve efficiency and increase throughput. As a result, DevOps continues on a successful path. In regards to other business units, “Marketing” may depend on “Legal” or “Product Development” may depend on “Compliance”. There may be a clear need to streamline collaboration between these business units. One of my favorite presentations at the conference was by Colleen Johnson entitled “End to End Kanban for the Whole Organization”. By creating a corporate Kanban board, separate business units were able to collaborate better and absorb change faster. One of the business unit leaders was able to see that there were unnecessary resources being allocated to supporting a product development effort that was dropped (into the “dropped” row on the board). Anecdotally, this realization occurred during a walk down the hall and a quick glance at the board.

Education
When it comes to software development, Agile appears to contain some key properties that make it a very logical and natural fit within the practice. Perhaps its success stories in the software development community are what produced a perception that it applies specifically to software engineering and not to other groups within an organization. It seems to me that one of the main themes of the conference was to help educate professionals who possess an agnostic view related to the benefits of an Agile transformation. I believe that continuous education is critical to getting business unit leaders to embrace Agile at the enterprise level. Some properties of Agile and its related methodologies address some important aspects of human psychology.  This is where the education seems to fall short. There is much focus on improvement in efficiency and production of value, with little explanation of why these benefits are reaped. In simple terms, working in an Agile environment makes people happier. There are “softer” aspects that business leaders should consider when adopting Agile methodologies. Focus on incremental problem solving provides a higher level of satisfaction as work gets completed. This helps trigger a mechanism within the reward system of our brain. Continuous delivery helps improve morale as staff is able to associate their immediate efforts with the visible progress of their projects. Close collaboration and problem solving create strong bonds between team members and foster shared accountability. Perhaps a detailed discussion on team dynamics and psychology is a whole separate topic for another blog, but I feel these topics are important to managing the perception of the role of Agile at the enterprise level.

Final thoughts
It is important to note that enterprise-wide Agile transformations are taking place. For example, several speakers at the conference highlighted success stories at Capital One. It seems that enterprise-wide adoption is finally happening, albeit very incrementally, but perhaps that is the Agile way.

Learn more about Agile Coaching.

Boy this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day here in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to The University of Maryland, Baltimore County, for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.

Niels left a solid impression in a short amount of time here at CC Pace. In a matter of 10 weeks, Niels interacted with and was able to enhance several internal processes for virtually all of CC Pace’s internal departments including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), Sales and Marketing. On his last day, I walked Niels around the office and as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around. Many folks also expressed wishes that Niels’ and our paths will hopefully soon cross again. In short, Niels made a very solid impression on a large group of my colleagues in a relatively short amount of time.

Back in June I gladly accepted the challenge of filling Niels’ ‘mentor’ role as he embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances his way up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing back to my days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, with the assurance of knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…)

Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was providing support to the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.

As I reminisced back to my eight months at GE, I couldn’t help but notice the similarities between my internship and a few of the ‘lessons learned’ I took away from my experience 20+ years ago, and how they compared or contrasted to my recent observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years – the technology we use every day is clearly the biggest distinction. I would be remiss not to also mention the obvious generation gap – I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’ and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.

In summary, my reminiscing back to the late 90’s not only made my daily music choices easier for a few weeks this summer, it also led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE, 20-odd years ago, along with how my experiences compared or contrasted with what I observed in the last 10 weeks working side-by-side with our intern, Niels. These observations are based on my role as his mentor, were provided as feedback to Niels in his summary review, and are in no particular order.

Have you similarly had the opportunity to engage in both roles within the intern/mentor relationship as I have? Maybe your example isn’t separated by 20 years, but several years? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed at how even though ‘the more things change, the more they stay the same’.

I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.

While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.

I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true with data warehouse projects too, where a client’s database team wanted the entire schema designed up-front – even before the requirements for the reports the data warehouse would be supporting were identified. What appeared to be driving the design was not the business and its report priorities, but the database team and its desire to have a complete data model.

While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.

The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each and, as mentioned, star schemas. The book stresses the 7 Ws when designing a data warehouse – who, what, where, when, why, how and how many. These are the questions to ask when talking to the business to come up with an appropriate design. “How many” is applicable to the fact tables, while the other questions apply to dimension table design.
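
To make the fact and dimension vocabulary concrete, here is a rough illustration of one small star, sketched as plain C# classes (C# only because that is the language used elsewhere on this blog); the order-sales example and all of the names are invented, and only five of the seven Ws are shown.

// One small "star": a fact table plus its supporting dimension tables.
// "How many" lives on the fact; who, what, when and where live on the
// dimensions it references by key.

public class OrderFact               // how many
{
    public int CustomerKey;          // who   -> CustomerDim
    public int ProductKey;           // what  -> ProductDim
    public int DateKey;              // when  -> DateDim
    public int StoreKey;             // where -> StoreDim
    public int Quantity;             // the measures being counted and summed
    public decimal SalesAmount;
}

public class CustomerDim { public int CustomerKey; public string Name; public string Segment; }
public class ProductDim  { public int ProductKey;  public string Name; public string Category; }
public class DateDim     { public int DateKey;     public System.DateTime Date; public string Quarter; }
public class StoreDim    { public int StoreKey;    public string City; public string Region; }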

Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.

I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.

“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt From: Barbara W. Tuchman. “The Guns of August.”

In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.

Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.

Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.

Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?

I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.

Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.

First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?

Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.

Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to generate the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?

I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”

The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.

Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan and be ready to go to your contingency plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.

“Once I started looking around behind the port frames, I figured I could just….”

And so began a summer of endless sailboat projects and no sailing.  One project led to the start of another without resolving the first.  What does this possibly have to do with software development and Agile techniques?

My old man and I own and are restoring an older sailboat.  He is also in the IT profession, and is steeped in classic waterfall development methodology.  After another frustrating day of talking past each other, he asked how I felt things could be handled differently in our boat projects.

“Stop starting and start finishing!”

It is the key mindset for Agile.  Take a small task that provides value, focus on it, and get it done.  It eliminates distraction and gives the user something usable quickly.

Applying this mindset outside of software may not be intuitive, but can pay dividends quickly.  On the boat, we cleared space on the bulkhead, grabbed a stack of post-its and planned through the next project, rewiring the boat.  The discussion started with the goal of the project.  “We’re just going to tear everything out and rewire everything.” Talk about ignoring non-breaking changes!  I suggested that we focus on always having a working product – a sail-able boat – and break the project into smaller tasks that can be worked from start to finish in short, manageable pieces of time.

Approaching the project from that angle, we quickly developed a list of subtasks, prioritized them, and put them up on our makeshift Kanban board.  This planning was so intuitive and rewarding on its own that we did the same for other projects we want to tackle before April.

So stop starting, start finishing, and start providing value quicker for your stakeholders.

Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….

Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others?  Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks.  At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much.  If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In.  I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.

Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups Forum.  In the posting, Ron speculated that architecture could push us toward a particular style of TDD.  That feels right to me.  He also suggested that writing systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures that he’s used to from Smalltalk applications and that complexity might make observing the behavior of objects more valuable.  That idea puzzles me more.

My own TDD style, which is probably somewhere between the Detroit school, which leans towards writing tests that don’t rely on mocks, and London schools, which leans towards using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD.  When I first started out, I was working on what I believe would count as a monolithic application in that my team wrote all the code from the UI back to right before the database drivers.  We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the data for which was in a database, and the actual data that would be displayed was stored across multiple tables.  It was really quite painful to try to get all the data set up correctly and we had to keep a lot more stuff in mind when we were trying to focus on getting the configurable part of the UI written.  This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we did eventually light on the idea of putting in a mock object that was much easier to set up than the actual database.  In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code.  Could we have written that code more simply to avoid the complexity to start with?  It was a long time ago and I can’t say whether or not I’d take the same approach now to solving that same problem, but I still do find a lot of advantages in using mocks.

I’ve been wanting to try using a NoSQL database and this seemed like a good opportunity to both try that technology and, after I read Ron’s post, try writing it entirely outside-in, which I always do anyway, and without using mocks, which is unusual for me.  I started out writing my front-end using TDD and got to the point that I wanted to connect a persistence mechanism.  In a sense, I suppose the simplest thing that could possibly work here would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database.  (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”)  I therefore started with one of the NoSQL implementations for .NET.  Everything seemed fine for my first few unit tests.  Then one of my earlier tests failed after my latest test started passing.  Okay, this happens.  I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again.  I backed out the last test I’d written, too.  Now the failing test passed but a different one failed.  After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly without doing a lot of research into it) worked asynchronously and it seemed that I’d just been lucky with timing before the tests started randomly failing.  Okay, this is the point at which I’d normally turn to a mocking framework and isolate the problematic stuff to a single class that I could either put the effort into unit testing or else live with it being tested through automated customer tests.

Because I felt more strongly about experimenting with writing tests without using mocks than with using a particular NoSQL implementation, I switched to a different implementation.  That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock into an individual class that hides the details of how the data is set up.  Had I been following that precept now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree.  The interesting thing here was that I had to radically change both the test and the production code to change the backing store.  As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all.  This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code and focusing on one thing at a time instead of having to deal with the whole thing at once.  I also ended up giving up on the NoSQL implementation for a more familiar relational database.
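
For what it’s worth, here is the kind of helper I have in mind when I talk about isolating mock setup in a single class. It is only a sketch: the ICustomerStore interface and Customer type are hypothetical, and the setup calls assume a Rhino Mocks-style API like the snippets shown further down.

using Rhino.Mocks;

// Hypothetical data-access interface the front-end code depends on.
public interface ICustomerStore
{
    Customer GetById(int id);
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Helper that hides how the fake persistence layer is set up, so the
// front-end tests only say "a store containing these customers".
public static class FakeCustomerStore
{
    public static ICustomerStore Containing(params Customer[] customers)
    {
        var store = MockRepository.GenerateStub<ICustomerStore>();
        foreach (var customer in customers)
        {
            var c = customer; // avoid modified-closure surprises in older C#
            store.Stub(s => s.GetById(c.Id)).Return(c);
        }
        return store;
    }
}

// Usage in a test:
// var store = FakeCustomerStore.Containing(new Customer { Id = 42, Name = "Ada" });

With the setup hidden behind one helper, swapping in a different backing store means changing that one class rather than touching every front-end test.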

Image from http://martinfowler.com/bliki/BeckDesignRules.html

So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve also found that they help isolate me from other random elements, like other developers running tests that happen to modify the same database tables that my tests are modifying. I also felt like my tests became much more intention-revealing when I switched to mocks because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about.  This made me realize that the hard part was caused by the mismatch between the way the persistence mechanism (either a relational database or the document-oriented NoSQL database that I tried) represented the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file or used an object-oriented database instead of a document-oriented database, that complexity would go away.  That’s for a future experiment, though.  And, even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.

Ron also worried that the integration between the different components is not tested when using mocks.  As Ron puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.”  I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model.  (This was back when one did have to write such code oneself.)  It was just very easy to mistype names on either side and not realize it until actual user testing.  This is actually the problem that led us to start doing automated customer testing.  Although the automated customer tests don’t always have as much detail as the unit tests, I feel like they alleviate any concerns I might have that the wrong things are wired together or that the wiring doesn’t work.

It’s also worth mentioning that I really don’t like the style of using mocks that just checks whether a method was called rather than whether it was used correctly.  Too often, I see test code like:

mock.Stub(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything)).Return(0);

mock.AssertWasCalled(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything));

I would never do something like this for a method that actually returns a value.  I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method.  The only time I’ll resort to asserting a method was called (with all the correct parameters), is when that method exists only to generate a side-effect.  Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than checking behavior.  For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters.  Lately, though, with Log4Net, I’ve been finding that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
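
Here is a rough sketch of that memory-appender approach. The OrderProcessor and Order types are made up for illustration, and the test assumes NUnit plus Log4Net’s MemoryAppender; the point is that the assertion inspects logged state rather than verifying that a logging method was called.

using System.Linq;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using NUnit.Framework;

// Hypothetical code under test that logs a warning as a side-effect.
public class Order { public int LineCount; }

public class OrderProcessor
{
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(Order order)
    {
        if (order.LineCount == 0)
            Log.Warn("Received an empty order");
    }
}

[TestFixture]
public class OrderProcessorLoggingTests
{
    [Test]
    public void Logs_a_warning_when_the_order_is_empty()
    {
        // Route log output to an in-memory buffer instead of mocking the logger.
        var appender = new MemoryAppender();
        BasicConfigurator.Configure(appender);

        new OrderProcessor().Process(new Order());

        // Check state (what ended up in the log) rather than which methods were called.
        LoggingEvent[] events = appender.GetEvents();
        Assert.IsTrue(events.Any(e =>
            e.Level == Level.Warn &&
            e.RenderedMessage.Contains("empty order")));
    }
}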

In his Forum posting, Ron is surely right in saying about the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.”  My own style has certainly changed over the years and I hope it will continue to adapt to the circumstances in which I find myself working.  Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to get set up correctly.  Hopefully I’ll also be able to use mocks less, as I find more projects that allow me to avoid the impedance between the code’s concept of the model and that of external systems.

Uber. Eight years ago, the company did not exist and the word was simply a rarely used adjective of German origin meaning “ultra”, like an uber intellectual. Today, Uber has become one of the most successful startups in history and the word has become a commonplace verb in English parlance. Transcending to “verb” status puts Uber in the highly exclusive class of innovative business disrupters like Google and FedEx whose business names and processes have become synonymous with an action that didn’t previously exist but is now done on a regular basis.  Who today wouldn’t understand what actions you had taken if you said, “I quickly googled the address for the nearest drop-off spot and uber-ed over there so I could fed-ex my package out on time”?

Uber owns no cars, has no drivers, and has minimal fixed assets. Instead, they created incredibly user-friendly software that improves aspects of the taxi ride industry we didn’t know needed improvement. Not surprisingly, the full legal name is Uber Technologies, Inc.  While the only technology typically found in traditional taxi cabs is the decades-old meter clicking away the increments of the cost of your ride, the Uber software provides new value to both the driver and the customer with useful information such as the location of both the driver and the customer, time estimate for pick-up, exact pricing, car options, driving directions, and much more.

By creating this simple way to get a ride, Uber has reached another pinnacle accomplishment whereby the creativity of its business model has become a noun: uber-fication.  According to Dr. Paul Marsden in his Reading Room article, The Uberfication of Everything, “…the real genius of Uber lies in a deep understanding of convenience – what it is and why it matters.  That’s what Uberfication is all about; pivoting your business to deliver on a core under-exploited consumer need – convenience”.

One thing that every startup has is a dream and a vision. But, let’s be honest, that simply isn’t enough to successfully build a booming new business like Uber. You need the right partners, you need money, and you need passion for the project at hand. We believe that we can help in all these areas, which led us to formalize an offering exclusively for startups.

When I formed CC Pace nearly 36 years ago, I was driven by a vision of a new model for a consulting company – one where integrity and the client’s best interest were ingrained in the firm’s culture and successful delivery could almost be guaranteed by the quality, drive and teamliness of the employees who worked there. While my dream may not have been as wide-reaching as Uber’s, when I think back to that time, I just remember energy, excitement and that ‘anything is possible’ feeling. Over the years, we’ve been very fortunate to work with clients in all phases, from startups to Fortune 500 organizations—all of which we value a great deal. I get excited to work with clients of all sizes, but there is something about working with startups that brings about an energy that you can’t replicate in other environments. Being a part of someone else’s vision coming to life brings me right back to where I stood over 35 years ago and is an environment in which I’ve seen our project teams thrive.

Our experience working with startups combined with our project teams’ passion has led us to formalize an offering to help startups get off the ground with the right technology. To enable us to work on more of these types of efforts, we are officially launching a new risk/reward program for startups. Here, we combine our technical prowess with our business acumen to deliver a software component that fully and effectively supports the startup’s vision.  The premise of our offering is to build the technological platform for your business with less cash required. In exchange for this discount, we agree upon a fair share of some downstream benefits of your startup reflective of the risk we take.

If you like the idea of maintaining control of your vision while paying less up-front to get the results you need, then I would love to hear from you. Interesting companies with challenging technology needs have been a driver for us for over 35 years. For this reason, we are confident that we have the ability to help better enable your dream. After all, it’s only a matter of time before the next “Uber” shocks the world.

For more information on the risk/reward program, check out our offering here.

In my personal experience working on various software development projects, the concept of team energy often appears to be either undervalued or benignly ignored by management teams. The reasons are many. First of all, the term may be confused with “team velocity” which is a relative measurement of a team’s average output or productivity. If the velocity appears to be at either predictable or positive levels, the management team may choose to believe that team energy is also at satisfactory levels. Organizations may attempt to boost employee morale by putting together team-building exercises and outing events. This macro approach may result in generating a perceived positive effect on team energy, thus obscuring the need for focus on individual teams.  So, what is team energy and why should managers consider devoting some attention to it?

When thinking in terms of team energy, one can look at it as building credit with each individual team. One can also look at it through the analogy of a rainy day fund. From a management standpoint, it is important to keep team energy in positive territory. This helps ensure that the team will likely empower themselves to exceed expectations, as well as step up during times of crisis or high-pressure situations. I have worked in high-energy teams, in which members voluntarily pushed themselves past regular working hours to produce deliverables. These cases did not involve any direct increase in compensation or promotion. People naturally wanted to succeed because they possessed enough energy to do so. I have also witnessed the opposite, where a team’s energy was low, deliverables were in a perpetual state of tardiness and the backlog was steadily accruing bugs. Developers and testers did not feel empowered to succeed and entered a cycle of doing the absolute bare minimum to “get the management off their back”.

Science behind building teams 
Agile methodologies, whether Scrum or Kanban, prescribe various techniques that are focused on continuous improvement that may positively affect team energy. Regardless of whether an organization has truly embraced Agile, it is difficult to find managers that would oppose efforts in improving a team’s processes of delivering faster and at a higher value. After all, who is against a boost in productivity? There is a hidden psychological component to continuous improvement that has a causal effect on team energy. This component is more associated with the experience of individual team members.

Studies of team dynamics such as ones conducted by MIT’s Human Dynamics Laboratory, and documented in Harvard Business Review suggest that there is a science to building high performing and high energy teams. One of the keys is to focus on the human brain and social dynamics of a group. Your teams may be composed of introverts as well as extroverts and a wide range of personalities, but there is a common factor that seems to persist. The studies show that humans feel good when they achieve their goals and overcome obstacles. A human brain actually rewards its owner with extra levels of dopamine when a goal is achieved. When the team feels good more often than not, the team energy goes up. When the opposite occurs, team energy goes down. Therefore, focusing on small achievable goals not only helps the organization to shift focus of deliverables, but it also fosters this psychological benefit of achievement for each individual team member.

Measurements 
An organization may choose to periodically measure team energy. One way to achieve such measurement is through anonymous surveys. Usually this is done at a more enterprise level to gauge the overall organization energy. There is certainly value in doing that, but the effort is not focused and may not necessarily apply to teams. Small teams may not produce very accurate results. There may be disincentives to be frank when answering a survey because team members may feel singled out and fear reprisals from management. In addition, more introverted team members may choose not to “rock the boat”.  A more effective and team-focused approach is to have an Agile coach periodically take team energy measurements. An opportune time may be during team retrospectives, when a team is usually more receptive to be candid.  Most importantly, these measurements do not need to be secretly stored in a manager’s vault but should be shared with the team. Adding transparency to the team building and management process will not only increase team energy, but also foster leadership skills among the more proactive and extraverted team members.

Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.

I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.

The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.

(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)

What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.

But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.

One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.

As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?

We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.

Recently, I attended a meetup for Loudoun’s Tech Startups in Ashburn, VA. It was a great opportunity to discuss ideas in various stages of development, as well as the resources available to bring these ideas to market. It was encouraging to see so many motivated entrepreneurs share their experiences in a local setting, with Loudoun showing its promise as a business incubator.

Michelle Chance from the Innovative Solutions Consortium gave a great speech about the services provided by her organization, such as “hard challenge” events, think-tank style meetings and student/recent graduate mentoring.

The hard challenge events seemed particularly interesting to me since they provide a collaborative environment for solving difficult technology problems. Organizations compete for the best solution and receive awards for most disruptive and innovative technologies.

Another sponsor of the event was the Mason Enterprise Center, which provides consultation and training to small business owners and entrepreneurs. They have regional offices in Fairfax, Fauquier, Loudoun and Prince William counties.

We attended this event because of our past experience developing collaborative solutions for startups. We’ve found that the Agile development philosophy fits nicely with the entrepreneurial spirit of organizations that want to quickly build a product that provides value. In fact, principles such as the minimum viable product and continuous deployment are core to the Lean Startup philosophy championed by Eric Ries. This method encourages startups to build a minimal set of high-value features that can be released quickly. With frequent releases, a company can immediately begin collecting and responding to customer feedback.

If you’re at any stage in the startup process and have a technical idea that you want to explore or expand, there are plenty of resources available in the NoVa area. Also, I encourage you to attend one of the various meetups for startups, such as the Loudoun Tech Startups group.

At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.

Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.

How have you used Scrum outside its “natural” setting in software development?

Senior IT managers starting a new project often have to answer the question: build or buy? Meaning, should we look for a packaged solution that does mostly what we need, or should we embark on a custom software development project?

Coders and application-level programmers also face a similar problem when building a software product. To get some part of the functionality completed, should we use that framework we read about, or should we roll our own code? If we write our own code, we know we can get everything we need and nothing we don’t – but it could take a lot of time that we may not have. So, how do we decide?

Your project may (and probably does) vary, but I typically base my decision on distinguishing between infrastructure and business logic.

I consider code to be infrastructure if it exists to support the technology required to implement the product. On the other hand, business logic is core to the business problem being solved. It is the reason the product is being built.

Think of it this way: a completely non-technical Product Owner wouldn’t care how you solve an infrastructure issue, but would deeply care about how you implement business logic. It’s the easiest way to distinguish between the two types of problems.

Examples of infrastructure issues: do I use a relational or non-relational database? How important are ACID transactions? Which database will I use? Which transactional framework will I use?

Examples of business logic problems: how do I handle an order file sent by an external vendor if there’s an XML syntax error? How important is it to find a partial match for a record if an exact match cannot be found? How do you define partial?

Note that a business logic question could be technical in nature (XML syntax error) but how you choose to solve it is critical to the Product Owner. And a seemingly infrastructure-related question might constitute business logic – for example, if you are a database company building a new product.

After this long preamble, finally my advice: Strongly favor using existing frameworks to solve infrastructure problems, but prefer rolling your own code for business logic problems.

My rationale is simple: you are (or should be) an expert in solving the business logic problems, but probably not the infrastructure problems.

If you’re working on a system to match names against a data warehouse of records, your team knows or can figure out all the details of what that involves, because that’s what the system is fundamentally all about. Your Product Owner has a product idea that includes market differentiators and intellectual property, making it very unlikely that an existing matching framework will fulfill all requirements. (If an existing framework does meet all the requirements, why is the product being developed at all?)

Secondly, the worst thing you can do as a developer is to use an existing business logic framework “to make things simple”, find that it doesn’t handle your Product Owner’s requirements, and then start pushing back on requirements because “our technology platform doesn’t allow X or Y”. For any software developer with professional pride: I’m sorry, but that’s just weak sauce. Again, the whole point of the project is to build a unique product. If you can’t deliver that to the Product Owner, you’re not holding up your end of the bargain.

On the other hand, you are very likely not experts on transactional frameworks, message buses, XML parsing technology, or elastic cloud clusters. Oracle, Microsoft, Amazon, etc., have large expert teams and have put their own intellectual property into their products, making it highly unlikely you’ll be able to build infrastructure that works as reliably or is as bug-free.

Sometimes the choice is harder. You need to validate a custom file format. Should you use an existing framework to handle validations or roll your own code? It depends. It may not even be possible to tell when the need arises. You may need to use an existing framework and see how easy it is to extend and adapt. Later, if you find you’re spending more time extending and adapting than you would spend rolling your own optimized code, you can change the implementation of your validation subsystem. Such big changes are much easier if you’ve consistently followed Agile engineering practices such as Test-Driven Development.
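
One way to keep that later swap cheap is to hide the choice behind a small interface that your own code controls. Here’s a minimal Java sketch of the idea – the type names (FileValidator, FrameworkBackedValidator, HandRolledValidator) are hypothetical, and the framework call is only a placeholder, not any real library’s API:

```java
import java.util.List;

// The subsystem boundary the rest of the application depends on.
interface FileValidator {
    ValidationResult validate(String fileContents);
}

// A simple value object for results.
record ValidationResult(boolean valid, List<String> errors) {}

// First implementation: delegate to an existing framework (represented here by a
// placeholder). If extending the framework starts costing more than it saves,
// replace this class without touching any callers.
class FrameworkBackedValidator implements FileValidator {
    @Override
    public ValidationResult validate(String fileContents) {
        // ... call into the third-party validation framework here ...
        return new ValidationResult(true, List.of());
    }
}

// Later alternative: hand-rolled rules, swapped in behind the same interface.
class HandRolledValidator implements FileValidator {
    @Override
    public ValidationResult validate(String fileContents) {
        List<String> errors = fileContents.isBlank()
                ? List.of("file is empty")
                : List.of();
        return new ValidationResult(errors.isEmpty(), errors);
    }
}
```

Because callers depend only on the interface, the tests you write while test-driving those callers stay valid no matter which implementation sits behind it.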

As always, apply a fundamental Agile principle to any such decision: how can I spend my programming time generating the most business value?

You are embarking on a new software development project. Presumably, if it’s a Scrum project, a team is assembled, space and workstations for the team room are configured and the first sprint is right around the corner. The time has come for the initial gathering of project team members and stakeholders – the Project Kickoff. The Project Kickoff meeting can range from an hour to several days, and provides the opportunity for the project team, and any associated stakeholders, to come together and officially begin (i.e., ‘kick off’) a project. The ultimate goal is for everyone to leave this meeting on the same page, with a clear understanding of the project’s structure and goals.

One of the more common kickoff meeting agenda items for Scrum teams today is establishing the product vision, or the product vision statement. Many definitions and examples of product vision statements are available with a simple internet search; a solid summary of the product vision can be found here in a 2009 member article written by Roman Pichler for the Scrum Alliance:

‘The product vision paints a picture of the future that draws people in. It describes who the customers are, what customers need, and how these needs will be met. It captures the essence of the product – the critical information we must know to develop and launch a winning product.’

Here is where I have to come clean – I personally never thought much about the importance or impact of product vision statements until recently. It seemed to me that on many development projects, the product vision would simply exist as a feel-good placeholder or as a feeble attempt to energize the team: “We’re going to build this application and SAVE THE WORLD!”

I felt that the product vision statement was a guise for what seemed like the customary objective of a project, which – in an admittedly negative opinion – was to provide a solid return on someone’s investment. Bluntly stated: “If we are successful, someone I’ve never met is going to make a lot of money on this product.” I observed that as a project progressed and the team became preoccupied with day-to-day tasks, reality would eventually kick in. At a certain point, the vision – which we were so excited about several weeks ago – usually became an afterthought.

Eventually, a point is reached several sprints into a project where the team’s product vision statement is scribbled on a large post-it sheet, taped to the wall in the team room, collecting dust – never to be spoken of again. In the past, my observation was that by the time our project wrapped up, we wouldn’t always take the time to measure project success against our original product vision statement. (In fact, many team members were probably already working towards achieving a new product vision on a completely new project.) We didn’t always ask the questions: Did we accomplish our mission? Did we meet all of our objectives? If not, why? (Some of these topics would assuredly be discussed in a Project Retrospective-type meeting, but in today’s reality, that isn’t always the case.)

Fortunately, times have changed. Several recent and personal discoveries (through complete happenstance) have improved my outlook; you could now say that I have a ‘newfound respect’ for the product vision statement. This inspiration is a result of successfully delivering on several development projects in the education research field. CC Pace has had the fortunate opportunity to partner with the National Student Clearinghouse (NSC) on several of their software development initiatives since 2010. Our first project supported NSC’s Research Center, whose mission is defined as ‘providing educators and policymakers with accurate longitudinal data on student outcomes to enable informed decision making.’

In June of 2010, CC Pace began the journey with NSC to redesign the StudentTracker for High Schools application (STHS 2.0), which contributes to NSC’s aforementioned mission as ‘a unique program designed to help high schools and districts more accurately gauge the college success of their graduates’. We began the project with an informative and efficient Kickoff meeting and established our product vision statement. Truth be told, I didn’t put much thought into it. After all, I was working with a new client on a newly-formed team, and Sprint #1 was approaching fast.

With all of that in mind, the following is a high-level summary (i.e., not verbatim and with some information added for clarification) of our product vision statement for the StudentTracker for High Schools 2.0 project from June, 2010:

NSC’s business goal is to leverage its unique assets and capabilities to provide the secondary-postsecondary longitudinal information required to inform the secondary education system in its efforts to increase rates of college readiness.

Redesigning the STHS 2.0 application will enhance the capacity and scalability to provide integrated secondary-postsecondary education information – in a timely and efficient fashion – to the maximum number of secondary customers possible.

The objectives to meet this business goal are as follows:

  • Enhance STHS reports to include more insightful and actionable data resulting in more valuable and accurate information available to secondary schools and districts
  • Provide for a more efficient file management process, reducing the turnaround time for data collection, processing and report distribution
  • Increase data collection and storage capacity, allowing for more robust reports
  • Improve NSC matching algorithms to enhance data quality and add reliability
  • Design, configure and implement a robust set of longitudinal reports stratified by school type, demographics, gender, academics and degree

So, there it was. Our product vision statement was posted on the wall for all to see. As anyone starting out on a new software development project can attest, I still had some questions: Where is all of this leading us? Will we succeed? Will this application – which we are completely redesigning from scratch – launch successfully a year from now?

Undoubtedly these were realistic questions and concerns. At the same time, however, I began to realize that I was working as a member of a team on a development project with a realistic, measurable and highly-motivational product vision statement. I thought at the time that if we truly achieve our vision and successfully implement STHS 2.0 in the next year, our product will have a profound impact on educational research and potentially improve the college success rates of millions of high school and college students for years to come.

Fast-forward to 2015 – I can proudly say that I have seen several firsthand accounts demonstrating that we did indeed achieve the product vision that we established several years ago. The intriguing element of this discovery is that I never personally set out to measure whether or not we achieved our vision as a result of successfully delivering STHS 2.0. After all, this was over five years ago and I have worked on many different projects in that time. Instead, I discovered the answer to that question completely by chance, and on more than one occasion. Five years later, I came to the realization that our team’s product vision had indeed become a reality, and it was a really great feeling.

Check back for a follow-up post for the recent chain of events validating that our project’s product vision statement truly became a reality – more than five years after it was established.

I used to attend Agile conferences pretty frequently, but at some point I got burned out on them; the last one I attended was a 2007 conference in Washington, DC.  This year, when the Agile Alliance conference returned to the DC region, I decided it was time to give them a try again.

It’s interesting to see how things have changed since I last attended an Agile conference.  Agile 2015 felt much more stage-managed than in previous years, with its superhero party, the keynotes making at least glancing reference to it (the opening keynote, Awesome Superproblems, appears to have been retitled for the theme, since all the references in the presentation were to “wicked” problems instead of “super” problems), and making one go through the vendor area to get to lunch.  It also seemed like there were mostly “experts” making presentations, whereas previously I felt like there were more presentations by community members.  I have mixed feelings about all of this, but on the whole I felt that my time was well spent.  Although I didn’t really plan it, I seem to have had three themes in mind when I picked my sessions: team building, DevOps and craftsmanship.  Today I’ll tell you about my experiences with the team building sessions.

Two of the keynotes supported this theme: Jessie Shternshus’ Individuals, Interactions and Improvisation and James Tamm’s Want Better Collaboration? Don’t be so Defensive.  I’d heard of using the skills associated with improvisation to improve collaborative skills, but the Agile analogy seemed labored.  Tamm’s presentation was much more interesting to me.  I’m not sure he’s aware of the use of the pigs and chickens story in Scrum, but he started out with a story about chickens.  Red zone and green zone chickens, to be precise.  Apparently there are chickens (we’re outside of the Scrum metaphor here, incidentally) that become star egg layers by physically abusing other chickens to suppress their egg production.  These were termed red zone chickens, while the friendly, cooperative chickens were termed green zone chickens.  Tamm described a few unpleasant solutions (such as trimming the chickens’ beaks) that people had tried to deal with the problem, and ended by describing an experiment in which the red zone chickens were segregated from the green zone chickens, with the result that the green zone chickens’ egg production went up 260% while the red zone chickens saw only their mortality rate go up (http://blog.pgi.com/2015/05/what-can-chickens-teach-us-about-collaboration/).  Tamm then went on to compare this to human endeavors, pointing out the signs that an organization might be in the red zone (low trust/high blame, threats and fear, and risk avoidance, for example) or in the green zone (high trust/low blame, mutual support and a sense of contribution, for example), while explaining that no organization is going to be wholly in either zone.  He wound up by showing us ways to identify when we, as individuals, are moving into the red zone and how to try to avoid it.  This was easily the most thought-provoking of the three keynotes, and I picked up a copy of Tamm’s book, Radical Collaboration, to further explore these ideas.  The full presentation and slides are available at the Agile Alliance website (www.agilealliance.org).

In the normal sessions, I also attended Lyssa Adkins’ Coaching v. Mentoring, Jake Calabrese’s Benefiting from Conflict – Building Antifragile Relationships and Teams, and two presentations by Judith Mills: Can You Hear Me Now?  Start Listening Instead and Emotional Intelligence in Leadership.  Alas, it was only in hindsight that I realized that I’d read Adkins’ book.  In this presentation, she engaged in actual coaching and mentoring sessions with two people she’d brought along specifically for the purpose.  Unfortunately, the sound in the room was poor and I feel like I lost a fair amount of the nuance of the sessions; the one thing I came away with was that mentoring seems like coaching, but with the mentor also able to provide more detailed information to the person being mentored.

Jake Calabrese turned out to be a dynamic and engaging speaker and I enjoyed his presentation and felt like it was useful, but that was before I went to Tamm’s keynote on collaboration.  I did enjoy one of the exercises that Calabrese did, though. After describing the four major “team toxins”: Stonewalling, Blaming, Defensiveness and Contempt, he had us take off our name badges, write down which toxin we were most prone to on a separate name badge, and go and introduce ourselves to other people in the room using that toxin as our name.  Obviously this is not something you want to do in a room full of people that work together all the time, but it was useful to talk to other people about how they used these “toxins” to react to conflict.  In the end, though, I felt that Calabrese’s toxins boil down to the signs of defensiveness that Tamm described and that Tamm’s proposals for identifying signs of defensiveness in ourselves and trying to correct them are more likely to be useful than Calabrese’s idea of a “Team Alliance.”

The two presentations by Judith Mills that I attended were a mixed bag.  I thought the presentation on listening was excellent, although there’s a certain irony in watching many of the other attendees checking their e-mail, being on Facebook, shopping, etc., while sitting in a presentation about listening (to be fair, there was probably less of that here than in other presentations).  Mills started by describing the costs of not listening well and then went into an exercise designed to show how hard listening really is: one person would make three statements and their partner would then repeat the sentences with embellishments (unfortunately, the number of people trying this at once made it difficult to hear, never mind listen.  The point was made, though).  We then discussed active listening and the habits and filters we can have that might prevent us from listening well and how communication involved more than just the words we use.  This was a worthwhile session and my only disappointment was that we didn’t get to the different types of question that one might use to promote communication and how they can be used.

Mills’ presentation on Emotional Intelligence in Leadership, on the other hand, was not what I anticipated.  I went in expecting a discussion on EI, but the presentation was more about leadership styles and came across as another description of “new” leadership.  It would probably be useful for people who haven’t experienced or heard about anything other than Taylorist scientific management, but I didn’t find anything particularly new or useful to my role in this presentation.

Notes from Agile 2015 Washington, D.C. 

Having lived in the Washington DC area for over 25 years, I presumed that the audience at Agile 2015 in Washington DC would consist primarily of people working in the public sector, given our geographical proximity to a long list of federal agencies. It was not unrealistic for me to expect that the speakers at the conference might tailor their presentations and discussions to this type of audience. The audience actually turned out to be quite diverse, rendering my assumptions inaccurate. However, I could not help but feel somewhat validated after listening to the first keynote speaker. Indeed, the opening presentation by Luke Hohmann, entitled “Awesome Super Problems”, focused on tackling “wicked problems” such as budget deficits and environmental challenges. Wicked problems, as described by Mr. Hohmann, are not technical in nature and cannot be solved by small Agile teams of 6-8 people. These problems deal more with strategic decision-making that may result in long-term consequences, intended as well as unintended. They impact millions of people and they require broad consent as well as governance. Hailing from San Jose, California, Mr. Hohmann discussed how implementation of Agile methodologies helped the city tackle some “wicked problems” such as a budget deficit of 100 million dollars.

Planning and Executing
Solving major problems such as budget shortfalls generally requires a great deal of collaboration between stakeholders with competing priorities. Mr. Hohmann stressed that the approach should focus on collaboration over competition, or in Agile terminology, “customer collaboration over contract negotiation”. Easier said than done? Maybe… To help facilitate this collaboration, Mr. Hohmann assembled a conference of public servants such as city planners, police and fire chiefs, and other community leaders. There were several discovery sessions where people could get answers to questions such as how much money would be saved if the fire department removed one firefighter from each team, and what impact on safety that might entail. The group was broken down into small tables of no more than 8 people, each with a facilitator provided by Mr. Hohmann. The group was presented with the list of major budget items and then engaged in budget games, in which participants bid to get their high-priority budget items included in the next budget and negotiated cuts by trading these items. Afterwards, the players held a retrospective and offered feedback.

Retrospective and Outcome
Feedback provided by the participants showed that competition was replaced by collaboration. Participants tended not to get into heated arguments because the games inherently encouraged compromise. Small groups helped cut down on distractions and side conversations. The participants also reported that the game was fair since every player possessed equal bidding power. Interestingly, the final outcome revealed surprising consensus over the budget items, as the majority of the participants ended up prioritizing their items very similarly once the competitive aspect was removed. The “democratic” aspect of the collaborative approach helped eliminate the animosity and partisanship that are not uncommon, as has been witnessed in U.S. federal budget negotiations. This experiment seemed to yield the desired outcome of tackling the imbalanced budget and was touted as a success, attracting the attention of more San Jose residents.

Scaling
To tackle other public issues such as school overcrowding and water shortages, Mr. Hohmann attempted to repeat the process, but the number of participants grew to the point where a large conference hall would have been needed. Rather than strain the budget by renting a giant conference hall, Mr. Hohmann and the local government set up an online forum that accepted a virtually unlimited number of participants, still assigning them to groups of about eight people while increasing the number of groups. The participants played other games such as Prune the Product Tree, which basically involves prioritizing the list of problems the public wants to tackle. The feedback was even more positive, as the majority of participants actually preferred the online setting. They reported even fewer distractions, and the data was easier to collect and aggregate, giving the participants an almost immediate view of how the game was progressing and how the priorities were moving.

Conclusion
One main takeaway I got from Mr. Hohmann’s presentation is one of encouragement to be creative. Mr. Hohmann stressed the importance of focusing on what he described as “common ground for action”. The idea is to focus on generating a list or backlog of actionable items. The process or exercise to get to the desired state can vary, and Agile methodologies can help folks get there, even when tackling wicked problems.

External Links:
http://www.innovationgames.com/budget-games-guide/

http://www.innovationgames.com/prune-the-product-tree/

Further reading:
http://conteneo.co/san-jose-residents-play-4th-annual-budget-games/

In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.

In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.

My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.

At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.

At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:

  1. What did I do since the last stand-up?
  2. What will I do today?
  3. What is in my way?

Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.

You’re listening for:

  • Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
  • Whatever people say in response to (3)

Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?

Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way to building trust. Look for intransigence on either the developer team or with the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes the Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.

Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.

Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Master and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.

Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.

When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.

So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.

If you’re a Product Owner:

  • Understand the value proposition not just of the entire system, but of each of its components and subsets.
  • Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.

If you’re a developer:

  • Adopt Agile Engineering techniques such as TDD, CI, CD, and so on. Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
  • Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
  • Be an active team member. Contribute to the success of the mission. Don’t just take orders.

If you’re a Scrum Master:

  • Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
  • Keep the team focused on the mission.
  • If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.

I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.

In Part 1 of this blog series, I presented a high-level summary of the many different opportunities that Business Analysts (BA) can pursue in an Agile BA role, often resulting in new and exciting experiences. I also highlighted some of the differences between today’s “Agile BA” and the traditional “Waterfall BA”. Finally, I presented an interesting metaphor for today’s Agile BA, one of a Major League Baseball “utility player” (in this case, Jose Oquendo – a player who accomplished the rare feat of playing at every position during his 12-year MLB career.)

Part 2 of this blog entry focuses on the five key functional areas (aka “opportunities”) where I feel Agile BA’s can contribute to, or take outright ownership of, a certain project task or responsibility. A key point to remember is that in order for these opportunities to present themselves, circumstances need to exist which ultimately depend on the dynamics of a project and the makeup of the particular team. Surely we don’t want to step on any toes or introduce team conflict. But if the opportunity is presented and a need has clearly been established, take the bull by the horns and run with these five opportunities:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

PROJECT MANAGEMENT – WORK WITH (OR AS) THE SCRUMMASTER

On today’s project teams, Agile BA’s are usually best-suited to provide project management/ScrumMaster support whenever the need arises. And let’s be real – with PM’s and ScrumMasters constantly being pulled in several different directions, the Agile BA can take on many of the responsibilities associated with this role.

In all likelihood, Agile BA’s have the experience necessary to handle many of the day-to-day responsibilities of a PM or ScrumMaster. The Agile BA can facilitate any of the recurring “events” as needed – the daily scrum (or “standup”), sprint planning, sprint review and sprint retrospective.

In many cases, the Agile BA is as close to (or sometimes even more engaged with) the project’s product backlog as the actual PM. This knowledge of the past, current and future state of the product backlog enables the Agile BA to assist with several project-related artifacts – for example, sprint and release burn-up/burn-down charts.

Successfully leading and delivering on many of these crucial project events and tasks not only contributes to the success of the team, but it also provides valuable on-the-job training. For Agile BA’s who want to eventually move into a PM or ScrumMaster role, this experience is invaluable.

THE “PROXY PRODUCT OWNER” – TODAY’S “PRODUCT OWNER” REALITY

Lately, it seems that a fully-engaged Product Owner is more of a luxury than a norm on today’s Agile projects. Agile BA’s can benefit from the potential subject-matter knowledge gained and added exposure by bridging this gap and acting as a “Proxy Product Owner”. In cases where a truly-dedicated Product Owner is not a reality, no one is better suited to step into this role than the Agile BA.

BA’s usually develop a solid rapport with the customer and can act as a liaison between the customer and the project team whenever needed. And as stated earlier, the Agile BA probably has the most experience working with the project’s user stories and backlog. On fast-moving development projects, many decisions are needed real-time, and waiting for answers from an absent Product Owner usually hinders the team’s progress.

TESTING – AFTER ALL, WHO KNOWS THE STORY BETTER THAN THE BA?

Many Agile teams have already moved to this model, but for teams which have not, here is another opportunity. As previously mentioned, BA’s handing their work over to testers ‘waterfall-style’ is an outdated and inefficient practice. I have seen that 2-3 fully engaged Agile BA’s can efficiently handle the workload of 2-3 full-time BA’s and 2-3 full-time testers. Instead of separating requirements and functional testing tasks for a particular piece of functionality (e.g. user story), Agile BA’s focus on the user story as a whole – from origination (story creation) through implementation (fully-tested, potentially shippable product.) This is not to say that other methods of specialized testing aren’t needed, but in many cases, the best person to drive a user story to “done” is the Agile BA.

Automated testing has also become an invaluable practice on software development projects and provides yet another opportunity for Agile BA’s to contribute in the testing arena. With working knowledge of the current state of the application and product backlog, Agile BA’s have the capability to define and develop a project’s ongoing automated testing suite. Depending on the testing tools employed by the team, BA’s can “pair” with a technical resource (e.g. java developer). In this scenario, the developer handles the technical components of the automated testing suite while the BA designs, builds and manages the suite from a functional perspective.
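
To make that concrete, here is a hypothetical Java/JUnit 5 sketch of what such a pairing might produce: the BA supplies the scenario, the naming and the expected outcome, while the developer wires up the supporting types. All of the names here (StudentRecord, StudentRecordMatcher, MatchResult) are invented for illustration and aren’t taken from any real project.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Minimal supporting types so the example stands alone.
record StudentRecord(String name, String birthDate, String school) {}

enum MatchResult { EXACT, PARTIAL, NONE }

class StudentRecordMatcher {
    MatchResult match(StudentRecord a, StudentRecord b) {
        if (a.equals(b)) return MatchResult.EXACT;
        if (a.name().equals(b.name()) && a.birthDate().equals(b.birthDate())) {
            return MatchResult.PARTIAL;
        }
        return MatchResult.NONE;
    }
}

// The BA-readable part: the test name and assertion state the business rule directly.
class PartialMatchAcceptanceTest {
    @Test
    void sameNameAndBirthDateButDifferentSchoolIsAPartialMatch() {
        StudentRecordMatcher matcher = new StudentRecordMatcher();

        MatchResult result = matcher.match(
                new StudentRecord("Jane Doe", "2001-05-17", "Lincoln HS"),
                new StudentRecord("Jane Doe", "2001-05-17", "Washington HS"));

        assertEquals(MatchResult.PARTIAL, result);
    }
}
```

The value of this split is that the checks read like the acceptance criteria the BA already owns, so the suite stays understandable to the whole team as it grows.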

COLLABORATION – BEFORE YOU KNOW IT, YOU’RE THE PROJECT’S “GO-TO” PERSON

I have included “collaboration” as an opportunity for Agile BA’s because collaboration, indeed, leads to opportunities. In my previous Waterfall experiences, BA’s didn’t collaborate much. In today’s Agile world, the Agile BA can really become a project’s “go-to” person. The collaboration piece also closely ties in with the previously mentioned PM/ScrumMaster and Product Owner opportunities.

For example, facilitating a sprint review or presenting a product demo provides invaluable experience and exposure. Sprint review meetings often include executives and/or stakeholders who otherwise do not participate in the project at all (basically, you see them once every two weeks). Leading these sessions provides direct communication with the “customer” and valuable feedback that can be relayed back to the team. Since entire project teams rarely attend these informational sessions, the team will start looking to you to provide the important feedback that we all value on Agile projects. Personally, I have always looked forward to returning to the team room and have always appreciated team members asking, “so, how did it go!?” after each sprint demo. At the same time, I am always glad to be able to provide that feedback to the team, which we can use for future success.

DOCUMENTATION – CHANGE DOCUMENTATION FROM A TEDIOUS TASK TO A VALUABLE COMMODITY

Even in today’s Agile world, most software development projects require some essential “dirty work”. The BA role has certainly evolved, but we should not completely abandon our roots. While we’ve all heard repeatedly (sometimes to our detriment) that the Agile Manifesto preaches valuing working software over comprehensive documentation, certain documentation can be critical to the success of projects.

I have seen that, if applied effectively, this basic Agile tenet not only reduces redundant documentation, but also helps teams focus on where documentation actually adds value to a project. Instead of documenting a requirement or process as part of an extensive list of deliverables promised six months ago (which will never be read or will become irrelevant), we document exactly what is needed, today. Most likely, information and processes which need to be documented aren’t even known at the outset of the project. Because writing is a core skill possessed by most BA’s, Agile BA’s are well-positioned to accomplish many of the documentation deliverables needed over the duration of a project.

NOW, GET TO WORK!

You’ve just finished reading this blog entry and your new sprint starts next week. It’s not entirely unrealistic that you can begin working in each of the areas mentioned in this blog over the next two weeks (if you haven’t been already). Offer to facilitate your upcoming sprint planning or review session. Ask the PM if you can contribute to the upcoming metrics reports (e.g. sprint/release burndown). If you aren’t already, start testing user stories – start at a high level, ensuring that all acceptance criteria are met. Take a few hours, dive into and familiarize yourself with the product backlog. Offer to facilitate the sprint review and invite stakeholders who have become disengaged. And finally, document that process flow which has been taking up valuable whiteboard real estate for the last several weeks (and really needs to be erased)!

Before you know it, you’re pursuing five completely new opportunities in a matter of two weeks. It might be similar to trying out five completely new positions on a baseball diamond. More importantly, you may even finally be able to explain exactly why Jose Oquendo was such a valuable player to have on a Major League Baseball team.

If you’ve been following this series of blog posts about why so many Agile projects seem to deteriorate into waterfall, you know that I believe failure to completely close iterations (or sprints) is a major reason why. If tasks continually spill over from one iteration into the next, the system is never stable enough to demo, and without demos, the feedback loop between the developer team and the Product Owner is broken. Without a rapid feedback loop, Agile doesn’t exist. The project is just a waterfall project with weekly or biweekly status meetings.

What can developer teams do to ensure that iterations close with functioning software that allows the feedback loop to run smoothly?
The most important thing you as a developer can do is to embrace change, as the motto of the Extreme Programming (XP) movement says. Don’t accept change. Don’t tolerate change. Embrace it. Understand that change is the very basis of the evolutionary and incremental approach to software development that is at the heart of Agile methodologies.

What does this mean?
Well, consider that most useful business functionality in an application takes a long time to develop. A single complete use case may take a month or more. And as you consider all the use cases in the system, you begin to see patterns emerging. Perhaps use case 1 requires a way to validate and persist complex incoming data to a database. Use case 20 requires a way to report invalid data to a compliance authority. Use case 35 requires you to deal with the response from the compliance authority. So you begin to think about how you can avoid having to rework what you do for use case 1 to incorporate the later use cases.

This is the road to perdition.

If you want all these use cases to “come together” all at once into a grand spectacle of software delivery, you’ll find it increasingly difficult to show progress along the way. There are too many interlocking pieces that all need to work in order for the whole thing to work. This is how work spills over from one iteration to the next – because the chunks of work you’re taking on are too big.

Instead, embrace change.

Know that some of what you do in use case 1 will likely be reworked for use case 20. This is a good thing, not a bad thing, because it allows you to chunk up your work into smaller pieces that are functional but with limited feature sets.

You may ask: but doesn’t the cost of the rework add up and inflate the project cost?
Yes, a little, but if you diligently use Agile engineering techniques like Test Driven Development (TDD), Continuous Integration (CI), and so on, the cost of change will be greatly reduced, and ultimately the cost will be less than the cost of eliminating the feedback loop. Think of it this way: if you incrementally develop and constantly demo use case 1, the Product Owner may discover that use case 20 needs modification, and that use case 35 goes away. Now you’ve actually reduced cost by using an evolutionary approach. Eliminating the short feedback loop is one of the most expensive things you can do on a software delivery project.

As a member of a developer team, take the iteration planning meetings (IPMs) very seriously. Your focus should be on working with the Product Owner to break down use cases into user stories and tasks that you and your team can complete in an iteration. Keep repeating to yourself: fully functional but not necessarily fully featured. Accept that the feature set is going to grow and evolve, but always keep the system functional so that whatever limited feature set is implemented can be demoed and incorporated into the feedback loop.

For example…
So, let’s take the example of use case 1 above: Validate and persist complex incoming data into our database. Will this fit into a 2-week iteration? Most probably not. So start small. Let’s get some valid data into the database. This needs a couple of tables and some data persistence objects. Will that fit into 2 weeks? Yes, probably, with some time left over. OK, so let’s use the time left over to do 5 of the 120 total validations we will have to do. Decouple the validation from the persistence, because that makes it easier to do each piece – but it’s better software design anyway. How do we report the validation errors with no UI yet? How about writing them to a file? Sure, you’ll end up throwing away the file-writing code, but once again, you’re practicing better software design. You’ve separated performing the validations from reporting the validations.  And you can immediately start demoing your validations to the Product Owner.

(Hint: don’t simply write directly from the validators to a file. Design a validation reporting interface. Implement the interface first as a file, and later as whatever it actually needs to be. Use mock testing frameworks to test-drive the design. This decoupling and abstraction is good design whether you’re using Agile or not.)
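
For illustration, here’s one way that hint could look in Java, assuming JUnit 5 and Mockito are on the classpath. Every type name (ValidationReporter, FileValidationReporter, OrderValidator) is hypothetical, invented for this sketch rather than taken from any real codebase.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.junit.jupiter.api.Test;

// The long-term abstraction: validators report errors here and never know whether
// the report ends up in a file, a UI, or a compliance feed.
interface ValidationReporter {
    void report(String recordId, String error);
}

// Throw-away first implementation: append errors to a file so there is something
// demonstrable before any UI exists. The interface outlives this class.
class FileValidationReporter implements ValidationReporter {
    private final Path target;

    FileValidationReporter(Path target) {
        this.target = target;
    }

    @Override
    public void report(String recordId, String error) {
        try {
            Files.writeString(target, recordId + ": " + error + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

// The validator depends only on the interface, so it can be test-driven with a mock
// long before the real reporting destination is decided.
class OrderValidator {
    private final ValidationReporter reporter;

    OrderValidator(ValidationReporter reporter) {
        this.reporter = reporter;
    }

    boolean validate(String recordId, String amount) {
        if (amount == null || amount.isBlank()) {
            reporter.report(recordId, "amount is missing");
            return false;
        }
        return true;
    }
}

// Test-driving the validator against a mocked reporter.
class OrderValidatorTest {
    @Test
    void reportsMissingAmount() {
        ValidationReporter reporter = mock(ValidationReporter.class);
        OrderValidator validator = new OrderValidator(reporter);

        assertFalse(validator.validate("order-42", ""));

        verify(reporter).report("order-42", "amount is missing");
    }
}
```

Swapping the file-backed reporter for a UI- or queue-backed one later is then just another implementation of the same interface; the validator and its tests don’t change.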

Later, you will hook up the validations to the persistence. Then you will deal with how to handle invalid data. Then you will deal with showing validation errors via a UI. All the while, you have full functionality but not necessarily full feature sets. Each piece you develop is small, well-tested, and isolated. You then begin to interconnect them (don’t forget integration testing!) to deliver full feature sets.

[My colleague Robert Pantall wrote a blog post on how he used this technique of accepting controlled rework and incremental discovery to rewire his living room. It’s a fun read.]

Fully Functional, Not Necessarily Fully Featured
Keep in mind that you don’t get to unilaterally decide how the use cases get broken down into small chunks of functionality. You work with the Product Owner to do this, so that each small piece serves some business purpose. It’s a back and forth negotiation between you and the Product Owner.

Everything you do in an IPM should support this goal of evolutionary development that fits into the iteration. Be prepared to estimate quickly so you know whether work will fit. Don’t be afraid to say that work won’t fit and needs to be even more finely chunked down. Speak up. Have a dialog. Work with the product team. Keep the feedback loop running continuously.

After the IPM, when you’re working on the stories or tasks, you may run into unexpected trouble that lengthens the story completion time. Surface this immediately to the product team. You may need to break work down even more. That’s fine. Keep reminding yourself: fully functional, not necessarily fully featured.

I do want to emphasize something at this point. An evolutionary and incremental approach to software development is not a license to hack and slash. Saying you’re using incremental development does not give you an excuse to abandon architectural vision, best practices for design, continuous code improvement by refactoring, or coding standards. You must still design a system for its intended lifetime, following enterprise architecture best practices. Adopting incremental development only lets you make some short-term compromises for the benefit of keeping the software continuously functional. As the software evolves, it must evolve towards your architectural vision and continue to maintain its design and code integrity. As I showed above in the example, if you do this correctly, you’ll mostly be writing throw-away implementations to well-designed, long-term interfaces, and later building permanent production-ready implementations of the same interfaces.

I hope I’ve shown you that change is your friend if you truly adopt incremental delivery. It isn’t something to be feared or managed or avoided. Instead, it’s what makes it possible to develop complex business functionality while at the same time allowing the Product Owner to touch and use the software and to offer continuous feedback. Properly managed through correct Agile engineering techniques, the cost of the required rework fades into insignificance compared to the cost savings of the continuous feedback loop.

Don’t fear change. Embrace it.

Next episode, I’ll focus on what the Scrum Master or Project Manager can do to keep Agile projects from descending into waterfall.

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. His .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo has done something that only a handful of players have ever done in the long history of Major League Baseball – he’s played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player and his value obviously wasn’t driven from his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s, as the unique quality he brought to his team was derived from a term referred to in baseball lingo as “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BA’s in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving on to another project entirely, I was told several months down the road, “’Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that many times I never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.

From a communications perspective, BA’s collaborated up-front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager once informed our team during a stressful phase of a project, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing that this directive seemed entirely normal to me at the time.

Communication with testers seemed even rarer – by the very definition, on a Waterfall project, I’ve already passed my knowledge on to the testers – it’s now their responsibility. I’m more or less out of the loop. By the time the specific requirements are being tested, I’m already off onto an entirely new project.

In my personal opinion the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and some days with little or no collaboration with other team members outside of standard team meetings became a day to day, week to week, month to month grind, with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences) and I have found that the role of today’s Agile Business Analyst has been significantly enhanced on teams practicing Agile methodologies and more specifically, Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – I feel that Agile teams have enhanced the role of the Business Analyst by providing opportunities which were never seemingly available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player”, my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span not only functional areas of expertise (e.g., Project Management, Testing) but also the entire lifetime of a software development project (i.e., Project Kickoff to Implementation). In this sense, Agile BAs are not only more valuable to their respective teams, they are more valuable AND for a longer period of time – basically, the entire lifespan of a project. I have seen specifically that Agile BAs can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

In a follow-up blog entry, we’ll elaborate on how Agile BAs can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

I found Mike Cohn’s posting Don’t Blindly Follow very curious because it seems to contradict what many luminaries of the Agile community have said about starting out by strictly following the rules until you’ve really learned what you’re doing.

In one sense, I do agree with this sentiment of not blindly doing something.  Indeed, when I was younger, I thought it was quite clever of me to say things like “The best practice is not to follow best practices.”  But then I discovered the Dreyfus Model of Skills Acquisition, and that made me realize that there’s a more nuanced view.  In a nutshell, the Dreyfus model says that we progress through different stages as we learn skills.  In particular, when we start learning something, we do start by following context-free rules (a.k.a., best practices) and progress through situational awareness to “transcend reliance on rules, guidelines, and maxims.”  This resonates with me since I recognize it as the way I learn things, and I can see it in others when they are serious about something.  (To be fair, there are people who don’t seem to fit into this model, too, but I’m okay with a model that’s useful even if it doesn’t cover every possibility.)  So, I would say that we should start out following the rules blindly until we have learned enough to recognize how to helpfully modify the rules that we’ve been following.

Cohn concludes with another curious statement: “No expert knows more about your company than you do.”  Again, there’s a part of me that wants to agree with this, but then again…  An outsider could well see things that an insider takes for granted and have perspectives that allow them to come to different conclusions from the information that you both share.

I find myself much more in sympathy with Ron Jeffries’ statements in The Increment and Culture: “Rather than change ourselves, and learn the new game, we changed Scrum. We changed it before we ever knew what it was.”  Learning the new game first seems like it would offer a much better opportunity to really learn the basics before we start changing them to suit ourselves.

For the past 3 months we’ve had the pleasure of working with a charitable organization called the Ceca Foundation.

Ceca, which is derived from “celebrating caregivers”, was established in 2013 to celebrate caregiver excellence and “to promote high patient satisfaction by recognizing and rewarding outstanding caregivers”. They do this by providing employees of caregiving facilities with a platform for recognizing and nominating their peers for the Ceca Award – a cash reward given throughout the year. These facilities include rehabilitation centers, hospitals, assisted living centers and similar organizations.

CC Pace partnered with Ceca to build their next generation, customized nomination platform.

This was one of those projects that fills you with pride. First, for the obvious reason – Ceca’s worthwhile mission. Second, the not-so-obvious reason, which was the development process. It was a great example of why I enjoy helping customers build products.

The process

For various reasons, Ceca was under a tight deadline to get the new platform up-and-running for several facilities. The Agile process turned out to be a great fit, as it allowed for frequent customer feedback and weekly deployments to a testable environment. We developed the platform using high-level feature stories, rather than detailed specifications. This allowed the team to concentrate on the desired outcome, rather than getting caught up in the technical details. At times, we had to forgo a software-based solution in favor of a manual process. When you have limited resources and time, you have to make these types of decisions.

In February, after about 3 weeks, the Ceca Foundation launched the new web platform for one facility and then quickly brought on several more. There was immediate gratification for the team as we watched the nominations flood in.

The “feel good” story

What made this project successful and enjoyable at the same time? I’m reminded of the first value in the Agile manifesto – individuals and interactions over processes and tools. Some factors were technical but most were not:

  • a motivated and enthusiastic customer (Ceca)
  • a set of agreed upon features to provide the Minimum Viable Product
  • frequent collaboration with the customer
  • a cloud-hosted environment to provide infrastructure on-demand for testing and live versions
  • a software-as-a-service model that allowed us to quickly bring on new facilities

For me, it was Agile at its most fundamental: discuss the desired features; provide a cost estimate for those features; negotiate priority with the customer; provide frequent releases of working software.

Check out the Ceca Foundation for more information. You can see a demo of the software under “Technology”.

Occasionally, as part of our strategic advisory service, I work with clients who don’t want custom application delivery from us, but rather want me to provide advice to their own Agile development teams. Many of them don’t need a lot of help, but perhaps the single issue I observe most often is that iteration (or sprint, in Scrum terminology) planning meetings (IPMs) don’t go well. Rather than being an interactive exchange of ideas and a negotiation between developers and product owners for the next iteration, I observe that the IPMs become 2-week status meetings that don’t accomplish much. The developer team doesn’t have much or anything to demo, there’s little feedback from the product owner, and everyone just routinely agrees to meet in two weeks to go through the same thing again.

One of the main reasons for these lackluster IPMs is the failure to close tasks at iteration boundaries. If the developer team can’t close tasks at iteration boundaries, then the product can’t be usefully demoed, which means the product owner can’t offer any feedback. This isn’t any form of Agile – it’s just waterfall with 2-week status meetings.

Failure to close tasks at iteration boundaries has other implications too, because what it’s telling you is that stories are too big, and stories that are too big have big consequences.

First, big stories are hard to estimate accurately. Think of estimates as being like weather forecasts: anything beyond 2-3 days is probably too inaccurate to use for planning. The smaller the story, the more accurate the estimate will be.

Second, big stories make it harder to change business priorities. That may seem like a non sequitur, but when developers are working on any story, the system is in an unstable, non-functioning state. To change direction, the developers have to bring the system to a stable state where it can be taken in a different direction. Those stable states are achieved when stories are completed and the system is ready to demo.

An analogy I like to use is to think of the system as a big truck proceeding down a controlled-access highway, like an American interstate. You can exit only at certain points. If you’re heading north and you realize you want to head east instead, you have to wait for the next exit to make that direction change – you can’t just immediately turn east and start driving through the underbrush. The farther apart the exits are, the farther you’re going to have to go out of your way before you can adjust. Think of each exit as being the close of a story. The closer together the exits (the smaller the stories), the sooner you’ll reach an exit (a system steady state) where you can change direction.

In this series of blog posts, I’m going to look at what it takes to ensure task closure at iteration boundaries. Each post will focus on a different team role, and how that role can help ensure that iterations end in an actual delivery of working software that can be demoed in an IPM. I’ll write about what product owners, developers, and project managers (or Scrum masters) can do to reduce story size, ensure product stability and functionality at iteration boundaries, and keep the system always ready to quickly change directions – the very definition of agility.

Watch this space.

I like to attempt minor DIY projects around the house because 1) it saves money, and 2) it’s enjoyable to solve technical issues that don’t involve staring at a computer. Recently, I decided to wire the living room with recessed lights. There was no existing light fixture in the ceiling, so I had to use power from a receptacle and wire in a new switch to control the lights.

I didn’t want to cut through the wall board, run wire and then find out that I didn’t know how to actually wire into the receptacle. So, I decided to approach the problem much like I do when faced with a daunting programming problem – by unit testing.

I pulled out the receptacle, examined the wiring, and scratched out a plan on a piece of paper. Then I wired a light switch and cheap single bulb fixture off of the receptacle. Luckily, it worked without major adjustments or shocks. And it entertained the kids for about 3 minutes.

I then was able to confidently cut through the wall and wire in the switch, solving one piece of the puzzle and allowing me to focus on installing the overhead lights.

As a design technique, Test-Driven Development (TDD) allows us to break down complex systems into smaller, more manageable chunks. Once you’ve written tests to satisfy a cohesive set of requirements, you commit the code and move on to the next set.
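
To make that concrete, here’s a minimal sketch of the idea in C# with xUnit, in the spirit of the wiring experiment: a tiny, invented LightCircuit class (the name and shape are purely illustrative, not from any real project) with one test that pins down a single, cohesive requirement before any more “wall board” gets cut.

    using Xunit;

    // Hypothetical example: the software equivalent of wiring a cheap single-bulb
    // fixture before cutting into the wall. One small, isolated piece of behavior,
    // specified by a test, before the larger feature is attempted.
    public class LightCircuit
    {
        public bool LightIsOn { get; private set; }

        public void FlipSwitch() => LightIsOn = !LightIsOn;
    }

    public class LightCircuitTests
    {
        [Fact]
        public void Light_turns_on_only_after_the_switch_is_flipped()
        {
            var circuit = new LightCircuit();

            Assert.False(circuit.LightIsOn);   // nothing happens until the switch is wired in

            circuit.FlipSwitch();

            Assert.True(circuit.LightIsOn);    // the isolated piece works; now integrate it
        }
    }

Once that smallest piece is green, it can be committed, and the next cohesive chunk attacked the same way.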

A good example of this can be found in the book, Practices of an Agile Developer (Subramaniam, Hunt). One of the practices is described as Attacking Problems in Isolation. As the authors explain:

“Large systems are complicated – many factors are involved in the way they execute. While working with the entire system, it’s hard to separate the details that have an effect on your particular problem from the ones that don’t.”

Isolating the problem is also useful when debugging a system issue that is buried under layers of UI, database and middle-tier abstractions. Remove each layer until you’ve discovered the likely culprit. Or build a simple prototype and isolate the misbehaving module.

It’s easy to get overwhelmed by a complex system when trying to decide where to begin. It may feel like a house of cards, teetering on the verge of collapse with the next interruption. Consider TDD not just as a test fixture, but as a design technique that helps narrow the scope.

And as for wiring a home – keep it simple and remember to cut off the power.

 

Ron Jeffries recently posted an article on writing code for “The Diamond Problem” using TDD in response to an article by Alistair Cockburn on Thinking before programming.  Cockburn starts out:

“A long time ago, in a university far away, a few professors had the idea that they might teach programmers to think a bit before hitting the keycaps. Nearly a lost cause, of course, even though Edsger Dijkstra and David Gries championed the movement, but the progress they made was astonishing. They showed how to create programs (of a certain category) without error, by thinking about the properties of the problem and deriving the program as a small exercise in simple logic. The code produced by the Dijkstra-Gries approach is tight, fast and clear, about as good as you can get, for those types of problems.”

To which Ron responds:

“I looked at Alistair’s article and got two things out of it. First, I understood the problem immediately. It seemed interesting. Second, I got that Alistair was suggesting doing rather a lot of analysis before beginning to write tests. He seems to have done that because Seb reported that some of his Kata players went down a rat hole by choosing the wrong thing to test.

“Alistair calls upon the gods, Dijkstra and Gries, who both championed doing a lot of thinking before coding. Recall that these gentlemen were writing in an era where the biggest computer that had ever been built was less powerful than my telephone. And Dijkstra, in particular, often seemed to insist on knowing exactly what you were going to do before doing it.

“I read both these men’s books back in the day and they were truly great thinkers and developers. I learned a lot from them and followed their thoughts as best I could.”

Ron’s article goes on to solve, with TDD and no particular up-front thinking, the same programming problem that Cockburn solved using what he calls a combination of “the Dijkstra-Gries approach” and TDD.  On the whole, I would tend more towards the pure TDD approach that Ron takes because it got him feedback earlier and more frequently, while Cockburn’s approach, with more upfront thinking, didn’t provide him any feedback until he really did start writing his tests.  If Cockburn had gone down a blind alley with his thinking, he wouldn’t have gotten any concrete feedback on it until much later in the game.

But that’s not what I actually want to think about.  I did read both Dijkstra’s A Discipline of Programming and Gries’ The Science of Programming “back in the day” as well (last read in 1991 and 2001, respectively, although it was a second reading of Gries; I remember finding Dijkstra almost impossible to understand, but I did keep it, so it may be time to try it again), but I didn’t remember the emphasis on up front thinking that both Ron & Cockburn seemed to claim for them.  I dug out my copies of both books and did a quick flip through both of them, and I still feel that the emphasis is much more on proving the correctness of one’s code rather than doing a lot of up-front thinking.  I’d previously had the feeling that there was a similarity between Gries’ proofs and doing TDD.  As I poke around in chapter 13 of Gries’ book, where he introduces his methodology, I find myself believing it even more strongly.

Gries starts out asking “What is a proof?”  His answer?

“A proof, according to Webster’s Third New International Dictionary, is ‘the cogency of evidence that compels belief by the mind of a truth or fact.’  It is an argument that convinces the reader of the truth of something.

“The definition of proof does not imply the need for formalism or mathematics.  Indeed, programmers try to prove their programs correct in this sense of proof, for they certainly try to present evidence that compels their own belief.  Unfortunately, most programmers are not adept at this, as can be seen by looking at how much time is spent debugging.  The programmer must indeed feel frustrated at the lack of mastery of the subject!”

Doesn’t TDD provide that for us, at least when practiced correctly?  Oh, and the first principle that Gries gives in this chapter is: “A program and its proof should be developed hand-in-hand, with the proof usually leading the way.”  Hmm, sounds familiar, no?

Admittedly, Gries does speak out against what he calls “test-case analysis:”

“‘Development by test case’ works as follows.  Based on a few examples of what the program is to do, a program is developed.  More test cases are then exhibited – and perhaps run – and the program is modified to take the results into account.  This process continues, with program modification at each step, until it is believed that enough test cases have been checked.”   

On the face of it, this does sound like a condemnation of TDD, but does it really represent what we do when we practice TDD?  Sort of, but it overlooks the critical questions of how we choose the test cases and the speed at which we can get feedback from them.  If we’re talking about randomly picking a bunch of test cases and getting feedback from them in a matter of days or hours, then I’d agree that it would be a poor way to develop software.  When we’re practicing TDD, though, we should be looking for that next simplest test case that helps us think about what we’re doing.  Let’s turn to Gries’ “Coffee Can Problem” as an example.

“A coffee can contains some black beans and some white beans.  The following process is to be repeated as long as possible.

“Randomly select two beans from the can.  If they have the same color, throw them out, but put another black bean in.  (Enough extra black beans are available to do this.)  If they are different colors, place the white one back into the can and throw the black one away.”

“Execution of this process reduces the number of beans in the can by one.  Repetition of the process must terminate with exactly one bean in the can, for then two beans cannot be selected.  The question is: what, if anything, can be said about the color of the final bean based on the number of white beans and the number of black beans initially in the can?”

Gries suggests we take ten minutes on the problem and then goes on to claim that “[i]t doesn’t help much to try test cases!”  But the test cases he enumerates are not the ones we’d likely try were we trying to solve this with TDD.  He suggests test cases for a black bean and a white bean to start with and then two black beans.  Doing TDD, we’d probably start with a single bean in the can.  That’s really the simplest case.  What’s the color of the final bean in the can if I start with only a single black bean?  Well, it’s black.  And it’s going to be white if the only bean in the can is white.  Okay, what happens if I start with two black beans?  I should end up with a black bean.  Two white beans wouldn’t make me change my code, so let’s try starting with a black and a white bean.  Ah, I would end up with a white bean in that case.  Can I draw any conclusions from this?  
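
For what it’s worth, here is a rough sketch of what that test-case progression might look like in code (C# with xUnit; the CoffeeCan class and the test names are invented for illustration, since Gries works the problem entirely on paper). The simulation simply follows the bean-replacement rules as stated, and each test is one of the “next simplest” cases walked through above.

    using System;
    using Xunit;

    // Illustrative only: a direct simulation of Gries' bean-replacement process.
    public class CoffeeCan
    {
        private int _white;
        private int _black;
        private readonly Random _random = new Random();

        public CoffeeCan(int whiteBeans, int blackBeans)
        {
            _white = whiteBeans;
            _black = blackBeans;
        }

        // Repeats the process until one bean remains and reports its color.
        public string LastBeanColor()
        {
            while (_white + _black > 1)
            {
                bool firstIsWhite = DrawIsWhite();
                bool secondIsWhite = DrawIsWhite();

                if (firstIsWhite == secondIsWhite)
                    _black++;      // same color: both stay out, one black bean goes back in
                else
                    _white++;      // different colors: the white bean goes back in
            }
            return _white == 1 ? "white" : "black";
        }

        // Removes one randomly chosen bean and reports whether it was white.
        private bool DrawIsWhite()
        {
            bool isWhite = _random.Next(_white + _black) < _white;
            if (isWhite) _white--; else _black--;
            return isWhite;
        }
    }

    public class CoffeeCanTests
    {
        [Fact]
        public void A_single_black_bean_ends_black() =>
            Assert.Equal("black", new CoffeeCan(whiteBeans: 0, blackBeans: 1).LastBeanColor());

        [Fact]
        public void A_single_white_bean_ends_white() =>
            Assert.Equal("white", new CoffeeCan(whiteBeans: 1, blackBeans: 0).LastBeanColor());

        [Fact]
        public void Two_black_beans_end_black() =>
            Assert.Equal("black", new CoffeeCan(whiteBeans: 0, blackBeans: 2).LastBeanColor());

        [Fact]
        public void A_black_and_a_white_bean_end_white() =>
            Assert.Equal("white", new CoffeeCan(whiteBeans: 1, blackBeans: 1).LastBeanColor());
    }

Each new case either passes immediately or forces a small change to the code, which is exactly the rapid feedback loop being argued for here.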

I did actually think about those test cases before I read Gries’ description of his process:

“Perhaps there is a simple property of the beans in the can that remains true as the beans are removed and that, together with the fact that only one bean remains, can give the answer.  Since the property will always be true, we will call it an invariant.  Well, suppose upon termination there is one black bean and no white beans.  What property is true upon termination, which could generalize, perhaps, to be our invariant?  One is an odd number, so perhaps the oddness of the number of black beans remains true.  No, this is not the case, in fact the number of black beans changes from even to odd or odd to even with each move.”

There’s more to it, but this is enough to make me wonder if this is really different from writing a test case.  Actually it is: he’s reasoning about the problem, and by extension the code he’d write.  But he is still testing his hypotheses, it’s just in his head rather than in code.  And there I would suggest that TDD, as opposed to using randomly selected test cases, allows us to do that same kind of reasoning with working code and extremely rapid feedback.  (To be fair, I believe this is what Ron was saying, too.  I just want to highlight the similarity to what Gries was saying, while Ron seems to be suggesting more of a difference.)
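
And that rapid feedback extends to checking the hypothesis itself. Continuing the illustrative CoffeeCan sketch above, one additional test can exercise the conjecture the small cases suggest (that the parity of the white-bean count is the invariant deciding the final color) across a whole range of starting configurations in milliseconds, rather than only in one’s head:

    using Xunit;

    // Continues the illustrative CoffeeCan sketch above. The conjecture under test:
    // the parity of the white-bean count never changes, so the last bean is white
    // exactly when the can starts with an odd number of white beans.
    public class CoffeeCanConjectureTests
    {
        [Fact]
        public void Final_color_follows_the_parity_of_the_white_beans()
        {
            for (int white = 0; white <= 10; white++)
            {
                for (int black = 0; black <= 10; black++)
                {
                    if (white + black == 0) continue;   // need at least one bean to start

                    string expected = (white % 2 == 1) ? "white" : "black";
                    Assert.Equal(expected, new CoffeeCan(white, black).LastBeanColor());
                }
            }
        }
    }

Whether or not one accepts that as a proof (Gries certainly wouldn’t), it is a cheap, fast way to check the reasoning before committing to it.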

What might get lost in TDD, at least when it’s not practiced well, is that idea of reasoning about the code.  There’s an art to picking that next simplest test to write, and I suspect that that’s where much of the reasoning really happens.  If we write too much code in response to a single test, we’re losing some of the reasoning.  If we write our tests after the code, we’ve probably lost it entirely.  And that’s something I do believe is lacking in many programmers today, evidenced, as Gries suggests, by the amount of time spent fumbling around in debuggers and randomly adding code “to see what will happen.”  But that’s for another time.