Agile – What is holding us back?

Thoughts on Agile DC 2017
In early February of 2001, a small group of software developers published the Agile Manifesto. Its principles are now familiar to countless professionals, and over the years Agile practices have gained mainstream appeal. Today it is difficult to find a software developer, or even a manager, who has never heard of Agile. Having been a software developer for over 15 years, I find that Agile methodologies embody a very natural way of producing software, so Agile conferences tend to have a “preaching to the choir” effect on me. As I attended the Agile DC 2017 conference, I wanted to remove myself from that bubble, objectively take the pulse of the state of Agile today, and learn about the challenges Agile continues to face in its adoption at the enterprise level.

Basic takeaways
As I listened to various speakers and held conversations with colleagues and even one of the presenters, it became apparent that the main challenges of adopting Agile today no longer deal specifically with software engineering. That generally makes sense, given the amount of literature, combined experience and continuous improvement software development teams have produced as Agile has matured from a disruptive toddler into a young adult. The real struggle appears to be in the adoption of Agile across other business units, not so much in IT. Why is that? I believe the struggle can be attributed to two main causes: organizational culture and, most importantly, a lack of knowledge about team dynamics and human psychology.

Culture
Agile seems to continue to clash with the way business units are organized. Some hierarchical structures may produce very siloed and stovepiped groups that have difficulty collaborating with one another. While this may appear to be an insurmountable problem, there is at least one great example of how Agile organizations have addressed it. We can point to DevOps as a great strategy for integrating and streamlining traditionally separate IT units. Software development teams and IT Operations teams have traditionally existed in their respective silos, often functioning as completely separate units. There was a clear need to streamline collaboration between the two to improve efficiency and increase throughput. As a result, DevOps continues on a successful path. Other business units have analogous dependencies: “Marketing” may depend on “Legal”, or “Product Development” may depend on “Compliance”, and there may be a clear need to streamline collaboration between these units as well. One of my favorite presentations at the conference was by Colleen Johnson, entitled “End to End Kanban for the Whole Organization”. By creating a corporate Kanban board, separate business units were able to collaborate better and absorb change faster. One of the business unit leaders was able to see that unnecessary resources were still being allocated to supporting a product development effort that had been dropped (into the “dropped” row on the board). Anecdotally, this realization occurred during a walk down the hall and a quick glance at the board.

Education
When it comes to software development, Agile has some key properties that make it a very logical and natural fit within the practice. Perhaps its success stories in the software development community are what produced the perception that it applies specifically to software engineering and not to other groups within an organization. It seems to me that one of the main themes of the conference was to help educate professionals who remain agnostic about the benefits of an Agile transformation. I believe that continuous education is critical to getting business unit leaders to embrace Agile at the enterprise level. Some properties of Agile and its related methodologies address important aspects of human psychology, and this is where the education seems to be lacking. There is much focus on improvement in efficiency and production of value, with little explanation of why these benefits are reaped. In simple terms, working in an Agile environment makes people happier. There are “softer” aspects that business leaders should consider when adopting Agile methodologies. Focus on incremental problem solving provides a higher level of satisfaction as work gets completed, which helps trigger the reward system of our brain. Continuous delivery improves morale as staff are able to connect their immediate efforts to the visible progress of their projects. Close collaboration and problem solving create strong bonds between team members and foster shared accountability. Perhaps a detailed discussion of team dynamics and psychology is a whole separate topic for another blog, but I feel these topics are important to managing the perception of the role of Agile at the enterprise level.

Final thoughts
It is important to note that enterprise-wide Agile transformations are taking place. For example, several speakers at the conference highlighted success stories at Capital One. It seems that enterprise-wide adoption is finally happening, albeit very incrementally, but perhaps that is the Agile way.

Learn more about Agile Coaching.

Boy, this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day here in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to the University of Maryland, Baltimore County (UMBC), for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.

Niels left a solid impression in a short amount of time here at CC Pace. In a matter of 10 weeks, Niels interacted with and was able to enhance internal processes for virtually all of CC Pace’s internal departments, including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), and Sales and Marketing. On his last day, I walked Niels around the office, and as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around. Many folks also expressed hope that Niels’ path and ours will cross again soon. In short, he won over a large group of my colleagues in a relatively short amount of time.

Back in June I gladly accepted the challenge of filling the ‘mentor’ role as Niels embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances his way up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing about my own days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, with the assurance of knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…).

Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was providing support to the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.

As I reminisced about my eight months at GE, I couldn’t help but notice how the ‘lessons learned’ I took away from my experience 20+ years ago compared and contrasted with the observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years – and the technology we use every day is clearly the biggest distinction. I would be remiss not to also mention the obvious generation gap – I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’ and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.

In summary, my reminiscing about the late 90’s certainly made my daily music choices easier for a few weeks this summer, and it also led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE, 20-odd years ago, along with how they compared or contrasted with what I observed in the last 10 weeks working side-by-side with our intern, Niels. These observations are based on my role as his mentor and were provided as feedback to Niels in his summary review; they are in no particular order.

Have you had the opportunity to engage in both roles within the intern/mentor relationship, as I have? Maybe your examples aren’t separated by 20 years, but by several? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed: even after 20 years, ‘the more things change, the more they stay the same’.


Is your business undergoing an Agile Transformation? Are you wondering how DevOps fits into that transformation and what a DevOps roadmap looks like?

Check out a webinar we offered recently, and send us any questions you might have!

Recently, I was part of the successful implementation of a project at a large financial institution. The project was the center of attention within the organization, mainly because of the value it added to the line of business and their operations.

The project was essentially a migration project and the team partnered with the product vendor to implement it. At the very core of this project was a batch process that integrated with several other external systems. These multiple integration points with the external systems and the timely coordination with all the other implementation partners made this project even more challenging.

I joined the project as a Technical Consultant at a rather critical juncture, when there were only a few batch cycles left that we could run in the test regions before deploying into production. Having worked on Agile/Scrum/XP projects in the past, and with experience on DevOps projects, I identified a few areas where we could improve, either by enhancing the existing development environment or by streamlining the builds and releases. As with most projects, as the release deadline approaches, the team’s focus almost always revolves around ‘implementing functionality’ while everything else gets pushed to the back burner. This project was no different in that sense.

When the time finally came to deploy the application into production, it was quite challenging in itself: a four-day continuous effort with the team working multiple shifts to support the deployment. At the end of it, needless to say, the whole team breathed a huge sigh of relief when we deployed the application rather uneventfully, even a few hours earlier than we had originally anticipated.

Once the application was deployed to production, ensuring the stability of the batch process became the team’s highest priority. It was during this time that I felt the resistance to any new change or enhancement. Even fixes to non-critical production issues were delayed for fear that they could jeopardize the stability of the batch.

The team dreaded deployments.

I felt it was time to build my case for having the team reassess the development, build and deployment processes in a way that would improve confidence in any new change being introduced. During one of my meetings with my client manager, I discussed a few areas where we could improve in this regard. My client manager quickly got on board with some of the ideas and suggested I summarize my observations and recommendations. Here are a few at a high level:

[Image: a high-level summary of the observations and recommendations]

It’s common for these suggestions to fall through the cracks while building application functionality. In my experience, I have noticed they don’t get as much attention because they are not considered ‘project work’. What project teams, especially the stakeholders, fail to realize is the value in implementing some of the above suggestions. Project teams should not consider this as additional work but rather treat it as part of the project and include the tasks in their estimations for a better, cleaner end product.

“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt from Barbara W. Tuchman, The Guns of August.

In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.

Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.

Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.

Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?

I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.

Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.

First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?

Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.

Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to deliver the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?

I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”

The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.

Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan, and be ready to go to your contingency plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.

Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….

Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others?  Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks.  At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much.  If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In.  I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.

Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups forum.  In the posting, Ron speculated that architecture could push us toward a particular style of TDD.  That feels right to me.  He also suggested that systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures he’s used to from Smalltalk applications, and that the complexity might make observing the behavior of objects more valuable.  That idea puzzles me more.

My own TDD style, which is probably somewhere between the Detroit school, which leans towards writing tests that don’t rely on mocks, and the London school, which leans towards using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD.  When I first started out, I was working on what I believe would count as a monolithic application, in that my team wrote all the code from the UI back to right before the database drivers.  We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the configuration data was in a database, and the actual data to be displayed was stored across multiple tables.  It was really quite painful to try to get all the data set up correctly, and we had to keep a lot more stuff in mind when we were trying to focus on getting the configurable part of the UI written.  This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we did eventually light on the idea of putting in a mock object that was much easier to set up than the actual database.  In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code.  Could we have written that code more simply to avoid the complexity to start with?  It was a long time ago and I can’t say whether I’d take the same approach now to solving that same problem, but I still do find a lot of advantages in using mocks.

I’ve been wanting to try using a NoSQL database, and this seemed like a good opportunity to both try that technology and, after I read Ron’s post, try writing the application entirely outside-in, which I always do anyway, and without using mocks, which is unusual for me.  I started out writing my front-end using TDD and got to the point where I wanted to connect a persistence mechanism.  In a sense, I suppose the simplest thing that could possibly work here would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database.  (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”)  I therefore started with one of the NoSQL implementations for .NET.  Everything seemed fine for my first few unit tests.  Then one of my earlier tests failed after my latest test started passing.  Okay, this happens.  I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again.  I backed out the last test I’d written, too.  Now the failing test passed but a different one failed.  After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly, without doing a lot of research into it) worked asynchronously, and it seemed that I’d just been lucky with timing before the tests started randomly failing.  Okay, this is the point where I’d normally turn to a mocking framework and isolate the problematic stuff to a single class that I could either put the effort into unit testing or else live with being tested through automated customer tests.

Because I felt more strongly about experimenting with writing tests without mocks than about using a particular NoSQL implementation, I switched to a different implementation.  That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock into an individual class that hides the details of how the data is set up.  Had I been following that precept, now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree.  The interesting thing here was that I had to radically change both the test and the production code to change the backing store.  As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all.  This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code, letting me focus on one thing at a time instead of having to deal with the whole thing at once.  I also ended up giving up on the NoSQL implementation in favor of a more familiar relational database.
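To make that precept concrete, here’s a minimal sketch of the kind of setup class I have in mind (all of the type names are invented for illustration):

// All names here are invented for illustration; the point is only the shape.
public class User { public string Name { get; set; } }

public interface IUserStore { void Save(User user); }

// The one class that knows how test data gets into the store. Tests call
// GivenAUserNamed(...) and never learn whether the store behind it is a mock,
// a NoSQL document database, or a relational one, so swapping the backing
// store means changing this one class rather than every test.
public class UserTestData
{
    private readonly IUserStore store;

    public UserTestData(IUserStore store) { this.store = store; }

    public void GivenAUserNamed(string name)
    {
        store.Save(new User { Name = name });
    }
}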

Image from http://martinfowler.com/bliki/BeckDesignRules.html

So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve also found that they help isolate me from other random elements, like other developers running tests that happen to modify the same database tables that I’m modifying. I also felt like my tests became much more intention-revealing when I switched to mocks, because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about.  This made me realize that the hard part was caused by the mismatch between the way the persistence mechanism (either the relational database or the document-oriented NoSQL database that I tried) represented the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file or used an object-oriented database instead of a document-oriented database, that complexity would go away.  That’s for a future experiment, though.  And, even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.

Ron also worried that the integration between the different components is not tested when using mocks.  As Ron puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.”  I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model.  (This was back when one did have to write such code oneself.)  It was just very easy to mistype names on either side and not realize it until actual user testing.  This is actually the problem that led us to start doing automated customer testing.  Although the automated customer tests don’t always have as much detail as the unit tests, I feel like they alleviate any concerns I might have that the wrong things are wired together or that the wiring doesn’t work.

It’s also worth mentioning that I really don’t like the style of using mocks that just checks whether a method was called rather than whether it was used correctly.  Too often, I see test code like:

mock.Stub(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything)).Return(0);

mock.AssertWasCalled(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything));

I would never do something like this for a method that actually returns a value.  I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method.  The only time I’ll resort to asserting that a method was called (with all the correct parameters) is when that method exists only to generate a side-effect.  Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than by checking behavior.  For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters.  Lately, though, with Log4Net, I’ve been finding that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
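Here’s a rough sketch of that state-based style (the Greeter class is invented for illustration, and I’m assuming NUnit alongside the stock Log4Net API):

using System.Linq;
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using NUnit.Framework;

[TestFixture]
public class LoggingTests
{
    // An invented class under test, just to make the sketch self-contained.
    private class Greeter
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(Greeter));

        public void Greet(string name)
        {
            if (string.IsNullOrEmpty(name)) Log.Warn("no name supplied");
        }
    }

    [Test]
    public void WarnsWhenNoNameIsSupplied()
    {
        // Route logging to an in-memory buffer instead of mocking the logger.
        var appender = new MemoryAppender();
        BasicConfigurator.Configure(appender);

        new Greeter().Greet("");

        // Assert on state (the buffer) rather than on which methods were called.
        Assert.IsTrue(appender.GetEvents().Any(e =>
            e.Level == Level.Warn && e.RenderedMessage.Contains("no name")));
    }
}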

In his Forum posting, Ron is surely right in saying about the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.”  My own style has certainly changed over the years and I hope it will continue to adapt to the circumstances in which I find myself working.  Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to get set up correctly.  Hopefully I’ll also be able to use mocks less, as I find more projects that allow me to avoid the impedance between the code’s concept of the model and that of external systems.

At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.

Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.

How have you used Scrum outside its “natural” setting in software development?

Recently, I had one of those rare moments when my son, a 3rd grader, seemed to understand at least part of what I do for work.

He was excited to tell me about the Code Studio site (studio.code.org) that they used during computer lab at school. The site introduces students to coding concepts in a fun and engaging way. Of course, it helps to sweeten the deal with games related to Star Wars, Minecraft and Frozen.

We chose Minecraft and started working through activities together. The left side of the screen is a game board that shows the field of play: your character in a field surrounded by trees and other objects. The right side allows you to drag and drop colorful coding structures and operations related to the game (e.g. place block, destroy block, move forward) in a specific order. When you’re satisfied that you’ve built the program to complete the objective, you press the “Run” button and watch your character move through your instructions.

Anyone familiar with automated testing tools can relate to the joyful anticipation of seeing if your creation does the thing that it’s supposed to do (ok, maybe joy is a strong word, but it’s kind of fun). In our case, we anxiously watched as Steve, our Minecraft Hero, moved through our steps, avoiding lava and creepers along the way. If we failed and ended up taking a lava bath, we tried again. If we succeeded, the site would move us to the next objective, but also inform us that it’s possible to do it in fewer steps/lines of code (never good enough, huh?!).

Together, we were able to break down a complex problem into several smaller steps, which is a fundamental skill when building software incrementally. We also had to detect patterns in our code and determine the best way to reuse them given a limited set of statements and constructs.  For my son, it was a fun combination of gaming and puzzle solving. For me, it was nice to return to the fundamentals of working through logic to solve a problem – no annoying XML configurations or git merges to deal with.
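For readers who haven’t tried the site, a block program boils down to something like the following sketch (the Hero class and its methods are stand-ins I’ve invented, not the real Code Studio API):

using System;

// A made-up stand-in for the game character; each method mirrors one block.
public class Hero
{
    public void MoveForward() => Console.WriteLine("move forward");
    public void PlaceBlock() => Console.WriteLine("place block");
}

public static class BridgeBuilder
{
    public static void Main()
    {
        var steve = new Hero();

        // A first pass drags out the same two blocks three times in a row.
        // Spotting that pattern and wrapping it in a loop is exactly the
        // “do it in fewer lines” reuse the site nudges you toward.
        for (var i = 0; i < 3; i++)
        {
            steve.MoveForward();
            steve.PlaceBlock();
        }
    }
}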

I’m a big supporter of the Hour of Code movement and similar initiatives that expose students to programming, but I feel that there’s often too much emphasis on funneling kids into STEM career paths. This is all well and good for those with the interest and aptitude, but coding also teaches you to patiently and steadfastly work through problems, a skill that can be applied to many different careers. Chris Bosh, the NBA player, even wrote an article about his interest in coding.

I encourage students of all ages to check out the Code Studio site and Hour of Code events: https://hourofcode.com

Like Martin Fowler, I am a long-time Doctor Who fan.  Although I haven’t actually gotten around to watching the new series yet, I’ve been going back through as much of the classic series as Netflix has available.  Starting way back from An Unearthly Child several years ago, I’ve worked my way up through the late Tom Baker period.  The quality of the special effects is often uppermost in descriptions of the classic series, but that doesn’t actually bother me that much.  Truth be told, I feel that a lot of modern special effects have their issues, too.  Sometimes modern effects are good and sometimes they’re bad; the bad effects are bad differently than those in the classic series, but at least the classic series didn’t substitute effects, good or bad, for story quality or acting ability.  To be fair, the classic series also has its share of less than successful stories, but the quality on the whole has remained high enough that I keep going with it.  (In my youth, I stopped watching shortly after Colin Baker became The Doctor, although I can’t remember if I became disenchanted with it or if my local PBS station stopped carrying it.  I’m very curious to see what happens when I get past Peter Davison this time.)

I’ve also been very interested to see the special features that come with most of the discs.  As a rule, I don’t bother with special features, as I find them either inane or annoying (particularly directors talking over their movies), but the features on the Doctor Who discs have mostly been worth my time.  Each disc generally has at least one extra that has some combination of the directors, producers, writers, designers and actors talking pretty frankly about what worked and didn’t work with the serial.  I’ve learned a lot about making a television show at the BBC from these, and also about the constraints, both technological and budgetary, that affected the quality of the effects on the show.  (Curiously, it also brought home the effects of inflation to me.  I’d heard the stories of people going around with wheelbarrows full of cash in the Weimar Republic, but that didn’t have nearly as much impact on me as hearing someone talk about setting the budget for each show at the beginning of a season in the early 1970s and being able to get only half as much as they expected by the time they were working on the last serial of the season.  Perhaps because I do create annual budgets, the scale is easier to relate to than the descriptions of hyperinflation.)

While I am sure there are those who are fascinated by what I choose to get from Netflix (you know who you are 😊), I actually had a point to make here.  I recently watched the Tom Baker serial The Creature from the Pit, which has a decent storyline but was arguably let down by the design of the creature itself.  (There are those who argue it was just a bad story to start with, and those who argue that it was written as an anti-capitalist satire that the director didn’t understand.)  As I watched the special feature in which Mat Irvine, the person in charge of the visual effects including the creature, and some of his crew talked about the problems involved in the serial, I realized that this was a lovely example of why we should value customer collaboration over contract negotiation.  Apparently the script described the creature as “a giant, feathered (or perhaps scaled) slug of no particular shape, but of a fairly repulsive grayish-purplish colour…unimaginably huge.  Anything from a quarter of a mile to a mile in length.”  (Quote from here.)  I suspect someone trying to realize this would have a horrible time of it even in today’s world of CGI effects, but apparently in the BBC at the time you just got your requirement and implemented it as best you could, even though both the director and the designer were concerned about its feasibility and could point to a previous failure to create a massive creature in The Power of Kroll.

Alas, neither the designer nor the director felt empowered enough to work with the writer or script editor (or someone – I’m unclear on who could actually authorize a change) on the practical difficulties of creating a creature with such a vague description, resulting in what we tend to have a laugh at now.  I don’t know what the result would have been if there had been more collaboration in the realization of this creature, but it seems to me that the size and shape of the creature were not really essential to the story.  Various characters say that it eats people, although the creature vehemently denies this, and we see some evidence that it accidentally crushes people, but neither of these ideas requires a minimum size of a quarter of a mile.  It seems like the design could have gone a different way, one that was easier to realize, if everyone involved had really been able to collaborate on the deliverable rather than having each group doing the best that it could in a vacuum.

In my last post, I talked about the interesting Agile 2015 sessions on team building that I’d attended. This time we’ll take a look at some sessions on DevOps and Craftsmanship.

On the DevOps side, Seth Vargo’s The 10 Myths of DevOps was by far the most interesting and useful presentation that I attended. Vargo’s contention is that the DevOps concept has been over-hyped (like so many other things) and that people will soon become disenchanted with it (the graphic below shows where Vargo believes DevOps stands on the Gartner Hype Cycle right now). I might quibble about whether we’ve passed the cusp of inflated expectations yet or not, but this seems just about right to me. It’s only recently that I’ve heard a lot of chatter about DevOps and seen more and more offerings, and that’s probably a good indication that people are trying to take advantage of those inflated expectations. Vargo also says that many organizations either mistake the DevOps concept for plain operations or use the title to try to hire SysAdmins under the more trendy title of DevOps. Vargo didn’t speak to it, but I’d also guess that a lot of individuals are claiming to be experienced in DevOps when they were SysAdmins who didn’t try to collaborate with other groups in their organizations.

[Image: DevOps on the Gartner Hype Cycle]

The other really interesting myth in Vargo’s presentation was the idea that DevOps is just between engineers and operators. Although that’s certainly one place to start, Vargo’s contention is that DevOps should be “unilaterally applied across the organization.” This was characteristic of everything in Vargo’s presentation: just good common sense and collaboration.

Abigail Bangser was also focused on common sense and collaboration in Team Practices Applied to How We Deploy, Not Just What, but from a narrower perspective. Her pain point seems to have been that technical stories weren’t well defined and were treated differently than business stories. Her prescription was to extend the Three Amigos practice to technical stories and generally treat technical stories like any other story. This was all fine, but I found myself wondering why that kind of collaboration wasn’t happening anyway. It seems like common sense to do one’s best to understand a story and deliver the best value regardless of whether the story is a business or a technical one. Alas, Bangser didn’t go into how they’d gotten to that state to start with.

On the craftsmanship side, Brian Randell’s Science of Technical Debt helped us come to a reasonably concise definition of technical debt and used Martin Fowler’s Technical Debt Quadrant to distinguish between different types of technical debt: prudent vs. reckless, and deliberate vs. inadvertent. He also spent a fair amount of time demonstrating SonarQube and explaining how it had been integrated into the .NET ecosystem. SonarQube seemed fairly similar to NDepend, which I’ve used for some years now, with one really useful addition: both NDepend and SonarQube evaluate your codebase against various configurable design criteria, but SonarQube also provides an estimated time to fix all the issues that it found in your codebase. Although it feels a little gimmicky, I think that estimate would be more useful than a raw count of failed rules when explaining to Product Owners the costs that they are incurring.

I also attended two divergent presentations on improving our quality as developers. Carlos Sirias presented Growing a Craftsman through Innovation & Apprenticeship. Obviously, Sirias advocates for an apprenticeship model, a la blacksmiths and cobblers, to help improve developer quality. The way I remember the presentation, Sirias’ company, Pernix, essentially hired people specifically as apprentices and assigned them to their “lab” projects, which are done at low-cost for startups and small entrepreneurs. The apprenticeship aspect came from their senior people devoting 20% of their time to the lab projects. I’m now somewhat perplexed, though, because the Pernix website says that “Pernix apprentices learn from others; they don’t work on projects” and the online PDF of the presentation doesn’t have any text in it, so I can’t double check my notes. Perhaps the website is just saying that the apprentices don’t work as consultants on the full-price projects, and I do remember Sirias saying that he didn’t feel good about charging clients for the apprentices. On the other hand, I can’t imagine that the “lab” projects, which are free for NGOs and can be financed by micro-equity or actual money, don’t get cross-subsidised by the normal projects. I feel like just making sure that junior people are always pairing and get a fair chance to pair with people they can learn from, which isn’t always “senior” people, is a better apprenticeship model than the one that Sirias presented.

The final craftsmanship presentation I attended, Steve Ropa’s Agile Craftsmanship and Technical Excellence, How to Get There, was both the most exciting and the most challenging presentation for me. Ropa recommends “micro-certifications,” which he likens to Boy Scout merit badges, to help people improve their technical abilities. It’s challenging to me for two reasons. First, I’m just not a great believer in credentialism, because I don’t find that credentials really tell me anything when I’m trying to evaluate a person’s skills. What Ropa said about using internally controlled micro-certifications to show actual competence in various skill areas makes a lot of sense, though, since you know exactly what it takes to get one. That brings me to the second challenge: the combination of defining a decent set of micro-certifications, including what it takes to get each certification, and a fair way of administering such a system. For the most part, the first part of this concern just takes work. There are some obvious areas to start with: TDD, refactoring, continuous integration, C#/Java/Python skills, etc., that can be evaluated fairly objectively. After that, there are some softer areas that would be more difficult to figure out certifications for. How, for example, do you grade skills in keeping a code base continually releasable? It seems like an all-or-nothing kind of thing. And how does one objectively certify a person’s ability to take baby steps or pair program?

Administering such a program also presents me with a real challenge: even given a full set of objective criteria for each micro-certification, I worry that the certifications could become diluted through cronyism or that the people doing the evaluations wouldn’t be truly competent to do so. Perhaps this is just me being overly pessimistic, but any organization has some amount of favoritism and I suspect that the sort of organizations that would benefit most from micro-certifications are the ones where that kind of behavior has already done the most damage. On the other hand, I’ve never been a boy scout and these concerns may just reflect my lack of experience with such things. For all that, the concept of micro-certifications seems like one worth pursuing and I’ll be giving more thought on how to successfully implement such a system over the coming months.

In Diamond in the Rough I talked some about the similarity I found between David Gries’ work on proving programs correct in The Science of Programming and actually doing TDD.  I’ve since gone back to re-read Science.  Honestly, it hasn’t gotten any easier to read since I last read it so long ago.  It’s still worthwhile, though, if you can find a copy. It’s a nice corrective to an issue I’ve seen with a lot of developers over the years: they just don’t have the skills, or maybe the inclination, to reason about their code.

This seems most evident in an over-reliance on debuggers.  For myself, I only use a debugger when I have a specific question to answer and I can’t answer it immediately from the code.  “What’s the run-time type of this visitor given these circumstances?” or “Which of the five different objects on the silly call chain was null when it was used?” (that’s a little lazy, since I could refactor the code to make the answer clear without the debugger, but I’m generally trying to find the unit test to write to expose the problem before I fix it, and I don’t want to refactor until I have a green bar that includes the fix to the actual problem).  Those are the types of questions I might turn to the debugger to help answer, particularly when the cost of setting up a test to get that type of feedback is unusually high compared to using a debugger.  This is common when my code is interacting with third-party code in a server container.  Trying to set up some kind of integration or functional test to get the very specific information I want can be a horrible rabbit hole.  (Although it may still be worth setting up the functional test to prove a problem is actually solved.)

So I do recognize times when one can get good feedback more effectively from a debugger than by immediately trying to write a unit test or just reasoning about the code one is looking at, but it worries me when the debugger is the first tool a developer uses when something doesn’t act as they expect.  Even when I try asking some developers the questions that I ask myself when trying to understand a problem, the first response is to fire up the debugger, even if it’s patently not a question that needs a debugger to answer (“What kind of type is that method returning?” when the actual method declaration is no more than twenty lines down and declares that it’s returning a concrete type).  And, most egregiously, I’ve often seen people debugging their unit tests.

That’s really worrying since the unit tests are meant to help us understand our code.  It concerns me when I see someone’s first response to a unit test failure (even a test they’re just writing) is to run it through the debugger.  Which is not to say that I never do so, but for me it’s always a case of “I know this test failed because this variable wasn’t what I expected it to be.  What was it?”  Again, for me the debugger is a means for getting a specific piece of information rather than something I try to use to help me understand the code.  And that seems a much more efficient and effective way to get the code to do what I want it to do.

The more general case for turning to the debugger seems to be when one doesn’t understand the code.  That’s a little more forgivable when it’s someone else’s code.  Even then, though, I’m not convinced that the debugger is the best option for trying to really understand what the code is doing.  In a case like this, I’d comment out the code and write unit tests that make me uncomment one small part at a time.  This forces me to really understand what the code is doing, because setting up the tests correctly makes me look at the various conditions that affect the code. Gries’ techniques come into play here, too.  It’s unconscious for me now, but the ability to reason formally about the code helps lead me to each new test that will make me uncomment the smallest possible bit of code.
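As a tiny illustration of the technique (the Discount class is invented, and I’m assuming NUnit):

using NUnit.Framework;

// An invented stand-in for legacy code I’m trying to understand.
public class Discount
{
    public decimal Apply(decimal price, int quantity)
    {
        // Everything below started out commented; each new failing test
        // forces me to uncomment, and so truly understand, one more piece.
        // if (quantity >= 10) return price * 0.9m;  // still waiting on a test
        if (price > 100m) return price - 5m;         // uncommented by test two
        return price;                                // test one: the plain case
    }
}

[TestFixture]
public class DiscountTests
{
    [Test]
    public void LeavesSmallOrdersAlone()
    {
        Assert.AreEqual(80m, new Discount().Apply(80m, 1));
    }

    [Test]
    public void TakesFiveOffExpensiveItems()
    {
        Assert.AreEqual(115m, new Discount().Apply(120m, 1));
    }
}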

So, how do we learn to reason about our code rather than turning to the debugger as an oracle that will hopefully explain it?  Part of it may be the skills one learns in Gries’ Science, even if they’re not formally applied. The stronger influence, however, may be the way I learned to practice TDD.  I do genuinely test-drive my code, and when I first learned TDD it was drilled into me to write my failing test and state why the test would fail.  After not really writing one’s tests before the code, not asking that simple question is the biggest failure I’ve seen in how TDD is practiced.  That might be the better way to learn to really reason about what one’s code is doing.  While I still respect and appreciate the techniques that Gries described in Science, it’s probably both easier and more efficient to learn the discipline of really writing tests before the code, asking why the test should fail, and thinking about it when it fails differently than one expects.

As a developer, I tend to think of YAGNI in terms of code.  It’s a good way to help avoid building in speculative generality and over-designing code.  While I do sometimes think about YAGNI at the feature level, I have a tendency to view the decision to implement a feature as a business decision that should be made by the customer rather than something the developers should decide.  That’s never stopped me from giving an opinion, though!  Until recently, my argument against implementing a feature has generally been a simple argument about opportunity cost.   Happily, Martin Fowler’s recent post on YAGNI (http://martinfowler.com/bliki/Yagni.html) adds greatly to my understanding of all the cost that not considering YAGNI can add to a project.  Well worth a read regardless of whether you’re a developer, Product Owner, Scrum Master or fill any other role.

In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.

In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.

My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.

At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.

At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:

  1. What did I do since the last stand-up?
  2. What will I do today?
  3. What is in my way?

Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.

You’re listening for:

  • Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
  • Whatever people say in response to (3)

Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?

Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way to building trust. Look for intransigence either on the developer team or with the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes the Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.

Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings – the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.

Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Manager and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.

Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.

When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.

So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s this: keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the feature set and the infrastructure. Of course, you must eventually implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires the system to be complete before it’s even minimally useful.

If you’re a Product Owner:

  • Understand the value proposition not just of the entire system, but of each of its components and subsets.
  • Be prepared to see, use, and test subsets – or even subsets of subsets of subsets – of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.

If you’re a developer:

  • Adopt Agile engineering techniques such as test-driven development (TDD), continuous integration (CI), and continuous delivery (CD). Don’t just go through the motions: become really proficient in them, and understand how they enable everything else in Agile methodologies.
  • Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping subsystems isolated so that the system is functional even if not complete isn’t just good for business – it’s good engineering. A car’s engine can (and does) run even before it’s installed in the car; just don’t expect it to take you to the grocery store. (There’s a small sketch of this idea right after this list.)
  • Be an active team member. Contribute to the success of the mission. Don’t just take orders.
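
To make the engine analogy concrete, here is a minimal sketch in Python (the class and names are mine, purely illustrative): the engine subsystem is fully functional and testable on its own, long before any car exists to contain it.

    # Hypothetical example: an encapsulated subsystem that runs on its own.
    class Engine:
        def __init__(self):
            self.running = False

        def start(self):
            self.running = True

        def stop(self):
            self.running = False

    # The engine can be exercised (and demoed to a Product Owner)
    # in complete isolation, long before a Car class is written.
    engine = Engine()
    engine.start()
    assert engine.running

A system built from encapsulated pieces like this stays demonstrable at the end of every iteration, even while most of the “car” is still missing.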

If you’re a Scrum Master:

  • Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
  • Keep the team focused on the mission.
  • If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.

I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. A career .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat that made him unique as a player. Oquendo did something that only a handful of players have ever done in the long history of Major League Baseball – he played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player, and his value obviously wasn’t driven by his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s: the unique quality he brought to his team is captured by a bit of baseball lingo – “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired or injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Given these realities and the constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiencies in his statistics noted earlier.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BAs in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before the documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving on to another project entirely, I was told several months down the road, “‘Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that I often never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project – or more specifically, another completely new set of requirements.

From a communications perspective, BAs collaborated up front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager informed our team during a stressful phase, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing to me that this directive seemed entirely normal at the time.

Communication with testers seemed even rarer. By definition, on a Waterfall project, I had already passed my knowledge on to the testers – it was now their responsibility, and I was more or less out of the loop. By the time the specific requirements were being tested, I was already off on an entirely new project.

In my personal opinion, the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and days with little or no collaboration with other team members outside of standard meetings became a day-to-day, week-to-week, month-to-month grind with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences), and I have found that the role of the Business Analyst has been significantly enhanced on teams practicing Agile methodologies – and more specifically, Scrum. Simply as a result of team set-up, structure, responsibilities and, most importantly, opportunities, Agile teams have expanded the role of the Business Analyst in ways that never seemed available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player” – my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day-to-day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy, I can remain content playing one position. Or I can pursue all of the opportunities this newly-defined role provides, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short- and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span functional areas of expertise (e.g. Project Management, Testing, etc.) as well as the entire lifetime of a software development project (i.e. Project Kickoff to Implementation). In this sense, Agile BAs are not only more valuable to their respective teams, they are more valuable for longer – basically, the entire lifespan of a project. I have seen specifically that Agile BAs can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

In a follow-up blog entry, we’ll elaborate on exactly how Agile BAs can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

I found Mike Cohn’s posting Don’t Blindly Follow very curious because it seems to contradict what many luminaries of the Agile community have said about starting out by strictly following the rules until you’ve really learned what you’re doing.

In one sense, I do agree with this sentiment of not blindly doing something. Indeed, when I was younger, I thought it was quite clever of me to say things like “The best practice is not to follow best practices.” But then I discovered the Dreyfus Model of Skill Acquisition, and it made me realize that there’s a more nuanced view. In a nutshell, the Dreyfus model says that we progress through different stages as we learn skills. In particular, when we start learning something, we begin by following context-free rules (a.k.a. best practices) and progress through situational awareness until we “transcend reliance on rules, guidelines, and maxims.” This resonates with me, since I recognize it as the way I learn things, and I can see it in others when they are serious about something. (To be fair, there are people who don’t seem to fit this model, but I’m okay with a model that’s useful even if it doesn’t cover every possibility.) So, I would say that we should start out following the rules blindly until we have learned enough to recognize how to helpfully modify the rules we’ve been following.

Cohn concludes with another curious statement: “No expert knows more about your company than you do.”  Again, there’s a part of me that wants to agree with this, but then again…  An outsider could well see things that an insider takes for granted and have perspectives that allow them to come to different conclusions from the information that you both share.

I find myself much more in sympathy with Ron Jeffries’ statements in The Increment and Culture: “Rather than change ourselves, and learn the new game, we changed Scrum. We changed it before we ever knew what it was.” Following the rules first seems to offer a much better opportunity to really learn the basics before we start changing things to suit ourselves.

As I mentioned in the introductory post in this series, an issue I frequently see with underperforming Agile teams is that work always spills over from one iteration into the next. Nothing ever seems to finish, and the project feels like a waterfall project that’s adopted a few Agile ceremonies. Without completed tasks at iteration planning meetings (IPMs), there’s nothing to demo, and the feedback loop that’s absolutely fundamental to any Agile methodology is interrupted. Rather than actual product feedback, IPMs become status meetings.

In this post, let’s look at how a Product Owner can help ensure that tasks can be completed within an iteration. As a developer myself, I’m focusing on the things that make life easier for the development team. Of course, there’s a lot more to being a good Product Owner, and I encourage you to consider taking a Scrum Product Owner training class (CC Pace offers an excellent one).

First and foremost, be willing to compromise and prioritize. An anti-pattern I observe on some underperforming teams is a Product Owner who’s asked to prioritize stories and replies, “Everything is important. I need everything.” I advise such teams that when you ask for all or nothing, you will never get all; you will always get nothing. So don’t ask for “all or nothing”, which is what a Product Owner is saying when they say everything has a high priority.

As a Product Owner, understand two serious implications of saying that every task has a high priority.

  1. You are saying that everything has equal priority. In other words, to the developer team, saying that everything has a high priority is indistinguishable from saying that everything has medium or indeed low priority. The point of prioritization isn’t to motivate or scare the developers; it’s to allow them to choose between tasks when time is pressing. You’re basically leaving the choice up to the developer team, which is probably not what you had in mind.
  2. You’re really delegating your job to the developer team, and that isn’t fair. You’re the Product Owner for a reason: the ultimate success of the product depends on you, and you need to make some hard choices to ensure success. You know which bits of the system have the most business value. The developers signed on to deliver functionality, not to make decisions about business value. That’s your responsibility, and it’s one of the most important responsibilities on the entire team.

The consequence of “everything has a high priority” is that the developers have no way to break epic stories down into smaller stories that fit within an iteration. Everything ends up as an epic, and the developer team tends to focus on one epic after another, attempting to deliver complete epics before moving on to the next. It’s almost certain that no epic can be completed in one 2-week iteration, and so work keeps on spilling over to the next iteration and the next and the next.

Second, work with the developer team to find out how much of each story can be delivered in an iteration. Keep in mind that “delivered” means that you should be able to observe and participate in a demo of the story at the end of the iteration. Not a “prototype”, but working software that you can observe. Encourage the developers to suggest reduced functionality that would allow the story to fit in an iteration. For example, how about dealing only with “happy path” scenarios – no errors, no exceptions? Deal with those edge cases in a later iteration. At all costs, move towards a scenario where fully functional (that is, actually working) software is favored over fully featured software. Show the team that you’re willing to work with the evolutionary and incremental approach that Agile demands.
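
To make “happy path first” concrete, here is a minimal sketch in Python (the story, the function, and all the names are hypothetical): iteration one delivers only the sunny-day version of a funds-transfer story, with error handling deferred to later, separately prioritized stories.

    # Hypothetical happy-path-only story: transfer between two accounts.
    # Overdrafts, unknown accounts, and concurrency are deferred stories.
    def transfer(accounts, source, target, amount):
        accounts[source] -= amount
        accounts[target] += amount

    def test_happy_path_transfer():
        accounts = {"checking": 100, "savings": 0}
        transfer(accounts, "checking", "savings", 40)
        assert accounts == {"checking": 60, "savings": 40}

    test_happy_path_transfer()

This is still fully functional, demonstrable software – the Product Owner can watch a real transfer succeed at the IPM – it just isn’t fully featured yet.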

Third, be wary of stories that are all about technical infrastructure rather than business value. Sure, the development team very often needs to attend to purely technical issues, but ask how each such story adds business value. You are entitled to a response that convinces you of the business need to spend time on infrastructure stories.

At the end of a successful IPM, you as the Product Owner should have:

  • Seen some working software – remember, fully functional but perhaps not fully featured
  • Offered the developer team your feedback on what you saw
  • Worked with the developer team to have another set of stories, each of which is deliverable within one iteration
  • Prioritized those stories into High, Medium, and Low buckets, with the mutual understanding that nobody will work on a medium story if there are high stories remaining, and nobody will work on a low story if there are medium stories remaining
  • A clear understanding of why any technical infrastructure stories are required, and what business problem will be addressed by such stories
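
As a minimal sketch of that bucket rule (the data structure and names here are mine, purely illustrative), story selection amounts to draining the highest non-empty bucket first:

    # Hypothetical backlog: three priority buckets agreed at the IPM.
    def next_story(backlog):
        for bucket in ("high", "medium", "low"):
            if backlog[bucket]:
                return backlog[bucket].pop(0)
        return None  # nothing left to do

    backlog = {"high": ["login"], "medium": ["search"], "low": ["theming"]}
    assert next_story(backlog) == "login"    # highs drain first
    assert next_story(backlog) == "search"   # then mediums
    assert next_story(backlog) == "theming"  # lows only when nothing else remains

Three coarse buckets are enough; the point is that the ordering leaves the developers no ambiguity about what to pick up next.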

Finally, make yourself available for quick decisions during the iteration. No plan survives its first encounter with reality. There will always be questions and problems the developers need to talk to you about. Be available to talk to them, face to face, or with an interactive medium like instant messaging or video chat. Be prepared to make decisions…

Developer: “Sorry, I know we said we could get this story done this iteration, but blahblah happened and…”

You: “OK, how much can you get done?”

Developer: “We would have to leave out the blahblah.”

You: “Fine, go ahead.” (Or, “No, I need that, what else can we defer?”)

Be decisive.

Next post, I’ll talk about what developers should concentrate on to make sure some functionality is delivered in each iteration.

One of the basic Agile tenets that most people agree on – whether or not you’re an Agile enthusiast – is the value of early and continuous customer feedback. This early-and-often feedback is invaluable in helping software development projects quickly adapt to unexpected changes in customer demands and priorities, while simultaneously minimizing risk and reducing ‘waste’ over the lifetime of a project.

Sprint Demo
In the Agile world, we traditionally get this customer feedback through a formal product demonstration (i.e. the sprint demo) that occurs at the closeout of a sprint/iteration. In short, the “team” – those building the software – presents the features and functionality they’ve built in the current sprint to the “customer” – those who will use or sell the product. Issues may be uncovered and discussed – usually resulting in action items for the following sprint – but the ideal end result is both the team and the customer leaving the demo session (and, in theory, closing the sprint) with a significant level of confidence in the current state and progress of the project. Sprint demos are then repeated as an established norm over the project’s duration.

This simple, yet valuable Agile concept of working product demos can be expanded and applied effectively to other areas of software development projects:

  • Why not take advantage of the proven benefits of product demos and actually apply them internally – within the actual development team – before the customer is even involved?
  • Let’s also incorporate product demos more often – why wait two weeks (or sometimes even a month, depending on sprint duration)? We can add value to a software development project daily, if needed, without even involving the customer.

Expand the Benefit of Demos
One of the common practices where I have seen first-hand the effectiveness of product demos involves daily deployment of code into a testing or “QA” environment. Here’s the scenario:

  • Testing uncovers five “non-complex” defects (i.e. defects that are easily identified in testing – usually GUI-type defects such as typos, navigation or flow issues, table alignments, etc.), which are submitted into a bug-tracking tool. Depending on the bug-tracking tool employed by the team, this process can be quite tedious.
  • Defects 1-5 are addressed by the development team and declared fixed (or “ready to test”), and these fixes are included in the latest deployment into a QA environment for retesting.
  • Defects 1-5 are retested. Defects #1, #2, and #5 are confirmed fixed and closed.
  • But there is a problem – it’s discovered EARLY in retesting that Defects #3 and #4 are not actually fixed, and worse, the “fixes” have now resulted in two new defects – #6 and #7. For purposes of explanation, these two new defects were also easily identified early in the retesting process.
  • At this point, Defects 3-4 need to be resubmitted, with new Defects – #6 and #7 – also added to the tracking tool.

Where are we at this point? All of this time and effort has essentially resulted in closing out one net defect – five defects were open before the retest, and four (#3, #4, #6 and #7) are open after it – and we’ve lost a day’s worth of work. Not to mention, developers are wasting time fixing and re-fixing bugs (and, in many cases, becoming increasingly frustrated) rather than contributing to the team’s true velocity. Simultaneously, testers are wasting time retesting and tracking easily identifiable defects, which increases risk by reducing the time they have to test more complex code and scenarios.

So there is our conundrum. In a nutshell, we’re wasting time and team members are unhappy. Check back for a follow-up post next week, which will provide a simple yet effective solution to this unfortunately all-too-common issue in many of today’s software development practices.


Our Agile practice at CC Pace has been partnering with Art Chantker, President of Potomac Forum, LTD. since 2011.  Potomac Forum has trained thousands of government and industry professionals throughout the country in a wide variety of information technology, acquisition, financial, and management subjects of importance to our government.  It never ceases to amaze me how many people Art knows in the federal government.  I’ve read scores of reviews on his workshops, and there are consistent praises for his chosen content, speakers and venues.

Over the years, Art has invited CC Pace back again and again to present on a wide variety of topics around Agile software development and Agile project management along with officials from the Department of Homeland Security, USDA, the VA, Department of Education, NGA, GSA and other Federal agencies.

On January 28th we will be conducting our first Agile workshop at the Willard Hotel for the 2015 Potomac Forum series.  The focus will be on Agile and Lean best practices, including Scrum, Kanban and scaling Agile for larger projects and programs.  Our speakers will talk about the latest Agile and Lean approaches on the horizon and how to effectively contract for Agile services within the Federal government.  The government speakers and panelists already confirmed for the 28th include representatives from NGA, GAO and FEDSIM.  A tentative agenda is already posted on the Potomac Forum website, pending some changes once other invited officials are confirmed.  If your agency is starting down the path of agility, you should attend this workshop.  And if you haven’t met Art, you should!

http://www.potomacforum.org/content/new-date-jan-28-2015-agile-development-government-training-workshop-iv-implementing-and

http://www.ccpace.com/about-us/news-events/#events

Ron Jeffries recently posted an article on writing code for “The Diamond Problem” using TDD in response to an article by Alistair Cockburn on Thinking before programming.  Cockburn starts out:

“A long time ago, in a university far away, a few professors had the idea that they might teach programmers to think a bit before hitting the keycaps. Nearly a lost cause, of course, even though Edsger Dijkstra and David Gries championed the movement, but the progress they made was astonishing. They showed how to create programs (of a certain category) without error, by thinking about the properties of the problem and deriving the program as a small exercise in simple logic. The code produced by the Dijkstra-Gries approach is tight, fast and clear, about as good as you can get, for those types of problems.”

To which Ron responds:

“I looked at Alistair’s article and got two things out of it. First, I understood the problem immediately. It seemed interesting. Second, I got that Alistair was suggesting doing rather a lot of analysis before beginning to write tests. He seems to have done that because Seb reported that some of his Kata players went down a rat hole by choosing the wrong thing to test.

“Alistair calls upon the gods, Dijkstra and Gries, who both championed doing a lot of thinking before coding. Recall that these gentlemen were writing in an era where the biggest computer that had ever been built was less powerful than my telephone. And Dijkstra, in particular, often seemed to insist on knowing exactly what you were going to do before doing it.

“I read both these men’s books back in the day and they were truly great thinkers and developers. I learned a lot from them and followed their thoughts as best I could.”

Ron’s article goes on to solve, with TDD and no particular up-front thinking, the same programming problem that Cockburn solved in what he calls a combination of “the Dijkstra-Gries approach” and TDD.  On the whole, I tend more towards the pure TDD approach that Ron takes, because it got him feedback earlier and more frequently, while Cockburn’s approach, with more up-front thinking, didn’t provide any feedback until he actually started writing his tests.  If Cockburn had gone down a blind alley with his thinking, he wouldn’t have gotten any concrete feedback on it until much later in the game.

But that’s not what I actually want to think about.  I did read both Dijkstra’s A Discipline of Programming and Gries’ The Science of Programming “back in the day” as well (last read in 1991 and 2001, respectively, although that was a second reading of Gries; I remember finding Dijkstra almost impossible to understand, but I did keep the book, so it may be time to try it again).  What I didn’t remember was the emphasis on up-front thinking that both Ron and Cockburn seemed to claim for them.  I dug out my copies of both books and did a quick flip through each, and I still feel that the emphasis is much more on proving the correctness of one’s code than on doing a lot of up-front thinking.  I’d previously had the feeling that there was a similarity between Gries’ proofs and doing TDD.  As I poke around in chapter 13 of Gries’ book, where he introduces his methodology, I find myself believing it even more strongly.

Gries starts out asking “What is a proof?”  His answer?

“A proof, according to Webster’s Third New International Dictionary, is ‘the cogency of evidence that compels belief by the mind of a truth or fact.’  It is an argument that convinces the reader of the truth of something.

“The definition of proof does not imply the need for formalism or mathematics.  Indeed, programmers try to prove their programs correct in this sense of proof, for they certainly try to present evidence that compels their own belief.  Unfortunately, most programmers are not adept at this, as can be seen by looking at how much time is spent debugging.  The programmer must indeed feel frustrated at the lack of mastery of the subject!”

Doesn’t TDD provide that for us, at least when practiced correctly?  Oh, and the first principle that Gries gives in this chapter is: “A program and its proof should be developed hand-in-hand, with the proof usually leading the way.”  Hmm, sounds familiar, no?

Admittedly, Gries does speak out against what he calls “test-case analysis”:

“‘Development by test case’ works as follows.  Based on a few examples of what the program is to do, a program is developed.  More test cases are then exhibited – and perhaps run – and the program is modified to take the results into account.  This process continues, with program modification at each step, until it is believed that enough test cases have been checked.”   

On the face of it, this does sound like a condemnation of TDD, but does it really represent what we do when we practice TDD?  Sort of, but it overlooks the critical questions of how we choose the test cases and how quickly we can get feedback from them.  If we’re talking about randomly picking a bunch of test cases and getting feedback in a matter of days or hours, then I’d agree that it would be a poor way to develop software.  When we’re practicing TDD, though, we should be looking for the next simplest test case that helps us think about what we’re doing.  Let’s turn to Gries’ “Coffee Can Problem” as an example.

“A coffee can contains some black beans and some white beans.  The following process is to be repeated as long as possible.

“Randomly select two beans from the can.  If they have the same color, throw them out, but put another black bean in.  (Enough extra black beans are available to do this.)  If they are different colors, place the white one back into the can and throw the black one away.”

“Execution of this process reduces the number of beans in the can by one.  Repetition of the process must terminate with exactly one bean in the can, for then two beans cannot be selected.  The question is: what, if anything, can be said about the color of the final bean based on the number of white beans and the number of black beans initially in the can?”

Gries suggests we take ten minutes on the problem and then goes on to claim that “[i]t doesn’t help much to try test cases!”  But the test cases he enumerates are not the ones we’d likely try were we trying to solve this with TDD.  He suggests test cases for a black bean and a white bean to start with and then two black beans.  Doing TDD, we’d probably start with a single bean in the can.  That’s really the simplest case.  What’s the color of the final bean in the can if I start with only a single black bean?  Well, it’s black.  And it’s going to be white if the only bean in the can is white.  Okay, what happens if I start with two black beans?  I should end up with a black bean.  Two white beans wouldn’t make me change my code, so let’s try starting with a black and a white bean.  Ah, I would end up with a white bean in that case.  Can I draw any conclusions from this?  
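
For concreteness, here is a minimal sketch in Python of how that TDD progression might look – the simulation function and its name are mine, not Gries’, and the tests run in exactly the simplest-first order described above.

    import random

    def final_bean(black, white):
        # Simulate the coffee-can process until one bean remains.
        while black + white > 1:
            beans = ["b"] * black + ["w"] * white
            pick = random.sample(beans, 2)  # randomly select two beans
            if pick.count("w") == 2:
                # Two whites: throw both out, put a black bean in.
                white -= 2
                black += 1
            else:
                # Mixed: the black is discarded. Two blacks: both out,
                # one black back in. Either way, one fewer black bean.
                black -= 1
        return "black" if black else "white"

    # The simplest-first test cases from the discussion above:
    assert final_bean(1, 0) == "black"  # a single black bean
    assert final_bean(0, 1) == "white"  # a single white bean
    assert final_bean(2, 0) == "black"  # two black beans
    assert final_bean(1, 1) == "white"  # one of each

Each passing test invites a bit of reasoning: every move either leaves the white count alone or reduces it by two, so the parity of the white beans never changes (which suggests the final bean is white exactly when the initial number of white beans is odd). That is the kind of invariant Gries is after – and the rapid feedback from working code lets us test each guess as quickly as we can think of it.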

I did actually think about those test cases before I read Gries’ description of his process:

“Perhaps there is a simple property of the beans in the can that remains true as the beans are removed and that, together with the fact that only one bean remains, can give the answer.  Since the property will always be true, we will call it an invariant.  Well, suppose upon termination there is one black bean and no white beans.  What property is true upon termination, which could generalize, perhaps, to be our invariant?  One is an odd number, so perhaps the oddness of the number of black beans remains true.  No, this is not the case, in fact the number of black beans changes from even to odd or odd to even with each move.”

There’s more to it, but this is enough to make me wonder whether this is really different from writing a test case.  Actually it is: he’s reasoning about the problem, and by extension the code he’d write.  But he is still testing his hypotheses; it’s just in his head rather than in code.  And there I would suggest that TDD, as opposed to using randomly selected test cases, allows us to do that same kind of reasoning with working code and extremely rapid feedback.  (To be fair, I believe this is what Ron was saying, too.  I just want to highlight the similarity to what Gries was saying, where Ron seems to be suggesting more of a difference.)

What might get lost in TDD, at least when it’s not practiced well, is that idea of reasoning about the code.  There’s an art to picking that next simplest test to write, and I suspect that that’s where much of the reasoning really happens.  If we write too much code in response to a single test, we’re losing some of the reasoning.  If we write our tests after the code, we’ve probably lost it entirely.  And that’s something I do believe is lacking in many programmers today, evidenced, as Gries suggests, by the amount of time spent fumbling around in debuggers and randomly adding code “to see what will happen.”  But that’s for another time.