An Agile Coach’s Guide to Managing Pride, Ego and Personal Growth

As a ScrumMaster and Agile Coach, one of my favorite tools to use is Mural. Mural is a digital whiteboard designed for engaging collaborations. It’s a creative outlet for me; it’s like scrapbooking for a job. I like to think that I’m pretty good at using the application. If I’m being honest, I think I’m really good at using Mural. I usually get compliments on the creativity, layout, and functionality of my Mural boards. I pride myself on being knowledgeable and helpful in Mural, which is part of my Personal Brand.

Recently I was in a Thursday ScrumMaster Community of Practice (CoP) where we were discussing Mural, and I shared some of my examples. An Agile Coach said, “Tracy – if you want to help me dress this one up, I’d be happy to see what you can do!” I felt honored and excited, especially since I had been consulting at the company for less than two months. I had the drive and desire to do my best work and build a positive reputation for myself as an informed, helpful, and creative resource.

That night I reworked his Mural. I created a Lego theme that focused on Building Foundations Together, with colored Lego brick backgrounds for different topics. I was really excited about what I had made. I shared it first thing in the morning, expecting to get positive feedback. The instant gratification! However, the Agile Coach was out of the office that Friday.

On Monday, I received his response: “You were using Areas and Shapes inconsistently.” “Wait, what?!?!” It wasn’t the reaction I was expecting, and a meeting was scheduled. I didn’t even know what he meant by “Areas”. My ego definitely took a hit.

I prepared myself for a meeting with the Agile Coach.  I felt like I was a student in detention.  I felt like I was in trouble.  I expected him to be upset with me because, after all, I ruined the functionality of his Mural.

Instead of a meeting, it was a working session.  I learned about functionality I never knew existed.  He showed me where, when, and how to use Mural Areas.  He was not upset at all in the working session; he knew I had the best intentions.

We learned that even though we’re both passionate and knowledgeable about Agile, it was also easy to see how quickly we fell back on old patterns.  We needed to align first with what we were trying to accomplish.  We also brainstormed from different personas which created a variety of solutions.  Those perspectives made the Mural and instructions more valuable and understandable.

I was humbled by this experience.

I can be, and still am, proud of the work I did. 

If I hadn’t shifted from defensiveness to a Growth Mindset, I wouldn’t have received the gift of learning something new.

If I had gone into the meeting with negativity, I wouldn’t have seen the Coaching Demo that happened right in front of my eyes.

In my reflection, I realized I needed to put my ego aside to grow.

Here’s how:

    1. Change to a Growth Mindset.  Embrace challenges and learn from feedback.
    2. Think of every opportunity as a coachable moment.  Imagine what your day would look like if you went into every situation thinking, “What am I going to learn here?”.  How would the atmosphere and culture change?
    3. Give grace.  Give grace that everyone was trying to do the best they could at the time – Including yourself.
    4. Have Gratitude.  Gratitude greatly reduces negativity and anxiety.  It shifts a focus from thinking of yourself to others.  When we have gratitude, what we appreciate grows.
    5. Pay it forward.   Paying it forward is the greatest compliment you can give your mentor.  Don’t just share the information that you received but pass on the learning environment.  Create a space of psychological safety, patience, and understanding.

In conclusion, I still pride myself on being knowledgeable and helpful in Mural. I’m practicing a Growth Mindset, embracing every situation as a coachable moment, being grateful for mentorship, and, in return, becoming a mentor. But what I really learned from the experience and reflection was:

You can have pride in your career, but to continue your growth journey you need to practice removing the ego. 

Another great part of this story is that the mentor/mentee relationship didn’t end at that working session. We continue to collaborate, ask for opinions, and share knowledge. We continue to learn together. Now that’s what I call Building Foundations Together.

Today’s business leaders find themselves navigating a world in which artificial intelligence (AI) plays an increasingly pivotal role. Among the various types of AI, generative AI – the kind that can produce novel content – has been a game changer. One such example of generative AI is OpenAI’s ChatGPT. Though it’s a powerful tool with significant business applications, it’s also essential to understand its limitations and potential pitfalls.

1. What are Generative AI and ChatGPT?
Generative AI, a subset of AI, is designed to create new content. It can generate human-like text, compose music, create artwork, and even design software. This is achieved by training on vast amounts of data, learning patterns, structures, and features, and then producing novel outputs based on what it has learned.

In the realm of generative AI, ChatGPT stands out as a leading model. Developed by OpenAI, GPT (Generative Pre-trained Transformer) uses machine learning to produce human-like text. By training on extensive amounts of data from the internet, ChatGPT can generate intelligent and coherent responses to text prompts.

Whether it’s crafting detailed emails, writing engaging articles, or offering customer service solutions, ChatGPT’s potential applications are vast. However, the technology is not without its drawbacks, which we’ll delve into shortly.
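To make one of these applications concrete, here is a minimal Python sketch of drafting a first-level customer-support reply. The model name and prompts are illustrative assumptions, and since the actual API call requires an account and key, the request payload is built separately and the call itself is shown as a comment:

```python
# Hedged sketch: drafting a customer-support reply with a generative model.
# The system prompt, customer message, and model name are made up for the
# example; a real deployment would tune all three.

def build_support_prompt(customer_message: str) -> list[dict]:
    """Assemble the chat messages for a first-level support draft."""
    return [
        {"role": "system", "content": "You are a polite customer-support assistant."},
        {"role": "user", "content": customer_message},
    ]

messages = build_support_prompt("My invoice shows a duplicate charge.")

# With the openai package installed and OPENAI_API_KEY set in the
# environment, the call would look roughly like this:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```

The separation matters in practice: keeping prompt assembly in plain, testable code makes it easy to review and version the instructions your business sends to the model.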

2. Strategic Considerations for Business Leaders
Adopting a generative AI model like ChatGPT in your business can offer numerous benefits, but the key lies in understanding how best to leverage these tools. Here are some areas to consider:

  • 2.1. Efficiency and Cost Savings
    Generative AI models like ChatGPT can automate many routine tasks. For example, they can provide first-level customer support, draft emails, or generate content for blogs and social media. Automating these tasks can lead to considerable time savings, freeing your team to focus on more strategic, creative tasks. This not only enhances productivity but could also lead to significant cost savings.
  • 2.2. Scalability
    One of the biggest advantages of generative AI models is their scalability. They can handle numerous tasks simultaneously, without tiring or requiring breaks. For businesses looking to scale, generative AI can provide a solution that doesn’t involve a proportional increase in costs or resources. Moreover, the ability of ChatGPT to learn and improve over time makes it a sustainable solution for long-term growth.
  • 2.3. Customization and Personalization
    In today’s customer-centric market, personalization is key. Generative AI can create content tailored to individual user preferences, enhancing personalization in your services or products. Whether it’s customizing email responses or offering personalized product recommendations, ChatGPT can drive customer engagement and satisfaction to new heights.
  • 2.4. Innovation
    Generative AI is not just about automating tasks; it can also stimulate innovation. It can help in brainstorming sessions by generating fresh ideas and concepts, assist in product development by creating new design ideas, and support marketing strategies by providing novel content ideas. Leveraging the innovative potential of generative AI could be a game-changer in your business strategy.

3. The Pitfalls of Generative AI
While the benefits of generative AI are clear, it’s essential to be aware of its potential drawbacks and pitfalls:

  • 3.1. Data Dependence and Quality
    Generative AI models learn from the data they’re trained on. This means the quality of their output is directly dependent on the quality of their training data. If the input data is biased, inaccurate, or unrepresentative, the output will likely be flawed as well. This necessitates rigorous data selection and cleaning processes to ensure high-quality outputs.
    Employing strategies like AI auditing and fairness metrics can help detect and mitigate data bias and improve the quality of AI outputs.
  • 3.2. Hallucination
Generative AI models can sometimes produce outputs that appear sensible but are completely invented or unrelated to the input – a phenomenon known as “hallucination”. There are numerous examples in the press of false statements or claims made by these models, ranging from the funny (claiming that someone ‘walked’ across the English Channel) to the somewhat frightening (claiming someone committed a crime when, in fact, they did not). This can be particularly problematic in contexts where accuracy is paramount. For example, if a generative model hallucinates while generating a financial report, it could lead to serious misinterpretations and errors. It’s crucial to have safeguards and checks in place to mitigate such risks.
    Implementing robust quality checks and validation procedures can help. For instance, combining the capabilities of generative AI with verification systems, or cross-checking the AI outputs with trusted data sources, can significantly reduce the risk of hallucination.
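As a toy illustration of this cross-checking idea, here is a minimal Python sketch. The claim matching is deliberately naive, and `verify_output` and the `trusted_facts` store are hypothetical stand-ins for whatever verification system and data sources your business actually uses:

```python
# Hypothetical sketch: flag generated claims that contradict trusted data.
# A real system would use proper claim extraction, not substring matching.

def verify_output(generated_text: str, trusted_facts: dict[str, str]) -> list[str]:
    """Return warnings for subjects whose trusted value is missing from the text."""
    warnings = []
    for subject, trusted_value in trusted_facts.items():
        if subject in generated_text and trusted_value not in generated_text:
            warnings.append(
                f"Claim about '{subject}' does not match trusted value '{trusted_value}'"
            )
    return warnings

# Example: the model hallucinated Q3 revenue in a draft financial summary.
draft = "Q3 revenue was $4.1M, up from $2.9M in Q2."
facts = {"Q3 revenue": "$3.2M"}
issues = verify_output(draft, facts)
# issues flags the Q3 revenue claim for human review before publication
```

The point is the workflow, not the string matching: generated content passes through an automated check against a trusted source, and anything flagged goes to a human before it leaves the building.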
  • 3.3. Ethical Considerations
    The ability of generative AI models to create human-like text can lead to ethical dilemmas. For instance, they could be used to generate deepfake content or misinformation. Businesses must ensure that their use of AI is responsible, transparent, and aligned with ethical guidelines and societal norms.
    Regular ethics training for your team, and keeping lines of communication open for ethical concerns or dilemmas, can help instill a culture of responsible AI usage.
  • 3.4. Regulatory Compliance
    As AI becomes increasingly pervasive, regulatory bodies worldwide are developing frameworks to govern its use. Businesses must stay updated on these regulations to ensure compliance. This is especially important in sectors like healthcare and finance, where data privacy is paramount. Not adhering to these regulations can lead to hefty penalties and reputational damage.
    Keep up-to-date with the latest changes in AI-related laws, especially in areas like data privacy and protection. Consider consulting with legal experts specializing in AI and data to ensure your practices align with regulatory requirements.
  • 3.5 AI Transparency and Explainability
    Generative AI models, including ChatGPT, often function as a ‘black box’, with their internal workings being complex and difficult to interpret.
    Enhancing AI transparency and explainability is key to gaining trust and mitigating risks. This could involve using techniques that make AI decisions more understandable to humans or adopting models that provide an explanation for their outputs.

4. Navigating the Generative AI Landscape: A Step-by-Step Approach
As generative AI continues to evolve and redefine business operations, it is essential for business leaders to strategically navigate this landscape. Here’s an in-depth look at how you can approach this:

  • 4.1. Encourage Continuous Learning
    The first step in leveraging the power of AI in your business is building a culture of continuous learning. Encourage your team to deepen their understanding of AI, its applications, and its implications. You can do this by organizing workshops, sharing learning resources, or even bringing in an AI expert (like myself) to educate your team on the best ways to leverage the potential of AI. The more knowledgeable your team is about AI, the better equipped they will be to harness its potential.
  • 4.2. Identify Opportunities for AI Integration
    Next, identify the areas in your business where generative AI can be most beneficial. Start by looking at routine, repetitive tasks that could be automated, freeing up your team’s time for more strategic work. Also, consider where personalization could enhance the customer experience – from marketing and sales to customer service. Finally, think about how generative AI can support innovation, whether in product development, strategy formulation, or creative brainstorming.
  • 4.3. Develop Ethical and Responsible Use Guidelines
    As you integrate AI into your operations, it’s essential to create guidelines for its ethical and responsible use. These should cover areas such as data privacy, accuracy of information, and prevention of misuse. Having a clear AI ethics policy not only helps prevent potential pitfalls but also builds trust with your customers and stakeholders.
  • 4.4. Stay Abreast of AI Developments
    In the fast-paced world of AI, new developments, trends, and breakthroughs are constantly emerging. Make it a point to stay updated on these advancements. Subscribe to AI newsletters, follow relevant publications, and participate in AI-focused forums or conferences. This will help you keep your business at the cutting edge of AI technology.
  • 4.5. Consult Experts
    AI implementation is a significant step and involves complexities that require expert knowledge. Don’t hesitate to seek expert advice at different stages of your AI journey, from understanding the technology to integrating it into your operations. An AI consultant or specialist can help you avoid common pitfalls, maximize the benefits of AI, and ensure that your AI strategy aligns with your overall business goals.
  • 4.6. Prepare for Change Management
    Introducing AI into your operations can lead to significant changes in workflows and job roles. This calls for effective change management. Prepare your team for these changes through clear communication, training, and support. Help them understand how AI will impact their work and how they can upskill to stay relevant in an AI-driven workplace.
    In conclusion, navigating the generative AI landscape requires a strategic, well-thought-out approach. By fostering a culture of learning, identifying the right opportunities, setting ethical guidelines, staying updated, consulting experts, and managing change effectively, you can harness the power of AI to drive your business forward.

5. Conclusion: The Promise and Prudence of Generative AI
Generative AI like ChatGPT carries immense potential to revolutionize business operations, from streamlining mundane tasks to sparking creative innovation. However, as with all powerful tools, its use requires a measured approach. Understanding its limitations, such as data dependency, hallucination, and ethical and regulatory challenges, is as important as recognizing its capabilities.

As a business leader, balancing the promise of generative AI with a sense of prudence will be key to leveraging its benefits effectively. In this exciting era of AI-driven transformation, it’s crucial to navigate the landscape with a keen sense of understanding, responsibility, and strategic foresight.

If you have questions or want to identify ways to enhance your organization’s AI capabilities, I’m happy to chat. Feel free to reach out to me at jfuqua@ccpace.com or connect with me on LinkedIn.

With the World Cup taking over the headlines, we couldn’t miss an opportunity to bring two of our favorite topics at CC Pace together: sports and Agile. As Team USA gears up to take on the Netherlands, here’s a little history on the unique style of soccer the Dutch created and what Agile teams can learn from their success.

In the 1970s, the Dutch dominated their international counterparts with a style of soccer they called totaalvoetbal, or total football. Total football requires each player on the team to be comfortable and adept enough to switch positions with any other player on the field at any time. The Dutch required the goalkeeper to remain in a fixed position, but everyone else was fluid, able to become an attacker, defender, or midfield player as the play dictated. Whenever a player moved out of position, another player replaced him, and all the other players shifted to maintain the team structure. In modern soccer, we call this collective team behavior compensatory movement: all teammates compensate and adjust to each other’s actions.

This philosophy helped create teams without points of weakness that their opponents could exploit.

Totaalvoetbal only worked because players trained to develop the skills needed to play all positions. Each player was a specialist in a certain position or role, such as striker or center defense, but was also quite competent playing other roles on the team.

In the Agile world, this philosophy can be applied to the makeup of Scrum teams. Scrum teams that are self-sufficient because of their fluidity tend to be the most productive and dependable. If a Scrum team is composed of members with “T-shaped” skills, there will always be someone who can fill in for others when needed.

People with T-shaped skills have deep expertise in one area and a lower level of proficiency across many others. When Scrum teams are composed of members with T-shaped skills, it helps ensure that all work can be completed within the team. It also means that productivity is less likely to drop when a team member is out of the office, because others can roll up their sleeves and help get the job done.

Cross-training and pair programming are great ways to help develop team members with T-shaped skills.  Pair programming is an Agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.

T-shaped skills are not developed by accident but rather intentionally. Careful planning and a thoughtful, proactive approach by the individual and their manager are crucial. A manager must understand the value of investing in the development of people. Cross-training, stretch assignments, training opportunities, shadowing, and pair programming are all excellent methods for developing additional skills that allow for compensatory movement and fluid teams, yet, in some ways, represent short-term reductions in individual productivity. Managers must make this short-term investment to see the long-term value and score more goals.

Does your Agile Transformation feel like it is stuck in mud? Maybe it is facing one of the many challenges organizations must navigate as part of their transformation. 

According to the 15th State of Agile Report, the top 10 Agile challenges (Digital.ai, 2021) are:

 

The impact of each of these can be detrimental to any Agile Transformation. The report suggests that some organizations face many of these challenges at once, and notes that these results have been unchanged over several years. Looking across the top 10, most relate to the organization’s culture, which means that improving the overall Agile culture of the organization matters more than improving a team’s specific use of an Agile method like Scrum.

For a different view into why Agile transformations are challenged, the article How to mess up your agile transformation in seven easy (mis)steps (Handscomb et al., 2018), by a McKinsey team, offers yet another way to look at the challenges organizations face. The seven missteps are:

  1. Not having alignment on the aspiration and value of an agile transformation 
  2. Not treating agile as a strategic priority that goes beyond pilots 
  3. Not putting culture first over everything else 
  4. Not investing in the talents of your people 
  5. Not thinking through the pace and strategy for scaling up beyond pilots 
  6. Not having a stable backbone to support agile 
  7. Not infusing experimentation and iteration into the DNA of the organization 

If your organization is facing any of these challenges, or you’ve made any of the “missteps” identified by the McKinsey team, your Agile adoption may be at a stall. If you’re experiencing multiple challenges, it may be time to get some help.   

When things aren’t going well with your transformation the impacts are felt at the team and organization levels. Next, we’ll look at some of these impacts. 

Team Impacts 

  • Team members feel frustrated and demotivated about working in an Agile environment when they aren’t supported through the transition by managers, each other, and the culture of the organization. 
  • Team members are unable to deliver quality increments of work due to a lack of consistent processes and tools.  
  • Team deliveries are hindered by defects.  
  • Team members struggle to learn and implement new Agile methods, or what they learn does not stick. 
  • Team members don’t go beyond process and learn delivery practices to improve quality, like TDD, or DevOps. 
  • Team members don’t work cross-functionally and instead keep silos of expertise. 
  • Team members struggle to coordinate across teams to remove dependencies.  
  • Team members burn out due to working at an unsustainable pace. 
  • Team member turnover is high. 
  • Stakeholders don’t see the value in their participation with the Team members doing the work, hindering the team’s ability to get real-time feedback and continuously improve. 
  • Management fails to let the team self-organize, taking away team ownership of and accountability for the work. Or, even worse, they micro-manage the team, their backlog, or even how they do the work.
  • Teams find it difficult to release rapidly hampering innovation and increasing time to market. 
  • Team members aren’t engaged in learning new ways of working. 

In addition to the team impacts, there are several organizational impacts that can occur. 

Resolving the challenges 

In looking at the data, it is imperative that organizations address these challenges head-on. If you’re seeing any of the impacts listed above, you may wonder what to do to alleviate them.  One of the biggest mistakes is not incorporating change management as part of an Agile Transformation. Leadership needs to help everyone in the organization understand the new culture underlying how they all work together. It requires leaders and doers alike to learn about Agile values and behaviors. Organizations must invest in training and education at all levels so that learning becomes part of the culture. How do we expect people to become Agile if we don’t teach them what Agile really is?  

What are the greatest contributors to success when integrating change management and Agile? According to Tim Creasey (Creasey, n.d.), they are:

  1. Early engagement of a change manager 
  2. Consistent communication 
  3. Senior leader engagement 
  4. Early wins 

Regardless of the Agile Methods you are trying to implement, an organization must start with changing its organizational culture, and the best way to do this is to start with change management. All the efforts of teaching and coaching will fail without the culture change to support the new Agile ways of working. Finally, engage the entire organization, not just IT, in the adoption of Agile practices and behaviors. While a pilot team will help test out a specific Agile method, their work with other members of the organization requires everyone to understand what it means to be Agile. Agile is for everyone, not just IT. A comprehensive change management program will help ensure the entire organization becomes focused on becoming Agile.

Watch for future blogs where we delve into these challenges and what CC Pace can do to help you solve them.

PI planning is considered the heartbeat of Scaled Programs. It is a high-visibility event that takes up considerable resources (and yes, people too). It is critical for organizations to realize the value of PI planning; otherwise, leadership tends to lose patience and give up on the approach, leading the organization to slide back on its SAFe Agile journey. There are many reasons PI planning can fall short of its intended outcome. For quick readability, I will limit the scope of this post to the following five reasons, which I have found to have the most adverse effect.

  1. Insufficient preparation
  2. Inviting only a subset (team leads)
  3. Lack of focus on dependencies
  4. Risks not addressed effectively
  5. Not leaving a buffer

Let’s do a deeper dive and try to understand how each of the above anti-patterns in the SAFe implementation can impact PI planning.

Insufficient Preparation: By preparation, I don’t mean the event logistics; those are very important, but I am focusing on the content side. Often, the Product Owner and team members are so focused on current PI work, putting out fires, and scrambling to deliver on PI commitments that even thinking about a future PI seems like an ineffective use of time. When that happens, teams often learn about the PI scope on the day of PI planning, or only a few days before, which is not enough time to digest the information and provide meaningful input. PI planning should be an event where people from different teams and programs come together and collaboratively create a plan for the PI. To do that, participants need to know what they are preparing for and have time to analyze it, so that when they come to the table to plan, the discussions are not impeded by questions that still require analysis. Specifically, teams should know well in advance which top 10 features they should prioritize, what the acceptance criteria are, and which teams will be involved in delivering those features. This gives the involved teams a runway to analyze the features, iron out any unknowns, and come to the table ready to have a discussion that leads to a realistic plan.

Inviting only a subset: As I said in the beginning, PI planning is a high-cost event. Many leaders see it as a waste of resources and choose to include only team leads, SMs, POs, tech leads, architects, and managers in the planning. This is more common than you might think. It might seem obvious why this is not a good practice, but let’s dig in to make sure we are on the same page. The underlying assumption behind inviting a subset of people is that the invitees are experts in their field and can analyze, estimate, plan, and commit to the work with high accuracy. What’s missing from that assumption is that they are committing on behalf of someone else (the teams that will actually perform the work) with entirely different skill levels and understanding of the systems, organizational processes, and people. The big gap that emerges from this approach is that work analyzed by a subset of folks tends not to account for quite a few implementation complexities, and the estimate is often based on the expert’s own assessment of effort. Teams do not feel ownership of the work, because they didn’t plan for or commit to it, and eventually the delivery turns into a constant battle of sticking to the plan and putting out fires.

Lack of focus on dependencies: The primary focus of PI planning should be the coordination and collaboration between teams/programs. Effectively identifying the dependencies that exist between teams and proper collaboration to resolve them is a major part of the planning event to achieve a plan with higher accuracy. However, teams sometimes don’t prioritize dependency management high enough and focus more on doing team-level planning, writing stories, estimating, adding them to the backlog, and slotting them for sprints. The dependencies are communicated, but the upstream and downstream teams don’t have enough time to actually analyze and assess the dependency and make commitments with confidence. The result is a PI plan with dependencies communicated to respective teams but not fully committed. Or even worse, some of the dependencies are missed only to be identified later in the PI when the team takes up the work. A mature ART prioritizes dependencies and uses a shift-left approach to begin conversations, capture dependencies, and give ample time to teams to analyze and plan for meeting them.

Risks not addressed effectively: During PI planning, the primary goal of program leadership should be to actively seek out and resolve risks. I will acknowledge that most leaders do try to resolve risks, but when teams bring up risks that require tough decisions, a change in prioritization, or a hard conversation with a stakeholder, program leadership is not swift to act and make it happen. The risk gets “owned” by someone who will have follow-up action items to set up meetings to talk about it. This might seem like the right approach, but it ends up hurting the teams that are spending so much time and effort to come up with a reasonable plan for the PI. There is nothing wrong with “owning” risks and acting on them in due time; however, during PI planning, time is of the essence. A risk that is not resolved right away can lead to plans based on assumptions. If the resolution, which happens at a later date, turns out to be different from the team’s original assumption, it can lead to changes in the plan and usually ends up putting more work on the team’s plate. The goal should be to completely “resolve” as many risks as possible during planning, and not avoid the tough conversations and decisions necessary to make that happen.

Not leaving a buffer: We all know that trying to maximize utilization is not a good practice, and most leaders encourage teams to leave a buffer during the planning context on the first day. In practice, however, most teams have more in the backlog than they can accomplish in a PI, and the two days of planning usually become a battle to fit in as much work as possible to make stakeholders happy. For programs just starting with SAFe, even the IP sprint gets eaten up by planned feature development work. One root cause is a false sense of accuracy in the plan. Teams tend to forget that this is a plan for five to six sprints spanning a quarter. A one-sprint plan can be expected to have a higher level of accuracy because of its shorter timebox, smaller scope, and more refined stories. But when a program of more than 50 people (sometimes close to 150) plans a scope full of interdependencies, expecting the same level of accuracy is a recipe for failure. To keep the plan realistic, teams should leave the needed buffer so they can adjust course when changes occur.
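One way to make the buffer concrete during planning is simple arithmetic. Here is an illustrative Python sketch; the 20% figure and the helper function are assumptions for the example, not SAFe-prescribed math:

```python
# Illustrative sketch: sizing a PI commitment with an explicit buffer.
# The 20% buffer is an example figure; each ART should pick its own
# based on historical delivery data.

def plannable_capacity(sprints: int, points_per_sprint: int, buffer_pct: float) -> int:
    """Capacity a team should actually commit to, after reserving a buffer."""
    raw = sprints * points_per_sprint
    return int(raw * (1 - buffer_pct))

# A team averaging 30 points across 5 development sprints, reserving 20%:
raw_capacity = 5 * 30                          # 150 points if fully loaded
commit_to = plannable_capacity(5, 30, 0.20)    # 120 points committed
# The reserved 30 points absorb dependency slips and mid-PI discoveries
```

Writing the number down this way during planning makes the buffer a visible, agreed-upon line item rather than something the first urgent stakeholder request quietly consumes.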

As I mentioned at the start of this post, there are many ways a high-stakes event like PI planning can fail to achieve the intended outcomes. These are just the ones I have experienced first-hand.  I would love to know your thoughts and hear about some of the anti-patterns that affected your PI Planning and how you went about addressing them.


Are you new to Agile testing?

I’ve been reading Agile Testing, by Lisa Crispin and Janet Gregory. If you are new to Agile testing, this book is for you. It provides a comprehensive guide for any organization moving from waterfall to Agile testing practices. The “Key Success Factors” outlined in the book are important when implementing Agile testing practices, and I would like to share them with you.

Success Factor 1: Use the Whole Team Approach

Success Factor 2: Adopt an Agile Testing Mind-Set

Success Factor 3: Automate Regression Testing

Success Factor 4: Provide and Obtain Feedback

Success Factor 5: Build a Foundation of Core Practices

Success Factor 6: Collaborate with Customers

Success Factor 7: Look at the Big Picture

Success Factor 1: Use the Whole Team Approach

In Agile, we like to take a team approach to testing. Developers are just as responsible for the quality of a product as any tester; they embrace practices like Test-Driven Development (TDD) and ensure good Unit testing is in place before moving code to test. Agile Testers participate in the refinement process, helping Product Owners write better User Stories by asking powerful questions and adding test scenarios to stories. Everyone works together to build quality into the product development process. This approach is very different from a waterfall environment where the QA/Test team is responsible for ensuring quality software/products are delivered.

Success Factor 2: Adopt an Agile Testing Mind-Set

Adopting Agile starts with changing how we think about the work and embracing Agile Values and Principles. In addition to the Agile Manifesto’s 12 Principles, Lisa and Janet define 10 Principles for Agile Testing. Testers adopt and demonstrate an Agile testing mindset by keeping these principles top of mind. They ask, “How can we do a better job of delivering real value?”

10 Principles for Agile Testing

  1. Provide Continuous Feedback
  2. Deliver Value to the Customer
  3. Enable Face-to-Face Communication
  4. Have Courage
  5. Keep it Simple
  6. Practice Continuous Improvement
  7. Respond to Change
  8. Self-Organize
  9. Focus on People
  10. Enjoy

Success Factor 3: Automate Regression Testing

Automate tests where you can and continuously improve. As seen in the Agile Testing Quadrants, automation is an essential part of the process. If you’re not automating regression tests, you’re wasting valuable time on manual testing that could be better spent elsewhere. Test automation is a team effort; start small and experiment.

https://www.cigniti.com/blog/agile-test-automation-and-agile-quadrants/

Success Factor 4: Provide and Obtain Feedback

Testers provide a lot of feedback, from the beginning of refinement through to testing acceptance. But keep in mind – feedback is a two-way street, and testers should be encouraged to ask for their own feedback. There are two groups where testers should look for feedback. The first is from the developers. Ask them for feedback about the test cases you are writing. Test cases should inform development, so they need to make sense to the developers. A second place to get feedback is from the PO or customer. Ask them if your tests cover the acceptance criteria satisfactorily and confirm you’re meeting their expectations around quality.

Success Factor 5: Build a Foundation of Core Practices

The following core practices are integral to Agile development.

  • Continuous Integration: One of the first priorities is to have automated builds that happen at least daily.
  • Test Environments: Every team needs a proper test environment to deploy to, where all automated and manual tests can be run.
  • Manage Technical Debt: Don’t let technical debt get away from you; instead, make paying it down part of every iteration.
  • Working Incrementally: Don’t be tempted to take on large stories; instead, break down the work into small stories and test incrementally.
  • Coding and Testing Are Part of One Process: Testers write tests, and developers write code to make the tests pass.
  • Synergy between Practices: Incorporating any one practice will get you started, but a combination of these practices, working together, is needed to gain the full advantage of Agile development.

Success Factor 6: Collaborate with Customers

Collaboration is key, and it isn’t just with the developers. Collaboration with Product Owners and customers helps clarify the acceptance criteria. Testers make good collaborators as they understand the business domain and can speak technically. The “Power of Three” is an important concept – developers, testers, and a business representative form a powerful triad. When we work to understand what our customers value, we include them to answer our questions and clear up misunderstandings; then, we are truly collaborating to deliver value.

Success Factor 7: Look at the Big Picture

The big picture is important for the entire team. Developers may focus on implementation details, while testers and Product Owners keep a view into the big picture. Aside from sharing the big picture with the team, the four quadrants can help guide development so developers don’t miss anything.

In addition, you’ll want to ensure your test environments are as similar as possible to production.  Test with production-like data to make the test as close to the real world as possible. Help developers take a step back and see the big picture.

Summary

At the end of the day, developers and testers form a strong partnership. They both have their area of expertise. However, the entire team is the first line of defense against poor quality. The focus is not on how many defects are found; the focus is on delivering real value after each iteration.

When I was first introduced to Agile development, it felt like a natural flow for developers and business stakeholders to collaborate and deliver functionality in short iterations. It was rewarding (and sometimes disappointing) to demo features every two weeks and get direct feedback from users. Continuous Integration tools matured to make the delivery process more automated and consistent. However, the operations team was left out of this process. Environment provisioning, maintenance, exception handling, performance monitoring, security – all these aspects were typically deprioritized in favor of keeping the feature release cadence. The DevOps/DevSecOps movement emerged as a cultural and technical answer to this dilemma, advocating for a much closer relationship between development and operations teams. 

Today, companies are rapidly expanding their cloud infrastructure footprint. What I’ve heard in discussions with customers is that the business value driven by the cloud is simply too great to ignore. However, much like the relationship between development and ops teams during the early Agile days, a gap is forming between Finance and DevOps teams. Traditional infrastructure budgeting and planning don’t work when you’re moving from a CapEx to an OpEx cost structure. Engineering teams can provision virtually unlimited cloud resources to build solutions, but cost accountability is largely ignored. Call it the pandemic cloud spend hangover.

Our customers see the flexibility of the cloud as an innovation driver rather than simply an expense. But they still need to understand the true value of their cloud spend – which products or systems are operating efficiently? Which ones are wasting resources? 

I decided to look into FinOps practices to discover techniques for optimizing cloud spend. I researched the FinOps Foundation and read the book, Cloud FinOps. Much like the DevOps movement, FinOps seeks to bring cross-functional teams together before cloud spend gets out of hand. It encompasses both cultural and technical approaches. 

Here are some questions that I had before and the answers that I discovered from my research: 

Where do companies start with FinOps without getting overwhelmed by yet another oversight process? 

Start by understanding where your costs are allocated. Understand how the cloud provider’s billing details are laid out and seek to apply the correct costs to a business unit or project team. Resource tagging is an essential first step to allocating costs. The FinOps team should work together to come up with standard tagging guidelines. 
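As a purely illustrative sketch of what such guidelines can look like in practice (the tag keys below are examples we made up, not a standard), a small check like this could run in CI before resources are provisioned:

```typescript
// Hypothetical tagging standard: the FinOps team agrees on required keys,
// and engineering teams validate resources against them before provisioning.
const REQUIRED_TAGS: string[] = ["cost-center", "team", "environment"];

// Returns the required keys that a resource's tag set is missing.
function missingTags(resourceTags: Record<string, string>): string[] {
  return REQUIRED_TAGS.filter((key) => !(key in resourceTags));
}

// A resource tagged only with "team" would be flagged for the other two keys.
const gaps = missingTags({ team: "checkout" });
```

The specifics will differ per organization; the point is that the guidelines are written down and enforceable, not tribal knowledge.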

Don’t assume the primary goal is cost savings. Instead, approach FinOps as a way to optimize cloud usage to meet your business objectives. Encourage reps from engineering and finance to work together to define objectives and key results (OKRs). These objectives may be different for each team/project and should be considered when making cloud optimization recommendations. For example, if one team’s objective is time-to-market, then costs may spike as they strive to beat the competition.  

What are some common tagging/allocation strategies?

Cloud vendors provide granular cost data down to the millisecond of usage. For example, AWS Lambda recently went from rounding to the nearest 100ms of duration to the nearest millisecond. However, it’s difficult to determine what teams/projects/initiatives are using which resources and for how long. For this reason, tagging and cost allocation are essential to FinOps. 
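To see why billing granularity matters, here is a quick illustration; the per-GB-second rate is a made-up figure for the arithmetic, not a quoted AWS price:

```typescript
// Illustrative cost of a single Lambda invocation under two rounding rules.
// RATE_PER_GB_SECOND is an assumed figure, not an actual AWS price.
const RATE_PER_GB_SECOND = 0.0000167;

function invocationCost(
  durationMs: number,
  memoryGb: number,
  billingGranularityMs: number
): number {
  // Duration is rounded up to the billing granularity before pricing.
  const billedMs = Math.ceil(durationMs / billingGranularityMs) * billingGranularityMs;
  return (billedMs / 1000) * memoryGb * RATE_PER_GB_SECOND;
}

// A 27 ms invocation at 0.5 GB used to be billed as 100 ms; now it's 27 ms.
const before = invocationCost(27, 0.5, 100);
const after = invocationCost(27, 0.5, 1);
```

For short-running functions, the move to per-millisecond billing cuts the billed duration substantially, which is exactly why granular cost data is worth allocating carefully.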

According to the book, there are generally two approaches for cost allocation: 

  1. Tagging – these are resource-level labels that provide the most granularity. 
  2. Hierarchy-based – these are at the cloud account or subscription level. For example, using separate AWS accounts for prod/dev/test environments or different business units. 

Their recommendation is to start with hierarchy-based allocations to ensure the highest level of coverage. Tagging is often overlooked or forgotten by engineering teams, leading to unallocated resources. That doesn’t mean you should skip tags; rather, establish a consistent tagging strategy to set team expectations.
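Here is a minimal sketch of how the two approaches can combine, with hierarchy-based mapping guaranteeing full coverage and tags refining ownership where they exist (the account IDs, tag keys, and team names are hypothetical):

```typescript
// Hypothetical two-level cost allocation: account hierarchy first, tags second.
interface LineItem {
  accountId: string;
  cost: number;
  tags: Record<string, string>;
}

// Hierarchy-based mapping: every known account rolls up to a business unit.
const ACCOUNT_TO_UNIT: Record<string, string> = {
  "111111111111": "retail",
  "222222222222": "platform",
};

function allocate(items: LineItem[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const item of items) {
    const unit = ACCOUNT_TO_UNIT[item.accountId] ?? "unallocated";
    // Tag-based refinement: fall back to the unit bucket when untagged.
    const owner = item.tags["team"] ?? `${unit}/untagged`;
    totals[owner] = (totals[owner] ?? 0) + item.cost;
  }
  return totals;
}
```

In a real pipeline the mapping would come from your cloud provider’s billing export rather than a hard-coded table, but the shape of the pass is the same.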

How do you adopt a FinOps approach without disrupting the development team and slowing down their progress? 

The nature of usage-based cloud resources puts spending responsibility on the engineering team since inefficient use can affect the bottom line. This is yet another responsibility that “shifts left”, or earlier in the development process. In addition to shifting left on security/testing/deployment/etc., engineering is now expected to monitor their cloud usage. How can FinOps alleviate some of this pressure so developers can focus on innovation? 

Again, collaboration is key. Demands to reduce cloud spend cannot be a one-way conversation. A key theme in the book is to centralize rate reduction and decentralize usage reduction (cost avoidance).  

  • Engineering teams understand their resource needs so they’re responsible for finding and reducing wasted/unused resources (i.e., decentralized).  
  • Rate reduction techniques like using reserved instances and committed use discounts are best handled by a centralized FinOps team. This team takes a comprehensive view of cloud spend across the organization and can identify common resources where reservations make sense. 

Usage reduction opportunities, such as right sizing or shutting down unused resources, should be identified by the FinOps team and provided to the engineering teams. These suggestions become technical debt and are prioritized along with other work in the backlog. Quantifying the potential savings of a suggestion allows the team to determine if it’s worth spending the engineering hours on the change. 

https://www.finops.org/projects/encouraging-engineers-to-take-action/

How do you account for cloud resources that are shared among many different teams?

Allocating cloud spend to specific teams or projects based on tagging ensures that costs are distributed fairly and accurately. But what about shared costs like support charges? The book provides three examples for splitting these costs:

  • Proportional – Distribute shared costs proportionally based on each team’s actual cloud spend. The more you spend, the higher your allocation of support and other shared costs. This is the recommended approach for most organizations.
  • Evenly – Split shared costs evenly among teams.
  • Fixed – Apply a pre-determined fixed percentage for each team.
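As a quick sketch, the three strategies are easy to express in code (the function and team names are ours, not from the book):

```typescript
// Proportional: shared cost allocated in proportion to each team's direct spend.
function proportionalSplit(
  directSpend: Record<string, number>,
  shared: number
): Record<string, number> {
  const total = Object.values(directSpend).reduce((sum, v) => sum + v, 0);
  const result: Record<string, number> = {};
  for (const [team, spend] of Object.entries(directSpend)) {
    result[team] = shared * (spend / total);
  }
  return result;
}

// Evenly: every team gets the same slice.
function evenSplit(teams: string[], shared: number): Record<string, number> {
  const result: Record<string, number> = {};
  for (const team of teams) result[team] = shared / teams.length;
  return result;
}

// Fixed: pre-agreed fractions per team (assumed to sum to 1).
function fixedSplit(
  fractions: Record<string, number>,
  shared: number
): Record<string, number> {
  const result: Record<string, number> = {};
  for (const [team, f] of Object.entries(fractions)) result[team] = shared * f;
  return result;
}
```

Whichever strategy you pick, the key is that it is agreed upon up front so teams aren’t surprised by their share of the bill.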

Overall, I thought the authors did a great job of introducing Cloud FinOps without overwhelming the reader with another rigid set of practices. They encourage the Crawl/Walk/Run approach to get teams started on understanding their cloud spend and where they can make incremental improvements. I had some initial concerns about FinOps bogging down the productivity and innovation coming from engineering teams. But the advice from practitioners is to provide data to inform engineering about upward trends and cost anomalies. Teams can then make decisions on where to reduce usage or apply for discounts.

The cloud providers are constantly changing, introducing new services and cost models. FinOps practices must also evolve. I recommend checking out the Cloud FinOps book and the related FinOps Foundation website for up-to-date practices.

As a Senior Scrum Master, I’ve worked with many organizations, and I’m frequently asked by leaders one common question: how long will it take for my team to be Agile? The answer is never easy; developing an Agile mindset can be complex. From senior executives to developers, everyone in the organization must be open-minded and willing to change. Of course, there will always be resistance, but this can be handled through open dialogue and continual conversation.

I’d like to walk through a staged representation of the maturity levels each team goes through in becoming Agile. Maturity levels are an important metric that organizations are intrigued and excited to see because they show progress in the Agile journey. I have used maturity levels extensively with teams, and it’s always great to show progress; but remember, it’s just a tool.

So, with that being said, let’s get started. I will review each maturity level a team goes through, and along the way, I’ll share my perspective and lessons learned.

 

Level 1 is a Learning Phase. As teams get started on their Agile journey, it’s essential to introduce and establish an Agile mindset. From my experience, an Agile Boot Camp is a great way to create this awareness and introduce some initial concepts. It’s also an opportunity to see the team composition, make introductions, and begin a new way of working together. While Level 1 focuses on awareness, you can’t short-change the importance of this initial step; it sets the foundation for the beginning of the journey and the team’s future success.

To reach Level 2, teams must have a solid understanding of what it means to be Agile, and they must also recognize there is a difference between Agile and Scrum/Kanban. Frequently, I hear teams using words like Agile and Scrum interchangeably; it’s important everyone understands that Agile is a mindset and set of principles, whereas Scrum and Kanban are frameworks for putting Agile into practice. The Agile ceremonies or events should be scheduled with a set agenda, and teams should practice story-card-based requirement gathering.

Once the team has a firm grip on what it means to be Agile (they’re practicing the events, using terms correctly, and understanding the different frameworks), it’s time to move to Level 3, where the focus is on proper planning, practicing trade-offs, backlog management, and inspect/adapt principles. Better planning is your key to executing a sprint well. A solid backlog and inspect/adapt principles will play a crucial part in the team’s success. The team should also be encouraged to start practicing trade-offs.

As a Scrum Master and coach, I continually talk with my teams about embracing change. As they transition from Waterfall to Agile, the team should practice the “yes, I can, but…” phrase. For example, the Product Owner issues a new requirement, but the team already has a full plate of work. At this point, the team should be willing to practice a trade-off: accept the new requirement, but open a conversation with the Product Owner to reprioritize existing items. Through this process, the team embraces change while practicing trade-offs so it doesn’t burn out.

As we move up the ladder, Level 4 is about the team starting to self-organize. In the planning session, there is a discussion about the sprint goals and stories, and the team should be able to self-organize and pick up the tasks that will help them achieve the sprint goal. Remember – it’s team commitment, not individual commitments, that matter. The team should be able to start measuring the process and looking at ways to improve, such as introducing some automation in the form of testing.

Level 5 focuses on improving T-shaped skills, which can be attained through a buddy-pair system within the team. Through this process, knowledge is gained and shared across teammates, ensuring the team becomes cross-functional as time progresses. Teams will now be experienced enough to look at advanced automation techniques like RPA and AI, develop CI/CD pipelines, and eventually work in a DevOps model.

In conclusion, an Agile Transformation begins with a people mindset. While we looked at the Agile Transformation maturity levels, it’s important to recognize the effort put in by all players within an organization. From developers to Product Owners to Agile sponsors, everyone plays a role in achieving a successful Agile Transformation. As your organization moves through the different levels, remember, it’s going to be a bumpy ride. There will be forward momentum as well as setbacks, and it takes time. But as a Scrum Master who has worked with teams on their transformation journey, I can say that moving through the maturity levels is a process, and the result is worth it.

We’ve been using the AWS Amplify toolkit to quickly build out a serverless infrastructure for one of our web apps. The services we use are IAM, Cognito, API Gateway, Lambda, and DynamoDB. We’ve found that the Amplify CLI and platform are a nice way to get up and running. We then update the resulting CloudFormation templates as necessary for our specific needs. You can see our series of videos about our experience here.

The Problem

However, starting with Amplify CLI version 7, AWS changed the way you override Amplify-generated resource configurations (the CloudFormation template, or CFT, files). We found this out the hard way when we tried to update the generated CFT files directly. After upgrading the CLI and running amplify push, our changes were overwritten with default values – NOT GOOD! Specifically, we wanted to add a custom attribute to our Cognito pool.

After a few frustrating hours of troubleshooting and support from AWS, we realized that the Amplify CLI tooling changed how to override Amplify-generated content. AWS announced the changes here, but unfortunately, we didn’t see the announcement or accompanying blog post.

The Solution

Amplify now generates an “override.ts” TypeScript file for you to provide your own customizations using Cloud Development Kit (CDK) constructs.

In our case, we wanted to create a Cognito custom attribute. Instead of changing the CFT directly (under the new “build” folder in Amplify), we generated an “override.ts” file using the command “amplify override auth”. We then added our custom attribute using the CDK:
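We can’t reproduce our exact file here, but it followed Amplify’s documented override pattern, roughly like this sketch (the attribute name is illustrative, not the one from our project):

```typescript
import { AmplifyAuthCognitoStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyAuthCognitoStackTemplate) {
  // Append a custom attribute to the generated user pool schema.
  // "favorite_color" is an illustrative name.
  resources.userPool.schema = [
    ...(resources.userPool.schema ?? []),
    {
      attributeDataType: 'String',
      developerOnlyAttribute: false,
      mutable: true,
      name: 'favorite_color',
      required: false,
    },
  ];
}
```

Amplify applies this function on top of its generated CloudFormation during amplify push, so your customization survives regeneration.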

Important Note: The amplify folder structure gets changed starting with CLI version 7. To avoid deployment issues, be sure to keep your CLI version consistent between your local environment and the build settings in the AWS console. Here’s the Amplify Build Setting window in the console (note that we’re using “latest” version):

 

If you’re upgrading your CLI, especially to version 7, make sure to test deployments in a non-production environment first.

What are some other uses for this updated override technique? The Amplify blog post and documentation mention examples like Cognito overrides for password policies and IAM roles for auth/unauth users. They also mention S3 overrides for bucket configurations like versioning.

For DynamoDB, we’ve found that Amplify defaults to a provisioned capacity model. There are benefits to this, but this model charges an hourly rate for consumption whether you use it or not. This is not always ideal when you’re building a greenfield app or a proof-of-concept. We used the amplify override tools to set our billing mode to On-demand or “Pay per request”. Again, this may not be ideal for your use case, but here’s the override.ts file we used:
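Our file followed the documented DynamoDB override pattern, roughly like this sketch (treat it as a starting point rather than a verbatim copy):

```typescript
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Switch the generated table from provisioned capacity to on-demand billing.
  resources.dynamoDBTable.billingMode = 'PAY_PER_REQUEST';
  // Clear the provisioned throughput block Amplify generated; it does not
  // apply under pay-per-request billing.
  resources.dynamoDBTable.provisionedThroughput = undefined;
}
```

Generate the file with “amplify override storage” and the CLI will merge this override into the table’s CloudFormation on the next push.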

Conclusion

At first, I found this new override process frustrating since it discourages direct updates to the generated CFT files. But I suppose this is a better way to abstract out your own customizations and track them separately. It’s also a good introduction to the AWS CDK, a powerful way to program your environment beyond declarative yaml files like CFT.

Further reading and references:

DynamoDB On-Demand: When, why and how to use it in your serverless applications

Authentication – Override Amplify-generated Cognito resources – AWS Amplify Docs

Override Amplify-generated backend resources using CDK | Front-End Web & Mobile (amazon.com)

Top reasons why we use AWS CDK over CloudFormation – DEV Community

AWS Amplify is a set of tools that promises to make full-stack, cloud-native development quicker and easier. We’ve used it to build and deploy different products without getting bogged down by heavy infrastructure configuration. On one hand, Amplify gives you a rapid head start with services like Lambda functions, APIs, CI/CD pipelines, and CloudFormation/IaC templates. On the other hand, you don’t always know what it’s generating and how it’s securing your resources.

If you’re curious about rapid development tools that can get you started on the road to serverless but want to understand what’s being created, check out our series of videos.

We’ll take a front-end web app and incrementally build out authentication, API/function, and storage layers. Along the way, we’ll point out any gotchas or lessons learned from our experience.

According to Deutsche Bank CIO Frederic Veron, “enterprises that wish to reap the potentially rich rewards of getting IT and business line leaders to build software together in agile fashion must also embrace the DevOps model.”[1]

Why is that? It’s simple: DevOps is necessary to scale Agile. DevOps practices are what enable an organization to rapidly deploy changes to many different parts of their product, across many products, on a frequent basis—with confidence.

That last part is key. Companies like Amazon, Google, and Netflix developed DevOps methods so that they could deploy frequently at a massive scale without worrying about breaking something. DevOps is, at its core, a risk management strategy. DevOps practices are what enable you to maintain a complex multi-product ecosystem and make sure that everything works. DevOps replaces traditional risk management approaches with what the Agile 2 authors call real-time risk management.[2]

You might think that all this is just for software product companies. But today, most organizations operate on a technology platform, and if you do, then DevOps applies. DevOps methods apply to any enterprise that creates and maintains products and services that are defined by digital artifacts.

DevOps methods apply to any enterprise that creates and maintains products and services that are defined by digital artifacts.

That includes manufacturers, online commercial services, government agencies that use custom software to provide services to constituents, and pretty much any large commercial, non-profit, and public sector enterprise today.

As JetBlue and Breeze airlines founder David Neeleman said, “we’re a high-tech company that just happens to fly airplanes,”[3] and Capital One Bank’s CIO Rob Alexander said, “We’re a founder-led, 20-year-old technology company.”[4]

Most large businesses today are fundamentally technology companies that direct their efforts toward the markets in which they have expertise, assets, and customer relationships.

DevOps Is Necessary at Scale

Scaling frameworks such as SAFe and DA provide potentially useful patterns for organizing the work of lots of teams. However, DevOps is arguably more important than any framework, because without DevOps methods, scaling is not even possible, and many organizations (Google, Amazon, Netflix…) use DevOps methods at scale without a scaling framework.

If teams cannot deploy their changes without stepping on each other’s work, they will often be waiting or going no faster than the slowest team, and lots of teams will have a very difficult time managing their dependencies—no framework will remedy that if the technical methods for multi-product dependency management and on-demand deployment at scale are not in place. If you are not using DevOps methods, you cannot scale your use of Agile methods.

How Does Agile 2 View DevOps?

DevOps as it is practiced today is technical. When you automate things so that you can make frequent improvements to your production systems without worrying about a mistake, you are using DevOps. But DevOps is not a specific method. It is a philosophy that emerged over time. In practice, it is a broad set of techniques and approaches that reflect that common philosophy.

With the objective of not worrying in mind, you can derive a whole range of techniques to leverage tools that are available today: cloud services, elastic resources, and approaches that include horizontal scaling, monitoring, high-coverage automated tests, and gradual releases.

While DevOps and Agile seem to overlap, especially philosophically, DevOps techniques are highly technical, while the Agile community has not focused on technical methods for a very long time. Thus, DevOps fills a gap, and Agile 2 promotes the idea that Agile and DevOps go best together.

DevOps evangelist Gene Kim has summarized DevOps by his “Three Ways.”[5]  One can paraphrase those as follows:

  1. Systems thinking: always consider the whole rather than just the part.
  2. Use feedback loops to learn and refine one’s artifacts and processes over time.
  3. Treat everything as an experiment that you learn from, and adjust accordingly.

The philosophical approaches are very powerful for the DevOps goal of delivering frequent changes with confidence, because (1) a systems view informs you on what might go wrong, (2) feedback loops in the form of tests and automated checks tell you if you hit the mark or are off, and (3) if you view every action as an experiment, then you are ready to adjust so that you then hit the mark. In other words, you have created a self-correcting system.

Agile 2 takes this further by focusing on the entire value creation flow, beginning with strategy and defining the kinds of leadership that are needed. Agile 2 promotes product design and product development as parallel and integrated activities, with feedback from real users and real-world outcomes wherever possible. This approach embeds Gene Kim’s three DevOps “ways” into the Agile 2 model, unifying Agile 2 and DevOps.

Download this White Paper here!

 

[1] https://www.cio.com/article/3141577/true-agile-software-development-requires-devops.html

[2] Agile 2: The Next Iteration of Agile, by Cliff Berg et al, pp 205 ff.

[3] https://www.businessinsider.com/breeze-airways-pushing-back-launch-until-2021-what-we-know-2020-7

[4] https://www.youtube.com/watch?v=0E90-ExySb8

[5] https://itrevolution.com/the-three-ways-principles-underpinning-devops/

Recently, I read an article titled, “Why Distributed Software Development Teams Work Infinitely Better”, by Boris Kontsevoi.

It’s a bit hyperbolic to say that distributed teams work infinitely better, but it’s something that any software development team should consider now that we’ve all been distributed for at least a year.

I’ve worked on Agile teams for 10-15 years and thought that they implicitly required co-located teams. I also experienced the benefits of working side-by-side with (or at least close to) other team members as we hashed out problems on whiteboards and had ad hoc architecture arguments.

But as Mr. Kontsevoi points out, Agile encourages face-to-face conversation, but not necessarily in the same physical space. The Principles behind the Agile Manifesto were written over 20 years ago, but they’re still very much relevant because they don’t prescribe exactly “how” to follow the principles. We can still have face-to-face conversations, but now they’re over video calls.

This brings me to a key point of the article: “dispersed teams outperform co-located teams and collaboration is key”. The Manifesto states that building projects around motivated individuals is a key Agile principle.

Translation: collaboration and motivated individuals are essential for a distributed team to be successful.

  • You cannot be passive on a team that requires everyone to surface questions and concerns early so that you can plan appropriately.
  • You cannot fade into the background on a distributed team, hoping that minimal effort is good enough.
  • If you’re leading a distributed team, you must encourage active participation by having regular, collaborative team meetings. If there are team members that find it difficult to speak above the “din” of group meetings, seek them out for 1:1 meetings (also encouraged by Mr. Kontsevoi).

Luckily, today’s tools are vastly improved for distributed teams. They allow people to post questions on channels where relevant team members can respond, sparking ad hoc problem-solving sessions that can eventually lead to a video call.

Motivated individuals will always find a way to make a project succeed, whether they’re distributed, co-located, or somewhere in between. The days of tossing software development teams into a physical room to “work it out” are likely over. The new distributed paradigm is exciting and, yes, better – but the old principles still apply.

Last year, we worked with experts from George Mason University to build a COVID screening and tracing platform called Pass2Play. We used this opportunity to implement a Serverless architecture using the AWS cloud.

This video discusses our experience, including our solution goals, high-level design, lessons learned and product outcomes.

It’s specific to our situation, but we’d love to hear about other experiences with the Serverless tools and services offered by AWS, Azure and Google. There are a lot of opinions on Serverless, but there’s no doubt that it’s pushing product developers to rethink their delivery and maintenance processes.

Feel free to leave a comment if we’re missing anything or to share your own experience.

CC Pace was recently featured in Agile Uprising’s blog series.  Agile Uprising is a network focused on advancing the Agile mindset and professional networking among leading Agilists.  In the blog, CC Pace shared a short video highlighting one of our latest projects!  Bobby Pantall, CC Pace Lead Technology Consultant, speaks to our experience building an app for a startup company named Twisty Systems.  The app is a navigation app aimed at driving enthusiasts. In the video, we describe the framework of the Lean Startup methodology and some of the highs and lows in the context of the pandemic and releasing a new app.

Enjoy and please share your thoughts on this project!

In the first installment of our Product Owner Empowerment series, we talked about the three crucial dimensions of ‘Knowledge’ that affect a Product Owner’s effectiveness. This post is going to take a deeper dive into the impact Empathy has on a Product Owner’s ability to succeed.

Empathy: Assuming positive intent, empathy is something that comes naturally to a person. However, environmental factors can influence a person’s ability to relate or connect with another person or team. Let’s explore some aspects of empathy and how they may impact a Product Owner’s success.

  • Empathy towards the team(s): To facilitate an empathetic relationship between a Product Owner and the team, the PO must be able to meet the team where they are (literally and figuratively). Getting to know the team members and building rapport requires the Product Owner to interact extensively with the team and proactively work to build those relationships. Organizations should facilitate this by making sure Product Owners are physically located where the team is and are empowered not only to represent the team to the business but also to protect the team from external interruptions, so that it can function effectively. As alluded to above, a good understanding of what it takes to deliver helps tremendously with the ability to stand in the team’s shoes and see things from their perspective.
  • Empathy towards the customer(s): It is easy to assume that a Product Owner acting on behalf of the business will automatically have empathy and an understanding of their needs to adequately represent their business interests. However, organizational culture can sometimes influence how a Product Owner prioritizes work. If it is only the sponsors directing the team’s scope and prioritization, a critical element of customer input is missed. Product Owners should place sufficient emphasis on obtaining customer opinion and feedback to inform the direction of product development.
  • Empathy in the Organization: This factor relates to the organizational culture. As companies embrace Agile and expect to realize its benefits, the desire to be lean grows. While being lean is a goal every organization should have, it is important to understand the impact a lean push has on individual teams and team members. A systemic push to be lean, combined with less-than-optimal Agile maturity and the presence of antipatterns, can lead to teams being held to unsustainable delivery expectations. This problem is more common than you would think. Most organizations are going through some level of Agile transformation, but leadership’s expectations of benefits realization are often a few steps ahead of where the organization truly is on its Agile journey. Having the right set of expectations, and the empathy necessary to reset them based on continuous learning and feedback, is needed at an organizational level.

Check back next week to see how a Product Owner’s success is tied to psychological safety for themselves and the teams they are working with.

If you are a Product Owner or your Agile team struggles with this role, you won’t want to miss our upcoming webinar on Product Owner Empowerment. The webinar will be held on December 15th, and you can register today here! Space is limited and available on a first-come, first-served basis.

What is App Modernization?

Legacy application modernization is the process of updating existing, aging applications with modern architecture to enhance their features and capabilities. By migrating your legacy applications, you can include the latest functionalities that better align with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair. That is doubly the case when software becomes so outdated that it may not even be compatible with modern systems.

A Quick Look at a Sample Legacy Monolithic Application

For this article, consider a fifteen-year-old legacy monolithic application, as depicted in the following diagram.

 

This diagram depicts a traditional, n-tier architecture that was very common over the past 20 years or so. There are several shortcomings with this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system, rather than to smaller subsystems, causing increased costs for hardware resources.

Why Modernize?

We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.

The advantages are:

  1. Reduce cost
    1. Costs can be reduced by redirecting computing power only to the subsystems that need it. This allows for more granular scalability.
  2. Avoid vendor lock-in
    1. Each subsystem can be built with the technology for which it is best suited.
  3. Reduce operational overhead
    1. Monolithic systems that are written in legacy technologies tend to stay that way, due to increased cost of change. This requires resources with a specific skillset.
  4. De-coupling
    1. Strong coupling makes it difficult to optimize the infrastructure budget.
    2. De-coupling the subsystems makes it easier to upgrade components individually.

Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.

Legacy App Modernization Strategies

Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which could require an entire re-platforming. A lift-and-shift migration will not add any business value if you migrate legacy applications just for the sake of modernization. Instead, modernization is about taking the bones, or DNA, of the original software and updating it to better represent current business needs.

Legacy Monolithic App Modernization Approaches

Having examined the nightmarish aspects of continuing to maintain legacy monolithic applications, this article presents two application modernization strategies. Both are explained below in enough detail to give you a basic idea of each, so that you can pick whichever is feasible within your constraints.

  • Migrating to Microservices Architecture
  • Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake

Microservices Architecture

In this section, we shall dig into how re-architecting, re-factoring, and re-coding per the microservices paradigm help avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you better understand the microservices architecture, a leap forward from the legacy monolithic architecture.

 

At a quick glance at the diagram above, you can see one big central piece: an API Gateway with a Discovery Client. This is comparable to a Façade in a monolithic application. The API Gateway is essentially the entry point for accessing the several microservices, which are comparable to modules in a monolithic application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, since all data access is routed through a single Database Microservice (shown in the diagram). In other words, the API Gateway/Orchestrator sequences calls according to the business logic and invokes the Database Microservice on the other services’ behalf, as the individual microservices have no direct access to the database. Notice also that this architecture supports a variety of client systems, such as mobile apps, web apps, IoT apps, MQTT apps, and so on.

Although this architecture gives us an edge by allowing different technologies in different microservices, it leaves us with a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and the object/data model, which requires it to be re-deployed and tested after each microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step towards building heterogeneous systems that work in tandem to provide a complete solution, a goal that would otherwise be impossible with a monolithic legacy application architecture.
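To make that coupling concrete, here is a minimal, self-contained sketch written in Elixir (the language used in the development posts elsewhere on this blog); every module and function name is hypothetical and stands in for a real service. The point is that the call sequence, i.e. the business logic, lives inside the gateway:

```elixir
# Hypothetical stand-ins for individual microservices.
defmodule PricingService do
  def total(items), do: {:ok, Enum.sum(items)}
end

defmodule DatabaseService do
  def fetch(:customers, id), do: {:ok, %{id: id}}
  def insert(:orders, order), do: {:ok, order}
end

defmodule ApiGateway do
  # The orchestrator embeds the business logic (the call sequence below),
  # so a contract change in any one service forces a gateway redeploy.
  def place_order(customer_id, items) do
    with {:ok, customer} <- DatabaseService.fetch(:customers, customer_id),
         {:ok, total} <- PricingService.total(items),
         {:ok, order} <- DatabaseService.insert(:orders, %{customer: customer, total: total}) do
      {:ok, order}
    end
  end
end

ApiGateway.place_order(42, [10, 20])  # {:ok, %{customer: %{id: 42}, total: 30}}
```

Because all data access funnels through the single database service, none of the other services can be deployed or tested truly independently, which is exactly the limitation described above.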

Microservices Architecture with Realtime Data Movement to Data Lake

In this section, we shall dig into how the full range of strategies (re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or replacement and retirement of your legacy systems) applied per the microservices paradigm helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you understand a more complete, advanced microservices architecture.

 

At first glance, most of the diagram for this approach looks like the previous one. However, it adheres more closely to the actual microservices paradigm. Here, each microservice is independent and has its own micro database, of whatever flavor best fits its business needs; this avoids both the dependency on a central Database Microservice and the overloading of the API Gateway as an orchestrator carrying business logic. The advantage of this approach is that each microservice can have its own CI/CD pipeline and release cadence. In other words, one part of the application can be released on its own, and with TDD/ATDD properly implemented, the costs incurred for testing, deployment, and release management drop. This kind of architecture does not limit the overall solution to any particular technical stack; it encourages quick solutions built with a variety of stacks and gives you the flexibility to scale resources for heavily used microservices when necessary.

Besides this, the architecture encourages a Realtime Engine (which can be a microservice itself) that reads data from the various databases asynchronously, applies data aggregation and deduplication algorithms, and sends pristine data to a data lake. Advanced applications can then use the data from the data lake for machine learning and data analytics to serve business needs.

 

Note: This article was not written with any particular cloud flavor in mind. This is a general app modernization microservices architecture that can run anywhere: on-prem, OpenShift (private cloud), Azure, Google Cloud, or AWS.

At heart, I’m still a developer. Titles or positions aside, what I enjoy most is solving problems and writing good code to implement the solutions. And so, when I work on strategic technology engagements, I often empathize with the developer teams, and can clearly see their points of view. 

One issue I often see on such engagements is that the developer teams find little or no value in the Agile retrospective meetings. I hear expressions like, “waste of time,” or, “lots of psychobabble.” I understand where developers are coming from when they express these sentiments, but I disagree with them. If done well (emphasis on well), retrospectives are absolutely essential and make developers’ lives much better. When developers are so cynical about retrospectives, I know that retrospectives are not being conducted very well. 

Let’s take a look at some of the “anti-patterns” I’ve seen. 

Pro forma retrospectives 

In a pro forma retrospective, everybody takes turns to say what they like and what they think can be improved. So far, so good. But nobody is encouraged to develop their thoughts or to expound. So week after week, the retrospective’s answers are, “Everyone is doing a great job, working hard,” and, “Nothing is really wrong.” At that point, the meeting is no longer a retrospective, it’s just a ceremony.

The coach or the scrum master should encourage everyone to speak their minds. Developers coming from a waterfall background, or from a “command and control” background, may not be comfortable expressing their doubts and reservations. Make sure they’re encouraged to do so. 

Using attendees as lab rats 

Some coaches I worked with thought that projects were an ideal way to test out new approaches to retrospectives that they had dreamed up. One involved putting stickies on team members’ foreheads and trying to guess something… I don’t even remember the details, the whole thing was so silly. Another thought that putting things in verse was a good idea. I have no idea where in left field these notions come from, but all they succeeded in doing was making the team members feel totally patronized. 

Stick with tried and true methods: ask for things that are going well and should be reinforced, then things that need improvement, then action items for reinforcement and improvement. If you want to experiment, that’s fine, but make it clear it’s an experiment and ask for feedback on the experiment itself. 

Not following up on action items 

Some retrospectives do everything right in the meeting, but nobody acts on the action items. Now it’s just a gab session, not a useful discussion. Make sure the Scrum Master acts outside the meeting to reinforce the good and improve the bad. Then discuss the actions taken at the next retrospective, so that everyone sees something is being done. If there’s time, revisit the improvement areas from the last meeting and ask if things have improved. 

Without actual follow-up, developers will chalk up another “I’ve seen it all before” score, and will see the meetings as wastes of time. 

Putting people on the spot 

Retrospectives should be about making things better, not about forcing members to be defensive. I’ve seen them run as blame sessions, where “who’s to blame for this?” is the aim. Don’t be that person. The team should be accountable, yes, but trying to affix blame on individuals or groups isn’t productive. Instead, ask everyone for suggested action items to improve the situation, and make sure everyone signs on to the action items. 

As developers, we’re trained to expect the worst and to see risk everywhere. Sometimes, it makes us cynical and resistant to faddish concepts. Retrospectives are not that. They’re useful forums to express frustrations in a civil and productive way so that frustrations don’t build up and lead to disruptive behavior in other settings. If conducted well and if effective action is taken, they really do lead to tangible improvements that make everyone’s jobs much easier, including developers. 

If you notice that your developer team thinks retrospectives are worthless, glance at this checklist: 

  • Don’t patronize the developers or ask them to defend themselves 
  • Show you’re listening by actually effecting requested changes 
  • Value the outcome of the meetings over the process of the meetings 

Introduction
In the last post, we looked at pattern matching and data structures in Elixir.  In this post, we’re going to look particularly at Elixir processes and how they affect handling errors and state.

Supervisors & Error Handling
Supervisors, or rather supervision trees as controlled by supervisors, are another differentiating feature of Elixir.  Processes in Elixir are supposed to be much lighter weight than threads in other languages,[1] and you’re encouraged to create separate processes for many different purposes.  These processes can be organized into supervision trees that control what happens when a process within the tree terminates unexpectedly.  There are a lot of options for what happens in different cases, so I’ll refer you to the documentation to learn about them and just focus on my experiences with them.  Elixir has a philosophy of “failing fast” and letting a supervision tree handle restarting the failing process automatically.  I tried to follow this philosophy to start with, but I’ve been finding that I’m less enamoured with this feature than I thought I would be.  My initial impression was that this philosophy would free me of the burden of thinking about error handling, but that turned out not to be the case.  If nothing else, you have to understand the supervision tree structure and how that will affect the state you’re storing (yes, Elixir does actually have state, more below) because restarting a process resets its state.  I had one case where I accidentally brought a server down because I just let a supervisor restart a failed process.  The process dutifully started up again, hit the same error and failed again, over and over again.  In surprisingly short order, this caused the log file to take up all the available disk space and took down the server entirely.  Granted this could happen if I had explicit error handling, too, but at least there I’d have to think about it consciously and I’m more likely to be aware of the possibilities.  
You also have to be aware of how any child processes you created, either knowingly or unknowingly through libraries, are going to behave and add some extra code to your processes if you want to prevent the children bringing your process down with them.  So, I’m finding supervisors a mixed blessing at best.  They do offer some convenience, but they don’t really allow you to do anything that couldn’t be done in other languages with careful error handling.  By hiding a lot of the necessity for that error handling, though, I feel like it discourages me from thinking about what might happen and how I would want to react to those events as much as I should.  As a final note tangentially related to supervisors, I also find that the heavy use of processes and Elixir’s inlining of code can make stack traces less useful than one might like.
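As an illustration of the state-reset behavior described above, here is a small self-contained sketch (the Counter module is my invention, not code from any particular project): a supervised GenServer holds a running count, and killing the process shows that the restart brings back the initial state, not the accumulated one.

```elixir
defmodule Counter do
  use GenServer

  def start_link(_arg), do: GenServer.start_link(__MODULE__, 0, name: __MODULE__)
  def increment, do: GenServer.call(__MODULE__, :increment)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
end

{:ok, _sup} = Supervisor.start_link([Counter], strategy: :one_for_one)
Counter.increment()                            # returns 1; the server state is now 1
Process.exit(Process.whereis(Counter), :kill)  # simulate an unexpected failure
Process.sleep(100)                             # give the supervisor a moment to restart it
Counter.increment()                            # returns 1 again: the restart reset the state to 0
```

If the process crashed for the same reason every time it restarted, this is exactly the restart loop that filled the log and took down the server in the story above.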

All that being said, supervisors are very valuable in Elixir because, at least to my thinking, the error handling systems are a mess.  You can have “errors,” which are “raised” and may be handled in a “rescue” clause.  You have “throws,” which appear not to have any other name, are “thrown,” and may be handled in a “catch” clause.  And you have “exits,” which can also be handled in a “catch” clause, or trapped and converted to a message that is sent to the process outside of the current flow of control.  I feel like the separation of errors and throws is a distinction without a difference.  In theory, throws should be used if you intend to control the flow of execution, and errors should be used for “unexpected and/or exceptional situations,” but the idea of what’s “unexpected and/or exceptional” can vary a lot between people, so there have been a few times when third-party libraries I’ve been using raised an error in a situation that I think should have been perfectly expectable and should have been handled by returning a result code instead.  (To be fair, I haven’t come across any code using throws.  The habit seems to be to handle things with with statements instead.)  There is a recommendation that one should have two versions of a method: one that returns a result code, and another with the same name suffixed with a bang (File.read vs. File.read!, for example) that raises an error.  But library authors don’t follow this convention religiously themselves, and sometimes you’ll find a function that’s supposed to return no matter what calling a function in yet another library that does raise an error, so you can’t really rely on it.
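For the record, the three mechanisms, and the bang convention, look like this in code (a sketch; the file name is arbitrary and deliberately nonexistent):

```elixir
# An "error" is raised and rescued.
try do
  raise ArgumentError, "something unexpected"
rescue
  e in ArgumentError -> IO.puts("rescued: " <> e.message)
end

# A "throw" is thrown and caught.
try do
  throw(:early_result)
catch
  :throw, value -> IO.puts("caught: #{inspect(value)}")
end

# The bang convention: File.read returns a result tuple,
# while File.read! raises an error on failure.
case File.read("no_such_file.txt") do
  {:ok, contents} -> IO.puts(contents)
  {:error, reason} -> IO.puts("read failed: #{inspect(reason)}")
end
```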

I haven’t seen any code explicitly using an exit statement or any flow control trying to catch exits, but I have found that converting the unintentional errors that sometimes happen into messages allows me to avoid other processes taking down my processes, thus allowing me to maintain the state of my process rather than having it start from its initial state again.  That’s something I appreciate, but I still find it less communicative when reading code: the conversion is handled by setting a flag on the process, and that flag persists until explicitly turned off, so I find it harder to pinpoint where in my code I was calling the thing that actually failed.
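The flag in question is the process’s trap_exit flag; a minimal illustration of the conversion (not code from the project described here):

```elixir
# With trap_exit set, a linked process's crash arrives as an
# ordinary message instead of taking this process down with it.
Process.flag(:trap_exit, true)
pid = spawn_link(fn -> exit(:boom) end)

receive do
  {:EXIT, ^pid, reason} -> IO.puts("linked process exited: #{inspect(reason)}")
end
```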

All-in-all, I think that three forms of error handling are too many and that explicit error handling is easier to understand and forces one into thinking about it more.  (I happen to believe that the extra thinking is worthwhile, but others may disagree.)  That being said, adding supervision trees to other languages might be a nice touch.

State
Given that the hallmark of functional programming languages is being stateless, it may seem odd to conclude with a discussion of state, but the world is actually a stateful place and Elixir provides ways to maintain state that are worth discussing.

At one level, Elixir is truly stateless in that it’s not possible to modify the value of a variable.  Instead, you create a new value that’s a function of the old value.  For example, to remove an element from a list, you’d have to write something like:

my_list = List.delete_at(my_list, 3)

It’s not just that the List.delete_at function happens to create a new list with one fewer element; there’s no facility in the language for modifying the original value.  Elixir, however, does let you associate a variable with a process, and this effectively becomes the state of the process.  There are several different forms that these processes can take, but the one that I’ve used (and seen most used) is the GenServer.  The example in the linked document is quite extensive, and I refer you to that for details, but essentially there are two things to understand for the purposes of my discussion.  On the client side, just like in an OOP language, you don’t need to know anything about the state associated with a process, so, continuing with the Stack example from the documentation, a client would add an element to a stack by calling something like:

GenServer.call(process_identifier, {:push, "value"})

(Note that process_identifier can be either a system generated identifier or a name that you give it when creating the process.  I’ll explain why that’s important in a little while.)  On the server side, you need code something like:

def handle_call({:push, value}, from, state) do
  new_state = [value | state]
  {:reply, :ok, new_state}
end

Here, the tuple {:push, value} matches the second parameter of the client’s invocation.  The from parameter is the process identifier for the client, and state is the state associated with the server process.  The return from the server function must be a tuple, the first value of which is a command to the infrastructure code.  In the case of the :reply command, the second value of the tuple is what will be returned to the client, and the final value in the tuple is the state that should be associated with the process going forward.  Within a process, only one GenServer callback (handle_call, handle_cast, handle_info; see the documentation for details) can be running at a time.  Strictly in terms of the deliberate state associated with the process, this structure makes concurrent programming quite safe because you don’t need to worry about concurrent modification of the state.  Honestly, you’d achieve the same effect in Java, for example, by marking every public method of a class as synchronized.  It works, and it does simplify matters, but it might be overkill, too.  On the other hand, when I’m working with concurrency in OOP languages, I tend towards patterns like Producer/Consumer relationships, which I feel are just as safe and offer me more flexibility.
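Putting the client and server sides together, a complete minimal Stack server along the lines of the documentation’s example might look like the following (the :pop handler is my addition for completeness):

```elixir
defmodule Stack do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call({:push, value}, _from, state) do
    {:reply, :ok, [value | state]}  # the lengthened list becomes the new state
  end

  def handle_call(:pop, _from, [head | tail]) do
    {:reply, head, tail}
  end
end

{:ok, _pid} = Stack.start_link([])
:ok = GenServer.call(Stack, {:push, "value"})
GenServer.call(Stack, :pop)  # returns "value"
```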

There is a dark side to Elixir’s use of processes to maintain state, too: Beyond the explicit state discussed above, I’ve run into two forms of implicit state that have tripped me up from time to time.  Not because they’re unreasonable in-and-of themselves, but because they are hidden and as I become more aware of them, I realize that they do need thinking about.  The first form of implicit state is the message queue that’s used to communicate with each process.  When a client makes a call like GenServer.call(pid, {:push, “something”}), the system actually puts the {:push, “something”} message onto the message queue for the given pid and that message waits in line until it gets to the front of the queue and then gets executed by a corresponding handle_call function.  The first time this tripped me up was when I was using a third-party library to talk to a device via a TCP socket.  I used this library for a while without any issues, but then there was one case where it would start returning the data from one device when I was asking for data from another device.  As I looked at the code, both the library code and mine, I realized that my call was timing out (anytime you use GenServer.call, there’s a timeout involved; I was using the default of 5 seconds and it was that timeout being triggered instead of a TCP timeout), but the device was still returning its data somewhat after that.  The timing was such that the code doing the low-level TCP communications put the response data into the higher-level library’s message queue before my code put its next message requesting data into the library’s queue, so the library sent my query to the device but gave my code the response to the previous query and went back to waiting for another message from either side, thus making everything one query off until another query call timed out.  
This was easy enough to solve by making the library use a different process for each device, but it illustrates that you’re not entirely relieved of thinking about state when writing concurrent programs in Elixir.  Another issue I’ve had with the message queue is that anything that has your process identifier can send you any message.  I ran into this when I was using a different third-party library to send text messages to Slack.  The documentation for that library said that I should use the asynchronous send method if I didn’t care about the result.  In fact, it described the method as “fire-and-forget.”  These were useful but non-essential status messages, so that seemed the right answer for me.  Great, until the library decided to put a failure message into the message queue of my process and my code wasn’t written to accept it.  Because there was no function that matched the parameters sent by the library, the process crashed, the supervisor restarted it, and it just crashed again, and again, and again until the original condition that caused the problem went away.  Granted, this was mostly a problem with the library’s documentation and was easy enough to solve by adding a callback to my process to handle the library’s messages, but it does illustrate another area where the state of the message queue can surprise you.
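Incidentally, the timeout involved in the device story above is the optional third argument to GenServer.call, which defaults to 5_000 milliseconds; a device known to respond slowly could be given more headroom (the names here are hypothetical):

```elixir
# Wait up to 30 seconds before the call exits with a timeout,
# instead of the default 5_000 ms.
GenServer.call(device_pid, {:read, sensor_id}, 30_000)
```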

The second place that I’ve been surprised to find implicit state is the state of the processes themselves.  To be fair, this has only been an issue in writing unit tests, but it does still make life somewhat more difficult.  In the OOP world, I’m used to having a new instance of the class I’m testing created for each and every test.  That turns out not to be the case when testing GenServers.  It’s fine if you’re using the system generated process identifiers, but I often have processes that have a fixed identifier so that any other process can send a message to them.  I ended up with messages from one test getting into the message queue of the process while another test was running.  It turns out that tests run asynchronously by default, but even when I went through and turned that off, I still had trouble controlling the state of these specially named processes.  I tried restarting them in the test setup, but that turned out to be more work than I expected because the shutdown isn’t instantaneous and trying to start a new process with the same name causes an error.  Eventually I settled on just adding a special function to reset the state without having to restart the process.  I’m not happy about this because that function should never be used in production, but it’s the only reliable way I’ve found to deal with the problem.  Another interesting thing I’ve noticed about tests and state is that the mocking framework is oddly stateful, too.  I’m not yet sure why this happens, but I’ve found that when I have some module, like the library that talks over TCP, a failing test can cause other tests to fail because the mocking framework still has hold of the module.  Remove the failing test and the rest pass without issue, same thing if you fix the failing test.  But having a failing test that uses the mocking framework can cause other tests to fail.  It really clutters up the test results to have all the irrelevant failures.

Conclusion
I am reminded of a story wherein Jascha Heifetz’s young nephew was learning the violin and asked his uncle to demonstrate something for him.  Heifetz apparently took up his nephew’s “el cheapo student violin” and made it sound like the finest Stradivarius.  I’m honestly not sure if this is a true story or not, but I like the point that we shouldn’t let our tools dictate what we’re capable of.  We may enjoy playing a Stradivarius/using our favorite programming language more than an “el cheapo” violin/other programming language, but we shouldn’t let that stop us achieving our ends.  Elixir is probably never going to be my personal Strad, but I’d be happy enough to use it commercially once it matures some more.  Most of the issues I’ve described in these postings are likely solvable with more experience on my part, but the lack of maturity would hold me back from recommending it for commercial purposes at this point.  When I started using it last June, Dave Thomas’ Programming Elixir was available for version 1.2 and the code was written in version 1.3, although later versions were already available.  Since then, the official release version has gotten up to 1.6.4 with some significant-seeming changes in each minor version.  Other things that make me worry about the maturity of Elixir include Ecto, the equivalent of an ORM for Elixir, which doesn’t list Oracle as an officially supported database, and the logging system, which only writes to the console out of the box.  (Although you can implement custom backends for it, I personally would prefer to spend my time solving business problems rather than writing code to make more useful logs, and I haven’t seen a third-party library yet that would do what I want for logging.)  For now, my general view is that Elixir should be kept for hobby projects and personal interest, but it could be a viable tool for commercial projects once the maturity issues are dealt with.

About the Author
Narti is a former CC Pace employee now working at ConnectDER.  He’s been interested in the design of programming languages since first reading The Dragon Book and the proceedings of HOPL I some thirty years ago.

[1] I’ve no reason to doubt this.  I just haven’t tested it myself.

Introduction
Last June I took over an Elixir based project that involves a central server communicating with various devices over a TCP/IP network.  This was my first real exposure to Elixir, so I quickly got a copy of Dave Thomas’ Programming Elixir and went through Code School’s Elixir tutorials so that I could at least start understanding the code I was inheriting.  Since then, I’ve spent a fair amount of time changing and adding to this project and feel that I’m starting to come to terms with Elixir and have something to say about it.

My purpose in these posts is not to offer another introductory course in the language but rather to look at some of the features that are said to distinguish it from other languages and to think about how different these features really are and how they affect the readability and expressivity of the code that one writes.  In this first entry, we’ll look at pattern matching, which constitutes Elixir’s biggest differentiator, and at the data structures available in Elixir.  A forthcoming post will look at supervision trees, error handling and the handling of state in Elixir.

Pattern Matching
Pattern matching is perhaps the feature that most distinguishes Elixir from traditional imperative languages and, for me, is the most interesting part of the language.  Take a statement like:

x = 1

In an imperative language, this assigns the value of 1 to the variable x.  In Elixir, it effectively does the same thing in this case.  But in Elixir, the equals symbol is called a match operator and actually performs a match between the left-hand side and the right-hand side of the operator.  So, we can do things like

{:ok, x} = Enum.fetch([2, 4, 6, 8], 2)

do_something(x)
report_success()

In this case, the fetch function of the Enum module would return {:ok, 6}, which would match the left-hand side of the expression and the variable x would take on the value of 6.  We then call the do_something method, passing it the value of x.  What would happen, though, were we to call fetch with a position that didn’t exist in the collection we passed it?  In this case, fetch returns the atom :error, which would not match the left-hand side of the expression.  If we don’t do anything to handle that non-matching expression, the VM will throw an error and stop the process that tried to execute the failed match.  (This is not as drastic as it sounds, see the section on Supervisors, below.)  All is not lost, though.  There are several ways we could avoid terminating the process when the match fails, but I’ll just describe two: the with statement and pattern matching as part of the formal parameters to a function declaration.  Strictly speaking, the former is a macro, which I’m not going to cover, but it acts like a flow control statement and quacks like a flow control statement, so I’m not going to worry about the distinctions.  Rewriting the above to avoid terminating the running process using a with statement would look like

with {:ok, x} <- Enum.fetch([2, 4, 6, 8], 2) do
  do_something(x)
  report_success()
else
  :error -> report_fetch_error()
end

In this case, we’re saying that we’ll call do_something if and only if the result of calling fetch matches {:ok, x}; if the result of calling fetch is :error, we’re going to call report_fetch_error; and if the result is something else, which isn’t actually possible with this function, the process would still terminate with an error.  We can actually have multiple matches in both the with and the else clauses, so we could also do something like:

with {:ok, x} <- Enum.fetch([2, 4, 6, 8], 2),
     :ok <- do_something(x) do
  report_success()
else
  :do_something_error -> report_do_something_error()
  :error -> report_fetch_error()
end

Here, do_something will still not be executed unless the result of fetch matches the first clause, but the else clauses now receive the first non-matching value from any of the with clauses.  If do_something also just returned :error, though, we’d have to separate this out into two with statements.
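A sketch of that split, wrapped in a module with stub implementations (the stubs are mine, standing in for the hypothetical functions from the examples above, so the snippet is self-contained):

```elixir
defmodule FetchDemo do
  # Stub: succeeds only for the value 6, to exercise both failure paths.
  def do_something(6), do: :ok
  def do_something(_), do: :error

  # When do_something also returns a bare :error, splitting into nested
  # with statements lets each failure be distinguished and reported.
  def fetch_and_process(index) do
    with {:ok, x} <- Enum.fetch([2, 4, 6, 8], index) do
      with :ok <- do_something(x) do
        :success
      else
        :error -> :do_something_failed
      end
    else
      :error -> :fetch_failed
    end
  end
end
```

Calling FetchDemo.fetch_and_process(2) reaches the happy path, while an out-of-range index and an index whose value the stub rejects each surface their own distinct error atom.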

The other way we could handle the original problem of the non-matching expression would be to use pattern matching in the formal parameters of a set of functions we define.  For example, we could express the code above as:

do_something(Enum.fetch([2, 4, 6, 8], 2))

def do_something({:ok, x}) do
  result = …  # :ok or :error
  report_do_something_result(result)
end

def do_something(:error) do
  report_fetch_error()
end

def report_do_something_result(:ok) do
  report_success()
end

def report_do_something_result(:error) do
  # whatever it takes to report the error
end

In this case, we’re using pattern matching to overload a function definition and do different things based on what parameters are passed, instead of using flow control statements.  The Elixir literature tends to say that this is preferable to using flow control statements, as it makes each function simpler and easier to understand, but I have my doubts.  I find that I tend to duplicate a lot of code when I use pattern matching as a flow control mechanism.  To be fair, I’m learning more and more techniques to avoid this, but in the end I suspect that the discouragement of flow control structures and, perhaps more importantly, the lack of subclassing make the code harder to understand and make it harder to avoid duplicating code than I’d like.

Sometimes, too, I feel like the cure for this kind of duplication can be worse than the disease.  One technique I’ve seen used is essentially reflection, and I found that code very hard to follow; some of the code simply got duplicated between modules instead of within a single module.  There are other techniques for dealing with this kind of duplication, too, but I can’t shake the feeling that I’m being pushed into making the code less expressive when trying to remove duplication, where other languages would have made removing the duplication easier and more obvious.

Another concern I have with this form of overloading functions through pattern matching is that the meaning of the patterns to match isn’t always obvious.  For example, consider two functions like:

def do_something(item, false) do
end

def do_something(item, true) do
end

How do you know what the true and false mean?  Sadly, you have to go back and find the code that’s calling these functions to see what the distinguishing parameter represents.  You can actually write the declarations to be more intention-revealing:

def do_something(item, false = _active) do
end

def do_something(item, true = _active) do
end

But the majority of the code I’ve looked at is written in the former style that doesn’t reveal its intentions as well as the latter.

Pattern matching is at once both the most intriguing and a sometimes troubling element of the language.  I find it an interesting concept, but I’m still not sure whether it’s something I’m really happy about.

Data Structures
Elixir displays a curious dichotomy in regards to data structures.  On the one hand, we’re told that we shouldn’t use data structures because they represent the bad old object-oriented way of doing things.  On the other hand, Elixir does allow one to create structs, which define a set of fields, but no behavior, and set default values for them.  Structs are really just syntactic sugar on top of Elixir’s maps, though.  An Elixir map is a set of key/value mappings, and a struct enforces the set of keys that one can use for that particular map.  Otherwise they’re interchangeable, so I’m going to ignore structs for the rest of this discussion.

Elixir offers two other data structures that aggregate data: lists and tuples.  (Remember that all of these data structures are immutable in Elixir.)  Conceptually, Elixir lists are pretty much like lists in other programming languages.  The documentation says that there are some low-level implementation differences that make Elixir lists more efficient to work with than in other languages, but I haven’t concerned myself terribly with this.  Finally, in terms of aggregating data, we have tuples.  One can use a tuple much like a list, except that again there are implementation details that make tuples more efficient in some situations and lists more efficient in others.  In practice, I’ve mostly seen tuples used when one wants to match a pattern and lists used when one is simply storing data.  Here is a somewhat contrived example of all of this, followed by some discussion:
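As a quick sketch of the struct/map relationship described above (this User struct is a hypothetical example of mine, not something from the post):

```elixir
defmodule User do
  # A struct is just a map with a fixed key set and optional defaults.
  defstruct name: "anonymous", age: 0
end

user = %User{name: "Ada"}
user.age            # 0, the default value
is_map(user)        # true: structs are maps underneath
%{name: _n} = user  # plain map patterns match against structs too
# %User{bogus: 1}   # compile error: :bogus is not a defined key
```

The last (commented-out) line is where the struct earns its keep: the compiler rejects keys outside the declared set, which a plain map would happily accept.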

{:ok, %{grades: [g1, g2, g3], names: [n1, "student2", n3]}} = get_grades()

So, what’s going on here?  We’re again using pattern matching to expect the function get_grades to return a tuple consisting of the atom :ok and a map containing two lists of three elements each, where the name of the second student in the list must be “student2,” just to show that we can do a match to that level.

While more specific than is typical, the pattern of returning status and response is fairly common in Elixir.  This supposedly obviates a lot of exception handling, but to my way of thinking, it really just replaces it with with statements, as described previously.  I’m not convinced that it makes that much difference.

I think that the prevalent use of atoms, on the other hand, makes a large difference.  Sadly, it’s a negative difference in that atoms turn into another form of magic numbers (in the code smell sense).  Elixir has no way to define a truly global constant, so there’s actually no way around polluting one’s code with these magic numbers and there’s no compiler support for when one mistypes one of them.  For example, consider some hypothetical code dealing with a TCP socket:

def send_stuff(socket, stuff) do
  with :ok <- :gen_tcp.send(socket, stuff) do
    :ok
  else
    {:error, :econnrefused} -> …
    {:error, :timeout} -> …
  end
end

def open_socket(address, port, options) do
  with {:ok, socket} <- :gen_tcp.connect(address, port, options) do
    {:ok, socket}
  else
    {:error, :econnnrefused} -> …
    {:error, :timeout} -> …
  end
end

There’s nothing to tell me that I’ve accidentally added an extra “n” to :econnrefused in the open_socket function until it turns into a runtime error.  (This is also the kind of code that often doesn’t get unit tested because it’s hard to deal with.  I haven’t had to write any code at this level yet, so I’m unsure whether I could mock :gen_tcp or not, but that’s the only way I can think of to ensure that all the error handling works correctly.)
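One partial mitigation I’ve used (my own sketch, not a standard idiom) is to bind the atom to a module attribute, which acts as a compile-time constant.  It only helps within a single module, so it’s not the global constant I’m wishing for:

```elixir
defmodule SocketErrors do
  # Module attributes are expanded at compile time and can be used in
  # function-head patterns. Referencing a misspelled attribute (say,
  # @conn_refsed) triggers a compiler warning, whereas a misspelled
  # bare atom sails through silently.
  @conn_refused :econnrefused
  @timeout :timeout

  def describe({:error, @conn_refused}), do: "connection refused"
  def describe({:error, @timeout}), do: "timed out"
  def describe(:ok), do: "success"
end
```

The atom still has to be spelled correctly once, at the attribute definition, but at least every subsequent use is checked against that single spelling.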

Conclusion
I find Elixir’s pattern matching a very interesting feature of the language.  Sometimes I think that it’s a nice step towards adding assertions as first-class elements of production code.  At other times, though, I wonder if it really adds to the readability of the code.  For example, is the send_stuff example really more readable than:

public void send_stuff(socket, stuff) {
    try {
        socket.send(stuff);
    } catch (ConnectionRefusedException cre) {

    } catch (TimeoutException te) {

    }
}

That being said, I often vacillate between Java’s checked exceptions and .NET’s unchecked exceptions.  Elixir has pushed me much more towards making exceptions checked by default, since that makes sure I think about them, where .NET and Elixir let me forget about them, often to my detriment.

Unless and until Elixir offers truly global constants, however, I find myself distrusting the use of atoms in pattern matching to distinguish between functions.  I worry that I have too much code that’s like:

def do_something(:type1 = _device_type, other_params) …
def do_something(:type2 = _device_type, other_params) …
def do_something(:type3 = _device_type, other_params) …

Granted that I could probably do more to avoid this, but I also feel like this would make the code less expressive.

Next time we’ll look at supervision trees, error handling and handling state in Elixir.

About the Author
Narti is a former CC Pace employee now working at ConnectDER.  He’s been interested in the design of programming languages since first reading The Dragon Book and the proceedings of HOPL I some thirty years ago.

In part 1, we discussed how a new team could help lay out the foundation of a social contract between itself and the larger organization.  While that works at the macro level, what about the micro, day-to-day interactions within a team?  The focus on a social contract can be a powerful tool in helping a team perform at its peak.  At its core, this commitment between team members strengthens trust, sets expectations, and lets the team push beyond a previously understood comfort level.  Both sides of an interaction must be dedicated to fulfilling their part of the equation.

What are some ways this could look on a day-to-day basis?

Take the simple act of a code review.  At a surface level, the developer running the review would go over a high level overview of the card being worked.  They would walk through the basic coding that was done, and do a brief demo of the new functionality.  The reviewer nods along, perhaps pointing out an item or two regarding coding standards, and moves along without a deeper probing of the work done.  In a scenario with a strong social contract defined, this would be seen as falling well short of what is expected of both team members.  The developer running the review would do a fuller explanation of the goal of the card, and tie it into the larger picture of the application or the organization’s goals.  The reviewer would ask more intensive questions, including, for example, how test-driven development concepts were used to work the card.  The demo of the new functionality would not only show the happy path, but push at the edge cases that were considered.  The reviewer would prod on the design and maintainability of the code.  The goal of the exercise would be understood to not just check a box, but to provide value to each member of the team, and to the overall team itself.  There would be increased confidence in the quality of the code and the product.  There would be increased knowledge-sharing amongst the team leading to less stove piping of expertise.  Lastly, team members would be building trust between each other so that they could feel more comfortable communicating questions or out-of-the-box ideas.

The same idea can be applied to team-wide activities.  Does your team do a daily stand-up and regular retrospectives?  This is an area where muscle memory can set in, and team members can start to just go with the flow.  If they follow the concepts of the social contract, team members know that they are responsible to make sure the team is generating value from these.  Put down the phone during the meeting.  Make eye contact when addressing the team.  Commit to putting into action the steps generated to make the team work better.

The big picture with the social contract is to ensure team members are fully engaging in the tasks of the team.  Confidence, respect, and a sense of responsibility to each other and the larger team build organically over time.  Only then can teams truly hit the “perform” stage of “Form-Storm-Norm-Perform” and deliver the full value to the organization and its customers.

Introduction
This is the first in a series of posts about our experience using Visual Studio Team Services (VSTS) to build a deployment pipeline for an ASP.NET Core Web application. Future posts will cover automated builds, release artifacts and deployment to Azure cloud services.

The purpose of this post is to show how to create a new Visual Studio Team Services, or VSTS, project, set up a Git code repository and integrate everything through your Visual Studio IDE.

Prerequisites:


Creating a New Project

  1. If you haven’t already done so, create a VSTS account using your Microsoft Account. Your Microsoft Account is the “owner” of this VSTS account.
  2. Log into VSTS. By default, the URL should look like (VSTS account name).visualstudio.com. For this demo, the VSTS account that we will be using is https://ccpacetest.visualstudio.com, and the Microsoft user is CCPaceTest@outlook.com.
  3. Create a new project. This Microsoft Account is the “owner” of this project.
  4. For this demo, we will select “Git” as our version control and “Scrum” as the work item process.

Connecting VSTS Project to Visual Studio IDE

  1. There are many ways you can work on your project. A list of supported IDEs is shown in the dropdown. For this demo, we will use Visual Studio 2017 Professional Edition.
  2. Click “Clone in Visual Studio”. This will prompt you to launch Visual Studio.
  3. If this is your first time running Visual Studio, you will be prompted to log in to your Visual Studio account. To simplify the process, log in with the Microsoft Account that you used to create the demo project previously. This should give you admin access to the project.
  4. Go to View > Team Explorer > Manage Connections. Visual Studio will automatically show you a list of the hosted repositories for the account you used to log in. If you are following the previous steps, you should see the HelloWorld.Demo project. If you are not seeing a project that you expect to have access to, verify the account you logged in with or check with the project owner to make sure you have been given the right permissions.
  5. Connect to the project.
  6. If this is the first time you are accessing this project in your local environment, you will be prompted to clone the repository to your local Git repository.

 

Initial Check In

  1. Within the Team Explorer, click the Home button. Under “Solutions”, select “New…”. Using this method will ensure the solution is added to the right folder.
  2. For this demo, we will create a new project using the ASP.NET Core Web Application template. The solution name doesn’t have to be the same as the project name in VSTS, but to avoid confusion, it is recommended to use the same name.
  3. Once the solution is created, go back to Team Explorer and select “Changes”; you should be able to view all the items you have just added.
  4. Enter a comment and click “Commit All”. This will update your local repository.
  5. To push these changes to the remote repository, click “Sync”, and finally “Push”.
  6. You can verify this by logging into VSTS and going to “Code”; you should see all of the code you have just checked in.

Collaborating with other Team Members

  1. You can add additional members to your project by going to Settings > Security.
  2. Please note that:
     a. If your VSTS account is Azure Active Directory backed, then you can only add email addresses that are internal to the tenant.
     b. You must add email addresses for users who have “personal” Microsoft accounts, unless your VSTS account uses your organization’s directory to authenticate users and control account access through Azure Active Directory (Azure AD). If new users don’t have Microsoft accounts, have them sign up.

‘Agile’… ‘Lean’… ‘Fitnesse’… ‘Fit’… ‘(Win)Runner’… ‘Cucumber’… ‘Eggplant’… ‘Lime’… As 2018 draws near, one might hear a few of these words bantered around the water cooler at this time of year as part of the trendy discussion topic: our personal New Year’s resolutions to get back into shape and eat healthy. While many of my well-intentioned colleagues are indeed well on their way to a healthier 2018, many of these words were actually discussed during a strategy session I attended recently –  which, surprisingly, based on the fact that many of these words are truly foods – did not cover new diet and exercise trends for 2018. Instead, this planning session agenda focused on another trendy discussion topic in our office as we close out 2017 and flip the calendar over to 2018: software test automation.

“SOFTWARE TEST AUTOMATION?!?” you ask?

“SERIOUSLY – cucumbers and limes and fitness(e)?!?”

This thought came to mind after the planning session and gave me a chuckle. I thought, “If a complete stranger walked by our meeting room and heard these words thrown around, what would they think we were talking about?”

This humorous thought resonated further when recently – and rather coincidentally – a client asked me for a high-level, summary explanation as to how I would implement automated testing on a software development project. It was a broad and rather open-ended question – not meant to be technical in nature or to solicit a solution. Rather, how would I, with a background in Agile Business Analysis and Testing (i.e. I am not a developer) go about kickstarting and implementing a test automation framework for a particular software development project?

This all got me thinking. I’ve never seen an official survey, but I assume many people employed or with an interest in software development could provide a reasonable and well-informed response if ever asked to define or discuss software test automation, the many benefits of automated testing and how the practice delivers requisite business value. I believe, however, that there is a substantial dividing line between understanding the general concepts of test automation and successfully implementing a high-quality and sustainable automated testing suite. In other words, those who are considered experts in this domain are truly experts – they possess a unique and sought-after skill set and are very good at what they do. There really isn’t any middle ground, in my opinion.

My reasoning here is that getting from ‘Point A’ (simply understanding the concepts) to ‘Point B’ (implementing and maintaining an effective and sustainable test automation platform) is often an arduous and laborious effort, which unfortunately, in many cases, does not always result in success. At a fundamental level, the journey to a successful test automation practice involves the following:

  • Financial investment: Like with any software development quality assurance initiative, test automation requires a significant financial investment (in both tools and personnel). The notion here, however – like any other reasonable investment – is that an upfront financial investment should provide a solid return down the line if the venture is successful. This is not simply a two-point ‘spike’ user story assigned to someone to research the latest test automation tools. To use the poker metaphor – if you are ready to commit, then you should go all-in.
  • Time investment: How many software development project teams have you heard stating that they have extra time on their hands? Surely, not many, if any at all. Kicking off an automated testing initiative also requires a significant upfront time investment. Resources otherwise assigned to standard analysis, development or testing tasks will need to shift roles and contribute to the automated testing effort. Researching and learning the technical aspects of automated testing tools, along with the actual effort to design, build out and execute a suite of automated tests requires an exceptional team effort. Reassigning team tasks initially will reduce a team’s velocity, although similar to the financial investment concept, the hope is significant time savings and improved quality down the line in later sprints as larger deployments and major releases draw near.
  • Dedicated resources with unique, sought-after skill sets: In my experience, I’ve seen that usually the highest-rated employees with the most institutional/system knowledge and experience are called on to manage and drive automated testing efforts. These highly rated employees are also more than likely the most expensive, as the roles require a unique technical and analytical skill set, along with significant knowledge of corresponding business processes. Because these organizational ‘all-stars’ will initially be focused solely on the test automation effort, other quality assurance tasks will inherently assume added risk. This risk needs to be mitigated in order to prevent a reduction in quality in other organizational efforts.

It turns out that the coincidental internal automated testing discussion and timely client question – along with the ongoing challenge in the QA domain associated with the aforementioned ‘Point A to Point B’ metaphor – led to a documented, bulleted list response to the client’s question. Let’s call it an Agile test automation best-practices checklist. This list can be found below and provides several concepts and ideas an organization could utilize in order to incorporate test automation into their current software testing/QA practice. Since I was familiar with the client’s organization, personnel and product offerings, I could provide a bit more detail than necessary. The idea here is not the ‘what’, as you will not find any specific automation tools mentioned. Instead, this list covers the ‘how’: the process-oriented concepts of test automation along with the associated benefits of each concept.

This list should provide your team with a handy starting point, or a ‘bridge’ between Point A and Point B. If your team can identify with many of the concepts in this list and relate them to your current testing process and procedures, then further pursuing an automated testing initiative should be a reasonable option for your team or project.

More importantly, this list can be used as a tool and foundation for non-technical members of a software development team (e.g. BA, Tester, ScrumMaster, etc.) to start the conversation: to decide whether automated testing fits in with your established processes and procedures, whether it will provide a return on investment, and to ensure that if you do embark down the test automation path, you continue to progress forward as applications, personnel and teams mature, grow and, inevitably, change. Understand these concepts and when to apply them, and you can learn more about cucumbers, limes and eggplants as you progress further down the test automation path:

To successfully implement and advance an effective and sustainable automated testing initiative, I make every effort to follow a strategy that combines proven Agile test automation best practices with personal, hands-on project and testing experience. As such, this is not an all-inclusive list, rather just one IT consultant’s answer to a client’s question:

For folks new to the world of test automation and for those who had absolutely no idea that ‘Cucumber’ is not only a healthy vegetable but is also the name of an automated testing tool, I hope this blog entry is a good start for your journey into the world of test automation. For the ‘experts’ out there, please respond and let me know if I missed any important steps or tasks, or, how you might do things differently. After all, we’re all in this together, and as more knowledge is spread throughout the IT world, the more we can further enhance our processes.

So, if you’ll excuse me now, I’m going to go ahead and plan out my 2018 New Year’s resolution exercise regimen and diet. Any additional thoughts on test automation will have to wait until next year.

Thoughts on Agile DC 2017
In early February of 2001, a small group of software developers published the Agile Manifesto. Its principles are now familiar to countless professionals. Over the years, Agile practices seem to have gained mainstream appeal. Today it is difficult to find a software developer, or even a manager who has never heard of Agile. Having been a software developer for over 15 years, to me, Agile methodologies appear to embody a very natural way of producing software, so in a lot of ways, Agile conferences produce a “preaching to the choir” effect as well. As I attended the Agile DC 2017 conference, I wanted to remove myself from the bubble and try to objectively take a pulse of the state of Agile today. I also wanted to learn about the challenges that Agile seems to continue to face in its adoption at the enterprise level.

Basic takeaways
As I listened to various speakers and held conversations with colleagues and even one of the presenters, it appeared that today the main challenges of adopting Agile no longer deal specifically with software engineering.  That generally makes sense, given the amount of literature, combined experience and continuous improvement software development teams have been able to produce as Agile has matured from a disruptive toddler into a young adult.  The real struggle appears to be in the adoption of Agile across other business units, not so much IT.  Why is that?  I believe the two main reasons for the struggle can be attributed to organizational culture and, most importantly, a lack of knowledge in relation to team dynamics and human psychology.

Culture
Agile seems to continue to clash with the way business units are organized. Some hierarchical structures may produce very siloed and stovepiped groups that have difficulty collaborating with one another. While this may appear as an insurmountable problem, there is at least one great example of how Agile organizations have addressed it. We can point to DevOps as a great strategy of integrating or streamlining traditionally separate IT units. Software development teams and IT Operations teams have traditionally existed in their respective silos, often functioning as completely separate units. There was a clear need of streamlining collaboration between the two to improve efficiency and increase throughput. As a result, DevOps continues on a successful path. In regards to other business units, “Marketing” may depend on “Legal” or “Product Development” may depend on “Compliance”. There may be a clear need to streamline collaboration between these business units. One of my favorite presentations at the conference was by Colleen Johnson entitled “End to End Kanban for the Whole Organization”. By creating a corporate Kanban board, separate business units were able to collaborate better and absorb change faster. One of the business unit leaders was able to see that there were unnecessary resources being allocated to supporting a product development effort that was dropped (into the “dropped” row on the board). Anecdotally, this realization occurred during a walk down the hall and a quick glance at the board.

Education
When it comes to software development, Agile appears to contain some key properties that make it a very logical and natural fit within the practice.  Perhaps its success stories in the software development community are what produced a perception that it applies specifically to software engineering and not to other groups within an organization.  It seems to me that one of the main themes of the conference was to help educate professionals who possess an agnostic view of the benefits of an Agile transformation.  I believe that continuous education is critical to getting business unit leaders to embrace Agile at the enterprise level.  Some properties of Agile and its related methodologies address important aspects of human psychology, and this is where the education seems to fall short.  There is much focus on improvement in efficiency and production of value, with little explanation of why these benefits are reaped.  In simple terms, working in an Agile environment makes people happier.  There are “softer” aspects that business leaders should consider when adopting Agile methodologies.  Focus on incremental problem solving provides a higher level of satisfaction as work gets completed; this helps trigger a mechanism within the reward system of our brain.  Continuous delivery helps improve morale as staff are able to associate their immediate efforts with the visible progress of their projects.  Close collaboration and problem solving create strong bonds between team members and foster shared accountability.  Perhaps a detailed discussion on team dynamics and psychology is a whole separate topic for another blog, but I feel these topics are important to managing the perception of the role of Agile at the enterprise level.

Final thoughts
It is important to note that enterprise-wide Agile transformations are taking place. For example, several speakers at the conference highlighted success stories at Capital One. It seems that enterprise-wide adoption is finally happening, albeit very incrementally, but perhaps that is the Agile way.

Learn more about Agile Coaching.

Boy this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day here in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to The University of Maryland, Baltimore County, for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.

Niels left a solid impression in a short amount of time here at CC Pace. In a matter of 10 weeks, Niels interacted with and was able to enhance several internal processes for virtually all of CC Pace’s internal departments including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), Sales and Marketing. On his last day, I walked Niels around the office and as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around. Many folks also expressed wishes that Niels’ and our paths will hopefully soon cross again. In short, Niels made a very solid impression on a large group of my colleagues in a relatively short amount of time.

Back in June I gladly accepted the challenge of filling Niels’ ‘mentor’ role as he embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances his way up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing back to my days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, with the assurance of knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…)

Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was providing support to the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.

As I reminisced about my eight months at GE, I couldn’t help noticing how the ‘lessons learned’ I took away from my experience 20+ years ago compared and contrasted with the observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years, and the technology we use every day is clearly the biggest distinction. I would be remiss not to mention the obvious generation gap as well – I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’, and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.

In summary, my reminiscing about the late 90’s certainly made my daily music choices easier for a few weeks this summer, and it led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE, 20-odd years ago, along with how they compared or contrasted with what I observed in the last 10 weeks working side-by-side with our intern, Niels. These observations are based on my role as his mentor and were provided as feedback to Niels in his summary review; they are in no particular order.

Have you similarly had the opportunity to engage in both roles within the intern/mentor relationship as I have? Maybe your example isn’t separated by 20 years, but several years? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed at how ‘the more things change, the more they stay the same’.


Is your business undergoing an Agile Transformation? Are you wondering how DevOps fits into that transformation and what a DevOps roadmap looks like?

Check out a webinar we offered recently, and send us any questions you might have!

Recently, I was part of a successful project implementation at a big financial institution. The project was the center of attention within the organization, mainly because of the value it added to the line of business and its operations.

The project was essentially a migration project and the team partnered with the product vendor to implement it. At the very core of this project was a batch process that integrated with several other external systems. These multiple integration points with the external systems and the timely coordination with all the other implementation partners made this project even more challenging.

I joined the project as a Technical Consultant at a rather critical juncture, when there were only a few batch cycles we could run in the test regions before deploying to production. Having worked on Agile/Scrum/XP projects in the past, and with experience on DevOps projects, I identified a few areas where we could improve, either by enhancing the existing development environment or by streamlining the builds and releases. As with most projects, as the release deadline approaches, the team’s focus almost always revolves around ‘implementing functionality’ while everything else gets pushed to the back burner. This project was no different in that sense.

When the time finally came to deploy the application to production, the deployment itself was quite challenging: a four-day continuous effort, with the team working multiple shifts to support it. At the end of it, needless to say, the whole team breathed a huge sigh of relief when we deployed the application rather uneventfully, even a few hours earlier than we had originally anticipated.

Once the application was deployed to production, ensuring the stability of the batch process became the team’s highest priority. It was during this time that I felt resistance to any new change or enhancement. Even fixes for non-critical production issues were delayed for fear that they could jeopardize the stability of the batch.

The team dreaded deployments.

I felt it was time to build my case for having the team reassess the development, build and deployment processes in ways that would improve confidence in any new change being introduced. During one of my meetings with my client manager, I discussed a few areas where we could improve in this regard. He was quickly on board with some of the ideas and suggested I summarize my observations and recommendations. Here are a few at a high level:

[Image: high-level summary of the observations and recommendations]

It’s common for these suggestions to fall through the cracks while building application functionality. In my experience, they don’t get as much attention because they are not considered ‘project work’. What project teams, especially stakeholders, fail to realize is the value in implementing some of the above suggestions. Project teams should not consider this additional work but rather treat it as part of the project and include the tasks in their estimations, for a better, cleaner end product.

“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt From: Barbara W. Tuchman. “The Guns of August.”

In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.

Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.

Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.

Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?

I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.

Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.

First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?

Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.

Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to deliver the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?

I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”

The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.

Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan and be ready to go to your contingent plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.

Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….

Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others?  Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks.  At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much.  If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In.  I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.

Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups forum.  In the posting, Ron speculated that architecture could push us toward a particular style of TDD.  That feels right to me.  He also suggested that writing systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures that he’s used to from Smalltalk applications, and that complexity might make observing the behavior of objects more valuable.  That idea puzzles me more.

My own TDD style, which is probably somewhere between the Detroit school, which leans towards writing tests that don’t rely on mocks, and the London school, which leans towards using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD.  When I first started out, I was working on what I believe would count as a monolithic application, in that my team wrote all the code from the UI back to right before the database drivers.  We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the data for which was in a database, and the actual data that would be displayed was stored across multiple tables.  It was really quite painful to try to get all the data set up correctly, and we had to keep a lot more stuff in mind when we were trying to focus on getting the configurable part of the UI written.  This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we did eventually light on the idea of putting in a mock object that was much easier to set up than the actual database.  In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code.  Could we have written that code more simply to avoid the complexity to start with?  It was a long time ago and I can’t say whether or not I’d take the same approach now to solving that same problem, but I still do find a lot of advantages in using mocks.

I’ve been wanting to try using a NoSQL database, and this seemed like a good opportunity both to try that technology and, after I read Ron’s post, to try writing it entirely outside-in, which I always do anyway, but without using mocks, which is unusual for me.  I started out writing my front-end using TDD and got to the point where I wanted to connect a persistence mechanism.  In a sense, I suppose the simplest thing that could possibly work here would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database.  (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”)  I therefore started with one of the NoSQL implementations for .NET.  Everything seemed fine for my first few unit tests.  Then one of my earlier tests failed after my latest test started passing.  Okay, this happens.  I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again.  I backed out the last test I’d written, too.  Now the failing test passed but a different one failed.  After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly without doing a lot of research into it) worked asynchronously, and it seemed that I’d just been lucky with timing before the tests started randomly failing.  Okay, this is the point where I’d normally turn to a mocking framework and isolate the problematic stuff in a single class that I could either put the effort into unit testing or else live with testing through automated customer tests.
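To make the timing problem concrete, here’s a minimal Python sketch (all names are invented for illustration; the actual project was .NET against a NoSQL store) of how an asynchronous write can make a test pass or fail depending on luck, and how a synchronous test double removes the race:

```python
import threading

class AsyncStore:
    """Toy stand-in for an asynchronous persistence layer: writes land later."""
    def __init__(self, delay=0.05):
        self._data = {}
        self._delay = delay

    def save(self, key, value):
        # The write completes on a background thread after a short delay,
        # so an immediate read may or may not see it -- the source of flakiness.
        threading.Timer(self._delay, self._data.__setitem__, (key, value)).start()

    def load(self, key):
        return self._data.get(key)

class InMemoryStore:
    """Synchronous test double: deterministic, so tests stop depending on timing."""
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)

# With the async store, this assertion races the background write:
flaky = AsyncStore()
flaky.save("answer", 42)
# flaky.load("answer") may still be None here, depending on timing.

# With the synchronous double, the result is deterministic:
store = InMemoryStore()
store.save("answer", 42)
assert store.load("answer") == 42
```

The flaky version races a background write; the in-memory double behaves the same on every run, which is exactly what I want from a unit test.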

Because I felt more strongly about experimenting with writing tests without mocks than about using a particular NoSQL implementation, I switched to a different implementation.  That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock in an individual class that hides the details of how the data is set up.  Had I been following that precept, now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree.  The interesting thing here was that I had to radically change both the test and the production code to change the backing store.  As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all.  This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code so I could focus on one thing at a time instead of having to deal with the whole thing at once.  I also ended up giving up on the NoSQL implementation in favor of a more familiar relational database.
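Here’s a rough sketch, in Python with hypothetical names, of the precept I failed to follow: hide the data setup behind one fixture class, so that swapping the backing store means changing that class rather than every test:

```python
class CustomerScreenFixture:
    """Hides how test data reaches the persistence layer. Tests talk to this
    class in domain terms; only this class knows whether the backing store is
    a mock, a relational database, or a NoSQL document store."""
    def __init__(self, store):
        self._store = store

    def with_customer(self, name, layout="default"):
        # The one place to change if the storage representation changes.
        self._store.save(name, {"name": name, "layout": layout})
        return self

class DictStore:
    """Trivial in-memory store standing in for the real persistence mechanism."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows[key]

store = DictStore()
CustomerScreenFixture(store).with_customer("alice", layout="compact")
assert store.load("alice")["layout"] == "compact"
```

Tests written against `CustomerScreenFixture` read in domain terms, and the painful test churn I describe above would have been confined to that one class.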

Image from http://martinfowler.com/bliki/BeckDesignRules.html


So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve also found that they help isolate me from other random elements, like other developers running tests that happen to modify the same database tables that I’m modifying. I also felt like my tests became much more intention-revealing when I switched to mocks, because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about.  This made me realize that the hard part was caused by the mismatch between the way the persistence mechanism (either a relational database or the document-oriented NoSQL database that I tried) represented the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file or used an object-oriented database instead of a document-oriented database, that complexity would go away.  That’s for a future experiment, though.  And, even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.

Ron also worried that the integration between the different components is not tested when using mocks.  As Ron puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.”  I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model.  (This was back when one did have to write such code oneself.)  It was just very easy to mistype names on either side and not realize it until actual user testing.  This is actually the problem that led us to start doing automated customer testing.  Although the automated customer tests don’t always have as much detail as the unit tests, I feel like they alleviate any concerns I might have that the wrong things are wired together or that the wiring doesn’t work.

It’s also worth mentioning that I really don’t like the style of mocking that just checks whether a method was called rather than whether it was used correctly.  Too often, I see test code like:

mock.Stub(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything)).Return(0);

mock.AssertWasCalled(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything));

I would never do something like this for a method that actually returns a value.  I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method.  The only time I’ll resort to asserting that a method was called (with all the correct parameters) is when that method exists only to generate a side-effect.  Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than checking behavior.  For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters.  Lately, though, with Log4Net, I’ve found that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
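In Python, the equivalent of the memory-appender trick looks roughly like this (the `transfer` function and logger names are invented for illustration): attach a handler that buffers records, then assert on the logged state rather than on which logger methods were called:

```python
import logging

class ListHandler(logging.Handler):
    """Rough analog of Log4Net's MemoryAppender: keeps log records in a list."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

def transfer(amount):
    # Hypothetical operation whose only observable effect is a log message.
    logging.getLogger("payments").warning("transferring %d", amount)

handler = ListHandler()
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

transfer(250)

# Assert on logged state, not on which methods were called on a mock logger.
assert any(r.levelno == logging.WARNING and "250" in r.getMessage()
           for r in handler.records)
```

The test now describes the outcome I care about (this message, at this level) instead of coupling itself to the logger’s API surface.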

In his forum posting, Ron is surely right in saying about the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.”  My own style has certainly changed over the years, and I hope it will continue to adapt to the circumstances in which I find myself working.  Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to get set up correctly.  Hopefully I’ll also be able to use mocks less, as I find more projects that allow me to avoid the impedance mismatch between the code’s concept of the model and that of external systems.

At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.

Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.

How have you used Scrum outside its “natural” setting in software development?

Recently, I had one of those rare moments when my son, a 3rd grader, seemed to understand at least part of what I do for work.

He was excited to tell me about the Code Studio site (studio.code.org) that they used during computer lab at school. The site introduces students to coding concepts in a fun and engaging way. Of course, it helps to sweeten the deal with games related to Star Wars, Minecraft and Frozen.

We chose Minecraft and started working through activities together. The left side of the screen is a game board showing the field of play: your character in a field surrounded by trees and other objects. The right side lets you drag and drop colorful coding structures and operations related to the game (e.g. place block, destroy block, move forward) in a specific order. When you’re satisfied that you’ve built the program to complete the objective, you press the “Run” button and watch your character move through your instructions.

Anyone familiar with automated testing tools can relate to the joyful anticipation of seeing if your creation does the thing that it’s supposed to do (ok, maybe joy is a strong word, but it’s kind of fun). In our case, we anxiously watched as Steve, our Minecraft Hero, moved through our steps, avoiding lava and creepers along the way. If we failed and ended up taking a lava bath, we tried again. If we succeeded, the site would move us to the next objective, but also inform us that it’s possible to do it in fewer steps/lines of code (never good enough, huh?!).

Together, we were able to break down a complex problem into several smaller steps, which is a fundamental skill when building software incrementally. We also had to detect patterns in our code and determine the best way to reuse them given a limited set of statements and constructs.  For my son, it was a fun combination of gaming and puzzle solving. For me, it was nice to return to the fundamentals of working through logic to solve a problem – no annoying XML configurations or git merges to deal with.
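The site itself is visual, but the refactoring it nudges you toward translates directly into code. Here’s a small Python sketch, with made-up instruction names, of collapsing a repeated pattern of steps into a reusable construct — the same trick the site hints at when it tells you the objective can be done in fewer lines:

```python
# Naive program: spell out every step.
naive = ["move_forward", "place_block", "move_forward", "place_block",
         "move_forward", "place_block"]

# Spotting the repeated pattern lets us express the same program compactly.
def repeat(times, *steps):
    """Expand a repeated group of steps into a flat list of instructions."""
    return [s for _ in range(times) for s in steps]

compact = repeat(3, "move_forward", "place_block")
assert compact == naive  # same program, fewer lines of source
```

Recognizing that a loop and the longhand list produce the same program is exactly the kind of pattern-detection my son was practicing.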

I’m a big supporter of the Hour of Code movement and similar initiatives that expose students to programming, but feel that there’s often an emphasis on funneling kids into STEM career paths. This is all well and good for those with the interest and aptitude, but coding also teaches you to patiently and steadfastly work through problems. This can be applied to many different careers. Chris Bosh, the NBA player, even wrote an article about his interest in coding.

I encourage students of all ages to check out the Code Studio site and Hour of Code events: https://hourofcode.com

Like Martin Fowler, I am a long-time Doctor Who fan.  Although I haven’t actually gotten around to watching the new series yet, I’ve been going back through as much of the classic series as Netflix has available.  Starting way back from An Unearthly Child several years ago, I’ve worked my way up through the late Tom Baker period.  The quality of the special effects is often uppermost in descriptions of the classic series, but that doesn’t actually bother me that much.  Truth be told, I feel that a lot of modern special effects have their issues, too.  Sometimes modern effects are good and sometimes they’re bad; the bad effects are bad differently than those in the classic series, but at least the classic series didn’t substitute effects, good or bad, for story quality or acting ability.  To be fair, the classic series also has its share of less than successful stories, but the quality on the whole has remained high enough that I keep going with it.  (In my youth, I stopped watching shortly after Colin Baker became The Doctor, although I can’t remember if I became disenchanted with it or if my local PBS station stopped carrying it.  I’m very curious to see what happens when I get past Peter Davison this time.)

I’ve also been very interested to see the special features that come with most of the discs.  As a rule, I don’t bother with special features as I find them either inane or annoying (particularly directors talking over their movies), but the features on the Doctor Who discs have mostly been worth my time.  Each disc generally has at least one extra with some combination of the directors, producers, writers, designers and actors talking pretty frankly about what worked and didn’t work with the serial.  I’ve learned a lot about making a television show at the BBC from these, and also about the constraints, both technological and budgetary, that affected the quality of the effects on the show.  (Curiously, it also brought home the effects of inflation to me.  I’d heard the stories of people going around with wheelbarrows full of cash in the Weimar Republic, but that didn’t have nearly as much impact on me as hearing someone talk about setting the budget for each show at the beginning of a season in the early 1970s and being able to get only half as much as they expected by the time they were working on the last series of the season.  Perhaps because I do create annual budgets, the scale is easier to relate to than descriptions of hyperinflation.)

While I am sure there are those who are fascinated by what I choose to get from Netflix (you know who you are 😊), I actually had a point to make here.  I recently watched the Tom Baker episode The Creature from the Pit, which has a decent storyline but was arguably let down by the design of the creature itself.  (There are those who argue it was just a bad story to start with, and those who argue that it was written as an anti-capitalist satire that the director didn’t understand.)  As I watched the special feature in which Mat Irvine, the person in charge of the visual effects including the creature, and some of his crew talked about the problems involved in the serial, I realized that this was a lovely example of why we should value customer collaboration over contract negotiation.  Apparently the script described the creature as “a giant, feathered (or perhaps scaled) slug of no particular shape, but of a fairly repulsive grayish-purplish colour…unimaginably huge.  Anything from a quarter of a mile to a mile in length.”  (Quote from here.)  I suspect someone trying to realize this would have a horrible time of it even in today’s world of CGI effects, but apparently at the BBC at the time you just got your requirement and implemented it as best you could, even though both the director and the designer were concerned about its feasibility and could point to a previous failure to create a massive creature in The Power of Kroll.

Alas, neither the designer nor the director felt empowered enough to work with the writer or script editor (or someone, I’m unclear on who could actually authorize a change) on the practical difficulties of creating a creature with such a vague description, resulting in what we tend to have a laugh at now.  I don’t know what the result would have been if there had been more collaboration in the realization of this creature, but it seems to me that the size and shape of the creature were not really essential to the story.  Various characters say that it eats people, although the creature vehemently denies this, and we see some evidence that it accidentally crushes people, but neither of these ideas requires a minimum size of a quarter of a mile.  It seems like the design could have gone a different way, one that was easier to realize, if everyone involved had really been able to collaborate on the deliverable rather than having each group doing the best that it could in a vacuum.

In my last post, I talked about the interesting Agile 2015 sessions on team building that I’d attended. This time we’ll take a look at some sessions on DevOps and Craftsmanship.

On the DevOps side, Seth Vargo’s The 10 Myths of DevOps was by far the most interesting and useful presentation that I attended. Vargo’s contention is that the DevOps concept has been over-hyped (like so many other things) and that people will soon become disenchanted with it (the graphic below shows where Vargo believes DevOps stands on the Gartner Hype Cycle right now). I might quibble about whether we’ve passed the cusp of inflated expectations yet or not, but this seems just about right to me. It’s only recently that I’ve heard a lot of chatter about DevOps and seen more and more offerings, and that’s probably a good indication that people are trying to take advantage of those inflated expectations. Vargo also says that many organizations either mistake the DevOps concept for plain operations or simply try to hire SysAdmins under the trendier title of DevOps. Vargo didn’t speak to it, but I’d also guess that a lot of individuals claim to be experienced in DevOps when they were SysAdmins who didn’t try to collaborate with other groups in their organizations.

[Image: DevOps on the Gartner Hype Cycle]

The other really interesting myth in Vargo’s presentation was the idea that DevOps is just between engineers and operators. Although that’s certainly one place to start, Vargo’s contention is that DevOps should be “unilaterally applied across the organization.” This was characteristic of everything in Vargo’s presentation: just good common sense and collaboration.

Abigail Bangser was also focused on common sense and collaboration in Team Practices Applied to How We Deploy, Not Just What, but from a narrower perspective. Her pain point seems to have been technical stories that weren’t well defined and were treated differently from business stories. Her prescription was to extend the Three Amigos practice to technical stories and generally treat technical stories like any other story. This was all fine, but I found myself wondering why that kind of collaboration wasn’t happening anyway. It seems like common sense to do one’s best to understand a story and deliver the best value regardless of whether the story is a business or a technical one. Alas, Bangser didn’t go into how they’d gotten into that state to start with.

On the craftsmanship side, Brian Randell’s Science of Technical Debt helped us come to a reasonably concise definition of technical debt and used Martin Fowler’s Technical Debt Quadrant to distinguish between different types of technical debt: prudent vs. reckless, and deliberate vs. inadvertent. He also spent a fair amount of time demonstrating SonarQube and explaining how it has been integrated into the .NET ecosystem. SonarQube seemed fairly similar to NDepend, which I’ve used for some years now, with one really useful addition: both NDepend and SonarQube evaluate your codebase against various configurable design criteria, but SonarQube also provides an estimated time to fix all the issues that it found in your codebase. Although it feels a little gimmicky, I think that estimate would be more useful than a raw count of failed rules in explaining to Product Owners the costs that they are incurring.
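That time-to-fix figure is essentially a roll-up of per-issue remediation costs. As a rough illustration of the idea – the rule names and per-issue minutes below are invented, not SonarQube’s actual model – a sketch in Python:

```python
# Hypothetical per-issue remediation costs in minutes, in the spirit of
# a technical-debt model like SonarQube's (not its real rule set).
REMEDIATION_MINUTES = {
    "unused-import": 2,
    "duplicated-block": 20,
    "high-complexity-method": 60,
}

def estimated_debt_minutes(issue_counts):
    """Sum the estimated time to fix all reported issues."""
    return sum(REMEDIATION_MINUTES[rule] * count
               for rule, count in issue_counts.items())

def as_workdays(minutes, minutes_per_day=8 * 60):
    """Express the total as developer-days, a unit a Product Owner can price."""
    return minutes / minutes_per_day

issues = {"unused-import": 30, "duplicated-block": 12, "high-complexity-method": 4}
total = estimated_debt_minutes(issues)  # 30*2 + 12*20 + 4*60 = 540 minutes
print(as_workdays(total))               # 1.125 developer-days
```

The point of the conversion at the end is exactly the one above: “1.125 developer-days of debt” lands with a Product Owner in a way that “46 rule violations” never will.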

I also attended two divergent presentations on improving our quality as developers. Carlos Sirias presented Growing a Craftsman through Innovation & Apprenticeship. Obviously, Sirias advocates an apprenticeship model, à la blacksmiths and cobblers, to help improve developer quality. The way I remember the presentation, Sirias’ company, Pernix, essentially hired people specifically as apprentices and assigned them to their “lab” projects, which are done at low cost for startups and small entrepreneurs. The apprenticeship aspect came from their senior people devoting 20% of their time to the lab projects. I’m now somewhat perplexed, though, because the Pernix website says that “Pernix apprentices learn from others; they don’t work on projects” and the online PDF of the presentation doesn’t have any text in it, so I can’t double-check my notes. Perhaps the website is just saying that the apprentices don’t work as consultants on the full-price projects, and I do remember Sirias saying that he didn’t feel good about charging clients for the apprentices. On the other hand, I can’t imagine that the “lab” projects, which are free for NGOs and can be financed by micro-equity or actual money, don’t get cross-subsidised by the normal projects. I feel that just making sure junior people are always pairing, and get a fair chance to pair with people they can learn from (which isn’t always the “senior” people), is a better apprenticeship model than the one Sirias presented.

The final craftsmanship presentation I attended, Steve Ropa’s Agile Craftsmanship and Technical Excellence, How to Get There, was both the most exciting and the most challenging presentation for me. Ropa recommends “micro-certifications,” which he likens to Boy Scout merit badges, to help people improve their technical abilities. It’s challenging to me for two reasons. First, I’m just not a great believer in credentialism because I don’t find that credentials really tell me anything when I’m trying to evaluate a person’s skills. What Ropa said about using internally controlled micro-certifications to show actual competence in various skill areas makes a lot of sense, though, since you know exactly what it takes to get one. That brings me to the second challenge: the combination of defining a decent set of micro-certifications, including what it takes to get each certification, and a fair way of administering such a system. For the most part, the first part of this concern just takes work. There are some obvious areas to start with – TDD, refactoring, continuous integration, C#/Java/Python skills, etc. – that can be evaluated fairly objectively. After that, there are some softer areas that would be more difficult to define certifications for. How, for example, do you grade skills in keeping a code base continually releasable? It seems like an all-or-nothing kind of thing. And how does one objectively certify a person’s ability to take baby steps or pair program?

Administering such a program also presents me with a real challenge: even given a full set of objective criteria for each micro-certification, I worry that the certifications could become diluted through cronyism or that the people doing the evaluations wouldn’t be truly competent to do so. Perhaps this is just me being overly pessimistic, but any organization has some amount of favoritism and I suspect that the sort of organizations that would benefit most from micro-certifications are the ones where that kind of behavior has already done the most damage. On the other hand, I’ve never been a boy scout and these concerns may just reflect my lack of experience with such things. For all that, the concept of micro-certifications seems like one worth pursuing and I’ll be giving more thought on how to successfully implement such a system over the coming months.

In Diamond in the Rough I talked some about the similarity I found between David Gries’ work on proving programs correct in The Science of Programming and actually doing TDD.  I’ve since gone back to re-read Science.  Honestly, it hasn’t gotten any easier to read since I last read it so long ago.  It’s still worthwhile, though, if you can find a copy.  It’s a nice corrective to an issue I’ve seen with a lot of developers over the years: they just don’t have the skills, or maybe the inclination, to reason about their code.

This seems most evident in an over-reliance on debuggers.  For myself, I only use a debugger when I have a specific question to answer and I can’t answer it immediately from the code.  “What’s the run-time type of this visitor given these circumstances?” or “Which of the five different objects on the silly call chain was null when it was used?” (that’s a little lazy, since I could refactor the code to make the answer clear without the debugger, but I’m generally trying to find the unit test to write to expose the problem before I fix it, and I don’t want to refactor until I have a green bar that includes the fix to the actual problem).  Those are the types of questions I might turn to the debugger to help answer, particularly when the cost of setting up a test to get that type of feedback is unusually high compared to using a debugger.  This is common when my code is interacting with third-party code in a server container.  Trying to set up some kind of integration or functional test to get the very specific information I want can be a horrible rabbit hole.  (Although it may still be worth setting up the functional test to prove a problem is actually solved.)
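That “which object on the call chain was null” question can often be turned straight into the unit test one wanted anyway. A toy Python sketch (all of the names here are invented for illustration):

```python
# Invented domain objects standing in for a real call chain.
class Customer:
    def __init__(self, address):
        self.address = address

class Order:
    def __init__(self, customer):
        self.customer = customer

def shipping_label(order):
    # The buggy chain: raises AttributeError when address is None,
    # instead of failing in a useful way.
    return order.customer.address.upper()

# Instead of stepping through the debugger to find which link was None,
# write the test that exposes the problem before fixing it:
def test_order_with_missing_address():
    order = Order(Customer(address=None))
    try:
        shipping_label(order)
    except AttributeError:
        return "address was None"  # the debugger question, answered
    return "no error"

print(test_order_with_missing_address())  # address was None
```

The test then stays in the suite as a permanent record of the defect, which a debugging session never does.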

So I do recognize times when one can get good feedback more effectively from a debugger than by immediately trying to write a unit test or just reasoning about the code, but it worries me when the debugger is the first tool a developer reaches for when something doesn’t act as they expect.  Even when I ask some developers the questions that I ask myself when trying to understand a problem, their first response is to fire up the debugger, even if it’s patently not a question that needs a debugger to answer (“What kind of type is that method returning?” when the actual method declaration is no more than twenty lines down and declares that it’s returning a concrete type).  And, most egregiously, I’ve often seen people debugging their unit tests.

That’s really worrying since the unit tests are meant to help us understand our code.  It concerns me when I see someone’s first response to a unit test failure (even a test they’re just writing) is to run it through the debugger.  Which is not to say that I never do so, but for me it’s always a case of “I know this test failed because this variable wasn’t what I expected it to be.  What was it?”  Again, for me the debugger is a means for getting a specific piece of information rather than something I try to use to help me understand the code.  And that seems a much more efficient and effective way to get the code to do what I want it to do.

The more general case for turning to the debugger seems to be when one doesn’t understand the code.  It’s a little more understandable when trying to understand someone else’s code.  Even then, though, I’m not convinced that the debugger is the best option for trying to really understand what the code is doing.  In a case like this, I’d comment out the code and write unit tests to make me uncomment one small part at a time.  This forces me to really understand what the code is doing because setting up the tests correctly helps me look at the various conditions that affect the code. Gries’ techniques come into play here, too.  It’s unconscious for me now, but the ability to reason formally about the code helps lead me into each new test that will make me uncomment the smallest possible bit of code.
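A sketch of that comment-out-and-uncomment approach on an invented legacy function: imagine the body starting fully commented out and returning a dummy value, with each test below forcing one small piece back in.

```python
# Invented legacy function for illustration. Each inline comment marks
# the test that forced that branch to be uncommented.
def discount(customer_years, order_total):
    rate = 0.0
    if customer_years >= 5:   # restored to pass the loyalty test
        rate = 0.10
    if order_total > 1000:    # restored to pass the large-order test
        rate += 0.05
    return round(order_total * (1 - rate), 2)

# Each test makes exactly one condition necessary, so writing it forces
# you to understand that condition before restoring the code.
assert discount(1, 100) == 100.0     # baseline: no discounts apply
assert discount(6, 100) == 90.0      # loyalty branch
assert discount(1, 2000) == 1900.0   # large-order branch
assert discount(6, 2000) == 1700.0   # the two branches combine
print("all characterization tests pass")
```

By the time every line is uncommented, there is a test explaining why each condition exists – which is exactly the understanding the debugger walk-through would not have left behind.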

So, how do we learn to reason about our code rather than turning to the debugger as an oracle that will hopefully explain it?  Part of it may be the skills one learns in Gries’ Science, even if they’re not formally applied.  The stronger influence, however, may be the way I learned to practice TDD.  I do genuinely test-drive my code, and when I first learned TDD it was drilled into me to write my failing test and state why the test would fail.  After not actually writing one’s tests before the code, not asking that simple question is the biggest failure in how I’ve seen TDD practiced.  That might be the better way to learn to really reason about what one’s code is doing.  While I still respect and appreciate the techniques that Gries described in Science, it’s probably both easier and more efficient to learn the discipline of really writing tests before the code, asking why each test should fail, and thinking about it when a test fails differently than one expects.
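In practice the discipline looks something like this contrived example: state the prediction before running the new test, then write just enough code to make that stated failure go away.

```python
# Step 1: write the test and state the prediction before running it:
# "This will fail with NameError because slugify doesn't exist yet."
# If it fails any other way, stop and reason about the gap between
# your mental model of the code and the code itself.

def slugify(title):
    # Step 2: the smallest code that makes the stated failure go away.
    return title.strip().lower().replace(" ", "-")

# Step 3: green bar - and only now refactor if needed.
assert slugify("Hello World") == "hello-world"
assert slugify("  Spaces  ") == "spaces"
print("green bar")
```

The code here is trivial on purpose; the valuable part is the prediction in the comment, which is what forces the reasoning.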

As a developer, I tend to think of YAGNI in terms of code.  It’s a good way to help avoid building in speculative generality and over-designing code.  While I do sometimes think about YAGNI at the feature level, I have a tendency to view the decision to implement a feature as a business decision that should be made by the customer rather than something the developers should decide.  That’s never stopped me from giving an opinion, though!  Until recently, my argument against implementing a feature has generally been a simple argument about opportunity cost.  Happily, Martin Fowler’s recent post on YAGNI (http://martinfowler.com/bliki/Yagni.html) adds greatly to my understanding of all the costs that not considering YAGNI can add to a project.  It’s well worth a read whether you’re a developer, Product Owner, Scrum Master, or fill any other role.

In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.

In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.

My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.

At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.

At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:

  1. What did I do since the last stand-up?
  2. What will I do today?
  3. What is in my way?

Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.

You’re listening for:

  • Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
  • Whatever people say in response to (3)

Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?

Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way toward building trust. Look for intransigence from either the developer team or the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes the Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.

Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.

Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Manager and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.

Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.

When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.

So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.

If you’re a Product Owner:

  • Understand the value proposition not just of the entire system, but of each of its components and subsets.
  • Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.

If you’re a developer:

  • Adopt Agile Engineering techniques such as TDD, CI, CD, and so on. Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
  • Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
  • Be an active team member. Contribute to the success of the mission. Don’t just take orders.

If you’re a Scrum Master:

  • Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
  • Keep the team focused on the mission.
  • If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.

I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. A career .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo did something that only a handful of players have ever done in the long history of Major League Baseball – he played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player, and his value obviously wasn’t driven by his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s, because the unique quality he brought to his team is captured by a term from baseball lingo: “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BAs in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before the documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving onto another project entirely, I was told several months down the road, “’Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that many times I never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.

From a communications perspective, BAs collaborated up-front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager informed our team during a stressful phase, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing that this directive seemed entirely normal to me at the time.

Communication with testers seemed even rarer – by the very definition, on a Waterfall project, I’ve already passed my knowledge on to the testers – it’s now their responsibility. I’m more or less out of the loop. By the time the specific requirements are being tested, I’m already off onto an entirely new project.

In my personal opinion the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and some days with little or no collaboration with other team members outside of standard team meetings became a day to day, week to week, month to month grind, with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences), and I have found that the role of today’s Business Analyst has been significantly enhanced on teams practicing Agile methodologies and, more specifically, Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – I feel that Agile teams have enhanced the role of the Business Analyst by providing opportunities that were seemingly never available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player” – my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span not only functional areas of expertise (e.g. Project Management, Testing, etc.) but also the entire lifetime of a software development project (i.e. Project Kickoff to Implementation). In this sense, Agile BAs are not only more valuable to their respective teams, they are valuable for longer – basically, the entire lifespan of a project. I have seen specifically that Agile BAs can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

In a follow-up blog entry, we’ll elaborate on specifically how Agile BAs can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

I found Mike Cohn’s posting Don’t Blindly Follow very curious because it seems to contradict what many luminaries of the Agile community have said about starting out by strictly following the rules until you’ve really learned what you’re doing.

In one sense, I do agree with this sentiment of not blindly doing something.  Indeed, when I was younger, I thought it was quite clever of me to say things like “The best practice is not to follow best practices.”  But then I discovered the Dreyfus Model of Skill Acquisition, and that made me realize that there’s a more nuanced view.  In a nutshell, the Dreyfus model says that we progress through different stages as we learn skills.  In particular, when we start learning something, we begin by following context-free rules (a.k.a. best practices) and progress through situational awareness to “transcend reliance on rules, guidelines, and maxims.”  This resonates with me since I recognize it as the way I learn things, and I can see it in others when they are serious about something.  (To be fair, there are people who don’t seem to fit into this model, but I’m okay with a model that’s useful even if it doesn’t cover every possibility.)  So, I would say that we should start out following the rules blindly until we have learned enough to recognize how to helpfully modify the rules that we’ve been following.

Cohn concludes with another curious statement: “No expert knows more about your company than you do.”  Again, there’s a part of me that wants to agree with this, but then again…  An outsider could well see things that an insider takes for granted and have perspectives that allow them to come to different conclusions from the information that you both share.

I find myself much more in sympathy with Ron Jeffries’ statements in The Increment and Culture: “Rather than change ourselves, and learn the new game, we changed Scrum. We changed it before we ever knew what it was.”  His approach seems like it would offer a much better opportunity to really learn the basics before we start changing things to suit ourselves.

As I mentioned in the introductory post in this series, an issue I frequently see with underperforming Agile teams is that work always spills over from one iteration into the next. Nothing ever seems to finish, and the project feels like a waterfall project that’s adopted a few Agile ceremonies. Without completed tasks at iteration planning meetings (IPMs), there’s nothing to demo, and the feedback loop that’s absolutely fundamental to any Agile methodology is interrupted. Rather than actual product feedback, IPMs become status meetings.

In this post, let’s look at how a Product Owner can help ensure that tasks can be completed within an iteration. As a developer myself, I’m focusing on the things that make life easier for the development team. Of course, there’s a lot more to being a good Product Owner, and I encourage you to consider taking a Scrum Product Owner training class (CC Pace offers an excellent one).

First and foremost, be willing to compromise and prioritize. An anti-pattern I observe on some underperforming teams is a Product Owner who’s asked to prioritize stories and replies, “Everything is important. I need everything.” I advise such teams that when you ask for all or nothing, you will never get all; you will always get nothing. So don’t ask for “all or nothing”, which is what a Product Owner is saying when they say everything has a high priority.

As a Product Owner, understand two serious implications of saying that every task has a high priority.

  1. You are saying that everything has an equal priority. In other words, to the developer team, saying that everything has a high priority is indistinguishable from saying that everything has medium or indeed low priority. The point of priority isn’t to motivate or scare the developers, it’s to allow them to choose between tasks when time is pressing. You’re basically leaving the choice up to the developer team, which is probably not what you had in mind.
  2. You’re really delegating your job to the developer team, and that isn’t fair. You’re the Product Owner for a reason: the ultimate success of the product depends on you, and you need to make some hard choices to ensure success. You know which bits of the system have the most business value. The developers signed on to deliver functionality, not to make decisions about business value. That’s your responsibility, and it’s one of the most important responsibilities on the entire team.

The consequence of “everything has a high priority” is that the developers have no way to break epic stories down into smaller stories that fit within an iteration. Everything ends up as an epic, and the developer team tends to focus on one epic after another, attempting to deliver complete epics before moving on to the next. It’s almost certain that no epic can be completed in one 2-week iteration, and so work keeps on spilling over to the next iteration and the next and the next.

Second, work with the developer team to find out how much of each story can be delivered in an iteration. Keep in mind that “delivered” means that you should be able to observe and participate in a demo of the story at the end of the iteration. Not a “prototype”, but working software that you can observe. Encourage the developers to suggest reduced functionality that would allow the story to fit in an iteration. For example, how about dealing only with “happy path” scenarios – no errors, no exceptions? Deal with those edge cases in a later iteration. At all costs, move towards a scenario where fully functional (that is, actually working) software is favored over fully featured software. Show the team that you’re willing to work with the evolutionary and incremental approach that Agile demands.
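One way a team can make that agreement concrete is to keep the deferred edge cases visible as skipped tests rather than silent gaps. A minimal sketch, with an invented story and names, using Python’s unittest skip marker:

```python
import unittest

def transfer(balance, amount):
    # Iteration 1: happy path only. Overdraft handling is a deferred
    # story the Product Owner has explicitly pushed to a later iteration.
    return balance - amount

class TransferHappyPath(unittest.TestCase):
    def test_sufficient_funds(self):
        self.assertEqual(transfer(100, 30), 70)

class TransferEdgeCases(unittest.TestCase):
    @unittest.skip("deferred to a later iteration by the Product Owner")
    def test_overdraft_is_rejected(self):
        self.fail("not implemented yet")

# Run both suites: the skip shows up in every test report, so the
# deferred scope stays visible instead of being silently forgotten.
suite = unittest.TestSuite([
    unittest.defaultTestLoader.loadTestsFromTestCase(TransferHappyPath),
    unittest.defaultTestLoader.loadTestsFromTestCase(TransferEdgeCases),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, len(result.skipped))  # 2 1
```

The skipped test is the written-down version of “deal with those edge cases in a later iteration”: working software now, and an auditable reminder of what was cut.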

Third, be wary of stories that are all about technical infrastructure rather than business value. Sure, the development team very often needs to attend to purely technical issues, but ask how each such story adds business value. You are entitled to a response that convinces you of the business need to spend time on the infrastructure stories.

At the end of a successful IPM, you as the Product Owner should have:

  • Seen some working software – remember, fully functional but perhaps not fully featured
  • Offered the developer team your feedback on what you saw
  • Worked with the developer team to have another set of stories, each of which is deliverable within one iteration
  • Prioritized those stories into High, Medium, and Low buckets, with the mutual understanding that nobody will work on a medium story if there are high stories remaining, and nobody will work on low stories if there are medium stories remaining
  • A clear understanding of why any technical infrastructure stories are required, and what business problem will be addressed by such stories
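The bucket rule above is simple enough to express as code. Here's a minimal sketch (the function and field names are hypothetical, not from any real tool) of how "nothing from a lower bucket while a higher bucket has work left" plays out:

```python
# Sketch of the High/Medium/Low bucket rule: nobody picks up a story
# from a lower bucket while a higher bucket still has stories in it.

PRIORITY_ORDER = ["high", "medium", "low"]

def next_story(backlog):
    """Return the next story to pick up, honoring the bucket rule."""
    for bucket in PRIORITY_ORDER:
        candidates = [s for s in backlog if s["priority"] == bucket]
        if candidates:
            return candidates[0]
    return None  # backlog is empty

backlog = [
    {"name": "low-value tweak", "priority": "low"},
    {"name": "checkout flow",   "priority": "high"},
    {"name": "report export",   "priority": "medium"},
]
print(next_story(backlog)["name"])  # → checkout flow
```

The point of the rule isn't the code, of course; it's that the whole team shares one unambiguous answer to "what should I work on next?"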

Finally, make yourself available for quick decisions during the iteration. No plan survives its first encounter with reality. There will always be questions and problems the developers need to talk to you about. Be available to talk to them, face to face, or with an interactive medium like instant messaging or video chat. Be prepared to make decisions…

Developer: “Sorry, I know we said we could get this story done this iteration, but blahblah happened and…”

You: “OK, how much can you get done?”

Developer: “We would have to leave out the blahblah.”

You: “Fine, go ahead.” (Or, “No, I need that, what else can we defer?”)

Be decisive.

Next post, I’ll talk about what developers should concentrate on to make sure some functionality is delivered in each iteration.

One of the basic Agile tenets that most people agree on – whether or not you’re an Agile enthusiast – is the value of early and continuous customer feedback. This early-and-often feedback is invaluable in helping software development projects adapt quickly to unexpected changes in customer demands and priorities, while concurrently minimizing risk and reducing ‘waste’ over the lifetime of a project.

Sprint Demo
In the Agile world, we traditionally view this customer feedback in the form of a formal product demonstration (i.e. sprint demo) which would occur during the closeout of a Sprint/Iteration. In short, the “team” – those building the software – presents features and functionality they’ve built in the current sprint to the “customer” – those who will be users or sellers of the product. Even though issues may be uncovered and discussed – usually resulting in action items for the following sprint – the ideal end result is both the team and the customer leaving the demo session (and in theory, closing the sprint) with a significant level of confidence in the current state and progress of the project. Sprint demos are subsequently repeated as an established norm over the project’s duration.

This simple, yet valuable Agile concept of working product demos can be expanded and applied effectively to other areas of software development projects:

  • Why not take advantage of the proven benefits of product demos and actually apply them internally – within the actual development team – before the customer is even involved?
  • Let’s also incorporate product demos more often – why wait two weeks (or sometimes even a month, depending on sprint duration)? We can add value to a software development project daily, if needed, without even involving the customer.

Expand the Benefit of Demos
One of the common practices where I have seen first-hand the effectiveness of product demos involves daily deployment of code into a testing or “QA” environment. Here’s the scenario:

  • Testing uncovers five (5) “non-complex” defects (i.e. defects which were easily identified in testing – usually GUI-type defects including typos, navigation or flow issues, table alignments, etc.) which are submitted into a bug-tracking tool. Depending on the bug-tracking tool employed by the team, this process is sometimes quite tedious.
  • Defects 1-5 are addressed by the development team and declared fixed (or “ready to test”) and these fixes are included in the latest deployment into a QA environment for retesting.
  • Defects 1-5 are retested. Defects #1, #2, and #5 are confirmed fixed and closed.
  • But there is a problem – it’s discovered EARLY in retesting that Defects #3 and #4 are not actually fixed and additionally, the “fixes” have now resulted in two additional defects – #6 and #7. For purposes of explanation, these two additional defects were also easily identified early in the retesting process.
  • At this point, Defects 3-4 need to be resubmitted, with new Defects – #6 and #7 – also added to the tracking tool.

Where are we at this point? In summary, all of this time and effort has resulted in closing out one net defect, and we’ve essentially lost a day’s worth of work. Not to mention, developers are wasting time fixing and re-fixing bugs (and in many cases, becoming increasingly frustrated) and not contributing to the team’s true velocity. Simultaneously, testers are wasting time retesting and tracking easily identifiable defects, thereby increasing risk by reducing the time they have to test more complex code and scenarios.
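To make the "one net defect" arithmetic concrete, here is a small tally of the cycle described above (a sketch; it assumes defect #5 also passed retest, since the scenario doesn't say otherwise):

```python
# Tally of the retest cycle described above.
open_defects = {1, 2, 3, 4, 5}   # submitted after the first test pass

confirmed_fixed = {1, 2, 5}      # assumption: #5 also passed retest
still_broken = {3, 4}            # the "fixes" did not hold
introduced = {6, 7}              # new defects caused by the fixes

open_defects = (open_defects - confirmed_fixed) | introduced
print(open_defects)              # {3, 4, 6, 7}
print(5 - len(open_defects))     # net defects closed: 1
```

Five defects' worth of fixing, deploying, and retesting, and the open count only dropped by one.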

So there is our conundrum. In a nutshell, we’re wasting time and team members are unhappy. Check back for a follow-up post next week, which will provide a simple yet effective solution to this unfortunately all-too-common issue in many of today’s software development practices.


Ron Jeffries recently posted an article on writing code for “The Diamond Problem” using TDD in response to an article by Alistair Cockburn on Thinking before programming.  Cockburn starts out:

“A long time ago, in a university far away, a few professors had the idea that they might teach programmers to think a bit before hitting the keycaps. Nearly a lost cause, of course, even though Edsger Dijkstra and David Gries championed the movement, but the progress they made was astonishing. They showed how to create programs (of a certain category) without error, by thinking about the properties of the problem and deriving the program as a small exercise in simple logic. The code produced by the Dijkstra-Gries approach is tight, fast and clear, about as good as you can get, for those types of problems.”

To which Ron responds:

“I looked at Alistair’s article and got two things out of it. First, I understood the problem immediately. It seemed interesting. Second, I got that Alistair was suggesting doing rather a lot of analysis before beginning to write tests. He seems to have done that because Seb reported that some of his Kata players went down a rat hole by choosing the wrong thing to test.

“Alistair calls upon the gods, Dijkstra and Gries, who both championed doing a lot of thinking before coding. Recall that these gentlemen were writing in an era where the biggest computer that had ever been built was less powerful than my telephone. And Dijkstra, in particular, often seemed to insist on knowing exactly what you were going to do before doing it.

“I read both these men’s books back in the day and they were truly great thinkers and developers. I learned a lot from them and followed their thoughts as best I could.”

Ron’s article goes on to solve the same programming problem with TDD and no particular up-front thinking that Cockburn solved in what he calls a combination of “the Dijkstra-Gries approach” and TDD.  On the whole, I would tend more towards the pure TDD approach that Ron takes because it got him feedback earlier and more frequently, while Cockburn’s approach, with more upfront thinking, didn’t provide him any feedback until he really did start writing his tests.  If Cockburn had gone down a blind alley with his thinking, he wouldn’t have gotten any concrete feedback on it until much later in the game.

But that’s not what I actually want to think about.  I did read both Dijkstra’s A Discipline of Programming and Gries’ The Science of Programming “back in the day” as well (last read in 1991 and 2001, respectively, although it was a second reading of Gries; I remember finding Dijkstra almost impossible to understand, but I did keep it, so it may be time to try it again), but I didn’t remember the emphasis on up front thinking that both Ron & Cockburn seemed to claim for them.  I dug out my copies of both books and did a quick flip through both of them, and I still feel that the emphasis is much more on proving the correctness of one’s code rather than doing a lot of up-front thinking.  I’d previously had the feeling that there was a similarity between Gries’ proofs and doing TDD.  As I poke around in chapter 13 of Gries’ book, where he introduces his methodology, I find myself believing it even more strongly.

Gries starts out asking “What is a proof?”  His answer?

“A proof, according to Webster’s Third New International Dictionary, is ‘the cogency of evidence that compels belief by the mind of a truth or fact.’ It is an argument that convinces the reader of the truth of something.

“The definition of proof does not imply the need for formalism or mathematics.  Indeed, programmers try to prove their programs correct in this sense of proof, for they certainly try to present evidence that compels their own belief.  Unfortunately, most programmers are not adept at this, as can be seen by looking at how much time is spent debugging.  The programmer must indeed feel frustrated at the lack of mastery of the subject!”

Doesn’t TDD provide that for us, at least when practiced correctly?  Oh, and the first principle that Gries gives in this chapter is: “A program and its proof should be developed hand-in-hand, with the proof usually leading the way.”  Hmm, sounds familiar, no?

Admittedly, Gries does speak out against what he calls “test-case analysis:”

“‘Development by test case’ works as follows.  Based on a few examples of what the program is to do, a program is developed.  More test cases are then exhibited – and perhaps run – and the program is modified to take the results into account.  This process continues, with program modification at each step, until it is believed that enough test cases have been checked.”   

On the face of it, this does sound like a condemnation of TDD, but does it really represent what we do when we really practice TDD?  Sort of, but it overlooks the critical questions of how we choose the test cases and the speed at which we can get feedback from them.  If we’re talking about randomly picking a bunch of test cases and getting feedback from them in a matter of hours or days, then I’d agree that it would be a poor way to develop software.  When we’re practicing TDD, though, we should be looking for that next simplest test case that helps us think about what we’re doing.  Let’s turn to Gries’ “Coffee Can Problem” as an example.

“A coffee can contains some black beans and some white beans.  The following process is to be repeated as long as possible.

“Randomly select two beans from the can.  If they have the same color, throw them out, but put another black bean in.  (Enough extra black beans are available to do this.)  If they are different colors, place the white one back into the can and throw the black one away.

“Execution of this process reduces the number of beans in the can by one.  Repetition of the process must terminate with exactly one bean in the can, for then two beans cannot be selected.  The question is: what, if anything, can be said about the color of the final bean based on the number of white beans and the number of black beans initially in the can?”

Gries suggests we take ten minutes on the problem and then goes on to claim that “[i]t doesn’t help much to try test cases!”  But the test cases he enumerates are not the ones we’d likely try were we trying to solve this with TDD.  He suggests test cases for a black bean and a white bean to start with and then two black beans.  Doing TDD, we’d probably start with a single bean in the can.  That’s really the simplest case.  What’s the color of the final bean in the can if I start with only a single black bean?  Well, it’s black.  And it’s going to be white if the only bean in the can is white.  Okay, what happens if I start with two black beans?  I should end up with a black bean.  Two white beans wouldn’t make me change my code, so let’s try starting with a black and a white bean.  Ah, I would end up with a white bean in that case.  Can I draw any conclusions from this?  
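That sequence of test cases translates almost directly into code. Here's a sketch (the function `final_bean` is my own name for it, not anything from Gries) that simulates the can and runs exactly the cases above, simplest first:

```python
import random

def final_bean(blacks, whites):
    """Simulate the coffee-can process until one bean remains."""
    can = ["black"] * blacks + ["white"] * whites
    while len(can) > 1:
        random.shuffle(can)          # "randomly select two beans"
        a, b = can.pop(), can.pop()
        if a == b:                   # same color: both out, one black in
            can.append("black")
        else:                        # different colors: the white goes back
            can.append("white")
    return can[0]

# The TDD-style test cases from the text, simplest first:
assert final_bean(1, 0) == "black"   # a single black bean
assert final_bean(0, 1) == "white"   # a single white bean
assert final_bean(2, 0) == "black"   # two blacks -> one black
assert final_bean(1, 1) == "white"   # one of each -> the white survives
```

Each of these cases is small enough that the outcome is forced regardless of the random selection, which is exactly what makes them good first tests: cheap to write, instant to run, and each one nudges us toward the next question.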

I did actually think about those test cases before I read Gries’ description of his process:

“Perhaps there is a simple property of the beans in the can that remains true as the beans are removed and that, together with the fact that only one bean remains, can give the answer.  Since the property will always be true, we will call it an invariant.  Well, suppose upon termination there is one black bean and no white beans.  What property is true upon termination, which could generalize, perhaps, to be our invariant?  One is an odd number, so perhaps the oddness of the number of black beans remains true.  No, this is not the case, in fact the number of black beans changes from even to odd or odd to even with each move.”

There’s more to it, but this is enough to make me wonder if this is really different from writing a test case.  Actually it is: he’s reasoning about the problem, and by extension the code he’d write.  But he is still testing his hypotheses – it’s just in his head rather than in code.  And there I would suggest that TDD, as opposed to using randomly selected test cases, allows us to do that same kind of reasoning with working code and extremely rapid feedback.  (To be fair, I believe this is what Ron was saying, too.  I just want to highlight the similarity to what Gries was saying, while Ron seems to be suggesting more of a difference.)
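That rapid feedback also makes invariant-hunting cheap. As a sketch (the invariant claim here is mine, not quoted from Gries): the number of white beans only ever changes by removing two at once, so its parity never changes, and the last bean is white exactly when the starting white count is odd. A few lines of code test that hypothesis against hundreds of random starting mixes in well under a second:

```python
import random

def final_bean(blacks, whites):
    """Simulate the coffee-can process until one bean remains."""
    can = ["black"] * blacks + ["white"] * whites
    while len(can) > 1:
        random.shuffle(can)
        a, b = can.pop(), can.pop()
        can.append("black" if a == b else "white")
    return can[0]

# Hypothesis: the parity of the white count is invariant,
# so the final bean is white iff we start with an odd number of whites.
random.seed(0)
for _ in range(200):
    b, w = random.randint(0, 10), random.randint(0, 10)
    if b + w == 0:
        continue  # need at least one bean in the can
    expected = "white" if w % 2 == 1 else "black"
    assert final_bean(b, w) == expected
```

Gries arrives at his invariant by pure reasoning; the point here is only that executable tests let us check each candidate invariant against the real process as fast as we can think of them.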

What might get lost in TDD, at least when it’s not practiced well, is that idea of reasoning about the code.  There’s an art to picking that next simplest test to write, and I suspect that that’s where much of the reasoning really happens.  If we write too much code in response to a single test, we’re losing some of the reasoning.  If we write our tests after the code, we’ve probably lost it entirely.  And that’s something I do believe is lacking in many programmers today, evidenced, as Gries suggests, by the amount of time spent fumbling around in debuggers and randomly adding code “to see what will happen.”  But that’s for another time.