An Agile Coach’s Guide to Managing Pride, Ego and Personal Growth
As a ScrumMaster and Agile Coach, one of my favorite tools to use is Mural. Mural is a digital whiteboard designed for engaging collaborations. It’s a creative outlet for me; it’s like scrapbooking for a job. I like to think that I’m pretty good at using the application. If I’m being honest, I think I’m really good at using Mural. I usually get compliments on the creativity, layout, and functionality of my Mural boards. I pride myself on being knowledgeable and helpful in Mural, which is a part of my Personal Brand.
Recently I was in a Thursday ScrumMaster Community of Practice (CoP) where we were discussing Mural, and I shared some of my examples. An Agile Coach said, “Tracy – if you want to help me dress this one up, I’d be happy to see what you can do!” I felt honored and excited, especially since I had been consulting at the company for less than two months. I had the drive and desire to do my best work and build a positive reputation for myself as an informed, helpful, and creative resource.
That night I reworked his Mural. I created a Lego Theme that focused on Building Foundations Together, with colored Lego Brick backgrounds for the different topics. I was really excited about what I had made. I shared it first thing in the morning, expecting positive feedback. The instant gratification! However, the Agile Coach was out of the office that Friday.
On Monday, I received his response: “You were using Areas and Shapes inconsistently.” Wait, what?! It wasn’t the reaction I was expecting, and a meeting was scheduled. I didn’t even know what he meant by “Areas”. My ego definitely took a hit.
I prepared myself for the meeting with the Agile Coach. I felt like a student in detention. I felt like I was in trouble. I expected him to be upset with me because, after all, I had ruined the functionality of his Mural.
Instead of a meeting, it was a working session. I learned about functionality I never knew existed. He showed me where, when, and how to use Mural Areas. He was not upset at all in the working session; he knew I had the best intentions.
We learned that even though we’re both passionate and knowledgeable about Agile, it was easy to see how quickly we fell back on old patterns. We needed to align first on what we were trying to accomplish. We also brainstormed from different personas, which created a variety of solutions. Those perspectives made the Mural and its instructions more valuable and understandable.
I was humbled by this experience.
I can be, and still am, proud of the work I did.
If I hadn’t changed my defensiveness to a Growth Mindset, I wouldn’t have gotten the gift of learning something new.
If I had gone into the meeting with negativity, I wouldn’t have seen the Coaching Demo that happened right in front of my eyes.
In my reflection, I realized I needed to put my ego aside to grow.
I did that by practicing the following:
- Change to a Growth Mindset. Embrace challenges and learn from feedback.
- Think of every opportunity as a coachable moment. Imagine what your day would look like if you went into every situation thinking, “What am I going to learn here?” How would the atmosphere and culture change?
- Give grace. Trust that everyone was trying to do the best they could at the time – including yourself.
- Have Gratitude. Gratitude greatly reduces negativity and anxiety. It shifts a focus from thinking of yourself to others. When we have gratitude, what we appreciate grows.
- Pay it forward. Paying it forward is the greatest compliment you can give your mentor. Don’t just share the information that you received but pass on the learning environment. Create a space of psychological safety, patience, and understanding.
In conclusion, I still pride myself on being knowledgeable and helpful in Mural. I’m practicing a Growth Mindset and trying to embrace every situation as a coachable moment. I’m grateful for being mentored, and in return, I want to become a mentor. But what I really learned through the experience and reflection was:
You can have pride in your career, but to continue your growth journey you need to practice removing the ego.
Another great part of this story is that the mentor/mentee relationship didn’t end at that working session. We continue to collaborate, ask for opinions, and share knowledge. We continue to learn together. Now that’s what I call Building Foundations Together.
Part 1 of a two-part series
Effective communication remains at the very heart of team efficiency. Entire business models are based on improving team communications; look no further than Salesforce’s acquisition of Slack. Microsoft Teams, Basecamp, SharePoint, Zoom, Webex, instant messenger, and text are just a few of the frequently used communication tools in the workplace. Meeting facilitation is now a ‘service offering’ – take a moment and search LinkedIn… perhaps you’ll get more than the 7,700 results that I returned!
Yet here we are in this post-COVID, hybrid/remote work environment, and one of the most effective and proven communication channels has been cast aside. COVID moved most technology work into the home more or less permanently, and overscheduled meetings proliferated; we thanked higher powers when a meeting ended gracefully at 50 minutes past the hour, leaving a few minutes to check on the kids, feed the whining dog, or run to the bathroom—really anything other than staring at an array of faces staring back at their screens. All of this also bred the feeling that, with schedules so solidly stacked, the last thing that made sense was to simply call someone. It felt intrusive and presumptuous.
While COVID remains in circulation, the work world is transitioning, with some firms fully remote forever, others hybrid with overlap-day mandates, and others fully back to pre-pandemic norms. At CC Pace, we remain convinced that flexibility is critical to our employees’ success, but we are also strong believers that face-to-face time is time well spent. I’m reminded of my first job in technology, working for one of the Big 4 firms; at my first project, if I hadn’t had the opportunity to strategically place myself at a senior developer’s desk as his day started to ask a few questions and get a few answers, I’d have been sunk. The informal communication channel is critical—and I couldn’t imagine posting my question to him on Teams (even if it had existed back then), nor would he have answered! So how can that experience be replicated within the modern hybrid/remote work environment?
Virtual stand-ups remain useful. Pair programming has its place. Yet we see questions left overnight out of a desire not to bother a fellow developer. One of the most basic behaviors our Agile coaches are pressing for is pushing developers to ‘pick up the phone and call’. This deference has a huge cost on team productivity as “stuck developer time” ticks away to no effect.
This is Sachin, picking up with the perspective of a Senior Scrum Master and Agile coach. I have seen a shift in mindset, with team members content to complete just their part of a user story. It’s more of a “throw it over the wall” kind of mindset. I usually relate it to a very popular British party game we used to play as kids: “Pass the parcel.” The parcel, which is the user story, is passed on from one person to another until the music stops, which is when the sprint ends. The result is an incomplete story that just moves on from one sprint to another. The concerning part is that team members are not willing to take responsibility for a user story. A simple question, such as “Can you finish the user story in a sprint?”, goes unanswered. Take, for example, a backlog refinement session: this session is productive when team members communicate and ask questions. But when team members treat the session as a waste of time and just want to get it over with, assumptions are made without proper analysis, leading to improper sizing and stories remaining “not done”. To some extent, teams and programs have started to blame the very process for missing a deadline.
Effective interactions remain paramount for Agile teams. Text, email, and other one-way transmissions may be efficient, but they are not always effective.
In Part II of this series, Mike Wittrup will talk about a framework that CC Pace uses to assess and improve team communication protocols in the co-working world that we all live in.
In the meantime, we remain at your service in providing tailored approaches to driving business agility. For over 20 years we’ve earned clients’ trust across all stages of the Agile journey. Give us a call!
With the World Cup taking over the headlines, we couldn’t miss an opportunity to bring two of our favorite topics at CC Pace together: sports and Agile. As Team USA gears up to take on the Netherlands, here’s a little history on the unique style of soccer the Dutch created and what Agile teams can learn from their success.
In the 1970s, the Dutch dominated their international counterparts by using a style of soccer they called totaalvoetbal or total football. Total football requires each player on the team to be comfortable and adept enough to switch positions with any other player on the field at any time. The Dutch required the goalkeeper to remain in a fixed position, but everyone else was fluid and able to become an attacker, defender, or midfield player when the play dictated it. Whenever a player moved out of position, another player replaced them. Successively, all other players on the team shifted their positions to maintain the team structure. In modern soccer, we call this collective team behavior compensatory movement. All teammates compensate and adjust to each other’s actions.
This philosophy helped create teams without points of weakness that their opponents could exploit.
Totaalvoetbal only worked because players trained to develop the skills needed to play all positions. Each player was a specialist in a certain position or role, such as striker or center defense, but was also quite competent playing other roles on the team.
In the Agile world, this can be applied to the makeup of Scrum teams. Scrum teams that are self-sufficient because of their fluidity tend to be the most productive and dependable. If Scrum teams are composed of team members with “T-shaped” skills, then there will always be team members who can fill in for others when needed.
People with T-shaped skills have a deep level of skill and expertise in one area and a lower level of expertise across many other areas. When Scrum teams are composed of team members with T-shaped skills, it helps to ensure that all work can be completed within the team. It also means that productivity is less likely to drop when a team member is out of the office because others can roll up their sleeves and help get the job done.
Cross-training and pair programming are great ways to help develop team members with T-shaped skills. Pair programming is an Agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.
T-shaped skills are not developed by accident but rather intentionally. Careful planning and a thoughtful, proactive approach by the individual and their manager are crucial. A manager must understand the value of investing in the development of people. Cross-training, stretch assignments, training opportunities, shadowing, and pair programming are all excellent methods for developing additional skills that allow for compensatory movement and fluid teams, yet, in some ways, represent short-term reductions in individual productivity. Managers must make this short-term investment to see the long-term value and score more goals.
In a previous blog post, we covered the top 10 challenges organizations face when adopting Agile. In this article, we’ll dive deep into one of these challenges: “Inadequate management support and sponsorship.” We will explore why organizations face this challenge and what can be done to address it.
Lack of management support is one of the most widespread and yet hardest-to-uncover challenges. Agile transformation requires an enterprise-wide shift in mindset and culture, and this requires buy-in from all levels of the organization; otherwise, the entire effort can lose alignment.
With Agile transformations, middle management is usually caught, well, in the middle. While expected to embrace the significant amount of change they will be experiencing in their own roles and responsibilities, they are also expected to be the torchbearers and lead their teams through the transformation.
Traditional management in a non-Agile culture operates largely in a command-and-control mode. It’s how they make decisions, forecast, and get work done. Agile principles strive to decentralize some of that control in favor of building self-organized and autonomous teams. This factor directly threatens management’s perception of control and can make them feel a loss of power. The root of this problem can sometimes trace back to a poorly executed change management strategy, or lack thereof. Agile is not just a small incremental change in the way organizations plan and deliver, but a fundamentally different way altogether that requires careful planning and execution to accomplish.
Below are some of the indicators that show management is not fully aligned and supportive:
- Teams do not get enough support from management when delivery challenges manifest
- Management puts a greater emphasis on following the practices and mechanics than the mindset
- There is a lack of psychological safety in the organization
- Management overvalues Agile metrics and dashboards, leading to an increase in Agile antipatterns
- Agile coaches struggle to coach and influence teams
- Key Agile tenets get ignored in favor of the sponsor’s wishes, e.g., artificial deadlines
To prevent this issue from becoming a challenge in the first place, it is important to acknowledge and plan for it while developing the transformation strategy. Here are a few ways to address this challenge:
- Address the “why”. Organizations can prevent resistance from happening by making sure management understands the “why” behind the change. Leadership needs to invest in and sponsor education and awareness campaigns to make sure people managers understand and align with the spirit of Agile and are able to effectively articulate the value expected from the transformation.
- Engage and recruit management as advocates of change. Change is hard, and advocating for change requires skill. If we want our management to be those advocates, it is necessary to ensure they have the right knowledge and training to play their part. Key members of management should be strategically identified and trained for advocacy early in the transformation.
- Answer “What’s in it for me?”. While it might be well understood that Agile helps organizations deliver value early and often, it is human nature to seek personal benefit. Ensure managers get a clear understanding of their future, feel secure from a career and work-life standpoint, and are able to see the benefits they stand to receive if the transformation goes well.
- Let managers know that you have their back. Agile transformations are always full of ups and downs. And the ‘downs’ ideally should be learning opportunities. However, if management is held to their old expectations and they feel they will be penalized for failures as a result of the new delivery approach, they will be less inclined to support the transformation because of potential negative impacts. Leadership needs to understand and acknowledge how ‘fail fast’ manifests itself when using Agile, and ensure management understands that they won’t be penalized when failures do occur.
- Create a ‘Growth Roadmap’. Just like “what’s in it for me” in the short term, it is important to address long-term personal benefits. As part of change management, ensure there is a clear understanding of how people will advance in their new roles over the next 3, 5, or 8 years; what resources they will have for gaining new knowledge and credentials; and which career paths they can pursue. Leadership should explain to middle management how to be successful in an Agile culture, and ensure managers are recognized and rewarded for the new behaviors leadership wants to see in them.
There is a lot that can be discussed about this challenge, and I would love to hear more from you. Have you faced similar challenges? What did you do to address them? At CC Pace, we love to engage with the community and provide value wherever possible. Reach out, we’d love to help.
Does your Agile Transformation feel like it is stuck in mud? Maybe it is facing one of the many challenges organizations must navigate as part of their transformation.
According to the 15th State of Agile Report, the top 10 Agile challenges (Digital.ai, 2021) are:
The impact of each of these can be detrimental to any Agile Transformation. The report suggests that some organizations face many of these challenges at once. The survey notes that these results have been unchanged over several years. As we look across these top 10, we find most of them relate to the organization’s cultural aspects, which means that trying to improve a team’s specific use of an Agile method like Scrum is less important than trying to improve the overall Agile Culture of the organization.
For a different view into why Agile transformations are challenged, the article How to mess up your agile transformation in seven easy (mis)steps (Handscomb et al., 2018), by a McKinsey team, offers yet another way to look at the challenges organizations face. The seven missteps are:
- Not having alignment on the aspiration and value of an agile transformation
- Not treating agile as a strategic priority that goes beyond pilots
- Not putting culture first over everything else
- Not investing in the talents of your people
- Not thinking through the pace and strategy for scaling up beyond pilots
- Not having a stable backbone to support agile
- Not infusing experimentation and iteration into the DNA of the organization
If your organization is facing any of these challenges, or you’ve made any of the “missteps” identified by the McKinsey team, your Agile adoption may be at a stall. If you’re experiencing multiple challenges, it may be time to get some help.
When things aren’t going well with your transformation the impacts are felt at the team and organization levels. Next, we’ll look at some of these impacts.
Team Impacts
- Team members feel frustrated and demotivated about working in an Agile environment when they aren’t supported through the transition by managers, each other, and the culture of the organization.
- Team members are unable to deliver quality increments of work due to a lack of consistent processes and tools.
- Team deliveries are hindered by defects.
- Team members struggle to learn and implement new Agile methods, or what they learn does not stick.
- Team members don’t go beyond process to learn delivery practices that improve quality, like TDD or DevOps.
- Team members don’t work cross-functionally and instead keep silos of expertise.
- Team members struggle to coordinate across teams to remove dependencies.
- Team members burn out due to working at an unsustainable pace.
- Team member turnover is high.
- Stakeholders don’t see the value in participating with the team members doing the work, hindering the team’s ability to get real-time feedback and continuously improve.
- Management fails to let the team self-organize, taking away team ownership of and accountability for the work. Or, even worse, they micromanage the team, their backlog, or even how they do the work.
- Teams find it difficult to release rapidly, hampering innovation and increasing time to market.
- Team members aren’t engaged in learning new ways of working.
In addition to the team impacts, there are several organizational impacts that can occur.
Resolving the challenges
In looking at the data, it is imperative that organizations address these challenges head-on. If you’re seeing any of the impacts listed above, you may wonder what to do to alleviate them. One of the biggest mistakes is not incorporating change management as part of an Agile Transformation. Leadership needs to help everyone in the organization understand the new culture underlying how they all work together. It requires leaders and doers alike to learn about Agile values and behaviors. Organizations must invest in training and education at all levels so that learning becomes part of the culture. How do we expect people to become Agile if we don’t teach them what Agile really is?
“What Are the Greatest Contributors to Success When Integrating Change Management and Agile?” According to Tim Creasey (Creasey, n.d.), they are:
- Early engagement of a change manager
- Consistent communication
- Senior leader engagement
- Early wins
Regardless of the Agile Methods you are trying to implement, an organization must start with changing its organizational culture, and the best way to do this is to start with change management. All the efforts of teaching and coaching will fail without the culture change to support the new Agile ways of working. Finally, engage the entire organization, not just IT, in the adoption of Agile practices and behaviors. While a pilot team will help test out a specific Agile method, their work with other members of the organization requires everyone to understand what it means to be Agile. Agile is for everyone, not just IT. A comprehensive change management program will help ensure the entire organization becomes focused on becoming Agile.
Watch for future blogs where we delve into these challenges and what CC Pace can do to help you solve them.
In today’s digital business landscape, businesses are looking for solutions to be more competitive. Some digital businesses have found the Scrum Framework to be that competitive edge, with its foundation of empiricism. Empiricism uses knowledge based on experience to make decisions. In the Scrum framework, empiricism is reflected in the three pillars of transparency, inspection, and adaptation. Scrum’s Evidence-Based Management (EBM) is an empirical approach that supports organizations in demonstrating business outcomes, organizational capabilities, and business results. In the simplest terms, EBM helps organizations navigate complex problems by taking intentional small steps and adapting through continuous feedback loops. (For more on complexity, see the Scrum Theory section of the Scrum Guide at https://www.scrumguides.org/scrum-guide.html)
Scrum’s Evidence-Based Management approach allows organizations to avoid traps like metrics turning into milestone deliveries or project task activities. EBM offers the digital organization an opportunity to experiment and move toward value through these areas:
Key Value Areas: Current Value, Unrealized Value, Time to Market, and Ability to Innovate.
(Figure: the four Key Value Areas. Source: Scrum.org, 2020)
Let’s look at them and see if your organization might benefit from using them.
First Value Area – Current Value
What is the purpose?
The organization reflects on the value it currently delivers to customers or stakeholders. This is strictly the current state, not a future state.
How does an organization identify it?
EBM frames this value as the happiness of customers, stakeholders, and employees.
It can be very simple: measure how happy the customers, stakeholders, and employees are. Is their happiness up or down? The important aspect here is to get a baseline and then, at regular intervals, check in that the current state is moving towards the desired state.
Potential examples:
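- A few illustrative measures, drawn from the EBM guide’s Key Value Measures: customer satisfaction, employee satisfaction, revenue per employee, and product cost ratio.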
Second Value Area – Unrealized Value
What is the purpose?
The organization reflects on the future value it could provide to its customers, stakeholders, or employees. How does that compare with the current value? The difference is the potential unrealized value.
How does an organization identify it?
EBM frames this value as understanding ALL the desired needs, or the future state, that customers, stakeholders, and employees would like to have in order to be very, very happy. It allows the organization to evaluate where finances are being expended.
Potential examples:
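- A few illustrative measures, drawn from the EBM guide’s Key Value Measures: market share (versus potential market share) and the gap between the customer’s desired experience and their current experience.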
Third Value Area – Time to Market
What is the purpose?
The organization determines how quickly it can deliver something new to its customers or stakeholders.
How does an organization identify it?
EBM frames this value as understanding how the organization does things, how the organization learns, and how quickly the organization applies what it has learned.
Potential examples:
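- A few illustrative measures, drawn from the EBM guide’s Key Value Measures: lead time, cycle time, release frequency, and mean time to repair.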
Fourth Value Area – Ability to Innovate
What is the purpose?
The organization determines how good it is at developing new ideas that might deliver value to meet customer needs.
How does an organization identify it?
EBM frames this value as the organization’s ability to innovate, surfaced by looking for blockers and dependencies and addressing them. What are the “gotchas” that prevent customers from being happy with the delivered solution?
Potential examples:
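- A few illustrative measures, drawn from the EBM guide’s Key Value Measures: innovation rate, defect trends, technical debt, and production incident trends.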
Gathering the information for EBM’s four value areas provides the organization with a baseline and is a great first step. However, the biggest secret to success that Scrum’s Evidence-Based Management guidance offers is to be intentional, taking small steps of change to move the needle in each of the four value areas. EBM supports using the PDCA model popularized by W. Edwards Deming to accomplish this.
(Figure: the PDSA cycle. Source: https://deming.org/explore/pdsa/)
Deming’s approach is built on:
Plan: What is the organization trying to do? (What is the problem they are trying to solve?)
Do: What intentional, time-boxed steps will the organization take to solve it?
Check: What are the outcomes?
Act: If the outcomes are great, then continue with that process. If the outcomes are not as expected, what changes does the organization make?
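To make the loop concrete, here is a minimal sketch of a PDCA-style check for a single value measure. It is an illustration only; the function, metric, and numbers are hypothetical rather than part of the EBM guidance.

```python
# A minimal PDCA-style check for one value measure (illustrative sketch;
# the metric and numbers are hypothetical).

def run_pdca_cycle(baseline: float, target: float, experiment) -> dict:
    """Run one Plan-Do-Check-Act pass for a single measure."""
    # Plan: frame the problem as the gap between baseline and target.
    gap = target - baseline

    # Do: run one intentional, time-boxed change and measure the outcome.
    outcome = experiment()

    # Check: did the measure move toward the target?
    improved = abs(target - outcome) < abs(gap)

    # Act: keep the change if it helped; otherwise adapt and try again.
    return {
        "outcome": outcome,
        "improved": improved,
        "next_step": "keep the change" if improved else "adjust and re-run",
    }

# Example: customer satisfaction on a 0-100 scale; the lambda stands in
# for a real measurement taken after the experiment.
print(run_pdca_cycle(baseline=62.0, target=75.0, experiment=lambda: 68.0))
# {'outcome': 68.0, 'improved': True, 'next_step': 'keep the change'}
```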
Scrum’s EBM guidance has been applied in various industries.
In one great case study, a real-estate software provider shared that after one year of using EBM, the company had its largest revenue growth in over 10 years. Its success was also reflected in its EBITDA and Net Promoter Score.
(Figure: case study results. Source: Scrum.org, 2018)
This case study is only one example. Has your organization used this approach? If so, what were your results?
Appendix:
The sample below shares an approach that might be synthesized from Scrum’s Evidence-Based Management guidance.
Over the past several years, I’ve worked with many organizations as an Agile Coach and Scrum Master. Through this experience, I’ve noticed there is often misunderstanding at varying levels within the organization around the difference between Agile Adoption and Agile Transformation.
My goal through this blog will be to share my thoughts around Adoption and Transformation and to articulate the basic differences.
First, let me start by defining what I mean by Adoption and Transformation. We find that many organizations think they are Agile when they adopt different Agile practices and begin to check those boxes daily, like having a Daily Standup, working in Sprints, or creating a Kanban board. Organizations may see some benefit in the early stages of a Transformation when Adoption is taking hold; however, Adoption is just one of the factors that will help a Transformation be successful. A Transformation is about more than adopting a methodology and checking a box. A Transformation changes how the entire organization behaves; it changes the organization’s culture. This change may include aligning along value streams to create virtual teams focused on the same outcome delivery. It should also involve getting business partners to engage with the IT delivery teams to provide regular input and feedback on the work. And Transformations may see new ways of budgeting, around product delivery rather than project delivery. The bottom line is that an Agile Transformation is about more than having teams adopt an Agile method.
Before we dive deeper into the differences between Agile Adoption and Transformation, let me draw your attention to the 15th Annual State of Agile Report on company experience in practicing Agile.
The Annual State of Agile Report is the longest continuous annual survey of Agile techniques and practices as identified by over 1300 global respondents.
(Figure: respondents’ experience practicing Agile. Source: https://digital.ai/resource-center/analyst-reports/state-of-agile-report)
Most of the respondents report their company has been practicing Agile. Quite clearly, Agile is the new buzzword, and corporations want to ensure that they don’t miss the bus. Many think it’s a plug-and-play kind of solution and that things will improve overnight. It IS NOT!
But what really is Agile Adoption, and how is it different from Agile Transformation?
In simple terms, Agile Adoption is changing the way a team does the work, whereas an Agile Transformation deals with changing the way an organization gets the work done. It’s about Doing Agile vs. Being Agile.
Sound interesting? Let’s look at the five key differences as illustrated by Anthony Mersino.
- Speed of Change
Adoption can be quick and even measured in days or weeks. After a day or two of training, teams can agree on a tool, set up their board and events, and start following an Agile method like Scrum. I would refer to it as kind of a jumpstart. Transformations, on the other hand, can only be measured in years since the goal is about continuous improvement and cultural changes. In my previous blog, I defined the different levels and timelines as to when an organization can be transformed. While organizations might see Agile Adoption happen quickly, culture change across the organization is needed for a Transformation. And without question, culture change takes longer to come to fruition than team Adoption.
- Planning Timeframe
Agile Adoptions can be short, as the speed of change is quick compared to Transformations. Projects are temporary; thus, teams on projects can switch from Waterfall to Agile methodologies and be seen as adopting Agile. The planning timeframe for getting teams to adopt Agile can be short since we are simply changing how people work. Transformations, however, require long-standing, stable Agile teams that take time to build. As you plan your Agile Transformation, the timeframe is much longer than that of a simple Adoption, and it should include learning opportunities for everyone in the organization around the Agile mindset and how Agile organizations work differently. Change Management plays heavily in a Transformation plan.
- Productivity Gain
Research shows gains of between 20% and 100% when it comes to Agile Adoption. In other words, just by teams following simple steps such as the PO setting priorities, focusing on a prioritized backlog, and teams working together cross-functionally, an organization will see gains in productivity. However, in a Transformation, one of the most significant benefits comes from developing T-shaped skills to become a cross-functional team. Transformation is more about empowering employees by encouraging them to be creative, understanding and accepting risks, and flattening management layers to allow for more transparency. As an organization transforms, it decentralizes decision-making, advocates for innovation, focuses on team outcomes over individual performance, and engages business partners more readily, to name a few, and thus, as a whole, may experience productivity gains approaching 300%.
- Org Structure Change
Minimal-to-no structural change is required while a team adopts Agile methodologies. A team of employees from different functional areas can come together to complete a project and may even move back to previous projects post-completion. While together, they practice and adopt Agile methods. However, teams may still work in silos, which can lead to significant inefficiencies. Team members reporting to different managers and having multiple hierarchies within a team can even delay decision-making.
Agile Transformation can have a significant impact on the organization. In a Transformation, the focus is on shifting from functional silos to more long-standing cross-functional stable teams, thereby reducing inefficiencies. While the reporting structure can remain matrixed, the team bond comes first in a Transformation. People managers in a Transformation shift their focus to employee enablement and engagement and less on directing daily work.
- Change in Culture
Have you ever heard the saying, “Culture eats strategy for breakfast”? Because Adoption focuses on changing the way the team accomplishes work, any culture change may be confined to that group. One or more teams adopting Agile can lead to a Transformation, but on its own it is unlikely to impact the broader culture significantly. When an organization sees the value from a team’s Adoption of Agile principles and practices, it may pave the way for a greater Transformation. I recommend engaging with a change management expert to help the organization’s culture change. This culture change is key to a Transformation and will bring customer satisfaction and respect for people.
In summary, both Agile Adoption and Transformations will bring value to your organization.
It all depends on the organization as to what path they follow; there isn’t a right or wrong here. I’m a strong believer that anything, when done the right way, will yield results. It’s more about the process and believing in the process. What I think we can see here is that while organizations can derive value from an Agile Adoption, the true benefits come from the longer Agile Transformation.
What is the bottom line? Adoption is fast and easy, while Transformations take longer and require more planning and culture change, but without question, the benefits are worth it.
PI planning is considered the heartbeat of Scaled Programs. It is a high-visibility event that takes up considerable resources (and yes, people too). It is critical for organizations to realize the value of PI planning; otherwise, leadership tends to lose patience and gives up on the approach, leading to the organization sliding back on its SAFe Agile journey. There are many reasons PI planning can fall short of achieving its intended outcome. For quick readability, I will limit the scope of this post to the following 5 reasons, which I have found to have the most adverse effect.
- Insufficient preparation
- Inviting only a subset (team leads)
- Lack of focus on dependencies
- Risks not addressed effectively
- Not leaving a buffer
Let’s do a deeper dive and try to understand how each of the above anti-patterns in the SAFe implementation can impact PI planning.
Insufficient Preparation: By preparation, I don’t mean the event logistics and the preparation that goes along with them. The logistics are very important, but I am focusing on the content side. Often, the Product Owner and team members are entirely focused on current PI work, putting out fires, and scrambling to deliver on the PI commitments, so much so that even thinking about a future PI seems like an ineffective use of time. When that happens, teams often learn about the PI scope almost on the day of PI planning, or a few days before, which is not enough time to digest the information and provide meaningful input. PI planning should be an event where people from different teams/programs come together and collaboratively create a plan for the PI. To do that, participants need to know what they are preparing for and have time to analyze it, so that when they come to the table to plan, the discussions are not impeded by questions that require analysis. Specifically, this means teams should know well in advance which top 10 features to prioritize, what the acceptance criteria are, and which teams will be involved in delivering those features. This gives the involved teams a runway to analyze the features, iron out any unknowns, and come to the table ready to have a discussion that leads to a realistic plan.
Inviting only a subset: As I said at the beginning, PI planning is a high-cost event. Many leaders see it as a waste of resources and choose to include only team leads/SMs/POs/Tech Leads/Architects and managers in the planning. This is more common than you might think. It might seem obvious why this is not a good practice, but let’s do a deep dive to make sure we are on the same page. The underlying assumption behind inviting a subset of people is that the invitees are experts in their field and can analyze, estimate, plan, and commit to the work with high accuracy. What’s missing from that assumption is that they are committing on behalf of someone else (the teams that are actually going to perform the work) with entirely different skill levels and understanding of the systems, organizational processes, and people. The big gap that emerges from this approach is that work analyzed by a subset of folks tends not to account for quite a few complexities in the implementation, and the estimate is often based on the expert’s own assessment of effort. Teams do not feel ownership of the work, because they didn’t plan for or commit to it, and eventually the delivery turns into a constant battle of sticking to the plan and putting out fires.
Lack of focus on dependencies: The primary focus of PI planning should be the coordination and collaboration between teams/programs. Effectively identifying the dependencies that exist between teams, and collaborating properly to resolve them, is a major part of achieving a plan with higher accuracy. However, teams sometimes don’t prioritize dependency management highly enough and focus more on doing team-level planning: writing stories, estimating, adding them to the backlog, and slotting them into sprints. The dependencies are communicated, but the upstream and downstream teams don’t have enough time to actually analyze and assess each dependency and make commitments with confidence. The result is a PI plan with dependencies communicated to the respective teams but not fully committed to. Or even worse, some dependencies are missed, only to be identified later in the PI when the team takes up the work. A mature ART prioritizes dependencies and uses a shift-left approach to begin conversations, capture dependencies, and give teams ample time to analyze and plan for meeting them.
Risks not addressed effectively: During PI planning, the primary goal of program leadership should be to actively seek out and resolve risks. I will acknowledge that most leaders do try to resolve risks, but when teams bring up risks that require tough decisions, a change in prioritization, or a hard conversation with a stakeholder, program leadership is not swift to act and make it happen. The risk gets “owned” by someone who will have follow-up action items to set up meetings to talk about it. This might seem like the right approach, but it ends up hurting the teams that are spending so much time and effort to come up with a reasonable plan for the PI. There is nothing wrong with “owning” risks and acting on them in due time; however, during PI planning, time is of the essence. A risk that is not resolved right away can lead to plans based on assumptions. If the resolution, which happens at a later date, turns out to be different from the team’s original assumption, the plan has to change, which usually ends up putting more work on the team’s plate. The goal should be to completely “resolve” as many risks as possible during planning, and not to avoid the tough conversations and decisions necessary to make that happen.
Not leaving a buffer: We all know that trying to maximize utilization is not a good practice. Most leaders encourage teams to leave a buffer during the planning context on the first day. But in practice, most teams have more in the backlog than they can accomplish in a PI. During the two days of planning, it is usually a battle to fit in as much work as possible to make the stakeholders happy. For programs that are just starting to use SAFe, even the IP sprint gets eaten up by planned feature development work. One of the root causes for this is a false sense of accuracy in the plan. Teams tend to forget that this is a plan for about 5 to 6 sprints spanning a quarter. A one-sprint plan can be expected to have a higher level of accuracy because of the shorter timebox, smaller scope, and more refined stories. However, when a program of more than 50 people (sometimes close to 150 people) plans a scope full of interdependencies, expecting the same level of accuracy is a recipe for failure. To make sure the plan is realistic, teams should leave the needed buffer so they can adjust course when changes occur.
As I mentioned at the start of this post, there are many ways a high-stakes event like PI planning can fail to achieve the intended outcomes. These are just the ones I have experienced first-hand. I would love to know your thoughts and hear about some of the anti-patterns that affected your PI Planning and how you went about addressing them.
Are you new to Agile testing?
I’ve been reading Agile Testing, by Lisa Crispin and Janet Gregory. If you are new to Agile testing, this book is for you. It provides a comprehensive guide for any organization moving from waterfall to Agile testing practices. The “Key Success Factors” outlined in the book are important when implementing Agile testing practices, and I would like to share them with you.
Success Factor 1: Use the Whole Team Approach
Success Factor 2: Adopt an Agile Testing Mind-Set
Success Factor 3: Automate Regression Testing
Success Factor 4: Provide and Obtain Feedback
Success Factor 5: Build a Foundation of Core Practices
Success Factor 6: Collaborate with Customers
Success Factor 7: Look at the Big Picture
Success Factor 1: Use the Whole Team Approach
In Agile, we like to take a team approach to testing. Developers are just as responsible for the quality of a product as any tester; they embrace practices like Test-Driven Development (TDD) and ensure good Unit testing is in place before moving code to test. Agile Testers participate in the refinement process, helping Product Owners write better User Stories by asking powerful questions and adding test scenarios to stories. Everyone works together to build quality into the product development process. This approach is very different from a waterfall environment where the QA/Test team is responsible for ensuring quality software/products are delivered.
Success Factor 2: Adopt an Agile Testing Mind-Set
Adopting Agile starts with changing how we think about the work and embracing Agile Values and Principles. In addition to the Agile Manifesto’s 12 Principles, Lisa and Janet define 10 Principles for Agile Testing. Testers adopt and demonstrate an Agile testing mindset by keeping these principles top of mind. They ask, “How can we do a better job of delivering real value?”
10 Principles for Agile Testing
- Provide Continuous Feedback
- Deliver Value to the Customer
- Enable Face-to-Face Communication
- Have Courage
- Keep it Simple
- Practice Continuous Improvement
- Respond to Change
- Self-Organize
- Focus on People
- Enjoy
Success Factor 3: Automate Regression Testing
Automate tests where you can and continuously improve. As seen in the Agile Testing Quadrants, automation is an essential part of the process. If you’re not automating regression tests, you’re wasting valuable time on manual testing, time that could be spent more beneficially elsewhere. Test automation is a team effort; start small and experiment.
(Figure: the Agile Testing Quadrants. Source: https://www.cigniti.com/blog/agile-test-automation-and-agile-quadrants/)
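To ground this, here is a small pytest-style sketch of what an automated regression suite can look like. The pricing function is hypothetical (not an example from the book); the point is that these checks run on every build, so a regression surfaces immediately.

```python
# Hypothetical regression suite for a small pricing function.
# Running `pytest` on every build surfaces regressions immediately.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_standard_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```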
Success Factor 4: Provide and Obtain Feedback
Testers provide a lot of feedback, from the beginning of refinement through to acceptance testing. But keep in mind – feedback is a two-way street, and testers should be encouraged to ask for feedback of their own. There are two groups testers should look to for feedback. The first is the developers. Ask them for feedback about the test cases you are writing. Test cases should inform development, so they need to make sense to the developers. The second place to get feedback is the PO or customer. Ask them whether your tests cover the acceptance criteria satisfactorily, and confirm you’re meeting their expectations around quality.
Success Factor 5: Build a Foundation of Core Practices
The following core practices are integral to Agile development.
- Continuous Integration: One of the first priorities is to have automated builds that happen at least daily.
- Test Environments: Every team needs a proper test environment to deploy to where all automated and manual tests can be run.
- Manage Technical Debt: Don’t let technical debt get away from you; instead, make paying it down part of every iteration.
- Working Incrementally: Don’t be tempted to take on large stories; instead, break the work down into small stories and test incrementally.
- Coding and Testing Are Part of One Process: Testers write tests, and developers write code to make the tests pass (a short sketch follows this list).
- Synergy between Practices: Incorporating any one practice will get you started, but a combination of these practices working together is needed to gain the full advantage of Agile development.
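To illustrate the “coding and testing are part of one process” item above, here is a minimal test-first sketch (a hypothetical example, not one from the book): the test is written before the code and fails, then just enough code is written to make it pass.

```python
# Test-first sketch: the test below is written before the code exists
# (red), then the simplest implementation is added to make it pass (green).

def test_initials_from_full_name():
    assert initials("Lisa Crispin") == "LC"
    assert initials("janet gregory") == "JG"

def initials(full_name: str) -> str:
    """Return the uppercase initials of each word in a name."""
    return "".join(word[0].upper() for word in full_name.split())
```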
Success Factor 6: Collaborate with Customers
Collaboration is key, and it isn’t just with the developers. Collaboration with Product Owners and customers helps clarify the acceptance criteria. Testers make good collaborators as they understand the business domain and can speak technically. The “Power of Three” is an important concept – developers, testers, and a business representative form a powerful triad. When we work to understand what our customers value, we include them to answer our questions and clear up misunderstandings; then, we are truly collaborating to deliver value.
Success Factor 7: Look at the Big Picture
The big picture is important for the entire team. Developers may focus on implementation details, while testers and Product Owners hold the view into the big picture. Aside from sharing the big picture with the team, the four quadrants can help you guide development so developers don’t miss anything.
In addition, you’ll want to ensure your test environments are as similar as possible to production. Test with production-like data to make the test as close to the real world as possible. Help developers take a step back and see the big picture.
Summary
At the end of the day, developers and testers form a strong partnership. They both have their area of expertise. However, the entire team is the first line of defense against poor quality. The focus is not on how many defects are found; the focus is on delivering real value after each iteration.
Have you heard of OKRs? Is your organization considering adopting OKRs? If so, this post is for you.
OKR stands for Objectives and Key Results. Andy Grove created them while at Intel, and their use has been growing ever since. The Objective is the “what” we will achieve, and the Key Result is the benchmark we use to measure how we are doing.
OKRs have been working for organizations like Google and Intel for years. Implementing them for your organization can help drive focus and alignment around working on the right things. While anyone can read Measure What Matters by John Doerr to learn how to write OKRs, it is by following a tried-and-true implementation plan that OKRs truly help organizations achieve the desired focus.
According to Scaled OKRs, Inc., the following key steps should be included:
- Build the team
- Communicate
- Train
- Execute the OKR cycle
- Calibrate Regularly
- Continue to the next OKR cycle
Step 1: Build the team
Build a team to lead the implementation of OKRs. Identify a sponsor and champion. These leaders should understand how keeping OKRs visible throughout the organization will lead to success. As with any change, practicing good change management is important. Introducing OKRs is no exception. Be sure to include a change manager in your team. In addition, your team should include someone familiar with how to write OKRs to guide and mentor those new to writing them.
Step 2: Communicate
Once you have identified the team that will lead and support implementing OKRs, the next step is all about change management communications. Your first communication should occur about two months before roll-out. In your communication, be sure to answer the questions: Why OKRs? And why now? Aside from creating a sense of urgency to adopt, create a vision for the change and share it as well. Have the communication come from Leadership to show the importance of implementing OKRs. Our change manager follows the Prosci ADKAR Model. The next message should come about one month prior to the roll-out and should be the formal kick-off announcement of the OKR process, followed shortly by sharing the company-level OKRs and the training workshop schedule.
Step 3: Train
Next, you’ll need to do some training. While OKRs may seem easy to write, putting pen to paper and coming up with the right OKRs can be a daunting task. A training workshop with a writing exercise will help attendees get oriented around what makes a good OKR. Here they will learn that the Objective is qualitative, and the Key Results are quantitative. You may need a train-the-trainer session to enable others to assist teams in writing their OKRs. I like to share John Doerr’s “superpowers” of OKRs as a reminder. These include:
- Focus & commit to priorities
- Align & connect teamwork
- Track for accountability
- Stretch for amazing
Step 4: Execute the OKR Cycle
With company OKRs in hand and training underway, the next step is to start the OKR Cycle.
In this cycle, teams write and share their OKRs to ensure vertical and horizontal alignment.
The Enterprise Context grounds teams in the highest-level OKRs, which were developed by leadership. This gives everyone something to tie their OKRs to and sets the direction for the organization, allowing teams to co-create their localized OKRs.
The OKR Cycle looks like this:
The next step is for individual teams to write their localized OKRs. One of my favorite tips for writing OKRs is that you should be able to read them in the following format:
We will achieve (objective), as measured by (Key result).
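For example (a hypothetical OKR): We will achieve a best-in-class customer experience, as measured by raising our Net Promoter Score from 30 to 45 this quarter.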
Before everyone starts writing OKRs, you may want to think about how they will be kept. If you haven’t picked a tool to manage your OKRs, writing them in a shareable manner can become difficult. It’s easy enough to share across one or two verticals and even one or two horizontals. However, the wider your OKR implementation, the more imperative a tool becomes.
Once the teams have created their OKRs, it’s time for them to develop their “action plan,” or backlog of epics they will use to accomplish their OKRs. As part of the action plan, identify the key result owners and the frequency of review huddles. You’ll want a regular cadence of review; this can be done at Scrum events, like the Sprint Review. The worst thing that can happen is to write OKRs and then forget about them. Finally, identify what scoring mechanism will be used.
Around the second month of the OKR cycle, check-ins and scoring occur. Many organizations follow Google’s lead when it comes to scoring. In their system, 0 is a failure, and 1 is a success. Here is a view of what the scores look like:
(Figure 1: OKR grading scale. Source: https://www.whatmatters.com/faqs/how-to-grade-okrs)
You can see from this scale that a 0.7 is green; it is considered a success. This is especially true for OKRs that are ambitious and represent a real stretch for the team. A team that consistently scores a 1.0 may not be creating ambitious enough OKRs.
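As a rough sketch of the grading math (the key results and numbers below are hypothetical; the linear 0-to-1 scoring mirrors the scale above):

```python
# Hypothetical sketch of 0.0-1.0 OKR grading: progress on each key result
# is scored linearly from start toward target, then averaged per objective.

def grade_key_result(start: float, target: float, actual: float) -> float:
    """Score progress from start toward target, clamped to [0.0, 1.0]."""
    if target == start:
        raise ValueError("target must differ from start")
    progress = (actual - start) / (target - start)
    return max(0.0, min(1.0, round(progress, 2)))

def grade_objective(key_result_scores: list[float]) -> float:
    """A common convention: an objective's grade averages its key results."""
    return round(sum(key_result_scores) / len(key_result_scores), 2)

# KR1: grow monthly active users from 10,000 to 20,000; we reached 17,000.
kr1 = grade_key_result(start=10_000, target=20_000, actual=17_000)  # 0.7 -> green
# KR2: raise NPS from 50 to 80; we reached 65.
kr2 = grade_key_result(start=50, target=80, actual=65)              # 0.5 -> yellow
print(grade_objective([kr1, kr2]))                                  # 0.6
```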
In the timeline above, we allocated four weeks to work on the first set of OKRs and two weeks for subsequent quarters. The first draft of OKRs tends to take the longest. Be sure and allocate plenty of time for creating your first set of OKRs.
One last comment about this cycle: tracking and sharing progress is an ongoing effort. All-hands meetings are great places to talk about progress. This will keep OKRs from being just another goal-setting activity and keep them at the forefront of everyone’s work plans.
Step 5: Calibration
Calibration reviews should happen quarterly. This is where you gather the data and identify whether anything has changed before moving on to creating the next quarter’s OKRs. Calibration is a great time to do a retrospective on the OKRs. It is also a good time to ask questions like: have the company-level OKRs changed? Before you start a new quarter of OKRs, calibrate where you ended the current quarter and decide what, if anything, you need to update or change before writing new OKRs for the upcoming quarter.
Step 6: Continue to the next OKR cycle
Repeat Steps 4 and 5 as part of your ongoing OKR program.
As you can see, rolling out a successful OKR program takes a bit of effort, but it is well worth it. When you incorporate your OKR roll-out into a program that is well planned, you are sure to get the entire organization on board. Once in place, you can use your OKRs to measure the outcomes the organization is striving to achieve, and everyone will be aligned. You can use your OKRs to determine the right things to be working on and say no to things that don’t align with your OKRs. With a good tool, everyone can see how their work aligns with the big picture. Regular check-ins give you the opportunity to see ongoing progress or make course corrections. Most importantly, your organization will be on the path to measuring progress towards desirable outcomes. If you have questions or want to know more, just reach out to me: jbrace@ccpace.com
As a Senior Scrum Master, I’ve worked with many organizations, and leaders frequently ask me one common question: how long will it take for my team to be Agile? The answer is never easy; developing an Agile mindset can be complex. From senior executives to developers, everyone in the organization must be open-minded and willing to change. Of course, there will always be resistance, but this can be handled through open dialogue and continual conversation.
I’d like to walk through a staged representation of the maturity levels each team goes through in becoming Agile. The information you can gather from the maturity levels is an important metric that organizations are intrigued and excited to see because it shows progress in their Agile journey. I have used the maturity levels extensively with teams, and it’s always great to show progress; but remember, it’s just a tool.
So, with that being said, let’s get started. I will review each maturity level a team goes through, and along the way, I’ll share my perspective and lessons learned.
Level 1 is a Learning Phase. As teams get started on their Agile journey, it’s essential to introduce and establish an Agile mindset. From my experience, an Agile Boot Camp is a great way to create this awareness and introduce some initial concepts. It’s also an opportunity to see the team composition, make introductions, and begin a new way of working together. While Level 1 focuses on awareness, you can’t short-change the importance of this initial step; it sets the foundation for the beginning of the journey and the team’s future success.
To reach Level 2, teams must have a solid understanding of what it means to be Agile, and they must also recognize there is a difference between Agile and Scrum/Kanban. Frequently, I hear teams using words like Agile and Scrum interchangeably, and it’s important everyone understands that Agile is a mindset and set of values, whereas Scrum and Kanban are different frameworks for putting Agile into practice. The Agile ceremonies or events should be scheduled with a set agenda, and teams should practice story-card-based requirement gathering.
Once the team has a firm grip on what it means to be Agile – they’re practicing the events, using terms correctly, and understanding the different frameworks – it’s time to move to Level 3, where the focus is on proper planning, practicing trade-offs, backlog management, and inspect/adapt principles. Better planning is your key to executing a sprint well. Having a solid backlog and adopting inspect/adapt principles will play a crucial part in a team’s success. The team should be encouraged to start practicing trade-offs.
As a Scrum Master and coach, I continually talk with my teams about embracing change. As they transition from Waterfall to Agile, the team should practice the “yes, I can, but…” phrase. For example, the Product Owner issues a new requirement, but the team already has a full plate of work. At this point, the team should be willing to practice a trade-off, accepting the requirement while being open to a conversation with the Product Owner about reprioritizing existing items. Through this process, we ensure change is embraced while practicing trade-offs so the team doesn’t burn out.
As we move up the ladder, Level 4 is about the team starting to self-organize. In the planning session, there is a discussion about the sprint goal and stories, and the team should be able to self-organize and pick up the tasks that will help them achieve the sprint goal. Remember – it’s the team’s commitment, not individual commitments, that matters. The team should also start measuring their process and looking for ways to improve, such as introducing some automation in the form of testing.
Level 5 focuses on improving T-shaped skills, which can be attained by having a buddy-pair system within the team. Through this process, knowledge is gained and shared across teammates, ensuring the team becomes cross-functional as time progresses. Teams will now be experienced enough to explore extreme automation techniques like RPA and AI, develop CI/CD pipelines, and eventually work in a DevOps model.
In conclusion, an Agile transformation begins with a people mindset. While we looked at the Agile transformation maturity levels, it’s important to recognize the effort put in by all players within an organization. From developers to Product Owners to Agile sponsors, everyone plays a role in achieving a successful Agile transformation. As your organization moves through the different levels, remember, it’s going to be a bumpy ride. There will be forward momentum as well as setbacks; it takes time. But as a Scrum Master who has worked with teams on their transformation journey, I can say that moving through the maturity levels is a process, and the result is worth it.
From the title of this post, it might seem that we are going to beat up on the ‘status reporting’ side. Let me clarify up front: both are equally important, and both convey critical pieces of information to their audience.
What I want to highlight in this post is how inefficiencies arise when one replaces the other. In organizations using Agile, adapting to a changing landscape is a central tenet of the way of working. How people communicate is a significant factor in the success or failure of an organization’s ability to adapt to change.
Before we explore how organizations struggle with effectively using the two ways to communicate, let’s first align on what they are.
Status reporting: The primary intent of this interaction is for one stakeholder to provide information to other stakeholders about the current state of progress. It is a one-way conversation with the information primarily flowing from the status update provider to the listener. There might be some interaction for clarification or instruction, but it is not built into the central idea of the interaction.
Inspect and Adapt: The primary purpose of this interaction is for the stakeholders to examine the progress towards the end goal, identify risks, acknowledge impediments and figure out actionable takeaways that will resolve or mitigate any challenges towards progress. Most Agile ceremonies share this primary intent and should be facilitated as such to achieve the aforementioned outcome.
What really happens: No matter how mature an organization is in its Agile journey, organizational factors such as psychological safety and human psychology influence how people contribute to the ceremonies. Let’s take a closer look at a couple of factors:
- FOMO: Fear of Missing Out. When there are no efficient ways to learn the latest status of work, people gravitate towards asking for status-related information during Agile ceremonies. If status is made transparent and highly visible to stakeholders, it greatly reduces the urge to discuss it in sessions that should really be about inspecting and adapting. In Agile organizations, the use of BVIRs (Big Visible Information Radiators) such as Scrum boards and relevant Agile metrics can accomplish this. It is critical to make the BVIRs readily accessible and easy to consume to get better adoption.
- Perception and Recognition: Human behavior is often shaped by how our actions will make us look. A status report lets a person showcase what they have worked on so far and look good in the eyes of the audience. An inspect-and-adapt discussion, by contrast, asks people to state the problems they are or could be facing and to ask for help. It is a natural human tendency to avoid being the bearer of bad news, especially in public forums such as the Agile ceremonies. If people get enough recognition for their work and are made to feel safe opening up and expressing concerns during inspect-and-adapt discussions, the Agile ceremonies can become highly effective.
Tactically speaking, standups and scrum of scrums are typical examples of Agile ceremonies that should be heavily focused on inspect and adapt. What ends up happening on most teams and programs is that the ceremony turns into a status reporting session, and most participants do not get true value out of the interaction.
If you would like to get some tactical advice on how to make these ceremonies more effective, please join us in a free webinar on November 3rd at 12pm EST, where I will discuss the topic in more detail with a fun story about chicken curry, draw the parallels and then do a deep dive into some effective techniques you can implement right now to improve the inspect and adapt aspect in your organization. Click here to register.
Metrics are vital to a team. They give us a starting point, just like choosing “my location” on your GPS. Then metrics tell us where we are headed. They help us know whether our team is winning at continuous improvement. They tell us how our journey is going, and where we might be off course.
The stability of metrics demonstrates that teams are predictable and can forecast future work. Many times, though, it is a combination of metrics that gives us the real picture.
One measure that can demonstrate a team’s stability is velocity. A team made up of dedicated members should have a stable velocity over time, which helps them better determine how much work to commit to in a given sprint. So, measuring velocity over a number of sprints is a great Scrum metric. You might think that a rising velocity is good. However, a rising velocity along with rising escaped defects tells us we haven’t reliably sped up at all. In fact, if escaped defects are rising, we need to fix something else in our process. We may need to slow back down. In contrast, a rising velocity and a decreasing or stable number of escaped defects would be a good team goal.
Another metric for a Scrum team is Committed vs Delivered %. Rather than just focusing on velocity, add a measure of Committed vs Delivered to see how your team is doing at being predictable. Teams that frequently deliver a low percentage of what they committed are committing to too much work in the sprint. If this is happening to your team, it is time to figure out why. Sometimes we will know exactly what has impacted our metrics; other times we may need a retrospective to discuss why. Escaped defects are important to consider here as well: a high Committed vs Delivered percentage is good, but lots of rework is not.
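To make these measures concrete, here is a minimal sketch in Python. The sprint numbers and the simple first-vs-last comparison are illustrative assumptions of mine, not a standard; adapt them to your own team’s data.

```python
# Tracks velocity, Committed vs Delivered %, and escaped defects side by side.
from dataclasses import dataclass

@dataclass
class Sprint:
    committed_points: int
    delivered_points: int  # the sprint's velocity
    escaped_defects: int   # defects that escaped to production

def committed_vs_delivered_pct(s: Sprint) -> float:
    return 100.0 * s.delivered_points / s.committed_points if s.committed_points else 0.0

def review(history: list) -> None:
    for i, s in enumerate(history, start=1):
        print(f"Sprint {i}: velocity={s.delivered_points}, "
              f"committed vs delivered={committed_vs_delivered_pct(s):.0f}%, "
              f"escaped defects={s.escaped_defects}")
    # A rising velocity only counts as real improvement if escaped defects
    # are stable or falling; flag the combination that signals false speed.
    if (history[-1].delivered_points > history[0].delivered_points
            and history[-1].escaped_defects > history[0].escaped_defects):
        print("Warning: velocity and escaped defects are both rising -- inspect quality.")

review([Sprint(30, 24, 1), Sprint(32, 29, 2), Sprint(35, 34, 5)])
```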
While Velocity and Committed vs Delivered % are end-of-sprint metrics, a team’s burndown gives us a view into progress during the sprint. It’s important to look at a team’s burndown every day to assess whether the team is likely to complete all the work they committed to. The burndown gives us a point-in-time view of the work remaining within the sprint. If the remaining work is far above the ideal burndown line, we may need to have a conversation with the Product Owner about stories that won’t be completed. On the other hand, if the burndown is far below the ideal line, we may be able to bring another story into the sprint.
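Here is a small sketch of the ideal-line comparison described above. The 20% bands that trigger a conversation are an assumption for illustration, not a rule:

```python
def ideal_remaining(total_points: float, sprint_days: int, day: int) -> float:
    """Points that would remain on a given day if work burned down linearly."""
    return total_points * (1 - day / sprint_days)

def burndown_check(total_points: float, sprint_days: int, day: int, remaining: float) -> str:
    ideal = ideal_remaining(total_points, sprint_days, day)
    if remaining > ideal * 1.2:   # assumed 20% band above the line
        return "Above the ideal line: talk to the Product Owner about stories at risk."
    if remaining < ideal * 0.8:   # assumed 20% band below the line
        return "Below the ideal line: consider pulling another story into the sprint."
    return "Tracking close to the ideal line."

# Day 5 of a 10-day sprint: 30 of 40 points still remain (ideal would be 20).
print(burndown_check(total_points=40, sprint_days=10, day=5, remaining=30))
```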
Some teams go a step further and measure Cycle Time for stories. Cycle Time starts when a team member starts work on a story and continues until the story is moved all the way to the team’s done column. Cycle Time can help us determine if we are right-sizing our stories. For Scrum, stories are right-sized at just a couple of days of work or less. If Cycle Time for stories is extending well past a couple of days, you may need to break your work down into smaller pieces.
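And a quick sketch of measuring Cycle Time from story timestamps, using the couple-of-days rule of thumb mentioned above; the story keys and dates are made up:

```python
from datetime import datetime

def cycle_time_days(started: datetime, done: datetime) -> float:
    """Elapsed days from starting work on a story to moving it to Done."""
    return (done - started).total_seconds() / 86400

stories = [
    ("PAY-101", datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 2, 16, 0)),
    ("PAY-102", datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 8, 11, 0)),
]
for key, started, done in stories:
    days = cycle_time_days(started, done)
    flag = "  <-- well past a couple of days; consider splitting" if days > 2 else ""
    print(f"{key}: {days:.1f} days{flag}")
```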
Using a combination of metrics and charts can help the Scrum team spot problems and see if their experiments are working. These internal team metrics are just the start and give an even fuller picture when customer outcome metrics are added, but that is a topic for another post. Do not let your team meander along. Give them metrics to see where they are and where they are heading.
If you missed Philippa Fewell’s Meet Up where she speaks about how Agile 2 came about – you are in luck, we have it here! Philippa shares with us how she and a group of 14 other Agilists reflected together over a period of 8 months to create a new set of values and principles called Agile 2. We invite you to listen, learn and enjoy her lively discussion!
According to Deutsche Bank CIO Frederic Veron, “enterprises that wish to reap the potentially rich rewards of getting IT and business line leaders to build software together in agile fashion must also embrace the DevOps model.”[1]
Why is that? It’s simple: DevOps is necessary to scale Agile. DevOps practices are what enable an organization to rapidly deploy changes to many different parts of their product, across many products, on a frequent basis—with confidence.
That last part is key. Companies like Amazon, Google, and Netflix developed DevOps methods so that they could deploy frequently at a massive scale without worrying if they will break something. DevOps is, at its core, a risk management strategy. DevOps practices are what enable you to maintain a complex multi-product ecosystem and make sure that everything works. DevOps substitutes traditional risk management approaches with what the Agile 2 authors call real-time risk management.[2]
You might think that all this is just for software product companies. But today, most organizations operate on a technology platform, and if you do, then DevOps applies. DevOps methods apply to any enterprise that creates and maintains products and services that are defined by digital artifacts.
That includes manufacturers, online commercial services, government agencies that use custom software to provide services to constituents, and pretty much any large commercial, non-profit, and public sector enterprise today.
As JetBlue and Breeze airlines founder David Neeleman said, “we’re a high-tech company that just happens to fly airplanes,”[3] and Capital One Bank’s CIO Rob Alexander said, “We’re a founder-led, 20-year-old technology company.”[4]
Most large businesses today are fundamentally technology companies that direct their efforts toward the markets in which they have expertise, assets, and customer relationships.
DevOps Is Necessary at Scale
Scaling frameworks such as SAFe and DA provide potentially useful patterns for organizing the work of lots of teams. However, DevOps is arguably more important than any framework, because without DevOps methods, scaling is not even possible, and many organizations (Google, Amazon, Netflix…) use DevOps methods at scale without a scaling framework.
If teams cannot deploy their changes without stepping on each other’s work, they will often be waiting, going no faster than the slowest team, and having a very difficult time managing their dependencies. No framework will remedy that if the technical methods for multi-product dependency management and on-demand deployment at scale are not in place. If you are not using DevOps methods, you cannot scale your use of Agile methods.
How Does Agile 2 View DevOps?
DevOps as it is practiced today is technical. When you automate things so that you can make frequent improvements to your production systems without worrying about a mistake, you are using DevOps. But DevOps is not a specific method. It is a philosophy that emerged over time. In practice, it is a broad set of techniques and approaches that reflect that common philosophy.
With the objective of not worrying in mind, you can derive a whole range of techniques to leverage tools that are available today: cloud services, elastic resources, and approaches that include horizontal scaling, monitoring, high-coverage automated tests, and gradual releases.
While DevOps and Agile seem to overlap, especially philosophically, DevOps techniques are highly technical, while the Agile community has not focused on technical methods for a very long time. Thus, DevOps fills a gap, and Agile 2 promotes the idea that Agile and DevOps go best together.
DevOps evangelist Gene Kim has summarized DevOps by his “Three Ways.”[5] One can paraphrase those as follows:
- Systems thinking: always consider the whole rather than just the part.
- Use feedback loops to learn and refine one’s artifacts and processes over time.
- Treat everything as an experiment that you learn from, and adjust accordingly.
The philosophical approaches are very powerful for the DevOps goal of delivering frequent changes with confidence, because (1) a systems view informs you about what might go wrong, (2) feedback loops in the form of tests and automated checks tell you whether you hit the mark or are off, and (3) if you view every action as an experiment, then you are ready to adjust so that you hit the mark next time. In other words, you have created a self-correcting system.
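As a concrete illustration of that self-correcting loop, here is a minimal, hypothetical sketch of a gradual release gated by an automated check. The functions deploy_to_fraction, error_rate, and rollback stand in for hooks into your own deployment and monitoring tooling, and the threshold is invented for illustration:

```python
import time

TRAFFIC_STEPS = [0.01, 0.10, 0.50, 1.00]  # fraction of traffic on the new version
MAX_ERROR_RATE = 0.02                     # illustrative threshold, not a standard

def gradual_release(deploy_to_fraction, error_rate, rollback) -> str:
    """Roll out in steps, watching a feedback signal and self-correcting."""
    for fraction in TRAFFIC_STEPS:
        deploy_to_fraction(fraction)      # the experiment: expose more users
        time.sleep(300)                   # let monitoring gather real-world data
        observed = error_rate()           # the feedback loop: automated check
        if observed > MAX_ERROR_RATE:
            rollback()                    # the correction: undo before harm spreads
            return f"Rolled back at {fraction:.0%} (error rate {observed:.1%})"
    return "Released to 100% of traffic"
```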
Agile 2 takes this further by focusing on the entire value creation flow, beginning with strategy and defining the kinds of leadership that are needed. Agile 2 promotes product design and product development as parallel and integrated activities, with feedback from real users and real-world outcomes wherever possible. This approach embeds Gene Kim’s three DevOps “ways” into the Agile 2 model, unifying Agile 2 and DevOps.
Download this White Paper here!
[1] https://www.cio.com/article/3141577/true-agile-software-development-requires-devops.html
[2] Agile 2: The Next Iteration of Agile, by Cliff Berg et al, pp 205 ff.
[3] https://www.businessinsider.com/breeze-airways-pushing-back-launch-until-2021-what-we-know-2020-7
[4] https://www.youtube.com/watch?v=0E90-ExySb8
[5] https://itrevolution.com/the-three-ways-principles-underpinning-devops/
Everyone says you should use Agile. The call for Agile has reached the CEO level: I myself have heard CEO announcements stating that the organization must use “Agile”—whatever that is; I wonder how many of them actually know.
On the other hand, how many Agile proponents actually understand what Agile is? As I wrote in a recent article, Old Versus New Agile, Agile has changed—and changed a lot. Thus if you bring in “Agile” consultants to help, are they using “old Agile,” or “new Agile”?
Old Agile is arguably very limited, and does not acknowledge the realities of a large organization. What I refer to as “new Agile”—and I believe it is described well by Agile 2—is completely focused on the general problem of agility, and how that plays out in the broad range of situations, including and especially large organizations. Because to do big things—profitable things at scale—you need a very sophisticated model. Agile 2 provides that.
I have seen IT managers make tragic and far-reaching mistakes in their attempt to follow “old” Agile. For example, in more than one case an IT SVP eliminated all roles pertaining to testing. I wrote in an article why that is a tragic error that eventually results in terrible quality issues and actually impedes agility.
Old Agile is not all bad. It broke the grip of rigid approaches being pushed at the time by PMI and the procurement school of thought. In a fast-changing market, custom market-facing software cannot be “procured”: it must be seen as something that evolves over time. Agile made us face that. Some of the ideas that it brought into the forefront were:
- Phase-based (requirements, design, implementation) software development does not usually work.
- Business users often do not know what they want or need.
- It is almost impossible to fully design software up front.
- Documents alone are not effective for communicating things.
- Don’t build something entirely in one go.
- Big teams do not work.
- Don’t micromanage how developers work.
- Don’t trust anything until you see it running.
- Build quality in.
- More effort ≠ better; automate to avoid effort.
- Continuously reflect and improve.
These are all good things, especially if one views them as reminders rather than absolutes. But the Agile community also came to espouse some extreme and ultimately toxic viewpoints—again, especially if one views them as absolutes (which is often the case). I consider these views to be part of “old” Agile. These include:
⚠️ Teams do not need leaders, except to “remove impediments”.
⚠️ Always trust the team.
⚠️ A team must be completely autonomous.
⚠️ Multiple teams will collectively self-organize.
⚠️ Written communication is not important.
⚠️ Everyone must sit together.
⚠️ Most challenges pertain to individual contributor team behavior.
⚠️ Teams can resolve technical issues if leaders merely “get out of the way.”
⚠️ If Agile does not work at scale, it is the organization’s “fault.”
⚠️ Specific technical practices such as pair programming and TDD are always “best.”
In contrast, “new” Agile ideas are markedly different. A tiny sampling of the authors behind these ideas includes Klaus Leopold, Nicole Forsgren, Jeff Dalton, David Marquet, Matthew Skelton, Manuel Pais, Mirco Hering, Mark Schwartz, and Gary Gruver, as well as some “old Agile” authors who have evolved a mature view over time (or had one from the beginning), such as Johanna Rothman and Diana Larsen, along with myself and the 15 members of the Agile 2 team.
You probably see now that the peril of bringing in Agile consultants is that you might not know if they embrace “old” Agile ideas or “new” Agile ideas. But that is not all. “New” Agile includes many additional narratives that are critical for achieving agility at scale. The Agile 2 team attempted to summarize these through its principles, but a very abbreviated summary is as follows. Note that these are considerably at variance with “old” Agile, but are well aligned with the “new” Agile that Agile 2 and many of the above authors advocate:
On leadership:
- The predominant forms of leadership are the biggest determinants of success.
- Someone usually needs to coordinate things, and be the organizer.
- On any team, one wants a “missionary, not mercenary”—someone who values the organization’s success first and foremost.
- There are many forms of leadership: team focused, advocate focused, technically focused, and maybe others; as well as individual leadership.
- The organization needs to explicitly focus on encouraging benign and effective forms of leadership, and take steps to avoid giving the wrong people authority—avoiding people who “seem like leaders,” and instead selecting (actively or passively) those who are the “missionaries” and the helpers.
- Leadership is needed at every level of an organization, and the same principles apply.
- Leaders of tech-focused organizations not only need to understand outcomes, but they also need to understand how the work is done, because the “how” is often strategic.
On a systems approach:
- Don’t be extreme, unless the situation is extreme.
- Always think holistically—in terms of the whole system.
- Product design is an essential element, apart from product implementation; yet the two are intertwined.
On products:
- Direct feedback from customers and stakeholders is the only way to measure success.
- Product implementation teams must be partners with business stakeholders—not mere order takers.
On data:
- Data is strategic, and it must not be treated as an afterthought.
On collaboration:
- Collaboration is essential, but so is deep thought. People often need quiet and isolation in order to think deeply.
- People differ in how they work, communicate, and collaborate best. A one-size-fits-all approach is not effective.
- Team autonomy is an essential aspiration; but for a complex endeavor, full autonomy is seldom fully realizable.
- Some people want to be experts. Some people want to be generalists. Some are in between. All are valuable.
- Both teams and individuals matter. Don’t over-emphasize one over the other.
- A team should collectively decide how to approach its work; but then individuals perform the work and interact as they need to.
On transformations:
- Transformations are mostly a learning journey—not a process change.
- Never use a framework as defined: treat it as a source of ideas—not an Agile-by-numbers process.
Conclusion
Agile is not a single theory or approach. There is great diversity of thought within the Agile community. When choosing consultants or an approach for adopting Agile methods, be thoughtful about whom you choose. Ask yourself: are they interested in putting people on the ground to deliver a commodity service, or are they deeply thoughtful about what they do and do they represent mature and effective viewpoints? And will the people they provide be as up to date and astute about the nuances of old versus new Agile ideas as those who have had conversations with you? Because it matters.
Organizations with internal cultures that are aligned with their strategies are far more effective than those without aligned cultures. Decades of data prove this.[1] For example, over the last 50 years, culture specialist Human Synergistics has compiled data on more than 30,000 organizations and it clearly shows strong correlations between specific organizational culture attributes and business performance. Yet it is common for organizations to ignore culture when trying to implement their strategies.
Agile 2 is a more mature version of Agile, and it relies on having a supporting healthy culture. In fact, analysis that Agile 2 Academy has done with Human Synergistics shows that Agile 2 ideas strongly align with what Human Synergistics calls a Constructive culture, which is the most effective kind.
When an organization decides to adopt Agile 2 (or any Agile) methods, it is common to define a set of “practices” that development teams must follow. This is an essential step, but there are some great perils in assuming this approach is enough:
- Many, if not most, practices require people to learn new skills, make new judgments, and behave in new ways. Practices alone are not enough.
- Most of the obstacles to using Agile 2 (or legacy Agile) methods actually exist outside of the development teams. These obstacles are widespread and manifest as management behaviors, lack of supporting systems that Agile teams need, and processes and procedures that make it nearly impossible for teams to operate with agility.
Peril #1 means that people will not be able to execute the practices. They will “go through the motions”—but Agile 2 (agility) is, in its essence, a replacement of step-by-step processes with just-in-time contextual decision-making. If people follow practices and make poor judgments, then the organization will suffer from ongoing bad decisions and poor outcomes. But if the organization’s culture is one that encourages people to seek safety through following procedure, rather than relying on their judgment, then they will not be willing to make judgments: they will copy what others do, and perhaps do the wrong thing.
Peril #2, that most obstacles to agility originate from beyond the teams, is seldom appreciated by organizations beginning an Agile journey. Senior leadership often views Agile as something that development teams and individual contributors do. They don’t realize the extent to which Agile—having agility—relies on having the right support systems in place and the right kinds of leadership supporting the teams.
If the organization has a culture of hands-off leadership, then people who find themselves in a leadership role will not know how to behave when leadership is needed. For example, a common situation is when managers have learned the Agile practice that teams “self organize” but do not realize that that is just a placeholder or reminder. Most teams cannot self-organize well; they need leadership. Self-organization is an aspiration, not a starting point.
The need for leadership is even more acute when one has many teams, and they need to coordinate, and resolve issues such as “How will we design the product? How will we involve real users? How are we going to integrate? How will we manage quality? How will we support our product? How will we agree on branch and merge strategies for the product as a whole?”
When people in a non-Agile organization implement Agile practices, they look for a rule book or procedure to follow, because that is what they are used to, but there isn’t one. If you were to create one, it would not work everywhere, because every Agile decision and judgment is contextual. It always depends; that is what yields agility and makes it possible for people to select the shortest path for each situation.
The above are aspects of the organization’s culture: the ability to discuss issues openly and honestly so that they can be resolved, the willingness to take risks when making a decision, and the patterns of leadership that people have learned. There are many other dimensions of culture that are essential for agility, such as the inclination to learn, the tendency to try things on a small scale before scaling up, and the acceptance of things not going perfectly the first time.
As Peter Drucker said, “Culture eats strategy for breakfast,” and that certainly is true for Agile transformations. If you don’t address your organization’s culture, your agile strategy with its new practices will fail to yield the desired outcomes, and Agile will become a source of problems instead of a driving force for business agility. The good news is that culture can be changed, with the right commitment and the right approach. Agile 2 Academy considers culture improvement to be an important element in business agility. An Agile transformation strategy that includes analyzing and improving your organization’s culture is far more likely to succeed than simply adopting a set of agile practices or frameworks and hoping for the best.
[1] The best-selling book Accelerate documents research that makes this connection in the context of Agile and DevOps.
Last year, we worked with experts from George Mason University to build a COVID screening and tracing platform called Pass2Play. We used this opportunity to implement a Serverless architecture using the AWS cloud.
This video discusses our experience, including our solution goals, high-level design, lessons learned and product outcomes.
It’s specific to our situation, but we’d love to hear about other experiences with the Serverless tools and services offered by AWS, Azure and Google. There are a lot of opinions on Serverless, but there’s no doubt that it’s pushing product developers to rethink their delivery and maintenance processes.
Feel free to leave a comment if we’re missing anything or to share your own experience.
A look at Agile 2’s values and why we need both sides of the coin to create value.
While the values of the original Agile Manifesto have helped shift the way software is developed today, how they should be applied is very much left to the interpretation of the individual applying them. This interpretation can often become dogmatic, focusing solely on the items on the left, sometimes to the total exclusion of those on the right, even though the Manifesto clearly states, “That is, while there is value in the items on the right, we value the items on the left more”.
Agile 2 consists of six values and 43 principles. It comprises a deep and nuanced set of ideas, all of which are supported by the thoughts that led to them. In this article I am going to provide a view into Agile 2’s six values and why we chose the word ‘and’ instead of ‘over’. Agile 2 seeks a balance of the left and the right: both are needed, and both are useful. Agile 2 also seeks to speak to a broader range of activities beyond software alone, including product development in general and, in fact, any collaborative human endeavor.
The Values of Agile 2

Let’s take a look at them one by one.
1. Thoughtfulness and Prescription
Frameworks are good and useful. They often give us a place to start and help us solidify an approach. But they should not be followed blindly, without considering your own context: where you are, and what you can and can’t do within your organization. Many variables are at play in constructing the perfect environment for a framework to succeed, including organizational structure, culture, compliance regulation, and leadership support, to name a few. A practice that is best for one organization may not be best, or even have a chance of succeeding, in another. That is not to say you can’t move towards an ideal, but the ideal very often isn’t the place you can start.
This contextual variability is why thoughtfulness is essential when applying a framework, or any practice or methodology. However, there are cases when one should start with a well-defined practice and follow it precisely. For example, if one is replacing a component of a complex machine, such as a blade on a jet engine, following procedure is often essential for ensuring that it is done right. Lives and safety might depend on following the procedure rather than improvising.
Still, judgment is sometimes required. Surgeons often have procedures for specific types of operations, but they need to be able to improvise, using their own judgment, when things do not proceed as expected; their experience and judgment are key, and the patient’s life might depend on it.
There is a place for prescription; and there is a place for thoughtfulness. Knowing the right balance is a matter of context.
2. Outcomes and Outputs
In the course of product development, outputs are the things that get produced by the development teams. These are usually design artifacts: digital files that define the product and its fabrication (hardware) or deployment (software).
Outputs are important – they help us measure our progress using metrics such as the team’s throughput, number of defects, quality of the software, etc. These are mainly what you might call activity-based measures. But ultimately, we are building products or creating a service for our customers, using Agile to achieve some business outcome. Sure, the customer will care about quality: no one wants a product that is not stable, and the organization may not achieve its revenue target or market share if the product is not released on time. But the product also needs to be what the customer wants and will use. It must be the right product, and someone or some group needs to be accountable for directing that vision to achieve the desired outcome, which is the true measure of success for the organization.
So, outputs matter: they are essential elements of any process; but they are not the end goal. The end goal is the outcome.
3. Individuals and Teams
We often hear there is no ‘I’ in ‘Team’, and much of what is written about Agile concerns the practices of the team. Teams are necessary to accomplish much of what we do, as no one person has all the knowledge or capacity. But Agile often advocates for the team’s preference over the preference of the individual, be it team norms, communication style, or even the team environment. This can be to the detriment of the individual, which can then cascade into the detriment of the team reaching its goals.
Balance is needed so as not to stifle creativity or alienate team members simply because they do not think, learn, or process information in the same way. It is necessary for teams to have norms, but sometimes allowing someone to operate outside the majority’s norms is needed too, to accomplish the team’s goal.
4. Business Understanding and Technical Understanding
We often hear that business is the driver and technology is the enabler: the business is responsible for the “what” and IT is responsible for the “how”. This thinking has led to many structures where development teams are led by product managers who have a great understanding of their domain but very little understanding of how it will be implemented, and vice versa. But knowledge of both is necessary to make optimal decisions.
Technical decisions have financial and future business impact, and business decisions have financial and future architectural impact. Not every person can have a deep understanding of both, but they should not entirely shift the accountability of understanding to someone else. Instead, they should seek to understand as much as possible and collaborate closely.
5. Individual Empowerment and Good Leadership
Self-organizing teams are one of the first things people talk about when discussing Agile. Two of the principles behind the original Agile Manifesto state:
“Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done” and
“The best architectures, requirements, and designs emerge from self-organizing teams.”
These are often translated into “leave the teams alone and don’t tell them what to do”. Yet not all teams are ready for that level of autonomy and authority.
Empowering teams, or individuals for that matter, is a great motivator. It can bring about better outcomes, and it can help build future leaders within the organization. But to empower people without some level of supportive leadership and assessment is to set them adrift. Borrowing from a talk by former Captain David Marquet: teams need clarity of purpose to set direction, and competency, the capability and skills necessary to get the job done. Good leadership will assess the level of oversight and direction that the team needs and help them navigate, while teaching them how to make good decisions on their own.
6. Adaptability and Planning
One of the original values of the Agile Manifesto contains the phrase “…responding to change over following a plan”. I do not think anyone would disagree that planning can be useful, or that adapting the plan would be a good idea if a significant change meant the goal would otherwise not be achieved. The point is that both the act of planning and the ability to adapt that plan are valuable.
Planning in and of itself helps us think through the variables that would have us choose one option, course of action, or timeline over another. It helps us think through the constraints we must work under and the risks we face if we come up against them. Keeping these things in mind with respect to what is occurring while executing the plan gives us useful information we can use to adapt the plan for a better outcome.
Conclusion
Both sides of the coin, heads and tails, are necessary to create value. Agile 2 believes that both sets of values are necessary to achieve agility. Balance is needed, along with consideration of your context when applying practices that might favor one side over the other. I do not believe the Agile Manifesto meant to imply only one side of the coin either; rather, the word ‘over’ has led to misinterpretation and rigidity in how it is applied. I could be wrong in my view and understanding, too. That is why Agile 2 provides interpretation and insight into how its values were derived, so they are not misinterpreted. There is a lot of thought and experience behind them from the original authors. However, we expect Agile 2 to evolve and develop with input from the community. We hope you take some time to review Agile 2 and add to the ideas and content that comprise it.
CC Pace was recently featured in Agile Uprising’s blog series. Agile Uprising is a network focused on the advancement of the Agile mindset and professional networking among leading Agilists. In the blog, CC Pace created a short video highlighting one of our latest projects! Bobby Pantall, CC Pace Lead Technology Consultant, speaks to our experience building an app for a startup company named Twisty Systems: a navigation app aimed at driving enthusiasts. In the video we describe the framework of the Lean Startup methodology and some of the highs and lows in the context of the pandemic and releasing a new app.
Enjoy and please share your thoughts on this project!
In the first installment of our Product Owner Empowerment series, we talked about the three crucial dimensions of knowledge that a Product Owner’s effectiveness is measured on. We looked at how various aspects of empathy influence a Product Owner’s ability to connect with the team, client and organization in our second post. In our third and final post in this series, we are going to look at the impact that psychological safety has on a Product Owner’s success.
Psychological Safety: Psychological safety is a state of mind and a cultural attribute, created and fostered by someone or something other than the subjects themselves. When people in an organization are afraid to make difficult choices and shy away from tough conversations, it is generally due to a lack of psychological safety. Various factors contribute to how psychologically safe people feel in an organization. Empathy at all levels, as discussed in the previous post, plays a big role: if the culture in general is more empathetic, there will be a higher level of psychological safety. However, it is not the only factor. Let’s dig a little deeper into this factor and see how a PO’s success is tied to psychological safety, both for themselves and for the teams they work with.
- Psychological Safety for Teams: In the case of teams, it is usually the leaders who are responsible for ensuring that team members feel psychologically safe to work in challenging situations with often unpredictable outcomes. Agile requires teams to be flexible and collaborative in their approach to address the ‘just-in-time’ nature of the work. Innovation, exploration, and risk-taking are necessities to succeed in such an environment. This comes with the possibility of teams running into occasional failures, which should be treated as learning opportunities. But if the organizational culture is not forgiving of failures, teams will not feel safe enough to take the risks and challenges they may need to in order to optimize value delivery. Leaders should actively address the topic of psychological safety and ensure that they foster a culture that encourages innovation and taking calculated risks.
- Psychological Safety for the Product Owner: The Product Owner is in a unique position: they have tremendous influence on a team’s psychological safety, but are in turn equally affected by the psychological safety afforded to them by their leadership. A Product Owner who is well engaged with the team will need to make decisions regarding priority, scope, and timelines. An empowered Product Owner will be able to make such decisions with appropriate communication, keeping the team working at a sustainable pace. However, we often see organizations where such pivots must go through multiple levels of approval, surrounded by process constraints and red tape. This causes the organization to see change as undesirable, and anyone bringing change is viewed as the bearer of bad news.
Leadership that welcomes change is key to providing psychological safety to the Product Owner and the team. It is not sufficient for leadership to merely say they support and welcome change. They need to look at systemic and cultural aspects of the organization that might be working against this mindset and actively realign them to an Agile way of working. Even where leadership is not openly unsupportive of change, a delay in the ability to pivot can lead to periods of unsustainable workload on the team while the Product Owner is negotiating upwards. To truly support psychological safety, allowing for difficult choices and tough conversations, organizations need to embrace decentralized decision making and empower the Product Owner and teams as much as possible.
Conclusion:
I hope you enjoyed this series. Throughout these three posts, we explored the various factors that impact the ability of a Product Owner to work effectively in their role and serve their purpose. A Product Owner must be empowered not just with business domain knowledge, but also with the delivery and process knowledge needed to operate successfully in their organizational context. Empathy at different levels of the organization and towards the customer, whether internal or external, is crucial to move the product in the right direction. Psychological safety is often an unmeasured, underlying factor that should be proactively managed by leadership and ingrained in the culture and processes of the organization to truly empower Product Owners.
As a final thought, if you’ve enjoyed this series or have additional questions on how to truly empower your Product Owners, join me for a FREE webinar on Product Owner Empowerment. We have a few seats available (on a first-come basis) and invite you to register today.
In the first installment of our Product Owner Empowerment series, we talked about the three crucial dimensions of ‘Knowledge’ that affect a Product Owner’s effectiveness. This post is going to take a deeper dive into the impact Empathy has on a Product Owner’s ability to succeed.
Empathy: Assuming positive intent, empathy comes naturally to most people. However, environmental factors can influence a person’s ability to relate to or connect with another person or team. Let’s explore some aspects of empathy and how they may impact a Product Owner’s success.
- Empathy towards the team(s): To facilitate an empathetic relationship between a Product Owner and the team, the PO must be able to meet the team where they are (literally and figuratively). Getting to know the team members and building rapport requires the Product Owner to interact extensively with the team and proactively work to build those relationships. Organizations should facilitate this by making sure Product Owners are physically located where the team is and are empowered not only to represent the team to the business but also to play the role of protector from external interruptions, so that the team can function effectively. As alluded to above, a good understanding of what it takes to deliver helps tremendously with the ability to stand in the team’s shoes and see things from their perspective.
- Empathy towards the customer(s): It is easy to assume that a Product Owner acting on behalf of the business will automatically have empathy for customers and enough understanding of their needs to adequately represent their interests. However, organizational culture can sometimes skew how a Product Owner prioritizes work. If only the sponsors direct the team’s scope and prioritization, a critical element of customer input is missed. Product Owners should place sufficient emphasis on obtaining customer opinion and feedback to inform the direction of product development.
- Empathy in the Organization: This factor relates to organizational culture. As companies embrace Agile and expect its benefits, an emphasis on being lean begins to form. While being lean is a goal every organization should have, it is important to understand the impact a lean push has on individual teams and team members. A systemic push to be lean, combined with less-than-optimal Agile maturity and the presence of antipatterns, can lead to teams being held to unsustainable delivery expectations. This problem is more common than you would think: most organizations are going through some level of Agile transformation, but leadership’s expectations of benefits realization are often a few steps ahead of where the organization truly is on its Agile journey. Having the right set of expectations, and the empathy necessary to reset them based on continuous learning and feedback, is needed at an organizational level.
Check back next week to see how a Product Owner’s success is tied to psychological safety for themselves and the teams they are working with.
If you are a Product Owner or your Agile team struggles with this role, you won’t want to miss our upcoming webinar on Product Owner Empowerment. This webinar will be held on December 15th and you can register today here! Space is limited and on a first-come basis.
The Product Owner plays a crucial role in the success of a team and subsequently the organization. Most organizations consider the Product Owner to be a person with sufficient business knowledge such that they can communicate the business needs to the teams for implementation. While this is an important qualifier for a Product Owner, other factors must be present to have a truly empowered Product Owner.
The following 3 factors have the biggest impact on a Product Owner’s ability to succeed:
- Knowledge
- Empathy
- Psychological Safety
These factors are not just applicable to the Product Owner, but also the surrounding environment. This includes the organization, processes, culture, teams, the product itself, stakeholders, and customers. Let us dig a little deeper.
In this post, we are going to focus on the first influencer: Knowledge. Next week, we’ll take a deeper dive into the impact Empathy has on a Product Owner’s ability to succeed, and we’ll round out the series with defining Psychological Safety and looking at the effect it has on a Product Owner.
Knowledge: The knowledge aspect is the most obvious and widely analyzed and assessed factor to evaluate the effectiveness of a Product Owner. However, there are several dimensions of knowledge that are required.
- Domain Knowledge: This dimension needs little explaining and is probably the most obvious of all. The Product Owner must have a good understanding of the domain they are working in to effectively lead product development. While it may seem comfortable to assume that Product Owners with sufficient domain knowledge are the industry-wide norm, some Agile antipatterns have led to the emergence of roles such as the proxy Product Owner or the technical Product Owner. The primary culprit for organizations gravitating to this antipattern is a lack of understanding of the Product Owner’s role. Organizations assume that as long as there is a communication path to transfer business needs to the teams, it is fine to have layers between the customer and the team. This creates dysfunction in Agile teams that are supposed to collaborate with the Product Owner in discovering and delivering value, but don’t have an empowered Product Owner who can make decisions and pivot effectively when needed.
- Process Knowledge: A PO’s knowledge and understanding of the Agile approach, its best practices, and its antipatterns usually takes a back seat. To clarify, we are not talking about just having your PO take a 2-day certification class and calling it a day. Formal learning is a necessary part of role development, but the learning should not remain at that minimal level. An effective Product Owner should have lived the process and learned from the challenges, successes, and failures, so they understand what it takes to deliver the product they are helping build.
- Knowledge of the PO role across the organization: We have talked about the knowledge a PO should have, but to have a truly empowered Product Owner, the organization needs to know what to do with such a role. As mentioned before, a Product Owner is different from a mere subject matter expert. To empower a Product Owner to make decisions, organizational stakeholders need to understand how the Product Owner role works in an Agile context. Product Owners are not only the primary source of information on business needs for the team, but also one of the key roles influencing the pace of value delivery.
If the organization doesn’t understand what it takes for an Agile team to receive business needs just in time, iterate on requirements, let design emerge, deliver value, and allow for slack time and innovation, a Product Owner’s job becomes quite difficult, especially if they are still held to traditional expectations from leadership. These expectations may include providing inflexible delivery commitments, obtaining buy-in from a myriad of stakeholders before any scope changes are made, or, worst of all, ensuring maximized utilization of the team. The organization’s stakeholders must be trained in and knowledgeable about the basics of Agile if they are new to this way of working. Additionally, leadership must know the common pitfalls, assumptions, and antipatterns that organizations fall into and actively work to avoid or mitigate them.
If you are a Product Owner or your Agile team struggles with this role, join me for a free webinar on Product Owner Empowerment. This webinar will be held on December 12th and you can register here! Space is limited and on a first-come basis.
Agile 2 is here!
I was fortunate to be included in a group of exceptional Agile leaders and practitioners, led by Clifford Berg, to retrospect on Agile and improve upon what it has become over the last 20 years. Each of us began by citing issues and problems we have encountered over the years, drawing on our unique experiences. Not all of us experienced the same issues, and it was eye-opening to discover what others came up with, because of the diversity of the group in both practice and expertise.
We then discussed why we felt the problems occurred and what could be done to change them. This led us to revisiting the values and principles of the Agile Manifesto and many of the frameworks we use today. While I am vested in many of these, having become certified in them myself and trained others on them as well, I have seen where lack of clarity or difference of interpretation, as well as too much emphasis placed on prescription, leads to less than successful outcomes.
It is this clarity and thoughtfulness that Agile 2 seeks to deliver. It is a set of values and principles based on common problems that we believe will resonate with you, the Agile practitioner. My colleagues and I are proud of Agile 2 and the potential impact we believe it can have on the current state of Agile. Have we gotten it all right? Undoubtedly there is room for debate. Have we missed some valuable principles? Perhaps. And that is why we want Agile 2 to be open to ideas from the community, and there will be an Agile 2 version 1.1. We respect and welcome your input and ideas. We want Agile 2 to be constantly improving on what it is today so that it stays relevant. There will be Agile 2 community forums, and to begin that, there is a LinkedIn group. A book is on the way. The Agile 2 website is at https://agile2.net. Check it out!
In the previous blog, I provided insights on what ZTA is, its core components, why organizations should adopt it, and the threats to it. In this blog, I will go through some common deployment use cases/scenarios for ZTA that use software-defined perimeters and move away from enterprise network-based perimeter security.
Scenario 1: Enterprise using cloud provider to host applications as cloud services and accessed by employees from the enterprise owned network or external private/public untrusted network
In this case, the enterprise has hosted resources or applications in a public cloud, and users want to access them to perform their tasks. This kind of infrastructure helps the organization serve users at geographically dispersed locations who might not connect to the enterprise-owned network but can still work remotely using personal devices or enterprise-owned assets. In such cases, access to enterprise resources can be restricted based on user identity, device identity, device posture/health, time of access, geographic location, and behavioral logs. Based on these risk factors, the enterprise cloud gateway may grant access to resources like the employee email service, calendar, and employee portal, but restrict access to services that hold sensitive data like the H.R. database, finance services, or the account management portal. The Policy Engine/Policy Administrator (PE/PA) will be hosted as a cloud service that provides the decision to the gateway based on a trust score calculated from various sources: the enterprise system agent installed on devices, the CDM system, activity logs, threat intelligence, SIEM, ID management, PKI certificate management, data access policy, and industry compliance. The enterprise local network could host the PE/PA service instead of the cloud provider, but that won’t provide much benefit: the additional round trip to the enterprise network to access cloud-hosted services will impact overall performance.
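To illustrate the kind of decision the PE makes, here is a minimal, hypothetical sketch of a trust-score calculation. The signal names, weights, and thresholds are invented for illustration; in practice they would be driven by the sources listed above:

```python
# Invented service names and risk signals for illustration only.
SENSITIVE_SERVICES = {"hr-database", "finance", "account-management"}

def trust_score(signals: dict) -> float:
    """Combine risk signals into a score between 0 and 1 (weights are assumptions)."""
    score = 0.0
    score += 0.3 if signals.get("mfa_verified") else 0.0
    score += 0.3 if signals.get("device_compliant") else 0.0   # device posture/health
    score += 0.2 if signals.get("known_location") else 0.0
    score += 0.2 if not signals.get("anomalous_behavior") else 0.0
    return score

def grant_access(service: str, signals: dict) -> bool:
    """Sensitive services demand a higher trust score than routine ones."""
    required = 0.9 if service in SENSITIVE_SERVICES else 0.5
    return trust_score(signals) >= required

ctx = {"mfa_verified": True, "device_compliant": True, "known_location": False}
print(grant_access("employee-email", ctx))  # True: routine service, decent score
print(grant_access("hr-database", ctx))     # False: sensitive service needs more trust
```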
Scenario 2: Enterprise using two different cloud providers to host separate cloud services as part of the application and accessed by employees from the enterprise owned network or external private/public untrusted network
The enterprise has broken a monolithic application into separate microservices or components hosted with multiple cloud providers, even though it has its own enterprise network. The web front end can be deployed in Cloud Provider A and communicate directly with the database component hosted in Cloud Provider B, instead of tunneling through the enterprise network. This is basically a server-to-server implementation with software-defined perimeters instead of relying on the enterprise perimeter for security. PEPs are deployed at the access points of the web front end and database components, and they decide whether to grant access to the requested service based on the trust score. The PE and PA can be services hosted in either cloud or with another third-party provider. Enterprise-owned assets that have agents installed on them can request access through the PEPs directly, and the enterprise can still manage resources even when they are hosted outside the enterprise network.
Scenario 3: Enterprise having contractors, visitors and other non-employees that access the enterprise network
In this scenario, the enterprise network hosts applications, databases, IoT devices, and other assets that can be accessed by employees, contractors, visitors, technicians, and guests. Assets like internal applications and sensitive data should be accessible only to employees, while visitors, guests, and technicians should be prevented from reaching them. The technicians who show up when an IoT device like a smart HVAC or lighting system needs fixing still need access to the network or internet, and visitors and guests also need the local network to connect to the internet so they can perform their work. All of this can be achieved by creating user and device profiles and installing enterprise agents that prevent network reconnaissance and east-west movement once a device connects to the network. Based on their identity and device profile, users are placed on either the enterprise employee network or the BYOD/guest network, obscuring resources using the ZTA approach of SDPs. The PE and PA could be hosted either on the LAN or as a cloud service, based on the architecture the organization decides on. All enterprise-owned devices with an installed agent can access enterprise resources through the gateway portal that guards them. All privately owned devices used by visitors, guests, technicians, or employees (such as personal phones), or any other non-enterprise-owned assets, are connected to the BYOD or guest network to use the internet, based on their user and device profile.
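As a simple illustration of this profile-based placement, here is a hypothetical sketch; the profile fields and network segment names are assumptions for illustration:

```python
def assign_network(role: str, enterprise_owned: bool, agent_installed: bool) -> str:
    """Place a device on a network segment based on user and device profile."""
    if role == "employee" and enterprise_owned and agent_installed:
        # Enterprise assets with an agent reach resources via the gateway portal.
        return "enterprise-employee-network"
    # Visitors, guests, technicians, and personal devices get internet-only access.
    return "byod-guest-network"

print(assign_network("employee", enterprise_owned=True, agent_installed=True))
print(assign_network("technician", enterprise_owned=False, agent_installed=False))
```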
Zero Trust Maturity
As organizations adopt zero trust and mature, they go through various stages, adapting based on cost, talent, awareness, and business domain needs. Zero trust is a marathon, not a sprint; incrementally maturing your level of zero trust is the desired approach.
Stage 0: Organizations have not yet thought about the zero trust journey but have on-premises fragmented identity, no cloud integration and passwords are used everywhere to access resources.
Stage 1: Adopting unified IAM by providing single sign-on across employees, contractors and business partners using multi-factor authentication (MFA) to access resources and starting to focus on API security.
Stage 2: In this stage, organizations move towards deploying safeguards such as context-based (user profile, device profile, location, network, application) access policies to make decisions, automating provisioning and deprovisioning of employee/external user accounts and prioritizing secure access to APIs.
Stage 3: The highest maturity level that can be achieved. Organizations adopt passwordless, frictionless solutions using biometrics, email magic links, tokens and other factors.
Most organizations are in stage 0 or stage 1; only large corporations have matured to stage 2. Due to the COVID-19 pandemic, organizations have quickly started to invest heavily in improving their ZT maturity level and overall security posture.
References
NIST Special Publication 800-207 (2nd Draft), Zero Trust Architecture. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf
The State of Zero Trust Security in Global Organizations
Effective Business Continuity Plans Require CISOs to Rethink WAN Connectivity
Zero Trust Security For Enterprise Mobility
Are you a seasoned Agile Practitioner interested in expanding services beyond yourself while providing strategic guidance to a variety of clients?
CC Pace is currently looking for a dynamic Agile Thought Leader who is ready to make an immediate impact and drive our Agile transformation services. The ideal candidate is local to the DC metro area, is comfortable making decisions and implementing innovative ideas. CC Pace will provide a flexible working environment and support interest in growing a personal reputation in the global Agile community, in addition to a competitive, comprehensive suite of benefits.
What will an Agile Thought Leader at CC Pace do?
- Set the direction for our Agile transformation services, drive strategic imperatives, define Agile offerings, establish priorities and grow our Agile business
- Represent CC Pace at conferences, through independently orchestrated thought leadership and by guiding client engagements
- Provide strategic guidance to clients through enhancing, producing and delivering Agile training and coaching both in person and via alternative delivery modes
- Build and mentor a team of consultants to deliver the services both with internal staff and business partners
- Define and brand CC Pace while developing new relationships in the Agile community
Position Requirements:
- Current certified Agile credentials or equivalent level of experience
- Strong communication and presentation skills – must be versed in public speaking and a capable writer
- Proven experience leading Agile engagements, including developing training materials and coaching at both the enterprise and team level
- Ability to demonstrate leadership experience in IT delivery, including building and sustaining high-performing teams
- Strong leadership skills that will support setting the direction for our Agile practice, managing a team of resources and driving the Agile offering and delivery strategies
- Ability to provide ample thought leadership to further our footprint in the Agile community
- Strong business acumen that will assist in supporting the sales & marketing efforts involving Agile transformation services
At CC Pace we have a strong referral program and encourage not only our employees but even those who don’t work for us to take advantage of it – so if you know someone who would be a fit for this position please refer them!
For more information regarding this Agile Thought Leader position, please contact Rechelle Card, rcard@ccpace.com
I recently came across a blog post written by a former developer at Oracle.
The author highlighted the trials and tribulations of maintaining and modifying the codebase of the Oracle Database (version 12). This version, by the way, is not some legacy piece of software; it is the current major version running all over the world. According to the author, the codebase is close to 25 million lines of C code and is described as “unimaginable horror”. It is riddled with thousands of flags, multitudes of macros and extremely complex routines added by generations of developers trying to meet various deadlines. According to the author, this is a common experience for an Oracle developer at the company (their words, not mine):
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come the next day and work on something else. The tests can take 20 hours to 30 hours to complete.
- Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right.
- Finally one fine day you would succeed with 0 tests failing.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch.
Millions of tests
As I read this post, I started getting a little nervous, being a software developer who currently interacts with an Oracle 12 database. It quickly became clear that those millions of tests were acting as a true line of defense against the release of disastrous software. It was interesting to read that the author was diligent in adding tests not just because it was his job, but out of concern for ensuring that future developers would not break the bug fix being added. There are probably many ways of streamlining the development lifecycle described by the author, and maybe that is a topic for another blog, but it seems that having hundreds or thousands of tests fail during development is infinitely better than the alternative: shipping those failures to users.
The black box
Product owners and business stakeholders are not the only people in an organization who may view their codebase as a black box. Developers may as well. As the number of lines of code grows over time, no amount of institutional knowledge can cover all the nuances and complexities that accumulating features introduce. When new developers are added to a project, the value of having automated tests becomes more pronounced. Organizations should strive to remove the perception of the code being a “black box” as much as possible. Failure to do so will result in a loss of confidence in the quality and viability of the products and features being developed, not to mention placing unnecessary burden on test teams.
Legacy Systems
I have heard horror stories from colleagues and friends about maintaining “legacy” systems. One question that comes to mind: if the system is still being modified, is it really a legacy system, or is it just an old system that is still updated? In my experience, when a bug fix or a new feature is added to an older system without automating the corresponding tests, the cost most often shows up as some other, seemingly unrelated bug being introduced. Sometimes it can be very difficult to automate tests, especially in old systems, where there may not be any available testing frameworks that can easily be plugged in. In such cases I would advocate for writing your own testing framework, as sketched below. It may not be as sophisticated as some of the commercial or open-source packages, but it will help immensely. Test automation in older systems can also ensure a smooth retirement of obsolete features that keep the codebase bloated and more complex than it needs to be.
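To make that concrete, here is a sketch of how small such a homegrown harness can start out. The idea is language-agnostic; it is rendered in Elixir here only for illustration, and every name in it is hypothetical.

# A deliberately tiny, hypothetical test harness: run named checks,
# count the passes, and report the failures. Not a real framework,
# just an illustration that "roll your own" can start this small.
defmodule TinyTest do
  def run(cases) do
    results = Enum.map(cases, fn {name, fun} -> {name, safe(fun)} end)
    failures = for {name, {:failed, reason}} <- results, do: {name, reason}
    IO.puts("#{length(results) - length(failures)} passed, #{length(failures)} failed")
    Enum.each(failures, fn {name, reason} -> IO.puts("FAIL #{name}: #{inspect(reason)}") end)
  end

  defp safe(fun) do
    try do
      fun.()
      :passed
    rescue
      e -> {:failed, e}
    end
  end
end

# Usage: each case is a name plus a function that raises on failure.
# TinyTest.run([
#   {"adds numbers", fn -> 4 = 2 + 2 end},
#   {"monthly total", fn -> 100 = LegacyReport.total(:october) end}   # LegacyReport is hypothetical
# ])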
Flaky Tests
In my experience, having more automated tests is better than having fewer. Over time, some tests may become redundant as new ones are added. This is not the worst problem to have and can generally be managed as technical debt. What is worse is the presence of flaky tests: tests that flip between passing and failing from day to day with no clear explanation. They are a significant time sink and erode confidence in the product being developed. Getting rid of such tests should be a top priority. Ideally, they should be rewritten and made more robust, but in some cases they may simply be removed, since they do not reliably prove that the software is working as it should.
Conclusion
Test automation is not new, but it still appears to be lacking in many systems. Organizations should embrace the cost of test automation as part of the cost of developing new features and modifying existing ones. Doing so will help build confidence in ensuring quality for product owners as well as developers.
Happy testing!
Introduction
In the last post, we looked at pattern matching and data structures in Elixir. In this post, we’re going to look particularly at Elixir processes and how they affect handling errors and state.
Supervisors & Error Handling
Supervisors, or rather supervision trees as controlled by supervisors, are another differentiating feature of Elixir. Processes in Elixir are supposed to be much lighter weight than threads in other languages,[1] and you’re encouraged to create separate processes for many different purposes. These processes can be organized into supervision trees that control what happens when a process within the tree terminates unexpectedly. There are a lot of options for what happens in different cases, so I’ll refer you to the documentation to learn about them and just focus on my experiences with them. Elixir has a philosophy of “failing fast” and letting a supervision tree handle restarting the failing process automatically. I tried to follow this philosophy to start with, but I’ve been finding that I’m less enamoured with this feature than I thought I would be. My initial impression was that this philosophy would free me of the burden of thinking about error handling, but that turned out not to be the case. If nothing else, you have to understand the supervision tree structure and how that will affect the state you’re storing (yes, Elixir does actually have state, more below) because restarting a process resets its state. I had one case where I accidentally brought a server down because I just let a supervisor restart a failed process. The process dutifully started up again, hit the same error and failed again, over and over again. In surprisingly short order, this caused the log file to take up all the available disk space and took down the server entirely. Granted this could happen if I had explicit error handling, too, but at least there I’d have to think about it consciously and I’m more likely to be aware of the possibilities. You also have to be aware of how any child processes you created, either knowingly or unknowingly through libraries, are going to behave and add some extra code to your processes if you want to prevent the children bringing your process down with them. So, I’m finding supervisors a mixed blessing at best. They do offer some convenience, but they don’t really allow you to do anything that couldn’t be done in other languages with careful error handling. By hiding a lot of the necessity for that error handling, though, I feel like it discourages me from thinking about what might happen and how I would want to react to those events as much as I should. As a final note tangentially related to supervisors, I also find that the heavy use of processes and Elixir’s inlining of code can make stack traces less useful than one might like.
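For readers who haven’t seen one, a minimal supervision tree looks something like the sketch below (module names are hypothetical; see the Supervisor documentation for the full range of options).

# Sketch of a supervision tree: two workers under one supervisor.
# With the :one_for_one strategy, a crashed child is restarted alone,
# and the restart resets its state, which is exactly the caveat
# discussed above. The MyApp.* module names are hypothetical.
defmodule MyApp.Supervisor do
  use Supervisor

  def start_link(init_arg) do
    Supervisor.start_link(__MODULE__, init_arg, name: __MODULE__)
  end

  @impl true
  def init(_init_arg) do
    children = [
      {MyApp.DeviceServer, []},   # e.g. a GenServer holding per-device state
      {MyApp.LogServer, []}
    ]

    # max_restarts/max_seconds bound the restart loop described above:
    # if children fail too often, the supervisor itself gives up.
    Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)
  end
end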
All that being said, supervisors are very valuable in Elixir because, at least to my thinking, the error handling systems are a mess. You can have “errors,” which are “raised” and may be handled in a “rescue” clause. You have “throws,” which appear not to have any other name and are “thrown” and may be handled in a “catch” clause, and you have “exits,” which can also be handled in a “catch” clause or trapped, converted to a message and sent to the process outside of the current flow of control. I feel like the separation of errors and throws is a distinction without a difference. In theory, throws should be used if you intend to control the flow of execution and errors should be used for “unexpected and/or exceptional situations,” but the idea of what’s “unexpected and/or exceptional” can vary a lot between people, so there have been a few times when third-party libraries I’ve been using raised an error in a situation that I think should have been perfectly expected and should have been handled by returning a result code instead. (To be fair, I haven’t come across any code using throws. The habit seems to be to handle things with with statements instead.) There is a recommendation that one should have two versions of a function: one that returns a result code and another, with the same name suffixed with a bang (File.read vs. File.read!, for example), that raises an error. But library authors don’t follow this convention religiously, and sometimes you’ll find a function that’s supposed to return no matter what calling a function in yet another library that does raise an error, so you can’t really rely on it.
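A compact illustration of the three forms (a sketch; the printed messages are just placeholders):

# errors: raised, handled with rescue
try do
  raise ArgumentError, "unexpected and/or exceptional"
rescue
  e in ArgumentError -> IO.puts("rescued: #{e.message}")
end

# throws: thrown, handled with catch (meant for flow control)
try do
  throw(:found_it)
catch
  :throw, value -> IO.puts("caught throw: #{inspect(value)}")
end

# exits: also handled with catch, or trapped and converted to
# messages (as discussed below)
try do
  exit(:gave_up)
catch
  :exit, reason -> IO.puts("caught exit: #{inspect(reason)}")
end

# the bang convention: File.read returns {:ok, contents} or
# {:error, reason}, while File.read! returns the contents or raises
{:error, :enoent} = File.read("no_such_file")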
I haven’t seen any code explicitly using an exit statement or any flow control trying to catch exits, but I have found that converting the unintentional errors that sometimes happen into messages allows me to keep other processes from taking down my processes, thus allowing me to maintain the state of my process rather than having it start from its initial state again. That’s something I appreciate, but I still find it less communicative when reading code: the conversion is handled by setting a flag on the process, and that flag persists until explicitly turned off, so I find it harder to pinpoint where in my code I was calling the thing that actually failed.
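The flag in question is trap_exit: once set, a linked process’s crash arrives as an ordinary message instead of taking the current process down. A sketch:

# Sketch: trapping exits inside a GenServer. After the flag is set,
# a linked process's crash arrives as an {:EXIT, pid, reason} message
# handled by handle_info/2 rather than killing this process. Note the
# flag stays on until explicitly turned off, which is why, as noted
# above, it can be hard to pinpoint later which call actually failed.
defmodule Guarded do
  use GenServer

  @impl true
  def init(state) do
    Process.flag(:trap_exit, true)
    {:ok, state}
  end

  @impl true
  def handle_info({:EXIT, pid, reason}, state) do
    IO.puts("linked process #{inspect(pid)} exited: #{inspect(reason)}")
    {:noreply, state}   # keep our state instead of crashing with it
  end
end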
All-in-all, I think that three forms of error handling are too many and that explicit error handling is easier to understand and forces one into thinking about it more. (I happen to believe that the extra thinking is worthwhile, but others may disagree.) That being said, adding supervision trees to other languages might be a nice touch.
State
Given that the hallmark of functional programming languages is being stateless, it may seem odd to conclude with a discussion of state, but the world is actually a stateful place and Elixir provides ways to maintain state that are worth discussing.
At one level, Elixir is truly stateless in that it’s not possible to modify the value of a variable. Instead, you create a new value that’s a function of the old value. For example, to remove an element from a list, you’d have to write something like:
my_list = List.delete_at(my_list, 3)
It’s not just that the List.delete_at function happens to create a new list with one fewer element; there’s no facility in the language for modifying the original value. Elixir, however, does let you associate a variable with a process, and this effectively becomes the state of the process. There are several different forms that these processes can take, but the one that I’ve used (and seen used most) is the GenServer. The example in the linked document is quite extensive, and I refer you to that for details, but essentially there are two things to understand for the purposes of my discussion. On the client side, just like in an OOP language, you don’t need to know anything about the state associated with a process, so, continuing with the Stack example from the documentation, a client would add an element to a stack by calling something like:
GenServer.call(process_identifier, {:push, "value"})
(Note that process_identifier can be either a system generated identifier or a name that you give it when creating the process. I’ll explain why that’s important in a little while.) On the server side, you need code something like:
def handle_call({:push, value}, from, state) do
…
{:reply, :ok, new_state}
end
Here, the tuple {:push, value} matches the second parameter of the client’s invocation. The from parameter is the process identifier for the client, and state is the state associated with the server process. The return from the server function must be a tuple, the first value of which is a command to the infrastructure code. In the case of the :reply command, the second value of the tuple is what will be returned to the client, and the final value in the tuple is the state that should be associated with the process going forward. Within a process, only one GenServer callback (handle_call, handle_cast, handle_info, see the documentation for details) can be running at a time. Strictly in terms of the deliberate state associated with the process, this structure makes concurrent programming quite safe because you don’t need to worry about concurrent modification of the state. Honestly, you’d achieve the same effect in Java, for example, by marking every public method of a class as synchronized. It works, and it does simplify matters, but it might be overkill, too. On the other hand, when I’m working with concurrency in OOP languages, I tend towards patterns like Producer/Consumer relationships, which I feel are just as safe and offer me more flexibility.
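Putting the client and server fragments together, a minimal, self-contained Stack server might look like the following sketch (along the lines of the documentation’s example, not a copy of it):

defmodule Stack do
  use GenServer

  # client API: callers never touch the state directly
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def push(value), do: GenServer.call(__MODULE__, {:push, value})
  def pop, do: GenServer.call(__MODULE__, :pop)

  # server callbacks: the state is the list of stacked values
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call({:push, value}, _from, state), do: {:reply, :ok, [value | state]}
  def handle_call(:pop, _from, [head | tail]), do: {:reply, {:ok, head}, tail}
  def handle_call(:pop, _from, []), do: {:reply, :error, []}
end

# Stack.start_link([])
# Stack.push("value")   # => :ok
# Stack.pop()           # => {:ok, "value"}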
There is a dark side to Elixir’s use of processes to maintain state, too: beyond the explicit state discussed above, I’ve run into two forms of implicit state that have tripped me up from time to time. Not because they’re unreasonable in and of themselves, but because they are hidden, and as I become more aware of them, I realize that they do need thinking about. The first form of implicit state is the message queue that’s used to communicate with each process. When a client makes a call like GenServer.call(pid, {:push, "something"}), the system actually puts the {:push, "something"} message onto the message queue for the given pid, and that message waits in line until it gets to the front of the queue and then gets executed by a corresponding handle_call function. The first time this tripped me up was when I was using a third-party library to talk to a device via a TCP socket. I used this library for a while without any issues, but then there was one case where it would start returning the data from one device when I was asking for data from another device. As I looked at the code, both the library code and mine, I realized that my call was timing out (anytime you use GenServer.call, there’s a timeout involved; I was using the default of 5 seconds, and it was that timeout being triggered rather than a TCP timeout), but the device still returned its data shortly after that. The timing was such that the code doing the low-level TCP communications put the response data into the higher-level library’s message queue before my code put its next request into the library’s queue, so the library sent my query to the device but gave my code the response to the previous query and went back to waiting for another message from either side, making everything one query off until another call timed out. This was easy enough to solve by making the library use a different process for each device, but it illustrates that you’re not entirely relieved of thinking about state when writing concurrent programs in Elixir. Another issue I’ve had with the message queue is that anything that has your process identifier can send you any message. I ran into this when I was using a different third-party library to send text messages to Slack. The documentation for that library said that I should use the asynchronous send method if I didn’t care about the result. In fact, it described the method as “fire-and-forget.” These were useful but non-essential status messages, so that seemed the right answer for me. Great, until the library decided to put a failure message into the message queue of my process and my code wasn’t written to accept it. Because there was no function that matched the parameters sent by the library, the process crashed, the supervisor restarted it, and it just crashed again, and again, and again until the original condition that caused the problem went away. Granted, this was mostly a problem with the library’s documentation and was easy enough to solve by adding a callback to my process to handle the library’s messages, but it does illustrate another area where the state of the message queue can surprise you.
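The defensive fix for that second incident generalizes: give the process a catch-all handle_info clause, placed after the specific clauses, so unexpected messages are logged and dropped instead of crashing the process. A sketch:

# Sketch: a final, catch-all handle_info clause that absorbs any
# message this process wasn't written to expect (for example, a
# library's "fire-and-forget" failure reports).
@impl true
def handle_info(unexpected, state) do
  IO.puts("ignoring unexpected message: #{inspect(unexpected)}")
  {:noreply, state}
end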
The second place that I’ve been surprised to find implicit state is the state of the processes themselves. To be fair, this has only been an issue in writing unit tests, but it does still make life somewhat more difficult. In the OOP world, I’m used to having a new instance of the class I’m testing created for each and every test. That turns out not to be the case when testing GenServers. It’s fine if you’re using the system generated process identifiers, but I often have processes that have a fixed identifier so that any other process can send a message to them. I ended up with messages from one test getting into the message queue of the process while another test was running. It turns out that tests run asynchronously by default, but even when I went through and turned that off, I still had trouble controlling the state of these specially named processes. I tried restarting them in the test setup, but that turned out to be more work than I expected because the shutdown isn’t instantaneous and trying to start a new process with the same name causes an error. Eventually I settled on just adding a special function to reset the state without having to restart the process. I’m not happy about this because that function should never be used in production, but it’s the only reliable way I’ve found to deal with the problem. Another interesting thing I’ve noticed about tests and state is that the mocking framework is oddly stateful, too. I’m not yet sure why this happens, but I’ve found that when I have some module, like the library that talks over TCP, a failing test can cause other tests to fail because the mocking framework still has hold of the module. Remove the failing test and the rest pass without issue, same thing if you fix the failing test. But having a failing test that uses the mocking framework can cause other tests to fail. It really clutters up the test results to have all the irrelevant failures.
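The reset function I settled on amounts to something like the sketch below (initial_state/0 is a hypothetical helper; as noted, this exists purely for tests):

# Sketch: a test-only escape hatch that swaps in a fresh state
# without restarting the named process, avoiding the slow, racy
# stop/start dance described above. The cost is shipping a function
# that should never be called in production.
def reset_for_tests do
  GenServer.call(__MODULE__, :reset)
end

@impl true
def handle_call(:reset, _from, _state) do
  {:reply, :ok, initial_state()}   # initial_state/0 is hypothetical
end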
Conclusion
I am reminded of a story wherein Jascha Heifetz’s young nephew was learning the violin and asked his uncle to demonstrate something for him. Heifetz apparently took up his nephew’s “el cheapo student violin” and made it sound like the finest Stradivarius. I’m honestly not sure if this is a true story or not, but I like the point that we shouldn’t let our tools dictate what we’re capable of. We may enjoy playing a Stradivarius/using our favorite programming language more than an “el cheapo” violin/other programming language, but we shouldn’t let that stop us from achieving our ends. Elixir is probably never going to be my personal Strad, but I’d be happy enough to use it commercially once it matures some more. Most of the issues I’ve described in these postings are likely solvable with more experience on my part, but the lack of maturity would hold me back from recommending it for commercial purposes at this point. When I started using it last June, Dave Thomas’ Programming Elixir was available for version 1.2 and the code was written in version 1.3, although later versions were already available. Since then, the official release version has gotten up to 1.6.4, with significant-seeming changes in each minor version. Other things that make me worry about the maturity of Elixir include Ecto, the equivalent of an ORM for Elixir, which doesn’t list Oracle as an officially supported database, and the logging system, which only writes to the console out of the box. (You can implement custom backends for it, but I personally would prefer to spend my time solving business problems rather than writing code to make more useful logs, and I haven’t seen a third-party library yet that would do what I want for logging.) For now, my general view is that Elixir should be kept for hobby projects and personal interest, but it could be a viable tool for commercial projects once the maturity issues are dealt with.
About the Author
Narti is a former CC Pace employee now working at ConnectDER. He’s been interested in the design of programming languages since first reading The Dragon Book and the proceedings of HOPL I some thirty years ago.
[1] I’ve no reason to doubt this. I just haven’t tested it myself.
Introduction
Last June I took over an Elixir based project that involves a central server communicating with various devices over a TCP/IP network. This was my first real exposure to Elixir, so I quickly got a copy of Dave Thomas’ Programming Elixir and went through Code School’s Elixir tutorials so that I could at least start understanding the code I was inheriting. Since then, I’ve spent a fair amount of time changing and adding to this project and feel that I’m starting to come to terms with Elixir and have something to say about it.
My purpose in these posts is not to offer another introductory course in the language but rather to look at some of the features that are said to distinguish it from other languages and think about how different these features really are and how they affect the readability and expressivity of the code that one writes. In this first entry, we’ll look at pattern matching, which constitutes Elixir’s biggest difference from other languages, and at the data structures available in Elixir. A forthcoming post will look at supervision trees, error handling and the handling of state in Elixir.
Pattern Matching
Pattern matching is perhaps the feature that most distinguishes Elixir from traditional imperative languages and, for me, is the most interesting part of the language. Take a statement like:
x = 1
In an imperative language, this assigns the value of 1 to the variable x. In Elixir, it effectively does the same thing in this case. But in Elixir, the equals symbol is called a match operator and actually performs a match between the left-hand side and the right-hand side of the operator. So, we can do things like
{:ok, x} = Enum.fetch([2, 4, 6, 8], 2)
do_something(x)
report_success()
In this case, the fetch function of the Enum module would return {:ok, 6}, which would match the left-hand side of the expression and the variable x would take on the value of 6. We then call the do_something method, passing it the value of x. What would happen, though, were we to call fetch with a position that didn’t exist in the collection we passed it? In this case, fetch returns the atom :error, which would not match the left-hand side of the expression. If we don’t do anything to handle that non-matching expression, the VM will throw an error and stop the process that tried to execute the failed match. (This is not as drastic as it sounds, see the section on Supervisors, below.) All is not lost, though. There are several ways we could avoid terminating the process when the match fails, but I’ll just describe two: the with statement and pattern matching as part of the formal parameters to a function declaration. Strictly speaking, the former is a macro, which I’m not going to cover, but it acts like a flow control statement and quacks like a flow control statement, so I’m not going to worry about the distinctions. Rewriting the above to avoid terminating the running process using a with statement would look like
with {:ok, x} <- Enum.fetch([2, 4, 6, 8], 2) do
do_something(x)
report_success()
else
:error -> report_fetch_error()
end
In this case, we’re saying that we’ll call do_something if and only if the result of calling fetch matches {:ok, x}; if the result of calling fetch is :error, we’re going to call report_fetch_error, and if the result is something else, which isn’t actually possible with this function, we’d still throw an error. We can actually have multiple matches in both the with and the else clauses, so we could also do something like:
with {:ok, x} <- Enum.fetch([2, 4, 6, 8], 2),
:ok <- do_something(x) do
report_success()
else
:do_something_error -> report_do_something_error()
:error -> report_fetch_error()
end
Here, do_something will still not be executed unless the result of fetch matches the first clause, but the else clauses are executed with the first non-matching value in the with clauses. If do_something also just returned :error, though, we’d have to separate this out into two with statements.
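For concreteness, here is one way that separation could look (a sketch reusing the functions above):

# If do_something/1 also returned plain :error, a single else block
# could not tell the two failures apart, so we nest the withs and
# handle each failure next to the call that produced it.
with {:ok, x} <- Enum.fetch([2, 4, 6, 8], 2) do
  with :ok <- do_something(x) do
    report_success()
  else
    :error -> report_do_something_error()
  end
else
  :error -> report_fetch_error()
end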
The other way we could handle the original problem of the non-matching expression would be to use pattern matching in the formal parameters of a set of functions we define. For example, we could express the code above as:
do_something(Enum.fetch([2, 4, 6, 8], 2))
def do_something({:ok, x}) do
result = … # :ok or :error
report_do_something_result(result)
end
def do_something(:error) do
report_fetch_error()
end
def report_do_something_result(:ok) do
report_success()
end
def report_do_something_result(:error) do
# whatever it takes to report the error
end
In this case, we’re using pattern matching to overload a function definition and do different things based on what parameters are passed, instead of using flow control statements. The Elixir literature tends to say that this is actually preferable to using flow control statements as it makes each function simpler and easier to understand, but I have my doubts about it. I find that I tend to duplicate a lot of code when I use pattern matching as a flow control mechanism. To be fair, I’m learning more and more techniques to avoid this, but in the end I suspect that the discouragement of flow control structures and, perhaps more importantly, the lack of subclassing make the code harder to understand and make it harder to avoid duplicating code than I’d like.
Sometimes, too, I feel like the cure for avoiding this kind of duplication can be worse than the disease. One technique I’ve seen used is essentially to use reflection, and I found that code very hard to follow and some of the code just got duplicated between modules instead of being duplicated in a single module. There are other techniques for dealing with this kind of duplication, too, but I can’t shake the feeling that I’m being pushed into making the code less expressive when trying to remove duplication when other languages would have made removing the duplication easier and more obvious.
Another concern I have around this form of overloading functions through pattern matching is that the meaning of the patterns to match isn’t always obvious. For example, looking at two functions like:
def do_something(item, false) do
…
end
def do_something(item, true) do
…
end
How do you know what the true and false mean? Sadly, you have to go back and find the code that’s calling these functions to see what the distinguishing parameter represents. You can actually write the declarations to be more intention revealing:
def do_something(item, false = _active) do
…
end
def do_something(item, true = _active) do
…
end
But the majority of the code I’ve looked at is written in the former style that doesn’t reveal its intentions as well as the latter.
Pattern matching is at once the most intriguing and, sometimes, the most troubling element of the language. I find it an interesting concept, but I’m still not sure whether it’s something I’m really happy about.
Data Structures
Elixir displays a curious dichotomy in regards to data structures. On the one hand, we’re told that we shouldn’t use data structures because they represent the bad-old object-oriented way of doing things. On the other hand, Elixir does allow one to create structs, which define a set of fields, but no behavior, and set default values for them. Structs are really just syntactic sugar on top of Elixir’s maps, though. An Elixir map is a set of key/value mappings, and a struct enforces the set of keys that one can use for that particular map. Otherwise they’re interchangeable, so I’m going to ignore structs for the rest of this discussion. Elixir offers two other data structures that aggregate data: lists and tuples. Conceptually, Elixir lists are pretty much like lists in other programming languages. (Remembering that all of these data structures are immutable in Elixir.) The documentation says that there are some low-level implementation differences that make Elixir lists more efficient to work with than in other languages, but I haven’t concerned myself terribly about this. Finally, in terms of aggregating data, we have tuples. One can use a tuple much like a list, except that again there are implementation details that make them more efficient to use in some situations than lists, while in other situations lists are more efficient. In practice, I’ve mostly seen tuples used when one wants to match a pattern and lists used when one is just storing the data. One somewhat contrived example of all of this and then some discussion:
{:ok, %{grades: [g1, g2, g3], names: [n1, "student2", n3]}} = get_grades()
So, what’s going on here? We’re again using pattern matching to expect the function get_grades to return a tuple consisting of the atom :ok and a map containing two lists of three elements each, where the name of the second student in the list must be “student2,” just to show that we can do a match to that level.
While more specific than is typical, the pattern of returning status and response is fairly common in Elixir. This supposedly obviates a lot of exception handling, but to my way of thinking, it really just replaces it with with statements, as described previously. I’m not convinced that it makes that much difference.
I think that the prevalent use of atoms, on the other hand, makes a large difference. Sadly, it’s a negative difference in that atoms turn into another form of magic numbers (in the code smell sense). Elixir has no way to define a truly global constant, so there’s actually no way around polluting one’s code with these magic numbers and there’s no compiler support for when one mistypes one of them. For example, consider some hypothetical code dealing with a TCP socket:
def send_stuff(socket, stuff) do
with :ok <- :gen_tcp.send(socket, stuff) do
:ok
else
{:error, :econnrefused} -> …
{:error, :timeout} -> …
end
end
def open_socket(address, port, options) do
with {:ok, socket} <- :gen_tcp.connect(address, port, options) do
{:ok, socket}
else
{:error, :econnnrefused} -> …
{:error, :timeout} -> …
end
end
There’s nothing to tell me that I’ve accidentally added an extra “n” to :econnrefused in the open_socket function until it turns into a runtime error. (This is also the kind of code that will often not get unit tested because it’s hard to deal with. I haven’t had to write any code at this level yet, so I’m unsure if I could mock gen_tcp or not, but that’s the only way I could think of to ensure that all the error handling worked correctly.)
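One partial mitigation (my suggestion, not an established Elixir convention) is to route shared atoms through module attributes, since a mistyped attribute name at least produces a compile-time warning instead of a silently unreachable clause:

# Sketch: module attributes as local pseudo-constants. Mistyping
# @econnnrefused triggers an "undefined module attribute" warning at
# compile time, whereas mistyping :econnnrefused compiles silently.
# The catch, per the point above: attributes are module-local, so
# this still isn't a truly global constant.
defmodule SocketClient do
  @econnrefused :econnrefused
  @timeout :timeout

  def send_stuff(socket, stuff) do
    with :ok <- :gen_tcp.send(socket, stuff) do
      :ok
    else
      {:error, @econnrefused} -> {:error, :connection_refused}
      {:error, @timeout} -> {:error, :timed_out}
    end
  end
end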
Conclusion
I find Elixir’s pattern matching a very interesting feature of the language. Sometimes I think that it’s a nice step towards adding assertions as first-class elements of production code. At other times, though, I wonder if it really adds to the readability of the code. For example, is the send_stuff example really more readable than:
public send_stuff(socket, stuff) {
try {
socket.send(stuff)
} catch (ConnectionRefusedException cre) {
…
} catch (TimeoutException te) {
…
}
}
That being said, I often vacillate between Java’s checked exceptions and .NET’s unchecked exceptions. Elixir has pushed me much more towards making exceptions checked by default since that makes sure that I think about them when .NET and Elixir let me forget about them, often to my detriment.
Unless and until Elixir offers truly global constants, however, I find myself distrusting the use of atoms in pattern matching to distinguish between functions. I worry that I have too much code that’s like:
def do_something(:type1 = _device_type, other_params) …
def do_something(:type2 = _device_type, other_params) …
def do_something(:type3 = _device_type, other_params) …
Granted that I could probably do more to avoid this, but I also feel like this would make the code less expressive.
Next time we’ll look at supervision trees, error handling and handling state in Elixir.
About the Author
Narti is a former CC Pace employee now working at ConnectDER. He’s been interested in the design of programming languages since first reading The Dragon Book and the proceedings of HOPL I some thirty years ago.
In part 1, we discussed how a new team could help lay out the foundation of a social contract between itself and the larger organization. While that works at the macro level, what about the micro, day-to-day interactions within a team? The focus on a social contract can be a powerful tool in helping a team perform at its peak. At its core, this commitment between team members strengthens trust, sets expectations, and lets the team push beyond a previously understood comfort level. Both sides of an interaction must be dedicated to fulfilling their part of the equation.
What are some ways this could look on a day-to-day basis?
Take the simple act of a code review. At a surface level, the developer running the review would give a high-level overview of the card being worked. They would walk through the basic coding that was done and do a brief demo of the new functionality. The reviewer nods along, perhaps pointing out an item or two regarding coding standards, and moves along without a deeper probing of the work done. In a scenario with a strong social contract defined, this would be seen as falling well short of what is expected of both team members. The developer running the review would give a fuller explanation of the goal of the card and tie it into the larger picture of the application or the organization’s goals. The reviewer would ask more intensive questions, including, for example, how test-driven development concepts were used to work the card. The demo of the new functionality would not only show the happy path, but push at the edge cases that were considered. The reviewer would prod on the design and maintainability of the code. The goal of the exercise would be understood to be not just checking a box, but providing value to each member of the team, and to the overall team itself. There would be increased confidence in the quality of the code and the product. There would be increased knowledge-sharing amongst the team, leading to less stovepiping of expertise. Lastly, team members would be building trust with each other so that they could feel more comfortable communicating questions or out-of-the-box ideas.
The same idea can be applied to team-wide activities. Does your team do a daily stand-up and regular retrospectives? This is an area where muscle memory can set in, and team members can start to just go with the flow. If they follow the concepts of the social contract, team members know that they are responsible to make sure the team is generating value from these. Put down the phone during the meeting. Make eye contact when addressing the team. Commit to putting into action the steps generated to make the team work better.
The big picture with the social contract is to ensure team members are fully engaging in the tasks of the team. Confidence, respect and a sense of responsibility to each other and the larger team build organically over time. Only then can teams truly hit the “perform” stage of “Form-Storm-Norm-Perform” and deliver the full value to the organization and its customers.
It’s a scenario we’ve all been a part of before. To shake things up, your Agile teams are being restructured. After the initial shuffle, the team gets together for a first meeting to figure out how it is going to work. Introductions are made, experiences are shared. Maybe a team lead is named. It’s a heady time full of expectations. Following the cycle of Forming-Storming-Norming-Performing, phase one is off to a good start.
At the first team retro, a better understanding of what everyone brings to the team starts to take shape. Relationships and communications within the team, as well as other players within the organization, take root. The team also starts to get a sense of where there are some gaps. Maybe it’s a misunderstanding of how code reviews work, or how cards are pointed. Storming has happened, and the team is ready to begin the transition to the Norming phase.
I’d suggest that team norms, which tend to be prescriptive in nature, fall short of what stakeholders hope they will accomplish. Instead, a social contract is a better concept to work towards.
A social contract is a team-designed agreement on an aspirational set of values, behaviors and social norms. It not only sets expectations but also responsibilities. Instead of focusing on how individual team members should approach the work of the team and organization, it lays out the responsibilities of the team members to each other. It also lays out the responsibilities and expectations between the team and the organization.
What would this type of contract look like? It should call out both sides of a relationship. An example of part of a social contract may look like this:
- The Team promises to place value through deliverable software as the highest goal to the organization, as defined by the Product Owner
- The Team promises to raise any obstacles preventing them from delivering value immediately
- The Organization promises to address and remove obstacles in a timely manner to the best of their ability
- The Organization promises to maintain reasonable stability of the team so that it has the opportunity to mature and reach its highest potential
In the spirit of the social contract, this should be discussed and brainstormed with open minds and constructive dialog with both sides of the social equation. In truly Agile fashion, it should also be considered an iterative process, and reviewed from time to time to ensure the social contract itself is providing value.
Introduction
This is the second in a series of posts about our experience using Visual Studio Team Services (VSTS) to build a deployment pipeline for an ASP.NET Core Web Application. Future posts will cover release artifacts and deployment to Azure cloud services.
Prerequisites
It’s assumed that you have an ASP.NET Core project set up in VSTS and connected to a Git repository.
See the previous blog post for details.
Goal
The goal of this post is to set up your ASP.NET Core project to automatically build and run unit tests after every commit to the source code repository (i.e., continuous integration).
Adding a New Build Definition
Log into VSTS. For this demo, the VSTS account we will be using is https://ccpacetest.visualstudio.com, and the Microsoft user is CCPaceTest@outlook.com.
Select the project from our previous post.
In VSTS, go to Build under the Build and Release tab
- Select “+ New Definition”
- Select ASP.NET Core Template
- Enter any Name that can help you to identify this build.
- Select an appropriate Agent queue. In this example, we will use Hosted VS2017. Use this agent if you’re using Visual Studio 2017 and you want the VSTS service to maintain your queue.
- To simplify the process, use the default value for other fields.
- Go to “Trigger” and enable continuous integration. This will cause a build to automatically kick off after every code commit.
- Save the definition.
Adding a Test Project to the Solution
- Open the solution (from the previous post) in Visual Studio.
- Add a new Test Project.
- Select .NET Core > Unit Test Project. We will name this project MyFirstApp.Tests. Note: the default build definition will look for test projects under folders that end with the word “Tests”. So, make sure that a folder containing this word is created when you add your Unit Test Project.
- For a proof of concept, we are going to write a dummy Test Method. Enter the following code in UnitTest1.cs
- Rebuild the project.
- Commit the changes locally and push it to the remote source repo.
- Back in VSTS, you can see that a Build has been triggered.
- Click on the build number #2018XXXX.X to view the details of the build. Normally this will take a few minutes to complete.
- Ensure all of the steps passed. You can click on each step to view log details.
What’s Next?
We’ll demonstrate how to deploy builds to different environments, either via push-button deployment or triggered automatically after each build (i.e., continuous deployment).
Stay tuned!
‘Agile’… ‘Lean’… ‘Fitnesse’… ‘Fit’… ‘(Win)Runner’… ‘Cucumber’… ‘Eggplant’… ‘Lime’… As 2018 draws near, one might hear a few of these words bantered around the water cooler at this time of year as part of the trendy discussion topic: our personal New Year’s resolutions to get back into shape and eat healthy. While many of my well-intentioned colleagues are indeed well on their way to a healthier 2018, many of these words were actually discussed during a strategy session I attended recently – which, surprisingly, based on the fact that many of these words are truly foods – did not cover new diet and exercise trends for 2018. Instead, this planning session agenda focused on another trendy discussion topic in our office as we close out 2017 and flip the calendar over to 2018: software test automation.
“SOFTWARE TEST AUTOMATION?!?” you ask?
“SERIOUSLY – cucumbers and limes and fitness(e)?!?”
This thought came to mind after the planning session and gave me a chuckle. I thought, “If a complete stranger walked by our meeting room and heard these words thrown around, what would they think we were talking about?”
This humorous thought resonated further when recently – and rather coincidentally – a client asked me for a high-level, summary explanation as to how I would implement automated testing on a software development project. It was a broad and rather open-ended question – not meant to be technical in nature or to solicit a solution. Rather, how would I, with a background in Agile Business Analysis and Testing (i.e. I am not a developer) go about kickstarting and implementing a test automation framework for a particular software development project?
This all got me thinking. I’ve never seen an official survey, but I assume many people employed or with an interest in software development could provide a reasonable and well-informed response if ever asked to define or discuss software test automation, the many benefits of automated testing and how the practice delivers requisite business value. I believe, however, that there is a substantial dividing line between understanding the general concepts of test automation and successfully implementing a high-quality and sustainable automated testing suite. In other words, those who are considered experts in this domain are truly experts – they possess a unique and sought-after skill set and are very good at what they do. There really isn’t any middle ground, in my opinion.
My reasoning here is that getting from ‘Point A’ (simply understanding the concepts) to ‘Point B’ (implementing and maintaining an effective and sustainable test automation platform) is often an arduous and laborious effort which, unfortunately, does not always result in success. At a fundamental level, the journey to a successful test automation practice involves the following:
- Financial investment: Like with any software development quality assurance initiative, test automation requires a significant financial investment (in both tools and personnel). The notion here, however – like any other reasonable investment – is that an upfront financial investment should provide a solid return down the line if the venture is successful. This is not simply a two-point ‘spike’ user story assigned to someone to research the latest test automation tools. To use the poker metaphor – if you are ready to commit, then you should go all-in.
- Time investment: How many software development project teams have you heard stating that they have extra time on their hands? Surely, not many, if any at all. Kicking off an automated testing initiative also requires a significant upfront time investment. Resources otherwise assigned to standard analysis, development or testing tasks will need to shift roles and contribute to the automated testing effort. Researching and learning the technical aspects of automated testing tools, along with the actual effort to design, build out and execute a suite of automated tests requires an exceptional team effort. Reassigning team tasks initially will reduce a team’s velocity, although similar to the financial investment concept, the hope is significant time savings and improved quality down the line in later sprints as larger deployments and major releases draw near.
- Dedicated resources with unique, sought-after skill sets: In my experience, I’ve seen that usually the highest-rated employees with the most institutional/system knowledge and experience are called on to manage and drive automated testing efforts. These highly rated employees are also more than likely the most expensive, as the roles require a unique technical and analytical skill set, along with significant knowledge of corresponding business processes. Because these organizational ‘all-stars’ will initially be focused solely on the test automation effort, other quality assurance tasks will inherently assume added risk. This risk needs to be mitigated in order to prevent a reduction in quality in other organizational efforts.
It turns out that the coincidental internal automated testing discussion and timely client question – along with the ongoing challenge in the QA domain associated with the aforementioned ‘Point A to Point B’ metaphor – led to a documented, bulleted list response to the client’s question. Let’s call it an Agile test automation best-practices checklist. This list can be found below and provides several concepts and ideas an organization could utilize in order to incorporate test automation into their current software testing/QA practice. Since I was familiar with the client’s organization, personnel and product offerings, I could provide a bit more detail than necessary. The idea here is not the ‘what’, as you will not find any specific automation tools mentioned. Instead, this list covers the ‘how’: the process-oriented concepts of test automation along with the associated benefits of each concept.
This list should provide your team with a handy starting point, or a ‘bridge’ between Point A and Point B. If your team can identify with many of the concepts in this list and relate them to your current testing process and procedures, then further pursuing an automated testing initiative should be a reasonable option for your team or project.
More importantly, this list can be used as a tool and foundation for non-technical members of a software development team (e.g., BA, Tester, ScrumMaster) to start the conversation: to decide whether automated testing fits your established process and procedures, whether it will provide a return on investment, and to ensure that if you do embark down the test automation path, you continue to progress as applications, personnel and teams mature, grow and, inevitably, change. Understand these concepts and when to apply them, and you can learn more about cucumbers, limes and eggplants as you progress further down the test automation path:
To successfully implement and advance an effective and sustainable automated testing initiative, I make every effort to follow a strategy that combines proven Agile test automation best practices with personal, hands-on project and testing experience. As such, this is not an all-inclusive list, rather just one IT consultant’s answer to a client’s question:
For folks new to the world of test automation and for those who had absolutely no idea that ‘Cucumber’ is not only a healthy vegetable but also the name of an automated testing tool, I hope this blog entry is a good start for your journey into the world of test automation. For the ‘experts’ out there, please respond and let me know if I missed any important steps or tasks, or how you might do things differently. After all, we’re all in this together, and the more knowledge is spread throughout the IT world, the more we can enhance our processes.
So, if you’ll excuse me now, I’m going to go ahead and plan out my 2018 New Year’s resolution exercise regimen and diet. Any additional thoughts on test automation will have to wait until next year.
Thoughts on Agile DC 2017
In early February of 2001, a small group of software developers published the Agile Manifesto. Its principles are now familiar to countless professionals. Over the years, Agile practices seem to have gained mainstream appeal. Today it is difficult to find a software developer, or even a manager who has never heard of Agile. Having been a software developer for over 15 years, to me, Agile methodologies appear to embody a very natural way of producing software, so in a lot of ways, Agile conferences produce a “preaching to the choir” effect as well. As I attended the Agile DC 2017 conference, I wanted to remove myself from the bubble and try to objectively take a pulse of the state of Agile today. I also wanted to learn about the challenges that Agile seems to continue to face in its adoption at the enterprise level.
Basic takeaways
As I listened to various speakers and held conversations with colleagues and even one of the presenters, it appears that today the main challenges of adopting Agile no longer deal specifically with software engineering. That generally makes sense, given the amount of literature, combined experience and continuous improvement software development teams have been able to produce as Agile has matured from a disruptive toddler into a young adult. The real struggle appears to be in the adoption of Agile across other business units, not so much IT. Why is that? I believe the struggle can be attributed to two main causes: organizational culture and, most importantly, a lack of knowledge about team dynamics and human psychology.
Culture
Agile seems to continue to clash with the way business units are organized. Some hierarchical structures may produce very siloed and stovepiped groups that have difficulty collaborating with one another. While this may appear as an insurmountable problem, there is at least one great example of how Agile organizations have addressed it. We can point to DevOps as a great strategy of integrating or streamlining traditionally separate IT units. Software development teams and IT Operations teams have traditionally existed in their respective silos, often functioning as completely separate units. There was a clear need of streamlining collaboration between the two to improve efficiency and increase throughput. As a result, DevOps continues on a successful path. In regards to other business units, “Marketing” may depend on “Legal” or “Product Development” may depend on “Compliance”. There may be a clear need to streamline collaboration between these business units. One of my favorite presentations at the conference was by Colleen Johnson entitled “End to End Kanban for the Whole Organization”. By creating a corporate Kanban board, separate business units were able to collaborate better and absorb change faster. One of the business unit leaders was able to see that there were unnecessary resources being allocated to supporting a product development effort that was dropped (into the “dropped” row on the board). Anecdotally, this realization occurred during a walk down the hall and a quick glance at the board.
Education
When it comes to software development, Agile contains key properties that make it a logical and natural fit within the practice. Perhaps its success stories in the software development community are what produced the perception that it applies specifically to software engineering and not to other groups within an organization. One of the main themes of the conference seemed to be educating professionals who remain agnostic about the benefits of an Agile transformation, and I believe that continuous education is critical to getting business unit leaders to embrace Agile at the enterprise level. Some properties of Agile and its related methodologies address important aspects of human psychology, and this is where the education seems to fall short. There is much focus on improvement in efficiency and production of value, with little explanation of why these benefits are reaped. In simple terms, working in an Agile environment makes people happier. There are “softer” aspects that business leaders should consider when adopting Agile methodologies. Focus on incremental problem solving provides a higher level of satisfaction as work gets completed, which helps trigger the reward system of our brain. Continuous delivery improves morale, as staff are able to connect their immediate efforts to the visible progress of their projects. Close collaboration and problem solving create strong bonds between team members and foster shared accountability. A detailed discussion of team dynamics and psychology is perhaps a whole separate topic for another blog, but I feel these topics are important to managing the perception of the role of Agile at the enterprise level.
Final thoughts
It is important to note that enterprise-wide Agile transformations are taking place. For example, several speakers at the conference highlighted success stories at Capital One. It seems that enterprise-wide adoption is finally happening, albeit very incrementally, but perhaps that is the Agile way.
Boy, this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to the University of Maryland, Baltimore County, for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.
Niels left a solid impression in a short amount of time at CC Pace. In a matter of 10 weeks, he interacted with and enhanced internal processes for virtually all of CC Pace’s departments, including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), and Sales and Marketing. On his last day, I walked Niels around the office; as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around, and many folks expressed hope that Niels’ path and ours will soon cross again.
Back in June I gladly accepted the challenge of filling the ‘mentor’ role as Niels embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing about my own days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…)
Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was to support the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.
As I reminisced about my eight months at GE, I couldn’t help but notice how the ‘lessons learned’ I took away from my experience 20+ years ago compared and contrasted with the observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years – and the technology we use every day is clearly the biggest distinction. I would be remiss not to also mention the obvious generation gap: I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’ and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.
In summary, my reminiscing about the late ’90s (which certainly made my daily music choices easier for a few weeks this summer) led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE, 20-odd years ago, along with how they compared or contrasted with what I observed in the last 10 weeks working side-by-side with our intern, Niels. These observations are based on my role as his mentor and were provided as feedback to Niels in his summary review; they are in no particular order.
Have you similarly had the opportunity to engage in both roles of the intern/mentor relationship as I have? Maybe your examples aren’t separated by 20 years, but by several? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed at how, even now, ‘the more things change, the more they stay the same’.
I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.
While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.
I’m sure many Agile consultants have worked on projects where they were asked to come up with a complete design up-front. That’s true of data warehouse projects too: I’ve seen a client’s database team want the entire schema designed up-front, even before the requirements for the reports the data warehouse would support were identified. What appeared to drive the design was not the business and its report priorities, but the database team and its desire for a complete data model.
While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.
The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each, and, as mentioned, star schemas. It stresses the 7 Ws when designing a data warehouse: who, what, where, when, why, how and how many. These are the questions to ask when talking to the business to come up with an appropriate design. “How many” applies to the fact tables, while the other questions apply to dimension table design.
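To make the fact/dimension distinction concrete, here’s a minimal sketch of one such star, written as C# records since that’s the language used elsewhere on this blog. All of the table and column names (OrderFact, CustomerDim, and so on) are hypothetical, invented purely for illustration rather than taken from the book.

using System;

// Dimension tables answer the descriptive Ws about an event:
public record CustomerDim(int CustomerKey, string Name, string Segment); // who
public record ProductDim(int ProductKey, string Sku, string Category);   // what
public record StoreDim(int StoreKey, string City, string Region);        // where
public record DateDim(int DateKey, DateTime Date, string Quarter);       // when

// The fact table sits at the center of the star: foreign keys out to each
// dimension, plus the numeric measures that answer "how many".
public record OrderFact(
    int CustomerKey,   // -> CustomerDim
    int ProductKey,    // -> ProductDim
    int StoreKey,      // -> StoreDim
    int DateKey,       // -> DateDim
    int Quantity,      // how many
    decimal SalesAmount);

A second business process the stakeholders want to measure (say, shipments) would get its own small fact table like this one, typically reusing dimensions such as DateDim, rather than growing a single all-encompassing schema.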
Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.
I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.
“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt From: Barbara W. Tuchman. “The Guns of August.”
In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.
Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.
Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.
Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?
I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.
Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.
First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?
Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.
Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to generate the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?
I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”
The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.
Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan and be ready to go to your contingent plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.
“Once I started looking around behind the port frames, I figured I could just….”
And so began a summer of endless sailboat projects and no sailing. One project led to the start of another without resolving the first. What does this possibly have to do with software development and Agile techniques?
My old man and I own and are restoring an older sailboat. He is also in the IT profession, and is steeped in classic waterfall development methodology. After another frustrating day of talking past each other, he asked how I felt things could be handled differently in our boat projects.
“Stop starting and start finishing!”
It is the key mindset for Agile. Take a small task that provides value, focus on it, and get it done. It eliminates distraction and gives the user something usable quickly.
Applying this mindset outside of software may not be intuitive, but it can pay dividends quickly. On the boat, we cleared space on the bulkhead, grabbed a stack of post-its and planned out the next project: rewiring the boat. The discussion started with the goal of the project. “We’re just going to tear everything out and rewire everything.” Talk about ignoring non-breaking changes! I suggested that we focus on always having a working product – a sail-able boat – and break the project into smaller tasks that could be worked from start to finish in short, manageable pieces of time.
Approaching the project from that angle, we quickly developed a list of sub-tasks, prioritized them, and put them up on our makeshift Kanban board. This planning was so intuitive and rewarding on its own that we did the same for the other projects we want to tackle before April.
So stop starting, start finishing, and start providing value quicker for your stakeholders.
Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….
Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others? Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks. At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much. If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In. I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.
Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups forum. In the posting, Ron speculated that architecture could push us toward a particular style of TDD. That feels right to me. He also suggested that systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures he’s used to from Smalltalk applications, and that this complexity might make observing the behavior of objects more valuable. That idea puzzles me more.
My own TDD style, which falls somewhere between the Detroit school, which leans toward writing tests that don’t rely on mocks, and the London school, which leans toward using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD. When I first started out, I was working on what I believe would count as a monolithic application, in that my team wrote all the code from the UI back to right before the database drivers. We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the configuration data was in a database, and the actual data to be displayed was stored across multiple tables. It was really quite painful to get all the data set up correctly, and we had to keep a lot more in mind when we were trying to focus on getting the configurable part of the UI written. This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we eventually lit on the idea of putting in a mock object that was much easier to set up than the actual database. In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code. Could we have written that code more simply and avoided the complexity to start with? It was a long time ago and I can’t say whether I’d take the same approach now to solving the same problem, but I still find a lot of advantages in using mocks.
I’ve been wanting to try using a NoSQL database, and this seemed like a good opportunity to try that technology and, after I read Ron’s post, to write the application entirely outside-in, which I always do anyway, but without using mocks, which is unusual for me. I started out writing my front-end using TDD and got to the point where I wanted to connect a persistence mechanism. In a sense, the simplest thing that could possibly work would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database. (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”) I therefore started with one of the NoSQL implementations for .NET. Everything seemed fine for my first few unit tests. Then one of my earlier tests failed after my latest test started passing. Okay, this happens. I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again. I backed out the last test I’d written, too. Now the failing test passed but a different one failed. After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly, without doing a lot of research into it) worked asynchronously, and it seemed that I’d just been lucky with timing before my tests started randomly failing. Okay, this is the point where I’d normally turn to a mocking framework and isolate the problematic stuff in a single class that I could either put the effort into unit testing or else live with testing through automated customer tests.
Because I felt more strongly about experimenting with writing tests without mocks than about using a particular NoSQL implementation, I switched to a different implementation. That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock in an individual class that hides the details of how the data is set up. Had I been following that precept now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree. The interesting thing was that I had to radically change both the test and the production code to change the backing store. As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all. This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code so I could focus on one thing at a time instead of dealing with the whole thing at once. I also ended up giving up on the NoSQL implementation in favor of a more familiar relational database.
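For what it’s worth, the isolation I’m describing looks roughly like this in C#. This is only a sketch of the precept, with hypothetical names (TaskItem, ITaskStore, RelationalTaskStore); it’s not the actual code from my experiment.

using System;
using System.Collections.Generic;

public record TaskItem(Guid Id, string Title, DateTime DueDate);

// The front-end code depends only on this interface, which speaks in the
// domain's terms. Tests can substitute a mock; production code plugs in
// whatever persistence mechanism is currently in favor.
public interface ITaskStore
{
    void Save(TaskItem task);
    IReadOnlyList<TaskItem> FindDueBefore(DateTime cutoff);
}

// Swapping NoSQL for a relational database means rewriting this one class
// (and its own focused tests), not every test of the front-end code.
public class RelationalTaskStore : ITaskStore
{
    public void Save(TaskItem task)
    {
        // ADO.NET/ORM details confined here (elided in this sketch).
    }

    public IReadOnlyList<TaskItem> FindDueBefore(DateTime cutoff)
    {
        // Query details confined here (elided in this sketch).
        return new List<TaskItem>();
    }
}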
So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve found that they isolate me from other random elements, like other developers running tests that happen to modify the same database tables my tests are modifying. I also felt that my tests became much more intention-revealing when I switched to mocks, because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about. This made me realize that the hard part was the mismatch between the way the persistence mechanism (either a relational database or the document-oriented NoSQL database that I tried) represents the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file, or used an object-oriented database instead of a document-oriented one, that complexity would go away. That’s for a future experiment, though. And even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.
Ron also worried that the integration between the different components is not tested when using mocks. As he puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.” I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model. (This was back when one had to write such code oneself.) It was just very easy to mistype names on either side and not realize it until actual user testing. This is the problem that led us to start doing automated customer testing. Although the automated customer tests don’t always have as much detail as the unit tests, I feel they alleviate any concerns that the wrong things are wired together or that the wiring doesn’t work.
It’s also worth mentioning that I really don’t like the style of using mocks that just checks whether a method was called rather than whether it was used correctly. Too often, I see test code like:
mock.Stub(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything)).Return(0);
…
mock.AssertWasCalled(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything));
I would never do something like this for a method that actually returns a value. I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method. The only time I’ll resort to asserting that a method was called (with all the correct parameters) is when that method exists only to generate a side-effect. Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than checking behavior. For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters. Lately, though, with Log4Net, I’ve found that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
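As an illustration of the style I prefer, here’s a sketch in the same Rhino Mocks vein as the snippet above. IRateService and CurrencyConverter are hypothetical names invented for the example, and the exact matcher syntax may vary by mocking framework version.

// Stub the collaborator with the specific arguments we expect...
var rates = MockRepository.GenerateStub<IRateService>();
rates.Stub(r => r.GetRate("USD", "EUR")).Return(0.92m);

var converter = new CurrencyConverter(rates);

// ...then assert on the caller's observable result. A passing test shows
// both that the right parameters were sent and that the return value was
// actually used -- no AssertWasCalled needed.
Assert.AreEqual(92.0m, converter.Convert(100m, "USD", "EUR"));

And here is roughly what the state-based logging check looks like with Log4Net’s MemoryAppender; OrderProcessor and the message text are likewise made up for illustration.

// Route log output to an in-memory buffer instead of mocking the logger.
// (Assumes using System.Linq; and an NUnit-style Assert.)
var appender = new log4net.Appender.MemoryAppender();
log4net.Config.BasicConfigurator.Configure(appender);

new OrderProcessor().Process(invalidOrder); // hypothetical code under test

// Inspect the buffer: did the message we wanted get logged at the level
// we wanted?
var events = appender.GetEvents();
Assert.IsTrue(events.Any(e =>
    e.Level == log4net.Core.Level.Warn &&
    e.RenderedMessage.Contains("invalid order")));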
In his forum posting, Ron is surely right in saying of the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.” My own style has certainly changed over the years, and I hope it will continue to adapt to the circumstances in which I find myself working. Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to set up correctly. Hopefully I’ll also be able to use mocks less as I find more projects that allow me to avoid the impedance mismatch between the code’s concept of the model and that of external systems.
Uber. Eight years ago, the company did not exist and the word was simply a rarely used adjective of German origin meaning “ultra”, like an uber intellectual. Today, Uber has become one of the most successful startups in history and the word has become a commonplace verb in English parlance. Transcending to “verb” status puts Uber in the highly exclusive class of innovative business disrupters like Google and FedEx whose business names and processes have become synonymous with an action that didn’t previously exist but is now done on a regular basis. Who today wouldn’t understand what actions you had taken if you said, “I quickly googled the address for the nearest drop-off spot and uber-ed over there so I could fed-ex my package out on time”?
Uber owns no cars, has no drivers, and has minimal fixed assets. Instead, the company created incredibly user-friendly software that improves aspects of the taxi ride industry we didn’t know needed improvement. Not surprisingly, its full legal name is Uber Technologies, Inc. While the only technology typically found in a traditional taxi cab is the decades-old meter clicking away the increments of the cost of your ride, the Uber software provides new value to both the driver and the customer with useful information such as the location of both parties, a time estimate for pick-up, exact pricing, car options, driving directions, and much more.
By creating this simple way to get a ride, Uber has reached another pinnacle accomplishment whereby the creativity of its business model has become a noun: uber-fication. According to Dr. Paul Marsden in his Reading Room article, The Uberfication of Everything, “…the real genius of Uber lies in a deep understanding of convenience – what it is and why it matters. That’s what Uberfication is all about; pivoting your business to deliver on a core under-exploited consumer need – convenience”.
One thing that every startup has is a dream and a vision. But, let’s be honest, that simply isn’t enough to successfully build a booming new business like Uber. You need the right partners, you need money, and you need passion for the project at hand. We believe that we can help in all these areas, which led us to formalize an offering exclusively for startups.
When I formed CC Pace nearly 36 years ago, I was driven by a vision of a new model for a consulting company – one where integrity and the client’s best interest were ingrained in the firm’s culture and successful delivery could almost be guaranteed by the quality, drive and teamliness of the employees who worked there. While my dream may not have been as wide-reaching as Uber’s, when I think back to that time, I just remember energy, excitement and that ‘anything is possible’ feeling. Over the years, we’ve been very fortunate to work with clients in all phases, from startups to Fortune 500 organizations, all of which we value a great deal. I get excited to work with clients of all sizes, but there is something about working with startups that brings an energy you can’t replicate in other environments. Being a part of someone else’s vision coming to life takes me right back to where I stood over 35 years ago, and it’s an environment in which I’ve seen our project teams thrive.
Our experience working with startups, combined with our project teams’ passion, has led us to formalize an offering to help startups get off the ground with the right technology. To enable us to work on more of these types of efforts, we are officially launching a new risk/reward program for startups. Here, we combine our technical prowess with our business acumen to deliver software that fully and effectively supports the startup’s vision. The premise of our offering is to build the technological platform for your business with less cash required. In exchange for this discount, we agree upon a fair share of some downstream benefits of your startup, reflective of the risk we take.
If you like the idea of maintaining control of your vision while paying less up-front to get the results you need, then I would love to hear from you. Interesting companies with challenging technology needs have been a driver for us for over 35 years. For this reason, we are confident that we can help enable your dream. After all, it’s only a matter of time before the next “Uber” shocks the world.
For more information on the risk/reward program, check out our offering here.
In my personal experience working on various software development projects, the concept of team energy often appears to be either undervalued or benignly ignored by management teams. The reasons are many. First of all, the term may be confused with “team velocity”, which is a relative measurement of a team’s average output or productivity. If velocity appears to be at predictable or positive levels, the management team may choose to believe that team energy is also at satisfactory levels. Organizations may attempt to boost employee morale by putting together team-building exercises and outing events. This macro approach may generate a perceived positive effect on team energy, thus obscuring the need to focus on individual teams. So, what is team energy, and why should managers consider devoting some attention to it?
One can look at team energy as building credit with each individual team, or by the analogy of keeping a rainy-day fund. From a management standpoint, it is important to keep team energy in positive territory. This helps ensure that the team will empower themselves to exceed expectations, as well as step up during times of crisis or high-pressure situations. I have worked in high-energy teams, in which members voluntarily pushed themselves past regular working hours to produce deliverables. These cases did not involve any direct increase in compensation or promotion; people naturally wanted to succeed because they possessed enough energy to do so. I have also witnessed the opposite, where a team’s energy was low, deliverables were in a perpetual state of tardiness and the backlog was steadily accruing bugs. Developers and testers did not feel empowered to succeed and entered a cycle of doing the absolute bare minimum to “get management off their backs”.
Science behind building teams
Agile methodologies, whether Scrum or Kanban, prescribe various techniques focused on continuous improvement that may positively affect team energy. Regardless of whether an organization has truly embraced Agile, it is difficult to find managers who would oppose efforts to improve a team’s ability to deliver faster and at higher value. After all, who is against a boost in productivity? Yet there is a hidden psychological component to continuous improvement that has a causal effect on team energy, one associated with the experience of individual team members.
Studies of team dynamics, such as those conducted by MIT’s Human Dynamics Laboratory and documented in the Harvard Business Review, suggest that there is a science to building high-performing, high-energy teams. One of the keys is to focus on the human brain and the social dynamics of a group. Your teams may be composed of introverts as well as extroverts and a wide range of personalities, but one common factor seems to persist: humans feel good when they achieve their goals and overcome obstacles. A human brain actually rewards its owner with extra dopamine when a goal is achieved. When the team feels good more often than not, team energy goes up; when the opposite occurs, team energy goes down. Therefore, focusing on small, achievable goals not only helps the organization deliver incrementally, but also fosters this psychological benefit of achievement for each individual team member.
Measurements
An organization may choose to periodically measure team energy. One way to do so is through anonymous surveys. Usually this is done at the enterprise level to gauge overall organizational energy. There is certainly value in doing that, but the effort is not focused and may not necessarily apply to individual teams. Small teams may not produce very accurate results; there may be disincentives to be frank when answering a survey, because team members may feel singled out and fear reprisals from management. In addition, more introverted team members may choose not to “rock the boat”. A more effective and team-focused approach is to have an Agile coach periodically take team energy measurements. An opportune time may be during team retrospectives, when a team is usually more receptive to being candid. Most importantly, these measurements should not be secretly stored in a manager’s vault but shared with the team. Adding transparency to the team-building and management process will not only increase team energy, but also foster leadership skills among the more proactive and extroverted team members.
Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.
I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.
The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.
(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)
What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.
But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.
One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.
As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?
We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.
Recently, I attended a meetup for Loudoun’s Tech Startups in Ashburn, VA. It was a great opportunity to discuss ideas in various stages of development, as well as the resources available to bring these ideas to market. It was encouraging to see so many motivated entrepreneurs share their experiences in a local setting, with Loudoun showing its promise as a business incubator.
Michelle Chance from the Innovative Solutions Consortium gave a great speech about the services provided by her organization, such as “hard challenge” events, think-tank style meetings and student/recent graduate mentoring.
The hard challenge events seemed particularly interesting to me since they provide a collaborative environment for solving difficult technology problems. Organizations compete for the best solution and receive awards for most disruptive and innovative technologies.
Another sponsor of the event was the Mason Enterprise Center, which provides consultation and training to small business owners and entrepreneurs. They have regional offices in Fairfax, Fauquier, Loudoun and Prince William counties.
We attended this event because of our past experience developing collaborative solutions for startups. We’ve found that the Agile development philosophy fits nicely with the entrepreneurial spirit of organizations that want to quickly build a product that provides value. In fact, principles such as the minimum viable product and continuous deployment are core to the Lean Startup philosophy championed by Eric Ries. This method encourages startups to build a minimum set of high-value features that can be released quickly. With frequent releases, a company can immediately begin collecting and responding to customer feedback.
If you’re at any stage in the startup process and have a technical idea that you want to explore or expand, there are plenty of resources available in the NoVa area. Also, I encourage you to attend one of the various meetups for startups, such as the Loudoun Tech Startups group.
At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.
Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.
How have you used Scrum outside its “natural” setting in software development?
Senior IT managers starting a new project often have to answer the question: build or buy? Meaning, should we look for a packaged solution that does mostly what we need, or should we embark on a custom software development project?
Coders and application-level programmers also face a similar problem when building a software product. To get some part of the functionality completed, should we use that framework we read about, or should we roll our own code? If we write our own code, we know we can get everything we need and nothing we don’t – but it could take a lot of time that we may not have. So, how do we decide?
Your project may (and probably does) vary, but I typically base my decision on distinguishing between infrastructure and business logic.
I consider code to be infrastructure-related if it’s related to the technology required to implement the product. On the other hand, business logic is core to the business problem being solved. It is the reason the product is being built.
Think of it this way: a completely non-technical Product Owner wouldn’t care how you solve an infrastructure issue, but would deeply care about how you implement business logic. It’s the easiest way to distinguish between the two types of problems.
Examples of infrastructure issues: do I use a relational or non-relational database? How important are ACID transactions? Which database will I use? Which transactional framework will I use?
Examples of business logic problems: how do I handle an order file sent by an external vendor if there’s an XML syntax error? How important is it to find a partial match for a record if an exact match cannot be found? How do you define partial?
Note that a business logic question could be technical in nature (XML syntax error) but how you choose to solve it is critical to the Product Owner. And a seemingly infrastructure-related question might constitute business logic – for example, if you are a database company building a new product.
After this long preamble, finally my advice: Strongly favor using existing frameworks to solve infrastructure problems, but prefer rolling your own code for business logic problems.
My rationale is simple: you are (or should be) expert in solving the business logic problems, but probably not the infrastructure problems.
If you’re working on a system to match names against a data warehouse of records, your team knows or can figure out all the details of what that involves, because that’s what the system is fundamentally all about. Your Product Owner has a product idea that includes market differentiators and intellectual property, making it very unlikely that an existing matching framework will fulfill all requirements. (If an existing framework does meet all the requirements, why is the product being developed at all?)
Secondly, the worst thing you can do as a developer is use an existing business logic framework “to make things simple”, find that it doesn’t handle your Product Owner’s requirements, and then start pushing back on requirements because “our technology platform doesn’t allow X or Y”. For any software developer with professional pride: I’m sorry, but that’s just weak sauce. Again, the whole point of the project is to build a unique product. If you can’t deliver that to the Product Owner, you’re not holding up your end of the bargain.
On the other hand, you are very likely not experts on transactional frameworks, message buses, XML parsing technology, or elastic cloud clusters. Oracle, Microsoft, Amazon, etc., have large expert teams and have put their own intellectual property into their products, making it highly unlikely you’ll be able to build infrastructure that works as reliably and is as bug free.
Sometimes the choice is harder. You need to validate a custom file format. Should you use an existing framework to handle validations or roll your own code? It depends. It may not even be possible to tell when the need arises. You may need to use an existing framework and see how easy it is to extend and adapt. Later, if you find you’re spending more time extending and adapting than rolling your own optimized code, you can change the implementation of your validation subsystem. Such big changes are much easier if you’ve consistently followed Agile engineering practices such as Test Driven Design.
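One way to keep that implementation decision cheap to reverse is to hide the choice behind a small seam that the rest of the system depends on. Here’s a sketch in C#; IOrderFileValidator and the class names are hypothetical, invented for this example.

using System.Collections.Generic;

// Callers (and their tests) depend only on this seam; whether a validation
// framework or hand-rolled code sits behind it can change later.
public interface IOrderFileValidator
{
    IReadOnlyList<string> Validate(string fileContents);
}

// First pass: adapt the existing framework behind the seam.
public class FrameworkBackedValidator : IOrderFileValidator
{
    public IReadOnlyList<string> Validate(string fileContents)
    {
        // Delegate to the third-party framework here (elided in this sketch).
        return new List<string>();
    }
}

// If extending the framework costs more than it saves, swap in your own
// optimized code without touching callers.
public class HandRolledValidator : IOrderFileValidator
{
    public IReadOnlyList<string> Validate(string fileContents)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(fileContents))
            errors.Add("File is empty.");
        return errors;
    }
}

With tests written against IOrderFileValidator, the change of heart described above becomes a contained rewrite rather than a project-wide one.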
As always, apply a fundamental Agile principle to any such decision: how can I spend my programming time generating the most business value?
You are embarking on a new software development project. Presumably, if it’s a Scrum project, a team is assembled, space and workstations for the team room are configured and the first sprint is right around the corner. The time has come for the initial gathering of project team members and stakeholders – the Project Kickoff. The Project Kickoff meeting can range from an hour to several days, and provides the opportunity for the project team, and any associated stakeholders, to come together and officially begin (i.e., ‘kick off’) a project. The ultimate goal is for everyone to leave this meeting on the same page, with a clear understanding of the project’s structure and goals.
One of the more common kickoff meeting agenda items for Scrum teams today is establishing the product vision, or the product vision statement. Many definitions and examples of product vision statements are available with a simple internet search; a solid summary of the product vision can be found in a 2009 member article written by Roman Pichler for the Scrum Alliance:
‘The product vision paints a picture of the future that draws people in. It describes who the customers are, what customers need, and how these needs will be met. It captures the essence of the product – the critical information we must know to develop and launch a winning product.’
Here is where I have to come clean – I personally never thought much about the importance or impact of product vision statements until recently. It seemed to me that on many development projects, the product vision would simply exist as a feel-good placeholder or as a feeble attempt to energize the team: “We’re going to build this application and SAVE THE WORLD!”
I felt that the product vision statement was a guise for what seemed like the customary objective of a project which – in an admittedly negative opinion – was to provide a solid return on someone’s investment. Bluntly stated: “If we are successful, someone I’ve never met is going to make a lot of money on this product.” I observed that as a project ensued and the team became preoccupied with day to day tasks, reality would eventually kick in. At a certain point, the vision – which we were so excited about several weeks ago – usually becomes an afterthought.
Eventually, a point is reached several sprints into a project where the team’s project vision statement is scribbled on a large post-it sheet, taped to the wall in the team room, collecting dust – never to be spoken of again. In the past, my observation was that by the time our project wrapped up, we wouldn’t always take the time to measure project success against our original product vision statement. (In fact, many team members were probably already working towards achieving a new product vision on a completely new project.) We didn’t always ask the questions: Did we accomplish our mission? Did we meet all of our objectives? If not, why? (Some of these topics would assuredly be discussed in a Project Retrospective-type meeting, but in today’s reality, that isn’t always the case.)
Fortunately, times have changed. Several recent and personal discoveries (through complete happenstance) have improved my outlook; you could now say that I have a ‘newfound respect’ for the product vision statement. This inspiration is a result of successfully delivering on several development projects in the education research field. CC Pace has had the fortunate opportunity to partner with the National Student Clearinghouse (NSC) on several of their software development initiatives since 2010. Our first project supported NSC’s Research Center, whose mission is defined as ‘providing educators and policymakers with accurate longitudinal data on student outcomes to enable informed decision making.’
In June of 2010, CC Pace began the journey with NSC to redesign the StudentTracker for High Schools application (STHS 2.0), which contributes to NSC’s aforementioned mission as ‘a unique program designed to help high schools and districts more accurately gauge the college success of their graduates’. We began the project with an informative and efficient Kickoff meeting and established our product vision statement. Truth be told, I didn’t put much thought into it. After all, I was working with a new client on a newly-formed team, and Sprint #1 was approaching fast.
With all of that in mind, the following is a high-level summary (i.e., not verbatim and with some information added for clarification) of our product vision statement for the StudentTracker for High Schools 2.0 project from June, 2010:
NSC’s business goal is to leverage its unique assets and capabilities to provide the secondary-postsecondary longitudinal information required to inform the secondary education system in its efforts to increase rates of college readiness.
Redesigning the STHS 2.0 application will enhance the capacity and scalability to provide integrated secondary-postsecondary education information – in a timely and efficient fashion – to the maximum number of secondary customers possible.
The objectives to meet this business goal are as follows:
- Enhance STHS reports to include more insightful and actionable data resulting in more valuable and accurate information available to secondary schools and districts
- Provide for a more efficient file management process, reducing the turnaround time for data collection, processing and report distribution
- Increase data collection and storage capacity, allowing for more robust reports
- Improve NSC matching algorithms to enhance data quality and add reliability
- Design, configure and implement a robust set of longitudinal reports stratified by school type, demographics, gender, academics and degree
So, there it was. Our product vision statement was posted on the wall for all to see. As anyone starting out on a new software development project can attest, I still had some questions: Where is all of this leading us? Will we succeed? Will this application – which we are completely redesigning from scratch – launch successfully a year from now?
Undoubtedly these were realistic questions and concerns. At the same time, however, I began to realize that I was working as a member of a team on a development project with a realistic, measurable and highly-motivational product vision statement. I thought at the time that if we truly achieve our vision and successfully implement STHS 2.0 in the next year, our product will have a profound impact on educational research and potentially improve the college success rates of millions of high school and college students for years to come.
Fast-forward to 2015 – I can proudly say that I have seen several firsthand accounts demonstrating that we did indeed achieve the product vision that we established several years ago. The intriguing element of this discovery is that I never personally set out to measure whether or not we achieved our vision as a result of successfully delivering STHS 2.0. After all, this was over five years ago and I have worked on many different projects in that time. Instead, I discovered the answer to that question completely by chance, and on more than one occasion. Five years later, I came to the realization that our team’s product vision had indeed become a reality, and it was a really great feeling.
Check back for a follow-up post for the recent chain of events validating that our project’s product vision statement truly became a reality – more than five years after it was established.
I used to attend Agile conferences pretty frequently, but at some point I got burned out on them, and the last one I attended was a 2007 conference in Washington, DC. This year, when the Agile Alliance conference returned to the DC region, I decided it was time to give them a try again.
It’s interesting to see how things have changed since I last attended an Agile conference. Agile 2015 felt much more stage-managed than in previous years, with its superhero party, the keynotes making at least glancing reference to it (the opening keynote, Awesome Superproblems, appears to have been retitled for the theme, since all the references in the presentation were to “wicked” problems instead of “super” problems), and attendees being routed through the vendor area to get to lunch. It also seemed like the presentations were mostly given by “experts”, whereas previously I felt there were more presentations by community members. I have mixed feelings about all of this, but on the whole I felt that my time was well spent. Although I didn’t really plan it, I seem to have had three themes in mind when I picked my sessions: team building, DevOps and craftsmanship. Today I’ll tell you about my experiences with the team-building sessions.
Two of the keynotes supported this theme: Jessie Shternshus’ Individuals, Interactions and Improvisation and James Tamm’s Want Better Collaboration? Don’t Be So Defensive. I’d heard of using the skills associated with improvisation to improve collaborative skills, but the Agile analogy seemed labored. Tamm’s presentation was much more interesting to me. I’m not sure he’s aware of the use of the pigs and chickens story in Scrum, but he started out with a story about chickens. Red zone and green zone chickens, to be precise. Apparently there are chickens (we’re outside of the Scrum metaphor here, incidentally) that become star egg layers by physically abusing other chickens to suppress their egg production. These were termed red zone chickens, while the friendly, cooperative chickens were termed green zone chickens. Tamm described a few unpleasant solutions (such as trimming the chickens’ beaks) that people had tried, and ended by describing an experiment in which the red zone chickens were segregated from the green zone chickens, with the result that the green zone chickens’ egg production went up 260% while the red zone chickens saw only their mortality rate go up (http://blog.pgi.com/2015/05/what-can-chickens-teach-us-about-collaboration/). Tamm then compared this to human endeavors, pointing out the signs that an organization might be in the red zone (low trust/high blame, threats and fear, and risk avoidance, for example) or in the green zone (high trust/low blame, mutual support and a sense of contribution, for example), while explaining that no organization is going to be wholly in either zone. He wound up by showing us ways to identify when we, as individuals, are moving into the red zone and how to try to avoid it. This was easily the most thought-provoking of the three keynotes, and I picked up a copy of Tamm’s book, Radical Collaboration, to further explore these ideas. The full presentation and slides are available at the Agile Alliance website (www.agilealliance.org).
In the normal sessions, I also attended Lyssa Adkins’ Coaching v. Mentoring, Jake Calabrese’s Benefiting from Conflict – Building Antifragile Relationships and Teams, and two presentations by Judith Mills: Can You Hear Me Now? Start Listening Instead and Emotional Intelligence in Leadership. Alas, it was only in hindsight that I realized that I’d read Adkins’ book. In this presentation, she engaged in actual coaching and mentoring sessions with two people she’d brought along specifically for the purpose. Unfortunately the sound in the room was poor, and I feel like I lost a fair amount of the nuance of the sessions; the one thing I came away with was that mentoring seems like coaching combined with the ability to provide more detailed information to the person being mentored.
Jake Calabrese turned out to be a dynamic and engaging speaker, and I enjoyed his presentation and found it useful – but that was before I went to Tamm’s keynote on collaboration. I did enjoy one of the exercises that Calabrese ran, though. After describing the four major “team toxins”: Stonewalling, Blaming, Defensiveness and Contempt, he had us take off our name badges, write down which toxin we were most prone to on a separate name badge, and go and introduce ourselves to other people in the room using that toxin as our name. Obviously this is not something you want to do in a room full of people who work together all the time, but it was useful to talk to other people about how they used these “toxins” to react to conflict. In the end, though, I felt that Calabrese’s toxins boil down to the signs of defensiveness that Tamm described, and that Tamm’s proposals for identifying signs of defensiveness in ourselves and trying to correct them are more likely to be useful than Calabrese’s idea of a “Team Alliance.”
The two presentations by Judith Mills that I attended were a mixed bag. I thought the presentation on listening was excellent, although there’s a certain irony in watching many of the other attendees checking their e-mail, being on Facebook, shopping, etc., while sitting in a presentation about listening (to be fair, there was probably less of that here than in other presentations). Mills started by describing the costs of not listening well and then went into an exercise designed to show how hard listening really is: one person would make three statements and their partner would then repeat the sentences with embellishments (unfortunately, the number of people trying this at once made it difficult to hear, never mind listen; the point was made, though). We then discussed active listening, the habits and filters that might prevent us from listening well, and how communication involves more than just the words we use. This was a worthwhile session, and my only disappointment was that we didn’t get to the different types of questions that one might use to promote communication and how they can be used.
Mills’ presentation on Emotional Intelligence in Leadership, on the other hand, was not what I anticipated. I went in expecting a discussion of EI, but the presentation was more about leadership styles and came across as yet another description of “new” leadership. It would probably be useful for people who haven’t experienced or heard about anything other than Taylorist scientific management, but I didn’t find anything particularly new or useful to my role in this presentation.
Notes from Agile 2015 Washington, D.C.
Having lived in the Washington DC area for over 25 years, I presumed that the audience at Agile 2015 would consist primarily of people working in the public sector, given our geographical proximity to a long list of federal agencies. It was not unreasonable to expect that the speakers might tailor their presentations and discussions to this type of audience. The audience actually turned out to be quite diverse, rendering my assumptions inaccurate. However, I could not help but feel somewhat validated after listening to the first keynote speaker. Indeed, the opening presentation by Luke Hohmann, entitled “Awesome Super Problems”, focused on tackling “wicked problems” such as budget deficits and environmental challenges. Wicked problems, as described by Mr. Hohmann, are not technical in nature and cannot be solved by small Agile teams of 6-8 people. These problems deal more with strategic decision-making that may result in long-term consequences, intended as well as unintended. They impact millions of people, and they require broad consent as well as governance. Hailing from San Jose, California, Mr. Hohmann discussed how applying Agile methodologies helped the city tackle some of its own “wicked problems”, such as a $100 million budget deficit.
Planning and Executing
Solving major problems such as budget shortfalls generally requires a great deal of collaboration between stakeholders with competing priorities. Mr. Hohmann stressed that the approach should favor collaboration over competition, or in Agile terminology, “customer collaboration over contract negotiation”. Easier said than done? Maybe… To facilitate this collaboration, Mr. Hohmann assembled a conference of public servants such as city planners, police and fire chiefs, and other community leaders. There were several discovery sessions where people could get answers to questions like how much money would be saved if the fire department removed one firefighter from each team, and what impact on safety that might entail. The group was broken down into small tables of no more than 8 people, each with one facilitator provided by Mr. Hohmann. Participants were presented with the list of major budget items and then played budget games in which they bid to get their high-priority budget items included in the next budget and negotiated cuts by trading those items. Afterwards the players held a retrospective and offered feedback.
Retrospective and Outcome
Feedback provided by the participants showed that competition was replaced by collaboration. Participants tended not to get into heated arguments because the games inherently encouraged compromise. Small groups helped cut down on distractions and side conversations. The participants also reported that the game was fair, since every player possessed equal bidding power. Interestingly, the final outcome was a surprising consensus over the budget items: the majority of participants actually ended up prioritizing their items very similarly once the competitive aspect was removed. The “democratic” aspect of the collaborative approach helped eliminate the animosity and partisanship that are not uncommon in, say, U.S. federal budget negotiations. This experiment seemed to yield the desired outcome of tackling the imbalanced budget and was touted as a success, attracting the attention of more San Jose residents.
Scaling
To tackle other public issues such as school overcrowding and water shortages, Mr. Hohmann attempted to repeat the process, but the number of participants had increased to the point where a large conference hall would have been needed. Rather than strain the budget by renting a giant hall, Mr. Hohmann and the local government set up an online forum that accepted a virtually unlimited number of participants, still assigning them to groups of about eight people and simply increasing the number of groups. The participants played other games such as Prune the Product Tree, which basically involves prioritizing the list of problems the public wants to tackle. The feedback was even more positive, as the majority of the participants actually preferred the online setting: they reported even fewer distractions, and the data was easier to collect and aggregate, giving participants an almost immediate view of how the game was progressing and how the priorities were moving.
Conclusion
The main takeaway I got from Mr. Hohmann’s presentation is the encouragement to be creative. Mr. Hohmann stressed the importance of focusing on what he described as “common ground for action”. The idea is to focus on generating a list, or backlog, of actionable items. The process or exercise used to get to the desired state can vary, and Agile methodologies can help folks get there, even when tackling wicked problems.
External Links:
http://www.innovationgames.com/budget-games-guide/
http://www.innovationgames.com/prune-the-product-tree/
Further reading:
http://conteneo.co/san-jose-residents-play-4th-annual-budget-games/
In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.
In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.
My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.
At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.
At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:
1. What did I do since the last stand-up?
2. What will I do today?
3. What is in my way?
Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.
You’re listening for:
- Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
- Whatever people say in response to (3)
Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?
Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way to building trust. Look for intransigence on either the developer team or with the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes the Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.
Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.
Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Master and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.
Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.
When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.
So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.
If you’re a Product Owner:
- Understand the value proposition not just of the entire system, but of each of its components and subsets.
- Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.
If you’re a developer:
- Adopt Agile Engineering techniques such as TDD, CI, CD, and so on. Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
- Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
- Be an active team member. Contribute to the success of the mission. Don’t just take orders.
If you’re a Scrum Master:
- Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
- Keep the team focused on the mission.
- If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.
I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.
In Part 1 of this blog series, I presented a high-level summary of the many different opportunities that Business Analysts (BA) can pursue in an Agile BA role, often resulting in new and exciting experiences. I also highlighted some of the differences between today’s “Agile BA” and the traditional “Waterfall BA”. Finally, I presented an interesting metaphor for today’s Agile BA, one of a Major League Baseball “utility player” (in this case, Jose Oquendo – a player who accomplished the rare feat of playing at every position during his 12-year MLB career.)
Part 2 of this blog entry focuses on the five key functional areas (aka “opportunities”) where I feel Agile BA’s can contribute or take outright ownership of a certain project task or responsibility. A key point to remember is that in order for these opportunities to present themselves, circumstances need to exist which ultimately depend on the dynamics of a project and the makeup of the particular team. Surely we don’t want to step on any toes or introduce team conflict. But if the opportunity is presented and a need has clearly been established, take the bull by the horns and run with these five opportunities:
- Project Management
- Product Management (aka the Product Backlog)
- Testing
- Documentation
- Collaboration (with Project Stakeholders and Team Members)
PROJECT MANAGEMENT – WORK WITH (OR AS) THE SCRUMMASTER
On today’s project teams, Agile BA’s are usually best-suited to provide project management/ScrumMaster support whenever the need arises. And let’s be real – with PM’s and ScrumMasters constantly being pulled in several different directions, the Agile BA can take on numerous responsibilities associated with this role.
In all likelihood, Agile BA’s have the experience necessary to handle many of the day-to-day responsibilities of a PM or ScrumMaster. The Agile BA can facilitate any of the recurring “events” as needed – the daily scrum (or “standup”), sprint planning, sprint review and sprint retrospective.
In many cases, the Agile BA is as close to (or sometimes even more engaged with) the project’s product backlog as the actual PM. This knowledge of the past, current and future state of the product backlog enables the Agile BA to assist with several project-related artifacts – for example, sprint and release burn-up/burn-down charts.
Successfully leading and delivering on many of these crucial project events and tasks not only contributes to the success of the team, but it also provides valuable on-the-job training. For Agile BA’s who want to eventually move into a PM or ScrumMaster role, this experience is invaluable.
THE “PROXY PRODUCT OWNER” – TODAY’S “PRODUCT OWNER” REALITY
Lately, it seems that a fully-engaged Product Owner is more of a luxury than a norm on today’s Agile projects. Agile BA’s can benefit from the potential subject-matter knowledge gained and added exposure by bridging this gap and acting as a “Proxy Product Owner”. In cases where a truly-dedicated Product Owner is not a reality, no one is better suited to step into this role than the Agile BA.
BA’s usually develop a solid rapport with the customer and can act as a liaison between the customer and the project team whenever needed. And as stated earlier, the Agile BA probably has the most experience working with the project’s user stories and backlog. On fast-moving development projects, many decisions are needed in real time, and waiting for answers from an absent Product Owner usually hinders the team’s progress.
TESTING – AFTER ALL, WHO KNOWS THE STORY BETTER THAN THE BA?
Many Agile teams have already moved to this model, but for teams which have not, here is another opportunity. As previously mentioned, BA’s handing their work over to testers ‘waterfall-style’ is an outdated and inefficient practice. I have seen that 2-3 fully engaged Agile BA’s can efficiently handle the workload of 2-3 full-time BA’s and 2-3 full-time testers. Instead of separating requirements and functional testing tasks for a particular piece of functionality (e.g. user story), Agile BA’s focus on the user story as a whole – from origination (story creation) through implementation (fully-tested, potentially shippable product.) This is not to say that other methods of specialized testing aren’t needed, but in many cases, the best person to drive a user story to “done” is the Agile BA.
Automated testing has also become an invaluable practice on software development projects and provides yet another opportunity for Agile BA’s to contribute in the testing arena. With working knowledge of the current state of the application and product backlog, Agile BA’s have the capability to define and develop a project’s ongoing automated testing suite. Depending on the testing tools employed by the team, BA’s can “pair” with a technical resource (e.g. java developer). In this scenario, the developer handles the technical components of the automated testing suite while the BA designs, builds and manages the suite from a functional perspective.
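As a rough sketch of how that pairing might divide the work – all of the class and method names below are hypothetical, invented purely for illustration – the developer builds a small driver that hides the technical plumbing, and the BA composes it into story-level checks that read like acceptance criteria:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class StoryAcceptanceTest {

    // Developer-built driver: in a real suite this would wrap Selenium,
    // an HTTP client, or similar plumbing. It is an in-memory stand-in
    // here so the example runs on its own.
    static class AppDriver {
        private final List<String> submittedOrders = new ArrayList<>();
        private String currentUser;

        void loginAs(String email) {
            this.currentUser = email;
        }

        void submitOrder(String item) {
            if (currentUser == null) {
                throw new IllegalStateException("not logged in");
            }
            submittedOrders.add(item);
        }

        List<String> orders() {
            return submittedOrders;
        }
    }

    private final AppDriver app = new AppDriver();

    // BA-facing layer: reads like the user story's acceptance criteria,
    // with the technical details hidden behind the driver.
    @Test
    void registeredUserCanSubmitAnOrder() {
        app.loginAs("user@example.org");
        app.submitOrder("Widget");
        assertTrue(app.orders().contains("Widget"));
    }
}
```

The division of labor is the point: the BA owns the functional vocabulary of the tests, while the developer owns whatever machinery sits behind it.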
COLLABORATION – BEFORE YOU KNOW IT, YOU’RE THE PROJECT’S “GO-TO” PERSON
I have included “collaboration” as an opportunity for Agile BA’s because collaboration, indeed, leads to opportunities. In my previous Waterfall experiences, BA’s didn’t collaborate much. In today’s Agile world, the Agile BA can really become a project’s “go-to” person. The collaboration piece also closely ties in with the previously mentioned PM/ScrumMaster and Product Owner opportunities.
For example, facilitating a sprint review or presenting a product demo provides invaluable experience and exposure. Sprint review meetings often include executives and/or stakeholders who otherwise do not participate in the project at all (basically, you see them once every two weeks). Leading these sessions provides direct communication with the “customer”, yielding valuable feedback that can be relayed back to the team. Since entire project teams rarely attend these sessions, the team will start looking to you to provide the kind of feedback we all value on Agile projects. Personally, I have always looked forward to returning to the team room and have always appreciated team members asking, “so, how did it go!?”, after each sprint demo. At the same time, I am always glad to be able to provide that feedback to the team, which we can use for future success.
DOCUMENTATION – CHANGE DOCUMENTATION FROM A TEDIOUS TASK TO A VALUABLE COMMODITY
Even in today’s Agile world, most software development projects require some essential “dirty work”. The BA role has certainly evolved, but we should not completely abandon our roots. While we’ve all heard repeatedly (sometimes to our detriment) that the Agile Manifesto preaches valuing working software over comprehensive documentation, certain documentation can be critical to the success of projects.
I have seen that, if applied effectively, this basic Agile tenet not only reduces redundant documentation, but helps teams focus on where documentation actually adds value to a project. Instead of documenting a requirement or process as part of an extensive list of deliverables promised six months ago (which will never be read or will become irrelevant), we document exactly what is needed, today. Most likely, the information and processes that need to be documented aren’t even known at the outset of the project. Because writing is a core skill possessed by most BA’s, Agile BA’s are well-positioned to accomplish many of the documentation deliverables needed over the duration of a project.
NOW, GET TO WORK!
You’ve just finished reading this blog entry and your new sprint starts next week. It’s not entirely unrealistic that you can begin working in each of the areas mentioned in this blog over the next two weeks (if you haven’t been already). Offer to facilitate your upcoming sprint planning or review session. Ask the PM if you can contribute to the upcoming metrics reports (e.g. sprint/release burndown). If you aren’t already, start testing user stories – start at a high level, ensuring that all acceptance criteria are met. Take a few hours, dive in and familiarize yourself with the product backlog. Offer to facilitate the sprint review and invite stakeholders who have become disengaged. And finally, document that process flow which has been taking up valuable whiteboard real estate for the last several weeks (and really needs to be erased)!
Before you know it, you’re pursuing five completely new opportunities in a matter of two weeks. It might be similar to trying out five completely new positions on a baseball diamond. More importantly, you may even finally be able to explain exactly why Jose Oquendo was such a valuable player to have on a Major League Baseball team.
If you’ve been following this series of blog posts about why so many Agile projects seem to deteriorate into waterfall, you know that I believe failure to completely close iterations (or sprints) is a major reason why. If tasks continually spill over from one iteration into the next, the system is never stable enough to demo, and without demos, the feedback loop between the developer team and the Product Owner is broken. Without a rapid feedback loop, Agile doesn’t exist. The project is just a waterfall project with weekly or biweekly status meetings.
What can developer teams do to ensure that iterations close with functioning software that allows the feedback loop to run smoothly?
The most important thing you as a developer can do is to embrace change, as the motto of the Extreme Programming (XP) movement says. Don’t accept change. Don’t tolerate change. Embrace it. Understand that change is the very basis of the evolutionary and incremental approach to software development that is at the heart of Agile methodologies.
What does this mean?
Well, consider that most useful business functionality in an application takes a long time to develop. A single complete use case may take a month or more. And as you consider all the use cases in the system, you begin to see patterns emerging. Perhaps use case 1 requires a way to validate and persist complex incoming data to a database. Use case 20 requires a way to report invalid data to a compliance authority. Use case 35 requires you to deal with the response from the compliance authority. So you begin to think about how you can avoid having to rework what you do for use case 1 to incorporate the later use cases.
This is the road to perdition.
If you want all these use cases to “come together” all at once into a grand spectacle of software delivery, you’ll find it increasingly difficult to show progress along the way. There are too many interlocking pieces that all need to work in order for the whole thing to work. This is how work spills over from one iteration to the next – because the chunks of work you’re taking on are too big.
Instead, embrace change.
Know that some of what you do in use case 1 will likely be reworked for use case 20. This is a good thing, not a bad thing – because it allows you to chunk up your work into smaller pieces that are functional but with limited feature sets.
You may ask: but doesn’t the cost of the rework add up and inflate the project cost?
Yes, a little, but if you diligently use Agile engineering techniques like Test Driven Development (TDD), Continuous Integration (CI), and so on, the cost of change will be greatly reduced, and ultimately the cost will be less than the cost of eliminating the feedback loop. Think of it this way: if you incrementally develop and constantly demo use case 1, the Product Owner may discover that use case 20 needs modification, and that use case 35 goes away. Now you’ve actually reduced cost by using an evolutionary approach. Eliminating the short feedback loop is one of the most expensive things you can do on a software delivery project.
As a member of a developer team, take the iteration planning meetings (IPMs) very seriously. Your focus should be on working with the Product Owner to break down use cases into user stories and tasks that you and your team can complete in an iteration. Keep repeating to yourself: fully functional but not necessarily fully featured. Accept that the feature set is going to grow and evolve, but always keep the system functional so that whatever limited feature set is implemented can be demoed and incorporated into the feedback loop.
For example…
So, let’s take the example of use case 1 above: Validate and persist complex incoming data into our database. Will this fit into a 2-week iteration? Most probably not. So start small. Let’s get some valid data into the database. This needs a couple of tables and some data persistence objects. Will that fit into 2 weeks? Yes, probably, with some time left over. OK, so let’s use the time left over to do 5 of the 120 total validations we will have to do. Decouple the validation from the persistence, because that makes it easier to do each piece – but it’s better software design anyway. How do we report the validation errors with no UI yet? How about writing them to a file? Sure, you’ll end up throwing away the file-writing code, but once again, you’re practicing better software design. You’ve separated performing the validations from reporting the validations. And you can immediately start demoing your validations to the Product Owner.
(Hint: don’t simply write directly from the validators to a file. Design a validation reporting interface. Implement the interface first as a file, and later as whatever it actually needs to be. Use mock testing frameworks to test-drive the design. This decoupling and abstraction is good design whether you’re using Agile or not.)
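To make the hint concrete, here is a minimal sketch in Java of what that decoupling might look like; the interface and class names are mine, purely for illustration, not from any particular project:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// The long-term abstraction: validators report errors through this
// interface and never know (or care) where the errors end up.
interface ValidationReporter {
    void report(String field, String message);
}

// The throw-away implementation: good enough to demo validations to the
// Product Owner before any UI exists.
class FileValidationReporter implements ValidationReporter {
    private final Path target;

    FileValidationReporter(Path target) {
        this.target = target;
    }

    @Override
    public void report(String field, String message) {
        try {
            Files.write(target, List.of(field + ": " + message),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

// A validator depends only on the interface, so replacing the file with
// a UI (or a compliance feed) later requires no change to the validators.
class RequiredFieldValidator {
    void validate(String field, String value, ValidationReporter reporter) {
        if (value == null || value.isBlank()) {
            reporter.report(field, "is required");
        }
    }
}
```

In a unit test, you would hand the validator a mock ValidationReporter and assert that the right errors were reported; the production reporter, whatever it turns out to be, implements the same interface later.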
Later, you will hook up the validations to the persistence. Then you will deal with how to handle invalid data. Then you will deal with showing validation errors via a UI. All the while, you have full functionality but not necessarily full feature sets. Each piece you develop is small, well-tested, and isolated. You then begin to interconnect them (don’t forget integration testing!) to deliver full feature sets.
[My colleague Robert Pantall wrote a blog post on how he used this technique of accepting controlled rework and incremental discovery to rewire his living room. It’s a fun read.]
Fully Functional, Not Necessarily Fully Featured
Keep in mind that you don’t get to unilaterally decide how the use cases get broken down into small chunks of functionality. You work with the Product Owner to do this, so that each small piece serves some business purpose. It’s a back and forth negotiation between you and the Product Owner.
Everything you do in an IPM should support this goal of evolutionary development that fits into the iteration. Be prepared to estimate quickly so you know whether work will fit. Don’t be afraid to say that work won’t fit and needs to be even more finely chunked down. Speak up. Have a dialog. Work with the product team. Keep the feedback loop running continuously.
After the IPM, when you’re working on the stories or tasks, you may run into unexpected trouble that lengthens the story completion time. Surface this immediately to the product team. You may need to break work down even more. That’s fine. Keep reminding yourself: fully functional, not necessarily fully featured.
I do want to emphasize something at this point. An evolutionary and incremental approach to software development is not a license to hack and slash. Saying you’re using incremental development does not give you an excuse to abandon architectural vision, best practices for design, continuous code improvement by refactoring, or coding standards. You must still design a system for its intended lifetime, following enterprise architecture best practices. Adopting incremental development only lets you make some short-term compromises for the benefit of keeping the software continuously functional. As the software evolves, it must evolve towards your architectural vision and continue to maintain its design and code integrity. As I showed above in the example, if you do this correctly, you’ll mostly be writing throw-away implementations to well-designed, long-term interfaces, and later building permanent production-ready implementations of the same interfaces.
I hope I’ve shown you that change is your friend if you truly adopt incremental delivery. It isn’t something to be feared or managed or avoided. Instead, it’s what makes it possible to develop complex business functionality while at the same time allowing the Product Owner to touch and use the software and to offer continuous feedback. Properly managed through correct Agile engineering techniques, the cost of the required rework fades into insignificance compared to the cost savings of the continuous feedback loop.
Don’t fear change. Embrace it.
Next episode, I’ll focus on what the Scrum Master or Project Manager can do to keep Agile projects from descending into waterfall.
As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”
In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.
Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. His .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.
People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo did something that only a handful of players have ever done in the long history of Major League Baseball – he played EVERY POSITION on the baseball diamond (all nine positions in total).
Oquendo was an average defensive player, and his value obviously wasn’t driven by his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s, because the unique quality he brought to his team is captured by a term from baseball lingo: “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)
Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.
Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.
A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”
Before we get into the opportunities afforded to BA’s in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.
I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving on to another project entirely, I was told several months down the road, “‘Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that many times I never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.
From a communications perspective, BA’s collaborated up-front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager once informed our team during a stressful phase of a project, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing that this directive seemed entirely normal to me at the time.
Communication with testers seemed even rarer – by the very definition, on a Waterfall project, I’ve already passed my knowledge on to the testers – it’s now their responsibility. I’m more or less out of the loop. By the time the specific requirements are being tested, I’m already off onto an entirely new project.
In my personal opinion the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and some days with little or no collaboration with other team members outside of standard team meetings became a day to day, week to week, month to month grind, with no end in sight.
AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”
Fast-forward several years (and several Agile project experiences), and I have found that the role of today’s Business Analyst has been significantly enhanced on teams practicing Agile methodologies, and more specifically, Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – Agile teams offer the Business Analyst possibilities that never seemed available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player”, my team’s Jose Oquendo.
The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.
Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span not only across functional areas of expertise (e.g. Project Management, Testing, etc.) but they also span over the entire lifetime of a software development project (i.e. Project Kickoff to Implementation). In this sense, Agile BA’s are not only more valuable to their respective teams, they are more valuable AND for a longer period of time – basically, the entire lifespan of a project. I have seen specifically that Agile BA’s can greatly enhance their impact on project teams and the quality of their projects in the following five areas:
- Project Management
- Product Management (aka the Product Backlog)
- Testing
- Documentation
- Collaboration (with Project Stakeholders and Team Members)
We’ll elaborate – in a follow-up blog entry – specifically how Agile BA’s can enhance their role and add value to a project by directly contributing to the five functional areas listed above.
I found Mike Cohn’s posting Don’t Blindly Follow very curious because it seems to contradict what many luminaries of the Agile community have said about starting out by strictly following the rules until you’ve really learned what you’re doing.
In one sense, I do agree with this sentiment of not blindly doing something. Indeed, when I was younger, I thought it was quite clever of me to say things like “The best practice is not to follow best practices.” But then I discovered the Dreyfus Model of Skills Acquisition, and that made me realize that there’s a more nuanced view. In a nutshell, the Dreyfus model says that we progress through different stages as we learn skills. In particular, when we start learning something, we begin by following context-free rules (a.k.a., best practices) and progress through situational awareness to “transcend reliance on rules, guidelines, and maxims.” This resonates with me since I recognize it as the way I learn things, and I can see it in others when they are serious about something. (To be fair, there are people that don’t seem to fit into this model, too, but I’m okay with a model that’s useful even if it doesn’t cover every possibility.) So, I would say that we should start out following the rules blindly until we have learned enough to recognize how to helpfully modify the rules that we’ve been following.
Cohn concludes with another curious statement: “No expert knows more about your company than you do.” Again, there’s a part of me that wants to agree with this, but then again… An outsider could well see things that an insider takes for granted and have perspectives that allow them to come to different conclusions from the information that you both share.
I find myself much more in sympathy with Ron Jeffries’ statements in The Increment and Culture: “Rather than change ourselves, and learn the new game, we changed Scrum. We changed it before we ever knew what it was.” This seems like it would offer a much better opportunity to really learn the basics before we start changing this to suit ourselves.
For the past 3 months we’ve had the pleasure of working with a charitable organization called the Ceca Foundation.
Ceca, which is derived from “celebrating caregivers”, was established in 2013 to celebrate caregiver excellence and “to promote high patient satisfaction by recognizing and rewarding outstanding caregivers”. They do this by providing employees of caregiving facilities with a platform for recognizing and nominating their peers for the Ceca Award – a cash reward given throughout the year. These facilities include rehabilitation centers, hospitals, assisted living centers and similar organizations.
CC Pace partnered with Ceca to build their next generation, customized nomination platform.
This was one of those projects that fills you with pride. First, for the obvious reason – Ceca’s worthwhile mission. Second, the not-so-obvious reason, which was the development process. It was a great example of why I enjoy helping customers build products.
The process
For various reasons, Ceca was under a tight deadline to get the new platform up-and-running for several facilities. The Agile process turned out to be a great fit, as it allowed for frequent customer feedback and weekly deployments to a testable environment. We developed the platform using high-level feature stories, rather than detailed specifications. This allowed the team to concentrate on the desired outcome, rather than getting caught up in the technical details. At times, we had to forgo a software-based solution in favor of a manual process. When you have limited resources and time, you have to make these types of decisions.
In February, after about 3 weeks, the Ceca Foundation launched the new web platform for one facility and then quickly brought on several more. There was immediate gratification for the team as we watched the nominations flood in.
The “feel good” story
What made this project successful and enjoyable at the same time? I’m reminded of the first value in the Agile manifesto – individuals and interactions over processes and tools. Some factors were technical but most were not:
- a motivated and enthusiastic customer (Ceca)
- a set of agreed upon features to provide the Minimum Viable Product
- frequent collaboration with the customer
- a cloud-hosted environment to provide infrastructure on-demand for testing and live versions
- a software-as-a-service model that allowed us to quickly bring on new facilities
For me, it was Agile at its most fundamental: discuss the desired features; provide a cost estimate for those features; negotiate priority with the customer; provide frequent releases of working software.
Check out the Ceca Foundation for more information. You can see a demo of the software under “Technology”.
Occasionally, as part of our strategic advisory service, I work with clients who don’t want custom application delivery from us, but rather want me to provide advice to their own Agile development teams. Many of them don’t need a lot of help, but perhaps the single issue I observe most often is that iteration (or sprint, in Scrum terminology) planning meetings (IPMs) don’t go well. Rather than being an interactive exchange of ideas and a negotiation between developers and product owners for the next iteration, I observe that the IPMs become 2-week status meetings that don’t accomplish much. The developer team doesn’t have much or anything to demo, there’s little feedback from the product owner, and everyone just routinely agrees to meet in two weeks to go through the same thing again.
One of the main reasons for these lackluster IPMs is the failure to close tasks at iteration boundaries. If the developer team can’t close tasks at iteration boundaries, then the product can’t be usefully demoed, which means the product owner can’t offer any feedback. This isn’t any form of Agile – it’s just waterfall with 2-week status meetings.
Failure to close tasks at iteration boundaries has other implications too, because what it’s telling you is that stories are too big, and stories that are too big have big consequences.
First, big stories are hard to estimate accurately. Think of estimates as sort of like weather forecasts: anything over 2-3 days is probably too inaccurate to use for planning. The smaller the story, the more accurate will be the estimate.
Second, big stories make it harder to change business priorities. That may seem like a non sequitur, but when developers are working on any story, the system is in an unstable, non-functioning state. To change direction, the developers have to bring the system to a stable state where it can be taken in a different direction. Those stable states are achieved when stories are completed and the system is ready to demo.
An analogy I like to use is to think of the system as a big truck proceeding down a controlled-access highway, like an American interstate. You can exit only at certain points. If you’re heading north and you realize you want to head east instead, you have to wait for the next exit to make that direction change – you can’t just immediately turn east and start driving through the underbrush. The farther apart the exits are, the farther you’re going to have to go out of your way before you can adjust. Think of each exit as being the close of a story. The closer together the exits (the smaller the stories), the sooner you’ll reach an exit (a system steady state) where you can change direction.
In this series of blog posts, I’m going to look at what it takes to ensure task closure at iteration boundaries. Each post will focus on a different team role, and how that role can help ensure that iterations end in an actual delivery of working software that can be demoed in an IPM. I’ll write about what product owners, developers, and project managers (or Scrum masters) can do to reduce story size, ensure product stability and functionality at iteration boundaries, and keep the system always ready to quickly change directions – the very definition of agility.
Watch this space.
I like to attempt minor DIY projects around the house because 1) it saves money, and 2) it’s enjoyable to solve technical issues that don’t involve staring at a computer. Recently, I decided to wire the living room with recessed lights. There is no existing light fixture in the ceiling, so I had to use power from a receptacle and wire in a new switch to control the lights.
I didn’t want to cut through the wall board, run wire and then find out that I didn’t know how to actually wire into the receptacle. So, I decided to approach the problem much like I do when faced with a daunting programming problem – by unit testing.
I pulled out the receptacle, examined the wiring, and scratched out a plan on a piece of paper. Then I wired a light switch and cheap single bulb fixture off of the receptacle. Luckily, it worked without major adjustments or shocks. And it entertained the kids for about 3 minutes.
I then was able to confidently cut through the wall and wire in the switch, solving one piece of the puzzle and allowing me to focus on installing the overhead lights.
As a design technique, Test-Driven Development (TDD) allows us to break down complex systems into smaller, more manageable chunks. Once you’ve written tests to satisfy a cohesive set of requirements, you commit the code and move on to the next set.
A good example of this can be found in the book, Practices of an Agile Developer (Subramaniam, Hunt). One of the practices is described as Attacking Problems in Isolation. As the authors explain:
“Large systems are complicated – many factors are involved in the way they execute. While working with the entire system, it’s hard to separate the details that have an effect on your particular problem from the ones that don’t.”
Isolating the problem is also useful when debugging a system issue that is buried under layers of UI, database and middle-tier abstractions. Remove each layer until you’ve discovered the likely culprit. Or build a simple prototype and isolate the misbehaving module.
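As a hypothetical sketch of that kind of isolation in Java (JUnit 5): suppose we suspect the pricing logic rather than the database. Substituting a stub for the persistence layer lets the test exercise the logic entirely on its own – the software equivalent of wiring up the cheap single-bulb fixture before cutting into the wall.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceServiceTest {

    // The layer we want to remove from the experiment.
    interface ProductRepository {
        double basePrice(String sku);
    }

    // The logic under suspicion, which depends only on the interface.
    static class PriceService {
        private final ProductRepository repository;

        PriceService(ProductRepository repository) {
            this.repository = repository;
        }

        double priceWithTax(String sku, double taxRate) {
            return repository.basePrice(sku) * (1 + taxRate);
        }
    }

    // With the database stubbed out by a lambda, any failure here points
    // squarely at the pricing logic.
    @Test
    void appliesTaxToBasePrice() {
        PriceService service = new PriceService(sku -> 100.0);
        assertEquals(105.0, service.priceWithTax("ABC-1", 0.05), 0.001);
    }
}
```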
It’s easy to get overwhelmed by a complex system when trying to decide where to begin. It may feel like a house of cards, teetering on the verge of collapse with the next interruption. Consider TDD not just as a test fixture, but as a design technique that helps narrow the scope.
And as for wiring a home – keep it simple and remember to cut off the power.
Ron Jeffries recently posted an article on writing code for “The Diamond Problem” using TDD in response to an article by Alistair Cockburn on Thinking before programming. Cockburn starts out:
“A long time ago, in a university far away, a few professors had the idea that they might teach programmers to think a bit before hitting the keycaps. Nearly a lost cause, of course, even though Edsger Dijkstra and David Gries championed the movement, but the progress they made was astonishing. They showed how to create programs (of a certain category) without error, by thinking about the properties of the problem and deriving the program as a small exercise in simple logic. The code produced by the Dijkstra-Gries approach is tight, fast and clear, about as good as you can get, for those types of problems.”
To which Ron responds:
“I looked at Alistair’s article and got two things out of it. First, I understood the problem immediately. It seemed interesting. Second, I got that Alistair was suggesting doing rather a lot of analysis before beginning to write tests. He seems to have done that because Seb reported that some of his Kata players went down a rat hole by choosing the wrong thing to test.
“Alistair calls upon the gods, Dijkstra and Gries, who both championed doing a lot of thinking before coding. Recall that these gentlemen were writing in an era where the biggest computer that had ever been built was less powerful than my telephone. And Dijkstra, in particular, often seemed to insist on knowing exactly what you were going to do before doing it.
“I read both these men’s books back in the day and they were truly great thinkers and developers. I learned a lot from them and followed their thoughts as best I could.”
Ron’s article goes on to solve the same programming problem with TDD and no particular up-front thinking that Cockburn solved in what he calls a combination of “the Dijkstra-Gries approach” and TDD. On the whole, I would tend more towards the pure TDD approach that Ron takes because it got him feedback earlier and more frequently, while Cockburn’s approach, with more upfront thinking, didn’t provide him any feedback until he really did start writing his tests. If Cockburn had gone down a blind alley with his thinking, he wouldn’t have gotten any concrete feedback on it until much later in the game.
But that’s not what I actually want to think about. I did read both Dijkstra’s A Discipline of Programming and Gries’ The Science of Programming “back in the day” as well (last read in 1991 and 2001, respectively, although it was a second reading of Gries; I remember finding Dijkstra almost impossible to understand, but I did keep it, so it may be time to try it again), but I didn’t remember the emphasis on up front thinking that both Ron & Cockburn seemed to claim for them. I dug out my copies of both books and did a quick flip through both of them, and I still feel that the emphasis is much more on proving the correctness of one’s code rather than doing a lot of up-front thinking. I’d previously had the feeling that there was a similarity between Gries’ proofs and doing TDD. As I poke around in chapter 13 of Gries’ book, where he introduces his methodology, I find myself believing it even more strongly.
Gries starts out asking “What is a proof?” His answer?
“A proof, according to Webster’s Third New International Dictionary, is ‘the cogency of evidence that compels belief by the mind of a truth or fact.’ It is an argument that convinces the reader of the truth of something.
“The definition of proof does not imply the need for formalism or mathematics. Indeed, programmers try to prove their programs correct in this sense of proof, for they certainly try to present evidence that compels their own belief. Unfortunately, most programmers are not adept at this, as can be seen by looking at how much time is spent debugging. The programmer must indeed feel frustrated at the lack of mastery of the subject!”
Doesn’t TDD provide that for us, at least when practiced correctly? Oh, and the first principle that Gries gives in this chapter is: “A program and its proof should be developed hand-in-hand, with the proof usually leading the way.” Hmm, sounds familiar, no?
Admittedly, Gries does speak out against what he calls “test-case analysis:”
“‘Development by test case’ works as follows. Based on a few examples of what the program is to do, a program is developed. More test cases are then exhibited – and perhaps run – and the program is modified to take the results into account. This process continues, with program modification at each step, until it is believed that enough test cases have been checked.”
On the face of it, this does sound like a condemnation of TDD, but does it really represent what we do when we really practice TDD? Sort of, but it overlooks the critical questions of how we choose the test cases and the speed at which we can get feedback from them. If we’re talking randomly picking a bunch of test cases and getting feedback from them in a matter of days or hours, then I’d agree that it would be a poor way to develop software. When we’re practicing TDD, though, we should be looking for that next simplest test case that helps us think about what we’re doing. Let’s turn to Gries’ “Coffee Can Problem” as an example.
“A coffee can contains some black beans and some white beans. The following process is to be repeated as long as possible.
“Randomly select two beans from the can. If they have the same color, throw them out, but put another black bean in. (Enough extra black beans are available to do this.) If they are different colors, place the white one back into the can and throw the black one away.
“Execution of this process reduces the number of beans in the can by one. Repetition of the process must terminate with exactly one bean in the can, for then two beans cannot be selected. The question is: what, if anything, can be said about the color of the final bean based on the number of white beans and the number of black beans initially in the can?”
Gries suggests we take ten minutes on the problem and then goes on to claim that “[i]t doesn’t help much to try test cases!” But the test cases he enumerates are not the ones we’d likely try were we trying to solve this with TDD. He suggests test cases for a black bean and a white bean to start with and then two black beans. Doing TDD, we’d probably start with a single bean in the can. That’s really the simplest case. What’s the color of the final bean in the can if I start with only a single black bean? Well, it’s black. And it’s going to be white if the only bean in the can is white. Okay, what happens if I start with two black beans? I should end up with a black bean. Two white beans wouldn’t make me change my code, so let’s try starting with a black and a white bean. Ah, I would end up with a white bean in that case. Can I draw any conclusions from this?
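To make that concrete, here’s a minimal sketch in Python of where those cases lead. Everything here is my own illustration, not code from Gries or Ron; `final_bean_color` is a made-up name, and the implementation encodes the conclusion the four cases point toward:

```python
def final_bean_color(black: int, white: int) -> str:
    """Color of the last bean left, given the starting counts."""
    # Every move preserves the parity of the white-bean count,
    # so the survivor is white exactly when we start with an odd
    # number of white beans.
    return "white" if white % 2 == 1 else "black"

# The tests, in roughly the order a TDD session might produce them:
assert final_bean_color(black=1, white=0) == "black"  # one black bean
assert final_bean_color(black=0, white=1) == "white"  # one white bean
assert final_bean_color(black=2, white=0) == "black"  # two blacks collapse to one
assert final_bean_color(black=1, white=1) == "white"  # mixed pair: the white survives
```

The point isn’t the one-line implementation; it’s that each test was cheap to state and gave immediate feedback on the hypothesis of the moment.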
I did actually think about those test cases before I read Gries’ description of his process:
“Perhaps there is a simple property of the beans in the can that remains true as the beans are removed and that, together with the fact that only one bean remains, can give the answer. Since the property will always be true, we will call it an invariant. Well, suppose upon termination there is one black bean and no white beans. What property is true upon termination, which could generalize, perhaps, to be our invariant? One is an odd number, so perhaps the oddness of the number of black beans remains true. No, this is not the case; in fact, the number of black beans changes from even to odd or odd to even with each move.”
There’s more to it, but this is enough to make me wonder whether it’s really different from writing a test case. Actually, it is: he’s reasoning about the problem and, by extension, the code he’d write. But he is still testing his hypotheses; it’s just in his head rather than in code. And there I would suggest that TDD, as opposed to using randomly selected test cases, lets us do that same kind of reasoning with working code and extremely rapid feedback. (To be fair, I believe this is what Ron was saying, too; I just want to highlight the similarity to what Gries was saying, where Ron seems to suggest more of a difference.)
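To show what I mean by rapid feedback, here’s one way to turn Gries’ invariant hunt into an executable check. Again, this is my own sketch, not anything from Gries or Ron; `run_coffee_can` and the seeded random draw are inventions for illustration. It simulates the process and asserts the candidate invariant on every single step:

```python
import random

def run_coffee_can(black: int, white: int, seed: int = 0) -> str:
    """Simulate Gries' bean process, checking a candidate invariant each step."""
    rng = random.Random(seed)
    start_parity = white % 2
    while black + white > 1:
        beans = ["black"] * black + ["white"] * white
        a, b = rng.sample(beans, 2)  # draw two beans at random
        if a == b:
            # Same color: throw both out, put one black back in.
            if a == "white":
                white -= 2
            else:
                black -= 2
            black += 1
        else:
            # Different colors: white goes back, black is thrown away.
            black -= 1
        # Hypothesis under test: the parity of the white count never changes.
        assert white % 2 == start_parity
    return "white" if white else "black"

# A quick sweep over small starting mixtures gives feedback in milliseconds.
for b in range(5):
    for w in range(5):
        if b + w >= 1:
            expected = "white" if w % 2 else "black"
            assert run_coffee_can(b, w) == expected
print("invariant held on every step of every run")
```

The hypothesis Gries tested in his head becomes an assertion that either survives thousands of moves or fails loudly at the exact step where the reasoning broke down.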
What might get lost in TDD, at least when it’s not practiced well, is that idea of reasoning about the code. There’s an art to picking the next simplest test to write, and I suspect that’s where much of the reasoning really happens. If we write too much code in response to a single test, we lose some of the reasoning. If we write our tests after the code, we’ve probably lost it entirely. And that’s something I do believe is lacking in many programmers today, evidenced, as Gries suggests, by the amount of time spent fumbling around in debuggers and randomly adding code “to see what will happen.” But that’s for another time.