An Agile Coach’s Guide to Managing Pride, Ego and Personal Growth

As a ScrumMaster and Agile Coach, one of my favorite tools to use is Mural. Mural is a digital whiteboard designed for engaging collaboration. It’s a creative outlet for me; it’s like scrapbooking for a job. I like to think that I’m pretty good at using the application. If I’m being honest, I think I’m really good at using Mural. I usually get compliments on the creativity, layout, and functionality of my Mural boards. I pride myself on being knowledgeable and helpful in Mural, which is part of my Personal Brand.

Recently I was in a Thursday ScrumMaster Community of Practice (CoP) where we were discussing Mural, and I shared some of my examples. An Agile Coach said, “Tracy – if you want to help me dress this one up, I’d be happy to see what you can do!” I felt honored and excited, especially since I had been consulting at the company for less than two months. I had the drive and desire to do my best work and build a positive reputation for myself as an informed, helpful, and creative resource.

That night I reworked his Mural. I created a Lego theme focused on Building Foundations Together, with colored Lego brick backgrounds for different topics. I was really excited about what I had made and shared it first thing in the morning, expecting positive feedback. The instant gratification! However, the Agile Coach was out of the office that Friday.

On Monday, I received his response: “You were using Areas and Shapes inconsistently.” Wait, what?!? It wasn’t the reaction I was expecting, and a meeting was scheduled. I didn’t even know what he was talking about with “Areas”. My ego definitely took a hit.

I prepared myself for a meeting with the Agile Coach. I felt like a student in detention – like I was in trouble. I expected him to be upset with me because, after all, I had ruined the functionality of his Mural.

Instead of a meeting, it was a working session.  I learned about functionality I never knew existed.  He showed me where, when, and how to use Mural Areas.  He was not upset at all in the working session; he knew I had the best intentions.

We learned that even though we’re both passionate and knowledgeable about Agile, we could still fall back on old patterns quickly. We needed to align first on what we were trying to accomplish. We also brainstormed from different personas, which generated a variety of solutions. Those perspectives made the Mural and its instructions more valuable and understandable.

I was humbled by this experience.

I can be, and still am, proud of the work I did. 

If I hadn’t shifted from defensiveness to a Growth Mindset, I wouldn’t have gotten the gift of learning something new.

If I had gone into the meeting with negativity, I wouldn’t have seen the Coaching Demo that happened right in front of my eyes.

In my reflection, I realized I needed to put my ego aside to grow.

Here’s how I did it – and how you can too:

    1. Change to a Growth Mindset.  Embrace challenges and learn from feedback.
    2. Think of every opportunity as a coachable moment.  Imagine what your day would look like if you went into every situation thinking, “What am I going to learn here?”.  How would the atmosphere and culture change?
    3. Give grace. Assume everyone was doing the best they could at the time – including yourself.
    4. Have gratitude. Gratitude greatly reduces negativity and anxiety. It shifts your focus from yourself to others. When we have gratitude, what we appreciate grows.
    5. Pay it forward. Paying it forward is the greatest compliment you can give your mentor. Don’t just share the information you received; pass on the learning environment too. Create a space of psychological safety, patience, and understanding.

In conclusion, I still pride myself on being knowledgeable and helpful in Mural. I’m practicing a Growth Mindset and trying to embrace every situation as a coachable moment. I’m grateful for being mentored and, in return, aim to become a mentor. But what I really learned from the experience and reflection was:

You can have pride in your career, but to continue your growth journey you need to practice removing the ego. 

Another great part of this story is that the mentor/mentee relationship didn’t end at that working session. We continue to collaborate, ask for opinions, and share knowledge. We continue to learn together. Now that’s what I call Building Foundations Together.

Today’s business leaders find themselves navigating a world in which artificial intelligence (AI) plays an increasingly pivotal role. Among the various types of AI, generative AI – the kind that can produce novel content – has been a game changer. One such example of generative AI is OpenAI’s ChatGPT. Though it’s a powerful tool with significant business applications, it’s also essential to understand its limitations and potential pitfalls.

1. What are Generative AI and ChatGPT?
Generative AI, a subset of AI, is designed to create new content. It can generate human-like text, compose music, create artwork, and even design software. This is achieved by training on vast amounts of data, learning patterns, structures, and features, and then producing novel outputs based on what it has learned.

In the realm of generative AI, ChatGPT stands out as a leading model. Developed by OpenAI, GPT, or Generative Pre-trained Transformer, uses machine learning to produce human-like text. By training on extensive amounts of data from the internet, ChatGPT can generate intelligent and coherent responses to text prompts.

Whether it’s crafting detailed emails, writing engaging articles, or offering customer service solutions, ChatGPT’s potential applications are vast. However, the technology is not without its drawbacks, which we’ll delve into shortly.
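
To make this concrete, here is a minimal sketch of how a routine task – drafting first-pass customer support replies – might be automated. It assumes the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the model name, prompts, and support scenario are illustrative assumptions, and every draft should still pass through human review:

    # Minimal sketch: drafting a first-pass support reply with the OpenAI API.
    # Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
    # the model name and prompts are illustrative assumptions, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_reply(customer_message: str) -> str:
        """Return a draft reply for a human agent to review before sending."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You are a courteous customer-support assistant."},
                {"role": "user", "content": customer_message},
            ],
            temperature=0.3,  # lower temperature keeps drafts conservative
        )
        return response.choices[0].message.content

    print(draft_reply("My invoice from last month looks wrong - can you help?"))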

2. Strategic Considerations for Business Leaders
Adopting a generative AI model like ChatGPT in your business can offer numerous benefits, but the key lies in understanding how best to leverage these tools. Here are some areas to consider:

  • 2.1. Efficiency and Cost Savings
    Generative AI models like ChatGPT can automate many routine tasks. For example, they can provide first-level customer support, draft emails, or generate content for blogs and social media. Automating these tasks can lead to considerable time savings, freeing your team to focus on more strategic, creative tasks. This not only enhances productivity but could also lead to significant cost savings.
  • 2.2. Scalability
    One of the biggest advantages of generative AI models is their scalability. They can handle numerous tasks simultaneously, without tiring or requiring breaks. For businesses looking to scale, generative AI can provide a solution that doesn’t involve a proportional increase in costs or resources. Moreover, the fact that models like ChatGPT are continually improved over time makes them a sustainable solution for long-term growth.
  • 2.3. Customization and Personalization
    In today’s customer-centric market, personalization is key. Generative AI can create content tailored to individual user preferences, enhancing personalization in your services or products. Whether it’s customizing email responses or offering personalized product recommendations, ChatGPT can drive customer engagement and satisfaction to new heights.
  • 2.4. Innovation
    Generative AI is not just about automating tasks; it can also stimulate innovation. It can help in brainstorming sessions by generating fresh ideas and concepts, assist in product development by creating new design ideas, and support marketing strategies by providing novel content ideas. Leveraging the innovative potential of generative AI could be a game-changer in your business strategy.

3. The Pitfalls of Generative AI
While the benefits of generative AI are clear, it’s essential to be aware of its potential drawbacks and pitfalls:

  • 3.1. Data Dependence and Quality
    Generative AI models learn from the data they’re trained on. This means the quality of their output is directly dependent on the quality of their training data. If the input data is biased, inaccurate, or unrepresentative, the output will likely be flawed as well. This necessitates rigorous data selection and cleaning processes to ensure high-quality outputs.
    Employing strategies like AI auditing and fairness metrics can help detect and mitigate data bias and improve the quality of AI outputs.
  • 3.2. Hallucination
    Generative AI models can sometimes produce outputs that appear sensible but are completely invented or unrelated to the input – a phenomenon known as “hallucination”. There are numerous examples in the press of false statements made by these models, ranging from the funny (claiming that someone ‘walked’ across the English Channel) to the somewhat frightening (claiming someone committed a crime when, in fact, they did not). This can be particularly problematic in contexts where accuracy is paramount. For example, if a generative model hallucinates while generating a financial report, it could lead to serious misinterpretations and errors. It’s crucial to have safeguards and checks in place to mitigate such risks.
    Implementing robust quality checks and validation procedures can help. For instance, combining the capabilities of generative AI with verification systems, or cross-checking the AI outputs with trusted data sources, can significantly reduce the risk of hallucination. (A toy sketch of this cross-checking idea appears after this list.)
  • 3.3. Ethical Considerations
    The ability of generative AI models to create human-like text can lead to ethical dilemmas. For instance, they could be used to generate deepfake content or misinformation. Businesses must ensure that their use of AI is responsible, transparent, and aligned with ethical guidelines and societal norms.
    Regular ethics training for your team, and keeping lines of communication open for ethical concerns or dilemmas, can help instill a culture of responsible AI usage.
  • 3.4. Regulatory Compliance
    As AI becomes increasingly pervasive, regulatory bodies worldwide are developing frameworks to govern its use. Businesses must stay updated on these regulations to ensure compliance. This is especially important in sectors like healthcare and finance, where data privacy is paramount. Not adhering to these regulations can lead to hefty penalties and reputational damage.
    Keep up-to-date with the latest changes in AI-related laws, especially in areas like data privacy and protection. Consider consulting with legal experts specializing in AI and data to ensure your practices align with regulatory requirements.
  • 3.5. AI Transparency and Explainability
    Generative AI models, including ChatGPT, often function as a ‘black box’, with their internal workings being complex and difficult to interpret.
    Enhancing AI transparency and explainability is key to gaining trust and mitigating risks. This could involve using techniques that make AI decisions more understandable to humans or adopting models that provide an explanation for their outputs.
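
Returning to the hallucination risk in 3.2, here is a toy sketch of cross-checking a model-generated claim against a trusted source before publishing. The fact store, field names, and tolerance are hypothetical stand-ins for whatever verified system of record (ERP, data warehouse) your business relies on:

    # Toy sketch: validate a model-generated numeric claim against a trusted
    # source before publishing. TRUSTED_FACTS is a hypothetical stand-in for
    # a verified system of record.
    TRUSTED_FACTS = {"q3_revenue_usd": 1_250_000}

    def verify_numeric_claim(field: str, claimed: float, tolerance: float = 0.01) -> bool:
        """Accept the claim only if it matches the trusted value within tolerance."""
        trusted = TRUSTED_FACTS.get(field)
        if trusted is None:
            return False  # unknown field: treat as unverified and route to a human
        return abs(claimed - trusted) / trusted <= tolerance

    # A generated report that hallucinates Q3 revenue of $1.9M gets flagged:
    print(verify_numeric_claim("q3_revenue_usd", 1_900_000))  # -> False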

4. Navigating the Generative AI Landscape: A Step-by-Step Approach
As generative AI continues to evolve and redefine business operations, it is essential for business leaders to strategically navigate this landscape. Here’s an in-depth look at how you can approach this:

  • 4.1. Encourage Continuous Learning
    The first step in leveraging the power of AI in your business is building a culture of continuous learning. Encourage your team to deepen their understanding of AI, its applications, and its implications. You can do this by organizing workshops, sharing learning resources, or even bringing in an AI expert (like myself) to educate your team on the best ways to leverage the potential of AI. The more knowledgeable your team is about AI, the better equipped they will be to harness its potential.
  • 4.2. Identify Opportunities for AI Integration
    Next, identify the areas in your business where generative AI can be most beneficial. Start by looking at routine, repetitive tasks that could be automated, freeing up your team’s time for more strategic work. Also, consider where personalization could enhance the customer experience – from marketing and sales to customer service. Finally, think about how generative AI can support innovation, whether in product development, strategy formulation, or creative brainstorming.
  • 4.3. Develop Ethical and Responsible Use Guidelines
    As you integrate AI into your operations, it’s essential to create guidelines for its ethical and responsible use. These should cover areas such as data privacy, accuracy of information, and prevention of misuse. Having a clear AI ethics policy not only helps prevent potential pitfalls but also builds trust with your customers and stakeholders.
  • 4.4. Stay Abreast of AI Developments
    In the fast-paced world of AI, new developments, trends, and breakthroughs are constantly emerging. Make it a point to stay updated on these advancements. Subscribe to AI newsletters, follow relevant publications, and participate in AI-focused forums or conferences. This will help you keep your business at the cutting edge of AI technology.
  • 4.5. Consult Experts
    AI implementation is a significant step and involves complexities that require expert knowledge. Don’t hesitate to seek expert advice at different stages of your AI journey, from understanding the technology to integrating it into your operations. An AI consultant or specialist can help you avoid common pitfalls, maximize the benefits of AI, and ensure that your AI strategy aligns with your overall business goals.
  • 4.6. Prepare for Change Management
    Introducing AI into your operations can lead to significant changes in workflows and job roles. This calls for effective change management. Prepare your team for these changes through clear communication, training, and support. Help them understand how AI will impact their work and how they can upskill to stay relevant in an AI-driven workplace.
In short, navigating the generative AI landscape requires a strategic, well-thought-out approach. By fostering a culture of learning, identifying the right opportunities, setting ethical guidelines, staying updated, consulting experts, and managing change effectively, you can harness the power of AI to drive your business forward.

5. Conclusion: The Promise and Prudence of Generative AI
Generative AI like ChatGPT carries immense potential to revolutionize business operations, from streamlining mundane tasks to sparking creative innovation. However, as with all powerful tools, its use requires a measured approach. Understanding its limitations, such as data dependency, hallucination, and ethical and regulatory challenges, is as important as recognizing its capabilities.

As a business leader, balancing the promise of generative AI with a sense of prudence will be key to leveraging its benefits effectively. In this exciting era of AI-driven transformation, it’s crucial to navigate the landscape with a keen sense of understanding, responsibility, and strategic foresight.

If you have questions or want to identify ways to enhance your organization’s AI capabilities, I’m happy to chat. Feel free to reach out to me at jfuqua@ccpace.com or connect with me on LinkedIn.

Part I of a two-part series

Effective communication remains at the very heart of team efficiency. Entire business models are based on improving team communications – look no further than Salesforce’s acquisition of Slack. Microsoft Teams, Basecamp, SharePoint, Zoom, Webex, instant messaging, and text are just a few of the frequently used communication tools in the workplace. Meeting facilitation is now a ‘service offering’ – take a moment and search LinkedIn… perhaps you’ll get more than the 7,700 results that I returned!

Yet here we are in this post-COVID, hybrid/remote work environment, and one of the most effective and proven communication channels has been cast aside. COVID moved most technology work into the home more or less permanently, and overscheduled calendars proliferated; thanks were given to higher powers when a meeting ended gracefully at 50 minutes past the hour, allowing a few minutes to check on the kids, feed the whining dog, or run to the bathroom – really, anything other than staring at an array of faces staring back at their screens. This also bred the feeling that, with schedules so solidly stacked, the last thing that made sense was simply to call someone. It felt intrusive and presumptuous.

While COVID remains in circulation, the work world is transitioning, with some firms fully remote forever, others hybrid with overlap-day mandates, and others fully back to pre-pandemic norms. At CC Pace, we remain convinced that flexibility is critical to our employees’ success, but we are also strong believers that face-to-face time is time well spent. I’m reminded of my first job in technology, working for one of the Big 4 firms; at my first project, if I hadn’t had the opportunity to strategically place myself at a senior developer’s desk as his day started, to ask a few questions and get a few answers, I’d have been sunk. The informal communication channel is critical – and I couldn’t imagine posting my question to him on Teams (even if it had existed back then), nor would he have answered! So how can that experience be replicated within the modern hybrid/remote work environment?

Virtual stand-ups remain useful. Pair programming has its place. Yet we see questions left overnight out of a desire not to bother a fellow developer. One of the most basic behaviors our Agile coaches are pressing for is to get developers to ‘pick up the phone and call’. This deference has a huge cost in team productivity as “stuck developer time” ticks away to no effect.

This is Sachin, picking up from the perspective of a Sr. Scrum Master and Agile coach. I have seen a shift in mindset, with team members content to complete only their part of a user story. It’s more of a “throw it over the wall” kind of mindset. I usually relate it to a very popular British party game we used to play as kids: “Pass the Parcel.” The parcel, which is the user story, is passed from one person to another until the music stops, which is when the sprint ends. The result is an incomplete story that just moves on from one sprint to another. The concerning part is that team members are not willing to take responsibility for a user story. A simple question, such as “Can you finish the user story in a sprint?”, goes unanswered. Take, for example, a backlog refinement session: this session is productive when team members communicate and ask questions. But when team members see these sessions as a waste of time and just want to get through them, assumptions are made without proper analysis, leading to improper sizing and stories remaining “not done”. To some extent, teams and programs have started to blame the very process for missing a deadline.

Effective interactions remain paramount for Agile teams. Text, email, and other one-way transmissions are efficient, but they are not always effective.

In Part II of this series, Mike Wittrup will talk about a framework that CC Pace uses to assess and improve team communication protocols in the co-working world that we all live in.

In the meantime, we remain at your service in providing tailored approaches to driving business agility.  For over 20 years we’ve earned clients’ trust across all stages of the Agile journey. Give us a call!

In early 2022, my wife and I noticed a slight drip in the kitchen sink that progressively worsened. We quickly discovered that if we adjusted the lever just right, the drip would subside for a while. Right before the holidays, we had an issue with our water heater that wasn’t so livable, so we called an expert. After the plumber finished up with our hot water tank, we asked him to check our sink. In 5 minutes, he fixed the problem that had inconvenienced us for months! We had become so used to our workaround that we didn’t even realize how much time we spent in frustration trying to get the faucet handle just right (let alone double checking it all hours of the day and night) when we could have resolved the issue in just a few minutes of effort from an expert! 

Sometimes, our workarounds are only efficient in our minds. They may save us time or a few dollars upfront, but do they save us anything in the long run? In my case, the answer was a resounding ‘no’. 

In our professional lives, we’re trained to seek out our own solutions for the ‘leaky faucets’ and typically only bring in an expert when we encounter a major problem that we can’t live with. For credit unions, the ‘water heater’ problems are typically in the front office, as they live and die by the member experience. Having the right technology and interface (along with a myriad of other things) tends to be the core focus when it comes to investing in process and technology improvements.

Where are the ‘leaky faucets’ usually located? The back office. Temporary workarounds are created and then become standard practice; these workarounds are largely unknown to the broader organization, time-consuming, and surprisingly easy to correct. They add up and lead to frustrations and inefficiencies, just like my family experienced with our sink. I recently sat down with Mike Lawson of CU Broadcast and John Wyatt, CIO of Apple Federal Credit Union, to discuss the value of a back-office assessment. 

You can check out a clip of that conversation here:

To see the entire segment, click here (just scroll down to the bottom of the page). I enjoyed speaking with Mike and John and encourage everyone in the credit union arena to subscribe to CU Broadcast if you haven’t already – it’s a great show, and one I’ve enjoyed for years. 

So, when is the last time you checked your faucets? We’d love to hear from you – reach out to me if you have questions or want to learn more about maximizing your back-office efficiency.  

With the World Cup taking over the headlines, we couldn’t miss an opportunity to bring two of our favorite topics at CC Pace together: sports and Agile. As Team USA gears up to take on the Netherlands, here’s a little history on the unique style of soccer the Dutch created and what Agile teams can learn from their success.

In the 1970s, the Dutch dominated their international counterparts by using a style of soccer they called totaalvoetbal, or total football. Total football requires each player on the team to be comfortable and adept enough to switch positions with any other player on the field at any time. The Dutch required the goalkeeper to remain in a fixed position, but everyone else was fluid, able to become an attacker, defender, or midfielder as the play dictated. Whenever a player moved out of position, another player replaced him, and all the other players shifted to maintain the team’s structure. In modern soccer, we call this collective team behavior compensatory movement: all teammates compensate and adjust to each other’s actions.

This philosophy helped create teams without points of weakness that their opponents could exploit.

Totaalvoetbal only worked because players trained to develop the skills needed to play all positions. Each player was a specialist in a certain position or role, such as striker or central defender, but was also quite competent playing other roles on the team.

In the Agile world, this can be applied to the makeup of Scrum teams. Scrum teams that are self-sufficient because of their fluidity tend to be the most productive and dependable. If a Scrum team is made up of members with “T-shaped” skills, there will always be someone who can fill in for others when needed.

People with T-shaped skills have a deep level of skill and expertise in one area and a lower level of expertise across many other areas. When scrum teams are comprised of team members with T-shaped skills, it helps to ensure that all work can be completed within the team. It also means that productivity is less likely to drop when a team member is out of the office because others can roll up their sleeves and help get the job done.

Cross-training and pair programming are great ways to help develop team members with T-shaped skills. Pair programming is an Agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed. The two programmers switch roles frequently.

T-shaped skills are not developed by accident but rather intentionally. Careful planning and a thoughtful, proactive approach by the individual and their manager are crucial. A manager must understand the value of investing in the development of people. Cross-training, stretch assignments, training opportunities, shadowing, and pair programming are all excellent methods for developing additional skills that allow for compensatory movement and fluid teams, yet, in some ways, represent short-term reductions in individual productivity. Managers must make this short-term investment to see the long-term value and score more goals.

Does your Agile Transformation feel like it is stuck in the mud? Maybe it is facing one of the many challenges organizations must navigate as part of their transformation.

According to the 15th State of Agile Report, the top 10 Agile challenges (Digital.ai, 2021) are:

[Top 10 Agile challenges list, reproduced from the 15th State of Agile Report]

The impact of each of these can be detrimental to any Agile Transformation. The report suggests that some organizations face many of these challenges at once, and it notes that these results have been unchanged over several years. As we look across the top 10, we find most of them relate to the organization’s culture, which means that improving a team’s specific use of an Agile method like Scrum is less important than improving the overall Agile culture of the organization.

For a different view into why Agile transformations are challenged, the McKinsey article How to mess up your agile transformation in seven easy (mis)steps (Handscomb et al., 2018) offers yet another way to look at the challenges organizations face. The seven missteps are:

  1. Not having alignment on the aspiration and value of an agile transformation 
  2. Not treating agile as a strategic priority that goes beyond pilots 
  3. Not putting culture first over everything else 
  4. Not investing in the talents of your people 
  5. Not thinking through the pace and strategy for scaling up beyond pilots 
  6. Not having a stable backbone to support agile 
  7. Not infusing experimentation and iteration into the DNA of the organization 

If your organization is facing any of these challenges, or you’ve made any of the “missteps” identified by the McKinsey team, your Agile adoption may be at a stall. If you’re experiencing multiple challenges, it may be time to get some help.   

When things aren’t going well with your transformation the impacts are felt at the team and organization levels. Next, we’ll look at some of these impacts. 

Team Impacts 

  • Team members feel frustrated and demotivated about working in an Agile environment when they aren’t supported through the transition by managers, each other, and the culture of the organization. 
  • Team members are unable to deliver quality increments of work due to a lack of consistent processes and tools.  
  • Team deliveries are hindered by defects.  
  • Team members struggle to learn and implement new Agile methods, or what they learn does not stick. 
  • Team members don’t go beyond process to learn delivery practices that improve quality, like TDD or DevOps.
  • Team members don’t work cross-functionally and instead keep silos of expertise. 
  • Team members struggle to coordinate across teams to remove dependencies.  
  • Team members burn out due to working at an unsustainable pace. 
  • Team member turnover is high. 
  • Stakeholders don’t see the value in participating with the team members doing the work, hindering the team’s ability to get real-time feedback and continuously improve.
  • Management fails to let the team self-organize, taking away team ownership and accountability for the work. Or even worse, they micromanage the team, its backlog, or even how the work is done.
  • Teams find it difficult to release rapidly, hampering innovation and increasing time to market.
  • Team members aren’t engaged in learning new ways of working. 

In addition to the team impacts, there are several organizational impacts that can occur. 

Resolving the challenges 

In looking at the data, it is imperative that organizations address these challenges head-on. If you’re seeing any of the impacts listed above, you may wonder what to do to alleviate them.  One of the biggest mistakes is not incorporating change management as part of an Agile Transformation. Leadership needs to help everyone in the organization understand the new culture underlying how they all work together. It requires leaders and doers alike to learn about Agile values and behaviors. Organizations must invest in training and education at all levels so that learning becomes part of the culture. How do we expect people to become Agile if we don’t teach them what Agile really is?  

What are the greatest contributors to success when integrating change management and Agile? According to Tim Creasey (Creasey, n.d.), they are:

  1. Early engagement of a change manager 
  2. Consistent communication 
  3. Senior leader engagement 
  4. Early wins 

Regardless of the Agile methods you are trying to implement, an organization must start by changing its culture, and the best way to do this is to start with change management. All the teaching and coaching efforts will fail without a culture change to support the new Agile ways of working. Finally, engage the entire organization, not just IT, in the adoption of Agile practices and behaviors. While a pilot team can help test out a specific Agile method, its work with other members of the organization requires everyone to understand what it means to be Agile. Agile is for everyone, not just IT. A comprehensive change management program will help ensure the entire organization becomes focused on becoming Agile.

Watch for future blogs where we delve into these challenges and what CC Pace can do to help you solve them.

Are you building the right product? Are product discovery team members sharing what they’re learning with the rest of the development team? If you’re not sure, then maybe Dual Track is for you.

So, what is Dual Track? Simply put, Dual Track brings light to product discovery by focusing on generating and validating product ideas for the team’s delivery backlog – and by making that discovery work visible.

You may be thinking, as I did: what’s the big deal? When we practice Scrum, we have the Product Owner work with stakeholders, including UX, to build a backlog. The problem is that they often do this work in a silo and only share their stories with delivery team members at Sprint Planning. If your team sees stories for the first time at Sprint Planning, it’s too late. So, in Dual Track, we have visibility into the work the PO and the discovery team members are doing, and this work is shared on a regular cadence with the rest of the team in refinement, or through team members working together.

Dual Track is nothing new. The original idea was published in 2007 by Desiree Sy, and Marty Cagan wrote about it in 2012. While much has been written about Dual Track, Jeff Patton and Marty Cagan are two of the most well-known advocates for what Marty has coined Dual Track Scrum. In its simplest form, Dual Track provides visibility and best practices for the discovery work that must occur before a team ever sees a user story. A good article by Jeff Patton, where you can read more, can be found here.

While researching Dual Track, I found that it fits nicely with what I believe the Product Owner, Business Analyst, and UX people on the team should be doing anyway: working with stakeholders and the team to discover the best product. Dual Track emphasizes team engagement and focuses on not letting people work in silos. It provides guidance and structure to ensure the entire team builds the right products at the right time, together as one team. So here is the first thing I discovered about Dual Track – the team performs as one team. Read on to see what else I discovered.

One Team, One Backlog, One Set of Scrum Events

According to Jeff Patton, Dual Track is “just two parts of one process”. It focuses on maximizing the value delivery of your entire team by having discovery team members work a Sprint ahead of the development (delivery) team members and by including everyone in the work.

This approach to embracing one team is important; as Jeff Patton says, “the whole team is responsible for product outcomes, not just on-time delivery”. When we keep the entire team engaged in product discovery, the developers and testers have more context, and they may also surprise us in ideation with great problem solutions. Finally, not all ideas in the backlog should be implemented, and developers can help Product Owners see where ideas will be problematic.

Jeff outlines how to incorporate the product design team members into each Scrum event without disruption. The team works as one, from the Daily Standup to the Sprint Retrospective. A benefit of following a Dual Track is better engagement from developers and faster learning.

Below is a good video link I found on how to set up a single backlog in Jira for Dual Track boards that enables teams to visualize the work they are doing in the two tracks:
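
Even without the video, the core idea is a single backlog in one Jira project, with two boards that filter the same backlog by track. Here is a minimal sketch, assuming the community-maintained jira Python package and a hypothetical server, project key, and label convention – adapt these to your own workflow:

    # Minimal sketch: one Jira backlog, two board views split by a label
    # convention. Assumes the `jira` package (pip install jira); the server,
    # credentials, project key, and labels below are hypothetical.
    from jira import JIRA

    client = JIRA(server="https://yourcompany.atlassian.net",
                  basic_auth=("bot@yourcompany.com", "api-token"))

    # Discovery view: ideas the PO/UX members are still validating.
    discovery = client.search_issues('project = PROD AND labels = "discovery" ORDER BY Rank')

    # Delivery view: validated stories ready for the development track.
    delivery = client.search_issues('project = PROD AND labels = "delivery" ORDER BY Rank')

    for issue in discovery:
        print(issue.key, issue.fields.summary)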

Progress on both tracks happens simultaneously

The Discovery Track represents the team’s product ideation work: stakeholder interviews, persona and story development, and market research. Once complete, the discovery team members share their findings with the rest of the team. The design team members use design sprints to make progress on stories for Sprint N+1 while the development team members work on Sprint N in delivery sprints. The output from product discovery becomes the input for product delivery. At the same time, feedback from product delivery informs product discovery.

Benefits of Dual Track

The Discovery Track is always a few steps ahead of the delivery team. This approach leads to:

  • Better Products – Allowing only validated product ideas into the backlog leads to better products for your customers.
  • Less Wasted Time – Breaking the barrier between product design and development means the whole team better understands what they are building, reducing back-and-forth conversations.
  • Lower Development Costs – Teams will not pursue product features that haven’t been validated or well thought out.

Summary

Dual Track increases the odds that you will deliver high-quality products that your customers will love. It does this by bringing visibility to what Product Owners are doing to prepare for a Sprint. It requires high collaboration between the people responsible for ideation and the people responsible for development. If you decide to incorporate Dual Track and design sprints, you’ll be in good company: Jake Knapp wrote about Google Ventures using design sprints, and now the likes of Uber, Slack, and Facebook embrace this way of working.

I like what I’ve read about Dual Track and hope I have piqued your curiosity.  I would love to hear about your experience with Dual Track if you have tried it.  Please feel free to share in the comments!

Over the past several years, I’ve worked with many organizations as an Agile Coach and Scrum Master. Through this experience, I’ve noticed there is often misunderstanding at varying levels within the organization around the difference between Agile Adoption and Agile Transformation.

My goal through this blog will be to share my thoughts around Adoption and Transformation and to articulate the basic differences.

First, let me start by defining what I mean by Adoption and Transformation. Many organizations think they are Agile when they adopt different Agile practices and begin to check those boxes daily: having a Daily Standup, working in Sprints, or creating a Kanban board. Organizations may see some benefit in the early stages of a Transformation, when Adoption is on the upswing; however, Adoption is just one of the factors that help a Transformation succeed. A Transformation is about more than adopting a methodology and checking a box. A Transformation changes how the entire organization behaves; it changes the organization’s culture. This change may include aligning along value streams to create virtual teams focused on delivering the same outcome. It should also involve getting business partners to engage with the IT delivery teams to provide regular input and feedback on the work. And Transformations may bring new ways of budgeting, around product delivery rather than project delivery. The bottom line is that an Agile Transformation is about more than having teams adopt an Agile method.

Before we dive deeper into the differences between Agile Adoption and Transformation, let me draw your attention to the 15th Annual State of Agile Report on company experience in practicing Agile.

The Annual State of Agile Report is the longest-running continuous annual survey of Agile techniques and practices, as identified by over 1,300 global respondents.

https://digital.ai/resource-center/analyst-reports/state-of-agile-report

Most of the respondents report that their company has been practicing Agile. Quite clearly, Agile is the new buzzword, and corporations want to ensure that they don’t miss the bus. Many think it’s a plug-and-play kind of solution where things improve overnight. It IS NOT!

But what really is Agile Adoption, and how is it different from Agile Transformation?

In simple terms, Agile Adoption is changing the way a team does the work, whereas an Agile Transformation deals with changing the way an organization gets the work done. It’s about Doing Agile vs. Being Agile.

Sound interesting? Look at the five key differences illustrated by Anthony Mersino.


  1. Speed of Change

Adoption can be quick and even measured in days or weeks. After a day or two of training, teams can agree on a tool, set up their board and events, and start following an Agile method like Scrum. I would refer to it as kind of a jumpstart. Transformations, on the other hand, can only be measured in years since the goal is about continuous improvement and cultural changes. In my previous blog, I defined the different levels and timelines as to when an organization can be transformed. While organizations might see Agile Adoption happen quickly, culture change across the organization is needed for a Transformation. And without question, culture change takes longer to come to fruition than team Adoption.

  2. Planning Timeframe

Agile Adoptions can be short, as the speed of change is quick compared to Transformations. Projects are temporary; thus, teams on projects can switch from Waterfall to Agile methodologies and be seen as adopting Agile. The planning timeframe for getting teams to adopt Agile can be short, since we are simply changing how people work. Transformations, however, require long-standing, stable Agile teams that take time to build. As you plan your Agile Transformation, the timeframe is much longer than that of a simple Adoption, and it should include learning opportunities for everyone in the organization around the Agile mindset and how Agile organizations work differently. Change management plays heavily in a Transformation plan.

  3. Productivity Gain

Research shows a gain of between 20% and 100% from Agile Adoption. In other words, just by teams following simple steps (the PO setting priorities, the team focusing on a prioritized backlog, teams working together cross-functionally, and so on), an organization will see gains in productivity. However, in a Transformation, one of the most significant benefits comes from developing T-shaped skills to become a cross-functional team. Transformation is more about empowering employees: encouraging them to be creative, understanding and accepting risks, and removing management layers to allow for more transparency. As organizations transform, they decentralize decision-making, advocate for innovation, focus on team outcomes over individual performance, and engage business partners more readily, to name a few; as a whole, they may experience a productivity gain of close to 300%.

  4. Org Structure Change

Minimal-to-no structural change is required while a team adopts Agile methodologies. A team of employees from different functional areas can come together to complete a project and may even move back to previous projects after completion. While together, they practice and adopt Agile methods. However, teams may still work in silos, which can lead to significant inefficiencies. Team members reporting to different managers, and multiple hierarchies within a team, can even delay decision-making.

Agile Transformation, by contrast, can have a significant impact on the organization. In a Transformation, the focus is on shifting from functional silos to long-standing, stable, cross-functional teams, thereby reducing inefficiencies. While the reporting structure can remain matrixed, the team bond comes first in a Transformation. People managers in a Transformation shift their focus to employee enablement and engagement and away from directing daily work.

  5. Change in Culture

Have you ever heard the saying, “Culture eats strategy for breakfast”? As Adoption focuses on changing the way the team accomplishes work, a culture change may be seen solely within that group. One or more teams adopting Agile can lead to a Transformation, but on its own it is unlikely to impact the culture significantly. When an organization sees the value from a team’s Adoption of Agile principles and practices, it may pave the way for a greater Transformation. I recommend engaging a change management expert to help the organization’s culture change. This culture change is key to a Transformation and will bring customer satisfaction and respect for people.

In summary, both Agile Adoption and Transformations will bring value to your organization.

It all depends on the organization as to what path they follow; there isn’t a right or wrong here. I’m a strong believer that anything, when done the right way, will yield results. It’s more about the process and believing in the process. What I think we can see here is that while organizations can derive value from an Agile Adoption, the true benefits come from the longer Agile Transformation.

What is the bottom line? Adoption is fast and easy, while Transformations take longer and require more planning and culture change, but without question, the benefits are worth it.

PI planning is considered the heartbeat of scaled programs. It is a high-visibility event that takes up considerable resources (and yes, people too). It is critical for organizations to realize the value of PI planning; otherwise, leadership tends to lose patience and gives up on the approach, leading to the organization sliding back on its SAFe Agile journey. There are many reasons PI planning can fall short of achieving its intended outcome. For the purposes of quick readability, I will limit the scope of this post to the following five reasons, which I have found to have the most adverse effect.

  1. Insufficient preparation
  2. Inviting only a subset (team leads)
  3. Lack of focus on dependencies
  4. Risks not addressed effectively
  5. Not leaving a buffer

Let’s do a deeper dive and try to understand how each of the above anti-patterns in the SAFe implementation can impact PI planning.

Insufficient Preparation: By preparation, I don’t mean the event logistics and the preparation that goes along with them. The logistics are very important, but I am focusing on the content side. Often, the Product Owner and team members are entirely focused on current PI work, putting out fires, and scrambling to deliver on the PI commitments, so much so that even thinking about a future PI seems like an ineffective use of time. When that happens, teams often learn about the PI scope almost on the day of PI planning, or a few days before, which is not enough time to digest the information and provide meaningful input. PI planning should be an event where people from different teams and programs come together and collaboratively create a plan for the PI. To do that, participants need to know what they are preparing for and have time to analyze it, so that when they come to the table to plan, the discussions are not impeded by questions that require analysis. Specifically, this means teams should know well in advance the top 10 features to prioritize, what the acceptance criteria are, and which teams will be involved in delivering those features. This gives the involved teams a runway to analyze the features, iron out any unknowns, and come to the table ready to have a discussion that leads to a realistic plan.

Inviting only a subset: As I said at the beginning, PI planning is a high-cost event. Many leaders see it as a waste of resources and choose to include only team leads/SMs/POs/tech leads/architects and managers in the planning. This is more common than you might think. It might seem obvious why this is not a good practice, but let’s go deeper to make sure we are on the same page. The underlying assumption behind inviting a subset of people is that the invitees are experts in their field and can analyze, estimate, plan, and commit to the work with high accuracy. What’s missing from that assumption is that they are committing on behalf of someone else (the teams that are actually going to perform the work) with entirely different skill levels, understanding of the systems, organizational processes, and people. The big gap that emerges from this approach is that work analyzed by a subset of folks tends not to account for quite a few complexities in the implementation, and the estimate is often based on the expert’s own assessment of effort. Teams do not feel ownership of the work, because they didn’t plan for or commit to it, and eventually the delivery turns into a constant battle of sticking to the plan and putting out fires.

Lack of focus on dependencies: The primary focus of PI planning should be coordination and collaboration between teams and programs. Effectively identifying the dependencies between teams, and collaborating properly to resolve them, is a major part of achieving a plan with higher accuracy. However, teams sometimes don’t prioritize dependency management highly enough and focus more on team-level planning: writing stories, estimating, adding them to the backlog, and slotting them into sprints. The dependencies are communicated, but the upstream and downstream teams don’t have enough time to actually analyze and assess each dependency and commit with confidence. The result is a PI plan with dependencies communicated to the respective teams but not fully committed. Or even worse, some dependencies are missed, only to be identified later in the PI when the team takes up the work. A mature ART prioritizes dependencies and uses a shift-left approach to begin conversations, capture dependencies, and give teams ample time to analyze and plan for meeting them. One lightweight way to make dependencies concrete during planning is sketched below.
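
As a toy illustration only: dependencies captured during planning can be treated as a simple graph, which makes cycles and a workable sequence easy to check. The feature names here are hypothetical, and the sketch uses Python’s standard-library graphlib:

    # Toy sketch: capture cross-team dependencies from PI planning and check
    # that the features can be sequenced without cycles. Names are hypothetical.
    from graphlib import TopologicalSorter, CycleError

    # feature -> the set of features it depends on
    dependencies = {
        "checkout-ui": {"payments-api"},
        "payments-api": {"auth-service"},
        "auth-service": set(),
        "order-history": {"checkout-ui", "payments-api"},
    }

    try:
        order = list(TopologicalSorter(dependencies).static_order())
        print("A workable delivery order:", order)
    except CycleError as err:
        print("Circular dependency - resolve it before committing the plan:", err)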

Risks not addressed effectively: During PI planning, the primary goal of program leadership should be to actively seek out and resolve risks. I will acknowledge that most leaders do try to resolve the risks, but when teams bring up risks that require tough decisions, a change in prioritization, or a hard conversation with a stakeholder, program leadership is not swift to act and make it happen. The risk gets “owned” by someone who will have follow-up action items to set up meetings to talk about it. This might seem like the right approach, but it ends up hurting the teams that are spending so much time and effort to come up with a reasonable plan for the PI. There is nothing wrong with “owning” risks and acting on them in due time; however, during PI planning, time is of the essence. A risk that is not resolved right away can lead to plans based on assumptions. If the resolution, which happens at a later date, turns out to be different from the original assumption made by the team, it can lead to changes in the plan and usually ends up putting more work on the team’s plate. The goal should be to completely “resolve” as many risks as possible during planning, and not avoid the tough conversations and decisions necessary to make that happen.

Not leaving a buffer: We all know that trying to maximize utilization is not a good practice. Most leaders encourage teams to leave a buffer during the planning context on the first day. But, in practice, most teams have more in the backlog than they can accomplish in a PI, and during the two days of planning it is usually a battle to fit in as much work as possible to make the stakeholders happy. For programs that are just starting to use SAFe, even the IP sprint gets eaten up by planned feature development work. One of the root causes is a false sense of accuracy in the plan. Teams tend to forget that this is a plan for about five to six sprints spanning a quarter. A one-sprint plan can be expected to have a higher level of accuracy because of its shorter timebox, smaller scope, and more refined stories. However, when a program of more than 50 people (sometimes close to 150) plans a scope full of interdependencies, expecting the same level of accuracy is a recipe for failure. To make sure the plan is realistic, programs should leave the needed buffer and allow teams to adjust course when changes occur.

As I mentioned at the start of this post, there are many ways a high-stakes event like PI planning can fail to achieve the intended outcomes. These are just the ones I have experienced first-hand.  I would love to know your thoughts and hear about some of the anti-patterns that affected your PI Planning and how you went about addressing them.


Are you new to Agile testing?

I’ve been reading Agile Testing, by Lisa Crispin and Janet Gregory. If you are new to Agile testing, this book is for you. It provides a comprehensive guide for any organization moving from waterfall to Agile testing practices. The “Key Success Factors” outlined in the book are important when implementing Agile testing practices, and I would like to share them with you.

Success Factor 1: Use the Whole Team Approach

Success Factor 2: Adopt an Agile Testing Mind-Set

Success Factor 3: Automate Regression Testing

Success Factor 4: Provide and Obtain Feedback

Success Factor 5: Build a Foundation of Core Practices

Success Factor 6: Collaborate with Customers

Success Factor 7: Look at the Big Picture

Success Factor 1: Use the Whole Team Approach

In Agile, we like to take a team approach to testing. Developers are just as responsible for the quality of a product as any tester; they embrace practices like Test-Driven Development (TDD) and ensure good unit testing is in place before moving code to test. Agile testers participate in the refinement process, helping Product Owners write better user stories by asking powerful questions and adding test scenarios to stories. Everyone works together to build quality into the product development process. This approach is very different from a waterfall environment, where the QA/test team alone is responsible for ensuring quality software and products are delivered.

Success Factor 2: Adopt an Agile Testing Mind-Set

Adopting Agile starts with changing how we think about the work and embracing Agile Values and Principles. In addition to the Agile Manifesto’s 12 Principles, Lisa and Janet define 10 Principles for Agile Testing. Testers adopt and demonstrate an Agile testing mindset by keeping these principles top of mind. They ask, “How can we do a better job of delivering real value?”

10 Principles for Agile Testing

  1. Provide Continuous Feedback
  2. Deliver Value to the Customer
  3. Enable Face-to-Face Communication
  4. Have Courage
  5. Keep it Simple
  6. Practice Continuous Improvement
  7. Respond to Change
  8. Self-Organize
  9. Focus on People
  10. Enjoy

Success Factor 3: Automate Regression Testing

Automate tests where you can, and continuously improve. As seen in the Agile Testing Quadrants, automation is an essential part of the process. If you’re not automating regression tests, you’re wasting valuable time on manual testing – time that could be put to better use elsewhere. Test automation is a team effort; start small and experiment. A minimal example follows the link below.

https://www.cigniti.com/blog/agile-test-automation-and-agile-quadrants/
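
To make “start small” concrete, here is a minimal sketch of an automated regression suite using pytest. The pricing rule is a hypothetical stand-in for behavior your team has already shipped; the point is that each past defect or business rule gets a cheap, repeatable check:

    # test_pricing.py - a tiny regression suite, runnable with `pytest`.
    # The pricing rule below is a hypothetical stand-in for shipped behavior.

    def apply_discount(total: float, loyalty_years: int) -> float:
        """5% off per loyalty year, capped at 20%."""
        discount = min(0.05 * loyalty_years, 0.20)
        return round(total * (1 - discount), 2)

    def test_no_discount_for_new_customers():
        assert apply_discount(100.0, 0) == 100.0

    def test_discount_grows_with_loyalty():
        assert apply_discount(100.0, 2) == 90.0

    def test_discount_is_capped():
        # Regression guard for a past defect class: discounts exceeding the cap.
        assert apply_discount(100.0, 10) == 80.0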

Success Factor 4: Provide and Obtain Feedback

Testers provide a lot of feedback, from the beginning of refinement through testing acceptance. But keep in mind – feedback is a two-way street, and testers should be encouraged to ask for feedback of their own. There are two groups testers should look to for feedback. The first is the developers: ask them for feedback about the test cases you are writing. Test cases should inform development, so they need to make sense to the developers. The second is the PO or customer: ask them whether your tests cover the acceptance criteria satisfactorily, and confirm you’re meeting their expectations around quality.

Success Factor 5: Build a Foundation of Core Practices

The following core practices are integral to Agile development.

  • Continuous Integration: One of the first priorities is to have automated builds that happen at least daily.
  • Test Environments: Every team needs a proper test environment to deploy to, where all automated and manual tests can be run.
  • Manage Technical Debt: Don’t let technical debt get away from you; instead, make paying it down part of every iteration.
  • Working Incrementally: Don’t be tempted to take on large stories; instead break down the work into small stories and test incrementally.
  • Coding and Testing Are Part of One Process: Testers write tests, and developers write code to make the test pass.
  • Synergy between Practices: Incorporating any one practice will get you started, but a combination of these practices working together is needed to gain the full advantage of Agile development.

Success Factor 6: Collaborate with Customers

Collaboration is key, and it isn’t just with the developers. Collaboration with Product Owners and customers helps clarify the acceptance criteria. Testers make good collaborators, as they understand the business domain and can speak technically. The “Power of Three” is an important concept – developers, testers, and a business representative form a powerful triad. When we work to understand what our customers value, and include them in answering our questions and clearing up misunderstandings, we are truly collaborating to deliver value.

Success Factor 7: Look at the Big Picture

The big picture is important for the entire team. Developers may focus on implementation details, while testers and Product Owners keep the big picture in view. Aside from sharing the big picture with the team, the four quadrants can help guide development so developers don’t miss anything.

In addition, you’ll want to ensure your test environments are as similar as possible to production. Test with production-like data to make tests as close to the real world as possible. Help developers take a step back and see the big picture.

Summary

At the end of the day, developers and testers form a strong partnership; each has their own area of expertise. However, the entire team is the first line of defense against poor quality. The focus is not on how many defects are found; the focus is on delivering real value after each iteration.

As a Senior Scrum Master, I’ve worked with many organizations, and leaders frequently ask me one common question: how long will it take for my team to become Agile? The answer is never easy; developing an Agile mindset can be complex. From senior executives to developers, everyone in the organization must be open-minded and willing to change. Of course, there will always be resistance, but this can be handled through open dialogue and continual conversation.

I’d like to walk through a staged representation of the maturity levels that each team goes through in becoming Agile. The maturity levels yield an important metric that organizations are intrigued and excited to see because it shows progress in their Agile journey. I have used the maturity levels extensively with teams, and it’s always great to show progress; but remember, it’s just a tool.

So, with that being said, let’s get started. I will review each maturity level a team goes through, and along the way, I’ll share my perspective and lessons learned.

 

Level 1 is a Learning Phase. As teams get started on their Agile journey, it’s essential to introduce and establish an Agile mindset. From my experience, an Agile Boot Camp is a great way to create this awareness and introduce some initial concepts. It’s also an opportunity to see the team composition, make introductions, and begin a new way of working together. While Level 1 focuses on awareness, you can’t short-change the importance of this initial step; it sets the foundation for the beginning of the journey and the team’s future success.

To reach Level 2, teams must have a solid understanding of what it means to be Agile, and they must also recognize there is a difference between Agile and Scrum/Kanban. Frequently, I hear teams using words like Agile and Scrum interchangeably; it’s important everyone understands that Agile is a set of values and principles, whereas Scrum and Kanban are frameworks for putting Agile into practice. The Agile ceremonies or events should be scheduled with a set agenda, and teams should practice story card-based requirement gathering.

Once the team has a firm grip on what it means to be Agile (practicing the events, using terms correctly, and understanding the different frameworks), it’s time to move to Level 3, where the focus is on proper planning, practicing trade-offs, backlog management, and inspect/adapt principles. Better planning is your key to executing a sprint well. Having a solid backlog and adopting inspect/adapt principles will play a crucial part in a team’s success. The team should be encouraged to start practicing trade-offs.

As a Scrum Master and coach, I continually talk with my teams about embracing change. As they transition from Waterfall to Agile, the team should practice the “yes, I can, but…” phrase. For example, the Product Owner issues a new requirement, but the team already has a full plate of work. At this point, the team should be willing to practice a trade-off: accept the requirement, but open a conversation with the Product Owner about reprioritizing existing items. Through this process, we ensure change is embraced while practicing trade-offs, so as not to burn out the team.

As we move up the ladder, Level 4 is about the team starting to self-organize. In the planning session, there is a discussion about the sprint goals and stories, and the team should be able to self-organize and pick up the tasks that will help them achieve the sprint goal. Remember: it’s the team’s commitment, not individual commitments, that matters. The team should also start measuring the process and looking for ways to improve, such as introducing some automation in the form of testing.

Level 5 focuses on improving T-shaped skills, which can be attained by having a buddy-pair system within the team. Through this process, knowledge is gained and shared across teammates, ensuring the team becomes cross-functional as time progresses. Teams will now be experienced enough to look at advanced automation techniques like RPA and AI, develop CI/CD pipelines, and eventually work in a DevOps model.

In conclusion, an Agile Transformation begins with a people mindset. While we looked at the Agile Transformation maturity levels, it’s important to recognize the effort put in by all players within an organization. From developers to Product Owners to Agile sponsors, everyone plays a role in achieving a successful Agile Transformation. As your organization moves through the different levels, remember, it’s going to be a bumpy ride; there will be forward momentum as well as setbacks, and it takes time. But as a Scrum Master who has worked with teams on their transformation journeys, I can say that moving through the maturity levels is a process, and the result is worth it.

Here is our final video in the 3-part series Building and Securing Serverless Apps using AWS Amplify.  In case you missed them, you can find Part 1 here and Part 2 here.  Please let us know if you would like to learn more about this series!

So, it was early 2009; I had been laid off and decided to relocate from Dallas to Austin. After getting established, I set out to find a job, NOT in the mortgage industry. I wanted to be more technology-focused. I searched and interviewed at many places and began receiving offers.

BUT – that mortgage tug – kept pulling at me. So, I thought this would be a suitable time for me to get back into Sales or Operations and get to see how the technology affected the users as well as how things had or had not changed. I was able to accept a position as an Underwriter and soon after oversaw a team of underwriters for a region. I got to live and breathe the end of the month cycle again, see how the technology was hampering or helping, provide tips and tricks and I really had a wonderful time. 

In mid-2011, Mortgage IT (Information Technology) jobs came calling again. It took me back to Dallas, where we selected and rolled out a Loan Origination System, supported Capital Markets and Warehouse software, opened a Call Center, and more. That company was sold, and layoffs were beginning, so I took an opportunity and moved to Houston to do another Loan Origination Selection for another employer, among other projects. Then it was back to Dallas, where one company had bought another and needed to consolidate to one system. At that last company, I was Vice President of the PMO (Project Management Office). Then leadership changes occurred, and they laid off the entire PMO staff. I secured as many jobs for my people as possible and decided to investigate consulting. At this point in my career, I had been laid off three times, and climbing the ladder just to get knocked off had become exhausting.

I have known CC Pace since 1998, when they helped us roll out automated underwriting. Following that, I worked with them at different employers over the years.

So, we made it happen. In September of 2015, I went to work for CC Pace as a Senior Consultant. It has been so much fun being on this side; we get to help solve so many different problems for so many different clients. From POS (Point of Sale) and LOS (Loan Origination System) selections to implementations, compliance reviews, process reengineering, system tune-ups, staffing reshuffling, website buildouts, security reviews, system assessments, due diligence, and more. Getting to meet so many new people from all over the country and developing relationships has been the most rewarding part.

When I moved from temporary to permanent in 1992 as a Loan Processor, my mom wrote me this letter: 

“Dear Greg,  

On 5-1-71 I began my career in the mortgage industry. Here we are 21 years later, and you are beginning your career in the mortgage industry. My beginning salary was $375 per month and over the years that income has increased 10 times +. I hope you will experience the same kind of financial rewards but more than that I hope you enjoy the work and challenges as much as I have. It’s a good business and you will work with a lot of good people. Good Luck – you deserve it.  

Love Mom”  

She was and is right: my beginning wage was $7 an hour, and the financial rewards have been good. But so much more than that, I love the mortgage industry. I have met some of the best people in the world, and many of them I am proud to call my friends, some for more than 30 years.

Much of what we do in this life is about the journey and nurturing good relationships along the way. On the Consulting side, we get to build so many relationships, see so many problems and help solve those problems. You see problems, we see solutions. What can I help you solve?

The video below is Part 2 of our 3-part series: Building and Securing Serverless Apps using AWS Amplify.  In case you missed Part 1 – take a look at it here.  Be sure to stay tuned for Part 3!

According to Deutsche Bank CIO Frederic Veron, “enterprises that wish to reap the potentially rich rewards of getting IT and business line leaders to build software together in agile fashion must also embrace the DevOps model.”[1]

Why is that? It’s simple: DevOps is necessary to scale Agile. DevOps practices are what enable an organization to rapidly deploy changes to many different parts of their product, across many products, on a frequent basis—with confidence.

That last part is key. Companies like Amazon, Google, and Netflix developed DevOps methods so that they could deploy frequently at massive scale without worrying about whether they would break something. DevOps is, at its core, a risk management strategy. DevOps practices are what enable you to maintain a complex multi-product ecosystem and make sure that everything works. DevOps substitutes traditional risk management approaches with what the Agile 2 authors call real-time risk management.[2]

You might think that all this is just for software product companies. But today, most organizations operate on a technology platform, and if you do, then DevOps applies. DevOps methods apply to any enterprise that creates and maintains products and services that are defined by digital artifacts.

That includes manufacturers, online commercial services, government agencies that use custom software to provide services to constituents, and pretty much any large commercial, non-profit, and public sector enterprise today.

As JetBlue and Breeze airlines founder David Neeleman said, “we’re a high-tech company that just happens to fly airplanes,”[3] and Capital One Bank’s CIO Rob Alexander said, “We’re a founder-led, 20-year-old technology company.”[4]

Most large businesses today are fundamentally technology companies that direct their efforts toward the markets in which they have expertise, assets, and customer relationships.

DevOps Is Necessary at Scale

Scaling frameworks such as SAFe and DA provide potentially useful patterns for organizing the work of lots of teams. However, DevOps is arguably more important than any framework, because without DevOps methods, scaling is not even possible, and many organizations (Google, Amazon, Netflix…) use DevOps methods at scale without a scaling framework.

If teams cannot deploy their changes without stepping on each other’s work, they will often be waiting, going no faster than the slowest team, and they will have a very difficult time managing their dependencies. No framework will remedy that if the technical methods for multi-product dependency management and on-demand deployment at scale are not in place. If you are not using DevOps methods, you cannot scale your use of Agile methods.

How Does Agile 2 View DevOps?

DevOps as it is practiced today is technical. When you automate things so that you can make frequent improvements to your production systems without worrying about a mistake, you are using DevOps. But DevOps is not a specific method. It is a philosophy that emerged over time. In practice, it is a broad set of techniques and approaches that reflect that common philosophy.

With the objective of not worrying in mind, you can derive a whole range of techniques to leverage tools that are available today: cloud services, elastic resources, and approaches that include horizontal scaling, monitoring, high-coverage automated tests, and gradual releases.

While DevOps and Agile seem to overlap, especially philosophically, DevOps techniques are highly technical, while the Agile community has not focused on technical methods for a very long time. Thus, DevOps fills a gap, and Agile 2 promotes the idea that Agile and DevOps go best together.

DevOps evangelist Gene Kim has summarized DevOps by his “Three Ways.”[5]  One can paraphrase those as follows:

  1. Systems thinking: always consider the whole rather than just the part.
  2. Use feedback loops to learn and refine one’s artifacts and processes over time.
  3. Treat everything as an experiment that you learn from, and adjust accordingly.

The philosophical approaches are very powerful for the DevOps goal of delivering frequent changes with confidence, because (1) a systems view informs you of what might go wrong, (2) feedback loops in the form of tests and automated checks tell you whether you hit the mark or are off, and (3) if you view every action as an experiment, then you are ready to adjust so that you then hit the mark. In other words, you have created a self-correcting system.
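
To make the idea of a self-correcting release process concrete, here is a minimal sketch in Python. Every function in it (deploy_canary, health_ok, promote, rollback) is a hypothetical stand-in for real deployment and monitoring tooling, not a prescribed implementation:

```python
import time

def deploy_canary(version: str) -> None:   # hypothetical: route a small slice of traffic
    print(f"canary deploy of {version}")

def health_ok() -> bool:                   # hypothetical: query monitoring/automated checks
    return True

def promote(version: str) -> None:         # hypothetical: roll out to all traffic
    print(f"promoting {version}")

def rollback(version: str) -> None:        # hypothetical: restore the previous release
    print(f"rolling back {version}")

def release(version: str, checks: int = 3, interval_seconds: int = 60) -> bool:
    """Treat the release as an experiment: deploy a canary, watch the feedback
    loop (monitoring), and self-correct by promoting or rolling back."""
    deploy_canary(version)
    for _ in range(checks):
        time.sleep(interval_seconds)
        if not health_ok():
            rollback(version)
            return False
    promote(version)
    return True
```

The structure mirrors the three ways: the canary considers the whole system, the health checks are the feedback loop, and the promote-or-rollback decision treats the release as an experiment.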

Agile 2 takes this further by focusing on the entire value creation flow, beginning with strategy and defining the kinds of leadership that are needed. Agile 2 promotes product design and product development as parallel and integrated activities, with feedback from real users and real-world outcomes wherever possible. This approach embeds Gene Kim’s three DevOps “ways” into the Agile 2 model, unifying Agile 2 and DevOps.

Download this White Paper here!

 

[1] https://www.cio.com/article/3141577/true-agile-software-development-requires-devops.html

[2] Agile 2: The Next Iteration of Agile, by Cliff Berg et al, pp 205 ff.

[3] https://www.businessinsider.com/breeze-airways-pushing-back-launch-until-2021-what-we-know-2020-7

[4] https://www.youtube.com/watch?v=0E90-ExySb8

[5] https://itrevolution.com/the-three-ways-principles-underpinning-devops/

Background 

We are all human and tend to take the easy route when we come across certain scenarios in life. Remembering passwords is one of the most common chores of life these days, and we often create a password that can be easily remembered to avoid the trouble of resetting it if we forget it. In this blog, I am going to discuss a tool called “Have I Been Pwned” (HIBP), which helps us find out whether a password has been seen in a cybersecurity incident or data breach.

What is HIBP? What is it used for? 

“Have I Been Pwned” is an open-source initiative that helps people check if their login information has been included in breached data archives circulating on the dark web. In addition, it allows users to check how often a given password has been found in the dataset, testing the strength of a password against dictionary-style brute-force attacks. Recently, the FBI announced that they will work closely with the HIBP team to share breached passwords for users to check against. This open-source initiative is going to help a lot of people avoid using breached passwords when creating accounts on the web. We used the HIBP API to help our customers who use custom web-based applications get alerted to any pwned passwords they enter while creating accounts. This way, users are warned away from breached passwords that have been seen multiple times on the dark web.

How does it work? 

HIBP stores more than half a billion pwned passwords that have previously been exposed in data breaches. The entire data set is both downloadable and searchable online via the Pwned Passwords page. Each entry is the SHA-1 hash of a UTF-8 encoded password, followed by a colon (:) and the count of how many times that password has been seen, with one entry per line separated by a CRLF.

To use the API to check whether a password has been breached, we cannot send the actual source password over the web, as that would compromise the confidentiality of the password the user entered during account creation.

To maintain anonymity and protect the source password being searched for, Pwned Passwords implements a k-anonymity model that allows a password to be searched for by partial hash using a range search. We pass only the first 5 characters of the SHA-1 password hash (not case-sensitive) to the API, which responds with the suffix of every hash beginning with that prefix, followed by a count of how many times each appears in the dataset. The API consumer can then look for the source password’s hash by comparing the returned suffixes against the remainder of its own hash. If the source hash is not found in the results, the password has not appeared in any known breach to date.
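
As a concrete illustration, here is a minimal Python sketch of the range query described above, using only the standard library. The helper name pwned_count is ours, but the endpoint is the public Pwned Passwords range API:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords dataset,
    using the k-anonymity range API: only the first 5 hash characters are sent."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<35-char hash suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # not found: no known breach to date

print(pwned_count("P@ssword"))  # prints a large count for this well-known password
```

Note that the full hash never leaves the client; the service only ever sees the 5-character prefix, which matches hundreds of unrelated passwords.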

Integrated Solution 

Pass2Play is one of our custom web-based solutions where we integrated the password breach API to detect any breached passwords during the sign-up process. Below is the workflow: 

  1. The user goes to sign up for the account. 
  2. Enters username and password to sign up. 
  3. After entering the password, the user gets a warning message if the password was ever breached, along with how many times it was seen.


In the above screen, the user entered the password “P@ssword” and got a warning message clearly saying that the entered password has been seen 7,491 times in the dataset circulating on the dark web. We do not want our users choosing such passwords for their accounts, which could later be compromised using dictionary-style brute-force attacks.

Architecture and Process flow diagram:

API Request and Response example:

SHA-1 hash of P@ssword: 9E7C97801CB4CCE87B6C02F98291A6420E6400AD

API GET: https://api.pwnedpasswords.com/range/9E7C9

Response: Returns 550 lines of hash suffixes that match the 5-character prefix

The highlighted text in the above image is the suffix that, combined with the 5-character prefix, matches the hash of the source password; it has been seen 7,491 times.

Conclusion

To conclude: integrating checks like this into your applications can help organizations avoid larger security issues, since passwords are still the most common way of authenticating users. Alerting end users during account creation makes them aware of breached passwords and trains them to choose strong ones.

CC Pace’s Philippa Fewell and Agile 2 Academy have co-authored a white paper, “Is Your Agile Journey Evolving?”.  Here they discuss the evolution of Agile and help you identify whether your organization’s Agile adoption has kept up.  We would love to hear your thoughts on your Agile journey!

 

 

Click here to download this white paper.

To learn more about Agile 2, visit the website here.

Last year, we worked with experts from George Mason University to build a COVID screening and tracing platform called Pass2Play. We used this opportunity to implement a Serverless architecture using the AWS cloud.

This video discusses our experience, including our solution goals, high-level design, lessons learned and product outcomes.

It’s specific to our situation, but we’d love to hear about other experiences with the Serverless tools and services offered by AWS, Azure and Google. There are a lot of opinions on Serverless, but there’s no doubt that it’s pushing product developers to rethink their delivery and maintenance processes.

Feel free to leave a comment if we’re missing anything or to share your own experience.

CC Pace was recently featured in Agile Uprising’s blog series.  Agile Uprising is a network focused on the advancement of the Agile mindset and professional networking among leading Agilists.  In the blog, CC Pace created a short video highlighting one of our latest projects!  Bobby Pantall, CC Pace Lead Technology Consultant, speaks to our experience building an app for a startup company named Twisty Systems.  The app is a navigation app aimed at driving enthusiasts. In the video we describe the framework of the Lean Startup methodology and some of the highs and lows in the context of the pandemic and releasing a new app.

Enjoy and please share your thoughts on this project!

In the first installment of our Product Owner Empowerment series, we talked about the three crucial dimensions of ‘Knowledge’ that affect a Product Owner’s effectiveness. This post is going to take a deeper dive into the impact Empathy has on a Product Owner’s ability to succeed.

Empathy: Assuming positive intent, empathy is something that comes naturally to most people. However, environmental factors can influence a person’s ability to relate to or connect with another person or team. Let’s explore some aspects of empathy and how they may impact a Product Owner’s success.

  • Empathy towards the team(s): To facilitate an empathetic relationship between a Product Owner and the team, the PO must be able to meet the team where they are (literally and figuratively). Getting to know the team members and building a rapport requires the Product Owner to interact extensively with the team and proactively work to build such relationships. Organizations should facilitate this by making sure Product Owners are physically located where the team is and are empowered not only to represent the team to the business but also to play the role of protector from external interruptions, so that the team can function effectively. As alluded to above, having a good understanding of what it takes to deliver helps tremendously with the ability to place themselves in the team’s shoes and see things from the team’s perspective.
  • Empathy towards the customer(s): It is easy to assume that a Product Owner acting on behalf of the business will automatically have empathy for customers and enough understanding of their needs to adequately represent their business interests. However, organizational culture can sometimes influence how a Product Owner prioritizes work. If it is only the sponsors directing the team’s scope and prioritization, a critical element of customer input is missed. Product Owners should place sufficient emphasis on obtaining customer opinion and feedback to inform the direction of product development.
  • Empathy in the Organization: This factor relates to the organizational culture. As companies embrace Agile and expect its benefits to be fully realized, emphasis on running lean begins to build. While being lean is a goal every organization should have, it is important to understand the impact a lean push has on individual teams and team members. A systemic push to be lean, combined with less-than-optimal Agile maturity and the presence of antipatterns, can lead to teams being held to unsustainable delivery expectations. This problem is more common than you would think. Most organizations are going through some level of Agile transformation, but leadership’s expectations of benefits realization are often a few steps ahead of where the organization truly is on its Agile journey. Having the right set of expectations, and the empathy necessary to reset them based on continuous learning and feedback, is needed at an organizational level.

Check back next week to see how a Product Owner’s success is tied to psychological safety for themselves and the teams they are working with.

If you are a Product Owner or your Agile team struggles with this role, you won’t want to miss our upcoming webinar on Product Owner Empowerment. This webinar will be held on December 15th and you can register today here! Space is limited and on a first-come basis.

Agile 2 is here!

I was fortunate to be included in a group of exceptional Agile leaders and practitioners, led by Clifford Berg, to retrospect on Agile and improve upon what it has become over the last 20 years.  Each of us began by citing issues and problems we have encountered over the years, drawing on our unique experiences.  Not all of us experienced the same issues, but it was eye-opening to discover what others came up with, thanks to the diversity of the group in both practice and expertise.

We then discussed why we felt the problems occurred and what could be done to change them.  This led us to revisiting the values and principles of the Agile Manifesto and many of the frameworks we use today.  While I am invested in many of these, having become certified in them myself and having trained others on them as well, I have seen how a lack of clarity, differences of interpretation, and too much emphasis on prescription lead to less-than-successful outcomes.

It is this clarity and thoughtfulness that Agile 2 seeks to deliver.  It is a set of values and principles based on common problems that we believe will resonate with you, the Agile practitioner.  My colleagues and I are proud of Agile 2 and the potential impact we believe it can have on the current state of Agile.  Have we gotten it all right?  Undoubtedly there is room for debate. Have we missed some valuable principles?  Perhaps. And that is why we want Agile 2 to be open to ideas from the community, and there will be an Agile 2 version 1.1.  We respect and welcome your input and ideas.  We want Agile 2 to be constantly improving on what it is today so that it stays relevant.  There will be Agile 2 community forums, and to begin that, there is a LinkedIn group. A book is on the way. The Agile 2 website is at https://agile2.net. Check it out!

In the previous blog, I provided insights on what ZTA is, what its core components are, why organizations should adopt ZTA, and what the threats to ZTA are. In this blog, I will go through some common deployment use cases/scenarios for ZTA that use software-defined perimeters and move away from enterprise network-based perimeter security.

Scenario 1:  Enterprise using a cloud provider to host applications as cloud services, accessed by employees from the enterprise-owned network or an external private/public untrusted network

In this case, the enterprise has hosted its resources or applications in a public cloud, and users want to access them to perform their tasks. This kind of infrastructure helps the organization provide services to users at geographically dispersed locations who might not connect to the enterprise-owned network but could still work remotely using personal devices or enterprise-owned assets. In such cases, access to enterprise resources can be restricted based on user identity, device identity, device posture/health, time of access, geographic location and behavioral logs. Based on these risk factors, the enterprise cloud gateway may grant access to resources like the employee email service, employee calendar and employee portal, but may restrict access to services holding sensitive data like the H.R. database, finance services or account management portal. The Policy Engine/Policy Administrator will be hosted as a cloud service, providing decisions to the gateway based on a trust score calculated from various sources: the enterprise system agent installed on devices, the CDM system, activity logs, threat intelligence, SIEM, ID management, PKI certificate management, data access policy and industry compliance. The enterprise local network could also host the PE/PA service instead of the cloud provider, but that provides little benefit: the additional round trip to the enterprise network when accessing cloud-hosted services would hurt overall performance.

Scenario 2:  Enterprise using two different cloud providers to host separate cloud services as parts of an application, accessed by employees from the enterprise-owned network or an external private/public untrusted network

The enterprise has broken a monolithic application into separate microservices, or components, hosted with multiple cloud providers even though it has its own enterprise network. The web front end can be deployed in Cloud Provider A and communicate directly with the database component hosted in Cloud Provider B, instead of tunneling through the enterprise network. It is essentially a server-to-server implementation with software-defined perimeters instead of relying on enterprise perimeters for security. PEPs are deployed at the access points of the web front end and database components, and they decide whether to grant access to the requested service based on the trust score. The PE and PA can be services hosted in either cloud or with a third-party cloud provider. Enterprise-owned assets that have agents installed on them can request access through the PEPs directly, and the enterprise can still manage resources even when they are hosted outside the enterprise network.

Scenario 3:  Enterprise with contractors, visitors and other non-employees that access the enterprise network

In this scenario, the enterprise network hosts applications, databases, IoT devices and other assets that can be accessed by employees, contractors, visitors, technicians and guests. Now we have a situation where assets like internal applications and sensitive data should be accessible only to employees, while visitors, guests and technicians should be prevented from reaching them. The technicians who show up when there is a need to fix IoT devices like smart HVAC and lighting systems still need access to the network or internet. Visitors and guests also need access to the local network to connect to the internet so they can perform their work. All these situations can be handled by creating user and device profiles and installing enterprise agents on devices to prevent network reconnaissance and east-west movement once connected to the network. Users, based on their identity and device profile, will be placed on either the enterprise employee network or a BYOD/guest network, thus obscuring resources using the ZTA approach of SDPs. The PE and PA could be hosted either on the LAN or as a cloud service, depending on the architecture the organization chooses. All enterprise-owned devices with an installed agent can access enterprise resources through the gateway portal that guards them. All privately owned devices used by visitors, guests, technicians, or employees’ personal phones, and any other non-enterprise-owned assets, will be allowed to connect only to the BYOD or guest network to use the internet, based on their user and device profile.

Zero Trust Maturity

As organizations mature and adopt zero trust, they go through various stages, adapting based on cost, talent, awareness and business domain needs. Zero trust is a marathon, not a sprint; hence, incrementally maturing the level of zero trust is the desired approach.

Stage 0: Organizations have not yet thought about the zero trust journey; they have fragmented on-premises identity, no cloud integration, and passwords used everywhere to access resources.

Stage 1: Adopting unified IAM by providing single sign-on across employees, contractors and business partners using multi-factor authentication (MFA) to access resources and starting to focus on API security.

Stage 2: In this stage, organizations move towards deploying safeguards such as context-based (user profile, device profile, location, network, application) access policies to make decisions, automating provisioning and deprovisioning of employee/external user accounts and prioritizing secure access to APIs.

Stage 3: This is the highest maturity level that can be achieved; it adopts passwordless and frictionless solutions using biometrics, email magic links, tokens and other methods.

Most organizations in the world are in stage 0 or stage 1, except for large corporations that have matured to stage 2. Due to the current COVID situation, organizations have quickly started to invest heavily in improving their ZT maturity level and overall security posture.


References

Draft (2nd) NIST Special Publication 800-207. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf

The State of Zero Trust Security in Global Organizations

Effective Business Continuity Plans Require CISOs to Rethink WAN Connectivity

Zero Trust Security For Enterprise Mobility

As we mentioned in our previous post, we are celebrating our 40th anniversary and, as part of our celebrations, we have created this 40 Years and Forward blog series. So, without further ado, welcome to the second posting in that series!

In our last blog, we took a stroll down memory lane and reminisced about CC Pace’s origins and what the world was like in 1980 when we were founded. While much has changed here at CC Pace and in the world in general (internet anyone?), we have been steady in our drive to meet the needs of our customers by providing valuable business solutions. Working with a national client base that ranges from tech start-ups to Fortune 500 companies as well as government entities – no company or project has proven to be too big or too small.

While we have remained consistent in our values and in our focus, another key aspect of our longevity has been our adaptability. For instance, CC Pace’s biggest client during our first year was the Department of Energy, and we were deeply involved with the Oil and Gas industries. As we grew and our client base expanded, we shifted direction to the telecommunications and financial services/mortgage industries. We focused our strategic planning on truly understanding how innovative technologies and methodologies work, and when and where to apply them.  For example, back in 1999, when others were consistently using the waterfall approach, CC Pace started to think differently and used an Agile methodology, XP, for the first time on a custom software development project.

Our adaptability has also come into play as we have successfully navigated our way through many challenging times, including the financial crisis of 2009 and, most recently, the coronavirus pandemic we find ourselves in today. By seizing the opportunity to adapt to the market, investing in our people and discovering new technologies, CC Pace has successfully kept up with our clients’ needs. We are carrying that adaptability into 2020, as our development teams are currently creating mobile apps and working on cloud transitions and integrations. It is through these continued efforts and our ability to adjust to the market that CC Pace has become a nationally recognized leader in Agile training and coaching, custom application development, financial and healthcare services consulting and IT staffing.

We invite you to stay tuned for the next post in our 40 Years and Forward blog series, in which we’ll share deeper insight into our company culture and how our team has thrived in a social, collaborative and productive environment that even encourages playfulness while at work!

I have a deep interest in cybersecurity, and to keep up with the latest threats, policies and security practices, I became a member of the ACT-IAC organization and enrolled in the Cybersecurity Community of Interest group. This is where I got the opportunity to work as a volunteer on the Zero Trust Architecture Phase 2 project. Here, I am sharing the knowledge I gained around ZTA strategy and principles. I plan to break my blog into a four-part series based on how the project progresses.

  • What is ZTA?
  • Real world deployment scenarios
  • ZTA core capabilities
  • Vendors providing ZTA capabilities

What is ZTA and how did it come into existence?

Traditionally, perimeter-based security has been used to protect the network infrastructure behind a firewall: once a user is authenticated, they can access all the resources behind the firewall, since all network users and devices are assumed trustworthy. This has caused many security breaches across the globe, with attackers moving laterally and exploiting resources they were not authorized to use. Attackers only had to get through the firewall; from there they could crawl across any resource available in the network, causing damage in the form of data loss and other financial implications, for example via ransomware attacks.

Today, an enterprise’s infrastructure operates across several networks: cloud-based services, remote users connecting from their own networks using enterprise-owned or personal devices (laptops, mobile devices), and network locations that change based on where users and devices connect from (public Wi-Fi, internal enterprise networks, etc.). All these complex use cases drove the move away from perimeter-based security to “perimeterless” security (not confined to one network infrastructure), which led to the evolution of a new concept called “Zero Trust”, where you “trust no one, but verify”. The ZT approach is primarily based on data protection, but it can be applied across other enterprise assets like users, devices, applications and infrastructure.

ZTA is essentially an enterprise cybersecurity strategy that prevents data breaches and limits lateral movement within the network infrastructure. It assumes that any internal or external agent (user, device, application, infrastructure) that wants to access an enterprise resource (on the internal network or externally in the cloud) is untrustworthy and must be verified on every request before being granted access.

What does Zero Trust mean in a ZTA?

(Image courtesy: NIST SP 800-207 publication)

In the above diagram, the user trying to access a resource must go through the PDP/PEP. The PDP/PEP decides whether to grant access to the request based on enterprise policies (data/access/risk), user identity, device profile, location of the user, time of request and any other attributes needed to gain enough confidence. Once granted, the user is in an “Implicit Trust Zone” where they can access all the resources allowed by the network infrastructure design. The “Implicit Trust Zone” is essentially the boarding area in an airport, where all the passengers are considered trustworthy once they have verified themselves through the immigration/security check.

You can still limit access to certain resources in the network using a concept called “micro-segmentation”. For example, after getting through the security check and reaching the boarding area, passengers are checked again at the boarding gate to make sure they are entering the flight they are authorized to board. This is what micro-segmentation means: resources are isolated into segments, and access requests are verified separately, in addition to the PDP/PEP.

Tenets of ZTA: (As per NIST SP 800-207 publication)

All resources, whether data or services, should communicate in a secure fashion irrespective of their network location. Each individual access request is verified before granting access to any resource, based on the client’s identity, the device used to make the request, the type of application used, location coordinates and other behavioral attributes. Access, once granted, is authenticated and authorized dynamically and strictly enforced. In addition, the enterprise should collect all activity information, log decisions, audit logs and monitor the network infrastructure to improve its overall security posture.

What are the logical components of ZTA?

(Image courtesy: NIST SP 800-207 publication)

Policy Engine: Responsible for making and logging decisions, based on enterprise policy and inputs from external sources (CDM, threat intelligence, etc.), to grant or deny access to a request.

Policy Administrator: Responsible for establishing or killing the communication path between the subject and the enterprise resource, based on the decision made by the PE. It can generate the authentication tokens the client uses to access the resource. The PA communicates with the PEP via the control plane.

Policy Enforcement Point: Responsible for enabling, monitoring and terminating communication between the subject and the enterprise resource. It can be a single logical component or be broken into two components: a client agent and a resource gateway that controls access. Beyond the PEP lies the “Implicit Trust Zone” hosting enterprise resources.

Control Plane/Data Plane: The control plane is made up of components that receive and process requests from data plane components wishing to access network resources. The control and data planes are more like zones in the ZTA. Every resource, device and user in the network can have its own control plane component to decide whether data should be routed further; in this diagram, it is just used to explain how the control plane works for data plane components. The data plane simply passes packets around, and the control plane routes them appropriately based on the decisions made.

Note: The dotted line that you see in the image above is the hidden network that is used for communication between the various logical components.
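
To make the PE/PEP division of labor concrete, here is a minimal, hypothetical sketch in Python. The signal names, weights and threshold are illustrative assumptions, not part of the NIST specification:

```python
# All signal names, weights and the threshold below are illustrative assumptions.

TRUST_THRESHOLD = 0.7

def trust_score(request: dict) -> float:
    """Policy Engine: combine signals into a trust score for a single request."""
    score = 0.0
    score += 0.3 if request["mfa_passed"] else 0.0           # identity confidence
    score += 0.2 if request["device_healthy"] else 0.0       # device posture (CDM)
    score += 0.2 if request["location"] in request["usual_locations"] else 0.0
    score += 0.3 if not request["anomaly_flagged"] else 0.0  # behavioral/SIEM input
    return score

def enforce(request: dict) -> bool:
    """Policy Enforcement Point: allow or deny based on the PE's decision,
    re-evaluated on every request rather than once at a network perimeter."""
    return trust_score(request) >= TRUST_THRESHOLD

request = {
    "mfa_passed": True, "device_healthy": True,
    "location": "HQ", "usual_locations": {"HQ", "Home"},
    "anomaly_flagged": False,
}
print(enforce(request))  # True: the PA would then establish the session
```

The key point the sketch captures is that the decision is made per request from multiple signals, rather than once at login based on network location.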

Why should organizations adopt ZTA?

When adopting a ZTA, organizations must weigh all the potential benefits, risks, costs, and ROI. Core ZT outcomes should be focused on creating secure networks, securing data that travels within the network or at rest, reducing impacts during breaches, improving compliance and visibility, reducing cybersecurity costs and improving the overall security posture of an organization.

Lost or stolen data, ransomware attacks, and network and application layer breaches cost organizations huge financial losses and damage their market reputation. It takes a lot of time and money for an organization to return to normal after a severe security breach. ZT adoption can help organizations avoid such breaches, which is key to surviving in today’s world, where state-funded hackers are always ahead of the game.

As with all technology changes, the biggest challenge in demonstrating higher ROI and lower cybersecurity costs is the time needed to deliver the desired results. Organizations should consider the following:

  • Assess what components of ZTA pillars they currently have in their infrastructure. Integration of components with existing tools can reduce the overall investment needed to adopt ZTA.
  • Consider including costs or impacts associated with risk levels and occurrences when doing ROI calculations.
  • ZT adoption should simplify, and not complicate, the overall security strategy to reduce costs.

What are the threats to ZTA?

ZTA can reduce the overall risk exposure in an enterprise but there are some threats that can still occur in a ZTA environment.

  • A wrongly or mistakenly configured PE or PA could disrupt users trying to access resources. Access requests that would previously have been denied could also get through due to misconfiguration of the PE or PA by the security administrator, letting attackers or subjects reach resources from which they were previously restricted.
  • Denial-of-service attacks on the PA/PEP can disrupt enterprise operations. All access decisions are made by the PA and enforced by the PEP to successfully connect a device to a resource. If a DoS attack hits the PA, no subject can get access, as the service becomes unavailable under the flood of requests.
  • Attackers could compromise an active user account using social engineering, phishing or other techniques to impersonate the subject and access resources. Adaptive MFA may reduce the likelihood of such attacks, but in traditional enterprises, with or without ZTA adoption, an attacker might still reach resources the compromised user has access to. Micro-segmentation may protect resources against these attacks by isolating or segmenting them using technologies like NGFW or SDP.
  • Enterprise network traffic is inspected and analyzed by policy administrators via PEPs, but non-enterprise-owned assets cannot be monitored passively. Since the traffic is encrypted and deep packet inspection is difficult, a potential attack could come from non-enterprise-owned devices. ML/AI tools and techniques can help analyze traffic to find anomalies and remediate them quickly.
  • Vendors or ZT solution providers could cause interoperability issues if they don’t follow common standards or protocols when interacting. If one provider has a security issue or disruption, it could disrupt enterprise operations due to service unavailability or the time and cost of switching to another provider. Such disruptions can affect core business functions of an enterprise operating in a ZTA environment.

References

[ACT-IAC] American Council for Technology and Industry Advisory Council (2019) Zero Trust Cybersecurity Current Trends. Available at https://www.actiac.org/zero-trust-cybersecurity-current-trends

Draft (2nd) NIST Special Publication 800-207. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf

NIST Zero Trust Architecture Release: https://www.nccoe.nist.gov/projects/building-blocks/zero-trust-architecture

What is App Modernization?

Legacy application modernization is the process of updating existing and aging applications with modern architecture to enhance features and capabilities. By migrating your legacy applications, you can add the latest functionality that better aligns with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair. That is doubly the case when software becomes so outdated that it may not even be compatible with modern systems.

A Quick Look at a Sample Legacy Monolithic Application

For this article, consider a roughly fifteen-year-old legacy monolithic application, as depicted in the following diagram.

 

This depicts a traditional n-tier architecture that was very common over the past 20 years or so. There are several shortcomings to this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system, rather than to smaller subsystems, increasing hardware costs.

Why Modernize?

We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.

The advantages are:

  1. Reduce cost: Costs can be reduced by redirecting computing power only to the subsystems that need it. This allows for more granular scalability.
  2. Avoid vendor lock-in: Each subsystem can be built with the technology for which it is best suited.
  3. Reduce operational overhead: Monolithic systems written in legacy technologies tend to stay that way, due to the increased cost of change, and require resources with a specific skillset.
  4. De-coupling: Strong coupling makes it difficult to optimize the infrastructure budget; de-coupling the subsystems makes it easier to upgrade components individually.

Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.

Legacy App Modernization Strategies

Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which could require complete re-platforming. A simple lift-and-shift will not add business value if you migrate legacy applications just for the sake of modernization. Instead, it’s about taking the bones, or DNA, of the original software and modernizing it to better represent current business needs.

Legacy Monolithic App Modernization Approaches

Having examined the nightmarish aspects of continuing to maintain legacy monolithic applications, this article presents two application modernization strategies. Both are explained at length below so you can get a basic idea of each and pick whichever is feasible within the constraints you might have.

  • Migrating to Microservices Architecture
  • Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake

Microservices Architecture

In this section, we shall look at how re-architecting, re-factoring and re-coding along the microservices paradigm help avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you better understand microservices architecture, a leap forward from legacy monolithic architecture.

 

At a quick glance at the above diagram, you can see a big central piece called the API Gateway with Discovery Client. This is comparable to a Façade in a monolithic application. The API Gateway is essentially the entry point for accessing the several microservices, which are comparable to modules in a monolithic application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, as it reaches a single database set via the Database Microservice shown in the diagram. In other words, the API Gateway/Orchestrator orchestrates the sequence of calls based on the business logic, calling the Database Microservice on behalf of the others, since individual microservices have no direct access to the database. Notice also that this architecture supports various client systems such as a Mobile App, Web App, IoT App, MQTT App, etc.

Although this architecture makes it easy to use different technologies in different microservices, it leaves us with a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and the object/data model, which requires it to be re-deployed and tested after each microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step towards building heterogeneous systems that work in tandem to provide a complete solution, a goal that would otherwise be impossible with a monolithic legacy application architecture.
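
As a rough illustration of the gateway-as-orchestrator pattern described above, here is a minimal Python sketch using only the standard library. The service names, URLs, endpoints and field names are hypothetical:

```python
import json
import urllib.request

# Hypothetical service registry standing in for the Discovery Client; in a real
# system this would be populated dynamically by a discovery service.
SERVICE_REGISTRY = {
    "orders":   "http://orders-service.internal:8080",
    "database": "http://database-service.internal:8080",
}

def call_service(name: str, path: str) -> dict:
    """Look up a microservice by name and forward a GET request to it."""
    with urllib.request.urlopen(f"{SERVICE_REGISTRY[name]}{path}") as resp:
        return json.load(resp)

def get_order_with_customer(order_id: str) -> dict:
    """Gateway-as-orchestrator: sequence the calls across services, since the
    individual microservices have no direct access to the database."""
    order = call_service("orders", f"/orders/{order_id}")
    customer = call_service("database", f"/customers/{order['customerId']}")
    return {"order": order, "customer": customer}
```

Note how the orchestration function encodes business knowledge (orders reference customers), which is exactly the coupling the next approach removes.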

Microservices Architecture with Realtime Data Movement to Data Lake

In this section, we shall look at how re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or replacing and retiring parts of your legacy systems along the microservices paradigm helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you understand a more advanced, complete microservices architecture.

 

At the outset, most of the diagram for this approach looks like the previous one. But it adheres to the actual microservices paradigm more closely than the previous approach. In this case, each microservice is independent and has its own micro-database, of whatever flavor best fits the business needs, avoiding both the dependency on a central Database Microservice and the overloading of the API Gateway with orchestration and business logic. The advantage of this approach is that each microservice can have its own CI/CD pipeline and release. In other words, one part of the application can be released on its own, with TDD/ATDD properly implemented, reducing the costs of testing, deployment and release management. This kind of architecture does not lock the overall solution into any particular technical stack; it encourages quick solutions using various stacks and gives the flexibility to scale resources for heavily used microservices when necessary.

Additionally, this architecture encourages having a Realtime Engine (which can be a microservice itself) that reads data from the various databases asynchronously, applies data aggregation and deduplication algorithms, and sends pristine data to a data lake. Advanced applications can then use the data in the data lake for machine learning and data analytics to cater to the business needs.
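
Here is a minimal, hypothetical Python sketch of the deduplication-and-aggregation step such a Realtime Engine might perform before landing data in the data lake. The record fields (entityId, amount) are invented for illustration:

```python
import hashlib
import json

def dedupe_and_aggregate(records: list[dict]) -> dict:
    """Drop records already seen (by content hash), then aggregate per entity.
    A stand-in for the Realtime Engine's core loop; field names are invented."""
    seen: set[str] = set()
    totals: dict[str, float] = {}
    for record in records:
        fingerprint = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if fingerprint in seen:   # duplicate event arriving from an upstream DB
            continue
        seen.add(fingerprint)
        key = record["entityId"]
        totals[key] = totals.get(key, 0.0) + record["amount"]
    return totals  # pristine, aggregated data ready to land in the data lake
```

A production engine would stream rather than batch and would persist its seen-set, but the shape of the work (fingerprint, dedupe, aggregate, forward) is the same.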

 

Note: This article was not written with any particular cloud flavor in mind. This is a general app modernization microservices architecture that can run anywhere: on-prem, OpenShift (private cloud), Azure, Google Cloud or AWS.

I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.

While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.

I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true with data warehouse projects too, where a client’s database team wanted the entire schema designed up-front, even before the requirements for the reports the data warehouse would support were identified. What appeared to be driving the design was not the business and their report priorities, but the database team and their desire to have a complete data model.

While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.

The book covers the basics of data warehouse design including an overview of fact tables, dimension tables, how to model each and as mentioned, star schemas. The book stresses the 7-Ws when designing a data warehouse – who, what, where, when, why, how and how many. These are the questions to ask when talking to business to come up with an appropriate design. “How many” is applicable for the fact tables, while the other questions apply to dimension table design.
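
To make the fact/dimension distinction concrete, here is a minimal star schema sketch in Python using sqlite3. The table and column names are illustrative, not taken from the book:

```python
import sqlite3

# One fact table ("how many") surrounded by dimensions answering who/what/when.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);  -- who
CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, name TEXT);  -- what
CREATE TABLE dim_date     (date_key     INTEGER PRIMARY KEY, day  TEXT);  -- when

CREATE TABLE fact_sales (                                                 -- how many
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
""")
```

Each fact table plus its dimensions is one small star schema; the incremental approach the book advocates builds the warehouse one such star at a time.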

Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel they are not just users, but owners, of the data. The book focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.

I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.

At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.

Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.

How have you used Scrum outside its “natural” setting in software development?

You are embarking on a new software development project. Presumably, if it’s a Scrum project, a team is assembled, space and workstations for the team room are configured and the first sprint is right around the corner. The time has come for the initial gathering of project team members and stakeholders – the Project Kickoff. The Project Kickoff meeting can range from an hour to several days, and provides the opportunity for the project team, and any associated stakeholders, to come together and officially begin (i.e., ‘kick off’) a project. The ultimate goal is for everyone to leave this meeting on the same page, with a clear understanding of the project’s structure and goals.

One of the more common kickoff meeting agenda items for Scrum teams today is establishing the product vision, or the product vision statement. Many definitions and examples of product vision statements are available with a simple internet search; a solid summary of the product vision can be found here in a 2009 member article written by Roman Pichler for the Scrum Alliance:

‘The product vision paints a picture of the future that draws people in. It describes who the customers are, what customers need, and how these needs will be met. It captures the essence of the product – the critical information we must know to develop and launch a winning product.’

Here is where I have to come clean – I personally never thought much about the importance or impact of product vision statements until recently. It seemed to me that on many development projects, the product vision would simply exist as a feel-good placeholder or as a feeble attempt to energize the team: “We’re going to build this application and SAVE THE WORLD!”

I felt that the product vision statement was a guise for what seemed like the customary objective of a project, which – in an admittedly negative opinion – was to provide a solid return on someone’s investment. Bluntly stated: “If we are successful, someone I’ve never met is going to make a lot of money on this product.” I observed that as a project progressed and the team became preoccupied with day-to-day tasks, reality would eventually kick in. At a certain point, the vision – which we had been so excited about several weeks earlier – usually became an afterthought.

Eventually, a point is reached several sprints into a project where the team’s product vision statement is scribbled on a large post-it sheet, taped to the wall in the team room, collecting dust – never to be spoken of again. In the past, my observation was that by the time our project wrapped up, we wouldn’t always take the time to measure project success against our original product vision statement. (In fact, many team members were probably already working towards achieving a new product vision on a completely new project.) We didn’t always ask the questions: Did we accomplish our mission? Did we meet all of our objectives? If not, why? (Some of these topics would assuredly be discussed in a Project Retrospective-type meeting, but in today’s reality, that isn’t always the case.)

Fortunately, times have changed. Several recent and personal discoveries (through complete happenstance) have improved my outlook; you could now say that I have a ‘newfound respect’ for the product vision statement. This inspiration is a result of successfully delivering on several development projects in the education research field. CC Pace has had the fortunate opportunity to partner with the National Student Clearinghouse (NSC) on several of their software development initiatives since 2010. Our first project supported NSC’s Research Center, whose mission is defined as ‘providing educators and policymakers with accurate longitudinal data on student outcomes to enable informed decision making.’

In June of 2010, CC Pace began the journey with NSC to redesign the StudentTracker for High Schools application (STHS 2.0), which contributes to NSC’s aforementioned mission as ‘a unique program designed to help high schools and districts more accurately gauge the college success of their graduates’. We began the project with an informative and efficient Kickoff meeting and established our product vision statement. Truth be told, I didn’t put much thought into it. After all, I was working with a new client on a newly-formed team, and Sprint #1 was approaching fast.

With all of that in mind, the following is a high-level summary (i.e., not verbatim and with some information added for clarification) of our product vision statement for the StudentTracker for High Schools 2.0 project from June, 2010:

NSC’s business goal is to leverage its unique assets and capabilities to provide the secondary-postsecondary longitudinal information required to inform the secondary education system in its efforts to increase rates of college readiness.

Redesigning the STHS 2.0 application will enhance the capacity and scalability to provide integrated secondary-postsecondary education information – in a timely and efficient fashion – to the maximum number of secondary customers possible.

The objectives to meet this business goal are as follows:

  • Enhance STHS reports to include more insightful and actionable data resulting in more valuable and accurate information available to secondary schools and districts
  • Provide for a more efficient file management process, reducing the turnaround time for data collection, processing and report distribution
  • Increase data collection and storage capacity, allowing for more robust reports
  • Improve NSC matching algorithms to enhance data quality and add reliability
  • Design, configure and implement a robust set of longitudinal reports stratified by school type, demographics, gender, academics and degree

So, there it was. Our product vision statement was posted on the wall for all to see. As anyone starting out on a new software development project can attest, I still had some questions: Where is all of this leading us? Will we succeed? Will this application – which we are completely redesigning from scratch – launch successfully a year from now?

Undoubtedly these were realistic questions and concerns. At the same time, however, I began to realize that I was working as a member of a team on a development project with a realistic, measurable and highly motivational product vision statement. I thought at the time that if we truly achieved our vision and successfully implemented STHS 2.0 in the next year, our product would have a profound impact on educational research and potentially improve the college success rates of millions of high school and college students for years to come.

Fast-forward to 2015 – I can proudly say that I have seen several firsthand accounts demonstrating that we did indeed achieve the product vision that we established several years ago. The intriguing element of this discovery is that I never personally set out to measure whether or not we achieved our vision as a result of successfully delivering STHS 2.0. After all, this was over five years ago and I have worked on many different projects in that time. Instead, I discovered the answer to that question completely by chance, and on more than one occasion. Five years later, I came to the realization that our team’s product vision had indeed become a reality, and it was a really great feeling.

Check back for a follow-up post for the recent chain of events validating that our project’s product vision statement truly became a reality – more than five years after it was established.

In my last post, I talked about the interesting Agile 2015 sessions on team building that I’d attended. This time we’ll take a look at some sessions on DevOps and Craftsmanship.

On the DevOps side, Seth Vargo’s The 10 Myths of DevOps was by far the most interesting and useful presentation that I attended. Vargo’s contention is that the DevOps concept has been over-hyped (like so many other things) and that people will soon become disenchanted with it (the graphic below shows where Vargo believes DevOps stands on the Gartner Hype Cycle right now). I might quibble about whether we’ve passed the Peak of Inflated Expectations yet or not, but this seems just about right to me. It’s only recently that I’ve heard a lot of chatter about DevOps and seen more and more offerings, and that’s probably a good indication that people are trying to take advantage of those inflated expectations. Vargo also says that many organizations either mistake the DevOps concept for just plain operations or use it to try to hire SysAdmins under the trendier title of DevOps. Vargo didn’t address it, but I’d also guess that a lot of individuals claim to be experienced in DevOps when they were really SysAdmins who didn’t try to collaborate with other groups in their organizations.

[Figure: the Gartner Hype Cycle, with DevOps positioned near the Peak of Inflated Expectations]

The other really interesting myth in Vargo’s presentation was the idea that DevOps is just between engineers and operators. Although that’s certainly one place to start, Vargo’s contention is that DevOps should be “unilaterally applied across the organization.” This was characteristic of everything in Vargo’s presentation: just good common sense and collaboration.

Abigail Bangser was also focused on common sense and collaboration in Team Practices Applied to How We Deploy, Not Just What, but from a narrower perspective. Her pain point seems to have been that technical stories weren’t well defined and were treated differently than business stories. Her prescription was to extend the Three Amigos practice to technical stories and generally treat technical stories like any other story. This was all fine, but I found myself wondering why that kind of collaboration wasn’t happening anyway. It seems natural to do one’s best to understand a story and deliver the best value regardless of whether the story is a business or a technical one. Alas, Bangser didn’t go into how they’d gotten to that state to start with.

On the craftsmanship side, Brian Randell’s Science of Technical Debt helped us come to a reasonably concise definition of technical debt and used Martin Fowler’s Technical Debt Quadrant to distinguish between different types of technical debt: prudent vs. reckless, and deliberate vs. inadvertent. He also spent a fair amount of time demonstrating SonarQube and explaining how it had been integrated into the .NET ecosystem. SonarQube seemed fairly similar to NDepend, which I’ve used for some years now, with one really useful addition: both NDepend and SonarQube evaluate your codebase against various configurable design criteria, but SonarQube also provides an estimated time to fix all the issues that it found with your codebase. Although it feels a little gimmicky, I think it would be more useful than just having the number of instances of failed rules in explaining to Product Owners the costs that they are incurring.
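To illustrate why a time figure lands better than a rule count, here is a toy sketch of the idea in Python. It is purely my own illustration – not SonarQube’s actual model or API – and the rule names and remediation times are made-up assumptions:

    # Roll per-rule violation counts up into one time-to-fix figure.
    # All rule names and per-violation minutes below are assumptions.
    REMEDIATION_MINUTES = {
        "duplicated-block": 20,
        "uncovered-branch": 15,
        "long-method": 30,
    }

    violations = {  # hypothetical scan results
        "duplicated-block": 42,
        "uncovered-branch": 118,
        "long-method": 9,
    }

    total_minutes = sum(
        count * REMEDIATION_MINUTES[rule] for rule, count in violations.items()
    )
    print(f"Estimated time to fix: {total_minutes / 60:.1f} hours")

Telling a Product Owner “about 48 hours of remediation” frames the conversation in cost, which is exactly the debt metaphor at work.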

I also attended two divergent presentations on improving our quality as developers. Carlos Sirias presented Growing a Craftsman through Innovation & Apprenticeship. Obviously, Sirias advocates for an apprenticeship model, a la blacksmiths and cobblers, to help improve developer quality. The way I remember the presentation, Sirias’ company, Pernix, essentially hired people specifically as apprentices and assigned them to their “lab” projects, which are done at low cost for startups and small entrepreneurs. The apprenticeship aspect came from their senior people devoting 20% of their time to the lab projects. I’m now somewhat perplexed, though, because the Pernix website says that “Pernix apprentices learn from others; they don’t work on projects” and the online PDF of the presentation doesn’t have any text in it, so I can’t double-check my notes. Perhaps the website is just saying that the apprentices don’t work as consultants on the full-price projects, and I do remember Sirias saying that he didn’t feel good about charging clients for the apprentices. On the other hand, I can’t imagine that the “lab” projects, which are free for NGOs and can be financed by micro-equity or actual money, don’t get cross-subsidized by the normal projects. I feel that just making sure junior people are always pairing and get a fair chance to pair with people they can learn from – who aren’t always the “senior” people – is a better apprenticeship model than the one Sirias presented.

The final craftsmanship presentation I attended, Steve Ropa’s Agile Craftsmanship and Technical Excellence, How to Get There, was both the most exciting and the most challenging presentation for me. Ropa recommends “micro-certifications,” which he likens to Boy Scout merit badges, to help people improve their technical abilities. It’s challenging to me for two reasons. First, I’m just not a great believer in credentials, because I don’t find they really tell me anything when I’m trying to evaluate a person’s skills. What Ropa said about using internally controlled micro-certifications to show actual competence in various skill areas makes a lot of sense, though, since you know exactly what it takes to get one. That brings me to the second challenge: the combination of defining a decent set of micro-certifications, including what it takes to get each certification, and a fair way of administering such a system. For the most part, the first part of this concern just takes work. There are some obvious areas to start with – TDD, refactoring, continuous integration, C#/Java/Python skills, etc. – that can be evaluated fairly objectively. After that, there are some softer areas that would be more difficult to figure out certifications for. How, for example, do you grade skills in keeping a code base continually releasable? It seems like an all-or-nothing kind of thing. And how does one objectively certify a person’s ability to take baby steps or pair program?
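As a sketch of what that definition work might produce – this is purely my own hypothetical illustration, not anything Ropa proposed – a micro-certification could be a named skill area plus a checklist of objectively observable criteria:

    from dataclasses import dataclass, field

    # Hypothetical model of an internally controlled micro-certification;
    # the badge name and criteria below are illustrative assumptions.
    @dataclass
    class MicroCertification:
        name: str
        criteria: list = field(default_factory=list)  # objective, observable checks

    tdd_badge = MicroCertification(
        name="TDD",
        criteria=[
            "Writes a failing test before any production code",
            "Keeps the whole suite green before each commit",
            "Refactors only under passing tests",
        ],
    )

    print(f"{tdd_badge.name}: demonstrate {len(tdd_badge.criteria)} criteria to earn the badge")

For the objective areas, filling in checklists like this is mostly just work; it’s the softer skills that resist the treatment.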

Administering such a program also presents me with a real challenge: even given a full set of objective criteria for each micro-certification, I worry that the certifications could become diluted through cronyism, or that the people doing the evaluations wouldn’t be truly competent to do so. Perhaps this is just me being overly pessimistic, but any organization has some amount of favoritism, and I suspect that the sort of organizations that would benefit most from micro-certifications are the ones where that kind of behavior has already done the most damage. On the other hand, I’ve never been a Boy Scout, and these concerns may just reflect my lack of experience with such things. For all that, the concept of micro-certifications seems like one worth pursuing, and I’ll be giving more thought to how to successfully implement such a system over the coming months.

Notes from Agile 2015 Washington, D.C. 

Having lived in the Washington DC area for over 25 years, I presumed that the audience at Agile 2015 Washington DC would consist primarily of people working in the public sector, given our geographical proximity to a long list of federal agencies. It was not unrealistic to expect that the speakers at the conference might tailor their presentations and discussions to this type of audience. The audience actually turned out to be quite diverse, rendering my assumptions inaccurate. However, I could not help but feel somewhat validated after listening to the first keynote speaker. Indeed, the opening presentation by Luke Hohmann, entitled “Awesome Super Problems”, focused on tackling “wicked problems” such as budget deficits and environmental challenges. Wicked problems, as described by Mr. Hohmann, are not technical in nature and cannot be solved by small Agile teams of 6-8 people. These problems deal more with strategic decision-making that may result in long-term consequences, intended as well as unintended. They impact millions of people, and they require broad consent as well as governance. Hailing from San Jose, California, Mr. Hohmann discussed how implementing Agile methodologies helped the city tackle some “wicked problems”, such as a budget deficit of 100 million dollars.

Planning and Executing
Solving major problems such as budget shortfalls generally requires a great deal of collaboration between stakeholders with competing priorities. Mr. Hohmann stressed that the approach should focus on collaboration over competition, or in Agile terminology, “customer collaboration over contract negotiation”. Easier said than done? Maybe… To help facilitate this collaboration, Mr. Hohmann assembled a conference of public servants such as city planners, police and fire chiefs, and other community leaders. There were several discovery sessions where people could get answers to questions like how much money would be saved if the fire department removed one firefighter from its teams, and what impact on safety that might entail. The group was broken down into small tables of no more than 8 people, each with one facilitator provided by Mr. Hohmann. Each table was presented with the list of major budget items and then engaged in budget games, in which participants bid to get their high-priority budget items included in the next budget and negotiated cuts by trading these items. Afterwards the players held a retrospective and offered feedback.

Retrospective and Outcome
Feedback provided by the participants showed that competition was replaced by collaboration. Participants tended not to get into heated arguments because the games inherently encouraged compromise. Small groups helped cut down on distractions and side conversations. The participants also reported that the game was fair, since every player possessed equal bidding power. Interestingly, the final outcome showed surprising consensus over the budget items: the majority of the participants ended up prioritizing their items very similarly once the competitive aspect was removed. The “democratic” aspect of the collaborative approach helped eliminate the animosity and partisanship that are not uncommon, as has been witnessed in U.S. federal budget negotiations. The experiment seemed to yield the desired outcome of tackling the imbalanced budget and was touted as a success, attracting the attention of more San Jose residents.

Scaling
To tackle other public issues such as school overcrowding and water shortages, Mr. Hohmann attempted to repeat the process, but the number of participants had increased to the point where a large conference hall would have been needed. In order not to strain the budget by renting a giant hall, Mr. Hohmann and the local government set up an online forum that accepted a virtually unlimited number of participants, still assigning them to groups of about eight people and simply increasing the number of groups. The participants played other games such as Prune the Product Tree, which basically involves prioritizing the list of problems the public wants to tackle. The feedback was even more positive, as the majority of the participants actually preferred the online setting. They reported even fewer distractions, and the data was easier to collect and aggregate, giving the participants an almost immediate view of how the game was progressing and how the priorities were moving.

Conclusion
My main takeaway from Mr. Hohmann’s presentation was an encouragement to be creative. Mr. Hohmann stressed the importance of focusing on what he described as “common ground for action”: the idea is to focus on generating a list, or backlog, of actionable items. The process or exercise used to get to the desired state can vary, and Agile methodologies can help folks get there, even when tackling wicked problems.

External Links:
http://www.innovationgames.com/budget-games-guide/

http://www.innovationgames.com/prune-the-product-tree/

Further reading:
http://conteneo.co/san-jose-residents-play-4th-annual-budget-games/

As a developer, I tend to think of YAGNI in terms of code.  It’s a good way to help avoid building in speculative generality and over-designing code.  While I do sometimes think about YAGNI at the feature level, I have a tendency to view the decision to implement a feature as a business decision that should be made by the customer rather than something the developers should decide.  That’s never stopped me from giving an opinion, though!  Until recently, my argument against implementing a feature has generally been a simple argument about opportunity cost.  Happily, Martin Fowler’s recent post on YAGNI (http://martinfowler.com/bliki/Yagni.html) adds greatly to my understanding of all the costs that ignoring YAGNI can add to a project.  Well worth a read whether you’re a developer, Product Owner, Scrum Master or fill any other role.

In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but the feature sets that are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning, albeit perhaps limited, code. I call this approach fully functional but not necessarily fully featured.

In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.

My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.

At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.

At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:

  1. What did I do since the last stand-up?
  2. What will I do today?
  3. What is in my way?

Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.

You’re listening for:

  • Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
  • Whatever people say in response to (3)

Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?

Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way toward building trust. Look for intransigence from either the development team or the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes the Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.

Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same as what Bill or Sally needs. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.

Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Manager and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.

Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.

When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.

So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.

If you’re a Product Owner:

  • Understand the value proposition not just of the entire system, but of each of its components and subsets.
  • Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.

If you’re a developer:

  • Adopt Agile Engineering techniques such as TDD, CI, CD, and so on (a short TDD sketch follows this list). Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
  • Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
  • Be an active team member. Contribute to the success of the mission. Don’t just take orders.
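As a tiny illustration of the first bullet – a hypothetical example of my own, not taken from any particular project – TDD means the test exists, and fails, before the production code does:

    import unittest

    # Red: write the test first; it fails until the production
    # code below exists and behaves correctly.
    class CartTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = Cart()
            cart.add("apple", 0.50)
            cart.add("bread", 2.25)
            self.assertEqual(cart.total(), 2.75)

    # Green: write just enough production code to pass.
    class Cart:
        def __init__(self):
            self._prices = []

        def add(self, name, price):
            self._prices.append(price)

        def total(self):
            return sum(self._prices)

    if __name__ == "__main__":
        unittest.main()

Refactoring then happens under the protection of a passing suite, which is a large part of what keeps the system fully functional even when it isn’t fully featured.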

If you’re a Scrum Master:

  • Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
  • Keep the team focused on the mission.
  • If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.

I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. A .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo has done something that only a handful of players have ever done in the long history of Major League Baseball – he’s played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player, and his value obviously wasn’t derived from his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s, because the unique quality he brought to his team was what baseball lingo refers to as “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BA’s in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving on to another project entirely, I was told several months down the road, “’Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that I often never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.

From a communications perspective, BA’s collaborated up-front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager once informed our team during a stressful phase of a project, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing that this directive seemed entirely normal to me at the time.

Communication with testers seemed even rarer – by the very definition of a Waterfall project, I’d already passed my knowledge on to the testers; it was now their responsibility, and I was more or less out of the loop. By the time specific requirements were being tested, I was already off on an entirely new project.

In my personal opinion, the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and some days with little or no collaboration with other team members outside of standard team meetings became a day-to-day, week-to-week, month-to-month grind with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences), and I have found that the role of the Business Analyst has been significantly enhanced on teams practicing Agile methodologies, and more specifically, Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – Agile teams have enhanced the role of the Business Analyst by providing opportunities that never seemed available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player” – my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span not only functional areas of expertise (e.g. Project Management, Testing, etc.) but also the entire lifetime of a software development project (i.e. Project Kickoff to Implementation). In this sense, Agile BA’s are not only more valuable to their respective teams, they are more valuable for longer – basically, the entire lifespan of a project. I have seen specifically that Agile BA’s can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

In a follow-up blog entry, we’ll elaborate on specifically how Agile BA’s can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

Occasionally, as part of our strategic advisory service, I work with clients who don’t want custom application delivery from us, but rather want me to provide advice to their own Agile development teams. Many of them don’t need a lot of help, but perhaps the single issue I observe most often is that iteration (or sprint, in Scrum terminology) planning meetings (IPMs) don’t go well. Rather than being an interactive exchange of ideas and a negotiation between developers and product owners for the next iteration, I observe that the IPMs become 2-week status meetings that don’t accomplish much. The developer team doesn’t have much or anything to demo, there’s little feedback from the product owner, and everyone just routinely agrees to meet in two weeks to go through the same thing again.

One of the main reasons for these lackluster IPMs is the failure to close tasks at iteration boundaries. If the developer team can’t close tasks at iteration boundaries, then the product can’t be usefully demoed, which means the product owner can’t offer any feedback. This isn’t any form of Agile – it’s just waterfall with 2-week status meetings.

Failure to close tasks at iteration boundaries has other implications too, because what it’s telling you is that stories are too big, and stories that are too big have big consequences.

First, big stories are hard to estimate accurately. Think of estimates as something like weather forecasts: anything over 2-3 days out is probably too inaccurate to use for planning. The smaller the story, the more accurate the estimate will be.

Second, big stories make it harder to change business priorities. That may seem like a non sequitur, but when developers are working on any story, the system is in an unstable, non-functioning state. To change direction, the developers have to bring the system to a stable state where it can be taken in a different direction. Those stable states are achieved when stories are completed and the system is ready to demo.

An analogy I like to use is to think of the system as a big truck proceeding down a controlled-access highway, like an American interstate. You can exit only at certain points. If you’re heading north and you realize you want to head east instead, you have to wait for the next exit to make that direction change – you can’t just immediately turn east and start driving through the underbrush. The farther apart the exits are, the farther you’re going to have to go out of your way before you can adjust. Think of each exit as being the close of a story. The closer together the exits (the smaller the stories), the sooner you’ll reach an exit (a system steady state) where you can change direction.

In this series of blog posts, I’m going to look at what it takes to ensure task closure at iteration boundaries. Each post will focus on a different team role, and how that role can help ensure that iterations end in an actual delivery of working software that can be demoed in an IPM. I’ll write about what product owners, developers, and project managers (or Scrum masters) can do to reduce story size, ensure product stability and functionality at iteration boundaries, and keep the system always ready to quickly change directions – the very definition of agility.

Watch this space.