The Next Frontier: Leveraging ChatGPT and Generative AI in Business – Promises, Pitfalls, and Practical Considerations
Today’s business leaders find themselves navigating a world in which artificial intelligence (AI) plays an increasingly pivotal role. Among the various types of AI, generative AI – the kind that can produce novel content – has been a game changer. One such example of generative AI is OpenAI’s ChatGPT. Though it’s a powerful tool with significant business applications, it’s also essential to understand its limitations and potential pitfalls.
1. What are Generative AI and ChatGPT?
Generative AI, a subset of AI, is designed to create new content. It can generate human-like text, compose music, create artwork, and even design software. This is achieved by training on vast amounts of data, learning patterns, structures, and features, and then producing novel outputs based on what it has learned.
In the realm of generative AI, ChatGPT stands out as a leading model. Developed by OpenAI, GPT (Generative Pre-trained Transformer) uses machine learning to produce human-like text. Trained on extensive amounts of data from the internet, ChatGPT can generate intelligent and coherent responses to text prompts.
Whether it’s crafting detailed emails, writing engaging articles, or offering customer service solutions, ChatGPT’s potential applications are vast. However, the technology is not without its drawbacks, which we’ll delve into shortly.
2. Strategic Considerations for Business Leaders
Adopting a generative AI model like ChatGPT in your business can offer numerous benefits, but the key lies in understanding how best to leverage these tools. Here are some areas to consider:
- 2.1. Efficiency and Cost Savings
Generative AI models like ChatGPT can automate many routine tasks. For example, they can provide first-level customer support, draft emails, or generate content for blogs and social media. Automating these tasks can lead to considerable time savings, freeing your team to focus on more strategic, creative tasks. This not only enhances productivity but could also lead to significant cost savings.
- 2.2. Scalability
One of the biggest advantages of generative AI models is their scalability. They can handle numerous tasks simultaneously, without tiring or requiring breaks. For businesses looking to scale, generative AI can provide a solution that doesn’t involve a proportional increase in costs or resources. Moreover, the ability of ChatGPT to learn and improve over time makes it a sustainable solution for long-term growth.
- 2.3. Customization and Personalization
In today’s customer-centric market, personalization is key. Generative AI can create content tailored to individual user preferences, enhancing personalization in your services or products. Whether it’s customizing email responses or offering personalized product recommendations, ChatGPT can drive customer engagement and satisfaction to new heights.
- 2.4. Innovation
Generative AI is not just about automating tasks; it can also stimulate innovation. It can help in brainstorming sessions by generating fresh ideas and concepts, assist in product development by creating new design ideas, and support marketing strategies by providing novel content ideas. Leveraging the innovative potential of generative AI could be a game-changer in your business strategy.
3. The Pitfalls of Generative AI
While the benefits of generative AI are clear, it’s essential to be aware of its potential drawbacks and pitfalls:
- 3.1. Data Dependence and Quality
Generative AI models learn from the data they’re trained on. This means the quality of their output is directly dependent on the quality of their training data. If the input data is biased, inaccurate, or unrepresentative, the output will likely be flawed as well. This necessitates rigorous data selection and cleaning processes to ensure high-quality outputs.
Employing strategies like AI auditing and fairness metrics can help detect and mitigate data bias and improve the quality of AI outputs.
- 3.2. Hallucination
Generative AI models can sometimes produce outputs that appear sensible but are completely invented or unrelated to the input – a phenomenon known as “hallucination”. There are numerous examples in the press of false statements or claims made by these models, ranging from the funny (like claiming that someone ‘walked’ across the English Channel) to the somewhat frightening (claiming someone committed a crime when, in fact, they did not). This can be particularly problematic in contexts where accuracy is paramount. For example, if a generative model hallucinates while generating a financial report, it could lead to serious misinterpretations and errors. It’s crucial to have safeguards and checks in place to mitigate such risks.
Implementing robust quality checks and validation procedures can help. For instance, combining the capabilities of generative AI with verification systems, or cross-checking the AI outputs with trusted data sources, can significantly reduce the risk of hallucination.
- 3.3. Ethical Considerations
The ability of generative AI models to create human-like text can lead to ethical dilemmas. For instance, they could be used to generate deepfake content or misinformation. Businesses must ensure that their use of AI is responsible, transparent, and aligned with ethical guidelines and societal norms.
Regular ethics training for your team, and keeping lines of communication open for ethical concerns or dilemmas, can help instill a culture of responsible AI usage.
- 3.4. Regulatory Compliance
As AI becomes increasingly pervasive, regulatory bodies worldwide are developing frameworks to govern its use. Businesses must stay updated on these regulations to ensure compliance. This is especially important in sectors like healthcare and finance, where data privacy is paramount. Not adhering to these regulations can lead to hefty penalties and reputational damage.
Keep up-to-date with the latest changes in AI-related laws, especially in areas like data privacy and protection. Consider consulting with legal experts specializing in AI and data to ensure your practices align with regulatory requirements.
- 3.5. AI Transparency and Explainability
Generative AI models, including ChatGPT, often function as a ‘black box’, with their internal workings being complex and difficult to interpret.
Enhancing AI transparency and explainability is key to gaining trust and mitigating risks. This could involve using techniques that make AI decisions more understandable to humans or adopting models that provide an explanation for their outputs.
4. Navigating the Generative AI Landscape: A Step-by-Step Approach
As generative AI continues to evolve and redefine business operations, it is essential for business leaders to strategically navigate this landscape. Here’s an in-depth look at how you can approach this:
- 4.1. Encourage Continuous Learning
The first step in leveraging the power of AI in your business is building a culture of continuous learning. Encourage your team to deepen their understanding of AI, its applications, and its implications. You can do this by organizing workshops, sharing learning resources, or even bringing in an AI expert (like myself) to educate your team on the best ways to leverage the potential of AI. The more knowledgeable your team is about AI, the better equipped they will be to harness its potential.
- 4.2. Identify Opportunities for AI Integration
Next, identify the areas in your business where generative AI can be most beneficial. Start by looking at routine, repetitive tasks that could be automated, freeing up your team’s time for more strategic work. Also, consider where personalization could enhance the customer experience – from marketing and sales to customer service. Finally, think about how generative AI can support innovation, whether in product development, strategy formulation, or creative brainstorming.
- 4.3. Develop Ethical and Responsible Use Guidelines
As you integrate AI into your operations, it’s essential to create guidelines for its ethical and responsible use. These should cover areas such as data privacy, accuracy of information, and prevention of misuse. Having a clear AI ethics policy not only helps prevent potential pitfalls but also builds trust with your customers and stakeholders.
- 4.4. Stay Abreast of AI Developments
In the fast-paced world of AI, new developments, trends, and breakthroughs are constantly emerging. Make it a point to stay updated on these advancements. Subscribe to AI newsletters, follow relevant publications, and participate in AI-focused forums or conferences. This will help you keep your business at the cutting edge of AI technology.
- 4.5. Consult Experts
AI implementation is a significant step and involves complexities that require expert knowledge. Don’t hesitate to seek expert advice at different stages of your AI journey, from understanding the technology to integrating it into your operations. An AI consultant or specialist can help you avoid common pitfalls, maximize the benefits of AI, and ensure that your AI strategy aligns with your overall business goals.
- 4.6. Prepare for Change Management
Introducing AI into your operations can lead to significant changes in workflows and job roles. This calls for effective change management. Prepare your team for these changes through clear communication, training, and support. Help them understand how AI will impact their work and how they can upskill to stay relevant in an AI-driven workplace.
In conclusion, navigating the generative AI landscape requires a strategic, well-thought-out approach. By fostering a culture of learning, identifying the right opportunities, setting ethical guidelines, staying updated, consulting experts, and managing change effectively, you can harness the power of AI to drive your business forward.
5. Conclusion: The Promise and Prudence of Generative AI
Generative AI like ChatGPT carries immense potential to revolutionize business operations, from streamlining mundane tasks to sparking creative innovation. However, as with all powerful tools, its use requires a measured approach. Understanding its limitations, such as data dependency, hallucination, and ethical and regulatory challenges, is as important as recognizing its capabilities.
As a business leader, balancing the promise of generative AI with a sense of prudence will be key to leveraging its benefits effectively. In this exciting era of AI-driven transformation, it’s crucial to navigate the landscape with a keen sense of understanding, responsibility, and strategic foresight.
If you have questions or want to identify ways to enhance your organization’s AI capabilities, I’m happy to chat. Feel free to reach out to me at jfuqua@ccpace.com or connect with me on LinkedIn.
In early 2022, my wife and I noticed a slight drip in the kitchen sink that progressively worsened. We quickly discovered that if we adjusted the lever just right, the drip would subside for a while. Right before the holidays, we had an issue with our water heater that wasn’t so livable, so we called an expert. After the plumber finished up with our hot water tank, we asked him to check our sink. In 5 minutes, he fixed the problem that had inconvenienced us for months! We had become so used to our workaround that we didn’t even realize how much time we spent in frustration trying to get the faucet handle just right (let alone double checking it all hours of the day and night) when we could have resolved the issue in just a few minutes of effort from an expert!
Sometimes, our workarounds are only efficient in our minds. They may save us time or a few dollars upfront, but do they save us anything in the long run? In my case, the answer was a resounding ‘no’.
In our professional lives, we’re trained to seek out our solutions for the ‘leaky faucets’ and typically only bring in an expert when we encounter a major problem that we can’t live with. For credit unions, the ‘water heater’ problems are typically the front office, as they live and die by the member experience. Having the right technology and interface (along with a myriad of other things) tends to be the core focus when it comes to investing in process and technology improvements.
Where are the ‘leaky faucets’ usually located? The back office. Temporary workarounds are created and then become standard practice; these workarounds are largely unknown to the broader organization, time-consuming, and surprisingly easy to correct. They add up and lead to frustrations and inefficiencies, just like my family experienced with our sink. I recently sat down with Mike Lawson of CU Broadcast and John Wyatt, CIO of Apple Federal Credit Union, to discuss the value of a back-office assessment.
You can check out a clip of that conversation here:
To see the entire segment, click here (just scroll down to the bottom of the page). I enjoyed speaking with Mike and John and encourage everyone in the credit union arena to subscribe to CU Broadcast if you haven’t already – it’s a great show, and one I’ve enjoyed for years.
So, when is the last time you checked your faucets? We’d love to hear from you – reach out to me if you have questions or want to learn more about maximizing your back-office efficiency.
Reading about all the fintech lenders that have been re-imagining the customer digital journey, an outsider (or a PE investor?) would be left thinking that banks and credit unions are customer-journey-challenged elephants destined to lose customers: elephants sitting on their hands, even though they oftentimes have better rates. But this is far from true. “Online-oriented” banks and credit unions have an ever-strengthening hand to play. They are providing their customers and members with the service they’re looking for and outrunning the fintech models, not only by improving their customer experience but also by leveraging their sophisticated call center capabilities. Here are two examples:
My dear 80-year-old mother, my iPhone-using, cross-country skiing mother, recently got a talking-to from her son about how much more interest she could earn by moving her savings from a local brick-and-mortar-focused bank into an online outfit. We found that her interest rate would go from 0.5% to 4.0% instantly – it adds up. Easy, right? We fired up the iPad on Sunday morning and got her started on opening an account at one of the leading online banks that you’ve undoubtedly heard of. But no! She didn’t pass the onboarding identity check, as they couldn’t verify her existence, and she would need to wait to hear from a rep. Crazy, right? From my Gen X point of view, I was furious and impatient: “What do you mean it can’t be done instantaneously?!” Then again, I should have tempered this with the understanding that she hasn’t needed consumer credit since the 80s. Much to my surprise, come Monday, after I had returned home contemplating how hard it would be to act as ‘remote support’ for her ongoing digital journey, my mom was called by “a nice woman” who walked her through the process, and <boom> the online bank now has a significant, sticky, new online customer who is quite pleased with the rates.
What would a fintech mantra have called for? I’d bet a paycheck they’d respond only to those that made it through the digital native process. Before you think of that as ‘the cream of the crop strategy’, what percentage of society is that really? And does that percentage have as much money to put into an online savings account as the older generation? On the surface, her new online bank seems to care more about her making a bit of money on her money than… dare I say it… her hometown bank. By combining technology and training staff on end-to-end processes (this was not that call center agent’s first rodeo), the elephant can dance. The whole experience was a real surprise to me.
In another example, at a large online-oriented credit union client of ours, I got a front-row seat to the customer journey from the other side of the fence. I had the pleasure of meeting the heads of open banking, customer journey, and customer interaction at a holiday luncheon. Those groups don’t all talk together about end-to-end journeys that often. While open banking is at first perceived as “allowing our members to provide access to their personal financial data to others,” it is now being understood as an opportunity to initiate the customer journey: a journey that keeps a member from being pulled into the fintech journey, where the rate is much worse than what the credit union can provide! In this case, the online credit union is working to establish a small set of predictive features from the open banking request that will trigger outreach to the member, highlighting the basics of the credit union’s competing offer (as in: we think you are shopping for a personal loan; our rates are highly competitive, and we can approve you in 5 minutes…). It’s not hard to imagine the elephant moving to a more sophisticated dance (let’s call it a ‘cha-cha-cha’) by running a soft inquiry and providing a firm offer of credit, but I suggest that a timely, but basic, triggered communication will start to become very common. And, in most cases, it’s good enough.
Execution is the hardest part of any plan, and the team at CC Pace excels at helping to turn strategy into results. Give us a call or reach out to me on LinkedIn to explore the ways we can help you tap into a more efficient customer journey process. We know your elephant can dance, and we can help.
In a previous blog post, we covered the top 10 challenges organizations face when adopting Agile. In this article, we’ll dive deep into one of these challenges: “Inadequate management support and sponsorship.” We will explore why organizations face this challenge and what can be done to address it.
Lack of management support is one of the most widespread and yet hardest-to-uncover challenges. Agile transformation requires an enterprise-wide shift in mindset and culture, and that shift requires buy-in from all levels of the organization; without it, the entire effort can lose alignment.
With Agile transformations, middle management is usually caught, well, in the middle. While expected to embrace the significant amount of change they will be experiencing in their own roles and responsibilities, they are also expected to be the torchbearers and lead their teams through the transformation.
Traditional management in a non-Agile culture operates largely in a command-and-control mode. It’s how they make decisions, forecast, and get work done. Agile principles strive to decentralize some of that control in favor of building self-organized and autonomous teams. This factor directly threatens management’s perception of control and can make them feel a loss of power. The root of this problem can sometimes trace back to a poorly executed change management strategy, or lack thereof. Agile is not just a small incremental change in the way organizations plan and deliver, but a fundamentally different way altogether that requires careful planning and execution to accomplish.
Below are some of the indicators that show management is not fully aligned and supportive:
- Teams do not get enough support from management when delivery challenges manifest
- Management puts a greater emphasis on following the practices and mechanics than the mindset
- There is a lack of psychological safety in the organization
- Management overvalues Agile metrics and dashboards, leading to an increase in Agile antipatterns
- Agile coaches struggle to effectively coach and influence teams
- Key Agile tenets get ignored in favor of the sponsor’s wishes, e.g., artificial deadlines
To prevent this issue from becoming a challenge in the first place, it is important to acknowledge and plan for it while developing the transformation strategy. Here are a few ways to address this challenge:
- Address the “why”. Organizations can prevent resistance from happening by making sure management understands the “why” behind the change. Leadership needs to invest in and sponsor education and awareness campaigns to make sure people managers understand and align with the spirit of agile and are able to effectively articulate the value expected from the transformation.
- Engage and recruit management as advocates of change: Change is hard, and advocating change requires skill. If we want our management to be those advocates, it is necessary to ensure they have the right knowledge and training to play their part. Key members of management should be strategically identified and trained for advocacy, early in the transformation.
- Answer “What’s in it for me?”. While it might be well understood that Agile helps organizations deliver value early and often, it is human nature to seek personal benefit. Ensure managers get a clear understanding of their future, feel secure from a career and work-life standpoint, and can see the benefits they stand to receive if the transformation goes well.
- Let managers know that you have their back. Agile transformations are always full of ups and downs. And the ‘downs’ ideally should be learning opportunities. However, if management is held to their old expectations and they feel they will be penalized for failures as a result of the new delivery approach, they will be less inclined to support the transformation because of potential negative impacts. Leadership needs to understand and acknowledge how ‘fail fast’ manifests itself when using Agile, and ensure management understands that they won’t be penalized when failures do occur.
- Create a ‘Growth Roadmap’. Just as ‘what’s in it for me’ addresses the short term, it is important to address long-term personal benefits. As part of change management, ensure there is a clear understanding of how people will advance in their new roles over the next 3, 5, or 8 years, what resources they will have, what new knowledge and credentials they should pursue, and which career paths they can follow. Leadership should explain to middle management how to be successful in an Agile culture, and should recognize and reward managers for the new behaviors they want to see.
There is a lot that can be discussed about this challenge, and I would love to hear more from you. Have you faced similar challenges? What did you do to address them? At CC Pace, we love to engage with the community and provide value wherever possible. Reach out, we’d love to help.
Data science in simple terms takes large amounts of data and breaks it down to solve a problem or determine a specific pattern. Business analytics are used to evaluate data and related statistics to gain perspective on trends and interpretations for making organizational decisions.
While both data science and business analytics professionals draw insights from data using statistics and software tools, distinguishing between the two can be complicated. This article does a good job describing the key differences between them and the important role each plays in the way data is analyzed.
PI planning is considered the heartbeat of Scaled Programs. It is a high-visibility event that takes up considerable resources (and yes, people too). It is critical for organizations to realize the value of PI planning; otherwise, leadership tends to lose patience and gives up on the approach, leading to the organization sliding back on its SAFe Agile journey. There are many reasons PI planning can fall short of achieving its intended outcome. For the sake of quick readability, I will limit the scope of this post to the following five reasons, which I have found to have the most adverse effect.
- Insufficient preparation
- Inviting only a subset (team leads)
- Lack of focus on dependencies
- Risks not addressed effectively
- Not leaving a buffer
Let’s do a deeper dive and try to understand how each of the above anti-patterns in the SAFe implementation can impact PI planning.
Insufficient Preparation: By preparation, I don’t mean the event logistics and the preparation that goes along with it. The logistics are very important, but I am focusing on the content side of it. Often, the Product Owner and team members are entirely focused on current PI work, putting out fires, and scrambling to deliver on the PI commitments, so much so that even thinking about a future PI seems like an ineffective use of time. When that happens, teams often learn about the PI scope almost on the day of PI planning, or a few days before, which is not enough time to digest the information and provide meaningful input. PI planning should be an event where people from different teams/programs come together and collaboratively create a plan for the PI. To do that, participants need to know what they are preparing for and have time to analyze it, so that when they come to the table to plan, the discussions are not impeded by questions that require analysis. Specifically, this means that teams should know well in advance which top 10 features they should prioritize, what the acceptance criteria are, and which teams will be involved in delivering those features. This gives the involved teams a runway to analyze the features, iron out any unknowns, and come to the table ready to have a discussion that leads to a realistic plan.
Inviting only a subset: As I said in the beginning, PI planning is a high-cost event. Many leaders see it as a waste of resources and choose to only include team leads/SMs/POs/Tech Leads/Architects and managers in the planning. This is more common than you might think. It might seem obvious why this is not a good practice, but let’s do a deep dive to make sure we are on the same page. The underlying assumption behind inviting a subset of people is that the invitees are experts in their field and can analyze, estimate, plan, and commit to the work with high accuracy. What’s missing from that assumption is that they are committing on behalf of someone else (the teams that are actually going to perform the work) with entirely different skill levels, understanding of the systems, organizational processes, and people. The big gap that emerges from this approach to planning is that work analyzed by a subset of folks tends not to account for quite a few complexities in the implementation, and the estimate is often based on the expert’s own assessment of effort. Teams do not feel ownership of the work, because they didn’t plan for or commit to it, and eventually the delivery turns into a constant battle of sticking to the plan and putting out fires.
Lack of focus on dependencies: The primary focus of PI planning should be the coordination and collaboration between teams/programs. Effectively identifying the dependencies that exist between teams and proper collaboration to resolve them is a major part of the planning event to achieve a plan with higher accuracy. However, teams sometimes don’t prioritize dependency management high enough and focus more on doing team-level planning, writing stories, estimating, adding them to the backlog, and slotting them for sprints. The dependencies are communicated, but the upstream and downstream teams don’t have enough time to actually analyze and assess the dependency and make commitments with confidence. The result is a PI plan with dependencies communicated to respective teams but not fully committed. Or even worse, some of the dependencies are missed only to be identified later in the PI when the team takes up the work. A mature ART prioritizes dependencies and uses a shift-left approach to begin conversations, capture dependencies, and give ample time to teams to analyze and plan for meeting them.
Risks not addressed effectively: During PI planning, the primary goal of program leadership should be to actively seek and resolve risks. I will acknowledge that most leaders do try to resolve the risks, but when teams bring up risks that require tough decisions, change in prioritization, and a hard conversation with a stakeholder, program leadership is not swift to act and make it happen. The risk gets “owned” by someone who will have follow-up action items to set up meetings to talk about it. This might seem like the right approach, but it ends up hurting the teams that are spending so much time and effort to come up with a reasonable plan for the PI. There is nothing wrong with “owning” the risks and acting on them in due time, however, during PI planning, time is of the essence. A risk that is not resolved right away, can lead to plans based on assumptions. If the resolution, which happens at a later date/time, turns out to be different from the original assumption made by the team, it can lead to changes in the plan and usually ends up putting more work on the team’s plate. The goal should be to completely “resolve” as many risks as possible during planning, and not avoid tough conversations/decisions necessary to make it happen.
Not leaving a buffer: We all know that trying to maximize utilization is not a good practice. Most leaders encourage teams to leave a buffer during the planning context on the first day. But, in practice, most teams have more in the backlog than they can accomplish in a PI. During the 2 days of planning, it is usually a battle to fit in as much work as possible to make the stakeholders happy. For programs that are just starting to use SAFe, even the IP sprint gets eaten up by planned feature development work. One of the root causes for this is having a false sense of accuracy in the plan. Teams tend to forget that this is a plan for about 5 to 6 sprints that span a quarter. A 1-sprint plan can be expected to have a higher level of accuracy because of a shorter timebox, less scope, and more refined stories. However, when a program of more than 50 people (sometimes close to 150 people) plans for a scope full of interdependencies, expecting the same level of accuracy is a recipe for failure. To make sure the plan is realistic, teams should leave the needed buffer so they can adjust course when changes occur.
As I mentioned at the start of this post, there are many ways a high-stakes event like PI planning can fail to achieve the intended outcomes. These are just the ones I have experienced first-hand. I would love to know your thoughts and hear about some of the anti-patterns that affected your PI Planning and how you went about addressing them.
When I was first introduced to Agile development, it felt like a natural flow for developers and business stakeholders to collaborate and deliver functionality in short iterations. It was rewarding (and sometimes disappointing) to demo features every two weeks and get direct feedback from users. Continuous Integration tools matured to make the delivery process more automated and consistent. However, the operations team was left out of this process. Environment provisioning, maintenance, exception handling, performance monitoring, security – all these aspects were typically deprioritized in favor of keeping the feature release cadence. The DevOps/DevSecOps movement emerged as a cultural and technical answer to this dilemma, advocating for a much closer relationship between development and operations teams.
Today, companies are rapidly expanding their cloud infrastructure footprint. What I’ve heard from discussions with customers is that the business value driven by the cloud is simply too great to ignore. However, much like the relationship between development and ops teams during the early Agile days, a gap is forming between Finance and DevOps teams. Traditional infrastructure budgeting and planning doesn’t work when you’re moving from a CapEx to OpEx cost structure. Engineering teams can provision virtually unlimited cloud resources to build solutions, but cost accountability is largely ignored. Call it the pandemic cloud spend hangover.
Our customers see the flexibility of the cloud as an innovation driver rather than simply an expense. But they still need to understand the true value of their cloud spend – which products or systems are operating efficiently? Which ones are wasting resources?
I decided to look into FinOps practices to discover techniques for optimizing cloud spend. I researched the FinOps Foundation and read the book, Cloud FinOps. Much like the DevOps movement, FinOps seeks to bring cross-functional teams together before cloud spend gets out of hand. It encompasses both cultural and technical approaches.
Here are some questions that I had before and the answers that I discovered from my research:
Where do companies start with FinOps without getting overwhelmed by yet another oversight process?
Start by understanding where your costs are allocated. Understand how the cloud provider’s billing details are laid out and seek to apply the correct costs to a business unit or project team. Resource tagging is an essential first step to allocating costs. The FinOps team should work together to come up with standard tagging guidelines.
Don’t assume the primary goal is cost savings. Instead, approach FinOps as a way to optimize cloud usage to meet your business objectives. Encourage reps from engineering and finance to work together to define objectives and key results (OKRs). These objectives may be different for each team/project and should be considered when making cloud optimization recommendations. For example, if one team’s objective is time-to-market, then costs may spike as they strive to beat the competition.
What are some common tagging/allocation strategies?
Cloud vendors provide granular cost data down to the millisecond of usage. For example, AWS Lambda recently went from rounding to the nearest 100ms of duration to the nearest millisecond. However, it’s difficult to determine what teams/projects/initiatives are using which resources and for how long. For this reason, tagging and cost allocation are essential to FinOps.
According to the book, there are generally two approaches for cost allocation:
- Tagging – these are resource-level labels that provide the most granularity.
- Hierarchy-based – these are at the cloud account or subscription-level. For example, using separate AWS accounts for prod/dev/test environments or different business units.
Their recommendation is to start with hierarchy-based allocations to ensure the highest level of coverage. Tagging is often overlooked or forgotten by engineering teams, leading to unallocated resources. This doesn’t mean you should skip tags; rather, make sure you have a consistent strategy for tagging resources so that team expectations are set.
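To make the tagging guidance concrete, here is a minimal sketch in TypeScript using the AWS CDK, showing how a team might apply a standard set of cost-allocation tags at the stack level. The tag keys and values shown are assumptions for illustration, not an official FinOps standard.

```typescript
import { App, Stack, Tags } from 'aws-cdk-lib';

// Hypothetical tag standard agreed upon by the FinOps team.
const COST_TAGS = {
  CostCenter: 'cc-1234',     // assumed cost center code
  BusinessUnit: 'lending',   // assumed business unit
  Environment: 'prod',       // prod | dev | test
  Project: 'member-portal',  // assumed project name
};

const app = new App();
const stack = new Stack(app, 'MemberPortalStack');

// Apply the standard tags to every taggable resource in the stack,
// so the cloud provider's billing exports can be grouped by these keys.
for (const [key, value] of Object.entries(COST_TAGS)) {
  Tags.of(stack).add(key, value);
}
```

Applying tags at the stack (or app) level like this keeps the standard in one place, so individual engineers don’t have to remember to tag every resource by hand.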
How do you adopt a FinOps approach without disrupting the development team and slowing down their progress?
The nature of usage-based cloud resources puts spending responsibility on the engineering team since inefficient use can affect the bottom line. This is yet another responsibility that “shifts left”, or earlier in the development process. In addition to shifting left on security/testing/deployment/etc., engineering is now expected to monitor their cloud usage. How can FinOps alleviate some of this pressure so developers can focus on innovation?
Again, collaboration is key. Demands to reduce cloud spend cannot be a one-way conversation. A key theme in the book is to centralize rate reduction and decentralize usage reduction (cost avoidance).
- Engineering teams understand their resource needs so they’re responsible for finding and reducing wasted/unused resources (i.e., decentralized).
- Rate reduction techniques like using reserved instances and committed use discounts are best handled by a centralized FinOps team. This team takes a comprehensive view of cloud spend across the organization and can identify common resources where reservations make sense.
Usage reduction opportunities, such as right sizing or shutting down unused resources, should be identified by the FinOps team and provided to the engineering teams. These suggestions become technical debt and are prioritized along with other work in the backlog. Quantifying the potential savings of a suggestion allows the team to determine if it’s worth spending the engineering hours on the change.

https://www.finops.org/projects/encouraging-engineers-to-take-action/
How do you account for cloud resources that are shared among many different teams?
Allocating cloud spend to specific teams or projects based on tagging ensures that costs are distributed fairly and accurately. But what about shared costs like support charges? The book provides three examples for splitting these costs (a quick sketch of the proportional approach follows the list):
- Proportional – Distribute proportionally based on each team’s actual cloud spend. The more you spend, the higher the allocation of support and other shared costs. This is the recommended approach for most organizations.
- Evenly – split evenly among teams.
- Fixed – Pre-determined fixed percentage for each team.
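As a quick illustration of the proportional approach, here is a small TypeScript sketch that spreads a shared charge (say, an enterprise support fee) across teams in proportion to their direct, tagged cloud spend. The team names and dollar figures are made up.

```typescript
// Direct (tagged) cloud spend per team for the billing period, in dollars.
const directSpend: Record<string, number> = {
  payments: 42_000,
  lending: 18_000,
  platform: 60_000,
};

// A shared charge that no single team owns, e.g. enterprise support.
const sharedCost = 12_000;

const totalDirect = Object.values(directSpend).reduce((a, b) => a + b, 0);

// Each team's share of the common cost is proportional to its direct spend.
const allocation = Object.fromEntries(
  Object.entries(directSpend).map(([team, spend]) => [
    team,
    Number(((spend / totalDirect) * sharedCost).toFixed(2)),
  ])
);

console.log(allocation);
// { payments: 4200, lending: 1800, platform: 6000 }
```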
Overall, I thought the authors did a great job of introducing Cloud FinOps without overwhelming the reader with another rigid set of practices. They encourage the Crawl/Walk/Run approach to get teams started on understanding their cloud spend and where they can make incremental improvements. I had some initial concerns about FinOps bogging down the productivity and innovation coming from engineering teams. But the advice from practitioners is to provide data to inform engineering about upward trends and cost anomalies. Teams can then make decisions on where to reduce usage or apply for discounts.
The cloud providers are constantly changing, introducing new services and cost models. FinOps practices must also evolve. I recommend checking out the Cloud FinOps book and the related FinOps Foundation website for up-to-date practices.
We’ve been using the AWS Amplify toolkit to quickly build out a serverless infrastructure for one of our web apps. The services we use are IAM, Cognito, API Gateway, Lambda, and DynamoDB. We’ve found that the Amplify CLI and platform is a nice way to get us up and running. We then update the resulting CloudFormation templates as necessary for our specific needs. You can see our series of videos about our experience here.
The Problem
However, starting with the Amplify CLI version 7, AWS changed the way to override Amplify-generated resource configurations in the form of CFT files. We found this out the hard way when we tried to update the generated CFT files directly. After upgrading the CLI and then calling amplify push, our changes were overridden with default values – NOT GOOD! Specifically, we wanted to add a custom attribute to our Cognito pool.
After a few frustrating hours of troubleshooting and support from AWS, we realized that the Amplify CLI tooling changed how to override Amplify-generated content. AWS announced the changes here, but unfortunately, we didn’t see the announcement or accompanying blog post.
The Solution
Amplify now generates an “override.ts” TypeScript file for you to provide your own customizations using Cloud Development Kit (CDK) constructs.
In our case, we wanted to create a Cognito custom attribute. Instead of changing the CFT directly (under the new “build” folder in Amplify), we generate an “override.ts” file using the command: “amplify override auth”. We then added our custom attribute using the CDK:
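The file we used was shown as a screenshot in the original post. As a rough sketch, an override.ts generated by “amplify override auth” can look something like the following; the custom attribute name (tenantId) is a hypothetical placeholder, and the exact typing of the schema property may vary by CLI version.

```typescript
import { AmplifyAuthCognitoStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyAuthCognitoStackTemplate) {
  // Append a custom attribute to the Cognito user pool schema.
  // 'tenantId' is a hypothetical attribute name used only for illustration;
  // clients will see it as 'custom:tenantId'.
  const existingSchema = (resources.userPool.schema as any[]) ?? [];
  resources.userPool.schema = [
    ...existingSchema,
    {
      attributeDataType: 'String',
      developerOnlyAttribute: false,
      mutable: true,
      name: 'tenantId',
      required: false,
    },
  ];
}
```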
Important Note: The amplify folder structure gets changed starting with CLI version 7. To avoid deployment issues, be sure to keep your CLI version consistent between your local environment and the build settings in the AWS console. Here’s the Amplify Build Setting window in the console (note that we’re using “latest” version):
If you’re upgrading your CLI, especially to version 7, make sure to test deployments in a non-production environment, first.
What are some other uses for this updated override technique? The Amplify blog post and documentation mention examples like Cognito overrides for password policies and IAM roles for auth/unauth users. They also mention S3 overrides for bucket configurations like versioning.
For DynamoDB, we’ve found that Amplify defaults to a provisioned capacity model. There are benefits to this, but this model charges an hourly rate for consumption whether you use it or not. This is not always ideal when you’re building a greenfield app or a proof-of-concept. We used the amplify override tools to set our billing mode to On-demand or “Pay per request”. Again, this may not be ideal for your use case, but here’s the override.ts file we used:
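The override.ts we used was also shown as an image in the original post. Below is a sketch of the general shape of a storage override (generated via “amplify override storage”) that switches the table to on-demand billing; treat the exact property names as assumptions to verify against the types generated by your CLI version.

```typescript
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Switch from the default provisioned-capacity model to on-demand billing
  // ("Pay per request"), which suits spiky or low-volume proof-of-concept traffic.
  resources.dynamoDBTable.billingMode = 'PAY_PER_REQUEST';

  // Provisioned throughput settings no longer apply under on-demand billing.
  // (Assumption: depending on the CLI version, this may need to be cleared differently.)
  resources.dynamoDBTable.provisionedThroughput = undefined as any;
}
```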
Conclusion
At first, I found this new override process frustrating since it discourages direct updates to the generated CFT files. But I suppose this is a better way to abstract out your own customizations and track them separately. It’s also a good introduction to the AWS CDK, a powerful way to program your environment beyond declarative yaml files like CFT.
Further reading and references:
DynamoDB On-Demand: When, why and how to use it in your serverless applications
Authentication – Override Amplify-generated Cognito resources – AWS Amplify Docs
Override Amplify-generated backend resources using CDK | Front-End Web & Mobile (amazon.com)
Top reasons why we use AWS CDK over CloudFormation – DEV Community
Here is our final video in the 3-part series Building and Securing Serverless Apps using AWS Amplify. In case you missed Part 1 you can find it here along with Part 2 here. Please let us know if you would like to learn more about this series!
The video below is Part 2 of our 3-part series: Building and Securing Serverless Apps using AWS Amplify. In case you missed Part 1 – take a look at it here. Be sure to stay tuned for Part 3!
AWS Amplify is a set of tools that promises to make full-stack, cloud-native development quicker and easier. We’ve used it to build and deploy different products without getting bogged down by heavy infrastructure configuration. On one hand, Amplify gives you a rapid head start with services like Lambda functions, APIs, CI/CD pipelines, and CloudFormation/IaC templates. On the other hand, you don’t always know what it’s generating and how it’s securing your resources.
If you’re curious about rapid development tools that can get you started on the road to serverless but want to understand what’s being created, check out our series of videos.
We’ll take a front-end web app and incrementally build out authentication, API/function, and storage layers. Along the way, we’ll point out any gotchas or lessons learned from our experience.
Background
We are all human and tend to take the easy route when we come across certain scenarios in life. Remembering passwords is one of the most common of these, and we often create a password that can be easily remembered to avoid the trouble of resetting it in case we forget it. In this blog, I am going to discuss a tool called “Have I Been Pwned” (HIBP) that helps us find out whether a password has been seen in recent cybersecurity or data breaches.
What is HIBP? What is it used for?
“Have I Been Pwned” is an open-source initiative that helps people check if their login information has been included in breached data archives circulating on the dark web. In addition, it allows users to check how often a given password has been found in the dataset, testing the strength of a password against dictionary-style brute-force attacks. Recently, the FBI announced that it will work closely with the HIBP team to share breached passwords for users to check against. This open-source initiative is going to help a lot of customers avoid using breached passwords when creating accounts on the web. We used the HIBP API to help our customers who use custom web-based applications get alerted to any pwned passwords used while creating accounts. This way, users will know not to use breached passwords that have been seen multiple times on the dark web.
How does it work?
HIBP stores more than half a billion pwned passwords that have previously been exposed in data breaches. The entire data set is both downloadable and searchable online via the Pwned Passwords page. Each entry is the SHA-1 hash of a UTF-8 encoded password, followed by a colon (:) and the count of times that password has been seen, with entries separated by CRLF line breaks.
If we use an API to check online whether a password has been breached, we cannot send the actual source password over the web, as that would compromise the password the user entered during account creation.
To maintain anonymity and protect the value of the source password being searched for, Pwned Passwords implements a k-Anonymity model that allows a password to be searched for by partial hash using a range search. We only need to pass the first 5 characters of the SHA-1 password hash (not case-sensitive) to the API, which responds with the suffix of every hash beginning with that prefix, followed by a count of how many times each appears in the dataset. The API consumer can then check the results locally by comparing the suffix of the source password hash against the returned suffixes. If the source hash is not found in the results, the password has not appeared in a known breach to date.
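To make the flow concrete, here is a small TypeScript (Node.js 18+) sketch of the k-anonymity check described above. The function and variable names are our own, but the endpoint and response format are those of the public Pwned Passwords range API.

```typescript
import { createHash } from 'crypto';

// Returns how many times the password appears in the HIBP data set (0 = not found).
async function pwnedCount(password: string): Promise<number> {
  // 1. SHA-1 hash of the UTF-8 encoded password, upper-cased hex.
  const hash = createHash('sha1').update(password, 'utf8').digest('hex').toUpperCase();
  const prefix = hash.slice(0, 5); // only these 5 characters leave the machine
  const suffix = hash.slice(5);

  // 2. Range query: the API returns every suffix that starts with the prefix.
  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await res.text();

  // 3. Each line is "<35-char suffix>:<count>"; look for our suffix locally.
  for (const line of body.split('\n')) {
    const [candidate, count] = line.trim().split(':');
    if (candidate === suffix) {
      return parseInt(count, 10);
    }
  }
  return 0;
}

// Example: warn the user during sign-up if the password has been seen before.
pwnedCount('P@ssword').then((count) => {
  if (count > 0) {
    console.warn(`This password has appeared ${count} times in known breaches.`);
  }
});
```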
Integrated Solution
Pass2Play is one of our custom web-based solutions where we integrated the password breach API to detect any breached passwords during the sign-up process. Below is the workflow:
- The user goes to sign up for the account.
- Enters username and password to sign up.
- After entering the password, the user gets a warning message if the password was ever breached, along with how many times it has been seen.
In the above screen, the user entered the password “P@ssword” and got a warning message that clearly says the entered password has been seen 7491 times in the dataset circulating on the dark web. We do not want our users choosing such passwords for their accounts, which could later be compromised using dictionary-style brute-force attacks.
Architecture and Process flow diagram:
API Request and Response example:
SHA-1 hash of P@ssword: 9E7C97801CB4CCE87B6C02F98291A6420E6400AD
API GET: https://api.pwnedpasswords.com/range/9E7C9
Response: Returns 550 lines of hash suffixes that match the first 5 characters
The highlighted text in the above image is the suffix that, combined with the 5-character prefix of the source password’s hash, matches the source password; it has been seen 7491 times.
Conclusion
I would like to conclude this blog by saying that integrating such methods into your applications can help organizations avoid larger security issues, since passwords are still the most common way of authenticating users. Alerting end users during account creation makes them aware of breached passwords and also trains them to use strong passwords.
Recently, I read an article titled, “Why Distributed Software Development Teams Work Infinitely Better”, by Boris Kontsevoi.
It’s a bit hyperbolic to say that distributed teams work infinitely better, but it’s something that any software development team should consider now that we’ve all been distributed for at least a year.
I’ve worked on Agile teams for 10-15 years and thought that they implicitly required co-located teams. I also experienced the benefits of working side-by-side with (or at least close to) other team members as we hashed out problems on whiteboards and had ad hoc architecture arguments.
But as Mr. Kontsevoi points out, Agile encourages face-to-face conversation, but not necessarily in the same physical space. The Principles behind the Agile Manifesto were written over 20 years ago, but they’re still very much relevant because they don’t prescribe exactly “how” to follow the principles. We can still have face-to-face conversations, but now they’re over video calls.
This brings me to a key point of the article: “dispersed teams outperform co-located teams and collaboration is key”. The Manifesto states that building projects around motivated individuals is a key Agile principle.
Translation: collaboration and motivated individuals are essential for a distributed team to be successful.
- You cannot be passive on a team that requires everyone to surface questions and concerns early so that you can plan appropriately.
- You cannot fade into the background on a distributed team, hoping that minimal effort is good enough.
- If you’re leading a distributed team, you must encourage active participation by having regular, collaborative team meetings. If there are team members that find it difficult to speak above the “din” of group meetings, seek them out for 1:1 meetings (also encouraged by Mr. Kontsevoi).
Luckily, today’s tools are vastly improved for distributed teams. They allow people to post questions on channels where relevant team members can respond, sparking ad hoc problem-solving sessions that can eventually lead to a video call.
Motivated individuals will always find a way to make a project succeed, whether they’re distributed, co-located, or somewhere in between. The days of tossing software development teams into a physical room to “work it out” are likely over. The new distributed paradigm is exciting and, yes, better – but the old principles still apply.
Last year, we worked with experts from George Mason University to build a COVID screening and tracing platform called Pass2Play. We used this opportunity to implement a Serverless architecture using the AWS cloud.
This video discusses our experience, including our solution goals, high-level design, lessons learned and product outcomes.
It’s specific to our situation, but we’d love to hear about other experiences with the Serverless tools and services offered by AWS, Azure and Google. There are a lot of opinions on Serverless, but there’s no doubt that it’s pushing product developers to rethink their delivery and maintenance processes.
Feel free to leave a comment if we’re missing anything or to share your own experience.
Small and medium sized companies are trying to return to “normal.” This short blog by guest blogger, Dr. Amira Roess, provides some guidelines. Dr. Amira Roess is a Professor of Global Health and Epidemiology at George Mason University.
It’s not going to be easy, and we may not get back to a pre-COVID workplace for another few years, but it can be done.
A critical factor is employee peace-of-mind. There are three actions an employer can take to ensure employee peace-of-mind:
- Take steps to prevent employees that may be infected from coming to work (i.e. Daily Symptom and Exposure Screening)
- Take steps to remove opportunities to become infected in the workplace (i.e. workplace hygiene and air flow)
- Take steps to rapidly remove employees from workplace that have been exposed (i.e. Accurate and Rapid Contact Tracing)
Daily Symptom & Exposure Screening apps provide a simple way for employees to check their symptoms. The questions on the screening app should be curated by an epidemiologist based on the latest scientific finding from the Center for Disease Control (CDC) as well as other credible sources.

An app that automatically sends daily reminders and/or alerts to employees to complete the screening can reduce the workload in managing the process.
Workplace hygiene must be maintained. Employees must avoid using other employees’ phones, desks, offices or other work tools and equipment, when possible. If you cannot avoid using someone else’s workstation, clean and disinfect before and after use.
Clean and disinfect frequently touched objects and surfaces, like workstations, keyboards, telephones, handrails and doorknobs. Dirty surfaces can be cleaned with soap and water before disinfection. Choose the right disinfectant for your surface from EPA’s List N: Disinfectants for Coronavirus (COVID-19).
Wear a mask in all shared spaces, especially where staying 6 feet apart (about two arm lengths) is not possible. Interacting without wearing a mask increases your risk of getting infected. Note: wearing a mask does not replace the need to practice social distancing.
Employees should wash hands often with soap and water for at least 20 seconds or use hand sanitizer with at least 60% alcohol if soap and water are not available. If your hands are visibly dirty, use soap and water over hand sanitizer.
As all medical professionals know, avoid touching your eyes, nose, and mouth if you haven’t washed your hands.
Employees must remember to cover mouth and nose with a tissue when coughing or sneezing, or use the inside of your elbow. Throw used tissues into no-touch trash cans and immediately wash hands with soap and water for at least 20 seconds.
Indoor spaces should be evaluated to ensure that maximum airflow is supported. High quality portable HEPA filters can provide an additional layer of protection.
Contact Tracing is very important. Should any employee find out that they are positive for COVID, anybody who was exposed (i.e., within 6 feet of them for more than 15 minutes) should be notified and instructed to quarantine immediately.

Here is a link to the CDC website with Quarantine instructions. https://www.cdc.gov/coronavirus/2019-ncov/if-you-are-sick/quarantine.html.
It is strongly recommended to use an automated Contact Tracing system to get accurate time and distance between employees. These systems can also identify where workplace operations result in unintended congregation.
CC Pace has teamed up with George Mason Professors, Dr. Roess and Dr. Lance Shirley, to create Pass2Play.
Pass2Play is a combined Daily Symptom & Exposure Screening and Contact Tracing App that is designed to Provide for Employee Peace-Of-Mind, Ensure Employee Health & Safety, and Maximize Workspace Uptime.
For a demo and purchase: sales@pass2play.com
CC Pace was recently featured in Agile Uprising’s blog series. Agile Uprising is a network focused on the advancement of the Agile mindset and professional networking between leading Agilists. For the blog, CC Pace created a short video highlighting one of our latest projects! Bobby Pantall, CC Pace Lead Technology Consultant, speaks to our experience building an app for a startup company named Twisty Systems. The app we developed is a navigation app aimed at driving enthusiasts. In the video, we describe the framework of the Lean Startup methodology and some of the highs and lows in the context of the pandemic and releasing a new app.
Enjoy and please share your thoughts on this project!
As 2020 has unfolded, our development team has been working on a brand new app: Pass2Play! Check out the video below to see all of its features and capabilities!
To learn more about Pass2Play click here!
In the first installment of our Product Owner Empowerment series, we talked about the three crucial dimensions of ‘Knowledge’ that affect a Product Owner’s effectiveness. This post is going to take a deeper dive into the impact Empathy has on a Product Owner’s ability to succeed.
Empathy: Assuming positive intent, empathy is something that comes naturally to a person. However, environmental factors can influence a person’s ability to relate or connect with another person or team. Let’s explore some aspects of empathy and how they may impact a Product Owner’s success.
- Empathy towards the team(s): To facilitate an empathetic relationship between a Product Owner and the team, the PO must be able to meet the team where they are (literally and figuratively). Getting to know the team members and building a rapport requires the Product Owner to extensively interact with the team and proactively work to build such relationships. Organizations should facilitate this by making sure Product Owners are physically located where the team is and are empowered not only to represent the team to the business but also to protect the team from external interruptions, so that the team can function effectively. As alluded to above, having a good understanding of what it takes to deliver helps tremendously with the ability to place themselves in the team’s shoes and see things from their perspective.
- Empathy towards the customer(s): It is easy to assume that a Product Owner acting on behalf of the business will automatically have empathy and an understanding of their needs to adequately represent their business interests. However, organizational culture can sometimes influence how a Product Owner prioritizes work. If it is only the sponsors directing the team’s scope and prioritization, a critical element of customer input is missed. Product Owners should place sufficient emphasis on obtaining customer opinion and feedback to inform the direction of product development.
- Empathy in the Organization: This factor relates to the organizational culture. As companies embrace Agile and expect its benefits to be equally realized, more emphasis on the desire to be lean begins to form. While being lean is a goal every organization should have, it is important to understand what kind of impact a lean organization has on individual teams or team members. A systemic push to be lean, in combination with less-than-optimal Agile maturity and the presence of antipatterns, can lead to teams being held to unsustainable delivery expectations. This problem is more common than you would think. Most organizations are going through some level of Agile transformation, but leadership expectations of benefits realization are always a few steps ahead of where the organization truly is on the Agile journey. Having the right set of expectations, and the empathy necessary to reset them based on continuous learning and feedback, is needed at an organizational level.
Check back next week to see how a Product Owner’s success is tied to psychological safety for themselves and the teams they are working with.
If you are a Product Owner or your Agile team struggles with this role, you won’t want to miss our upcoming webinar on Product Owner Empowerment. The webinar will be held on December 15th and you can register here today! Space is limited and available on a first-come, first-served basis.
I have a deep interest in cybersecurity, and to keep up with the latest threats, policies and security practices, I became a member of the ACT-IAC organization and enrolled in its Cybersecurity Community of Interest group. That is where I got the opportunity to work as a volunteer on the Zero Trust Architecture Phase 2 project. In this series I will share the knowledge I gained about ZTA strategy and principles. I am planning to break the blog into four parts, based on how the project progresses:
- What is ZTA?
- Real world deployment scenarios
- ZTA core capabilities
- Vendors providing ZTA capabilities
What is ZTA and how did it come into existence?
Traditionally, perimeter-based security has been used to protect the network infrastructure behind a firewall: once users are authenticated, they can access all the resources behind the firewall, because all network users and devices are assumed to be trustworthy. This has caused many security breaches across the globe, where attackers could move laterally and exploit resources they were not authorized to use. The attackers only had to get through the firewall; from there they could crawl across any resource available on the network, causing damage in the form of data loss and the other financial consequences that can come with ransomware attacks.
Today, an enterprise’s infrastructure spans several networks: cloud-based services, remote users connecting from their own networks using enterprise-owned or personal devices (laptops, mobile devices), and network locations that change based on where the users and devices connect from (public Wi-Fi, internal enterprise networks, and so on). These complex use cases made it necessary to move away from perimeter-based security to “perimeterless” security (not confined to one network infrastructure), which led to the evolution of a new concept called “Zero Trust”, where you “trust no one, but verify”. The ZT approach is primarily focused on data protection, but it can be applied across other enterprise assets such as users, devices, applications and infrastructure.
ZTA is an enterprise cybersecurity strategy that prevents data breaches and limits lateral movement within the network infrastructure. It assumes that any internal or external agent (user, device, application, infrastructure) that wants to access an enterprise resource (on the internal network or externally in the cloud) is not trustworthy and must be verified for each request before access is granted.
What does Zero Trust mean in a ZTA?

(Image courtesy: NIST SP 800-207 publication)
In the above diagram, a user who is trying to access a resource must go through the PDP/PEP. The PDP/PEP decides whether to grant access to the request based on enterprise policies (data/access/risk), user identity, device profile, location of the user, time of request and any other attributes needed to gain enough confidence. Once access is granted, the user is in an “Implicit Trust Zone” where they can access resources as the network infrastructure design allows. The “Implicit Trust Zone” is like the boarding area of an airport, where all passengers are considered trustworthy once they have passed through the immigration/security check.
You can still limit access to certain resources in the network using a concept called “micro-segmentation”. For example, after getting through the security check and reaching the boarding area, passengers are checked again at the boarding gate to make sure they are boarding the flight they are authorized for. That is what micro-segmentation means: resources are isolated into smaller segments, and access requests for them are verified separately, in addition to the PDP/PEP check.
Tenets of ZTA (as per the NIST SP 800-207 publication):
All resources, whether data or services, should communicate in a secure fashion irrespective of their network location. Each individual access request is verified before access to any resource is granted, based on the client’s identity, the device used to make the request, the type of application used, location coordinates and other behavioral attributes. Each access request is authenticated and authorized dynamically and strictly enforced. In addition, the enterprise should collect all activity information, log decisions, audit logs and monitor the network infrastructure to improve its overall security posture.
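To make these tenets a bit more concrete, here is a minimal sketch in Python of how a per-request confidence score might be computed and enforced. The attribute names, weights and threshold are hypothetical illustrations, not any vendor’s API or the NIST specification.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool
    device_compliant: bool      # e.g., patched, enterprise-managed
    location: str               # e.g., "corporate", "home", "public-wifi"
    resource: str
    timestamp: datetime

def confidence_score(req: AccessRequest) -> float:
    """Combine identity, device and context signals into a score between 0.0 and 1.0."""
    score = 0.4 if req.mfa_verified else 0.0
    score += 0.3 if req.device_compliant else 0.0
    score += {"corporate": 0.3, "home": 0.2, "public-wifi": 0.1}.get(req.location, 0.0)
    return score

def evaluate(req: AccessRequest, required: float = 0.7) -> bool:
    """Decide per request; no implicit trust carries over from earlier requests."""
    allowed = confidence_score(req) >= required
    # Every decision is logged so the enterprise can audit and improve its security posture.
    print(f"{req.timestamp.isoformat()} user={req.user_id} resource={req.resource} allowed={allowed}")
    return allowed
```

The point of the sketch is simply that every request is scored and logged on its own merits; a real deployment would pull these signals from identity providers, device management and threat intelligence feeds.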
What are the logical components of ZTA?

(Image courtesy: NIST SP 800-207 publication)
Policy Engine: Responsible for making and logging access decisions based on enterprise policy and inputs from external sources (CDM, threat intelligence, etc.), granting or denying each request.
Policy Administrator: Responsible for establishing or killing the communication path between the subject and the enterprise resource, based on the decision made by the PE. It can generate the authentication tokens the client uses to access the resource. The PA communicates with the PEP via the control plane.
Policy Enforcement Point: Responsible for enabling, monitoring and terminating the communication between the subject and the enterprise resource. It can be a single logical component or can be broken into two components: a client-side agent and a resource-side gateway that controls access. Beyond the PEP lies the “Implicit Trust Zone” containing the enterprise resources.
Control Plane/Data Plane: The control plane is made up of components that receive and process requests from data plane components that wish to access network resources. The control and data planes are best thought of as zones in the ZTA. Every resource, device and user within the network can have its own control plane component to decide whether data should be routed further or not; the diagram simply illustrates how the control plane works on behalf of the data plane components. The data plane passes packets around, and the control plane routes them appropriately based on the decisions made.
Note: The dotted line that you see in the image above is the hidden network that is used for communication between the various logical components.
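As a rough illustration of how these components interact, the sketch below models the PE, PA and PEP as plain Python classes. The policy format, token handling and resource names are simplified assumptions of mine, not part of the NIST specification.

```python
import secrets

class PolicyEngine:
    """Grants or denies a request based on enterprise policy and subject attributes."""
    def __init__(self, policy):
        self.policy = policy  # e.g. {"finance-db": {"roles": {"analyst"}, "compliant_device": True}}

    def decide(self, subject, resource):
        rule = self.policy.get(resource)
        if rule is None:
            return False
        device_ok = subject["device_compliant"] or not rule["compliant_device"]
        return subject["role"] in rule["roles"] and device_ok

class PolicyAdministrator:
    """Establishes (or refuses) the communication path based on the PE's decision."""
    def __init__(self, engine):
        self.engine = engine
        self.sessions = {}

    def request_path(self, subject, resource):
        if not self.engine.decide(subject, resource):
            return None
        token = secrets.token_hex(8)          # short-lived credential for this session only
        self.sessions[token] = resource
        return token

class PolicyEnforcementPoint:
    """Sits in front of the resource and only forwards traffic carrying a valid token."""
    def __init__(self, administrator):
        self.administrator = administrator

    def forward(self, token, resource):
        return self.administrator.sessions.get(token) == resource

# Example flow: an analyst on a compliant device asks for the finance database.
pe = PolicyEngine({"finance-db": {"roles": {"analyst"}, "compliant_device": True}})
pa = PolicyAdministrator(pe)
pep = PolicyEnforcementPoint(pa)

token = pa.request_path({"role": "analyst", "device_compliant": True}, "finance-db")
print(pep.forward(token, "finance-db"))  # True: traffic enters the implicit trust zone
```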
Why should organizations adopt ZTA?
When adopting a ZTA, organizations must weigh all the potential benefits, risks, costs, and ROI. Core ZT outcomes should be focused on creating secure networks, securing data that travels within the network or at rest, reducing impacts during breaches, improving compliance and visibility, reducing cybersecurity costs and improving the overall security posture of an organization.
Lost or stolen data, ransomware attacks, and network and application layer breaches cost organizations huge financial losses and damage their market reputation. It takes a lot of time and money for an organization to return to normal after a severe security breach. ZT adoption can help organizations avoid such breaches, which is key to surviving in today’s world, where state-funded hackers are always ahead of the game.
As with all technology changes, the biggest challenge in demonstrating higher ROI and lower cybersecurity costs is the time needed to deliver the desired results. Organizations should consider the following:
- Assess what components of ZTA pillars they currently have in their infrastructure. Integration of components with existing tools can reduce the overall investment needed to adopt ZTA.
- Consider including costs or impacts associated with risk levels and occurrences when doing ROI calculations.
- ZT adoption should simplify, and not complicate, the overall security strategy to reduce costs.
What are the threats to ZTA?
ZTA can reduce the overall risk exposure in an enterprise but there are some threats that can still occur in a ZTA environment.
- A wrongly or mistakenly configured PE or PA could cause disruptions for users trying to access resources. Access requests that would previously have been denied could get through due to misconfiguration of the PE or PA by the security administrator, allowing attackers or subjects to reach resources from which they were previously restricted.
- Denial of service attacks on the PA/PEP can disrupt enterprise operations. All access decisions are made by the PA and enforced by the PEP, so every device trying to reach a resource depends on them. If a DoS attack hits the PA, no subject can get access because the service is unavailable under the flood of requests.
- Attackers could compromise an active user account using social engineering, phishing or other techniques and impersonate the subject to access resources. Adaptive MFA may reduce the likelihood of such attacks, but in traditional enterprises, with or without ZTA adoption, an attacker might still be able to reach the resources the compromised user has access to. Micro-segmentation may protect resources against these attacks by isolating or segmenting the resource using technologies like NGFW and SDP.
- Enterprise network traffic is inspected and analyzed by policy administrators via PEPs, but assets not owned by the enterprise can’t be monitored passively. Because that traffic is encrypted and difficult to inspect with deep packet inspection, an attack could originate from non-enterprise-owned devices. ML/AI tools and techniques can help analyze the traffic, find anomalies and remediate them quickly.
- Vendors or ZT solution providers could cause interoperability issues if they don’t follow common standards or protocols when their products interact. If one provider has a security issue or outage, it could disrupt enterprise operations through service unavailability or the time and cost of switching to another provider. Such disruptions can affect core business functions of an enterprise operating in a ZTA environment.
References
[ACT-IAC] American Council for Technology and Industry Advisory Council (2019) Zero Trust Cybersecurity Current Trends. Available at https://www.actiac.org/zero-trust-cybersecurity-current-trends
NIST Special Publication 800-207 (Draft 2), Zero Trust Architecture. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf
NIST Zero Trust Architecture Release: https://www.nccoe.nist.gov/projects/building-blocks/zero-trust-architecture
What is App Modernization?
Legacy application modernization is the process of updating existing, aging applications to a modern architecture to enhance features and capabilities. By modernizing your legacy applications, you can add the latest functionality and better align them with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair, doubly so when the software becomes so outdated that it may not even be compatible with modern systems.
A Quick Look at a Sample Legacy Monolithic Application
For this article, consider a roughly fifteen-year-old legacy monolithic application, as depicted in the following diagram.
This depicts a traditional, n-tier architecture that was very common over the past 20 years or so. There are several shortcomings to this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system rather than to smaller subsystems, increasing hardware costs.
Why Modernize?
We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.
The advantages are:
- Reduce cost
- Costs can be reduced by redirecting computing power only to the subsystems that need it. This allows for more granular scalability.
- Avoid vendor lock-in
- Each subsystem can be built with a technology for which it is best suited
- Reduce operational overhead
- Monolithic systems that are written in legacy technologies tend to stay that way, due to increased cost of change. This requires resources with a specific skillset.
- De-coupling
- Strong coupling makes it difficult to optimize the infrastructure budget
- De-coupling the subsystems makes it easier to upgrade components individually.
Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.
Legacy App Modernization Strategies
Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which could require complete re-platforming. A lift-and-shift migration will not add any business value if you move legacy applications just for the sake of modernization. Instead, modernization is about taking the bones, or DNA, of the original software and evolving it to better support current business needs.
Legacy Monolithic App Modernization Approaches
Having examined the nightmarish aspects of continuing to maintain legacy monolithic applications, this article presents two application modernization strategies. Both are explained at length below so you can get a basic idea of each and pick whichever is feasible within the constraints you might have.
- Migrating to Microservices Architecture
- Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake
Microservices Architecture
In this section, we shall look at how re-architecting, re-factoring and re-coding along microservices lines helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you better understand microservices architecture, a leap forward from legacy monolithic architecture.
A quick glance at the above diagram shows a big central piece called the API Gateway with Discovery Client. This is comparable to a façade in a monolithic application. The API Gateway is essentially the entry point for accessing the various microservices, which are comparable to modules in a monolithic application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, since all data access goes through the single Database Microservice shown in the diagram. In other words, the API Gateway/Orchestrator orchestrates the sequence of calls based on the business logic and calls the Database Microservice, as individual microservices have no direct access to the database. One can also see that this architecture supports various client systems such as a mobile app, web app, IoT app, MQTT app, et al.
Although this architecture gives an edge to using different technologies in different microservices, it leaves us with a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and object/data model, which requires it to be re-deployed and tested after each microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step towards building heterogeneous systems that work in tandem to provide a complete solution, a goal that would otherwise be impossible with a monolithic legacy application architecture.
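To illustrate the orchestration role described above, here is a minimal Python sketch of gateway logic using the requests library. The service names, URLs and endpoints are hypothetical, and a real gateway would normally rely on a dedicated discovery client and gateway product rather than hand-rolled code like this.

```python
import requests

# Hypothetical registry; in practice these entries would come from a discovery client.
SERVICE_REGISTRY = {
    "orders":   "http://orders-service:8001",
    "database": "http://database-service:8002",   # the single Database Microservice in this approach
}

def discover(name: str) -> str:
    """Stand-in for the Discovery Client: look up a service's base URL."""
    return SERVICE_REGISTRY[name]

def place_order(order: dict) -> dict:
    """The gateway orchestrates: business validation first, then persistence."""
    validation = requests.post(f"{discover('orders')}/validate", json=order, timeout=5)
    validation.raise_for_status()
    # Individual microservices have no direct database access, so the gateway
    # sequences the persistence call through the Database Microservice.
    saved = requests.post(f"{discover('database')}/orders", json=order, timeout=5)
    saved.raise_for_status()
    return saved.json()
```

Notice how the business sequencing lives in the gateway itself, which is exactly the coupling that makes independent CI/CD pipelines difficult in this first approach.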
Microservices Architecture with Realtime Data Movement to Data Lake
In this section, we shall look at how re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or replacing and retiring your legacy systems along microservices lines helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you understand a more complete, advanced microservices architecture.
At first glance, most of the diagram for this approach looks like the previous one, but it adheres to the actual microservices paradigm more closely. Here, each microservice is independent and has its own micro-database of whatever flavor best fits the business need, which avoids both the dependency on a shared Database Microservice and the overloading of the API Gateway with orchestration and business logic. The advantage of this approach is that each microservice can have its own CI/CD pipeline and release. In other words, a part of the application can be released on its own, with TDD/ATDD properly implemented, avoiding much of the cost of testing, deployment and release management. This kind of architecture does not limit the overall solution to any particular technical stack; it encourages quick solutions built with a variety of technical stacks, and it gives the flexibility to scale resources for heavily used microservices when necessary.
In addition, this architecture encourages a Realtime Engine (which can be a microservice itself) that reads data from the various databases asynchronously, applies data aggregation and deduplication algorithms, and sends pristine data to a data lake. Advanced applications can then use the data in the data lake for machine learning and data analytics to serve the business needs.
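A minimal sketch of what one polling cycle of such a Realtime Engine might look like is shown below. The reader and writer functions are placeholders, and the aggregation and deduplication rules are illustrative assumptions only.

```python
import hashlib
import json

def dedupe(records, seen_hashes):
    """Drop records that were already forwarded, based on a content hash."""
    fresh = []
    for record in records:
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            fresh.append(record)
    return fresh

def aggregate(records):
    """Roll events up per customer before they land in the lake (example rule only)."""
    totals = {}
    for record in records:
        totals[record["customer_id"]] = totals.get(record["customer_id"], 0) + record["amount"]
    return [{"customer_id": cid, "total": total} for cid, total in totals.items()]

def run_once(read_sources, write_to_lake, seen_hashes):
    """One cycle: read each microservice's database, dedupe, aggregate, write to the lake."""
    raw = [record for read in read_sources for record in read()]
    write_to_lake(aggregate(dedupe(raw, seen_hashes)))
```

In practice the reads would be asynchronous (change-data-capture streams or message queues rather than polling), but the shape of the pipeline is the same: collect, dedupe, aggregate, publish.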
Note: This article has not been written with any particular cloud flavor in mind. This is a general app modernization microservices architecture that can run anywhere: on-prem, OpenShift (private cloud), Azure, Google Cloud or AWS.
I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.
While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.
I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true of data warehouse projects too, where a client’s database team wanted the entire schema designed up-front, even before the requirements for the reports the data warehouse would support were identified. What appeared to be driving the design was not the business and its report priorities, but the database team and its desire to have a complete data model.
While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.
The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each and, as mentioned, star schemas. The book stresses the 7-Ws when designing a data warehouse – who, what, where, when, why, how and how many. These are the questions to ask when talking to the business to come up with an appropriate design. “How many” applies to the fact tables, while the other questions apply to dimension table design.
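As a simple illustration of a single star (one fact table plus its dimensions), the sketch below creates a hypothetical retail sales schema using Python’s built-in sqlite3 module. The table and column names are my own example, not taken from the book.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables answer who, what and when.
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
    CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date     (date_key     INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);

    -- The fact table answers 'how many' and points at its dimensions.
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        product_key  INTEGER REFERENCES dim_product(product_key),
        date_key     INTEGER REFERENCES dim_date(date_key),
        quantity     INTEGER,
        sale_amount  REAL
    );
""")
```

Each such star can be designed, built and delivered on its own, which is exactly the piece-at-a-time approach the book advocates.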
Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.
I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.
Uber. Eight years ago, the company did not exist and the word was simply a rarely used adjective of German origin meaning “ultra”, like an uber intellectual. Today, Uber has become one of the most successful startups in history and the word has become a commonplace verb in English parlance. Transcending to “verb” status puts Uber in the highly exclusive class of innovative business disrupters like Google and FedEx whose business names and processes have become synonymous with an action that didn’t previously exist but is now done on a regular basis. Who today wouldn’t understand what actions you had taken if you said, “I quickly googled the address for the nearest drop-off spot and uber-ed over there so I could fed-ex my package out on time”?
Uber owns no cars, has no drivers, and has minimal fixed assets. Instead, they created incredibly user-friendly software that improves aspects of the taxi ride industry we didn’t know needed improvement. Not surprisingly, the company’s full legal name is Uber Technologies, Inc. While the only technology typically found in a traditional taxi cab is the decades-old meter clicking away the increments of your fare, the Uber software provides new value to both the driver and the customer with useful information such as the location of both the driver and the customer, a time estimate for pick-up, exact pricing, car options, driving directions, and much more.
By creating this simple way to get a ride, Uber has reached another pinnacle accomplishment whereby the creativity of its business model has become a noun: uber-fication. According to Dr. Paul Marsden in his Reading Room article, The Uberfication of Everything, “…the real genius of Uber lies in a deep understanding of convenience – what it is and why it matters. That’s what Uberfication is all about; pivoting your business to deliver on a core under-exploited consumer need – convenience”.
One thing that every startup has is a dream and a vision. But, let’s be honest, that simply isn’t enough to successfully build a booming new business like Uber. You need the right partners, you need money, and you need passion for the project at hand. We believe that we can help in all these areas, which led us to formalize an offering exclusively for startups.
When I formed CC Pace nearly 36 years ago, I was driven by a vision of a new model for a consulting company – one where integrity and the client’s best interest were ingrained in the firm’s culture and successful delivery could almost be guaranteed by the quality, drive and teamliness of the employees who worked there. While my dream may not have been as wide-reaching as Uber’s, when I think back to that time, I just remember energy, excitement and that ‘anything is possible’ feeling. Over the years, we’ve been very fortunate to work with clients in all phases, from startups to Fortune 500 organizations, all of which we value a great deal. I get excited to work with clients of all sizes, but there is something about working with startups that brings an energy you can’t replicate in other environments. Being a part of someone else’s vision coming to life brings me right back to where I stood over 35 years ago, and it is an environment in which I’ve seen our project teams thrive.
Our experience working with startups, combined with our project teams’ passion, has led us to formalize an offering to help startups get off the ground with the right technology. To enable us to work on more of these types of efforts, we are officially launching a new risk/reward program for startups. Here, we combine our technical prowess with our business acumen to deliver a software platform that fully and effectively supports the startup’s vision. The premise of our offering is to build the technological platform for your business with less cash required up-front. In exchange for this discount, we agree upon a fair share of some downstream benefits of your startup, reflective of the risk we take.
If you like the idea of maintaining control of your vision while paying less up-front to get the results you need, then I would love to hear from you. Interesting companies with challenging technology needs have been a driver for us for over 35 years. For this reason, we are confident that we can help better enable your dream. After all, it’s only a matter of time before the next “Uber” shocks the world.
For more information on the risk/reward program, check out our offering here.
In my personal experience working on various software development projects, the concept of team energy often appears to be either undervalued or benignly ignored by management teams. The reasons are many. First of all, the term may be confused with “team velocity” which is a relative measurement of a team’s average output or productivity. If the velocity appears to be at either predictable or positive levels, the management team may choose to believe that team energy is also at satisfactory levels. Organizations may attempt to boost employee morale by putting together team-building exercises and outing events. This macro approach may result in generating a perceived positive effect on team energy, thus obscuring the need for focus on individual teams. So, what is team energy and why should managers consider devoting some attention to it?
When thinking in terms of team energy, one can look at it as building credit with each individual team, or as maintaining a rainy day fund. From a management standpoint, it is important to keep team energy in positive territory. This helps ensure that the team will empower themselves to exceed expectations, as well as step up during times of crisis or high-pressure situations. I have worked on high-energy teams whose members voluntarily pushed themselves past regular working hours to produce deliverables. These cases did not involve any direct increase in compensation or promotion; people naturally wanted to succeed because they possessed enough energy to do so. I have also witnessed the opposite, where a team’s energy was low, deliverables were in a perpetual state of tardiness and the backlog was steadily accruing bugs. Developers and testers did not feel empowered to succeed and entered a cycle of doing the absolute bare minimum to “get the management off their back”.
Science behind building teams
Agile methodologies, whether Scrum or Kanban, prescribe various techniques that are focused on continuous improvement that may positively affect team energy. Regardless of whether an organization has truly embraced Agile, it is difficult to find managers that would oppose efforts in improving a team’s processes of delivering faster and at a higher value. After all, who is against a boost in productivity? There is a hidden psychological component to continuous improvement that has a causal effect on team energy. This component is more associated with the experience of individual team members.
Studies of team dynamics, such as those conducted by MIT’s Human Dynamics Laboratory and documented in Harvard Business Review, suggest that there is a science to building high-performing and high-energy teams. One of the keys is to focus on the human brain and the social dynamics of a group. Your teams may be composed of introverts as well as extroverts and a wide range of personalities, but there is a common factor that seems to persist. The studies show that humans feel good when they achieve their goals and overcome obstacles; the brain actually rewards its owner with extra dopamine when a goal is achieved. When the team feels good more often than not, team energy goes up. When the opposite occurs, team energy goes down. Therefore, focusing on small achievable goals not only helps the organization shift the focus of deliverables, it also fosters this psychological benefit of achievement for each individual team member.
Measurements
An organization may choose to periodically measure team energy. One way to do this is through anonymous surveys. Usually this is done at the enterprise level to gauge overall organizational energy. There is certainly value in doing that, but the effort is not focused and may not apply to individual teams. Small teams may not produce very accurate results either: there may be disincentives to be frank when answering a survey, because team members may feel singled out and fear reprisals from management, and more introverted team members may choose not to “rock the boat”. A more effective and team-focused approach is to have an Agile coach periodically take team energy measurements. An opportune time may be during team retrospectives, when a team is usually more willing to be candid. Most importantly, these measurements should not be secretly stored in a manager’s vault; they should be shared with the team. Adding transparency to the team-building and management process will not only increase team energy, but also foster leadership skills among the more proactive and extroverted team members.
Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.
I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.
The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.
(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)
What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.
But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.
One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.
As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?
We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.
I used to attend Agile conferences pretty frequently, but at some point I got burned out on them; the last one I attended was a 2007 conference in Washington, DC. This year, the Agile Alliance conference returned to the DC region, and I decided it was time to give them a try again.
It’s interesting to see how things have changed since I last attended an Agile conference. Agile 2015 felt much more stage-managed than in previous years, with its superhero party, the keynotes making at least glancing reference to the theme (the opening keynote, Awesome Superproblems, appears to have been retitled for it, since all the references in the presentation were to “wicked” problems instead of “super” problems), and making attendees go through the vendor area to get to lunch. It also seemed like the presentations were mostly given by “experts”, whereas previously I felt there were more presentations by community members. I have mixed feelings about all of this, but on the whole I felt that my time was well spent. Although I didn’t really plan it, I seem to have had three themes in mind when I picked my sessions: team building, DevOps and craftsmanship. Today I’ll tell you about my experiences with the team building sessions.
Two of the keynotes supported this theme: Jessie Shternshus’ Individuals, Interactions and Improvisation and James Tamm’s Want Better Collaboration? Don’t be so Defensive. I’d heard of using the skills associated with improvisation to improve collaborative skills, but the Agile analogy seemed labored. Tamm’s presentation was much more interesting to me. I’m not sure he’s aware of the use of the pigs and chickens story in Scrum, but he started out with a story about chickens. Red zone and green zone chickens, to be precise. Apparently there are chickens (we’re outside of the Scrum metaphor here, incidentally) that become star egg layers by physically abusing other chickens to suppress their egg production. These were termed red zone chickens, while the friendly, cooperative chickens were termed green zone chickens. Tamm described a few unpleasant solutions (such as trimming the chickens’ beaks) that people had tried to deal with the problem, and ended by describing an experiment whereby the red zone chickens were segregated from the green zone chickens, with the result that the green zone chickens’ egg production went up 260% while, for the red zone chickens, only the mortality rate went up (http://blog.pgi.com/2015/05/what-can-chickens-teach-us-about-collaboration/). Tamm then went on to compare this to human endeavors, pointing out the signs that an organization might be in the red zone (low trust/high blame, threats and fear, and risk avoidance, for example) or in the green zone (high trust/low blame, mutual support and a sense of contribution, for example), while explaining that no organization is going to be wholly in either zone. He wound up by showing us ways to identify when we, as individuals, are moving into the red zone and how to try to avoid it. This was easily the most thought-provoking of the three keynotes, and I picked up a copy of Tamm’s book, Radical Collaboration, to further explore these ideas. The full presentation and slides are available at the Agile Alliance website (www.agilealliance.org).
In the normal sessions, I also attended Lyssa Adkins’ Coaching v. Mentoring, Jake Calabrese’s Benefiting from Conflict – Building Antifragile Relationships and Teams, and two presentations by Judith Mills: Can You Hear Me Now? Start Listening Instead and Emotional Intelligence in Leadership. Alas, it was only in hindsight that I realized I’d read Adkins’ book. In her presentation, she engaged in actual coaching and mentoring sessions with two people she’d brought along specifically for the purpose. Unfortunately the sound in the room was poor and I feel I lost a fair amount of the nuance of the sessions; the one thing I came away with was that mentoring seems like coaching while also being able to provide more detailed information to the mentee.
Jake Calabrese turned out to be a dynamic and engaging speaker and I enjoyed his presentation and felt like it was useful, but that was before I went to Tamm’s keynote on collaboration. I did enjoy one of the exercises that Calabrese did, though. After describing the four major “team toxins”: Stonewalling, Blaming, Defensiveness and Contempt, he had us take off our name badges, write down which toxin we were most prone to on a separate name badge, and go and introduce ourselves to other people in the room using that toxin as our name. Obviously this is not something you want to do in a room full of people that work together all the time, but it was useful to talk to other people about how they used these “toxins” to react to conflict. In the end, though, I felt that Calabrese’s toxins boil down to the signs of defensiveness that Tamm described and that Tamm’s proposals for identifying signs of defensiveness in ourselves and trying to correct them are more likely to be useful than Calabrese’s idea of a “Team Alliance.”
The two presentations by Judith Mills that I attended were a mixed bag. I thought the presentation on listening was excellent, although there’s a certain irony in watching many of the other attendees check their e-mail, browse Facebook, shop, etc., while sitting in a presentation about listening (to be fair, there was probably less of that here than in other presentations). Mills started by describing the costs of not listening well and then went into an exercise designed to show how hard listening really is: one person would make three statements and their partner would then repeat the sentences with embellishments (unfortunately, the number of people trying this at once made it difficult to hear, never mind listen; the point was made, though). We then discussed active listening, the habits and filters that might prevent us from listening well, and how communication involves more than just the words we use. This was a worthwhile session and my only disappointment was that we didn’t get to the different types of questions one might use to promote communication and how they can be used.
Mills’ presentation on Emotional Intelligence in Leadership, on the other hand, was not what I anticipated. I went in expecting a discussion on EI, but the presentation was more about leadership styles and came across as yet another description of “new” leadership. It would probably be useful for people who haven’t experienced or heard about anything other than Taylorist scientific management, but I didn’t find anything particularly new or useful to my role in this presentation.
In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.
In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.
My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.
At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.
At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:
- What did I do since the last stand-up?
- What will I do today?
- What is in my way?
Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.
You’re listening for:
- Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
- Whatever people say in response to (3)
Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?
Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way towards building trust. Look for intransigence from either the development team or the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes a Minimum Viable Product for the iteration. It has to be a negotiation; try to facilitate that negotiation.
Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than 4 or 5 letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach people as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.
Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Manager and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.
Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.
When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.
So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.
If you’re a Product Owner:
- Understand the value proposition not just of the entire system, but of each of its components and subsets.
- Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.
If you’re a developer:
- Adopt Agile Engineering techniques such as TDD, CI, CD, and so on. Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
- Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
- Be an active team member. Contribute to the success of the mission. Don’t just take orders.
If you’re a Scrum Master:
- Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
- Keep the team focused on the mission.
- If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.
I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.
In Part 1 of this blog series, I presented a high-level summary of the many different opportunities that Business Analysts (BA) can pursue in an Agile BA role, often resulting in new and exciting experiences. I also highlighted some of the differences between today’s “Agile BA” and the traditional “Waterfall BA”. Finally, I presented an interesting metaphor for today’s Agile BA, one of a Major League Baseball “utility player” (in this case, Jose Oquendo – a player who accomplished the rare feat of playing at every position during his 12-year MLB career.)
Part 2 of this blog entry focuses on the five key functional areas (aka “opportunities”) where I feel Agile BA’s can contribute or take outright ownership of a certain project task or responsibility. A key point to remember is that in order for these opportunities to present themselves, circumstances need to exist which ultimately depend on the dynamics of a project and the makeup of the particular team. Surely we don’t want to step on any toes or introduce team conflict. But if the opportunity is presented and a need has clearly been established, take the bull by the horns and run with these five opportunities:
- Project Management
- Product Management (aka the Product Backlog)
- Testing
- Documentation
- Collaboration (with Project Stakeholders and Team Members)
PROJECT MANAGEMENT – WORK WITH (OR AS) THE SCRUMMASTER
On today’s project teams, Agile BA’s are usually best-suited to provide project management/ScrumMaster support whenever the need arises. And let’s be real – with PM’s and ScrumMasters constantly being pulled in several different directions, the Agile BA can take on numerous responsibilities associated with this role.
In all likelihood, Agile BA’s have the experience necessary to handle many of the day-to-day responsibilities of a PM or ScrumMaster. The Agile BA can facilitate any of the recurring “events” as needed – the daily scrum (or “standup”), sprint planning, sprint review and sprint retrospective.
In many cases, the Agile BA is as close to (or sometimes even more engaged with) the project’s product backlog as the actual PM. This knowledge of the past, current and future state of the product backlog enables the Agile BA to assist with several project-related artifacts – for example, sprint and release burn-up/burn-down charts.
Successfully leading and delivering on many of these crucial project events and tasks not only contributes to the success of the team, but it also provides valuable on-the-job training. For Agile BA’s who want to eventually move into a PM or ScrumMaster role, this experience is invaluable.
THE “PROXY PRODUCT OWNER” – TODAY’S “PRODUCT OWNER” REALITY
Lately, it seems that a fully-engaged Product Owner is more of a luxury than a norm on today’s Agile projects. Agile BA’s can benefit from the potential subject-matter knowledge gained and added exposure by bridging this gap and acting as a “Proxy Product Owner”. In cases where a truly-dedicated Product Owner is not a reality, no one is better suited to step into this role than the Agile BA.
BA’s usually develop a solid rapport with the customer and can act as a liaison between the customer and the project team whenever needed. And as stated earlier, the Agile BA probably has the most experience working with the project’s user stories and backlog. On fast-moving development projects, many decisions are needed real-time, and waiting for answers from an absent Product Owner usually hinders the team’s progress.
TESTING – AFTER ALL, WHO KNOWS THE STORY BETTER THAN THE BA?
Many Agile teams have already moved to this model, but for teams which have not, here is another opportunity. As previously mentioned, BA’s handing their work over to testers ‘waterfall-style’ is an outdated and inefficient practice. I have seen that 2-3 fully engaged Agile BA’s can efficiently handle the workload of 2-3 full-time BA’s and 2-3 full-time testers. Instead of separating requirements and functional testing tasks for a particular piece of functionality (e.g. user story), Agile BA’s focus on the user story as a whole – from origination (story creation) through implementation (fully-tested, potentially shippable product.) This is not to say that other methods of specialized testing aren’t needed, but in many cases, the best person to drive a user story to “done” is the Agile BA.
Automated testing has also become an invaluable practice on software development projects and provides yet another opportunity for Agile BA’s to contribute in the testing arena. With working knowledge of the current state of the application and product backlog, Agile BA’s have the capability to define and develop a project’s ongoing automated testing suite. Depending on the testing tools employed by the team, BA’s can “pair” with a technical resource (e.g. java developer). In this scenario, the developer handles the technical components of the automated testing suite while the BA designs, builds and manages the suite from a functional perspective.
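As an illustration of the kind of functional check a BA might contribute to such an automated suite, here is a minimal sketch of an acceptance test written with pytest conventions. The endpoint, payload and expected behavior are hypothetical examples, not from any specific project.

```python
import requests

# Hypothetical test-environment URL; a real suite would read this from configuration.
BASE_URL = "http://test-env.example.com/api"

def test_order_requires_a_quantity():
    """Acceptance criterion (example): an order submitted without a quantity is rejected."""
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"product_id": "ABC-123", "quantity": 0},
        timeout=5,
    )
    assert response.status_code == 400
```

In the pairing model described above, the developer would typically own the plumbing (test environment, fixtures, CI wiring), while the BA owns checks like this one that trace directly back to a user story’s acceptance criteria.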
COLLABORATION – BEFORE YOU KNOW IT, YOU’RE THE PROJECT’S “GO-TO” PERSON
I have included “collaboration” as an opportunity for Agile BA’s because collaboration, indeed, leads to opportunities. In my previous Waterfall experiences, BA’s didn’t collaborate much. In today’s Agile world, the Agile BA can really become a project’s “go-to” person. The collaboration piece also closely ties in with the previously mentioned PM/ScrumMaster and Product Owner opportunities.
For example, facilitating a sprint review or presenting a product demo provides invaluable experience and exposure. Sprint review meetings often include executives and/or stakeholders who otherwise do not participate in the project at all (basically, you see them once every two weeks). Leading these sessions provides direct communication with the “customer” and valuable feedback that can be relayed back to the team. Since entire project teams rarely attend these sessions, the team will start looking to you for the kind of feedback we all value on Agile projects. Personally, I have always looked forward to returning to the team room and have always appreciated team members asking, “So, how did it go!?” after each sprint demo. At the same time, I am always glad to be able to provide that feedback to the team so we can use it for future success.
DOCUMENTATION – CHANGE DOCUMENTATION FROM A TEDIOUS TASK TO A VALUABLE COMMODITY
Even in today’s Agile world, most software development projects require some essential “dirty work”. The BA role has certainly evolved, but we should not completely abandon our roots. While we’ve all heard repeatedly (sometimes to our detriment) that the Agile Manifesto preaches valuing working software over comprehensive documentation, certain documentation can be critical to the success of projects.
I have seen that, if applied effectively, this basic Agile tenet not only reduces redundant documentation, but it helps teams focus on where documentation actually adds value to a project. Instead of documenting a requirement or process as part of an extensive list of deliverables promised six months ago (which will never be read or will become irrelevant), we document exactly what is needed, today. Most likely, the information and processes that need to be documented aren’t even known at the outset of the project. Because writing is a core skill possessed by most BA’s, Agile BA’s are well-positioned to accomplish many of the documentation deliverables needed over the duration of a project.
NOW, GET TO WORK!
You’ve just finished reading this blog entry and your new sprint starts next week. It’s not entirely unrealistic that you can begin working in each of the areas mentioned in this blog over the next two weeks (if you haven’t been already). Offer to facilitate your upcoming sprint planning or review session. Ask the PM if you can contribute to the upcoming metrics reports (e.g. sprint/release burndown). If you aren’t already, start testing user stories – start at a high level, ensuring that all acceptance criteria are met. Take a few hours to dive into and familiarize yourself with the product backlog. Offer to facilitate the sprint review and invite stakeholders who have become disengaged. And finally, document that process flow which has been taking up valuable whiteboard real estate for the last several weeks (and really needs to be erased)!
Before you know it, you’re pursuing five completely new opportunities in a matter of two weeks. It might be similar to trying out three completely new positions on a baseball diamond. More importantly, you may even finally be able to explain exactly why Jose Oquendo was such a valuable player to have on a Major League Baseball team.
For the past 3 months we’ve had the pleasure of working with a charitable organization called the Ceca Foundation.
Ceca, which is derived from “celebrating caregivers”, was established in 2013 to celebrate caregiver excellence and “to promote high patient satisfaction by recognizing and rewarding outstanding caregivers”. They do this by providing employees of caregiving facilities with a platform for recognizing and nominating their peers for the Ceca Award – a cash reward given throughout the year. These facilities include rehabilitation centers, hospitals, assisted living centers and similar organizations.
CC Pace partnered with Ceca to build their next generation, customized nomination platform.
This was one of those projects that fills you with pride. First, for the obvious reason – Ceca’s worthwhile mission. Second, the not-so-obvious reason, which was the development process. It was a great example of why I enjoy helping customers build products.
The process
For various reasons, Ceca was under a tight deadline to get the new platform up-and-running for several facilities. The Agile process turned out to be a great fit, as it allowed for frequent customer feedback and weekly deployments to a testable environment. We developed the platform using high-level feature stories, rather than detailed specifications. This allowed the team to concentrate on the desired outcome, rather than getting caught up in the technical details. At times, we had to forgo a software-based solution in favor of a manual process. When you have limited resources and time, you have to make these types of decisions.
In February, after about 3 weeks, the Ceca Foundation launched the new web platform for one facility and then quickly brought on several more. There was immediate gratification for the team as we watched the nominations flood in.
The “feel good” story
What made this project successful and enjoyable at the same time? I’m reminded of the first value in the Agile Manifesto – individuals and interactions over processes and tools. Some factors were technical, but most were not:
- a motivated and enthusiastic customer (Ceca)
- a set of agreed-upon features to provide the Minimum Viable Product
- frequent collaboration with the customer
- a cloud-hosted environment to provide infrastructure on-demand for testing and live versions
- a software-as-a-service model that allowed us to quickly bring on new facilities
For me, it was Agile at its most fundamental: discuss the desired features; provide a cost estimate for those features; negotiate priority with the customer; provide frequent releases of working software.
Check out the Ceca Foundation for more information. You can see a demo of the software under “Technology”.
As I mentioned in the introductory post in this series, an issue I frequently see with underperforming Agile teams is that work always spills over from one iteration into the next. Nothing ever seems to finish, and the project feels like a waterfall project that’s adopted a few Agile ceremonies. Without completed tasks at iteration planning meetings (IPMs), there’s nothing to demo, and the feedback loop that’s absolutely fundamental to any Agile methodology is interrupted. Rather than actual product feedback, IPMs become status meetings.
In this post, let’s look at how a Product Owner can help ensure that tasks can be completed within an iteration. As a developer myself, I’m focusing on the things that make life easier for the development team. Of course, there’s a lot more to being a good Product Owner, and I encourage you to consider taking a Scrum Product Owner training class (CC Pace offers an excellent one).
First and foremost, be willing to compromise and prioritize. An anti-pattern I observe on some underperforming teams is a Product Owner who’s asked to prioritize stories and replies, “Everything is important. I need everything.” I advise such teams that when you ask for all or nothing, you will never get all; you will always get nothing. And “all or nothing” is exactly what a Product Owner is asking for when they say everything has a high priority.
As a Product Owner, understand two serious implications of saying that every task has a high priority.
- You are saying that everything has an equal priority. In other words, to the developer team, saying that everything has a high priority is indistinguishable from saying that everything has medium or indeed low priority. The point of priority isn’t to motivate or scare the developers, it’s to allow them to choose between tasks when time is pressing. You’re basically leaving the choice up to the developer team, which is probably not what you had in mind.
- You’re really delegating your job to the developer team, and that isn’t fair. You’re the Product Owner for a reason: the ultimate success of the product depends on you, and you need to make some hard choices to ensure success. You know which bits of the system have the most business value. The developers signed on to deliver functionality, not to make decisions about business value. That’s your responsibility, and it’s one of the most important responsibilities on the entire team.
The consequence of “everything has a high priority” is that the developers have no way to break epic stories down into smaller stories that fit within an iteration. Everything ends up as an epic, and the developer team tends to focus on one epic after another, attempting to deliver complete epics before moving on to the next. It’s almost certain that no epic can be completed in one 2-week iteration, and so work keeps on spilling over to the next iteration and the next and the next.
Second, work with the developer team to find out how much of each story can be delivered in an iteration. Keep in mind that “delivered” means that you should be able to observe and participate in a demo of the story at the end of the iteration. Not a “prototype”, but working software that you can observe. Encourage the developers to suggest reduced functionality that would allow the story to fit in an iteration. For example, how about dealing only with “happy path” scenarios – no errors, no exceptions? Deal with those edge cases in a later iteration. At all costs, move towards a scenario where fully functional (that is, actually working) software is favored over fully featured software. Show the team that you’re willing to work with the evolutionary and incremental approach that Agile demands.
Third, be wary of stories that are all about technical infrastructure rather than business value. Sure, the development team very often needs to attend to purely technical issues, but ask how each such story adds business value. You are entitled to a response that convinces you of the business need to spend time on the infrastructure stories.
At the end of a successful IPM, you as the Product Owner should have:
- Seen some working software – remember, fully functional but perhaps not fully featured
- Offered the developer team your feedback on what you saw
- Worked with the developer team to have another set of stories, each of which is deliverable within one iteration
- Prioritized those stories into High, Medium, and Low buckets, with the mutual understanding that nobody will work on a medium story if there are high stories remaining, and nobody will work on low stories if there are medium stories remaining
- A clear understanding of why any technical infrastructure stories are required, and what business problem will be addressed by such stories
Finally, make yourself available for quick decisions during the iteration. No plan survives its first encounter with reality. There will always be questions and problems the developers need to talk to you about. Be available to talk to them, face to face, or with an interactive medium like instant messaging or video chat. Be prepared to make decisions…
Developer: “Sorry, I know we said we could get this story done this iteration, but blahblah happened and…”
You: “OK, how much can you get done?”
Developer: “We would have to leave out the blahblah.”
You: “Fine, go ahead.” (Or, “No, I need that, what else can we defer?”)
Be decisive.
Next post, I’ll talk about what developers should concentrate on to make sure some functionality is delivered in each iteration.