Navigating the Loan Origination System Modernization Maze

In the ever-changing landscape of lending and financial services, one critical factor remains unchanged: the importance of a robust and efficient Loan Origination System (LOS). However, modernizing your LOS is no small initiative. It’s a project with unexpected pitfalls, challenges, and opportunities to navigate at every turn. In our upcoming white paper, “Unlocking the Secrets of Success: Navigating Consumer Loan Origination System Modernization,” we give you our best advice for a successful project. CC Pace has been a trusted advisor for financial institutions of all stripes over the last four decades. This paper shares actionable insights and practical experience from consultants who have ‘seen it all’. These takeaways will help you overcome obstacles that come with LOS modernization projects and set you up to realize the full potential of this substantial investment.

The Need for Modernization: Why now?

This white paper comes at a time when nearly all financial institutions are reevaluating their technology stacks. With the digital revolution elevating borrower expectations, a streamlined, user-friendly, and efficient LOS is no longer a luxury; it’s a necessity.

However, this quest for modernization is loaded with challenges. Numerous financial institutions have found themselves entangled in a maze of complexities, which if not navigated carefully, can lead to failed or challenged (read: costly) modernization projects. The statistics are daunting, with a significant number of LOS modernization efforts falling short of their intended goals, wasting valuable time, resources, and ultimately, money.

Unlocking the Secrets to Success

In our white paper, we shed light on the common pitfalls and challenges institutions face when modernizing their LOS. From underestimating project scope to struggling with data migration, from compliance hurdles to technology integration issues, each challenge presents its own set of obstacles. Left unaddressed, each poses significant consequences for the health of your project.

However, this white paper is not just about exposing the pitfalls. It offers a first-hand perspective from our seasoned team of consultants with clear and actionable steps to improve your path to success and steer clear of the common pitfalls. While this isn’t intended to solve all your challenges, it will serve as a guardrail to help you turn common challenges into opportunities.

In this white paper, you’ll gain valuable insights into:

  • Identifying and mitigating common pitfalls
  • Crafting a robust modernization strategy
  • Leveraging technology to optimize your LOS
  • Understanding vendor relationships (and what to look for in a partner)
  • Realizing the full potential of a modernized LOS

If you want a head start on your journey through the LOS modernization maze, you won’t want to miss this white paper. We’re giving early access to our blog and social media followers. Email us for a free ‘early bird’ copy sent directly to your inbox!

So, earlier today, I was having coffee with a friend who’s a technology program manager at a large financial services firm and fellow nerd – let’s call him Steve (since that’s his name).  Given our shared interests, the discussion eventually turned to generative AI and, in particular, Large Language Models (LLMs) like ChatGPT.  Steve is a technologist like me but hasn’t had a lot of exposure to LLMs beyond what he’s seen on Medium and in YouTube videos, and he was asking the predictable questions – “Is this stuff for real?  Can it do all the things they’re saying it can?  What’s up with the whole hallucination thing?”

Overall, it was a great discussion, and we covered a broad range of topics.  In the end, I think we both came away from the conversation with interesting perspectives on the potential for generative AI as well as some of the potential shortcomings.

I plan to share my perspective on our discussion in a series of posts over the next few weeks, beginning with the first topic we discussed — AI Hallucination. I hope you find value in these perspectives and, as always, comments are most welcome.

So, what’s up with AI hallucination?

For AI, hallucination refers to when an LLM generates text that sounds plausible but is factually incorrect or unsupported by evidence. For example, if you prompted an LLM to “write about the first American woman in space,” it might pull plausible-sounding information from its vast training data but get the facts wrong by hallucinating fictional details and attributing them to Sally Ride, the first American woman in space. This tendency of large language models to confidently generate fake details and pass them off as truthful accounts when prompted for topics outside their actual training data is extremely problematic, especially if the user is unaware of this tendency and takes the output at face value.

When I say “tendency”, I mean this is a very common issue with large language models today. The propensity to hallucinate false details with high confidence is extremely prevalent in modern LLMs, even sophisticated models trained on huge datasets. For example, a 2021 study from Anthropic found that LLMs hallucinated over 40% of the time when asked simple factual questions from datasets they were not trained on. And OpenAI has warned that its GPT models “sometimes write plausible-sounding but incorrect or nonsensical text” and should not be relied upon for factual accuracy without oversight.

This is especially dangerous in high-stakes fields like medicine or law. In fact, in one recent story, a lawyer used an LLM to prepare a court filing and inadvertently cited fake cases (fortunately, the court caught the fabricated citations before the case proceeded).

As to why LLMs hallucinate, there are several potential reasons:

  • They are trained on limited data that does not cover all possible topics, so they try to fill in gaps.
  • Their goal is to generate coherent, fluent text, not necessarily accurate text.
  • They lack grounding in common sense or the real world.
  • Their statistical nature means they will occasionally sample incorrect or imaginary information.
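The last bullet is worth making concrete. Generation is sampling: at each step the model draws the next token from a probability distribution, so even when the correct continuation dominates, wrong ones get sampled some fraction of the time. Here is a toy sketch; the tokens and probabilities are invented for illustration and do not come from any real model:

```python
# Toy illustration of the statistical-sampling point: a model samples each
# token from a probability distribution, so even a heavily weighted
# "correct" continuation loses out occasionally. The vocabulary and
# probabilities are invented for illustration only.
import random

next_token_probs = {
    "Sally Ride": 0.90,      # correct continuation
    "Amelia Earhart": 0.07,  # plausible-sounding but wrong
    "Jane Foster": 0.03,     # entirely fictional astronaut
}

def sample(probs: dict[str, float], rng: random.Random) -> str:
    """Draw one token according to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
draws = [sample(next_token_probs, rng) for _ in range(1000)]
wrong = sum(token != "Sally Ride" for token in draws)
print(f"{wrong} of 1000 samples were incorrect")  # roughly 100 on average
```

Even with the correct answer weighted at 90%, roughly one draw in ten comes out wrong, and the model delivers every draw with the same fluent confidence.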

An important point is that the LLM does not intentionally construct false information (unless asked to do so); rather it builds its responses based on available data (the data it was trained on). The models attempt to continue patterns and maintain internal coherence in their generated text, which can result in persuasive but false details when their knowledge is imperfect or they are asked to extrapolate beyond their training.  In some ways, this exacerbates the problem, as the model can respond with high confidence while, in fact, having no factual basis for the response.  Perhaps more worrisome, with further scaling up of models, this tendency may only become more pronounced as they get better at producing persuasive human-like text.

Clearly, better techniques are needed to detect and reduce hallucination.

Researchers are exploring a number of approaches to reduce the occurrence of hallucination and/or to correct it prior to producing generated responses. Here are some of those techniques:

  • Reinforcement Learning from Human Feedback (RLHF): Having humans flag hallucinated text during the training process, then using Reinforcement Learning (RL) to adjust the model to reduce false information.
  • Incorporating Knowledge Bases: Connecting the LLM to an external knowledge base like Wikipedia can ground its output in facts.
  • Causal Modeling: Modeling cause-and-effect relationships helps the LLM better understand interactions in the real world.
  • Self-Consistency: Penalizing the model when its predictions contradict each other can minimize internal inconsistencies.
  • Robust Question Answering: Training the model to carefully consider a question before answering reduces speculative responses.
  • Hallucination Detection Systems: Separate classifiers can be developed specifically to detect hallucinated text.
  • Retrieval Augmented Generation (RAG): Retrieving relevant text and data before generating from them improves grounding.
  • Human-in-the-Loop: Letting humans interactively guide the model during text generation can steer it away from hallucination.
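To make the RAG idea from the list above concrete, here is a minimal sketch of the retrieve-then-generate pattern: a toy keyword-overlap retriever selects the most relevant passage, which is then embedded in the prompt so the model answers from supplied facts rather than from memory. The corpus, the scoring function, and the prompt wording are illustrative assumptions, not any particular vendor’s API:

```python
# Minimal sketch of the retrieval step in Retrieval Augmented Generation
# (RAG): fetch relevant text first, then ground the model's answer in it.
# The corpus, the crude keyword-overlap scorer, and the prompt wording
# are all illustrative assumptions, not a real vendor API.

def score(query: str, passage: str) -> int:
    """Crude relevance: count query words appearing in the passage."""
    clean = lambda text: {w.strip(".,?!").lower() for w in text.split()}
    return len(clean(query) & clean(passage))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the passage with the highest keyword overlap."""
    return max(corpus, key=lambda passage: score(query, passage))

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Embed the retrieved passage so the model answers from it, not memory."""
    context = retrieve(query, corpus)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say 'I don't know.'\n\n"
        f"Context: {context}\n\nQuestion: {query}"
    )

corpus = [
    "Sally Ride became the first American woman in space in 1983.",
    "A Loan Origination System processes borrower applications.",
]
print(build_grounded_prompt("Who was the first American woman in space?", corpus))
```

In a production system the keyword scorer would be replaced by vector (embedding) similarity search, but the shape is the same: retrieve first, then generate from what was retrieved.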

Which solution(s) will perform best depends in part on the particular use case (for example, RLHF might not be practical for very large datasets), and more than likely a combination of techniques will be required to achieve desired levels of confidence in responses.

Even with these additional controls and safeguards in place, it will continue to be important to perform some level of quality control prior to using LLM output.

As a thought experiment, let’s take a private equity firm — the firm wishes to use LLMs to streamline the summarization and analysis of corporate data for acquisition targets.  Indeed, LLMs can provide significant productivity lift in consuming and condensing large volumes of structured and unstructured data, and the firm can certainly use an appropriately fine-tuned LLM to facilitate the process of analyzing an organization’s fitness for acquisition.  Having said that, any specific conclusions produced by the LLM must be scrutinized closely and fact-checked to verify their veracity prior to use in decision-making and, where necessary, adjustments made.  Note that this is no different from the scrutiny that would be applied to human-generated analysis; the point is not to assume that because the analysis is ‘computer generated’ it is somehow more reliable – in fact, the opposite is true.

All said, hallucination remains a significant obstacle to leveraging the full power and potential of large language models. But proper controls, along with continued research into techniques like the ones discussed here, provide a pathway for LLMs to generate accurate, trustworthy text as easily as they currently produce fluent, creative text.

If you’re ready to take advantage of AI in a meaningful way but want to avoid the growing pains and pitfalls (including hallucinations), we should talk! Our 5-day AI assessment takes the guesswork out of maximizing the value of AI while minimizing the risks associated with LLMs. You can find out more about this offering here or connect with me on LinkedIn.

(Note: Artwork for this and subsequent posts in this series is part of my collection, produced by MidJourney.  Linked here.)

Today’s business leaders find themselves navigating a world in which artificial intelligence (AI) plays an increasingly pivotal role. Among the various types of AI, generative AI – the kind that can produce novel content – has been a game changer. One such example of generative AI is OpenAI’s ChatGPT. Though it’s a powerful tool with significant business applications, it’s also essential to understand its limitations and potential pitfalls.

1. What are Generative AI and ChatGPT?
Generative AI, a subset of AI, is designed to create new content. It can generate human-like text, compose music, create artwork, and even design software. This is achieved by training on vast amounts of data, learning patterns, structures, and features, and then producing novel outputs based on what it has learned.

In the realm of generative AI, ChatGPT stands out as a leading model. Developed by OpenAI, GPT, or Generative Pre-trained Transformer, uses machine learning to produce human-like text. By training on extensive amounts of data from the internet, ChatGPT can generate intelligent and coherent responses to text prompts.

Whether it’s crafting detailed emails, writing engaging articles, or offering customer service solutions, ChatGPT’s potential applications are vast. However, the technology is not without its drawbacks, which we’ll delve into shortly.

2. Strategic Considerations for Business Leaders
Adopting a generative AI model like ChatGPT in your business can offer numerous benefits, but the key lies in understanding how best to leverage these tools. Here are some areas to consider:

  • 2.1. Efficiency and Cost Savings
    Generative AI models like ChatGPT can automate many routine tasks. For example, they can provide first-level customer support, draft emails, or generate content for blogs and social media. Automating these tasks can lead to considerable time savings, freeing your team to focus on more strategic, creative tasks. This not only enhances productivity but could also lead to significant cost savings.
  • 2.2. Scalability
    One of the biggest advantages of generative AI models is their scalability. They can handle numerous tasks simultaneously, without tiring or requiring breaks. For businesses looking to scale, generative AI can provide a solution that doesn’t involve a proportional increase in costs or resources. Moreover, the ability of ChatGPT to learn and improve over time makes it a sustainable solution for long-term growth.
  • 2.3. Customization and Personalization
    In today’s customer-centric market, personalization is key. Generative AI can create content tailored to individual user preferences, enhancing personalization in your services or products. Whether it’s customizing email responses or offering personalized product recommendations, ChatGPT can drive customer engagement and satisfaction to new heights.
  • 2.4. Innovation
    Generative AI is not just about automating tasks; it can also stimulate innovation. It can help in brainstorming sessions by generating fresh ideas and concepts, assist in product development by creating new design ideas, and support marketing strategies by providing novel content ideas. Leveraging the innovative potential of generative AI could be a game-changer in your business strategy.

3. The Pitfalls of Generative AI
While the benefits of generative AI are clear, it’s essential to be aware of its potential drawbacks and pitfalls:

  • 3.1. Data Dependence and Quality
    Generative AI models learn from the data they’re trained on. This means the quality of their output is directly dependent on the quality of their training data. If the input data is biased, inaccurate, or unrepresentative, the output will likely be flawed as well. This necessitates rigorous data selection and cleaning processes to ensure high-quality outputs.
    Employing strategies like AI auditing and fairness metrics can help detect and mitigate data bias and improve the quality of AI outputs.
  • 3.2. Hallucination
    Generative AI models can sometimes produce outputs that appear sensible but are completely invented or unrelated to the input – a phenomenon known as “hallucination”. There are numerous examples in the press of false statements or claims made by these models, ranging from the funny (like claiming that someone ‘walked’ across the English Channel) to the somewhat frightening (claiming someone committed a crime when, in fact, they did not). This can be particularly problematic in contexts where accuracy is paramount. For example, if a generative model hallucinates while generating a financial report, it could lead to serious misinterpretations and errors. It’s crucial to have safeguards and checks in place to mitigate such risks.
    Implementing robust quality checks and validation procedures can help. For instance, combining the capabilities of generative AI with verification systems, or cross-checking the AI outputs with trusted data sources, can significantly reduce the risk of hallucination.
  • 3.3. Ethical Considerations
    The ability of generative AI models to create human-like text can lead to ethical dilemmas. For instance, they could be used to generate deepfake content or misinformation. Businesses must ensure that their use of AI is responsible, transparent, and aligned with ethical guidelines and societal norms.
    Regular ethics training for your team, and keeping lines of communication open for ethical concerns or dilemmas, can help instill a culture of responsible AI usage.
  • 3.4. Regulatory Compliance
    As AI becomes increasingly pervasive, regulatory bodies worldwide are developing frameworks to govern its use. Businesses must stay updated on these regulations to ensure compliance. This is especially important in sectors like healthcare and finance, where data privacy is paramount. Not adhering to these regulations can lead to hefty penalties and reputational damage.
    Keep up-to-date with the latest changes in AI-related laws, especially in areas like data privacy and protection. Consider consulting with legal experts specializing in AI and data to ensure your practices align with regulatory requirements.
  • 3.5 AI Transparency and Explainability
    Generative AI models, including ChatGPT, often function as a ‘black box’, with their internal workings being complex and difficult to interpret.
    Enhancing AI transparency and explainability is key to gaining trust and mitigating risks. This could involve using techniques that make AI decisions more understandable to humans or adopting models that provide an explanation for their outputs.
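One concrete way to implement the cross-checking safeguard mentioned under hallucination (3.2) is a post-generation validation gate: each statement in the model’s output is matched against a store of trusted reference facts, and anything unmatched is routed to human review. The naive sentence splitting, the word-overlap rule, and the sample data below are illustrative assumptions, not a production design:

```python
# Illustrative post-generation check: match each sentence of a model's
# output against trusted reference statements and route unverified
# sentences to human review. The naive sentence splitting, the
# word-overlap rule, and the sample data are illustrative assumptions.

def overlap_ratio(sentence: str, reference: str) -> float:
    """Fraction of the sentence's words that appear in the reference."""
    s = {w.lower() for w in sentence.split()}
    r = {w.lower() for w in reference.split()}
    return len(s & r) / len(s) if s else 0.0

def verify_output(output: str, trusted_facts: list[str], threshold: float = 0.5):
    """Label each sentence 'verified' or 'needs human review'."""
    results = []
    for sentence in (s.strip() for s in output.split(".")):
        if not sentence:
            continue
        verified = any(overlap_ratio(sentence, fact) >= threshold
                       for fact in trusted_facts)
        results.append((sentence, "verified" if verified else "needs human review"))
    return results

trusted = ["Revenue in 2022 was 4 million dollars"]
draft = ("Revenue in 2022 was 4 million dollars. "
         "The company walked across the English Channel.")
for sentence, status in verify_output(draft, trusted):
    print(f"{status}: {sentence}")
```

In practice the trusted store would be a database or retrieval index, and the matching would use entity extraction or embedding similarity rather than word overlap; the point is simply that unverifiable claims should never pass through silently.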

4. Navigating the Generative AI Landscape: A Step-by-Step Approach
As generative AI continues to evolve and redefine business operations, it is essential for business leaders to strategically navigate this landscape. Here’s an in-depth look at how you can approach this:

  • 4.1. Encourage Continuous Learning
    The first step in leveraging the power of AI in your business is building a culture of continuous learning. Encourage your team to deepen their understanding of AI, its applications, and its implications. You can do this by organizing workshops, sharing learning resources, or even bringing in an AI expert (like myself) to educate your team on the best ways to leverage the potential of AI. The more knowledgeable your team is about AI, the better equipped they will be to harness its potential.
  • 4.2. Identify Opportunities for AI Integration
    Next, identify the areas in your business where generative AI can be most beneficial. Start by looking at routine, repetitive tasks that could be automated, freeing up your team’s time for more strategic work. Also, consider where personalization could enhance the customer experience – from marketing and sales to customer service. Finally, think about how generative AI can support innovation, whether in product development, strategy formulation, or creative brainstorming.
  • 4.3. Develop Ethical and Responsible Use Guidelines
    As you integrate AI into your operations, it’s essential to create guidelines for its ethical and responsible use. These should cover areas such as data privacy, accuracy of information, and prevention of misuse. Having a clear AI ethics policy not only helps prevent potential pitfalls but also builds trust with your customers and stakeholders.
  • 4.4. Stay Abreast of AI Developments
    In the fast-paced world of AI, new developments, trends, and breakthroughs are constantly emerging. Make it a point to stay updated on these advancements. Subscribe to AI newsletters, follow relevant publications, and participate in AI-focused forums or conferences. This will help you keep your business at the cutting edge of AI technology.
  • 4.5. Consult Experts
    AI implementation is a significant step and involves complexities that require expert knowledge. Don’t hesitate to seek expert advice at different stages of your AI journey, from understanding the technology to integrating it into your operations. An AI consultant or specialist can help you avoid common pitfalls, maximize the benefits of AI, and ensure that your AI strategy aligns with your overall business goals.
  • 4.6. Prepare for Change Management
    Introducing AI into your operations can lead to significant changes in workflows and job roles. This calls for effective change management. Prepare your team for these changes through clear communication, training, and support. Help them understand how AI will impact their work and how they can upskill to stay relevant in an AI-driven workplace.

In conclusion, navigating the generative AI landscape requires a strategic, well-thought-out approach. By fostering a culture of learning, identifying the right opportunities, setting ethical guidelines, staying updated, consulting experts, and managing change effectively, you can harness the power of AI to drive your business forward.

5. Conclusion: The Promise and Prudence of Generative AI
Generative AI like ChatGPT carries immense potential to revolutionize business operations, from streamlining mundane tasks to sparking creative innovation. However, as with all powerful tools, its use requires a measured approach. Understanding its limitations, such as data dependency, hallucination, and ethical and regulatory challenges, is as important as recognizing its capabilities.

As a business leader, balancing the promise of generative AI with a sense of prudence will be key to leveraging its benefits effectively. In this exciting era of AI-driven transformation, it’s crucial to navigate the landscape with a keen sense of understanding, responsibility, and strategic foresight.

If you have questions or want to identify ways to enhance your organization’s AI capabilities, I’m happy to chat. Feel free to reach out to me at jfuqua@ccpace.com or connect with me on LinkedIn.

Pt 1 of a two-part series

Effective communication remains at the very heart of team efficiency. Entire business models are built on improving team communications; look no further than Salesforce’s acquisition of Slack. Microsoft Teams, Basecamp, SharePoint, Zoom, Webex, instant messenger, and text are just a few of the frequently used communication tools in the workplace. Meeting facilitation is now a ‘service offering’ – take a moment and search LinkedIn… perhaps you’ll get more than the 7,700 results that I returned!

Yet here we are in this post-COVID, hybrid/remote work environment, and one of the most effective and proven communication channels has been cast aside. COVID brought most technology work into the home somewhat permanently, and the overscheduling of meetings proliferated. Thanks were offered to higher powers when a meeting gracefully ended 50 minutes after the hour, allowing a few minutes to check on the kids, feed the whining dog, or run to the bathroom—really anything other than staring at an array of faces staring back at their screens. This also brought the feeling that, with schedules so solidly stacked, the last thing that made sense was to just call someone. It felt intrusive and presumptuous.

While COVID remains in circulation, the work world is transitioning, with some firms fully remote forever, others hybrid with overlap-day mandates, and others fully back to pre-pandemic norms.  At CC Pace, we remain convinced that flexibility is critical to our employees’ success, but we are also strong believers that face-to-face time is time well spent. I’m reminded of my first job in technology working for one of the Big 4 firms; at my first project, if I hadn’t had the opportunity to strategically place myself at a senior developer’s desk as his day started to ask a few questions and get a few answers, I’d have been sunk. The informal communication channel is critical—and I couldn’t imagine posting my question to him on Teams (even if it had existed back then), nor would he have answered! So how can that experience be replicated within the modern hybrid/remote work environment?

Virtual stand-ups remain useful. Pair programming has its place.  Yet we see questions left overnight out of a desire not to bother a fellow developer. This deference carries a huge cost to team productivity as “stuck developer time” ticks away to no effect. One of the most basic behaviors our Agile coaches press for is pushing developers to ‘pick up the phone and call’.

This is Sachin, picking up with the perspective of a Sr. Scrum Master and Agile coach. I have seen a shift in mindset, with team members content to complete only their part in a user story. It’s more of a “throw it over the wall” kind of mindset. I usually relate it to a very popular British party game we used to play as kids: “Pass the parcel.” The parcel, which is the user story, is passed on from one person to another until the music stops, which is when the sprint ends. The result is an incomplete story that just moves from one sprint to another.  The concerning part is that team members are not willing to take responsibility for a user story. A simple question, such as “can you finish the user story in a sprint?”, goes unanswered. Take, for example, a backlog refinement session: this session is productive when team members communicate and ask questions. But when team members treat these sessions as a waste of time and just want to get through them, assumptions are made without proper analysis, leading to improper sizing and stories remaining “not done”. To some extent, teams and programs have started to blame the very process for missing a deadline.

Effective interactions remain paramount for Agile teams.  Text, email, and other one-way transmissions are efficient, but they are not always effective.

In Part II of this series, Mike Wittrup will talk about a framework that CC Pace uses to assess and improve team communication protocols in the co-working world that we all live in.

In the meantime, we remain at your service in providing tailored approaches to driving business agility.  For over 20 years we’ve earned clients’ trust across all stages of the Agile journey. Give us a call!

In early 2022, my wife and I noticed a slight drip in the kitchen sink that progressively worsened. We quickly discovered that if we adjusted the lever just right, the drip would subside for a while. Right before the holidays, we had an issue with our water heater that wasn’t so livable, so we called an expert. After the plumber finished up with our hot water tank, we asked him to check our sink. In 5 minutes, he fixed the problem that had inconvenienced us for months! We had become so used to our workaround that we didn’t even realize how much time we spent in frustration trying to get the faucet handle just right (let alone double checking it all hours of the day and night) when we could have resolved the issue in just a few minutes of effort from an expert! 

Sometimes, our workarounds are only efficient in our minds. They may save us time or a few dollars upfront, but do they save us anything in the long run? In my case, the answer was a resounding ‘no’. 

In our professional lives, we’re trained to seek out our own solutions for the ‘leaky faucets’ and typically only bring in an expert when we encounter a major problem that we can’t live with. For credit unions, the ‘water heater’ problems are typically in the front office, as they live and die by the member experience. Having the right technology and interface (along with a myriad of other things) tends to be the core focus when it comes to investing in process and technology improvements.

Where are the ‘leaky faucets’ usually located? The back office. Temporary workarounds are created and then become standard practice; these workarounds are largely unknown to the broader organization, time-consuming, and surprisingly easy to correct. They add up and lead to frustrations and inefficiencies, just like my family experienced with our sink. I recently sat down with Mike Lawson of CU Broadcast and John Wyatt, CIO of Apple Federal Credit Union, to discuss the value of a back-office assessment. 

You can check out a clip of that conversation here:

To see the entire segment, click here (just scroll down to the bottom of the page). I enjoyed speaking with Mike and John and encourage everyone in the credit union arena to subscribe to CU Broadcast if you haven’t already – it’s a great show, and one I’ve enjoyed for years. 

So, when is the last time you checked your faucets? We’d love to hear from you – reach out to me if you have questions or want to learn more about maximizing your back-office efficiency.  

Data science, in simple terms, takes large amounts of data and breaks it down to solve a problem or determine a specific pattern. Business analytics is used to evaluate data and related statistics to gain perspective on trends and interpretations for making organizational decisions.

While both data science and business analytics professionals draw insights from data using statistics and software tools, deciphering between the two can be complicated. This article does a good job describing the key differences between them and the important role each plays in the way data is analyzed.

The Difference Between Data Science And Business Analytics

PI planning is considered the heartbeat of Scaled Programs. It is a high-visibility event that takes up considerable resources (and yes, people too). It is critical for organizations to realize the value of PI planning; otherwise, leadership tends to lose patience and give up on the approach, leading to the organization sliding back on its SAFe Agile journey. There are many reasons PI planning can fall short of achieving its intended outcome. For quick readability, I will limit the scope of this post to the following five reasons, which I have found to have the most adverse effect.

  1. Insufficient preparation
  2. Inviting only a subset (team leads)
  3. Lack of focus on dependencies
  4. Risks not addressed effectively
  5. Not leaving a buffer

Let’s do a deeper dive and try to understand how each of the above anti-patterns in the SAFe implementation can impact PI planning.

Insufficient Preparation: By preparation, I don’t mean the event logistics and the planning that goes along with them. The logistics are very important, but I am focusing on the content side. Often, the Product Owner and team members are entirely focused on current PI work, putting out fires, and scrambling to deliver on the PI commitments, so much so that even thinking about a future PI seems like an ineffective use of time. When that happens, teams often learn about the PI scope almost on the day of PI planning, or a few days before, which is not enough time to digest the information and provide meaningful input. PI planning should be an event where people from different teams/programs come together and collaboratively create a plan for the PI. To do that, participants need to know what they are preparing for and have time to analyze it, so that when they come to the table to plan, the discussions are not impeded by questions that require analysis. Specifically, teams should know well in advance what the top 10 features to prioritize are, what the acceptance criteria are, and which teams will be involved in delivering those features. This gives the involved teams a runway to analyze the features, iron out any unknowns, and come to the table ready to have a discussion that leads to a realistic plan.

Inviting only a subset: As I said in the beginning, PI planning is a high-cost event. Many leaders see it as a waste of resources and choose to include only team leads/SMs/POs/Tech Leads/Architects and managers in the planning. This is more common than you might think. It might seem obvious why this is not a good practice, but let’s do a deep dive to make sure we are on the same page. The underlying assumption behind inviting a subset of people is that the invitees are experts in their field and can analyze, estimate, plan, and commit to the work with high accuracy. What’s missing from that assumption is that they are committing on behalf of someone else (the teams that are actually going to perform the work) with entirely different skill levels and understanding of the systems, organizational processes, and people. The big gap that emerges from this approach is that work analyzed by a subset of folks tends not to account for quite a few complexities in the implementation, and the estimate is often based on the expert’s own assessment of effort. Teams do not feel ownership of the work, because they didn’t plan for or commit to it, and eventually the delivery turns into a constant battle of sticking to the plan and putting out fires.

Lack of focus on dependencies: The primary focus of PI planning should be the coordination and collaboration between teams and programs. Effectively identifying the dependencies that exist between teams, and collaborating to resolve them, is a major part of achieving a plan with higher accuracy. However, teams sometimes don’t prioritize dependency management highly enough and focus more on team-level planning: writing stories, estimating, adding them to the backlog, and slotting them into sprints. The dependencies are communicated, but the upstream and downstream teams don’t have enough time to actually analyze each dependency and make commitments with confidence. The result is a PI plan with dependencies communicated to the respective teams but not fully committed. Or, even worse, some dependencies are missed entirely, only to be identified later in the PI when the team takes up the work. A mature ART prioritizes dependencies and uses a shift-left approach to begin conversations, capture dependencies, and give teams ample time to analyze them and plan for meeting them.

Risks not addressed effectively: During PI planning, the primary goal of program leadership should be to actively seek out and resolve risks. I will acknowledge that most leaders do try to resolve risks, but when teams bring up risks that require tough decisions, a change in prioritization, or a hard conversation with a stakeholder, program leadership is not swift to act and make it happen. The risk gets “owned” by someone who will have follow-up action items to set up meetings to talk about it. This might seem like the right approach, but it ends up hurting the teams that are spending so much time and effort to come up with a reasonable plan for the PI. There is nothing wrong with “owning” risks and acting on them in due time; however, during PI planning, time is of the essence. A risk that is not resolved right away can lead to plans based on assumptions. If the resolution, which happens at a later date, turns out to be different from the original assumption made by the team, it can force changes to the plan and usually ends up putting more work on the team’s plate. The goal should be to completely “resolve” as many risks as possible during planning, and not to avoid the tough conversations and decisions necessary to make that happen.

Not leaving a buffer: We all know that trying to maximize utilization is not a good practice, and most leaders encourage teams to leave a buffer when they set the planning context on the first day. In practice, though, most teams have more in the backlog than they can accomplish in a PI. During the two days of planning, it is usually a battle to fit in as much work as possible to make the stakeholders happy. For programs that are just starting to use SAFe, even the IP sprint gets eaten up by planned feature development work. One of the root causes is a false sense of accuracy in the plan. Teams tend to forget that this is a plan for 5 to 6 sprints spanning a quarter. A one-sprint plan can be expected to have a higher level of accuracy because of the shorter timebox, smaller scope, and more refined stories. However, when a program of more than 50 people (sometimes close to 150) plans a scope full of interdependencies, expecting the same level of accuracy is a recipe for failure. To keep the plan realistic, teams should leave the needed buffer so they can adjust course when changes occur.

As I mentioned at the start of this post, there are many ways a high-stakes event like PI planning can fail to achieve the intended outcomes. These are just the ones I have experienced first-hand.  I would love to know your thoughts and hear about some of the anti-patterns that affected your PI Planning and how you went about addressing them.


How do you move forward as an organization to achieve your vision? What’s working well? What’s holding you back or bogging things down? As a mortgage consultant who’s worked in the industry for years, I hear these questions all the time from organizational leaders. And there is help! One tool that’s very effective in helping you answer these challenging questions is an Operational Assessment. Operational assessments are a pulse check for your business. Every company can benefit from one; it provides an open, honest, and objective view of your business’s current processes, procedures, technology, people, and risks. And once you have this information, you’ll be able to create a plan and chart a path for the future.

So how do you get started on an operational assessment? Where do you find the information? Remember, the goal of an operational assessment is to provide a checkup of your organization, and there is no better way to get to the truth than talking with your employees and clients.

Most Operational Assessments include the following components, so you’ll want to frame your questions and surveys around these business areas.

So, what’s the best approach to collecting data? Staff interviews and ride-alongs are a great option, and it is through these conversations with employees that you’ll gather much of the information needed to complete the assessment. Listen carefully, because without question you’ll gain tremendous insight into the formal and informal processes and cultural norms that drive the business. For example: Does the current technology effectively support the business? Does it help employees complete their jobs, or is it a constant issue? Are leaders delivering a consistent message, uniting around common goals and direction? In short, is everyone in the boat rowing in the same direction? People are the core of the business, and it’s important to understand their feedback, comments, perspectives, and observations. Through open dialogue, you’ll uncover things that are not always visible: workarounds, work completed in a strange order, missing key items that could improve reporting, outdated policies and procedures, and so on.

For external customers, consider using interviews; to reach a broader audience, surveys are also very effective. Do customers have a positive experience when engaging with your organization? Are they satisfied with the business relationship? You’ll want to incorporate feedback from both closed and unclosed loan clients, as well as your realtor/builder partners.

An operational assessment is only as good as the honest and open feedback received, a clear view into the company’s current operations, AND leadership that is willing to listen, adjust, and apply what it learns.

This blog is a very brief overview of an operational assessment that can help you objectively determine the status of your organization. Seeing things with complete transparency helps you define intentional steps toward growth or organizational change. Once you know where you are, that knowledge can help you get where you want to be.

Have you heard of OKRs? Is your organization considering adopting OKRs? If so, this post is for you.

OKR stands for Objectives and Key Results. Andy Grove created them while at Intel, and their use has been growing ever since. The Objective is the “what” we will achieve, and the Key Results are the benchmarks we use to measure how we are doing.

OKRs have been working for organizations like Google and Intel for years. Implementing them in your organization can help drive focus and alignment around working on the right things. While anyone can read the book Measure What Matters by John Doerr to learn how to write OKRs, it is by following a tried-and-true implementation plan that OKRs truly help organizations achieve the desired focus.

According to Scaled OKRs, Inc., the following key steps should be included:

  1. Build the team
  2. Communicate
  3. Train
  4. Execute the OKR cycle
  5. Calibrate Regularly
  6. Continue to the next OKR cycle

Step 1: Build the team

Build a team to lead the implementation of OKRs. Identify a sponsor and champion. These leaders should understand how keeping OKRs visible throughout the organization will lead to success. As with any change, practicing good change management is important. Introducing OKRs is no exception. Be sure to include a change manager in your team. In addition, your team should include someone familiar with how to write OKRs to guide and mentor those new to writing them.

Step 2: Communicate

Once you have identified the team that will lead and support implementing OKRs, the next step is all about change management communications. Your first communication should occur about two months before roll-out. In it, be sure to answer the questions: Why OKRs? And why now? Aside from creating a sense of urgency to adopt, create a vision for the change and share it too. Have the communication come from leadership to show the importance of implementing OKRs. Our change manager follows the Prosci ADKAR Model. The next message should come about one month prior to roll-out and should be the formal kick-off announcement of the OKR process, followed shortly by sharing the company-level OKRs and the training workshop schedule.

Step 3: Train

Next, you’ll need to do some training. While OKRs may seem easy to write, putting pen to paper and coming up with the right OKRs can be a daunting task. A training workshop with a writing exercise will help attendees get oriented around what makes a good OKR. Here they will learn that the Objective is qualitative and the Key Results are quantitative. You may need a train-the-trainer session to enable others to assist teams in writing their OKRs. I like to share John Doerr’s “superpowers” of OKRs as a reminder. These include:

  • Focus & commit to priorities
  • Align & connect teamwork
  • Track for accountability
  • Stretch for amazing

Step 4: Execute the OKR Cycle

With company OKRs in hand and training underway, the next step is to start the OKR Cycle.

In this cycle, teams write and share their OKRs to ensure vertical and horizontal alignment.

The Enterprise Context grounds teams in the highest-level OKRs, which were developed by leadership. This gives everyone something to tie their OKRs to, sets the direction for the organization, and allows teams to co-create their localized OKRs.

The OKR Cycle looks like this:

 

The next step is for individual teams to write their localized OKRs. One of my favorite tips is that you should be able to read each OKR in the following format:

We will achieve (objective), as measured by (Key result).

Before everyone starts writing OKRs, you may want to think about where they will be kept. If you haven’t picked a tool to manage your OKRs, sharing them can become difficult. It’s easy enough to share across one or two verticals, and even one or two horizontals. However, the more widely your OKRs are implemented, the more imperative a tool becomes.

Once the teams have created their OKRs, it’s time for them to develop their “action plan,” or backlog of epics they will use to accomplish their OKRs. As part of the action plan, identify the key result owners and the frequency of review huddles. You’ll want a regular cadence of review; this step can be done at Scrum events, like the Sprint Review. The worst thing that can happen is to write OKRs and then forget about them. Finally, identify what scoring mechanism will be used.

Around the second month of the OKR cycle, check-ins and scoring occur. Many organizations follow Google’s lead when it comes to scoring. In their system, 0 is a failure, and 1 is a success. Here is a view of what the scores look like:

Figure 1: https://www.whatmatters.com/faqs/how-to-grade-okrs

You can see from this scale that a 0.7 is green; it is considered a success. This is especially true for OKRs that are ambitious and represent a real stretch for the team. A team that consistently scores 1.0 may not be setting ambitious enough OKRs.
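To make the grading concrete, here is a minimal TypeScript sketch of that 0-to-1 scale. The gradeColor helper is hypothetical; the color bands follow the whatmatters.com grading guide referenced above (0.7–1.0 green, 0.4–0.6 yellow, 0.0–0.3 red):

```typescript
// Hypothetical helper mapping a 0-to-1 OKR score to a traffic-light color,
// using the bands from the whatmatters.com grading guide.
type Grade = "green" | "yellow" | "red";

function gradeColor(score: number): Grade {
  if (score < 0 || score > 1) {
    throw new RangeError("OKR scores run from 0 to 1");
  }
  if (score >= 0.7) return "green"; // a success, even for a stretch goal
  if (score >= 0.4) return "yellow"; // progress made, but fell short
  return "red"; // failed to make real progress
}

// A key result graded at 0.7 still counts as a success:
console.log(gradeColor(0.7)); // "green"
```

A team that always lands at 1.0 on this scale is probably sandbagging its key results, which is exactly the signal the grading is meant to surface.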

 

In the timeline above, we allocated four weeks to work on the first set of OKRs and two weeks for subsequent quarters. The first draft of OKRs tends to take the longest, so be sure to allocate plenty of time for creating your first set.

One last comment about this cycle: tracking and sharing progress is an ongoing effort. All-hands meetings are great places to talk about progress. This keeps OKRs from becoming just another goal-setting activity and keeps them at the forefront of everyone’s work plans.

Step 5: Calibration

Calibration reviews should happen quarterly. This is where you gather the data and identify whether anything has changed before moving on to creating the next quarter’s OKRs. Calibration is a great time to do a retrospective on the OKRs. It is also a good time to ask questions like: Have the company-level OKRs changed? Before you start a new quarter, calibrate where you ended the current quarter and determine what, if anything, you need to update or change before writing new OKRs for the upcoming quarter.

Step 6: Continue to the next OKR cycle

Repeat Steps 4 and 5 as part of your ongoing OKR program.

As you can see, rolling out a successful OKR program takes a bit of effort, but it is well worth it. When you incorporate your OKR roll-out into a program that is well planned, you are sure to get the entire organization on board. Once in place, you can use your OKRs to measure the outcomes the organization is striving to achieve, and everyone will be aligned. You can use your OKRs to determine the right things to be working on and say no to things that don’t align with your OKRs. With a good tool, everyone can see how their work aligns with the big picture. Regular check-ins give you the opportunity to see ongoing progress or make course corrections. Most importantly, your organization will be on the path to measuring progress towards desirable outcomes. If you have questions or want to know more, just reach out to me: jbrace@ccpace.com

We’ve been using the AWS Amplify toolkit to quickly build out a serverless infrastructure for one of our web apps. The services we use are IAM, Cognito, API Gateway, Lambda, and DynamoDB. We’ve found that the Amplify CLI and platform is a nice way to get us up and running. We then update the resulting CloudFormation templates as necessary for our specific needs. You can see our series of videos about our experience here.

The Problem

However, starting with version 7 of the Amplify CLI, AWS changed the way you override Amplify-generated resource configurations (the CFT files). We found this out the hard way when we tried to update the generated CFT files directly. After upgrading the CLI and then calling amplify push, our changes were overwritten with default values – NOT GOOD! Specifically, we wanted to add a custom attribute to our Cognito user pool.

After a few frustrating hours of troubleshooting and support from AWS, we realized that the Amplify CLI tooling changed how to override Amplify-generated content. AWS announced the changes here, but unfortunately, we didn’t see the announcement or accompanying blog post.

The Solution

Amplify now generates an “override.ts” TypeScript file for you to provide your own customizations using Cloud Development Kit (CDK) constructs.

In our case, we wanted to create a Cognito custom attribute. Instead of changing the CFT directly (under the new “build” folder in Amplify), we generated an “override.ts” file using the command “amplify override auth”. We then added our custom attribute using the CDK:
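The file we ended up with looked roughly like the sketch below. Treat it as an illustration rather than a verbatim copy of our code: the “tenantId” attribute name is a placeholder, and the AmplifyAuthCognitoStackTemplate type comes from Amplify’s @aws-amplify/cli-extensibility-helper package.

```typescript
// amplify/backend/auth/<your-auth-resource>/override.ts
// Sketch only -- "tenantId" is a placeholder attribute name.
import { AmplifyAuthCognitoStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyAuthCognitoStackTemplate) {
  // Append a custom attribute to the generated user pool's schema.
  // Cognito surfaces it to clients as "custom:tenantId".
  const existingSchema =
    (resources.userPool.schema as Array<Record<string, unknown>>) ?? [];
  resources.userPool.schema = [
    ...existingSchema,
    {
      attributeDataType: 'String',
      name: 'tenantId',
      mutable: true,
    },
  ];
}
```

After editing the file, running amplify push regenerates the CFT under the “build” folder with the custom attribute merged in, so your change survives future pushes.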

Important Note: The amplify folder structure changed starting with CLI version 7. To avoid deployment issues, be sure to keep your CLI version consistent between your local environment and the build settings in the AWS console. Here’s the Amplify Build Setting window in the console (note that we’re using the “latest” version):

 

If you’re upgrading your CLI, especially to version 7, make sure to test deployments in a non-production environment, first.

What are some other uses for this updated override technique? The Amplify blog post and documentation mention examples like Cognito overrides for password policies and IAM roles for auth/unauth users. They also mention S3 overrides for bucket configurations like versioning.

For DynamoDB, we’ve found that Amplify defaults to a provisioned capacity model. There are benefits to this, but provisioned mode charges an hourly rate for capacity whether you use it or not, which is not always ideal when you’re building a greenfield app or a proof of concept. We used the amplify override tools to set our billing mode to on-demand (“pay per request”). Again, this may not be right for every use case, but here’s the override.ts file we used:
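The override itself is only a few lines. Here is a sketch of what that file looks like; it is generated with “amplify override storage”, the AmplifyDDBResourceTemplate type comes from @aws-amplify/cli-extensibility-helper, and the property names mirror the CloudFormation AWS::DynamoDB::Table schema (our exact file may have differed slightly):

```typescript
// amplify/backend/storage/<your-table-resource>/override.ts
// Sketch only -- property names follow the CloudFormation AWS::DynamoDB::Table schema.
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Switch the Amplify-generated table from provisioned capacity to on-demand.
  resources.dynamoDBTable.billingMode = 'PAY_PER_REQUEST';

  // On-demand tables must not declare provisioned throughput.
  delete (resources.dynamoDBTable as { provisionedThroughput?: unknown })
    .provisionedThroughput;
}
```

One design note: on-demand pricing trades a higher per-request cost for zero idle cost, which usually wins for spiky or low-volume proof-of-concept traffic.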

Conclusion

At first, I found this new override process frustrating since it discourages direct updates to the generated CFT files. But I suppose this is a better way to abstract out your own customizations and track them separately. It’s also a good introduction to the AWS CDK, a powerful way to program your environment beyond declarative yaml files like CFT.

Further reading and references:

DynamoDB On-Demand: When, why and how to use it in your serverless applications

Authentication – Override Amplify-generated Cognito resources – AWS Amplify Docs

Override Amplify-generated backend resources using CDK | Front-End Web & Mobile (amazon.com)

Top reasons why we use AWS CDK over CloudFormation – DEV Community

Here is our final video in the 3-part series Building and Securing Serverless Apps using AWS Amplify. In case you missed them, you can find Part 1 here and Part 2 here. Please let us know if you would like to learn more about this series!

The video below is Part 2 of our 3-part series: Building and Securing Serverless Apps using AWS Amplify.  In case you missed Part 1 – take a look at it here.  Be sure to stay tuned for Part 3!

AWS Amplify is a set of tools that promises to make full-stack, cloud-native development quicker and easier. We’ve used it to build and deploy different products without getting bogged down by heavy infrastructure configuration. On one hand, Amplify gives you a rapid head start with services like Lambda functions, APIs, CI/CD pipelines, and CloudFormation/IaC templates. On the other hand, you don’t always know what it’s generating and how it’s securing your resources.

If you’re curious about rapid development tools that can get you started on the road to serverless but want to understand what’s being created, check out our series of videos.

We’ll take a front-end web app and incrementally build out authentication, API/function, and storage layers. Along the way, we’ll point out any gotchas or lessons learned from our experience.

Recently, I read an article titled, “Why Distributed Software Development Teams Work Infinitely Better”, by Boris Kontsevoi.

It’s a bit hyperbolic to say that distributed teams work infinitely better, but distributed work is something that any software development team should consider now that we’ve all been distributed for at least a year.

I’ve worked on Agile teams for 10-15 years and had thought that they implicitly required co-located teams. I had also experienced the benefits of working side-by-side with (or at least close to) other team members as we hashed out problems on whiteboards and had ad hoc architecture arguments.

But as Mr. Kontsevoi points out, Agile encourages face-to-face conversation, but not necessarily in the same physical space. The Principles behind the Agile Manifesto were written over 20 years ago, but they’re still very much relevant because they don’t prescribe exactly “how” to follow the principles. We can still have face-to-face conversations, but now they’re over video calls.

This brings me to a key point of the article: “dispersed teams outperform co-located teams and collaboration is key”. The Manifesto states that building projects around motivated individuals is a key Agile principle.

Translation: collaboration and motivated individuals are essential for a distributed team to be successful.

  • You cannot be passive on a team that requires everyone to surface questions and concerns early so that you can plan appropriately.
  • You cannot fade into the background on a distributed team, hoping that minimal effort is good enough.
  • If you’re leading a distributed team, you must encourage active participation by having regular, collaborative team meetings. If there are team members that find it difficult to speak above the “din” of group meetings, seek them out for 1:1 meetings (also encouraged by Mr. Kontsevoi).

Luckily, today’s tools are vastly improved for distributed teams. They allow people to post questions on channels where relevant team members can respond, sparking ad hoc problem-solving sessions that can eventually lead to a video call.

Motivated individuals will always find a way to make a project succeed, whether they’re distributed, co-located, or somewhere in between. The days of tossing software development teams into a physical room to “work it out” are likely over. The new distributed paradigm is exciting and, yes, better – but the old principles still apply.

Organizations with internal cultures that are aligned with their strategies are far more effective than those without aligned cultures. Decades of data prove this.[1] For example, over the last 50 years, culture specialist Human Synergistics has compiled data on more than 30,000 organizations and it clearly shows strong correlations between specific organizational culture attributes and business performance. Yet it is common for organizations to ignore culture when trying to implement their strategies.

Agile 2 is a more mature version of Agile, and it relies on having a supporting healthy culture. In fact, analysis that Agile 2 Academy has done with Human Synergistics shows that Agile 2 ideas strongly align with what Human Synergistics calls a Constructive culture, which is the most effective kind.

When an organization decides to adopt Agile 2 (or any Agile) methods, it is common to define a set of “practices” that development teams must follow. This is an essential step, but there are some great perils in assuming this approach is enough:

  1. Many, if not most, practices require people to learn new skills, make new judgments, and behave in new ways. Practices alone are not enough.
  2. Most of the obstacles to using Agile 2 (or legacy Agile) methods actually exist outside of the development teams. These obstacles are widespread and manifest as management behaviors, lack of supporting systems that Agile teams need, and processes and procedures that make it nearly impossible for teams to operate with agility.

Peril #1 means that people will not be able to execute the practices. They will “go through the motions”—but Agile 2 (agility) is, in its essence, a replacement of step-by-step processes with just-in-time contextual decision-making. If people follow practices and make poor judgments, then the organization will suffer from ongoing bad decisions and poor outcomes. But if the organization’s culture is one that encourages people to seek safety through following procedure, rather than relying on their judgment, then they will not be willing to make judgments: they will copy what others do, and perhaps do the wrong thing.

Peril #2, that most obstacles to agility originate from beyond the teams, is seldom appreciated by organizations beginning an Agile journey. Senior leadership often views Agile as something that development teams and individual contributors do. They don’t realize the extent to which Agile—having agility—relies on having the right support systems in place and the right kinds of leadership supporting the teams.

If the organization has a culture of hands-off leadership, then people who find themselves in a leadership role will not know how to behave when leadership is needed. For example, a common situation is when managers have learned the Agile practice that teams “self organize” but do not realize that that is just a placeholder or reminder. Most teams cannot self-organize well; they need leadership. Self-organization is an aspiration, not a starting point.

The need for leadership is even more acute when one has many teams, and they need to coordinate, and resolve issues such as “How will we design the product? How will we involve real users? How are we going to integrate? How will we manage quality? How will we support our product? How will we agree on branch and merge strategies for the product as a whole?”

When people in a non-Agile organization implement Agile practices, they look for a rule book or procedure to follow, because that is what they are used to, but there isn’t one. If you were to create one, it would not work everywhere, because every Agile decision and judgment is contextual. It always depends; that is what yields agility and makes it possible for people to select the shortest path for each situation.

The above are aspects of the organization’s culture: the ability to discuss issues openly and honestly so that they can be resolved, the willingness to take risks when making a decision, and the patterns of leadership that people have learned. There are many other dimensions of culture that are essential for agility, such as the inclination to learn, the tendency to try things on a small scale before scaling up, and the acceptance of things not going perfectly the first time.

As Peter Drucker said, “Culture eats strategy for breakfast,” and that certainly is true for Agile transformations. If you don’t address your organization’s culture, your agile strategy with its new practices will fail to yield the desired outcomes, and Agile will become a source of problems instead of a driving force for business agility. The good news is that culture can be changed, with the right commitment and the right approach. Agile 2 Academy considers culture improvement to be an important element in business agility. An Agile transformation strategy that includes analyzing and improving your organization’s culture is far more likely to succeed than simply adopting a set of agile practices or frameworks and hoping for the best.

[1] The best-selling book Accelerate documents research that makes this connection in the context of Agile and DevOps.

I recently attended a Data Connectors cybersecurity strategies conference in Reston, VA. Speakers from companies offering various security solutions shared knowledge about the security threats currently affecting the market and how to protect an IT organization against such attacks. Notably, Sophos cybersecurity sales engineer Paul Lawrence discussed Ransomware as a Service (RaaS) and how to protect against these attacks. Below is the high-level information I gathered at the conference, which I feel will help others who are unaware of this threat.

P.S. – This is just an informational blog on what RaaS is and how IT organizations can protect against this kind of attack.

What is Ransomware as a Service?

In layman’s terms, RaaS is an unusual type of software as a service, offered over the internet by criminals, used to attack IT systems and collect a ransom.

In 2018, 53% of organizations were hit by ransomware, and about a third of them paid a ransom to recover from the attack.

How does it work?

Suppose I am a bad guy who wants to hack machines, data, and information, but doesn’t want to reveal my identity – and I want to be paid a ransom for the hack.

I can use RaaS (Ransomware as a Service).

I register an account, providing the bank details for where I want the ransom paid. All the information I provide to this service platform will (presumably) be kept safe and untracked.

Next, I download the malware from the service platform and start infecting machines. Once machines are infected, I provide details about where victims can pay the ransom to recover from the attack.

Figure 1: Shows how RaaS services are hosted on the web with their malicious intent. (Image downloaded from Google)

Figure 2: Another RaaS model where you can purchase the malicious software online. (Image downloaded from Google)

Now anybody can be a hacker using a RaaS service, since malicious actors have created various models to attack any IT system. All you need to do is follow the step-by-step guidelines they provide.

How do RaaS providers make revenue?

The RaaS provider collects the ransom, through its payment system, from the organizations or individuals who were attacked. Once the full ransom is paid, a share of that money goes to the criminal who registered for the service and launched the attack.

It’s basically a win-win for both the RaaS provider and the malicious actor who used the service to attack the IT systems of an organization or individual.

Types of Ransomware attacks

Two types:

  1. Traditional ransomware attack: The attack is automated and doesn’t need manual intervention, so it can spread rapidly across the globe. WannaCry is the most widely known traditional ransomware variant; it infected nearly 125,000 organizations in over 150 countries.
  2. Targeted ransomware attack: A well-planned, manually targeted attack on a network and the computers on it. The RobbinHood variant was used in the Baltimore ransomware attack, which compromised most of Baltimore’s government computer systems; 13 bitcoins was the ransom demanded to unlock the computers.

Preventing Ransomware attacks

Ransomware attacks are getting more targeted. One of the primary attack vectors for ransomware is Remote Desktop Protocol (RDP):

  1. Lock down RDP
    1. Use strong passwords.
    2. Do not disable Network Level Authentication (NLA), as it offers an extra authentication layer.
    3. To learn more, please go to Malwarebytes Labs.
  2. Patch to prevent privilege elevation.
  3. Limit RDP access to the users who really need it.
  4. Secure your network from both the outside and the inside.
  5. Have a disaster recovery plan for the aftermath of an attack.
  6. Ask yourself: “Do we really need remote access?”

Selfie taken at the Data Connectors cybersecurity event 😊

I recently came across a blog written by a former developer at Oracle.

The author highlighted the trials and tribulations of maintaining and modifying the codebase of the Oracle (version 12) database. This version, by the way, is not some legacy piece of software; it is the current major version running all over the world. According to the author, the codebase is close to 25 million lines of C code and is described as “unimaginable horror”. It is riddled with thousands of flags, multitudes of macros, and extremely complex routines added by generations of developers trying to meet various deadlines. According to the author, this is a common experience for an Oracle developer at the company (their words, not mine):

  • Start working on a new bug.
  • Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
  • Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug. 
  • Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion. 
  • Go home. Come the next day and work on something else. The tests can take 20 hours to 30 hours to complete. 
  • Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug. 
  • Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours. 
  • Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right. 
  • Finally, one fine day, you succeed with zero failing tests.
  • Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix. 
  • Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on. 
  • After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch. 

Millions of tests 

As I read this post, I started getting a little nervous, being a software developer and currently interacting with an Oracle 12 database. Quickly it became clear that millions of tests were acting as a true line of defense against the release of disastrous software. It was interesting to read that the author was diligent in adding tests not just because it was his job, but out of concern for ensuring that future developers will not break the bug fix being added. There are probably many ways of streamlining the development lifecycle described by the author, and maybe it is a topic for another blog, but it seems that having hundreds or thousands of failed tests is infinitely better than the alternative. 

The black box 

Product owners and business stakeholders are not the only people in an organization who may view their codebase as a black box; developers may as well. As the number of lines of code grows over time, no amount of institutional knowledge can cover the various nuances and complexities that new features introduce. When new developers are added to a project, the value of having automated tests becomes even more pronounced. Organizations should strive to remove the perception of the code as a “black box” as much as possible. Failure to do so will result in a loss of confidence in the quality and viability of the products and features being developed, not to mention place an unnecessary burden on test teams.

Legacy Systems 

I have heard horror stories from colleagues and friends about maintaining “legacy” systems. One question that comes to mind: if the system is still being modified, is it really a legacy system, or is it just an old system that is still updated? In my experience, when a bug fix or a new feature is added to an older system, the cost of not automating the corresponding tests most often shows up as some other, seemingly unrelated bug being introduced. Automating tests can be very difficult, especially in old systems: there may not be any testing framework that can easily be plugged in. In such cases, I would advocate writing your own testing framework. It may not be as sophisticated as some of the commercial or open-source packages, but it will help immensely. Test automation in older systems can also ensure a smooth retirement of obsolete features that keep the codebase bloated and more complex than it needs to be.

Flaky Tests 

In my experience, having more automated tests is better than having fewer. Over time, some tests may become redundant as new ones are added. This is not the worst problem to have and can generally be managed as technical debt. What is worse is the presence of flaky tests: tests that flip between passing and failing from day to day with no clear explanation. These are a significant time sink and erode confidence in the product being developed. Getting rid of such tests should be a top priority. Ideally, they should be rewritten and made more robust, but in some cases they may simply be removed, as they do not reliably prove that the software is working as it should.
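
One simple way to surface flaky tests before they erode confidence is to re-run each test several times and flag any whose outcome is inconsistent. A hedged sketch in Python (the function and test names are mine, purely for illustration):

```python
# Detect flaky tests by running each one several times and flagging
# any test whose pass/fail outcome varies across runs.
def is_flaky(test_fn, runs=5):
    """Return True if test_fn passes on some runs and fails on others."""
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add(True)
        except AssertionError:
            outcomes.add(False)
    return len(outcomes) > 1

# A deterministic test is never flaky...
def stable_test():
    assert sum([1, 2, 3]) == 6

# ...while one that depends on hidden mutable state is.
_counter = {"n": 0}
def flaky_test():
    _counter["n"] += 1
    assert _counter["n"] % 2 == 1  # passes only on odd-numbered runs
```

Running a suite through a checker like this periodically (or on a nightly build) makes quarantine decisions evidence-based rather than anecdotal.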

Conclusion 

Test automation is not new, but it still appears to be lacking in many systems. Organizations should embrace the cost of test automation as part of the cost of developing new features and modifying existing ones. Doing so will help build confidence in ensuring quality for product owners as well as developers. 

Happy testing! 

We only have to take a look around to know that while great strides have been made in providing technology and applications to help customers with their everyday lives, there is still a long way to go. I don’t mean until life is fully automated; it is more about refining what is already in use so that it fully serves its purpose and the public.

Issues customers encounter can range from a bifurcated process that still needs a combination of personal touch and automation, to technology geared toward some but not all customers, to a limited-scope solution that addresses only part of the customer’s needs (and wants). While the assumption is that it is always quicker to do something online, there are times when that is a fallacy.

Many organizations are realizing there is a gap in their offerings, and some have created customer experience (CXP) officers or departments specifically to address the customer experience. The goal of CXP is to “delight” the customer by designing interactions that place the customer’s needs first.

These CXP departments are still nascent in many organizations, but the concept has gained a foothold and momentum. Their focus goes beyond customer service or application usability: they are looking at any and every interaction the customer may have with the organization, across channels and technology and throughout the entire process for their various customer categories, using journey maps to chart those interactions. In a recent American Banker article, “Where is everyone going”, Rebecca Wooters, managing director and head of global cards customer experience digital and journey strategy at Citigroup, was quoted saying, “Each journey has a starting point or multiple starting points and an intended outcome… What is everything happening in between those two spots, and are we doing what we need to do for the customer to provide a seamless, frictionless experience?”

Allowing a customer to begin a process at the branch, transition to a mobile application to enter their information, and then reach out to the customer support line to continue the process without having to explain their entire situation, re-enter data, or make any other duplicative effort would be a nirvana of sorts for many organizations. That seamless experience is what organizations strive for but, in the vast majority of cases, have yet to achieve. It is a world where the tools and information the customer wants and needs are provided how they want them and when they need them. This foundation will very likely encourage customers to be more independent through self-service transactions, as they will have confidence that they can get the answers they need or the support they want without wasting time and effort on duplicative explanations or repetitive data entry.

Does your company have a Customer Experience Division?  What changes have been introduced due to this group’s activities? We would love to hear from you!

Can you imagine a world without engineering? It’s a tougher question to answer than you might think. Many people do not actually know the extent to which we rely on engineering for our daily lives to function, and the amount of work that has gone into it by different types of engineers.

The discipline of engineering is one of the oldest, arguably as old as civilization itself. The first engineers were those who developed the lever, the pulley and the inclined plane. Egyptian engineers designed and built the Pyramids, and Roman engineers conceptualized the famous aqueducts. Today, engineering covers a broad range of disciplines, all devoted to keeping the “engine” running — a world without engineering would soon come to a standstill.

In a recent conversation, I answered the usual “What does CC Pace do?” question with my standard response regarding our services, to which my acquaintance replied, “Oh, you’re process engineers!” The response was compact, narrowly focused, and remarkably spot on. With people processes and systems processes at the heart of much of what we at CC Pace do, yes, we are process engineers, yet it had never occurred to me to describe us as such.

Quoting liberally from Wikipedia, process engineering focuses on the design, operation, control and optimization of a series of interrelated tasks that, together, transform the needs of the customer into value-added components and outputs. Systems engineering is a close corollary, bringing interdisciplinary thinking to the design and management of complex systems, beginning with discovering the real problems that need to be resolved and finding solutions to them. That, in a nutshell, is what we do here at CC Pace.

Some folks make the mistake of thinking business processes are in place only to ensure internal controls remain strong and to hold people accountable for what they are doing. In fact, business processes constitute all the activities your company engages in—using people, technology, and information—to carry out its mission, measure performance, serve customers, and address the inevitable challenges that arise while doing so. Processes determine the effectiveness and efficiency of your company’s operations, the quality of your customers’ experience and, ultimately, your organization’s financial success.

At CC Pace, we pride ourselves on achieving organizational excellence by being the industry leader in business process and technology engineering. We are dedicated to driving innovation and delivering exceptional quality in everything we do. As systems and process engineers, we help our clients streamline, standardize and improve their processes to retain a competitive edge. You see problems; we see solutions.

With Star Trek: Discovery’s television debut rapidly approaching, I can’t help but reflect on the many valuable lessons on project management I took away from the original series, Star Trek, and its successor, Star Trek: the Next Generation. Those two TV series counted on the strengths of their ships’ captains, James T. Kirk and Jean-Luc Picard, respectively, not only to help entertain viewers, but to provide fascinating insights into the characteristics of leadership. In so doing, the shows created timeless archetypes of starkly contrasting project management styles.

Kirk and Picard both had the title “Captain,” yet could not have been more different. In project terms, both series featured a Starship Captain operating as Project Manager, Project Sponsor and Project Governance all rolled into one. Yet despite common responsibilities, they carried them out very differently, each with different strengths and weaknesses; one would often succeed in roles where the other would fail, and vice versa. Each episode was like a project, and on the show, thanks to the writers, each ship’s captain seemed always to get a “project” he was well suited for. But that only happens in real life if someone makes it happen, and most real-life projects don’t have writers working on the scripts.

Kirk and Picard were polar opposites in management style in many ways, and most people involved with project execution have common traits with each. Suppose you are like one of them – are you a Kirk or a Picard? – what should you do to maximize your strengths and minimize your weaknesses?  Suppose one is on your project – how do you ensure they are put to their best use?  How should you be using them and what role should they play?  What would they be good at and not so good at?

To get down to basics, the biggest difference between the two is Kirk is “hands on” versus Picard is “hands off.”

Kirk is clever and energetic. Because he is “hands on,” he is always part of the “away team” – the group of people who “beam down” to wherever this week’s episode takes place. Here, senior management is typically also the project team. When additional work was found, Kirk did it himself, or with the existing team.

Picard is visionary and a delegator.  He is more of a leader than a manager.  He set objectives, made decisions and, obviously “hands off,” told Number One to “make it so.” Number One led the project team; Picard rarely went himself.

The best use of a Kirk is as a project manager with delegated authority on a short-term project with a fixed deadline, like a due diligence effort requiring the current situation to be assessed and a longer-term plan of action defined to address deficiencies. Kirk’s style and authority allow the team to move quickly; if additional tasks appear, Kirk will summon enough energy to get himself and the team through them. When decisions need to be made, he makes them. He will shine. But on a long-term project, if additional scope is discovered (and it always is), Kirk will become a martyr, skipping vacations and asking his team to do the same. He will fail at some of his primary tasks – staffing the project properly, for example – and will inadvertently overstate the current state of the project to his stakeholders while underestimating the risks. Here again, it only works on TV.

The best use of Picard is as a project sponsor – he has the vision and needs you to implement it.  The trick will be keeping him involved.  On a short-term project he wouldn’t be your first choice for a PM – unless it was a subject that he cared deeply about – because he might delegate without being very involved.

If Number One got into trouble, but didn’t know it (e.g., the boiling frog parable: as the water heats up, the frog never notices until it is too late), Picard wouldn’t be providing enough oversight to know it either.  Or, if his insight was needed, then there might be a delay while waiting for him to decide.  On a long-term project, the project will need strong oversight to monitor progress and ensure engagement.  That way Picard can keep the team focused on the right things.  Picard could also be a PM on a long-term project like a process transformation.  If there was a new requirement, it would never occur to him to try and do it himself – he would go to the sponsor to explain the tradeoffs of doing or skipping the new requirement, and get the right additional staff to do it.  His management style is great for delegation and building a team, as well as developing the people on that team.

I know that both Kirk and Picard have their fans, and their project management skills work both on TV and in the movies – but only because there they always get the type of project that suits them; the writers make it so. In real life you need to be more flexible in how you use them, applying the right one, or at least the right traits, to meet your business objective. Understanding this, and acting accordingly, may be as close to having a script writer for our projects as most of us will ever get.

Boy this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day here in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to The University of Maryland, Baltimore County, for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.

Niels left a solid impression in a short amount of time here at CC Pace. In a matter of 10 weeks, he interacted with and enhanced internal processes for virtually all of CC Pace’s departments, including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), and Sales and Marketing. On his last day, I walked Niels around the office; as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around. Many folks also expressed the hope that Niels’ path and ours will soon cross again. In short, he made a very solid impression on a large group of my colleagues in a relatively short amount of time.

Back in June I gladly accepted the challenge of filling Niels’ ‘mentor’ role as he embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances his way up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing back to my days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, with the assurance of knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…)

Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was providing support to the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.

As I reminisced back to my eight months at GE, I couldn’t help but notice the similarities between my internship and a few of the ‘lessons learned’ I took away from my experience 20+ years ago, and how they compared or contrasted to my recent observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years – the technology we use every day is clearly the biggest distinction. I would be remiss not to also mention the obvious generation gap – I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’ and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.

In summary, my reminiscing back to the late 90’s certainly made my daily music choices easier for a few weeks this summer, and it also led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE some 20-odd years ago, along with how they compared or contrasted with what I observed over the last 10 weeks working side by side with our intern, Niels. These observations are based on my role as his mentor, were provided as feedback to Niels in his summary review, and are in no particular order.

Have you similarly had the opportunity to engage in both roles within the intern/mentor relationship as I have? Maybe your example isn’t separated by 20 years, but several years? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed at how even though ‘the more things change, the more they stay the same’.

 

 

A startling statistic that often gets overlooked: 70% of projects worldwide fail. Each year, more than one trillion dollars is lost to failed projects. Most importantly, statistics show that these failures are frequently not the result of a lack of technical, hardware or software capabilities. Instead, they are typically due to a lack of adequate attention to program management.

After seventeen years working in program management―implementing enterprise business strategies and technology solutions―I continue to be surprised by business leaders who misunderstand the differences between project management and program management, or simply think them two terms for the same thing. The fact is, program management and project management are distinct but complementary disciplines, each equally important to ensuring the success of any large-scale initiative.

Let’s take just a minute to level-set the roles of both. Project management is responsible for managing the delivery of a ‘singular’ project, one that has defined start and end dates and is accompanied by a schedule with a pre-defined set of tasks that must be completed to ensure successful delivery. Project management is focused on ‘output’. Program management, on the other hand, takes a more holistic approach to leading and coordinating a ‘group’ of related projects to ensure successful business alignment and organizational end-to-end execution. A program doesn’t always have start and end dates, a pre-defined schedule or tasks to define delivery. Program management is primarily responsible for driving specific ‘outcomes’, such as ensuring the targeted ROI of an initiative is achieved. Put another way, program management is basically the ‘insurance policy’ of a project, the discipline needed to make sure all the right things are done to ensure the likelihood of success.

One analogy I often use to differentiate the roles of program manager and project manager is that of a restaurant. The executive chef (project manager) works within a defined budget, makes certain the kitchen is adequately staffed and creates the menu. The executive chef provides the defined tasks, processes, tools and strategies that ensure efficient and consistent delivery of meals. The meals are a tangible deliverable (output). Overseeing the chef, the restaurant owner (program manager) provides the executive chef with a budget to work from and closely monitors the output of the kitchen. The owner makes sure each delivery and support role is adequately staffed, trained and paid (e.g., wait staff, hostess desk, dishwashers, bussers and bartender). The owner also makes certain details like music and lighting are in place to establish an appropriate ambiance, and that the right tools are available for flawless execution (utensils, glasses, napkins, water pitchers, pens and computers), while ensuring expected standards and key performance indicators are being met to achieve overall profitability targets and a great end-to-end customer experience (outcomes). The restaurant owner’s primary responsibility is to merge the tangibles with the intangibles to support successful business strategy execution.

When it comes to mortgage banking, an industry that’s known more than its fair share of failed implementations, it is critical that we start giving program management a greater priority, and ensuring that those commissioned to perform the role are equipped with the requisite skills and tools. Whether it’s adding a new imaging platform, bolting on new CRM or POS technology, or something as expansive as replacing an LOS, every enterprise initiative requires a project manager to be leading the implementation effort and a program manager focused on change management and roll-out. Consider the addition of an end-to-end imaging system. A program manager’s tool box should include strategies and frameworks to effectively manage the roadmap for each critical impact point. This would include things like training, updating policies and procedures, executing an internal change management strategy, synchronizing marketing communications, and updating key performance indicators. In some instances, the project may require staff analysis, skills assessments, compensation analysis and adjustments, or even right-sizing of the organization. All of these are key components of the program manager’s toolbox, and not generally covered within the role of a project manager.

Bringing this dialog back full-circle, program management helps reduce project failure rates by maintaining a holistic approach to guiding an organization’s successful adoption of the impending change, leaving the nuts and bolts of build-out in the hands of project management. By addressing the myriad of intangibles required to orchestrate successful adoption and acceptance of change by an organization’s personnel, program management also helps ensure that business strategies and projects remain in full alignment and ROI objectives are achievable. Preparing management and staff for the impending changes defuses fears that can send adoption off the rails and eases the transitions and realignment of resources and roles that often accompany larger initiatives.

In closing, it’s not surprising that the lines between project and program management easily get blurred. Our experience is that it is often difficult to identify a really good project manager proven capable of undertaking a large-scale effort, and even more difficult to find someone truly adept at managing all the moving parts of a program. This difficulty is even more apparent in organizations where undertaking significant projects is a relatively rare occurrence and these skills are simply not found among existing staff. While it may seem adequate to budget for a singular project manager and hope that the program elements will be attended to, managed and executed, unfortunately, “hope” is not a viable strategy when it comes to business-critical initiatives. The assignment of a skilled program manager, whether sourced internally or externally, will ultimately serve as an effective insurance policy for your project investment. In an industry where failure cannot be afforded, it’s time to stop gambling on project execution and begin implementing program management.

Many people ask: what is the distinction between an Agile adoption and an Agile transformation? The former is the adoption of an Agile method and its tools; the latter encompasses those plus the people, culture and mindset shift that must accompany the adoption for an organization to fully realize the benefit.

The real difference between adoption and transformation is that adoption fails and transformation sticks. You don’t really choose one over the other – people who fail to see the difference often do the adoption because they don’t realize that culture is what makes it really stick. Most organizations start with the adoption of a method at the team level within IT. Some will also convey the message to ‘do Agile’ from the top. However, this type of adoption rarely sticks. Middle management is lost in the shuffle, often just waiting out the change and expecting it to fail. The focus is on processes and tools – not people and interactions. The mindset of transparency, adaptation and continuous improvement is misplaced in favor of mechanics and metrics.

A true transformation requires the organization to think and ‘be Agile’. It requires the organization to look at its people, organization and culture, as well as its processes and tools. An organization will move through stages, improving and absorbing changes in each of these areas. Typically, an organization will begin with a pilot involving one or more teams. This allows the organization to see how Agile will affect current processes and roles, and helps uncover gaps and potential areas of conflict, such as central versus decentralized control.

It takes time to move through all the stages that are mapped out for the process, and much depends on the attitude and behaviors of leadership. Does leadership model the behavior they seek in others? Do they look to break down barriers to value delivery? Do they reward the team and system success rather than individual or functional success? Another important factor is that the organization knows what it is driving towards. This vision and accompanying goals are what will drive the pilot and future transformation activities. This alignment is the first step. As CC Pace continues to work with companies through this process we see the positive effects gained throughout each organization.

If you are interested in learning more about Agile Transformations, join us on Tuesday, June 13, for a free webinar on the topic! Click here to register.

As a continuation of our blog series on system selection, it’s time to discuss helpful tips to facilitate a successful product demonstration. The organization and management of the entire process requires upfront preparation. If you drive the process, your demo evaluations will be far more effective.

Demonstrations are one of the most critical components of the software selection process. Seeing a system in action can be a great learning experience. But not all demos are created equal. Let’s talk about how you can level the playing field. To make the most of everyone’s time, CC Pace recommends the following best practices for product evaluations.

Tip One – Keep your process manageable by evaluating no more than five systems. If you evaluate too many vendors, it becomes difficult to drill down deep enough into each offering. You will inevitably suffer from memory loss and start asking questions like, “which system was it that had that cool fee functionality that would be really helpful?”

Tip Two – For each software vendor, set a well thought out date and time for the on-site demo. Depending on your team’s travel schedule, try to space out the demos a few days apart so that you have time to prepare and properly analyze between sessions.

Tip Three – Logistics play a big role in understanding how a system looks and functions, so do your part to help your vendors present well. Whenever possible, arrange for a high-quality projector or large HD screen for the attendees in the room. Hard-wired internet connections are always better. There’s nothing worse than being told, “the screen issues are because of a resolution problem” or “it’s running slow because the air card only has one bar.” Providing these two items can easily remove doubts about external factors causing appearance and performance issues.

Tip Four – Involve the right people from your organization. It’s important to have executive sponsorship as well as hands-on managers involved to assess the software modules. This is also the best opportunity to get “buy-in” from all parts of your organization.

Tip Five – Be sure to head into these demonstrations knowing your key requirements. Visualize a day in the life of a loan and follow a natural progression from initial lead through funding. Jumping around causes confusion and can be difficult for the vendor.

Build a list of requirements based on the bulk of your business. Asking to see how the software handles the most complicated scenarios can send the demo down needless paths. No one wants to watch a sales person jump through a bunch of unnecessary hoops for a low-volume loan product.

If you highlight which functional capabilities are most important to your organization, the vendors can spend more time demonstrating those capabilities in their software. Communicate how you think their software can help. But be careful not to justify why something is done a certain way today; instead, focus on how it should be done in the future.

Tip Six – The easiest way to take control of the demo process is to draft demo scripts for your vendors. Start by identifying the ‘must-have’ processes that the software should automate. Don’t worry about seeing everything during this demo. Set the expectation that if the demo goes well, the vendor will likely be called back again for a deeper dive. Provide a brief description of each process and send it to the vendor participants so they can show how their software automates each process. The best vendor partners will have innovative ways to automate your processes, so give them a chance to show their approach.

As you watch the demos, keep track of how many screens are navigated to accomplish a specific task. The fewer clicks and screens, the better. Third-party integrations can significantly help with the data collection and approval process. Always have an open mind regarding different ways to accomplish tasks and don’t expect your new software to look or act just like your legacy system.

Simple scorecards should be completed immediately following each demonstration. This will make it easier to remember what you liked and disliked, and it will prove invaluable when you compare the systems side by side after the demos are complete.

One final suggestion: always request copies of the presentations. Not only will this help you remember what each system offers, it’s also useful when the time comes to create presentations for senior management.

 


I have enjoyed using analogies between baseball and software development in a few of my previous blog entries, so with Major League Baseball’s season underway, there’s no better time than now to write another baseball-oriented blog entry. But don’t fret, non-baseball fans, because this message, like so many others, applies to both life AND baseball.

In recent years, I’ve observed many software development teams engaged in long-term, multiple-release software development projects. I would classify these as projects with stable, unchanging teams, spanning a year or more, that must navigate the fluctuations of tension – followed by calm – which usually accompany projects with multiple production releases.

An interesting and alarming behavioral tendency seems to have emerged – or more likely, I’m only now noticing it – with enduring, static teams working together on projects or applications spanning multiple releases. This behavior isn’t an obvious, tangible issue like a team member missing meetings. Rather, it’s a human behavioral trait that emerges unnoticed and often isn’t uncovered, if it’s uncovered at all, until it’s too late and the damage has been done.

What is this negative tendency you may ask? And what in the world does this have to do with baseball? Well, it can be defined with many words or explanations, but for this blog, we’re using one word: LOLLYGAGGER. Trust me, as you will see, you don’t want to be called a lollygagger, as it’s definitely not a compliment or a term of endearment. And you certainly don’t want to be developing software with or managing a team full of lollygaggers.

thefreedictionary.com defines the verb ‘lollygag’ as follows: To waste time by puttering aimlessly; dawdle. This simple definition does not do proper justice to this word.

Side note: It would be prudent to mention here that this behavior doesn’t seem to manifest itself on greenfield projects or short-term engagements with newly-formed teams – arrangements that are usually permeated by high energy and team enthusiasm during the early ‘forming’ stages of team development (see Tuckman’s Stages of Group Development). This makes sense, as a lack of enthusiasm is rarely a characteristic of newly-formed teams.

So, this odd linkage came to mind when the movie Bull Durham appeared on television a few nights ago, which – aside from providing the vision for this blog entry – is one of my all-time favorites. Critically acclaimed as one of the greatest American sports movies of all time, Bull Durham is a must-see for any baseball enthusiast. The movie is based on one person’s experiences in minor-league baseball and depicts players and fans of the Durham Bulls, a minor-league baseball team residing in Durham, North Carolina.

One gains a TRUE SENSE of how lollygagging can hurt any team by watching one of my favorite scenes in the movie, which features “Crash” Davis, the wise, veteran catcher, and the Bulls’ manager, known as “Skip” (of course). The Bulls are playing awful baseball, mired in a long losing streak, and the manager has run out of patience and ideas. Which takes us to the scene inside the Bulls locker room after yet another painful loss:

Skip: “I don’t know what to do with these guys. I beg… I plead… I try and be a nice guy… I’m a nice guy.”
Crash: “Scare ‘em.”
Skip: “Huh?”
Crash: “Scare ‘em. They’re kids, scare ‘em. That’s what I’d do…”

After a chuckle, and now armed with this ingenious managerial advice, “Skip” proceeds to forcefully assemble his unsuspecting group of apathetic ballplayers into the shower area, throwing an entire rack of baseball bats into the shower after them, which certainly draws their attention (and that of those watching the movie). “Skip” then barks out an epic rant that would make Earl Weaver proud:

Skip: “You guys… You LOLLYGAG the ball around the infield. You LOLLYGAG your way down to first. You LOLLYGAG in and out of the dugout. You know what that makes you? Larry!”
Larry: “LOLLYGAGGERS!”
Skip: “LOLLYGAGGERS. What’s our record, Larry?”
Larry: “Eight and 16.”
Skip: “Eight and 16. How’d we ever win eight!?!”

So, in summary, a hilarious scene from a baseball movie taught me early on that lollygagging is not a good thing. I am now seeing that it is also a bad way to start off the early stages of your software development project. Think about it – as a team, once we clear that release hurdle, our instinct is to stop, take a deep breath, and relax. We just hit a major milestone, and more than likely the team worked some intense hours, days, weeks and sprints leading up to the actual release. (No matter how good your team is or how well you apply agile techniques, the days leading up to a release are ALWAYS more intense than those at the outset of a project.)

Picture the scene: our big release is deployed over an entire weekend, and everyone arrives to work on Monday. Whatever velocity, urgency and momentum generated and sustained through the prior release has seemingly dissipated into thin air. Because our next release is several weeks or months out, a feeling of tranquility sets in, as if the level of urgency no longer exists. We have now become…wait for it…OH NO! LOLLYGAGGERS!

This period of relaxation, or lollygagging, poses many threats to the next phase of the project.

  • That next release schedule – the one planned and completed last week and now posted on the team room wall for everyone to see? Unfortunately, it includes no extra time for, and no tolerance of, relaxation, tranquility, or LOLLYGAGGING in the early sprints of the next release.
  • Due to the team carryover (with little or no change in personnel) the work to be done in Sprint #1 of the succeeding release is also likely projected based on established velocity from earlier sprints (i.e., the previous release). A slow, unproductive start to the first few sprints will undoubtedly result in a negative cascading effect on the entire release. For example: your release plan calls for 200 points with another production deployment after 10 sprints, because your team has proven time and time again this is an achievable goal (i.e., team velocity ~20 points). Your first two sprints start slowly and end up totaling 20 points, which already puts the team in trouble and in catch-up mode. Nobody likes being in catch-up mode. And catch-up mode tends to have a snowball effect.
  • Unless the release plan provisions for it, don’t allow carryover from the previous production deployment (or any associated issues or complications) to bleed into the early phases of the new release. Lollygagging will set in if team members continually remain in the mindset of the preceding release – in other words, looking back and not forward. Make sure the page is turned quickly on the weekend of the release (i.e., over to product support) and not turned slowly in the following few weeks.
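The velocity arithmetic in the second bullet is worth making concrete. Here’s a small illustrative sketch in C# – the numbers are the hypothetical ones from above, not from any real project:

```csharp
// Illustrative numbers only, mirroring the hypothetical release above.
int totalPoints = 200;        // planned scope for the release
int totalSprints = 10;        // sprints until the next production deployment
int provenVelocity = 20;      // points per sprint, proven over prior releases

// A lollygagging start: two sprints totaling only 20 points.
int deliveredPoints = 20;
int sprintsUsed = 2;

int remainingPoints = totalPoints - deliveredPoints;    // 180
int remainingSprints = totalSprints - sprintsUsed;      // 8
double requiredVelocity =
    (double)remainingPoints / remainingSprints;         // 22.5

// To hold the date, the team must now sustain 22.5 points per sprint --
// above its proven velocity of 20 -- for every remaining sprint.
// That's catch-up mode, and it compounds with each slow sprint.
```

Two relaxed sprints don’t just cost 20 points; they quietly raise the bar for every sprint that follows.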

As mentioned earlier, this behavior isn’t usually noticeable early on, but in those last few weeks before the subsequent deployment, when things become hectic and crazy again, everyone will wish they had hustled a bit more in the early weeks of the project.

Don’t get me wrong – celebrate the occasion! We’re not robots, and I’m a firm believer in recognizing and celebrating milestones at the end of a sprint or, even better, after a successful production deployment. Recognizing those milestones and achievements together is one of my favorite aspects of Scrum. But the party, or I should say ‘mental break’, should not last for four weeks.

Here’s another way to think of it, and yes, we’ll use another baseball analogy. There is a baseball maxim which is often spoken at this time of the year. When a baseball team with high expectations opens the season with a poor start, you’ll often hear the following quote (or something similar):

“You can’t win the pennant in April, but you can certainly lose it.”

Keep that in mind when your next release starts – you can’t guarantee a successful project or production deployment in the first couple sprints of your new release, but if you come out of the gate slowly and LOLLYGAGGING, things can certainly come off the rails in a hurry. And everyone will be paying the price a few months later as the release date draws near.

The solution is simple, right? Make “Skip”, “Crash” and me proud, and just DON’T BE A LOLLYGAGGER!

Pete Rose, aka “Charlie Hustle”, was certainly not a lollygagger. He surely wouldn’t allow any of his teams to lollygag through the first few sprints of a release.                

The Durham Bulls, on the other hand, captured the true essence of lollygagging. I can assure you that in this scene, only one or two people were actually concentrating on baseball.

If I told my 6-year-old daughter, Gracie, to get out a piece of paper and a pencil, she would likely get out a pencil and a piece of paper.  But if I made the same request of my officemate, Ron, he would likely look at me funny and ask me why.  Of course, if I made the same request of my 14-year-old daughter, she would completely ignore me but that’s another story.

So why the difference in reactions from Ron and Gracie?  It really boils down to how we learn.  As children, we learn by being fed information.  During our early academic careers, we take what teachers tell us as gospel.  However, as we get older, we begin to question authority, as if it’s a rite of passage to adulthood.  I can see those of you with teenagers nodding your heads in agreement.  What I am trying to say is that adult learners need to be involved in the learning process. As adults, we need to understand why we are in a learning experience and how it’s going to make our lives better.

Um, CC Pace is an Agile coaching and training company, why are you talking about learning stuff? 

Well, I am glad that you’re paying attention.  Aside from the fact that CC Pace recently hired me to focus entirely on creating effective learning programs and I gotta do something to show them that I actually know stuff, how we learn stuff is part of how we create lean, mean, efficient machines…er, organizations.  Just because you consider yourself an Agile environment doesn’t mean that you actually are!  Waving the magic wand does not make a company Agile, although it’s kinda fun to suggest it to the senior leadership team to see if they bite.   Agile needs to be embraced at the top AND at the bottom.  Getting buy-in from the folks in the trenches means that the training needs to be created with them, very clearly, in mind.

Think of the last time you had to go to training because management said so, or because of a requirement for some regulation.  What did you learn?  Did you pick up lots of good stuff?  Would you wish it on your worst enemy?  Learning experiences that fulfill a check box are often set up for failure because we forget to explain how this information will ultimately help the learner.  If we just chalk it up to a requirement or some mandate, then we have missed an opportunity to help reinforce the content.

It’s critical that a successful learning program understand that key difference: adult learners need to be engaged with the process rather than treated as passive participants.

So how do we get the learners engaged?  Well, hopefully, you can see where I am going with this.  If we start by successfully demonstrating how this information will connect back to the participant, then we stand a much better chance of helping the learner retain the information.  Otherwise, participants are left pondering all of the other ways that they could be wasting time while still on the clock. But if the participant sees why the training will impact them, then they will submit to the learning process.

And as Johnnie Cochran once so famously said, “If the training reason is legit, then you must submit!” Ok, so Johnnie never said that, but I bet he would have if he read this post.  Ok, that might be a bit of a stretch, too.  But it’s a nice way to remind ourselves of the importance of setting up training for success. Keep this in mind when you are planning Agile training for a team within your organization.  Thanks for sticking with me this long. How about we pick this up next time with some more ideas on building engagement in your learning program?

Picture this: you’ve recently been hired as the CIO of a start-up company.  You’ve been tasked with producing the core software that will serve as the linchpin for allowing the company’s business concept to soar and create disruption in the market.  (Think Amazon, Facebook, or Uber.)  Lots of excitement combined with a huge amount of pressure to deliver.  You’ve got many decisions to make, not the least of which is whether to build an in-house team to develop the core system or to outsource it to a software development consultancy. So, how do you decide, and to whom do you turn if you do opt to outsource?

CC Pace is one of the software development consultancies that such a company might turn to, as we focus on the start-up market. Developing greenfield systems for an innovative new company is an environment that our development team members greatly enjoy.

A question that has been posed to me fairly frequently is: why would a start-up company outsource its software development? While I had my own impressions, I decided to pose the question to some CIOs of the start-up companies we’ve worked with, along with some CEOs who ultimately signed off on this critical decision.

The answers I received contained a common theme – neither approach is necessarily better, and the proper decision depends on your specific circumstances.  Many of these circumstances are interrelated; here are the four primary factors I heard the decision should depend on:

  1. Time-to-Market – It takes time to assemble a quality team, often up to 6 months or more.  Even then, it will take additional time for this group to jell and perform at its peak.  As such, the shorter the time-to-market the business needs for the initial system, the more likely you would lean toward an outsourced approach.  Conversely, if there is less time sensitivity, it makes sense to build an in-house team.  This team will not only deliver the initial system but will also be able to handle future support and development needs without requiring a hand-off.
  2. Workload Peak – For some new businesses, the bulk of the system requirements will be contained in the initial release(s) while others will have a steady, if not growing, stream of desired functionality.  If the former, hiring up to handle the initial peak workload and then having to down-size is not desirable and can be avoided with the outsourced model.  On the other hand, a steady stream of development requirements for the foreseeable future would cause you to lean towards building an in-house team from the start.
  3. Availability of Resources – While good IT resources seem scarce everywhere, certain markets are definitely tighter than others.  In addition, some CIOs have a greater network of talent that they know and can easily tap.  The scarcer your resource availability, the more likely you would lean toward calling upon outsourced providers.  Conversely, if you have ready access to quality talent, take advantage of that asset.
  4. CIO Preference – Finally, some CIOs just have a particular preference for one approach over the other.  This may simply be a result of how they’ve worked predominantly in the past and what’s been successful for them.  So, going down a path that’s been successful is a logical course to take. Interestingly, one CEO commented that his criterion in choosing a CIO would be that person’s ability to make these types of decisions based upon the business needs and not personal preference.

I would love to hear from anyone who has been (or will be) involved in this type of decision, either from the start-up side or the consulting provider side, as to whether this jibes with your experience and thinking. The one variable that no one mentioned as a factor was cost.  That surprised me a lot, and I’d also welcome any of your thoughts as to why it wasn’t mentioned.

Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.

I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.

The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.

(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)

What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.

But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.

One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.

As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?

We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.

Development of your business strategy requires a long and hard look ahead to the future. Anticipating what the industry may look like, what your customer profile may be and what technology might be available is critical.  Yet as important as looking forward is, it is equally critical for an organization to look back and analyze how you got here, what has made your company great and how you’ve managed to retain and build your customer base over the years.  The combination of these views will play a significant role in designing a powerful ‘move forward’ strategy for your organization.

In recent years the financial services industry has been heads down focused on navigating the regulatory environment, including the most widely recognized and intrusive regulation of TRID. As a result, any vision for long-term strategic planning has taken a back seat. As TRID begins its final march toward implementation, it’s high time for the industry to begin looking beyond the recent strains of compliance and begin to recall the lessons learned from the past and imagine what the future might be with the development of an effective business strategy.  Adidas learned this lesson by regaining control of their future after taking a long, hard look at their past to ensure they break the chains of recent history.  Read more about it in Strategy+Business, who wrote “How Adidas Found Its Second Wind”. It’s now time for the financial services industry to get its “Second Wind”.

Being a Product Owner is a really hard job. I have been hearing this for a long time, and it has come up in one way or another in almost all of the retrospectives over the years.  Here are a few recurring themes I hear:

Product Owners and decision-making
I often hear things like – “My Product Owner doesn’t make decisions quickly enough or by themselves. It feels like they have to check with their leaders before making even the most trivial decisions.”

My questions or thoughts revolve around this: is it really a person problem, or does it point to management not giving the Product Owner the authority to decide? Is it a management mandate that the Product Owner seek approval before making any decisions?

If that’s the case, then not getting questions answered in a timely manner can delay the completion of work and eventually has a significant impact on time to market. It is probably how the organization is structured, and the organization probably needs to shift from a ‘Do Agile’ to a ‘Be Agile’ mode.

Product Owner is not technical
When did being technical become a requirement for the Product Owner role?  A Product Owner doesn’t need to be technical. They need to answer the team’s ‘whats’, not the ‘hows’.

Going back to the Agile Principles: The best architectures, requirements, and designs emerge from self-organizing teams.

Product Owners should not be the ones providing teams the technical direction. They should be able to provide the vision to the teams.

Teams are the ones who make the technical decisions around design and architecture. The entire team and the Product Owner can look at the various design options and discuss the trade-offs, but it is the team that is responsible for how the implementation will take place and how long it will take.

Product Owner is not available to make the decisions
Being a Product Owner is a full-time job. A Product Owner should have ample time to sustainably carry out their responsibilities. If the individual is overworked, the team’s progress suffers and the resulting product may be suboptimal.

At a workshop this week, many of the Product Owner attendees were overwhelmed by the length and breadth of the Product Owner role.  They said things like, “As a Product Owner, should I be making all the decisions?” and “Is the Product Owner really the single wringable neck? But why?”

This role is by no means an easy one. Product Owners have to be constantly engaged in conversations with the stakeholders, keeping a finger on the pulse of the market and looking at trends and innovation, while at the same time being available to the team to answer all its questions.

I feel very strongly about the Product Owner making the decisions – at the team level, the Product Owner is expected to decide. They are responsible for providing answers to the team and for being its single point of contact for queries and decisions. The Product Owner should be available to the team at least for some dedicated team time or core hours every day.

You should be that leader for the team, so the team does not have to look any further.  The team can simply ask you questions, get an answer, and be sure of what is required of them to fulfill the commitment they made during sprint planning or PI planning.

But in all honesty, is it Team vs. Product Owner? Not really.
The memories of the best teams that I carry in my heart are of the ones in which all of us worked together. We were open enough to call out, “Hey, I need you for this and that,” or “I need this from you,” and “You need to show up for this meeting and be available at this time.”

The team (onsite and distributed), Scrum Master, and Product Owner were all available during core hours and would address things as they came up. Does the level or seniority of the Product Owner in the organization determine the quality of the decisions they make? Or is it how much confidence and trust their team and leadership place in them?

Before we start to worry if we have the best Product Owner in the whole world, or if we can be the best Product Owner, it’s important to remember that Product Owners, like the rest of the team, need time and support to acquire the necessary skills.

Everyone has unique strengths and weaknesses, and that’s what we bring to the team. A Product Owner might be great at providing the vision for the product and interacting with the team and stakeholders, but may not be the best at writing user stories.  In areas of inexperience or weakness, the team should pitch in, helping with the work and enabling the Product Owner’s growth.

So, ultimately what matters is the ‘We’ as in team not ‘I’ as a Product Owner.

I used to attend Agile conferences pretty frequently, but at some point I got burned out on them, and the last one I attended was a 2007 conference in Washington, DC.  This year, when the Agile Alliance conference returned to the DC region, I decided it was time to give them another try.

It’s interesting to see how things have changed since I last attended an Agile conference.  Agile 2015 felt much more stage-managed than in previous years, with its superhero party, the keynotes making at least glancing reference to the theme (the opening keynote, Awesome Superproblems, appears to have been retitled for it, since all the references in the presentation were to “wicked” problems instead of “super” problems), and attendees being routed through the vendor area to get to lunch.  It also seemed like the presentations were mostly given by “experts”, whereas previously I felt there were more presentations by community members.  I have mixed feelings about all of this, but on the whole I felt that my time was well spent.  Although I didn’t really plan it, I seem to have had three themes in mind when I picked my sessions: team building, DevOps, and craftsmanship.  Today I’ll tell you about my experiences with the team building sessions.

Two of the keynotes supported this theme: Jessie Shternshus’ Individuals, Interactions and Improvisation and James Tamm’s Want Better Collaboration? Don’t be so Defensive.  I’d heard of using the skills associated with improvisation to improve collaborative skills, but the Agile analogy seemed labored.  Tamm’s presentation was much more interesting to me.  I’m not sure he’s aware of the use of the pigs and chickens story in Scrum, but he started out with a story about chickens.  Red zone and green zone chickens, to be precise.  Apparently there are chickens (we’re outside of the Scrum metaphor here, incidentally) that become star egg layers by physically abusing other chickens to suppress their egg production.  These were termed red zone chickens, while the friendly, cooperative chickens were termed green zone chickens.  Tamm described a few unpleasant solutions (such as trimming the chickens’ beaks) that people had tried to deal with the problem, and ended by describing an experiment in which the red zone chickens were segregated from the green zone chickens, with the result that the green zone chickens’ egg production went up 260% while the red zone chickens saw only their mortality rate go up (http://blog.pgi.com/2015/05/what-can-chickens-teach-us-about-collaboration/).  Tamm then went on to compare this to human endeavors, pointing out the signs that an organization might be in the red zone (low trust/high blame, threats and fear, and risk avoidance, for example) or in the green zone (high trust/low blame, mutual support, and a sense of contribution, for example), while explaining that no organization is going to be wholly in either zone.  He wound up by showing us ways to identify when we, as individuals, are moving into the red zone and how to try to avoid it.  This was easily the most thought-provoking of the three keynotes, and I picked up a copy of Tamm’s book, Radical Collaboration, to further explore these ideas.
The full presentation and slides are available at the Agile Alliance website (www.agilealliance.org).

In the regular sessions, I also attended Lyssa Adkins’ Coaching vs. Mentoring, Jake Calabrese’s Benefiting from Conflict – Building Antifragile Relationships and Teams, and two presentations by Judith Mills: Can You Hear Me Now?  Start Listening Instead and Emotional Intelligence in Leadership.  Alas, it was only in hindsight that I realized I’d read Adkins’ book.  In her presentation, she engaged in actual coaching and mentoring sessions with two people she’d brought along specifically for the purpose.  Unfortunately, the sound in the room was poor and I feel like I lost a fair amount of the nuance of the sessions; the one thing I came away with was that mentoring is like coaching, but with the ability to provide more detailed information to the mentee.

Jake Calabrese turned out to be a dynamic and engaging speaker and I enjoyed his presentation and felt like it was useful, but that was before I went to Tamm’s keynote on collaboration.  I did enjoy one of the exercises that Calabrese did, though. After describing the four major “team toxins”: Stonewalling, Blaming, Defensiveness and Contempt, he had us take off our name badges, write down which toxin we were most prone to on a separate name badge, and go and introduce ourselves to other people in the room using that toxin as our name.  Obviously this is not something you want to do in a room full of people that work together all the time, but it was useful to talk to other people about how they used these “toxins” to react to conflict.  In the end, though, I felt that Calabrese’s toxins boil down to the signs of defensiveness that Tamm described and that Tamm’s proposals for identifying signs of defensiveness in ourselves and trying to correct them are more likely to be useful than Calabrese’s idea of a “Team Alliance.”

The two presentations by Judith Mills that I attended were a mixed bag.  I thought the presentation on listening was excellent, although there’s a certain irony in watching many of the other attendees check their e-mail, browse Facebook, shop, etc., while sitting in a presentation about listening (to be fair, there was probably less of that here than in other presentations).  Mills started by describing the costs of not listening well and then moved into an exercise designed to show how hard listening really is: one person would make three statements and their partner would then repeat the sentences with embellishments (unfortunately, the number of people trying this at once made it difficult to hear, never mind listen; the point was made, though).  We then discussed active listening, the habits and filters that might prevent us from listening well, and how communication involves more than just the words we use.  This was a worthwhile session, and my only disappointment was that we didn’t get to the different types of questions one might use to promote communication and how they can be used.

Mills’ presentation on Emotional Intelligence in Leadership, on the other hand, was not what I anticipated.  I went in expecting a discussion of EI, but the presentation was more about leadership styles and came across as another description of “new” leadership.  It would probably be useful for people who haven’t experienced or heard about anything other than Taylorist scientific management, but I didn’t find anything particularly new or useful to my role in this presentation.

We recently ran into an issue with ASP.NET authentication that I thought I would share.

The Setup 

We’re running an ASP.NET MVC 5 web application that uses Microsoft ASP.NET Identity for authentication and authorization. We allow users to choose the built-in “Remember Me” option, which lets them log in automatically even after closing the browser. We initially set this to expire after 2 weeks, after which they are forced to log in again (unless you’re using sliding expiration: https://msdn.microsoft.com/en-us/library/dn385548(v=vs.113).aspx).

When we first implemented this feature, users would complain that they had to keep logging into the site, despite checking the “Remember Me” option. We first made sure that within the Startup configuration, the CookieAuthenticationOptions.ExpireTimeSpan was explicitly set to 2 weeks (even though that is the default).
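For reference, here is a minimal sketch of that startup configuration. The option names come from the standard OWIN cookie authentication middleware (Microsoft.Owin.Security.Cookies); the values and file layout are illustrative, not our exact production settings:

```csharp
// Startup.Auth.cs (sketch) -- assumes the standard ASP.NET Identity project template
using System;
using Microsoft.AspNet.Identity;
using Microsoft.Owin;
using Microsoft.Owin.Security.Cookies;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
            LoginPath = new PathString("/Account/Login"),

            // Explicitly set the "Remember Me" lifetime,
            // even though 14 days is already the default.
            ExpireTimeSpan = TimeSpan.FromDays(14),

            // With sliding expiration the two-week clock resets on activity;
            // set this to false to force a fresh login after the full window.
            SlidingExpiration = true
        });
    }
}
```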

After some more troubleshooting and, of course, a Stack Overflow hint, we discovered the problem.

What we checked

  1. We checked to make sure the browser cookie that contains the encrypted authentication token had an expiration date (instead of “Session” or “When the browsing session ends” in Chrome).

In our case the cookie contained the expected expiration date:

[Screenshot: the authentication cookie in Chrome, showing the expected expiration date]

  2. We observed that the “Remember Me” automatic login worked throughout the day, but typically stopped working the next day.

So what happened every night that might trash or invalidate our authentication token? We discovered that the Application Pool for our IIS site was getting recycled nightly.

[Screenshot: IIS application pool settings, showing the nightly recycle schedule]

This alone did not seem to be the culprit, since a user should still be able to log in automatically (if they choose to) even if the app pool restarts. Taking it a step further, even if the server restarts, this functionality should still work. What about a load-balanced environment where the user doesn’t even hit the same web server? This should all be transparent, and the “Remember Me” option should just work.

  3. We searched the web for scenarios where a user must log in again after a server restart and came across this article: http://stackoverflow.com/questions/29791804/asp-net-identity-2-relogin-after-deploy

This was the root of the problem: we needed to configure the machineKey settings in order to maintain consistent encryption and decryption keys, even after a server or app pool restart. If the machineKey is not set explicitly, new encryption/decryption keys are generated after each restart, which in turn invalidates all outstanding authentication tokens (and forces users to log in again).

The Solution

It turns out that it is easy to generate random keys in IIS, which will set the values directly in your web.config file:

[Screenshot: the Machine Key feature in IIS Manager, with the option to generate keys]

Now we can store consistent keys in our web.config file and not worry about invalidating a user’s authentication token, even if we restart, redeploy, or recycle the app pool on a regular basis.
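The resulting element looks something like this (a sketch: the algorithm choices shown are common defaults, and the bracketed keys are placeholders for the values IIS generates for you; treat the real keys as secrets and keep them out of public source control):

```xml
<system.web>
  <!-- Every server in the farm must use the same keys for tokens
       issued by one server to validate on another. -->
  <machineKey validation="HMACSHA256"
              decryption="AES"
              validationKey="[validation key generated by IIS]"
              decryptionKey="[decryption key generated by IIS]" />
</system.web>
```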

In a load-balanced web farm environment, this setting is critical: it allows users to bounce between load-balanced servers with the same authentication token. You just have to make sure that the same encryption/decryption keys are used on each web server (configured via the machineKey setting).

If you’re considering using Azure Web Apps, this is transparent since “Azure automatically manages the ASP.NET machineKey for services deployed using IIS”. More details here.

That is a subject for another day.

As an Agile enthusiast, trainer, and coach, I’m pretty passionate about being Agile regardless of the specific framework being followed. In fact, my passion lies in the culture of being Agile rather than dogmatic adherence to a framework.

Following a Framework

A dogmatic approach to a framework may work well if you are a “.com”, start-up, or other application development shop. But for those of us working with large corporations, a dogmatic approach feels impossible. Here are a few of the reasons why:

  • Team members are not co-located
  • Teams are not developing software
  • Team members are not fungible
  • Teams cannot deliver anything fully functional at the end of a sprint (1 – 4 weeks)
  • Teams rely heavily on other teams to deliver components, and struggle with dependencies
  • Organizations have a legacy structure that doesn’t support being Agile
  • Organizations want the teams to be Agile, but they don’t want to change anything else

I fully support adopting a framework, so don’t get me wrong. Organizations should adopt as much of their chosen framework as possible, and specifically note exceptions acknowledging deviations from it. However, before an organization gets concerned about the framework it is trying to follow, I ask it to look at the Agile Manifesto and Twelve Principles. How much cultural change is the organization willing or able to accept in order to adopt a framework? True agility requires both: a change to the new framework and a change in culture.

Cultural Change
When the “gurus” got together at Snowbird, Utah in 2001, they didn’t just endorse frameworks; they created the Agile Manifesto and its Twelve Principles. These two items identify the culture of Agile. To truly be Agile, organizations must work on the required cultural change regardless of the framework.

From the Manifesto:
Does your organization really put Individuals and Interactions over Processes and Tools?

Carte blanche rules for processes and tools don’t always work for everyone within the organization. Some tailoring must be done to be truly effective. Marketing teams may not need the same story-writing and management tools as software teams.

Do they favor Working Software (Value Delivery) over Comprehensive Documentation?

Note: For non-development teams, I prefer to consider what “value” the team is delivering as working software does not apply.

This is not an excuse for skipping documentation altogether.

Does your environment allow for true Customer Collaboration over Contract Negotiation?

Or do you have a hard time figuring out who the recipient of the value you are delivering actually is? A culture of “us versus them” may keep workers from collaborating.

Does the organization Respond to Change over Following a Plan?

Or are we all so worried about scope creep that we have a rigorous change policy? Or has the pendulum swung the other way, and you’re living the “Chicken Little, the sky is falling” scenario all the time?

Acknowledging the Agile Manifesto, and considering how your organization might adapt its culture to it, is one of the first steps toward agility.

Twelve Principles
To be honest, I find that the majority of teams I work with have no understanding of the Twelve Principles of Agile. How can that be? Does leadership really believe a framework will work without other changes? Yes, it is hard. Yes, someone will always be unhappy. Welcome to the real world.  If you fail at adopting Agile and you haven’t tried to change culturally, is it really Agile that doesn’t work? Are teams working to be Agile while the rest of the organization continues with business as usual?

Think of a simple scale from “Somewhat Agree” to “Somewhat Disagree”, and score your organization on each of the Principles.

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. (or just value delivery)

This allows our customers to see steady and ongoing progress towards our end goal.

  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

Changes of scope may have an impact, but we need to quit complaining about change.

  3. Deliver working software (value) frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Reduce risk, increase collaboration, break work down into small pieces, and get feedback after each delivery.

  4. Business people and developers must work together daily throughout the project.

If the team doing the work has no access to the recipients of the value, are you playing the “telephone game” with requirements and feedback?

  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

Agile teams are self-organizing, self-managed, and empowered to do what it takes to deliver quality value at regular intervals. If you’ve truly hired good people who want to do a good job, why micromanage them? Empower your folks, and see what happens. With any luck teams will learn to pick up the stick, and run with it.

  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Skip multiple emails, and meet face-to-face even if it is over the internet!

  7. Working software (quality value) is the primary measure of progress.

If you’re not creating software, deliver small pieces that act as building blocks towards completing your value delivery.

  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

A little extra time here and there is okay. However, if working 50–60 hours a week is the norm, it’s hardly sustainable.

  9. Continuous attention to technical excellence and good design enhances agility.

Going faster doesn’t cut it if quality drops. The focus should always be on delivering quality.

  10. Simplicity – the art of maximizing the amount of work not done – is essential.

Do you remember the 80/20 rule? We have a phrase, “no gold plating”: focus on what really matters to the 80%.

  11. The best architectures, requirements, and designs emerge from self-organizing teams.

Learning new practices, and engaging regularly to ensure the foundation is sound enables teams to take advantage of emerging technology and practices.

  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Without continuous improvement, being Agile slips further and further away from reality.

Summary

Cultural change is key for an organization to truly support the Agile Manifesto and its Twelve Principles.  Where does your organization’s culture fall when measured against the Manifesto’s values and the Twelve Principles? Work to move the pendulum a little at a time as needed. The origins of Agile lie not in black-and-white answers but in collaborating to do what is best in our drive to deliver value. Do that, and you’ll keep customers happy while retaining employees and satisfying shareholders.

CC Pace uses the Organizational Component Model as a framework for ensuring optimal results are achieved in our projects.  The model highlights the inter-dependencies of process, technology, and organizational roles and responsibilities, and how that interaction is critical for optimizing the benefits of change within the enterprise.

Ideally, these three components work in harmony to support achievement of the strategic plan and to respond to the business environment. The critical point of viewing business structure in terms of these three components is the realization that you can’t introduce change in one component without affecting the others. The intended benefits of introducing change into any aspect of the business can, in fact, only be realized by considering the resulting effects on all three components and adjusting each to embrace the change. A brief discussion of the individual pieces of the model is helpful to understanding this paradigm.

A company’s business environment is best understood as those external influences whose very existence requires the company to operate in a certain way to succeed.  Unable to change or control the environment in which it operates, a company must find ways to operate within its parameters. This is typically done in the context of a Strategic Plan, the documented direction and goals of the firm that take into consideration both the corporate vision and the effects of the environment within which it operates.

The operational processes the company employs serve to establish an effective and efficient means to accomplish the operating needs expressed in the plan.  These processes are typically designed by analyzing inputs and desired outputs, and mapping out the process logic and business rules, initially with only limited consideration of supporting technologies or the people in the organization.

With the optimal operational structure documented, the next consideration—the second component of the model—is to develop suitable technological solutions that serve to enable the operating processes.  The objective of this component is typically to automate those processes that rely on a “do” mentality (versus a human “think” mentality).   In some cases, companies can utilize existing technology solutions already in place to accomplish this objective.  In other instances, technology solutions need to be developed and implemented over time to enable achievement of the desired operational results.

The final component of the model is the company’s organizational structure, along with the roles and responsibilities of management and staff.   Bringing the operation to “life” depends on creating the necessary structure and hiring and training the right people to make the organization run effectively.

Successfully effecting operational change of any magnitude requires that companies understand the inter-dependencies intrinsic in this model.  Attempting to introduce meaningful change by revising only one of the components is too often viewed as an easy means to an end, but this approach more often yields a high level of disappointment or outright failure.  Understanding that all the components need to work together to support change, and taking the appropriate steps to address the needs of each, increases an organization’s chances of being successful.

This model provides the framework for a rational and effective approach to change. By understanding the company’s environment and desired strategic direction, the interlocking components of Operations, Technology and Organization can be developed to work effectively together to accomplish the corporate objectives.

 

Grounded in how it’s always been done while continuing to live on razor-thin margins?

Perhaps it is time to take a chapter out of the airline industry’s book of survival and profitability.

Sitting on the tarmac at Denver International, I reflect for a moment and begin to tally the number of changes I’ve endured as a top-tier frequent flyer over the last twenty years (most obviously, gone are the days of blankets, pillows, and peanuts). Subtler, but here to stay, is the airlines’ culture of continuous change, one that carefully balances increasing consumer satisfaction with driving operating costs down.

During the last twenty years, the airline industry has successfully adopted and deployed paperless tickets, online check-in, self-service kiosks, and texting customers to provide flight updates and gate changes. Less successful has been the shift from cloth seats to an unrefined and uncomfortable blend of plastic and leather, while slowly changing seat backs from solid pockets to nets (where my ear buds are continually snagged). So why all the changes? The airline industry is simply leveraging all aspects of technology and process reengineering to reduce operating expenses. It has reduced cleaning times for planes to increase on-time departure rates. It has trimmed call center volumes and customer complaints while successfully driving a self-service customer model, a self-service freedom that fliers like me embrace, as it helps offset other cost-saving features that are often difficult to digest: smaller seats and lower armrests, for instance, or having to fly a small, cramped regional jet on flights over two hours (a plane historically reserved for shorter routes). Not to mention the highly publicized baggage fees. While baggage fees are painful, they are genius if a company wants to force consumers to reengineer their packing habits in order to reduce baggage load time and employee disability claims.

These gains in efficiency have certainly come with some trial and error risk, such as the famed boarding of window seats first and then aisle, but risks were taken and ROI was eventually achieved with the projects that succeeded. Simply said, the airline industry lives in a perpetual state of change. Change is never easy for either employees or customers, however it is ultimately required to ensure the long-term success of any organization. The airline industry has successfully adopted a culture of continual improvement that strikes a careful balance between convenience and inconvenience in order to lower operating expenses while increasing customer satisfaction.

In order to develop a culture of continual process improvement, organizations first have to take a ‘philosophical’ stance that this is the direction they will be moving in. Second, the organization needs to design and implement a ‘strategy’ to achieve and maintain a culture of continuous process improvement. Finally, it’s all about ‘execution’. My dad always told me that nothing is worth thinking, complaining or talking about unless you’re going to do something about it. Carefully plot out and define what improvements in process, technology or tools will be implemented year over year to achieve increased customer satisfaction, lower operating expenses while potentially risking a little customer inconvenience. Now execute!

In today’s new banking reality, a commitment to increasing productivity, service, and profitability while reducing cost will be imperative to a firm reaching its destination of long-term success. Does your organization need to take a hard look at itself and ask whether it is truly paperless, or whether it lets customers boldly self-serve? Is it providing tools or changes in process that encourage consumers to reengineer the way they operate, helping to lower costs and reduce disability claims? What perpetual state is your organization living in? More pointedly, what is your organization going to do to develop and drive a culture of continuous change and improvement? Is it ready to take the bold step and board a non-stop flight to efficiency and ongoing process improvement?