Evolving to Zero Trust Architecture (ZTA) – Part 1

I have a deep interest in cybersecurity, and to keep up with the latest threats, policies, and security practices, I became a member of the ACT-IAC organization and enrolled in its Cybersecurity Community of Interest group. That is where I got the opportunity to volunteer on the Zero Trust Architecture Phase 2 project. Here, I want to share the knowledge I have gained about ZTA strategy and principles. I plan to break this blog into a four-part series that will follow the project’s progress:

  • What is ZTA?
  • Real world deployment scenarios
  • ZTA core capabilities
  • Vendors providing ZTA capabilities

What is ZTA and how did it come into existence?

Traditionally, perimeter-based security has been used to protect the network infrastructure behind a firewall: once a user is authenticated, they can access all the resources behind the firewall, because all network users and devices are assumed to be trustworthy. This assumption has caused security breaches across the globe, with attackers moving laterally and exploiting resources they were never authorized to access. An attacker only had to get through the firewall; from there, they could crawl across any resource available on the network, causing damage ranging from data loss to the financial fallout of ransomware attacks.

Today, an enterprise’s infrastructure spans several networks: cloud-based services, remote users connecting from their own networks on enterprise-owned or personal devices (laptops, mobile devices), and network locations that change depending on where users and devices connect from (public Wi-Fi, internal enterprise networks, and so on). All these complex use cases pushed security away from the perimeter model toward “perimeter-less” security (not confined to one network infrastructure), which led to the evolution of a new concept called “Zero Trust”: never trust, always verify. The ZT approach is primarily based on data protection, but it can be applied across other enterprise assets such as users, devices, applications, and infrastructure.

ZTA is, at its core, an enterprise cybersecurity strategy that prevents data breaches and limits lateral movement within the network infrastructure. It assumes that no agent, internal or external (user, device, application, infrastructure), that wants to access an enterprise resource (on the internal network or externally in the cloud) is trustworthy; every request must be verified before access is granted.

What does Zero Trust mean in a ZTA?

(Image courtesy: NIST SP 800-207 publication)

In the above diagram, a user who is trying to access a resource must go through the PDP/PEP. The PDP/PEP decides whether to grant the request based on enterprise policies (data/access/risk), user identity, device profile, user location, time of request, and any other attributes needed to gain enough confidence. Once access is granted, the user enters an “Implicit Trust Zone,” within which they can access resources as the network infrastructure design allows. The “Implicit Trust Zone” is like the boarding area of an airport: all passengers are considered trustworthy once they have verified themselves through the immigration/security check.

You can still limit access to certain resources in the network using a concept called “Micro-Segmentation.” To continue the analogy: after getting through the security check and reaching the boarding area, passengers are checked again at the boarding gate to make sure they are entering the authorized flight to their destination. That is what “Micro-Segmentation” means: resources are isolated within a segment, and access requests are verified separately, in addition to the PDP/PEP.

Tenets of ZTA: (As per NIST SP 800-207 publication)

  • All resources, whether data or services, should communicate securely regardless of their network location.
  • Each individual access request is verified before access is granted, based on the client’s identity, the device used to make the request, the type of application used, location coordinates, and other behavioral attributes.
  • Access is authenticated and authorized dynamically for each request, and policies are strictly enforced.
  • The enterprise should collect all activity information, log decisions, audit logs, and monitor the network infrastructure to improve its overall security posture.
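
To make the per-request evaluation concrete, here is a minimal Python sketch of how a policy engine might score a single access request from several signals. The attribute names, weights, and threshold are illustrative assumptions of mine, not anything prescribed by NIST SP 800-207.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Illustrative signals a policy engine might weigh; the names are hypothetical.
    user_authenticated: bool
    device_compliant: bool    # e.g., patched and enterprise-managed
    location_trusted: bool    # e.g., not connecting from a flagged network
    behavior_score: float     # 0.0 (anomalous) to 1.0 (normal), from analytics

def evaluate(request: AccessRequest, threshold: float = 0.8) -> bool:
    """Score one request; nothing is trusted by default, every request is scored."""
    if not request.user_authenticated:
        return False  # verified identity is a hard requirement
    score = (0.4 * request.device_compliant
             + 0.3 * request.location_trusted
             + 0.3 * request.behavior_score)
    return score >= threshold

# Each request is verified independently -- an earlier approval grants nothing.
print(evaluate(AccessRequest(True, True, True, 0.9)))   # True: grant access
print(evaluate(AccessRequest(True, False, True, 0.5)))  # False: deny access
```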

What are the logical components of ZTA?

(Image courtesy: NIST SP 800-207 publication)

Policy Engine: Responsible for making and logging the decision to grant or deny access to a request, based on enterprise policy and inputs from external sources (CDM, threat intelligence, etc.).

Policy Administrator: Responsible for establishing or killing the communication path between the subject and the enterprise resource, based on the decision made by the PE. It can generate the authentication tokens the client uses to access the resource. The PA communicates with the PEP via the control plane.

Policy Enforcement Point: Responsible for enabling, monitoring, and terminating the communication between the subject and the enterprise resource. It can operate as a single logical component or be split into two components: a client-side agent and a resource-side gateway that controls access. Beyond the PEP lies the “Implicit Trust Zone” containing the enterprise resources.

Control Plane/Data Plane: The control plane is made up of components that receive and process requests from data plane components wishing to access network resources. The control and data planes are best thought of as zones in the ZTA. Resources, devices, and users within the network can each carry their own control plane component to decide whether data should be routed onward. In this diagram, the split simply illustrates how the control plane governs data plane components: the data plane passes packets around, and the control plane routes them appropriately based on the decisions made.
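
As a rough sketch of how these components divide the work, the toy Python example below models the flow: the PE decides, the PA issues a session token and establishes the path, and the PEP enforces the decision on the data plane. All class names, method names, and the hard-coded policy are hypothetical stand-ins, not an implementation of any real product.

```python
import secrets
from typing import Optional

class PolicyEngine:
    """Decides grant/deny from enterprise policy and external inputs (stubbed here)."""
    def decide(self, subject: str, resource: str) -> bool:
        # A real PE would consult enterprise policy, CDM data, threat intel, etc.
        allowed = {("alice", "payroll-api")}
        return (subject, resource) in allowed

class PolicyAdministrator:
    """Acts on the PE's decision: issues tokens, establishes or kills the path."""
    def __init__(self, engine: PolicyEngine):
        self.engine = engine
    def request_access(self, subject: str, resource: str) -> Optional[str]:
        if self.engine.decide(subject, resource):
            return secrets.token_hex(16)  # short-lived credential handed to the PEP
        return None

class PolicyEnforcementPoint:
    """Sits on the data plane; only opens sessions backed by a valid token."""
    def __init__(self, admin: PolicyAdministrator):
        self.admin = admin
    def connect(self, subject: str, resource: str) -> str:
        token = self.admin.request_access(subject, resource)  # control-plane call
        return f"session {token[:8]}... opened" if token else "connection refused"

pep = PolicyEnforcementPoint(PolicyAdministrator(PolicyEngine()))
print(pep.connect("alice", "payroll-api"))    # granted by policy
print(pep.connect("mallory", "payroll-api"))  # refused: no implicit trust
```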

Note: The dotted line that you see in the image above is the hidden network that is used for communication between the various logical components.

Why should organizations adopt ZTA?

When adopting a ZTA, organizations must weigh all the potential benefits, risks, costs, and ROI. Core ZT outcomes should focus on creating secure networks, securing data in transit and at rest, reducing the impact of breaches, improving compliance and visibility, reducing cybersecurity costs, and improving the organization’s overall security posture.

Lost or stolen data, ransomware attacks, and network- and application-layer breaches cost organizations huge financial losses and damage their market reputation. Recovering from a severe security breach takes an organization a great deal of time and money. ZT adoption can help organizations avoid such breaches, which is key to surviving in today’s world, where state-funded hackers are always ahead of the game.

As with all technology changes, the biggest challenge in demonstrating higher ROI and lower cybersecurity costs is the time needed to deliver the desired results. Organizations should consider the following:

  • Assess what components of ZTA pillars they currently have in their infrastructure. Integration of components with existing tools can reduce the overall investment needed to adopt ZTA.
  • Consider including costs or impacts associated with risk levels and occurrences when doing ROI calculations.
  • ZT adoption should simplify, and not complicate, the overall security strategy to reduce costs.

What are the threats to ZTA?

ZTA can reduce an enterprise’s overall risk exposure, but some threats can still occur in a ZTA environment.

  • A wrongly or mistakenly configured PE or PA could disrupt users trying to access resources. Access requests that previously would have been denied could now get through due to misconfiguration of the PE or PA by a security administrator, letting attackers or subjects reach resources from which they were previously restricted.
  • Denial-of-service attacks on the PA or PEP can disrupt enterprise operations. Every access decision is made by the PA and enforced by the PEP to establish a connection between a device and a resource, so if a DoS attack floods the PA with requests, no subject can gain access because the service is unavailable.
  • Attackers could compromise an active user account through social engineering, phishing, or other techniques and impersonate the subject to access resources. Adaptive MFA may reduce the likelihood of such attacks, but in traditional enterprises, with or without ZTA, an attacker may still reach whatever resources the compromised user can access. Micro-segmentation may protect resources against these attacks by isolating or segmenting them using technologies like NGFWs and SDP.
  • Enterprise network traffic is inspected and analyzed by the policy administrator via PEPs, but non-enterprise-owned assets cannot be monitored passively. Because that traffic is encrypted and deep packet inspection is difficult, a potential attack on the network could originate from non-enterprise-owned devices. ML/AI tools and techniques can help analyze such traffic to find anomalies and remediate them quickly.
  • Vendors and ZT solution providers can cause interoperability issues if they do not follow common standards or protocols when their products interact. If one provider suffers a security issue or disruption, it can halt enterprise operations through service unavailability, and the time it takes to switch to another provider can be very costly. Such disruptions can affect an enterprise’s core business functions in a ZTA environment.

References

[ACT-IAC] American Council for Technology and Industry Advisory Council (2019). Zero Trust Cybersecurity Current Trends. Available at: https://www.actiac.org/zero-trust-cybersecurity-current-trends

[NIST SP 800-207] NIST Special Publication 800-207 (2nd Draft), Zero Trust Architecture. Available at: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf

[NCCoE] NIST Zero Trust Architecture project. Available at: https://www.nccoe.nist.gov/projects/building-blocks/zero-trust-architecture

What is App Modernization?

Legacy application modernization is the process of updating existing, aging applications with modern architecture to enhance their features and capabilities. By modernizing your legacy applications, you can add the latest functionality to better align with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair, doubly so when the software becomes so outdated that it may no longer be compatible with modern systems.

A Quick Look at a Sample Legacy Monolithic Application

For this article, consider a Legacy Monolithic Application that is roughly a decade and a half old, as depicted in the following diagram.

(Diagram: a traditional n-tier legacy monolithic application)

This diagram depicts a traditional n-tier architecture that was very common over the past 20 years or so. There are several shortcomings to this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system rather than to smaller subsystems, increasing hardware costs.

Why Modernize?

We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.

The advantages are:

  1. Reduce cost: computing power can be redirected to only the subsystems that need it, allowing for more granular scalability.
  2. Avoid vendor lock-in: each subsystem can be built with the technology for which it is best suited.
  3. Reduce operational overhead: monolithic systems written in legacy technologies tend to stay that way due to the increased cost of change, and they require resources with a specific skill set.
  4. De-couple subsystems: strong coupling makes it difficult to optimize the infrastructure budget, while de-coupled subsystems can be upgraded individually.

Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.

Legacy App Modernization Strategies

Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which can require complete re-platforming. A lift-and-shift migration adds no business value if you move legacy applications just for the sake of modernization. Instead, modernization is about taking the bones, or DNA, of the original software and reworking it to better represent current business needs.

Legacy Monolithic App Modernization Approaches

Having examined the nightmarish aspects of continuing to maintain Legacy Monolithic Applications, this article presents two Application Modernization Strategies. Both are explained at length below to give you a basic idea of each, so you can pick whichever is feasible within the constraints you might have.

  • Migrating to Microservices Architecture
  • Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake

Microservices Architecture

In this section, we look at how re-architecting, re-factoring, and re-coding along the microservices paradigm help avoid much of the overhead of maintaining a legacy monolithic system. The following diagram will help you better understand Microservices Architecture – a leap forward from legacy monolithic architecture.

(Diagram: microservices architecture with an API Gateway and Discovery Client)

At a quick glance at the diagram above, you can see a big central piece called the API Gateway with a Discovery Client. This is comparable to a Façade in a Monolithic Application. The API Gateway is essentially the entry point for several microservices, which are comparable to modules in a Monolithic Application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, since it reaches a single database set via the Database Microservice shown in the diagram. In other words, the API Gateway/Orchestrator sequences calls to the Database Microservice according to the business logic, because individual microservices have no direct access to the database. Notice also that this architecture supports various client systems, such as Mobile Apps, Web Apps, IoT Apps, MQTT Apps, et al.

Although this architecture gives us an edge in using different technologies in different microservices, it leaves us with a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and the object/data model, so it must be re-deployed and re-tested after every microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step toward building heterogeneous systems that work in tandem to provide a complete solution, a goal that would otherwise be impossible with a Monolithic Legacy Application Architecture.
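
To make the orchestration pattern concrete, here is a toy Python sketch in which an in-memory registry stands in for real service discovery and HTTP calls. The service names and record shapes are invented for illustration; the point is how the gateway sequences the calls and funnels persistence through a single database service.

```python
# Hypothetical in-memory stand-ins for service discovery and the services
# themselves; a real system would use a discovery service and HTTP/gRPC calls.
REGISTRY = {
    "orders":   lambda req: {"order_id": req["order_id"], "item": "widget"},
    "database": lambda req: {"saved": True, "record": req},
}

def discover(name):
    """Discovery Client: resolve a service name to a callable endpoint."""
    return REGISTRY[name]

def gateway_place_order(request):
    """Gateway-as-orchestrator: sequences the calls per the business logic.
    Note the coupling -- this function must change whenever the flow changes."""
    order = discover("orders")(request)
    # Individual microservices have no direct database access, so the gateway
    # routes persistence through the dedicated database microservice.
    receipt = discover("database")(order)
    return receipt

print(gateway_place_order({"order_id": 42}))
```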

Microservices Architecture with Realtime Data Movement to Data Lake

In this section, we look at how the full range of strategies – re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems – applied along the microservices paradigm helps avoid the overhead of maintaining a legacy monolithic system. The following diagram shows a complete, more advanced Microservices Architecture.

(Diagram: microservices architecture with realtime data movement to a data lake)

At the outset, most of this diagram looks like the previous approach, but it adheres more closely to the true microservices paradigm. Here, each microservice is independent and has its own micro-database of whatever flavor best fits the business need, avoiding both the dependency on a dedicated database microservice and the overloading of the API Gateway as an orchestrator holding business logic. The advantage of this approach is that each microservice can have its own CI/CD release pipeline. In other words, one part of the application can be released on its own, with TDD/ATDD properly implemented, avoiding the testing, deployment, and release-management costs of a coordinated release. This kind of architecture does not limit the overall solution to any particular technology stack; it encourages quick solutions built on varied stacks, and it gives you the flexibility to scale resources for heavily used microservices when necessary.

Moreover, this architecture encourages a Realtime Engine (which can itself be a microservice) that reads data from the various databases asynchronously, applies data aggregation and de-duplication algorithms, and sends pristine data to the data lake. Advanced applications can then use the data in the lake for machine learning and data analytics to serve business needs.
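
As a sketch of what such a Realtime Engine might do, the Python snippet below pulls records from two per-service stores, de-dupes them on a key, and lands the merged result in a stand-in “data lake”. The record shapes, the de-dupe key, and the first-record-wins policy are all assumptions made for illustration.

```python
from itertools import chain

# Per-service micro-databases, stubbed as lists of records.
orders_db  = [{"id": 1, "total": 30}, {"id": 2, "total": 45}]
billing_db = [{"id": 2, "total": 45}, {"id": 3, "total": 12}]  # id 2 duplicated

def dedupe(records, key="id"):
    """Keep the first record seen for each key -- a deliberately simple policy."""
    seen, clean = set(), []
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            clean.append(rec)
    return clean

data_lake = []  # stand-in for S3/ADLS/HDFS or similar
data_lake.extend(dedupe(chain(orders_db, billing_db)))  # read async in a real engine
print(data_lake)                            # three unique records
print(sum(r["total"] for r in data_lake))   # aggregate ready for analytics: 87
```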


Note: This article was not written with any particular cloud flavor in mind. This is a general App Modernization microservices architecture that can run anywhere: on-prem, OpenShift (private cloud), Azure, Google Cloud, or AWS.

I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.

While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.

I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true of data warehouse projects too, where a client’s database team wanted the entire schema designed up-front – even before the requirements for the reports the data warehouse would be supporting were identified. What appeared to be driving the design was not the business and its report priorities, but the database team and its desire to have a complete data model.

While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.

The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each, and, as mentioned, star schemas. The book stresses the 7 Ws to consider when designing a data warehouse – who, what, where, when, why, how, and how many. These are the questions to ask when talking to the business to come up with an appropriate design: “how many” applies to the fact tables, while the other questions apply to dimension table design.
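
To make the fact/dimension split concrete, here is a minimal Python/sqlite3 sketch of one small star: a fact table carrying the “how many” measures, with dimension tables answering the who, what, and when questions. The table and column names are invented for illustration, not taken from the book.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimensions answer who/what/when; the fact table answers "how many".
cur.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date     (date_id     INTEGER PRIMARY KEY, day  TEXT);
CREATE TABLE fact_sales (
    customer_id INTEGER REFERENCES dim_customer,
    product_id  INTEGER REFERENCES dim_product,
    date_id     INTEGER REFERENCES dim_date,
    quantity    INTEGER,  -- the "how many" measure
    amount      REAL
);
""")
cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme Corp')")
cur.execute("INSERT INTO dim_product  VALUES (1, 'Widget')")
cur.execute("INSERT INTO dim_date     VALUES (1, '2015-06-01')")
cur.execute("INSERT INTO fact_sales   VALUES (1, 1, 1, 5, 49.95)")

# A typical star-join query: measures from the fact table, context from dimensions.
cur.execute("""SELECT c.name, p.name, SUM(f.quantity)
               FROM fact_sales f
               JOIN dim_customer c USING (customer_id)
               JOIN dim_product  p USING (product_id)
               GROUP BY c.name, p.name""")
print(cur.fetchall())  # [('Acme Corp', 'Widget', 5)]
```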

Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.

I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.

At CC Pace, our Agile practitioners are sometimes asked whether Scrum is useful for activities other than software development. The answer is a definite yes.

Elizabeth (“Elle”) Gan is Director of Portfolio Management at a client of ours. She writes the blog My Scrummy Life. Recently, she wrote a fascinating post on how she used Scrum to plan her upcoming wedding.

How have you used Scrum outside its “natural” setting in software development?

You are embarking on a new software development project. Presumably, if it’s a Scrum project, a team is assembled, space and workstations for the team room are configured and the first sprint is right around the corner. The time has come for the initial gathering of project team members and stakeholders – the Project Kickoff. The Project Kickoff meeting can range from an hour to several days, and provides the opportunity for the project team, and any associated stakeholders, to come together and officially begin (i.e., ‘kick off’) a project. The ultimate goal is for everyone to leave this meeting on the same page, with a clear understanding of the project’s structure and goals.

One of the more common kickoff meeting agenda items for Scrum teams today is establishing the product vision, or the product vision statement. Many definitions and examples of product vision statements are available with a simple internet search; a solid summary of the product vision can be found in a 2009 member article written by Roman Pichler for the Scrum Alliance:

“The product vision paints a picture of the future that draws people in. It describes who the customers are, what customers need, and how these needs will be met. It captures the essence of the product – the critical information we must know to develop and launch a winning product.”

Here is where I have to come clean – I personally never thought much about the importance or impact of product vision statements until recently. It seemed to me that on many development projects, the product vision would simply exist as a feel-good placeholder or as a feeble attempt to energize the team: “We’re going to build this application and SAVE THE WORLD!”

I felt that the product vision statement was a guise for what seemed like the customary objective of a project which – in an admittedly negative opinion – was to provide a solid return on someone’s investment. Bluntly stated: “If we are successful, someone I’ve never met is going to make a lot of money on this product.” I observed that as a project ensued and the team became preoccupied with day to day tasks, reality would eventually kick in. At a certain point, the vision – which we were so excited about several weeks ago – usually becomes an afterthought.

Eventually, a point is reached several sprints into a project where the team’s project vision statement is scribbled on a large post-it sheet, taped to the wall in the team room, collecting dust – never to be spoken of again. In the past, my observation was that by the time our project wrapped up, we wouldn’t always take the time to measure project success against our original product vision statement. (In fact, many team members were probably already working towards achieving a new product vision on a completely new project.) We didn’t always ask the questions: Did we accomplish our mission? Did we meet all of our objectives? If not, why? (Some of these topics would assuredly be discussed in a Project Retrospective-type meeting, but in today’s reality, that isn’t always the case.)

Fortunately, times have changed. Several recent and personal discoveries (through complete happenstance) have improved my outlook; you could now say that I have a ‘newfound respect’ for the product vision statement. This inspiration is a result of successfully delivering on several development projects in the education research field. CC Pace has had the fortunate opportunity to partner with the National Student Clearinghouse (NSC) on several of their software development initiatives since 2010. Our first project supported NSC’s Research Center, whose mission is defined as ‘providing educators and policymakers with accurate longitudinal data on student outcomes to enable informed decision making.’

In June of 2010, CC Pace began the journey with NSC to redesign the StudentTracker for High Schools application (STHS 2.0), which contributes to NSC’s aforementioned mission as ‘a unique program designed to help high schools and districts more accurately gauge the college success of their graduates’. We began the project with an informative and efficient Kickoff meeting and established our product vision statement. Truth be told, I didn’t put much thought into it. After all, I was working with a new client on a newly-formed team, and Sprint #1 was approaching fast.

With all of that in mind, the following is a high-level summary (i.e., not verbatim and with some information added for clarification) of our product vision statement for the StudentTracker for High Schools 2.0 project from June, 2010:

NSC’s business goal is to leverage its unique assets and capabilities to provide the secondary-postsecondary longitudinal information required to inform the secondary education system in its efforts to increase rates of college readiness.

Redesigning the STHS 2.0 application will enhance the capacity and scalability to provide integrated secondary-postsecondary education information – in a timely and efficient fashion – to the maximum number of secondary customers possible.

The objectives to meet this business goal are as follows:

  • Enhance STHS reports to include more insightful and actionable data resulting in more valuable and accurate information available to secondary schools and districts
  • Provide for a more efficient file management process, reducing the turnaround time for data collection, processing and report distribution
  • Increase data collection and storage capacity, allowing for more robust reports
  • Improve NSC matching algorithms to enhance data quality and add reliability
  • Design, configure and implement a robust set of longitudinal reports stratified by school type, demographics, gender, academics and degree

So, there it was. Our product vision statement was posted on the wall for all to see. As anyone starting out on a new software development project can attest, I still had some questions: Where is all of this leading us? Will we succeed? Will this application – which we are completely redesigning from scratch – launch successfully a year from now?

Undoubtedly these were realistic questions and concerns. At the same time, however, I began to realize that I was working as a member of a team on a development project with a realistic, measurable and highly-motivational product vision statement. I thought at the time that if we truly achieve our vision and successfully implement STHS 2.0 in the next year, our product will have a profound impact on educational research and potentially improve the college success rates of millions of high school and college students for years to come.

Fast-forward to 2015 – I can proudly say that I have seen several firsthand accounts demonstrating that we did indeed achieve the product vision that we established several years ago. The intriguing element of this discovery is that I never personally set out to measure whether or not we achieved our vision as a result of successfully delivering STHS 2.0. After all, this was over five years ago and I have worked on many different projects in that time. Instead, I discovered the answer to that question completely by chance, and on more than one occasion. Five years later, I came to the realization that our team’s product vision had indeed become a reality, and it was a really great feeling.

Check back for a follow-up post for the recent chain of events validating that our project’s product vision statement truly became a reality – more than five years after it was established.

In my last post, I talked about the interesting Agile 2015 sessions on team building that I’d attended. This time we’ll take a look at some sessions on DevOps and Craftsmanship.

On the DevOps side, Seth Vargo’s The 10 Myths of DevOps was by far the most interesting and useful presentation I attended. Vargo’s contention is that the DevOps concept has been over-hyped (like so many other things) and that people will soon become disenchanted with it (the graphic below shows where Vargo believes DevOps stands on the Gartner Hype Cycle right now). I might quibble about whether we’ve passed the cusp of inflated expectations yet or not, but this seems just about right to me. It’s only recently that I’ve heard a lot of chatter about DevOps and seen more and more offerings, and that’s probably a good indication that people are trying to take advantage of those inflated expectations. Vargo also says that many organizations either mistake the DevOps concept for just plain operations or use the title to try to hire SysAdmins under the trendier label of DevOps. Vargo didn’t talk to it, but I’d also guess that a lot of individuals claim to be experienced in DevOps when they were SysAdmins who didn’t try to collaborate with other groups in their organizations.

The other really interesting myth in Vargo’s presentation was the idea that DevOps is just between engineers and operators. Although that’s certainly one place to start, Vargo’s contention is that DevOps should be “unilaterally applied across the organization.” This was characteristic of everything in Vargo’s presentation: just good common sense and collaboration.

Abigail Bangser was also focused on common sense and collaboration in Team Practices Applied to How We Deploy, Not Just What, but from a narrower perspective. Her pain point seems to have been technical stories that weren’t well defined and were treated differently than business stories. Her prescription was to extend the Three Amigos practice to technical stories and generally treat technical stories like any other story. This was all fine, but I found myself wondering why that kind of collaboration wasn’t happening anyway. It seems like common sense to do one’s best to understand a story and deliver the best value regardless of whether the story is a business or a technical one. Alas, Bangser didn’t go into how they’d gotten into that state to start with.

On the craftsmanship side, Brian Randell’s Science of Technical Debt helped us come to a reasonably concise definition of technical debt and used Martin Fowler’s Technical Debt Quadrant to distinguish between different types of technical debt: prudent vs. reckless, and deliberate vs. inadvertent. He also spent a fair amount of time demonstrating SonarQube and explaining how it has been integrated into the .NET ecosystem. SonarQube seemed fairly similar to NDepend, which I’ve used for some years now, with one really useful addition: both NDepend and SonarQube evaluate your codebase against various configurable design criteria, but SonarQube also provides an estimated time to fix all the issues it found in your codebase. Although it feels a little gimmicky, I think that estimate would be more useful than a raw count of failed rules in explaining to Product Owners the costs that they are incurring.

I also attended two divergent presentations on improving our quality as developers. Carlos Sirias presented Growing a Craftsman through Innovation & Apprenticeship. Obviously, Sirias advocates for an apprenticeship model, a la blacksmiths and cobblers, to help improve developer quality. The way I remember the presentation, Sirias’ company, Pernix, essentially hired people specifically as apprentices and assigned them to their “lab” projects, which are done at low-cost for startups and small entrepreneurs. The apprenticeship aspect came from their senior people devoting 20% of their time to the lab projects. I’m now somewhat perplexed, though, because the Pernix website says that “Pernix apprentices learn from others; they don’t work on projects” and the online PDF of the presentation doesn’t have any text in it, so I can’t double check my notes. Perhaps the website is just saying that the apprentices don’t work as consultants on the full-price projects, and I do remember Sirias saying that he didn’t feel good about charging clients for the apprentices. On the other hand, I can’t imagine that the “lab” projects, which are free for NGOs and can be financed by micro-equity or actual money, don’t get cross-subsidised by the normal projects. I feel like just making sure that junior people are always pairing and get a fair chance to pair with people they can learn from, which isn’t always “senior” people, is a better apprenticeship model than the one that Sirias presented.

The final craftsmanship presentation I attended, Steve Ropa’s Agile Craftsmanship and Technical Excellence, How to Get There, was both the most exciting and the most challenging presentation for me. Ropa recommends “micro-certifications,” which he likens to Boy Scout merit badges, to help people improve their technical abilities. It’s challenging to me for two reasons. First, I’m just not a great believer in credentialism, because I don’t find credentials really tell me anything when I’m trying to evaluate a person’s skills. What Ropa said about using internally controlled micro-certifications to show actual competence in various skill areas makes a lot of sense, though, since you know exactly what it takes to get one. That brings me to the second challenge: the combination of defining a decent set of micro-certifications, including what it takes to get each certification, and a fair way of administering such a system. For the most part, the first part of this concern just takes work. There are some obvious areas to start with – TDD, refactoring, continuous integration, C#/Java/Python skills, and so on – that can be evaluated fairly objectively. After that, there are some softer areas for which certifications would be more difficult to define. How, for example, do you grade skills in keeping a code base continually releasable? It seems like an all-or-nothing kind of thing. And how does one objectively certify a person’s ability to take baby steps or pair program?

Administering such a program also presents me with a real challenge: even given a full set of objective criteria for each micro-certification, I worry that the certifications could become diluted through cronyism or that the people doing the evaluations wouldn’t be truly competent to do so. Perhaps this is just me being overly pessimistic, but any organization has some amount of favoritism and I suspect that the sort of organizations that would benefit most from micro-certifications are the ones where that kind of behavior has already done the most damage. On the other hand, I’ve never been a boy scout and these concerns may just reflect my lack of experience with such things. For all that, the concept of micro-certifications seems like one worth pursuing and I’ll be giving more thought on how to successfully implement such a system over the coming months.

Notes from Agile 2015 Washington, D.C. 

Having lived in the Washington, DC area for over 25 years, I presumed that the audience at Agile 2015 Washington, DC would consist primarily of people working in the public sector, given our geographical proximity to a long list of federal agencies. It was not unrealistic to expect the conference speakers to tailor their presentations and discussions to that kind of audience. The audience actually turned out to be quite diverse, rendering my assumptions inaccurate. However, I could not help but feel somewhat validated after listening to the first keynote speaker. Indeed, the opening presentation by Luke Hohmann, entitled “Awesome Super Problems”, focused on tackling “wicked problems” such as budget deficits and environmental challenges. Wicked problems, as described by Mr. Hohmann, are not technical in nature and cannot be solved by small Agile teams of 6-8 people. They deal with strategic decision-making that may result in long-term consequences, intended as well as unintended. They impact millions of people, and they require broad consent as well as governance. Hailing from San Jose, California, Mr. Hohmann discussed how implementing Agile methodologies helped the city tackle wicked problems such as a budget deficit of 100 million dollars.

Planning and Executing
Solving major problems such as budget shortfalls generally requires a great deal of collaboration between stakeholders with competing priorities. Mr. Hohmann stressed that the approach should favor collaboration over competition – in Agile terminology, “customer collaboration over contract negotiation”. Easier said than done? Maybe… To help facilitate this collaboration, Mr. Hohmann assembled a conference of public servants such as city planners, police and fire chiefs, and other community leaders. Several discovery sessions let people get answers to questions like how much money would be saved if the fire department removed one firefighter from its teams, and what impact on safety that might entail. The group was broken down into small tables of no more than eight people, each with a facilitator provided by Mr. Hohmann. The group was presented with the list of major budget items and then engaged in budget games in which participants bid to get their high-priority budget items included in the next budget and negotiated cuts by trading these items. Afterwards the players held a retrospective and offered feedback.

Retrospective and Outcome
Feedback from the participants showed that competition was replaced by collaboration. Participants tended not to get into heated arguments because the games inherently encouraged compromise. Small groups helped cut down on distractions and side conversations. The participants also reported that the game was fair, since every player possessed equal bidding power. Interestingly, the final outcome showed surprising consensus over the budget items: the majority of participants ended up prioritizing their items very similarly once the competitive aspect was removed. The “democratic” nature of the collaborative approach helped eliminate the animosity and partisanship that are not uncommon, as has been witnessed in U.S. federal budget negotiations. The experiment yielded the desired outcome of tackling the imbalanced budget and was touted as a success, attracting the attention of more San Jose residents.

Scaling
To tackle other public issues such as school overcrowding and water shortages, Mr. Hohmann attempted to repeat the process, but the number of participants had increased to the point where a large conference hall would have been needed. To avoid straining the budget by renting a giant hall, Mr. Hohmann and the local government set up an online forum that accepted a virtually unlimited number of participants, still assigning them to groups of about eight people while increasing the number of groups. The participants played other games, such as Prune the Product Tree, which involves prioritizing the list of problems the public wants to tackle. The feedback was even more positive: the majority of participants actually preferred the online setting, reporting even fewer distractions. The data was easier to collect and aggregate, giving participants an almost immediate view of how the game was progressing and how the priorities were moving.

Conclusion
One main takeaway I got from Mr. Hohmann’s presentation is one of encouragement to be creative. Mr. Hohmann stressed the importance of focusing on what he described as “common ground for action”. The idea is to focus on generating a list or backlog of actionable items. The process or exercise to get to the desired state can vary, and Agile methodologies can help folks get there, even when tackling wicked problems.

External Links:
http://www.innovationgames.com/budget-games-guide/

http://www.innovationgames.com/prune-the-product-tree/

Further reading:
http://conteneo.co/san-jose-residents-play-4th-annual-budget-games/

As a developer, I tend to think of YAGNI in terms of code. It’s a good way to help avoid building in speculative generality and over-designing code. While I do sometimes think about YAGNI at the feature level, I tend to view the decision to implement a feature as a business decision that should be made by the customer rather than by the developers. That’s never stopped me from giving an opinion, though! Until recently, my argument against implementing a feature has generally been a simple one about opportunity cost. Happily, Martin Fowler’s recent post on YAGNI (http://martinfowler.com/bliki/Yagni.html) adds greatly to my understanding of all the costs that ignoring YAGNI can add to a project. It’s well worth a read whether you’re a developer, Product Owner, Scrum Master, or fill any other role.

In previous installments in this series, I’ve talked about what Product Owners and development team members can do to ensure iteration closure. By iteration closure, I mean that the system is functioning at the end of each iteration, available for the Product Owner to test and offer feedback. It may not have complete feature sets, but what feature sets are present do function and can be tested on the actual system: no “prototypes”, no “mock-ups”, just actual functioning albeit perhaps limited code. I call this approach fully functional but not necessarily fully featured.

In this installment, I’ll take a look at the Scrum Master or Project Manager and see what they can do to ensure full functionality if not full feature sets at the end of each iteration. I’ll start out by repeating the same caveat I gave at the start of the Product Owner installment: I’m a developer, so this is going to be a developer-focused look at how the Scrum Master can assist. There’s a lot more to being a Scrum Master, and a class goes a long way to giving you full insight into the responsibilities of the role.

My personal experience is that the most important thing you as a Scrum Master can do is to watch and listen. You need to see and experience the dynamics of the team.

At Iteration Planning Meetings (IPMs), are Product Owners being intransigent about priorities or functional decomposition? Are developers resisting incremental functional delivery, wanting to complete technical infrastructure tasks first? These are the two most serious obstacles to iteration closure. Be prepared to intervene and discuss why it’s in everyone’s interest to achieve this iteration closure.

At the daily stand-up meetings, ensure that every team member speaks (that includes you!), and that they only answer the three canonical questions:

  1. What did I do since the last stand-up?
  2. What will I do today?
  3. What is in my way?

Don’t allow long-winded discussions, especially technical “solution” discussions. People will tune out.

You’re listening for:

  • Someone who always answers (1) and (2) with the same tasks every day and yet says they have no obstacles
  • Whatever people say in response to (3)

Your task immediately after the stand-up is to speak with team members who have obstacles and find out what you can do to clear the obstacles. Then address any team members who’re always doing the same task every day and find out why they’re stuck. Are they inexperienced and unwilling to ask for help? Are they not committed to the project mission and need to be redeployed?

Guard against an us-versus-them mentality on teams, where the developers see Product Owners or infrastructure teams as “the enemy” or at least an obstacle, and vice versa. These antagonistic relationships come from lack of trust, and lack of trust comes from lack of results. Again, actual working deliverables at the close of each iteration go a long way toward building trust. Look for intransigence from either the developer team or the Product Owner: both should be willing to speak freely and frankly with each other about how much work can be done in an iteration and what constitutes a Minimum Viable Product for this iteration. It has to be a negotiation; try to facilitate that negotiation.

Know your team as human beings – after all, that is what they are. Learn to empathize with them. How do individuals behave when they’re happy or when they’re frustrated? What does it take to keep Jim motivated? It’s probably not the same things as Bill or Sally. I’ve heard people advocate the use of Myers-Briggs personality tests or similar to gain this understanding. I disagree. People are more complex than four or five letters or numbers at one moment in time. I may be an introvert today and an extrovert tomorrow, depending on how my job is going. Spend time with people to really know them, and don’t approach them as test subjects or lab rats. Approach them as human beings, the complex, satisfying, irritating, and ultimately rewarding organisms that we actually are.

Occasionally, when I speak at technical or project management meet-ups, an audience member will ask, “I’m a Scrum Manager and I can’t get the Product Owner to attend the IPM; what should I do?” or, “My CIO comes in and tasks my developer team directly without going through the IPM; how do I handle this?” I try to give them hints, but the answer I always give is, “Agile will only expose your problems; it won’t solve them.” In the end, you have to fall back on your leadership and management skills to effect the kind of change that’s necessary. There’s nothing in Scrum or XP or whatever to help you here. Like any other process or tool, just implementing something won’t make the sun come out. You still have to be a leader and a manager – that’s not going away anytime soon.

Before I close, let me point out one thing I haven’t listed as something a Scrum Master ought to be adept at: administration. I see projects where the Scrum Master thinks their primary role is to maintain the backlog, measure velocity, track completion, make sure people are updating their Jira entries, and so on. I’m not saying this isn’t important – it is. It’s very important. But if you’re doing this stuff to the exclusion of the other stuff I talked about up there, you’re kind of missing the point. Those administrative tasks give you data. You need to act on the data, or what’s the point? Velocity is decreasing. OK…what are you and the team going to do about it? That’s the important part of your role.

When we at CC Pace first started doing Agile XP projects back in 2000-2001, we had a role on each project called a Tracker. This person would be part time on the project and would do all the data collection and presentation tasks. I’d like to see this role return on more Agile projects today, because it makes it clear that that’s not the function of the Scrum Master. Your job is to lead the team to a successful delivery, whatever that takes.

So here we are at the end of my series. If there’s one mantra I want you to take away from this entire series, it’s Keep the system fully functional even if not fully featured. Full functionality – the ability of the system to offer its implemented feature set to the Product Owner for feedback – should always come before full features – the completeness of the features and the infrastructure. Of course, you must implement the complete feature set and the full infrastructure – but evolve towards it. Don’t take an approach that requires that the system be complete to be even minimally useful.

If you’re a Product Owner:

  • Understand the value proposition not just of the entire system, but of each of its components and subsets.
  • Be prepared to see, use, and test subsets, or subsets of subsets of subsets, of the total feature set. Never say, “Call me only when the system is complete.” I guarantee this: your phone will never ring.

If you’re a developer:

  • Adopt Agile Engineering techniques such as TDD, CI, CD, and so on. Don’t just go through the motions. Become really proficient in them, and understand how they enable everything else in Agile methodologies.
  • Use these techniques to embrace change, and understand that good design and good architecture demand encapsulation and abstraction. Keeping the subsystems isolated so that the system is functional even if not complete is not just good for business. It’s good engineering. A car’s engine can (and does) run even before it’s installed into the car. Just don’t expect it to take you to the grocery store.
  • Be an active team member. Contribute to the success of the mission. Don’t just take orders.

If you’re a Scrum Master:

  • Watch and listen. Develop your sense of empathy so you “plug in” to the team’s dynamics and understand your team.
  • Keep the team focused on the mission.
  • If you want to sweat the details of metrics and data, fine – but your real job is to act on the data, not to collect it. If you aren’t good at those collection details, delegate them to a tracking role.

I hope you’ve enjoyed this series. Feel free to comment and to connect with me and with CC Pace through LinkedIn. Please let me hear how you’ve managed when you were on a supposedly Agile project and realized that the sound of rushing water you heard was the project turning into a waterfall.

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. A career .256 batting average and 14 home runs over 12 MLB seasons are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo has done something that only a handful of players have ever done in the long history of Major League Baseball – he’s played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player, and his value obviously wasn’t driven by his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the ’80s, because the unique quality he brought to his team was what baseball lingo calls “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BAs in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed six to nine months before the documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving onto another project entirely, I was told several months down the road, “’Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that many times I never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.

From a communications perspective, BAs collaborated up-front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager informed our team during a stressful phase, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it amazes me that this directive seemed entirely normal at the time.

Communication with testers seemed even rarer. By the very definition of a Waterfall project, I had already passed my knowledge on to the testers; it was now their responsibility, and I was more or less out of the loop. By the time the specific requirements were being tested, I was already off on an entirely new project.

In my personal opinion, the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and days with little or no collaboration with other team members outside of standard meetings became a day-to-day, week-to-week, month-to-month grind with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences), and I have found that the role of the Business Analyst is significantly enhanced on teams practicing Agile methodologies, and more specifically Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – Agile teams have enhanced the role of the Business Analyst with opportunities that never seemed available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player” – my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span functional areas of expertise (e.g., Project Management, Testing) as well as the entire lifetime of a software development project, from kickoff to implementation. In this sense, Agile BAs are not only more valuable to their teams; they are more valuable for longer – basically, the entire lifespan of a project. I have seen that Agile BAs can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

In a follow-up blog entry, we’ll elaborate on how Agile BAs can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

Occasionally, as part of our strategic advisory service, I work with clients who don’t want custom application delivery from us, but rather want me to provide advice to their own Agile development teams. Many of them don’t need a lot of help, but perhaps the single issue I observe most often is that iteration (or sprint, in Scrum terminology) planning meetings (IPMs) don’t go well. Rather than being an interactive exchange of ideas and a negotiation between developers and product owners for the next iteration, I observe that the IPMs become 2-week status meetings that don’t accomplish much. The developer team doesn’t have much or anything to demo, there’s little feedback from the product owner, and everyone just routinely agrees to meet in two weeks to go through the same thing again.

One of the main reasons for these lackluster IPMs is the failure to close tasks at iteration boundaries. If the developer team can’t close tasks at iteration boundaries, then the product can’t be usefully demoed, which means the product owner can’t offer any feedback. This isn’t any form of Agile – it’s just waterfall with 2-week status meetings.

Failure to close tasks at iteration boundaries has other implications too, because what it’s telling you is that stories are too big, and stories that are too big have big consequences.

First, big stories are hard to estimate accurately. Think of estimates as sort of like weather forecasts: anything over 2-3 days is probably too inaccurate to use for planning. The smaller the story, the more accurate will be the estimate.

Second, big stories make it harder to change business priorities. That may seem like a non sequitur, but when developers are working on any story, the system is in an unstable, non-functioning state. To change direction, the developers have to bring the system to a stable state where it can be taken in a different direction. Those stable states are achieved when stories are completed and the system is ready to demo.

An analogy I like to use is to think of the system as a big truck proceeding down a controlled-access highway, like an American interstate. You can exit only at certain points. If you’re heading north and you realize you want to head east instead, you have to wait for the next exit to make that direction change – you can’t just immediately turn east and start driving through the underbrush. The farther apart the exits are, the farther you’re going to have to go out of your way before you can adjust. Think of each exit as being the close of a story. The closer together the exits (the smaller the stories), the sooner you’ll reach an exit (a system steady state) where you can change direction.

In this series of blog posts, I’m going to look at what it takes to ensure task closure at iteration boundaries. Each post will focus on a different team role, and how that role can help ensure that iterations end in an actual delivery of working software that can be demoed in an IPM. I’ll write about what product owners, developers, and project managers (or Scrum masters) can do to reduce story size, ensure product stability and functionality at iteration boundaries, and keep the system always ready to quickly change directions – the very definition of agility.

Watch this space.