You Might Be A …?


Surely, we’ve all heard about the 16 Myers-Briggs personality types found in the workplace, and while they’re useful, they’re not exactly the stuff of a good laugh. We at CC Pace are all about working hard and having a bit of fun while doing it, so we thought we’d take a crack at pinpointing some of the traits that identify a few of the more common roles in the corporate world. Let’s see how many you can relate to:

If your average Coke consumption is 5 or more a day … you might be a Developer

If you think knowing binary takes someone from a “6” to a “10”… you might be a Software Engineer

If you have a retrospective for your daughter’s Kindergarten graduation … you might be a ScrumMaster

If you missed a “;” and couldn’t sleep for 2 days … you might be a Java Developer

If you can keep smiling on the outside while crying on the inside … you might be a Receptionist

If you camp in a random forest, get bitten by a Python because you couldn’t C# … you might be a Data Scientist

If bagels and donuts day at work seems like an Instagramable moment … you might work in Marketing

If the phrase “Hello from the other side, must have called a thousand times” means more to you than a line in an Adele song … you might be a Recruiter

If a large part of your day consists of fielding questions like a White House Press Secretary … you might be a COO

If your future Disney vacation is visually planned on your kitchen wall … you might be an Agile Coach

If the definition of a good day includes eliminating the idea that things work … you might be a Test Engineer

If Mad Max doesn’t come close to the amount of chaos you deal with daily … you might work in Human Resources

If you find yourself measuring the flow efficiency of the Starbucks queue … you might be a Kanban Trainer

If you jam out to the hold music while on a call with your bank … you might work in IT Support

If a year that starts in April doesn’t weird you out … you might work in Accounting

If you feel you need to follow up on yesterday’s follow-up… you might be a Project Manager

If you like your coffee hot and your calls cold … you might be in Sales/Business Development

If everybody stops the chit-chat when you login to a Zoom meeting … you might be the President

When I was first introduced to Agile development, it felt like a natural flow for developers and business stakeholders to collaborate and deliver functionality in short iterations. It was rewarding (and sometimes disappointing) to demo features every two weeks and get direct feedback from users. Continuous Integration tools matured to make the delivery process more automated and consistent. However, the operations team was left out of this process. Environment provisioning, maintenance, exception handling, performance monitoring, security – all these aspects were typically deprioritized in favor of keeping the feature release cadence. The DevOps/DevSecOps movement emerged as a cultural and technical answer to this dilemma, advocating for a much closer relationship between development and operations teams. 

Today, companies are rapidly expanding their cloud infrastructure footprint. What I’ve heard from discussions with customers is that the business value driven by the cloud is simply too great to ignore. However, much like the relationship between development and ops teams during the early Agile days, a gap is forming between Finance and DevOps teams. Traditional infrastructure budgeting and planning doesn’t work when you’re moving from a CapEx to OpEx cost structure.  Engineering teams can provision virtually unlimited cloud resources to build solutions, but cost accountability is largely ignored. Call it the pandemic cloud spend hangover. 

Our customers see the flexibility of the cloud as an innovation driver rather than simply an expense. But they still need to understand the true value of their cloud spend – which products or systems are operating efficiently? Which ones are wasting resources? 

I decided to look into FinOps practices to discover techniques for optimizing cloud spend. I researched the FinOps Foundation and read the book, Cloud FinOps. Much like the DevOps movement, FinOps seeks to bring cross-functional teams together before cloud spend gets out of hand. It encompasses both cultural and technical approaches. 

Here are some questions that I had before and the answers that I discovered from my research: 

Where do companies start with FinOps without getting overwhelmed by yet another oversight process? 

Start by understanding where your costs are allocated. Understand how the cloud provider’s billing details are laid out and seek to apply the correct costs to a business unit or project team. Resource tagging is an essential first step to allocating costs. The FinOps team should work together to come up with standard tagging guidelines. 

Don’t assume the primary goal is cost savings. Instead, approach FinOps as a way to optimize cloud usage to meet your business objectives. Encourage reps from engineering and finance to work together to define objectives and key results (OKRs). These objectives may be different for each team/project and should be considered when making cloud optimization recommendations. For example, if one team’s objective is time-to-market, then costs may spike as they strive to beat the competition.  

What are some common tagging/allocation strategies?

Cloud vendors provide granular cost data down to the millisecond of usage. For example, AWS Lambda recently went from rounding to the nearest 100ms of duration to the nearest millisecond. However, it’s difficult to determine what teams/projects/initiatives are using which resources and for how long. For this reason, tagging and cost allocation are essential to FinOps. 

According to the book, there are generally two approaches for cost allocation: 

  1. Tagging – resource-level labels that provide the most granularity.
  2. Hierarchy-based – allocations at the cloud account or subscription level. For example, using separate AWS accounts for prod/dev/test environments or different business units.

Their recommendation is to start with hierarchy-based allocations to ensure the highest level of coverage.  Tagging is often overlooked or forgotten by engineering teams, leading to unallocated resources. This doesn’t suggest skipping tags, but make sure you have a consistent strategy for tagging resources to set team expectations. 
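If your teams define infrastructure as code, a tagging standard can be enforced programmatically rather than by convention. Here’s a minimal sketch using the AWS CDK in TypeScript – the tag names and values are just examples of what a standard might look like, not a prescription from the book:

import { App, Stack, Tags } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';

const app = new App();
const stack = new Stack(app, 'ReportingStack');
new Bucket(stack, 'ReportsBucket');

// Tags.of() visits every taggable resource under the given scope, so one
// call per tag enforces the standard across the whole stack instead of
// relying on engineers to remember it resource-by-resource.
Tags.of(stack).add('cost-center', 'FIN-1234');    // hypothetical values –
Tags.of(stack).add('business-unit', 'reporting'); // substitute your own
Tags.of(stack).add('environment', 'prod');        // tagging guidelines

app.synth();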

 How do you adopt a FinOps approach without disrupting the development team and slowing down their progress? 

The nature of usage-based cloud resources puts spending responsibility on the engineering team since inefficient use can affect the bottom line. This is yet another responsibility that “shifts left”, or earlier in the development process. In addition to shifting left on security/testing/deployment/etc., engineering is now expected to monitor their cloud usage. How can FinOps alleviate some of this pressure so developers can focus on innovation? 

Again, collaboration is key. Demands to reduce cloud spend cannot be a one-way conversation. A key theme in the book is to centralize rate reduction and decentralize usage reduction (cost avoidance).  

  • Engineering teams understand their resource needs so they’re responsible for finding and reducing wasted/unused resources (i.e., decentralized).  
  • Rate reduction techniques like using reserved instances and committed use discounts are best handled by a centralized FinOps team. This team takes a comprehensive view of cloud spend across the organization and can identify common resources where reservations make sense. 

Usage reduction opportunities, such as right-sizing or shutting down unused resources, should be identified by the FinOps team and provided to the engineering teams. These suggestions become technical debt and are prioritized along with other work in the backlog. Quantifying the potential savings of a suggestion lets the team decide whether it’s worth spending the engineering hours on the change – a fix that saves $2,000 a month easily justifies a day of effort, while one that saves $20 a month probably doesn’t.

https://www.finops.org/projects/encouraging-engineers-to-take-action/

How do you account for cloud resources that are shared among many different teams?

Allocating cloud spend to specific teams or projects based on tagging ensures that costs are distributed fairly and accurately. But what about shared costs like support charges? The book provides three examples for splitting these costs:

  • Proportional – distribute proportionally based on each team’s actual cloud spend. The more you spend, the higher your allocation of support and other shared costs. This is the recommended approach for most organizations (a worked example follows this list).
  • Evenly – split evenly among teams.
  • Fixed – a pre-determined fixed percentage for each team.
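For example, under the proportional approach, if Team A spends $60K and Team B spends $40K in a month, a $10K shared support charge would be split $6K to Team A and $4K to Team B.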

Overall, I thought the authors did a great job of introducing Cloud FinOps without overwhelming the reader with another rigid set of practices. They encourage the Crawl/Walk/Run approach to get teams started on understanding their cloud spend and where they can make incremental improvements. I had some initial concerns about FinOps bogging down the productivity and innovation coming from engineering teams. But the advice from practitioners is to provide data to inform engineering about upward trends and cost anomalies. Teams can then make decisions on where to reduce usage or apply for discounts.

The cloud providers are constantly changing, introducing new services and cost models. FinOps practices must also evolve. I recommend checking out the Cloud FinOps book and the related FinOps Foundation website for up-to-date practices.

We’ve been using the AWS Amplify toolkit to quickly build out a serverless infrastructure for one of our web apps. The services we use are IAM, Cognito, API Gateway, Lambda, and DynamoDB. We’ve found that the Amplify CLI and platform is a nice way to get us up and running. We then update the resulting CloudFormation templates as necessary for our specific needs. You can see our series of videos about our experience here.

The Problem

However, starting with version 7 of the Amplify CLI, AWS changed the way you override Amplify-generated resource configurations in the CloudFormation template (CFT) files. We found this out the hard way when we tried to update the generated CFT files directly. After upgrading the CLI and then calling amplify push, our changes were overwritten with default values – NOT GOOD! Specifically, we wanted to add a custom attribute to our Cognito user pool.

After a few frustrating hours of troubleshooting and support from AWS, we realized that the Amplify CLI tooling changed how to override Amplify-generated content. AWS announced the changes here, but unfortunately, we didn’t see the announcement or accompanying blog post.

The Solution

Amplify now generates an “override.ts” TypeScript file for you to provide your own customizations using Cloud Development Kit (CDK) constructs.

In our case, we wanted to create a Cognito custom attribute. Instead of changing the CFT directly (under the new “build” folder in Amplify), we generated an “override.ts” file using the command “amplify override auth”. We then added our custom attribute using the CDK, along the lines of the sketch below.
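A minimal sketch of that kind of override – the “tenantId” attribute name is hypothetical, so substitute your own:

import { AmplifyAuthCognitoStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyAuthCognitoStackTemplate) {
  // Append a custom attribute to the generated user pool's schema.
  // "tenantId" is an example name – Cognito exposes it as "custom:tenantId".
  resources.userPool.schema = [
    ...((resources.userPool.schema as any[]) ?? []),
    {
      attributeDataType: 'String',
      name: 'tenantId',
      mutable: true,
    },
  ];
}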

Important Note: The amplify folder structure changes starting with CLI version 7. To avoid deployment issues, be sure to keep your CLI version consistent between your local environment and the build settings in the AWS console. Here’s the Amplify Build Settings window in the console (note that we’re using the “latest” version):


If you’re upgrading your CLI, especially to version 7, make sure to test deployments in a non-production environment, first.

What are some other uses for this updated override technique? The Amplify blog post and documentation mention examples like Cognito overrides for password policies and IAM roles for auth/unauth users. They also mention S3 overrides for bucket configurations like versioning.

For DynamoDB, we’ve found that Amplify defaults to a provisioned capacity model. There are benefits to this, but provisioned mode charges an hourly rate for the capacity you reserve whether you consume it or not. This is not always ideal when you’re building a greenfield app or a proof-of-concept. We used the amplify override tools to set our billing mode to On-demand, or “Pay per request”. Again, this may not be ideal for your use case, but here’s the kind of override.ts file we used:
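A minimal sketch, assuming the table was created through Amplify’s storage category (amplify override storage); the exact file will vary with your resource names:

import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Switch the table from provisioned capacity to on-demand billing.
  resources.dynamoDBTable.billingMode = 'PAY_PER_REQUEST';

  // Provisioned throughput settings don't apply in on-demand mode.
  resources.dynamoDBTable.provisionedThroughput = undefined;
}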

Conclusion

At first, I found this new override process frustrating since it discourages direct updates to the generated CFT files. But I suppose this is a better way to abstract out your own customizations and track them separately. It’s also a good introduction to the AWS CDK, a powerful way to program your environment beyond declarative yaml files like CFT.

Further reading and references:

DynamoDB On-Demand: When, why and how to use it in your serverless applications

Authentication – Override Amplify-generated Cognito resources – AWS Amplify Docs

Override Amplify-generated backend resources using CDK | Front-End Web & Mobile (amazon.com)

Top reasons why we use AWS CDK over CloudFormation – DEV Community

Here is our final video in the 3-part series Building and Securing Serverless Apps using AWS Amplify.  In case you missed them, you can find Part 1 here and Part 2 here.  Please let us know if you would like to learn more about this series!

The video below is Part 2 of our 3-part series: Building and Securing Serverless Apps using AWS Amplify.  In case you missed Part 1 – take a look at it here.  Be sure to stay tuned for Part 3!

AWS Amplify is a set of tools that promises to make full-stack, cloud-native development quicker and easier. We’ve used it to build and deploy different products without getting bogged down by heavy infrastructure configuration. On one hand, Amplify gives you a rapid head start with services like Lambda functions, APIs, CI/CD pipelines, and CloudFormation/IaC templates. On the other hand, you don’t always know what it’s generating and how it’s securing your resources.

If you’re curious about rapid development tools that can get you started on the road to serverless but want to understand what’s being created, check out our series of videos.

We’ll take a front-end web app and incrementally build out authentication, API/function, and storage layers. Along the way, we’ll point out any gotchas or lessons learned from our experience.

Background 

We’re all human, and we tend to take the easy route in certain scenarios. Remembering passwords is one of the most common chores of modern life, and we often create passwords that are easy to remember to avoid the trouble of resetting them if we forget. In this blog, I am going to discuss a tool called “Have I Been Pwned” (HIBP) that helps us find out whether a password has been seen in recent cybersecurity incidents or data breaches.

What is HIBP? What is it used for? 

“Have I Been Pwned” is an open-source initiative that helps people check whether their login information has been included in breached data archives circulating on the dark web. It also allows users to check how often a given password appears in the dataset – testing the strength of a password against dictionary-style brute-force attacks. Recently, the FBI announced that it will work closely with the HIBP team to share breached passwords for users to check against. This open-source initiative will help many consumers avoid breached passwords when creating accounts on the web. We used the HIBP API to alert users of our customers’ custom web-based applications to any pwned passwords they chose while creating accounts. This way, users know to avoid breached passwords that have been seen multiple times on the dark web.

How does it work? 

HIBP stores more than half a billion pwned passwords that have previously been exposed in data breaches. The entire data set is both downloadable and searchable online via the Pwned Passwords page. Each entry is the SHA-1 hash of a UTF-8 encoded password, followed by a colon (:) and the count of times that password has been seen, with one entry per CRLF-terminated line.

When using the API to check whether a password has been breached, we cannot send the actual password over the web, as doing so would compromise the password the user entered during account creation.

To maintain anonymity and protect the password being searched for, Pwned Passwords implements a k-anonymity model that allows a password to be looked up by partial hash using a range search. We pass only the first 5 characters of the SHA-1 password hash (not case-sensitive) to the API, which responds with the suffix of every hash beginning with that prefix, followed by a count of how many times each appears in the dataset. The API consumer can then compare the remainder of the source password’s hash against the returned suffixes. If no suffix matches, the password has not appeared in any known breach to date.
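As a concrete illustration of the range search, here’s a minimal TypeScript sketch (it assumes Node 18+ for the built-in fetch; the function name is ours, not part of the HIBP API):

import { createHash } from 'crypto';

// Returns how many times a password appears in the Pwned Passwords
// dataset, without ever sending the full password or its full hash.
async function pwnedCount(password: string): Promise<number> {
  const hash = createHash('sha1').update(password, 'utf8').digest('hex').toUpperCase();
  const prefix = hash.slice(0, 5); // the only part that leaves your machine
  const suffix = hash.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await res.text();

  // Each response line looks like "<35-char-hash-suffix>:<count>".
  for (const line of body.split('\n')) {
    const [candidate, count] = line.trim().split(':');
    if (candidate === suffix) return parseInt(count, 10);
  }
  return 0; // no match – the password hasn't been seen in a known breach
}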

Integrated Solution 

Pass2Play is one of our custom web-based solutions where we integrated the password breach API to detect any breached passwords during the sign-up process. Below is the workflow: 

  1. The user goes to sign up for the account. 
  2. Enters username and password to sign up. 
  3. After entering the password, the user gets a warning message if the password was ever breached, along with how many times it has been seen.


In the screen above, the user entered the password “P@ssword” and got a warning message clearly stating that the entered password has been seen 7491 times in the dataset circulating on the dark web. We do not want our users choosing such passwords for their accounts, which could later be compromised through dictionary-style brute-force attacks.

Architecture and Process flow diagram:

API Request and Response example:

SHA-1 hash of P@ssword: 9E7C97801CB4CCE87B6C02F98291A6420E6400AD

API GET: https://api.pwnedpasswords.com/range/9E7C9

Response: Returns 550 lines of hash suffixes that match the first 5 characters

The highlighted text in the above image is the suffix that, combined with the 5-character prefix of the source password’s hash, matches the full hash – and it has been seen 7491 times.

Conclusion

I would like to conclude this blog by saying that integrating such checks into your applications can help organizations avoid larger security issues, since passwords are still the most common way of authenticating users. Alerting end users during account creation makes them aware of breached passwords and also trains them to use strong ones.

Recently, I read an article titled, “Why Distributed Software Development Teams Work Infinitely Better”, by Boris Kontsevoi.

It’s a bit hyperbolic to say that distributed teams work infinitely better, but it’s something that any software development team should consider now that we’ve all been distributed for at least a year.

I’ve worked on Agile teams for 10-15 years and thought that they implicitly required co-location. I also experienced the benefits of working side-by-side with (or at least close to) other team members as we hashed out problems on whiteboards and had ad hoc architecture arguments.

But as Mr. Kontsevoi points out, Agile encourages face-to-face conversation, but not necessarily in the same physical space. The Principles behind the Agile Manifesto were written over 20 years ago, but they’re still very much relevant because they don’t prescribe exactly “how” to follow the principles. We can still have face-to-face conversations, but now they’re over video calls.

This brings me to a key point of the article: “dispersed teams outperform co-located teams and collaboration is key”. The Manifesto states that building projects around motivated individuals is a key Agile principle.

Translation: collaboration and motivated individuals are essential for a distributed team to be successful.

  • You cannot be passive on a team that requires everyone to surface questions and concerns early so that you can plan appropriately.
  • You cannot fade into the background on a distributed team, hoping that minimal effort is good enough.
  • If you’re leading a distributed team, you must encourage active participation by having regular, collaborative team meetings. If there are team members that find it difficult to speak above the “din” of group meetings, seek them out for 1:1 meetings (also encouraged by Mr. Kontsevoi).

Luckily, today’s tools are vastly improved for distributed teams. They allow people to post questions on channels where relevant team members can respond, sparking ad hoc problem-solving sessions that can eventually lead to a video call.

Motivated individuals will always find a way to make a project succeed, whether they’re distributed, co-located, or somewhere in between. The days of tossing software development teams into a physical room to “work it out” are likely over. The new distributed paradigm is exciting and, yes, better – but the old principles still apply.

Last year, we worked with experts from George Mason University to build a COVID screening and tracing platform called Pass2Play. We used this opportunity to implement a Serverless architecture using the AWS cloud.

This video discusses our experience, including our solution goals, high-level design, lessons learned and product outcomes.

It’s specific to our situation, but we’d love to hear about other experiences with the Serverless tools and services offered by AWS, Azure and Google. There are a lot of opinions on Serverless, but there’s no doubt that it’s pushing product developers to rethink their delivery and maintenance processes.

Feel free to leave a comment if we’re missing anything or to share your own experience.

As 2020 has unfolded, our development team has been working on a brand new app: Pass2Play!  Check out the video below to see all of its features and capabilities!

To learn more about Pass2Play click here!

In the previous blog, I provided insights on what ZTA is, what its core components are, why organizations should adopt it and what the threats to it are. In this blog, I will go through some common deployment use cases/scenarios for ZTA that use software-defined perimeters and move away from enterprise network-based perimeter security.

Scenario 1:  Enterprise using cloud provider to host applications as cloud services and accessed by employees from the enterprise owned network or external private/public untrusted network

In this case, the enterprise hosts resources or applications in a public cloud, and users access them to perform their tasks. This kind of infrastructure helps the organization serve employees at geographically dispersed locations who might not connect to the enterprise-owned network but can still work remotely using personal devices or enterprise-owned assets. Access to enterprise resources can be restricted based on user identity, device identity, device posture/health, time of access, geographic location and behavioral logs. Based on these risk factors, the enterprise cloud gateway may grant access to resources like the employee email service, calendar and employee portal, but restrict access to services with sensitive data like the H.R. database, finance services or the account management portal. The Policy Engine/Policy Administrator (PE/PA) is hosted as a cloud service that provides decisions to the gateway based on a trust score calculated from various sources: the enterprise system agent installed on devices, the CDM system, activity logs, threat intelligence, SIEM, ID management, PKI certificate management, data access policy and industry compliance. The enterprise local network could host the PE/PA service instead of the cloud provider, but doing so offers little benefit: the additional round trip to the enterprise network when accessing cloud-hosted services impacts overall performance.

Scenario 2:  Enterprise using two different cloud providers to host separate cloud services as part of the application and accessed by employees from the enterprise owned network or external private/public untrusted network

The enterprise has broken a monolithic application into separate microservices, or components, hosted with multiple cloud providers, even though it has its own enterprise network. The web front end can be deployed in Cloud Provider A and communicate directly with the database component hosted in Cloud Provider B, instead of tunneling through the enterprise network. This is basically a server-to-server implementation with software-defined perimeters instead of enterprise perimeters for security. PEPs are deployed at the access points of the web front end and database components and decide whether to grant access to the requested service based on the trust score. The PE and PA can be hosted in either cloud or with another third-party provider. Enterprise-owned assets with agents installed on them can request access through the PEPs directly, and the enterprise can still manage its resources even when they are hosted outside the enterprise network.

Scenario 3:  Enterprise having contractors, visitors and other non-employees that access the enterprise network

In this scenario, the enterprise network hosts applications, databases, IoT devices and other assets that can be accessed by employees, contractors, visitors, technicians and guests. Assets like internal applications and sensitive data should be accessible only to employees and kept off-limits to visitors, guests and technicians. Technicians who show up to fix IoT devices like smart HVAC and lighting systems still need access to the network or internet, and visitors and guests also need access to the local network to reach the internet and perform their tasks. All of these situations can be handled by creating user and device profiles and installing enterprise agents on systems to prevent network reconnaissance and east-west movement once connected to the network. Based on their identity and device profile, users are placed on either the enterprise employee network or a BYOD/guest network, obscuring resources using the ZTA approach of SDPs. The PE and PA could be hosted either on the LAN or as a cloud service, depending on the architecture the organization decides on. All enterprise-owned devices with an installed agent can access enterprise resources through the gateway portal. All privately owned devices – those used by visitors, guests and technicians, employees’ personal phones, or any other non-enterprise-owned assets – are allowed onto the BYOD or guest network to use the internet, based on their user and device profiles.

Zero Trust Maturity

As organizations adopt zero trust and mature, they go through various stages, adapting based on cost, talent, awareness and business domain needs. Zero trust is a marathon, not a sprint, so incrementally maturing your level of zero trust is the desired approach.

Stage 0: Organizations have not yet thought about the zero trust journey but have on-premises fragmented identity, no cloud integration and passwords are used everywhere to access resources.

Stage 1: Adopting unified IAM by providing single sign-on across employees, contractors and business partners using multi-factor authentication (MFA) to access resources and starting to focus on API security.

Stage 2: In this stage, organizations move towards deploying safeguards such as context-based (user profile, device profile, location, network, application) access policies to make decisions, automating provisioning and deprovisioning of employee/external user accounts and prioritizing secure access to APIs.

Stage 3: This is the highest maturity level that can be achieved, and it adopts passwordless and frictionless solutions by using biometrics, email magic links, tokens and many others.

Most organizations are still in stage 0 or stage 1; mainly large corporations have matured to stage 2. Due to the current COVID situation, organizations have quickly started to invest heavily in improving their ZT maturity level and overall security posture.


References

Draft (2nd) NIST Special Publication 800-207, Zero Trust Architecture. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf

The State of Zero Trust Security in Global Organizations

Effective Business Continuity Plans Require CISOs to Rethink WAN Connectivity

Zero Trust Security For Enterprise Mobility

What is App Modernization

Legacy application modernization is the process of updating existing, aging applications with modern architecture to enhance features and capabilities. By migrating your legacy applications, you can include the latest functionality that better aligns with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair – doubly so when the software becomes so outdated that it may no longer be compatible with modern systems.

A Quick Look at a Sample Legacy Monolithic Application

For this article, consider the decade-and-a-half-old legacy monolithic application depicted in the following diagram.


This depicts a traditional n-tier architecture that was very common over the past 20 years or so. There are several shortcomings to this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system rather than to smaller subsystems, increasing hardware costs.

Why Modernize?

We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.

The advantages are:

  1. Reduce cost – costs can be reduced by redirecting computing power only to the subsystems that need it, allowing more granular scalability.
  2. Avoid vendor lock-in – each subsystem can be built with the technology for which it is best suited.
  3. Reduce operational overhead – monolithic systems written in legacy technologies tend to stay that way due to the increased cost of change, and they require resources with a specific skillset.
  4. De-coupling – strong coupling makes it difficult to optimize the infrastructure budget; de-coupled subsystems are easier to upgrade individually.

Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.

Legacy App Modernization Strategies

Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which could require complete re-platforming. A lift-and-shift will not add any business value if you migrate legacy applications just for the sake of modernization. Instead, it’s about taking the bones, or DNA, of the original software and modernizing it to better represent current business needs.

Legacy Monolithic App Modernization Approaches

Having examined the nightmarish aspects of continuing to maintain legacy monolithic applications, this article presents two application modernization strategies. Both are explained at length below to give you a basic idea of how to pick whichever is feasible given the constraints you might have.

  • Migrating to Microservices Architecture
  • Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake

Microservices Architecture

In this section, we shall dig into how re-architecting, re-factoring and re-coding per the microservices paradigm help avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you better understand microservices architecture – a leap forward from legacy monolithic architecture.


At a quick glance at the diagram above, you can see a big central piece called the API Gateway with Discovery Client. This is comparable to a Façade in a monolithic application. The API Gateway is essentially the entry point for accessing the several microservices, which are comparable to modules in a monolithic application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, since all data access flows through a single database set via the Database Microservice shown in the diagram. In other words, the API Gateway/Orchestrator orchestrates the sequence of calls based on the business logic and calls the Database Microservice, as individual microservices have no direct access to the database. Notice also that this architecture supports various client systems such as mobile apps, web apps, IoT apps, MQTT apps, et al.

Although this architecture gives us an edge by allowing different technologies in different microservices, it leaves a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and object/data model, which requires it to be re-deployed and tested after each microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step towards building heterogeneous systems that work in tandem to provide a complete solution – a goal that would otherwise be impossible with a monolithic legacy application architecture. The sketch below makes the gateway’s orchestration role concrete.
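A minimal sketch in TypeScript/Express – the service names, routes and URLs are all hypothetical. The gateway validates a request through one microservice, then persists it through the Database Microservice, so the call sequence (and the business logic behind it) lives in the gateway:

import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical endpoints, normally resolved via the Discovery Client.
const ORDER_SERVICE = 'http://order-service:8080';
const DATABASE_SERVICE = 'http://database-service:8080';

// The gateway owns the call sequence: validate first, then persist.
// Individual microservices never touch the database directly.
app.post('/orders', async (req, res) => {
  const validation = await fetch(`${ORDER_SERVICE}/validate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body),
  });
  if (!validation.ok) {
    return res.status(400).json({ error: 'order failed validation' });
  }

  const saved = await fetch(`${DATABASE_SERVICE}/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body),
  });
  res.status(saved.status).json(await saved.json());
});

app.listen(3000);

Note that a change to either service’s contract forces a gateway change and re-test – exactly the coupling the next approach removes.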

Microservices Architecture with Realtime Data Movement to Data Lake

In this section, we shall dig into how re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems per the microservices paradigm helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you understand a complete, more advanced microservices architecture.


At the outset, most of the diagram for this approach looks like the previous one, but it adheres to the actual microservices paradigm more closely. Here, each microservice is independent and has its own micro database of whatever flavor best fits the business needs, avoiding both the dependency on a dedicated Database Microservice and the overloading of the API Gateway as an orchestrator carrying business logic. The advantage of this approach is that each microservice can have its own CI/CD pipeline and release. In other words, a part of the application can be released on its own – with TDD/ATDD properly implemented – reducing the costs incurred for testing, deployment and release management. This kind of architecture does not limit the overall solution to any particular technical stack; it encourages quick solutions built with various technical stacks, and it gives the flexibility to scale resources for heavily used microservices when necessary.

Besides this, the architecture encourages having a Realtime Engine (which can be a microservice itself) that reads data from the various databases asynchronously, applies data aggregation and de-duplication algorithms, and sends pristine data to a data lake. Advanced applications can then use the data in the data lake for machine learning and data analytics to cater to the business needs.


Note: This article was not written with any particular cloud flavor in mind. This is a general app modernization microservices architecture that can run anywhere: on-prem, OpenShift (private cloud), Azure, Google Cloud or AWS.

At heart, I’m still a developer. Titles or positions aside, what I enjoy most is solving problems and writing good code to implement the solutions. And so, when I work on strategic technology engagements, I often empathize with the developer teams, and can clearly see their points of view. 

One issue I often see on such engagements is that the developer teams find little or no value in the Agile retrospective meetings. I hear expressions like, “waste of time,” or, “lots of psychobabble.” I understand where developers are coming from when they express these sentiments, but I disagree with them. If done well (emphasis on well), retrospectives are absolutely essential and make developers’ lives much better. When developers are so cynical about retrospectives, I know that retrospectives are not being conducted very well. 

Let’s take a look at some of the “anti-patterns” I’ve seen. 

Pro forma retrospectives 

In a pro forma retrospective, everybody takes turns to say what they like and what they think can be improved. So far, so good. But nobody is encouraged to develop their thoughts or to expound. So week after week, the retrospective’s answers are, “Everyone is doing a great job, working hard,” and, “Nothing is really wrong.” At that point, the meeting is no longer a retrospective, it’s just a ceremony.

The coach or the scrum master should encourage everyone to speak their minds. Developers coming from a waterfall background, or from a “command and control” background, may not be comfortable expressing their doubts and reservations. Make sure they’re encouraged to do so. 

Using attendees as lab rats 

Some coaches I worked with thought that projects were an ideal way to test out new approaches to retrospectives that they had dreamed up. One involved putting stickies on team members’ foreheads and trying to guess something… I don’t even remember the details, the whole thing was so silly. Another thought that putting things in verse was a good idea. I have no idea where in left field these notions come from, but all they succeeded in doing was making the team members feel totally patronized. 

Stick with tried and true methods: ask for things that are going well and should be reinforced, then things that need improvement, then action items for reinforcement and improvement. If you want to experiment, that’s fine, but make it clear it’s an experiment and ask for feedback on the experiment itself. 

Not following up on action items 

Some retrospectives do everything right in the meeting, but nobody acts on the action items. Now it’s just a gab session, not a useful discussion. Make sure the Scrum Master acts outside the meeting to reinforce the good and improve the bad. Then discuss the actions taken at the next retrospective, so that everyone sees something is being done. If there’s time, revisit the improvement areas from the last meeting and ask if things have improved. 

Without actual follow-up, developers will chalk up another “I’ve seen it all before” score, and will see the meetings as wastes of time.

Putting people on the spot 

Retrospectives should be about making things better, not about forcing members to be defensive. I’ve seen them run as blame sessions, where “who’s to blame for this?” is the aim. Don’t be that person. The team should be accountable, yes, but trying to affix blame on individuals or groups isn’t productive. Instead, ask everyone for suggested action items to improve the situation, and make sure everyone signs on to the action items. 

As developers, we’re trained to expect the worst and to see risk everywhere. Sometimes, it makes us cynical and resistant to faddish concepts. Retrospectives are not that. They’re useful forums to express frustrations in a civil and productive way so that frustrations don’t build up and lead to disruptive behavior in other settings. If conducted well and if effective action is taken, they really do lead to tangible improvements that make everyone’s jobs much easier, including developers. 

If you notice that your developer team thinks retrospectives are worthless, glance at this checklist: 

  • Don’t patronize the developers or ask them to defend themselves 
  • Show you’re listening by actually effecting requested changes 
  • Value the outcome of the meetings over the process of the meetings 

Introduction
In the last post, we looked at pattern matching and data structures in Elixir.  In this post, we’re going to look particularly at Elixir processes and how they affect handling errors and state.

Supervisors & Error Handling
Supervisors, or rather supervision trees as controlled by supervisors, are another differentiating feature of Elixir.  Processes in Elixir are supposed to be much lighter weight than threads in other languages,[1] and you’re encouraged to create separate processes for many different purposes.  These processes can be organized into supervision trees that control what happens when a process within the tree terminates unexpectedly.  There are a lot of options for what happens in different cases, so I’ll refer you to the documentation to learn about them and just focus on my experiences with them.

Elixir has a philosophy of “failing fast” and letting a supervision tree handle restarting the failing process automatically.  I tried to follow this philosophy to start with, but I’ve been finding that I’m less enamoured with this feature than I thought I would be.  My initial impression was that this philosophy would free me of the burden of thinking about error handling, but that turned out not to be the case.  If nothing else, you have to understand the supervision tree structure and how that will affect the state you’re storing (yes, Elixir does actually have state, more below) because restarting a process resets its state.  I had one case where I accidentally brought a server down because I just let a supervisor restart a failed process.  The process dutifully started up again, hit the same error and failed again, over and over again.  In surprisingly short order, this caused the log file to take up all the available disk space and took down the server entirely.  Granted this could happen if I had explicit error handling, too, but at least there I’d have to think about it consciously and I’m more likely to be aware of the possibilities.

You also have to be aware of how any child processes you created, either knowingly or unknowingly through libraries, are going to behave, and add some extra code to your processes if you want to prevent the children bringing your process down with them.  So, I’m finding supervisors a mixed blessing at best.  They do offer some convenience, but they don’t really allow you to do anything that couldn’t be done in other languages with careful error handling.  By hiding a lot of the necessity for that error handling, though, I feel like it discourages me from thinking about what might happen and how I would want to react to those events as much as I should.  As a final note tangentially related to supervisors, I also find that the heavy use of processes and Elixir’s inlining of code can make stack traces less useful than one might like.

All that being said, supervisors are very valuable in Elixir because, at least to my thinking, the error handling systems are a mess.  You can have “errors,” which are “raised” and may be handled in a “rescue” clause.  You have “throws,” which appear not to have any other name, are “thrown” and may be handled in a “catch” clause.  And you have “exits,” which can also be handled in a “catch” clause, or trapped and converted to a message and sent to the process outside of the current flow of control.

I feel like the separation of errors and throws is a distinction without a difference.  In theory, throws should be used if you intend to control the flow of execution and errors should be used for “unexpected and/or exceptional situations,” but the idea of what’s “unexpected and/or exceptional” can vary a lot between people, so there have been a few times when third-party libraries I was using raised an error in a situation that I think should have been perfectly expectable and should have been handled by returning a result code instead.  (To be fair, I haven’t come across any code using throws.  The habit seems to be to handle things with with statements, instead.)  There is a recommendation that one should have two versions of a method: one that returns a result code, and another with the same name suffixed with a bang (File.read vs. File.read!, for example) that raises an error.  But library authors don’t follow this convention religiously, and sometimes you’ll find a function that’s supposed to return no matter what calling a function in yet another library that does raise an error, so you can’t really rely on it.

I haven’t seen any code explicitly using an exit statement or any flow control trying to catch exits, but I have found that converting the unintentional errors that sometimes happen into messages allows me to keep other processes from taking down my own, thus letting me maintain the state of my process rather than having it start from its initial state again.  That’s something I appreciate, but I still find it less communicative when reading code: the conversion is handled by setting a flag on the process, and that flag persists until explicitly turned off, so I find it harder to pinpoint where in my code I was calling the thing that actually failed.

All-in-all, I think that three forms of error handling are too many and that explicit error handling is easier to understand and forces one into thinking about it more.  (I happen to believe that the extra thinking is worthwhile, but others may disagree.)  That being said, adding supervision trees to other languages might be a nice touch.

State
Given that the hallmark of functional programming languages is being stateless, it may seem odd to conclude with a discussion of state, but the world is actually a stateful place and Elixir provides ways to maintain state that are worth discussing.

At one level, Elixir is truly stateless in that it’s not possible to modify the value of a variable.  Instead, you create a new value that’s a function of the old value.  For example, to remove an element from a list, you’d have to write something like:

my_list = List.delete_at(my_list, 3)

It’s not just that the List.delete_at function happens to create a new list with one fewer element; there’s no facility in the language for modifying the original value.  Elixir, however, does let you associate a variable with a process, and this effectively becomes the state of the process.  There are several different forms that these processes can take, but the one that I’ve used (and seen used most) is the GenServer.  The example in the linked document is quite extensive, and I refer you to that for details, but essentially there are two things to understand for the purposes of my discussion.  On the client side, just as in an OOP language, you don’t need to know anything about the state associated with a process.  So, continuing with the Stack example from the documentation, a client would add an element to a stack by calling something like:

GenServer.call(process_identifier, {:push, "value"})

(Note that process_identifier can be either a system generated identifier or a name that you give it when creating the process.  I’ll explain why that’s important in a little while.)  On the server side, you need code something like:

def handle_call({:push, value}, from, state) do
  new_state = [value | state]  # push the new value onto the list held as the process state
  {:reply, :ok, new_state}
end

Here, the tuple {:push, value} matches the second parameter of the client’s invocation.  The from parameter is the process identifier for the client, and state is the state associated with the server process.  The return from the server function must be a tuple, the first value of which is a command to the infrastructure code.  In the case of the :reply command, the second value of the tuple is what will be returned to the client, and the final value in the tuple is the state that should be associated with the process going forward.  Within a process, only one GenServer callback (handle_call, handle_cast, handle_info; see the documentation for details) can be running at a time.  Strictly in terms of the deliberate state associated with the process, this structure makes concurrent programming quite safe because you don’t need to worry about concurrent modification of the state.  Honestly, you’d achieve the same effect in Java, for example, by marking every public method of a class as synchronized.  It works, and it does simplify matters, but it might be overkill, too.  On the other hand, when I’m working with concurrency in OOP languages, I tend towards patterns like Producer/Consumer relationships, which I feel are just as safe and offer me more flexibility.

There is a dark side to Elixir’s use of processes to maintain state, too.  Beyond the explicit state discussed above, I’ve run into two forms of implicit state that have tripped me up from time to time – not because they’re unreasonable in and of themselves, but because they are hidden, and as I become more aware of them, I realize that they do need thinking about.  The first form of implicit state is the message queue that’s used to communicate with each process.  When a client makes a call like GenServer.call(pid, {:push, "something"}), the system actually puts the {:push, "something"} message onto the message queue for the given pid, and that message waits in line until it gets to the front of the queue and then gets executed by a corresponding handle_call function.

The first time this tripped me up was when I was using a third-party library to talk to a device via a TCP socket.  I used this library for a while without any issues, but then there was one case where it would start returning the data from one device when I was asking for data from another device.  As I looked at the code, both the library code and mine, I realized that my call was timing out (anytime you use GenServer.call, there’s a timeout involved; I was using the default of 5 seconds, and it was that timeout being triggered rather than a TCP timeout), but the device was still returning its data somewhat after that.  The timing was such that the code doing the low-level TCP communications put the response data into the higher-level library’s message queue before my code put its next message requesting data into the library’s queue, so the library sent my query to the device but gave my code the response to the previous query and went back to waiting for another message from either side, thus making everything one query off until another query call timed out.  This was easy enough to solve by making the library use a different process for each device, but it illustrates that you’re not entirely relieved of thinking about state when writing concurrent programs in Elixir.

Another issue I’ve had with the message queue is that anything that has your process identifier can send you any message.  I ran into this when I was using a different third-party library to send text messages to Slack.  The documentation for that library said that I should use the asynchronous send method if I didn’t care about the result.  In fact, it described the method as “fire-and-forget.”  These were useful but non-essential status messages, so that seemed the right answer for me.  Great, until the library decided to put a failure message into the message queue of my process and my code wasn’t written to accept it.  Because there was no function that matched the parameters sent by the library, the process crashed, the supervisor restarted it, and it just crashed again, and again, and again until the original condition that caused the problem went away.  Granted, this was mostly a problem with the library’s documentation and was easy enough to solve by adding a callback to my process to handle the library’s messages, but it does illustrate another area where the state of the message queue can surprise you.

The second place that I’ve been surprised to find implicit state is the state of the processes themselves.  To be fair, this has only been an issue in writing unit tests, but it does still make life somewhat more difficult.  In the OOP world, I’m used to having a new instance of the class I’m testing created for each and every test.  That turns out not to be the case when testing GenServers.  It’s fine if you’re using the system-generated process identifiers, but I often have processes that have a fixed identifier so that any other process can send a message to them.  I ended up with messages from one test getting into the message queue of the process while another test was running.

It turns out that tests run asynchronously by default, but even when I went through and turned that off, I still had trouble controlling the state of these specially named processes.  I tried restarting them in the test setup, but that turned out to be more work than I expected because the shutdown isn’t instantaneous, and trying to start a new process with the same name causes an error.  Eventually I settled on just adding a special function to reset the state without having to restart the process.  I’m not happy about this because that function should never be used in production, but it’s the only reliable way I’ve found to deal with the problem.

Another interesting thing I’ve noticed about tests and state is that the mocking framework is oddly stateful, too.  I’m not yet sure why this happens, but I’ve found that when I have some module, like the library that talks over TCP, a failing test can cause other tests to fail because the mocking framework still has hold of the module.  Remove the failing test and the rest pass without issue; same thing if you fix the failing test.  But having a failing test that uses the mocking framework can cause other tests to fail, and it really clutters up the test results to have all the irrelevant failures.

Conclusion
I am reminded of a story wherein Jascha Heifetz’s young nephew was learning the violin and asked his uncle to demonstrate something for him.  Heifetz apparently took up his nephew’s “el cheapo student violin” and made it sound like the finest Stradivarius.  I’m honestly not sure if this is a true story or not, but I like the point that we shouldn’t let our tools dictate what we’re capable of.  We may enjoy playing a Stradivarius/using our favorite programming language more than an “el cheapo” violin/other programming language, but we shouldn’t let that stop us achieving our ends.  Elixir is probably never going to be my personal Strad, but I’d be happy enough to use it commercially once it matures some more.  Most of the issues I’ve described in these postings are likely solvable with more experience on my part, but the lack of maturity would hold me back from recommending it for commercial purposes at this point.  When I started using it last June, Dave Thomas’ Programming Elixir was available for version 1.2 and the code was written in version 1.3, although later versions were already available.  Since then, the official release version has gotten up to 1.6.4, with significant-seeming changes in each minor version.  Other things that make me worry about the maturity of Elixir include Ecto, the equivalent of an ORM for Elixir, which doesn’t list Oracle as an officially supported database, and the logging system, which only writes to the console out of the box.  (Although you can implement custom backends for it, I personally would prefer to spend my time solving business problems rather than writing code to make more useful logs, and I haven’t seen a third-party library yet that would do what I want for logging.)  For now, my general view is that Elixir should be kept for hobby projects and personal interest, but it could be a viable tool for commercial projects once the maturity issues are dealt with.

About the Author
Narti is a former CC Pace employee now working at ConnectDER.  He’s been interested in the design of programming languages since first reading The Dragon Book and the proceedings of HOPL I some thirty years ago.

[1] I’ve no reason to doubt this.  I just haven’t tested it myself.

Introduction
This is the second in a series of posts about our experience using Visual Studio Team Services (VSTS) to build a deployment pipeline for an ASP.NET Core Web Application. Future posts will cover release artifacts and deployment to Azure cloud services.

Prerequisites
It’s assumed that you have an ASP.NET Core project set up in VSTS and connected to a Git repository.
See the previous blog post for details.

Goal
The goal of this post is to set up your ASP.NET Core project to automatically build and run unit tests after every commit to the source code repository (i.e., continuous integration).

Here is a video summarizing the steps described below:


Adding a New Build Definition
Log into VSTS. For this demo, the VSTS account we will be using is https://ccpacetest.visualstudio.com, and the Microsoft user is CCPaceTest@outlook.com.

Select the project from our previous post.

In VSTS, go to Build under the Build and Release tab.


  1. Select “+ New Definition”.
  2. Select the ASP.NET Core template.
  3. Enter any name that helps you identify this build.
  4. Select an appropriate Agent queue. In this example, we will use Hosted VS2017. Use this agent if you’re using Visual Studio 2017 and you want the VSTS service to maintain your queue.
  5. To simplify the process, use the default values for the other fields.
  6. Go to “Triggers” and enable continuous integration. This will cause a build to automatically kick off after every code commit.
  7. Save the definition.


Adding a Test Project to the Solution

  1. Open the solution (from the previous post) in Visual Studio.
  2. Add a new Test Project.  
  3. Select .NET Core > Unit Test Project. We will name this project MyFirstApp.Tests. Note: the default build definition will look for test projects under folders whose names end with the word “Tests”. So, make sure a folder containing this word is created when you add your Unit Test Project.
  4. For a proof of concept, we are going to write a dummy Test Method in UnitTest1.cs (a minimal stand-in is sketched just after this list).
  5. Rebuild the project.
  6. Commit the changes locally and push them to the remote source repo.
  7. Back in VSTS, you can see that a build has been triggered.
  8. Click on the build number #2018XXXX.X to view the details of the build. Normally this will take a few minutes to complete.
  9. Ensure all of the steps passed. You can click on each step to view log details. 
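
The post’s original snippet isn’t reproduced here, but any trivial passing test will do for step 4. A minimal stand-in, assuming the default MSTest names generated by the .NET Core Unit Test Project template, could look like this:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace MyFirstApp.Tests
    {
        [TestClass]
        public class UnitTest1
        {
            // Dummy assertion: it always passes, which is enough to prove
            // that the CI build discovers and runs the test project.
            [TestMethod]
            public void TestMethod1()
            {
                Assert.IsTrue(true);
            }
        }
    }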

What’s Next?
We’ll demonstrate how to deploy builds to different environments, either via push-button deployment or automatically after each build (i.e., continuous deployment).

Stay tuned!

Introduction
This is the first in a series of posts about our experience using Visual Studio Team Services (VSTS) to build a deployment pipeline for an ASP.NET Core Web application. Future posts will cover automated builds, release artifacts and deployment to Azure cloud services.

The purpose of this post is to show how to create a new Visual Studio Team Services, or VSTS, project, set up a Git code repository and integrate everything through your Visual Studio IDE.

Prerequisites:

Here is a video summarizing the steps described below:


Creating a New Project

  1. If you haven’t already done so, create a VSTS account using your Microsoft Account. Your Microsoft Account is the “owner” of this VSTS account.
  2. Log into VSTS. By default, the URL should look like (VSTS account name).visualstudio.com. For this demo, the VSTS account we will be using is https://ccpacetest.visualstudio.com, and the Microsoft user is CCPaceTest@outlook.com.
  3. Create a new project. This Microsoft Account is the “owner” of this project.

  4. For this demo, we will select “Git” as our Version Control and “SCRUM” as the work item process.

Connecting VSTS Project to Visual Studio IDE

  1. There are many ways you can work on your project; supported IDEs are listed in the dropdown. For this demo, we will use Visual Studio 2017 Professional Edition.
  2. Click “Clone in Visual Studio”. This will prompt you to launch Visual Studio.
  3. If this is your first time running Visual Studio, you will be prompted to log in to your Visual Studio account. To simplify the process, log in with the Microsoft Account you used to create the demo project previously. This should give you admin access to the project.
  4. Go to View > Team Explorer > Manage Connections. Visual Studio will automatically show you a list of the hosted repositories for the account you used to log in. If you followed the previous steps, you should see the HelloWorld.Demo project. If you are not seeing a project you expect to have access to, verify the account you logged in with, or check with the project owner to make sure you have the right permissions.

  5. Connect to the project.

  6. If this is the first time you are accessing this project in your local environment, you will be prompted to clone the repository to your local Git repository.


Initial Check In

  1. Within the Team Explorer, click the Home button. Under “Solutions”, select “New…”. Using this method will ensure the solution is added to the right folder.
  2. For this demo, we will create a new project using the ASP.NET Core Web Application template. The solution name doesn’t have to be the same as the project name in VSTS, but to avoid confusion, it is recommended to use the same name.
  3. Once the solution is created, go back to Team Explorer and select “Changes”; you should be able to view all the items you have just added.
  4. Enter a comment and click “Commit All”. This will update your local repository.
  5. To “push” these changes to the remote repository, click “Sync”, and finally “Push”.
  6. You can verify this by logging into VSTS: go to “Code” and you should see all of the code you have just checked in.
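
For reference (Team Explorer handles all of this for you), steps 4 and 5 map to the usual Git commands if you ever prefer the command line:

    git add -A                          # stage the new solution files
    git commit -m "Initial check-in"    # commit to the local repository
    git push origin master              # push to the remote VSTS repository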

Collaborating with other Team Members

  1. You can add additional members to your project by going to Settings > Security.
  2. Please note that:

a. If your VSTS account is backed by Azure Active Directory, then you can only add email addresses that are internal to the tenant.

b. You must add email addresses for users who have “personal” Microsoft accounts, unless your VSTS account uses your organization’s directory to authenticate users and control account access through Azure Active Directory (Azure AD). If new users don’t have Microsoft accounts, have them sign up.

‘Agile’… ‘Lean’… ‘Fitnesse’… ‘Fit’… ‘(Win)Runner’… ‘Cucumber’… ‘Eggplant’… ‘Lime’… As 2018 draws near, one might hear a few of these words bantered around the water cooler as part of a trendy seasonal discussion topic: our personal New Year’s resolutions to get back into shape and eat healthy. While many of my well-intentioned colleagues are indeed well on their way to a healthier 2018, these words actually came up during a strategy session I attended recently – one which, despite the fact that many of these words are truly foods, did not cover new diet and exercise trends for 2018. Instead, this planning session agenda focused on another trendy discussion topic in our office as we close out 2017 and flip the calendar over to 2018: software test automation.

“SOFTWARE TEST AUTOMATION?!?” you ask?

“SERIOUSLY – cucumbers and limes and fitness(e)?!?”

This thought came to mind after the planning session and gave me a chuckle. I thought, “If a complete stranger walked by our meeting room and heard these words thrown around, what would they think we were talking about?”

This humorous thought resonated further when recently – and rather coincidentally – a client asked me for a high-level, summary explanation as to how I would implement automated testing on a software development project. It was a broad and rather open-ended question – not meant to be technical in nature or to solicit a solution. Rather, how would I, with a background in Agile Business Analysis and Testing (i.e. I am not a developer) go about kickstarting and implementing a test automation framework for a particular software development project?

This all got me thinking. I’ve never seen an official survey, but I assume many people employed in or with an interest in software development could provide a reasonable and well-informed response if asked to define or discuss software test automation, the many benefits of automated testing, and how the practice delivers requisite business value. I believe, however, that there is a substantial dividing line between understanding the general concepts of test automation and successfully implementing a high-quality and sustainable automated testing suite. In other words, those who are considered experts in this domain are truly experts – they possess a unique and sought-after skill set and are very good at what they do. There really isn’t any middle ground, in my opinion.

My reasoning here is that getting from ‘Point A’ (simply understanding the concepts) to ‘Point B’ (implementing and maintaining an effective and sustainable test automation platform) is often an arduous and laborious effort which, unfortunately, does not always result in success. At a fundamental level, the journey to a successful test automation practice involves the following:

  • Financial investment: As with any software development quality assurance initiative, test automation requires a significant financial investment (in both tools and personnel). The notion here, however – as with any other reasonable investment – is that an upfront financial investment should provide a solid return down the line if the venture is successful. This is not simply a two-point ‘spike’ user story assigned to someone to research the latest test automation tools. To use the poker metaphor – if you are ready to commit, then you should go all-in.
  • Time investment: How many software development project teams have you heard state that they have extra time on their hands? Surely not many, if any at all. Kicking off an automated testing initiative also requires a significant upfront time investment. Resources otherwise assigned to standard analysis, development or testing tasks will need to shift roles and contribute to the automated testing effort. Researching and learning the technical aspects of automated testing tools, along with the actual effort to design, build out and execute a suite of automated tests, requires an exceptional team effort. Reassigning team tasks will initially reduce a team’s velocity; similar to the financial investment concept, though, the hope is significant time savings and improved quality down the line in later sprints as larger deployments and major releases draw near.
  • Dedicated resources with unique, sought-after skill sets: In my experience, it is usually the highest-rated employees with the most institutional/system knowledge and experience who are called on to manage and drive automated testing efforts. These highly rated employees are also more than likely the most expensive, as the roles require a unique technical and analytical skill set, along with significant knowledge of the corresponding business processes. Because these organizational ‘all-stars’ will initially be focused solely on the test automation effort, other quality assurance tasks will inherently assume added risk. This risk needs to be mitigated in order to prevent a reduction in quality in other organizational efforts.

It turns out that the coincidental internal automated testing discussion and timely client question – along with the ongoing challenge in the QA domain associated with the aforementioned ‘Point A to Point B’ metaphor – led to a documented, bulleted-list response to the client’s question. Let’s call it an Agile test automation best-practices checklist. This list can be found below and provides several concepts and ideas an organization could utilize in order to incorporate test automation into its current software testing/QA practice. Since I was familiar with the client’s organization, personnel and product offerings, I could provide a bit more detail than necessary. The idea here is not the ‘what’, as you will not find any specific automation tools mentioned. Instead, this list covers the ‘how’: the process-oriented concepts of test automation along with the associated benefits of each concept.

This list should provide your team with a handy starting point, or a ‘bridge’ between Point A and Point B. If your team can identify with many of the concepts in this list and relate them to your current testing process and procedures, then further pursuing an automated testing initiative should be a reasonable option for your team or project.

More importantly, this list can be used as a tool and foundation for non-technical members of a software development team (e.g. BA, Tester, ScrumMaster, etc.) to start the conversation – essentially, to decide whether automated testing fits in with your established process and procedures and whether it will provide a return on investment, and to ensure that if you do embark down the test automation path, you continue to progress forward as applications, personnel and teams mature, grow and, inevitably, change. Understand these concepts and when to apply them, and you can learn more about cucumbers, limes and eggplants as you progress further down the test automation path:

To successfully implement and advance an effective and sustainable automated testing initiative, I make every effort to follow the strategy below, which combines proven Agile test automation best practices with personal, hands-on project and testing experience. As such, this is not an all-inclusive list – just one IT consultant’s answer to a client’s question:

For folks new to the world of test automation – and for those who had absolutely no idea that ‘Cucumber’ is not only a healthy vegetable but also the name of an automated testing tool – I hope this blog entry is a good start for your journey into the world of test automation. For the ‘experts’ out there, please respond and let me know if I missed any important steps or tasks, or how you might do things differently. After all, we’re all in this together, and the more knowledge is spread throughout the IT world, the more we can enhance our processes.

So, if you’ll excuse me now, I’m going to go ahead and plan out my 2018 New Year’s resolution exercise regimen and diet. Any additional thoughts on test automation will have to wait until next year.

Boy this summer flew by quickly! CC Pace’s summer intern, Niels, enjoyed his last day here in the CC Pace office on Friday, August 18th. Niels made the rounds, said his final farewells, and then he was off, all set to return to The University of Maryland, Baltimore County, for his last hurrah. Niels is entering his senior year at UMBC, and we here at CC Pace wish him all the best. We will miss him.

Niels left a solid impression in a short amount of time here at CC Pace. In a matter of 10 weeks, Niels interacted with and was able to enhance several internal processes for virtually all of CC Pace’s internal departments including Staffing, Recruiting, IT, Accounting and Financial Services (AFA), Sales and Marketing. On his last day, I walked Niels around the office and as he was thanked by many of the individuals he worked with, there were even a few hugs thrown around. Many folks also expressed wishes that Niels’ and our paths will hopefully soon cross again. In short, Niels made a very solid impression on a large group of my colleagues in a relatively short amount of time.

Back in June I gladly accepted the challenge of filling Niels’ ‘mentor’ role as he embarked on his internship. I’d like to think I did an admirable job, which I hope Niels will prove many times over in the years to come as he advances his way up the corporate ladder. As our summer internship program came to a close, I couldn’t help reminiscing back to my days as a corporate intern more than 20 years ago. Our situations were similar; I also interned during the spring/summer semesters of my junior year at Penn State University, with the assurance of knowing I had one more year of college remaining before I entered the ‘real world’. My internship was only a taste of the ‘corporate world’ and what was in store for me, and I still had one more year to learn and figure things out (and of course, one more year of fun in the Penn State football student section – priorities, priorities…)

Penn State’s Business School has a fantastic internship program, and I was very fortunate to obtain an internship at General Electric’s (GE) Corporate Telecommunications office in Princeton, NJ. My role as an intern at GE was providing support to the senior staff in the design and implementation of voice, data and video-conferencing services for GE businesses worldwide. Needless to say, this was both a challenging and rewarding experience for a 21-year-old college student, participating in the implementation of GE’s groundbreaking Global Telecommunications Network during the early years of the internet, among other things.

As I reminisced about my eight months at GE, I couldn’t help but notice the similarities between the ‘lessons learned’ I took away from my internship 20+ years ago and the recent observations and feedback I provided to Niels as his mentor. Of course, there are pronounced differences – after all, many things have changed in the last 20 years – and the technology we use every day is clearly the biggest distinction. I would be remiss not to also mention the obvious generation gap – I am a proud ‘Gen X’er’, raised on Atari and MTV, while Niels is a proud Millennial, raised on the Internet and smartphones. We actually had a lot of fun joking about the whole ‘generation gap thing’, and I’m sure we both learned a lot about each other’s demographic group. Niels wasn’t the only person who learned something new over the summer – I learned quite a bit myself.

In summary, my reminiscing back to the late 90’s – which certainly helped make my daily music choices easier for a few weeks this summer – led to the vision for this blog post. I thought it would be interesting to list a few notable experiences and lessons I learned as an intern at GE, 20-odd years ago, along with how those experiences compare or contrast with what I observed over the last 10 weeks working side-by-side with our intern, Niels. These observations are based on my role as his mentor and were provided as feedback to Niels in his summary review; they are in no particular order.

Have you similarly had the opportunity to engage in both roles of the intern/mentor relationship, as I have? Maybe your examples aren’t separated by 20 years, but by several? Perhaps you’ve only had the chance to fulfill one of these roles in your career and would love the opportunity to experience the other? In any case, see if you recognize some of the lessons you may have learned in the past and how they present themselves today. I think you’ll be amazed at how much ‘the more things change, the more they stay the same’ rings true.


PowerApps Basics
PowerApps is one of the most recent additions to the Microsoft Office suite of products. It has been marketed as “programming for non-programmers”, but make no mistake: the seamless interconnectivity PowerApps has with other software products allows it to be leveraged in highly complex enterprise applications. Basic PowerApps functionality is included in an Office 365 license, but for additional features and advanced data connections, a plan must be purchased. When I was brought on at CC Pace as an intern to assist with organizational change regarding SharePoint, I assumed that the old way of using SharePoint Designer and InfoPath would be the framework I would be working with. However, as I began to learn more about PowerApps and CC Pace’s specific organizational structure and needs, I realized that it was essential to work with a framework geared towards the future.

Solutions with PowerApps
As Big Data and data warehousing become common practice, data analytics and organized data representation become more and more valuable. On the small-to-medium organizational scale, bringing together scattered data stored across a wealth of different business practices and software products has been extremely difficult. This is one of the areas where PowerApps can create immense value for your organization. Rather than forcing an extreme and expensive organizational change where everyone submits expense reports, recruitment forms and software documentation to a brand-new custom database management system, PowerApps can be used to pull data from its varying locations and organize it.

PowerApps is an excellent solution for data entry applications, and this is the primary domain I’ve been working in. A properly designed PowerApp allows the end user to easily manipulate entries from all sorts of different database management systems. Create, Read, Update, Delete (CRUD) applications have been around and necessary for a long time, and PowerApps makes it easy to create these types of applications. Input validation and automated checks can even help to prevent mistakes and improve productivity. If your organization is constantly filling out purchase orders with incorrectly calculated sales tax, a non-existent department code, or any number of missing fields, PowerApps allows some of those mistakes to be caught extremely early.
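
As a rough sketch of what such a check can look like (the control names and flat 6% tax rate here are hypothetical), a formula like this on a submit button would flag a miscalculated sales tax before the record is ever saved:

    If(
        Value(TaxInput.Text) <> Round(Value(SubtotalInput.Text) * 0.06, 2),
        Notify("Sales tax doesn't match the expected 6% rate.", NotificationType.Error),
        SubmitForm(PurchaseOrderForm)
    )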

Integration with Flow (the upgraded version of SharePoint Designer workflows) allows for even greater flexibility when using PowerApps. An approval email can be created to prevent mistakes from being entered into a database management system; push notifications can be created when PowerApps actions are taken; the possibilities are (almost) endless.

Pros and Cons
There are both advantages and disadvantages to leveraging software that is still under active development in an enterprise solution. One of the disadvantages is that many of the features a user might expect to be included aren’t possible yet. While Flow integration with PowerApps is quite powerful, it is missing several key features, such as the ability to attach a document directly from PowerApps, or to write data over multiple records at a time (i.e. add multiple rows to a SQL database). Additionally, I would not assume that PowerApps is an extremely simple, programming-free solution for business apps. Knowledge of the different data types, as well as the use of functions, gives PowerApps a steep learning curve. While you may not be writing any plaintext code other than HTML, PowerApps still requires a good amount of knowledge of technology and programming concepts.

The main advantage to PowerApps being new software is just that: it’s brand-new software. You may have heard that PowerApps is currently on track to replace, at least partially, the now-shelved InfoPath project. InfoPath may continue to work until 2026, but without any new updates to the program, it may become obsolete on newer environments well before that. Here at CC Pace, we focus on innovation and investing in the solutions of tomorrow, and using PowerApps internally rather than building on a soon-to-be-unsupported InfoPath framework was the right choice.

 
Author Bio
As a programmer and cybersecurity enthusiast, I never knew I would be so interested in creating pieces of enterprise systems. I’m Niels Verhoeven, a summer IT Intern at CC Pace Systems. I study Information Systems with a focus on Cybersecurity Informatics at the University of Maryland, Baltimore County. My experiences at CC Pace and my programming background have given me quite a bit of insight into how users, systems and business can fit together, improving productivity and quality of work.

Is your business undergoing an Agile Transformation? Are you wondering how DevOps fits into that transformation and what a DevOps roadmap looks like?

Check out a webinar we offered recently, and send us any questions you might have!

Recently, I was part of a successful implementation of a project at a big financial institution. The project was the center of attention within the organization mainly because of its value addition to the line of business and their operations.

The project was essentially a migration project and the team partnered with the product vendor to implement it. At the very core of this project was a batch process that integrated with several other external systems. These multiple integration points with the external systems and the timely coordination with all the other implementation partners made this project even more challenging.

I joined the project as a Technical Consultant at a rather critical juncture, when there were only a few batch cycles we could run in the test regions before deploying into production. Having worked on Agile/Scrum/XP projects in the past, and with experience on DevOps projects, I identified a few areas where we could improve, either to enhance the existing development environment or to streamline the builds and releases. As with most projects, when the release deadline approaches, the team’s focus almost always revolves around ‘implementing functionality’ while everything else gets pushed to the back burner. This project was no different in that sense.

When the time finally came to deploy the application into production, the deployment was quite challenging in itself: a four-day continuous effort, with the team working multiple shifts to support it. At the end of it, needless to say, the whole team breathed a huge sigh of relief when we deployed the application rather uneventfully, even a few hours earlier than we had originally anticipated.

Once the application was deployed to production, ensuring the stability of the batch process became the team’s highest priority. It was during this time that I felt the resistance to any new change or enhancement. Even fixes to non-critical production issues were delayed for fear that they could jeopardize the stability of the batch.

The team dreaded deployments.

I felt it was time to build my case for having the team reassess the development, build and deployment processes in a way that would improve the confidence level in any new change being introduced. During one of my meetings with my client manager, I discussed a few areas where we could improve in this regard. My client manager was quickly on board with some of the ideas, and he suggested I summarize my observations and recommendations. Here are a few at a high level:

[Image: summary of the high-level observations and recommendations]

It’s common for these suggestions to fall through the cracks while building application functionality. In my experience, they don’t get as much attention because they are not considered ‘project work’. What project teams, and especially their stakeholders, fail to realize is the value in implementing some of the above suggestions. Project teams should not treat this as additional work but rather as part of the project, and include the tasks in their estimations for a better, cleaner end product.

“Once I started looking around behind the port frames, I figured I could just….”

And so began a summer of endless sailboat projects and no sailing.  One project led to the start of another without resolving the first.  What does this possibly have to do with software development and Agile techniques?

My old man and I own and are restoring an older sailboat.  He is also in the IT profession, and is steeped in classic waterfall development methodology.  After another frustrating day of talking past each other, he asked how I felt things could be handled differently in our boat projects.

“Stop starting and start finishing!”

It is the key mindset for Agile.  Take a small task that provides value, focus on it, and get it done.  It eliminates distraction and gives the user something usable quickly.

Applying this mindset outside of software may not be intuitive, but it can pay dividends quickly.  On the boat, we cleared space on the bulkhead, grabbed a stack of post-its and planned through the next project: rewiring the boat.  The discussion started with the goal of the project.  “We’re just going to tear everything out and rewire everything.” Talk about ignoring non-breaking changes!  I suggested that we focus on always having a working product – a sail-able boat – and break the project into smaller tasks that could be worked from start to finish in short, manageable pieces of time.

Approaching the project from that angle, we quickly developed a list of subtasks, prioritized them, and put them up on our makeshift Kanban board.  This planning was so intuitive and rewarding on its own that we did the same for the other projects we want to tackle before April.

So stop starting, start finishing, and start providing value more quickly for your stakeholders.

Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.

I occasionally see a particular kind of dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.

The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.

(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)

What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.

But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.

One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.

As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?

We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.

Recently, I attended a meetup for Loudoun’s Tech Startups in Ashburn, VA. It was a great opportunity to discuss ideas in various stages of development, as well as the resources available to bring these ideas to market. It was encouraging to see so many motivated entrepreneurs share their experiences in a local setting, with Loudoun showing its promise as a business incubator.

Michelle Chance from the Innovative Solutions Consortium gave a great speech about the services provided by her organization, such as “hard challenge” events, think-tank style meetings and student/recent graduate mentoring.

The hard challenge events seemed particularly interesting to me, since they provide a collaborative environment for solving difficult technology problems. Organizations compete for the best solution and receive awards for the most disruptive and innovative technologies.

Another sponsor of the event was the Mason Enterprise Center, which provides consultation and training to small business owners and entrepreneurs. They have regional offices in Fairfax, Fauquier, Loudoun and Prince William counties.

We attended this event because of our past experience developing collaborative solutions for startups. We’ve found that the Agile development philosophy fits nicely with the entrepreneurial spirit of organizations that want to quickly build a product that provides value. In fact, principles such as the minimum viable product and continuous deployment are core to the Lean Startup philosophy championed by Eric Ries. This method encourages startups to build a minimum set of high-value features that can be released quickly. With frequent releases, a company can immediately begin collecting and responding to customer feedback.

If you’re at any stage in the startup process and have a technical idea that you want to explore or expand, there are plenty of resources available in the NoVa area. Also, I encourage you to attend one of the various meetups for startups, such as the Loudoun Tech Startups group.

Senior IT managers starting a new project often have to answer the question: build or buy? Meaning, should we look for a packaged solution that does mostly what we need, or should we embark on a custom software development project?

Coders and application-level programmers also face a similar problem when building a software product. To get some part of the functionality completed, should we use that framework we read about, or should we roll our own code? If we write our own code, we know we can get everything we need and nothing we don’t – but it could take a lot of time that we may not have. So, how do we decide?

Your project may (and probably does) vary, but I typically base my decision by distinguishing between infrastructure and business logic.

I consider code to be infrastructure-related if it’s related to the technology required to implement the product. On the other hand, business logic is core to the business problem being solved. It is the reason the product is being built.

Think of it this way: a completely non-technical Product Owner wouldn’t care how you solve an infrastructure issue, but would deeply care about how you implement business logic. It’s the easiest way to distinguish between the two types of problems.

Examples of infrastructure issues: do I use a relational or non-relational database? How important are ACID transactions? Which database will I use? Which transactional framework will I use?

Examples of business logic problems: how do I handle an order file sent by an external vendor if there’s an XML syntax error? How important is it to find a partial match for a record if an exact match cannot be found? How do you define partial?

Note that a business logic question could be technical in nature (XML syntax error) but how you choose to solve it is critical to the Product Owner. And a seemingly infrastructure-related question might constitute business logic – for example, if you are a database company building a new product.

After this long preamble, finally my advice: Strongly favor using existing frameworks to solve infrastructure problems, but prefer rolling your own code for business logic problems.

My rationale is simple: you are (or should be) expert in solving the business logic problems, but probably not the infrastructure problems.

If you’re working on a system to match names against a data warehouse of records, your team knows or can figure out all the details of what that involves, because that’s what the system is fundamentally all about. Your Product Owner has a product idea that includes market differentiators and intellectual property, making it very unlikely that an existing matching framework will fulfill all requirements. (If an existing framework does meet all the requirements, why is the product being developed at all?)

Secondly, the last thing you want to do as a developer is use an existing business logic framework “to make things simple”, find that it doesn’t handle your Product Owner’s requirements, and then start pushing back on requirements because “our technology platform doesn’t allow X or Y”. For any software developer with professional pride: I’m sorry, but that’s just weak sauce. Again, the whole point of the project is to build a unique product. If you can’t deliver that to the Product Owner, you’re not holding up your end of the bargain.

On the other hand, you are very likely not experts on transactional frameworks, message buses, XML parsing technology, or elastic cloud clusters. Oracle, Microsoft, Amazon, etc., have large expert teams and have put their own intellectual property into their products, making it highly unlikely you’ll be able to build infrastructure that works as reliably and is as bug free.

Sometimes the choice is harder. Say you need to validate a custom file format. Should you use an existing framework to handle validations or roll your own code? It depends, and it may not even be possible to tell when the need arises. You may need to start with an existing framework and see how easy it is to extend and adapt. Later, if you find you’re spending more time extending and adapting than it would take to roll your own optimized code, you can change the implementation of your validation subsystem. Such big changes are much easier if you’ve consistently followed Agile engineering practices such as Test Driven Design.
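
To make that kind of swap cheap, it helps to put the decision behind a seam you own. Here’s a minimal C# sketch (the names are illustrative, not from any particular framework):

    using System;
    using System.Collections.Generic;

    // The rest of the system depends only on this seam.
    public interface IOrderFileValidator
    {
        IReadOnlyList<string> Validate(string fileContents);
    }

    // First pass: wrap an existing framework behind the seam.
    public class FrameworkBackedValidator : IOrderFileValidator
    {
        public IReadOnlyList<string> Validate(string fileContents)
        {
            // Delegate to the third-party validation framework here.
            return Array.Empty<string>();
        }
    }

    // If adapting the framework becomes the bottleneck, swap in
    // hand-rolled rules without touching any calling code.
    public class HandRolledValidator : IOrderFileValidator
    {
        public IReadOnlyList<string> Validate(string fileContents)
        {
            var errors = new List<string>();
            if (string.IsNullOrWhiteSpace(fileContents))
                errors.Add("File is empty.");
            return errors;
        }
    }

Callers depend only on the interface, so replacing the framework-backed implementation later is a contained change, exactly the kind of move a good test suite makes safe.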

As always, apply a fundamental Agile principle to any such decision: how can I spend my programming time generating the most business value?

As I write this blog entry, I’m hoping that the curiosity (or confusion) of the title captures an audience. Readers will ask themselves, “Who in the heck is Jose Oquendo? I’ve never seen his name among the likes of the Agile pioneers. Has he written a book on Agile or Scrum? Maybe I saw his name on one of the Agile blogs or discussion threads that I frequent?”

In fact, you won’t find Oquendo’s name in any of those places. In the spirit of baseball season (and warmer days ahead!), Jose Oquendo was actually a Major League Baseball player in the 1980’s, playing most of his career with the St. Louis Cardinals.

Perhaps curiosity has gotten the better of you yet again, and you look up Oquendo’s statistics. You’ll discover that Oquendo wasn’t a great hitter, statistically speaking. A .256 batting average and 14 home runs over a 12-year MLB career are hardly astonishing.

People who followed Major League Baseball in the 1980’s, however, would most likely recognize Oquendo’s name, and more specifically, the feat which made him unique as a player. Oquendo has done something that only a handful of players have ever done in the long history of Major League Baseball – he’s played EVERY POSITION on the baseball diamond (all nine positions in total).

Oquendo was an average defensive player, and his value obviously wasn’t driven by his aforementioned offensive statistics. He was, however, one of the most valuable players on those successful Cardinal teams of the 80’s, as the unique quality he brought to his team is captured by the baseball term “The Utility Player”. (Interestingly enough, Oquendo’s nickname during his career was “Secret Weapon”.)

Over the course of a 162-game baseball season, players get tired, injured and need days off. Trades are executed, changing the dynamic of a team with one phone call. Further complicating matters, baseball teams are limited to a set number of roster spots. Due to these realities and constraints of a grueling baseball season, every team needs a player like Oquendo who can step up and fill in when opportunities and challenges present themselves. And that is precisely why Oquendo was able to remain in the big leagues for an amazing 12 years, despite the glaring deficiency in his previously noted statistics.

Oquendo’s unique accomplishment leads us directly into the topic of the Agile Business Analyst (BA), as today’s Agile BA is your team’s “Utility Player”. Today’s Agile BA is your team’s Jose Oquendo.

A LITTLE HISTORY – THE “WATERFALL BUSINESS ANALYST”

Before we get into the opportunities afforded to BA’s in today’s Agile world, first, a little walk down memory lane. Historically (and generally) speaking – as these are only my personal observations and experiences – a Business Analyst on a Waterfall project wrote requirements. Maybe they also wrote test cases to be “handed off” and used later. In many cases, requirements were written and reviewed anywhere from six to nine months before documented functionality was even implemented. As we know, especially in today’s world, a lot can change in six months.

I can remember personally writing requirements for a project in this “Waterfall BA” role. After moving onto another project entirely, I was told several months down the road, “’Project ABC’ was implemented this past weekend – nice work.” Even then, it amazed me that many times I never even had the opportunity to see the results of my work. Usually, I was already working on an entirely new project, or more specifically, another completely new set of requirements.

From a communications perspective, BAs collaborated up front mostly with potential users or sellers of the software in order to define requirements. Collaboration with developers was less common and usually limited to a specific timeframe. I actually worked on a project where a Development Manager informed our team during a stressful phase of the project, “please do not disturb the developers over the next several weeks unless absolutely necessary.” (So much for collaboration…) In retrospect, it’s amazing to me that this directive seemed entirely normal at the time.

Communication with testers seemed even rarer – by the very definition of a Waterfall project, I’ve already passed my knowledge on to the testers, and it’s now their responsibility. I’m more or less out of the loop. By the time the specific requirements are being tested, I’m already off on an entirely new project.

In my personal opinion, the monotony of the BA role on a Waterfall project was sometimes unbearable. Month-long requirements cycles, workdays with little or no variation, and some days with little or no collaboration with other team members outside of standard team meetings became a day-to-day, week-to-week, month-to-month grind with no end in sight.

AND NOW INTRODUCING… THE “AGILE BUSINESS ANALYST”

Fast-forward several years (and several Agile project experiences), and I have found that the role of the Business Analyst has been significantly enhanced on teams practicing Agile methodologies, and more specifically Scrum. Simply as a result of team set-up, structure, responsibilities – and most importantly, opportunities – I feel that Agile teams have enhanced the role of the Business Analyst by providing opportunities that were never available on teams using the traditional Waterfall approach. There are new opportunities for me to bring value to my team and my project as a true “Utility Player” – my team’s Jose Oquendo.

The role of the Agile BA is really what one makes of it. I can remain content with the day to day “traditional” responsibilities and barriers associated with the BA role if I so choose; back to the baseball analogy – I can remain content playing one position. Or, I can pursue all of the opportunities provided to me in this newly-defined role, benefitting from new and exciting experiences as a result; I can play many different positions, each one further contributing to the short and long-term success of the team.

Today, as an Agile BA, I have opportunities – in the form of different roles and responsibilities – which not only enhance my role within the team but also allow me to add significant value to the project. These roles and responsibilities span not only functional areas of expertise (e.g. Project Management, Testing, etc.) but also the entire lifetime of a software development project (i.e. Project Kickoff to Implementation). In this sense, Agile BAs are not only more valuable to their respective teams, they are valuable for a longer period of time – basically, the entire lifespan of a project. I have seen specifically that Agile BAs can greatly enhance their impact on project teams and the quality of their projects in the following five areas:

  • Project Management
  • Product Management (aka the Product Backlog)
  • Testing
  • Documentation
  • Collaboration (with Project Stakeholders and Team Members)

We’ll elaborate in a follow-up blog entry on how, specifically, Agile BAs can enhance their role and add value to a project by directly contributing to the five functional areas listed above.

For the past 3 months we’ve had the pleasure of working with a charitable organization called the Ceca Foundation.

Ceca, which is derived from “celebrating caregivers”, was established in 2013 to celebrate caregiver excellence and “to promote high patient satisfaction by recognizing and rewarding outstanding caregivers”. They do this by providing employees of caregiving facilities with a platform for recognizing and nominating their peers for the Ceca Award – a cash reward given throughout the year. These facilities include rehabilitation centers, hospitals, assisted living centers and similar organizations.

CC Pace partnered with Ceca to build their next generation, customized nomination platform.

This was one of those projects that fills you with pride. First, for the obvious reason – Ceca’s worthwhile mission. Second, the not-so-obvious reason, which was the development process. It was a great example of why I enjoy helping customers build products.

The process

For various reasons, Ceca was under a tight deadline to get the new platform up-and-running for several facilities. The Agile process turned out to be a great fit, as it allowed for frequent customer feedback and weekly deployments to a testable environment. We developed the platform using high-level feature stories, rather than detailed specifications. This allowed the team to concentrate on the desired outcome, rather than getting caught up in the technical details. At times, we had to forgo a software-based solution in favor of a manual process. When you have limited resources and time, you have to make these types of decisions.

In February, after about 3 weeks, the Ceca Foundation launched the new web platform for one facility and then quickly brought on several more. There was immediate gratification for the team as we watched the nominations flood in.

The “feel good” story

What made this project successful and enjoyable at the same time? I’m reminded of the first value in the Agile manifesto – individuals and interactions over processes and tools. Some factors were technical but most were not:

  • a motivated and enthusiastic customer (Ceca)
  • a set of agreed upon features to provide the Minimum Viable Product
  • frequent collaboration with the customer
  • a cloud-hosted environment to provide infrastructure on-demand for testing and live versions
  • a software-as-a-service model that allowed us to quickly bring on new facilities

For me, it was Agile at its most fundamental: discuss the desired features; provide a cost estimate for those features; negotiate priority with the customer; provide frequent releases of working software.

Check out the Ceca Foundation for more information. You can see a demo of the software under “Technology”.