Considering SDET capacity – it’s rare to find pitchers that can also hit!
Several of our clients are currently working to improve development team efficiency by hiring Software Development Engineers in Test (SDETs) to drive a deeper level of testing effectiveness: SDETs think through the design of both the code and the related testing processes and frameworks from the very beginning of the development lifecycle. As the role name suggests, successful SDETs combine developer and test engineer acumen, which allows them to work hand in hand with the development team on an equal footing.
Let us draw a Major League Baseball analogy. In our experience, if you consider test engineers “those who can pitch” and developers “those who can hit,” then searching for candidates requires an emphasis on the hitting portion of the job. Each skill on its own is very hard to master.
The universe of candidates is certainly full of pitchers: QA engineers who have learned test frameworks and now present themselves as SDETs. But in many cases, these candidates lack the coding background to be successful in the role. They have been around hitting, but they haven’t had many at-bats.
Here are considerations for those looking to fill SDET positions, whether as a candidate or as one of our client technical leaders:
- For aspiring SDETs: if you are a QA engineer looking to make the jump to an SDET role, find a way in your current role to bolster your coding experience… you will increase your odds of success! You don’t necessarily have to be a developer, but you must be able to read code and prove it in a technical screen.
- For CTOs and tech leaders: we’d expect most of your developers are not interested in making a full transition to an SDET position. Consider requiring or advocating for testing framework skills from a portion of your development staff and then having them rotate through the SDET position so they remain primarily developers.
In the realm of Major League Baseball, Don Drysdale is known as the classic “pitcher who can hit.” He recorded twenty-nine home runs in 1,169 career at-bats. Shohei Ohtani represents the modern embodiment of a dual-skilled superstar, achieving thirty-nine home runs this season alongside an ERA of 3.43. Just as baseball enthusiasts look for players adept at both pitching and hitting, we, too, search for SDETs who embody a harmonious blend of coding and testing prowess!
Good luck finding (or becoming) the next Drysdale or Ohtani!
As for the CC Pace team, we will be on the lookout for these highly capable pitchers that can hit. While this is not an easy role to quickly fill, we have an extensive referral network that is growing every day. Give us a call if you’re ready to find the perfect fit for your team.
Today’s business leaders find themselves navigating a world in which artificial intelligence (AI) plays an increasingly pivotal role. Among the various types of AI, generative AI – the kind that can produce novel content – has been a game changer. One such example of generative AI is OpenAI’s ChatGPT. Though it’s a powerful tool with significant business applications, it’s also essential to understand its limitations and potential pitfalls.
1. What are Generative AI and ChatGPT?
Generative AI, a subset of AI, is designed to create new content. It can generate human-like text, compose music, create artwork, and even design software. This is achieved by training on vast amounts of data, learning patterns, structures, and features, and then producing novel outputs based on what it has learned.
In the realm of generative AI, ChatGPT stands out as a leading model. Developed by OpenAI, GPT, or Generative Pre-training Transformer, uses machine learning to produce human-like text. By training on extensive amounts of data from the internet, ChatGPT can generate intelligent and coherent responses to text prompts.
Whether it’s crafting detailed emails, writing engaging articles, or offering customer service solutions, ChatGPT’s potential applications are vast. However, the technology is not without its drawbacks, which we’ll delve into shortly.
2. Strategic Considerations for Business Leaders
Adopting a generative AI model like ChatGPT in your business can offer numerous benefits, but the key lies in understanding how best to leverage these tools. Here are some areas to consider:
- 2.1. Efficiency and Cost Savings
Generative AI models like ChatGPT can automate many routine tasks. For example, they can provide first-level customer support, draft emails, or generate content for blogs and social media. Automating these tasks can lead to considerable time savings, freeing your team to focus on more strategic, creative tasks. This not only enhances productivity but could also lead to significant cost savings.
- 2.2. Scalability
One of the biggest advantages of generative AI models is their scalability. They can handle numerous tasks simultaneously, without tiring or requiring breaks. For businesses looking to scale, generative AI can provide a solution that doesn’t involve a proportional increase in costs or resources. Moreover, the ability of ChatGPT to learn and improve over time makes it a sustainable solution for long-term growth.
- 2.3. Customization and Personalization
In today’s customer-centric market, personalization is key. Generative AI can create content tailored to individual user preferences, enhancing personalization in your services or products. Whether it’s customizing email responses or offering personalized product recommendations, ChatGPT can drive customer engagement and satisfaction to new heights.
- 2.4. Innovation
Generative AI is not just about automating tasks; it can also stimulate innovation. It can help in brainstorming sessions by generating fresh ideas and concepts, assist in product development by creating new design ideas, and support marketing strategies by providing novel content ideas. Leveraging the innovative potential of generative AI could be a game-changer in your business strategy.
3. The Pitfalls of Generative AI
While the benefits of generative AI are clear, it’s essential to be aware of its potential drawbacks and pitfalls:
- 3.1. Data Dependence and Quality
Generative AI models learn from the data they’re trained on. This means the quality of their output is directly dependent on the quality of their training data. If the input data is biased, inaccurate, or unrepresentative, the output will likely be flawed as well. This necessitates rigorous data selection and cleaning processes to ensure high-quality outputs.
Employing strategies like AI auditing and fairness metrics can help detect and mitigate data bias and improve the quality of AI outputs.
- 3.2. Hallucination
Generative AI models can sometimes produce outputs that appear sensible but are completely invented or unrelated to the input – a phenomenon known as “hallucination”. There are numerous examples in the press of false statements or claims made by these models, ranging from the funny (like claiming that someone ‘walked’ across the English Channel) to the somewhat frightening (claiming someone committed a crime when, in fact, they did not). This can be particularly problematic in contexts where accuracy is paramount. For example, if a generative model hallucinates while generating a financial report, it could lead to serious misinterpretations and errors. It’s crucial to have safeguards and checks in place to mitigate such risks.
Implementing robust quality checks and validation procedures can help. For instance, combining the capabilities of generative AI with verification systems, or cross-checking the AI outputs with trusted data sources, can significantly reduce the risk of hallucination.
- 3.3. Ethical Considerations
The ability of generative AI models to create human-like text can lead to ethical dilemmas. For instance, they could be used to generate deepfake content or misinformation. Businesses must ensure that their use of AI is responsible, transparent, and aligned with ethical guidelines and societal norms.
Regular ethics training for your team, and keeping lines of communication open for ethical concerns or dilemmas, can help instill a culture of responsible AI usage.
- 3.4. Regulatory Compliance
As AI becomes increasingly pervasive, regulatory bodies worldwide are developing frameworks to govern its use. Businesses must stay updated on these regulations to ensure compliance. This is especially important in sectors like healthcare and finance, where data privacy is paramount. Not adhering to these regulations can lead to hefty penalties and reputational damage.
Keep up-to-date with the latest changes in AI-related laws, especially in areas like data privacy and protection. Consider consulting with legal experts specializing in AI and data to ensure your practices align with regulatory requirements.
- 3.5. AI Transparency and Explainability
Generative AI models, including ChatGPT, often function as a ‘black box’, with their internal workings being complex and difficult to interpret.
Enhancing AI transparency and explainability is key to gaining trust and mitigating risks. This could involve using techniques that make AI decisions more understandable to humans or adopting models that provide an explanation for their outputs.
4. Navigating the Generative AI Landscape: A Step-by-Step Approach
As generative AI continues to evolve and redefine business operations, it is essential for business leaders to strategically navigate this landscape. Here’s an in-depth look at how you can approach this:
- 4.1. Encourage Continuous Learning
The first step in leveraging the power of AI in your business is building a culture of continuous learning. Encourage your team to deepen their understanding of AI, its applications, and its implications. You can do this by organizing workshops, sharing learning resources, or even bringing in an AI expert (like myself) to educate your team on the best ways to leverage the potential of AI. The more knowledgeable your team is about AI, the better equipped they will be to harness its potential.
- 4.2. Identify Opportunities for AI Integration
Next, identify the areas in your business where generative AI can be most beneficial. Start by looking at routine, repetitive tasks that could be automated, freeing up your team’s time for more strategic work. Also, consider where personalization could enhance the customer experience – from marketing and sales to customer service. Finally, think about how generative AI can support innovation, whether in product development, strategy formulation, or creative brainstorming.
- 4.3. Develop Ethical and Responsible Use Guidelines
As you integrate AI into your operations, it’s essential to create guidelines for its ethical and responsible use. These should cover areas such as data privacy, accuracy of information, and prevention of misuse. Having a clear AI ethics policy not only helps prevent potential pitfalls but also builds trust with your customers and stakeholders.
- 4.4. Stay Abreast of AI Developments
In the fast-paced world of AI, new developments, trends, and breakthroughs are constantly emerging. Make it a point to stay updated on these advancements. Subscribe to AI newsletters, follow relevant publications, and participate in AI-focused forums or conferences. This will help you keep your business at the cutting edge of AI technology.
- 4.5. Consult Experts
AI implementation is a significant step and involves complexities that require expert knowledge. Don’t hesitate to seek expert advice at different stages of your AI journey, from understanding the technology to integrating it into your operations. An AI consultant or specialist can help you avoid common pitfalls, maximize the benefits of AI, and ensure that your AI strategy aligns with your overall business goals.
- 4.6. Prepare for Change Management
Introducing AI into your operations can lead to significant changes in workflows and job roles. This calls for effective change management. Prepare your team for these changes through clear communication, training, and support. Help them understand how AI will impact their work and how they can upskill to stay relevant in an AI-driven workplace.
In conclusion, navigating the generative AI landscape requires a strategic, well-thought-out approach. By fostering a culture of learning, identifying the right opportunities, setting ethical guidelines, staying updated, consulting experts, and managing change effectively, you can harness the power of AI to drive your business forward.
5. Conclusion: The Promise and Prudence of Generative AI
Generative AI like ChatGPT carries immense potential to revolutionize business operations, from streamlining mundane tasks to sparking creative innovation. However, as with all powerful tools, its use requires a measured approach. Understanding its limitations, such as data dependency, hallucination, and ethical and regulatory challenges, is as important as recognizing its capabilities.
As a business leader, balancing the promise of generative AI with a sense of prudence will be key to leveraging its benefits effectively. In this exciting era of AI-driven transformation, it’s crucial to navigate the landscape with a keen sense of understanding, responsibility, and strategic foresight.
If you have questions or want to identify ways to enhance your organization’s AI capabilities, I’m happy to chat. Feel free to reach out to me at jfuqua@ccpace.com or connect with me on LinkedIn.
With the World Cup taking over the headlines, we couldn’t miss an opportunity to bring two of our favorite topics at CC Pace together: sports and Agile. As Team USA gears up to take on the Netherlands, here’s a little history on the unique style of soccer the Dutch created and what Agile teams can learn from their success.
In the 1970s, the Dutch dominated their international counterparts by using a style of soccer they called totaalvoetbal, or total football. Total football requires each player on the team to be comfortable and adept enough to switch positions with any other player on the field at any time. The Dutch required the goalkeeper to remain in a fixed position, but everyone else was fluid, able to become an attacker, defender, or midfielder as the play dictated. Whenever a player moved out of position, another player replaced them, and all the other players on the team shifted their positions to maintain the team’s structure. In modern soccer, we call this collective team behavior compensatory movement: all teammates compensate and adjust to each other’s actions.
This philosophy helped create teams without points of weakness that their opponents could exploit.
Totaalvoetbal only worked because players trained to develop the skills needed to play all positions. Each player was a specialist in a certain position or role, such as striker or center defense, but was also quite competent playing other roles on the team.
In the Agile world, this can be applied to the makeup of scrum teams. Scrum teams that are self-sufficient because of their fluidity are always the most productive and dependable. If scrum teams are comprised of team members with “T-shaped” skills, then there will always be team members that can fill in for others when needed.
People with T-shaped skills have a deep level of skill and expertise in one area and a lower level of expertise across many other areas. When scrum teams are comprised of team members with T-shaped skills, it helps to ensure that all work can be completed within the team. It also means that productivity is less likely to drop when a team member is out of the office because others can roll up their sleeves and help get the job done.
Cross-training and pair programming are great ways to help develop team members with T-shaped skills. Pair programming is an Agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.
T-shaped skills are not developed by accident but rather intentionally. Careful planning and a thoughtful, proactive approach by the individual and their manager are crucial. A manager must understand the value of investing in the development of people. Cross-training, stretch assignments, training opportunities, shadowing, and pair programming are all excellent methods for developing additional skills that allow for compensatory movement and fluid teams, yet, in some ways, represent short-term reductions in individual productivity. Managers must make this short-term investment to see the long-term value and score more goals.
When I was first introduced to Agile development, it felt like a natural flow for developers and business stakeholders to collaborate and deliver functionality in short iterations. It was rewarding (and sometimes disappointing) to demo features every two weeks and get direct feedback from users. Continuous Integration tools matured to make the delivery process more automated and consistent. However, the operations team was left out of this process. Environment provisioning, maintenance, exception handling, performance monitoring, security – all these aspects were typically deprioritized in favor of keeping the feature release cadence. The DevOps/DevSecOps movement emerged as a cultural and technical answer to this dilemma, advocating for a much closer relationship between development and operations teams.
Today, companies are rapidly expanding their cloud infrastructure footprint. What I’ve heard from discussions with customers is that the business value driven by the cloud is simply too great to ignore. However, much like the relationship between development and ops teams in the early Agile days, a gap is forming between Finance and DevOps teams. Traditional infrastructure budgeting and planning don’t work when you’re moving from a CapEx to an OpEx cost structure. Engineering teams can provision virtually unlimited cloud resources to build solutions, but cost accountability is largely ignored. Call it the pandemic cloud spend hangover.
Our customers see the flexibility of the cloud as an innovation driver rather than simply an expense. But they still need to understand the true value of their cloud spend – which products or systems are operating efficiently? Which ones are wasting resources?
I decided to look into FinOps practices to discover techniques for optimizing cloud spend. I researched the FinOps Foundation and read the book, Cloud FinOps. Much like the DevOps movement, FinOps seeks to bring cross-functional teams together before cloud spend gets out of hand. It encompasses both cultural and technical approaches.
Here are some questions that I had before and the answers that I discovered from my research:
Where do companies start with FinOps without getting overwhelmed by yet another oversight process?
Start by understanding where your costs are allocated. Understand how the cloud provider’s billing details are laid out and seek to apply the correct costs to a business unit or project team. Resource tagging is an essential first step to allocating costs. The FinOps team should work together to come up with standard tagging guidelines.
Don’t assume the primary goal is cost savings. Instead, approach FinOps as a way to optimize cloud usage to meet your business objectives. Encourage reps from engineering and finance to work together to define objectives and key results (OKRs). These objectives may be different for each team/project and should be considered when making cloud optimization recommendations. For example, if one team’s objective is time-to-market, then costs may spike as they strive to beat the competition.
What are some common tagging/allocation strategies?
Cloud vendors provide granular cost data down to the millisecond of usage. For example, AWS Lambda recently went from rounding to the nearest 100ms of duration to the nearest millisecond. However, it’s difficult to determine what teams/projects/initiatives are using which resources and for how long. For this reason, tagging and cost allocation are essential to FinOps.
According to the book, there are generally two approaches for cost allocation:
- Tagging – these are resource-level labels that provide the most granularity.
- Hierarchy-based – these are at the cloud account or subscription-level. For example, using separate AWS accounts for prod/dev/test environments or different business units.
Their recommendation is to start with hierarchy-based allocations to ensure the highest level of coverage. Tagging is often overlooked or forgotten by engineering teams, leading to unallocated resources. This doesn’t suggest skipping tags, but make sure you have a consistent strategy for tagging resources to set team expectations.
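To make that concrete, here is a minimal sketch (my own illustration, not from the book) of combining both approaches with the AWS CDK in TypeScript; the account ID, stack name, and tag keys are hypothetical examples of a tagging guideline:

```typescript
import { App, Stack, Tags } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';

// Hierarchy-based allocation: a dedicated stack (or account) per environment/business unit.
const app = new App();
const prodStack = new Stack(app, 'PaymentsProd', {
  env: { account: '111111111111', region: 'us-east-1' }, // hypothetical prod account
});

// Resource-level tagging: applied once at the stack level so cost reports
// can group spend by cost center, project, and environment.
Tags.of(prodStack).add('cost-center', 'payments');
Tags.of(prodStack).add('project', 'checkout-redesign');
Tags.of(prodStack).add('environment', 'prod');

// Every resource created in the stack inherits the tags above.
new Bucket(prodStack, 'ReceiptsBucket');
```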
How do you adopt a FinOps approach without disrupting the development team and slowing down their progress?
The nature of usage-based cloud resources puts spending responsibility on the engineering team since inefficient use can affect the bottom line. This is yet another responsibility that “shifts left”, or earlier in the development process. In addition to shifting left on security/testing/deployment/etc., engineering is now expected to monitor their cloud usage. How can FinOps alleviate some of this pressure so developers can focus on innovation?
Again, collaboration is key. Demands to reduce cloud spend cannot be a one-way conversation. A key theme in the book is to centralize rate reduction and decentralize usage reduction (cost avoidance).
- Engineering teams understand their resource needs so they’re responsible for finding and reducing wasted/unused resources (i.e., decentralized).
- Rate reduction techniques like using reserved instances and committed use discounts are best handled by a centralized FinOps team. This team takes a comprehensive view of cloud spend across the organization and can identify common resources where reservations make sense.
Usage reduction opportunities, such as right sizing or shutting down unused resources, should be identified by the FinOps team and provided to the engineering teams. These suggestions become technical debt and are prioritized along with other work in the backlog. Quantifying the potential savings of a suggestion allows the team to determine if it’s worth spending the engineering hours on the change.

Related reading on engaging engineers in cost optimization: https://www.finops.org/projects/encouraging-engineers-to-take-action/
How do you account for cloud resources that are shared among many different teams?
Allocating cloud spend to specific teams or projects based on tagging ensures that costs are distributed fairly and accurately. But what about shared costs like support charges? The book provides three examples for splitting these costs:
- Proportional – Distribute proportionally based on each team’s actual cloud spend. The more you spend, the higher your allocation of support and other shared costs. This is the recommended approach for most organizations (see the sketch after this list).
- Evenly – split evenly among teams.
- Fixed – Pre-determined fixed percentage for each team.
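Here is a small sketch of the proportional approach in TypeScript (my own illustration, not from the book); the team names and dollar amounts are hypothetical:

```typescript
// Split a shared cost (e.g., enterprise support) across teams in proportion
// to each team's share of total tagged cloud spend.
function splitProportionally(
  sharedCost: number,
  teamSpend: Record<string, number>
): Record<string, number> {
  const total = Object.values(teamSpend).reduce((sum, spend) => sum + spend, 0);
  const allocation: Record<string, number> = {};
  for (const [team, spend] of Object.entries(teamSpend)) {
    allocation[team] = sharedCost * (spend / total);
  }
  return allocation;
}

// Example: $10,000 of support charges split across three teams.
console.log(splitProportionally(10_000, { payments: 60_000, search: 30_000, internal: 10_000 }));
// => { payments: 6000, search: 3000, internal: 1000 }
```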
Overall, I thought the authors did a great job of introducing Cloud FinOps without overwhelming the reader with another rigid set of practices. They encourage the Crawl/Walk/Run approach to get teams started on understanding their cloud spend and where they can make incremental improvements. I had some initial concerns about FinOps bogging down the productivity and innovation coming from engineering teams. But the advice from practitioners is to provide data to inform engineering about upward trends and cost anomalies. Teams can then make decisions on where to reduce usage or apply for discounts.
The cloud providers are constantly changing, introducing new services and cost models. FinOps practices must also evolve. I recommend checking out the Cloud FinOps book and the related FinOps Foundation website for up-to-date practices.
We’ve been using the AWS Amplify toolkit to quickly build out a serverless infrastructure for one of our web apps. The services we use are IAM, Cognito, API Gateway, Lambda, and DynamoDB. We’ve found that the Amplify CLI and platform is a nice way to get us up and running. We then update the resulting CloudFormation templates as necessary for our specific needs. You can see our series of videos about our experience here.
The Problem
However, starting with Amplify CLI version 7, AWS changed how you override Amplify-generated resource configurations in the form of CFT files. We found this out the hard way when we tried to update the generated CFT files directly. After upgrading the CLI and then calling amplify push, our changes were overwritten with default values – NOT GOOD! Specifically, we wanted to add a custom attribute to our Cognito user pool.
After a few frustrating hours of troubleshooting and support from AWS, we realized that the Amplify CLI tooling changed how to override Amplify-generated content. AWS announced the changes here, but unfortunately, we didn’t see the announcement or accompanying blog post.
The Solution
Amplify now generates an “override.ts” TypeScript file for you to provide your own customizations using Cloud Development Kit (CDK) constructs.
In our case, we wanted to create a Cognito custom attribute. Instead of changing the CFT directly (under the new “build” folder in Amplify), we generated an “override.ts” file using the command “amplify override auth”. We then added our custom attribute using the CDK:
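Here is a representative sketch of that override, based on Amplify’s documented extensibility helper; the attribute name used below (“tenantId”) is a hypothetical placeholder rather than the exact attribute from our app:

```typescript
import { AmplifyAuthCognitoStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyAuthCognitoStackTemplate) {
  // Carry over the attributes Amplify already defined (e.g., email),
  // then append the custom attribute. Cognito will expose it as "custom:tenantId".
  resources.userPool.schema = [
    ...(resources.userPool.schema as any[]),
    {
      attributeDataType: 'String',
      developerOnlyAttribute: false,
      mutable: true,
      name: 'tenantId', // hypothetical attribute name
      required: false,
    },
  ];
}
```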
Important Note: The amplify folder structure changed starting with CLI version 7. To avoid deployment issues, be sure to keep your CLI version consistent between your local environment and the build settings in the AWS console. In the Amplify Build Settings window in the console, you can confirm which version is used (note that we’re using the “latest” version).
If you’re upgrading your CLI, especially to version 7, make sure to test deployments in a non-production environment, first.
What are some other uses for this updated override technique? The Amplify blog post and documentation mention examples like Cognito overrides for password policies and IAM roles for auth/unauth users. They also mention S3 overrides for bucket configurations like versioning.
For DynamoDB, we’ve found that Amplify defaults to a provisioned capacity model. There are benefits to this, but this model charges an hourly rate for consumption whether you use it or not. This is not always ideal when you’re building a greenfield app or a proof-of-concept. We used the amplify override tools to set our billing mode to On-demand or “Pay per request”. Again, this may not be ideal for your use case, but here’s the override.ts file we used:
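The exact file isn’t reproduced here, so the following is a representative sketch, assuming the table was created with “amplify add storage” and overridden via “amplify override storage” (tables generated by a GraphQL API use a different override type):

```typescript
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Switch the table from provisioned capacity to on-demand ("Pay per request").
  resources.dynamoDBTable.billingMode = 'PAY_PER_REQUEST';

  // Provisioned throughput settings don't apply in on-demand mode, so clear
  // the defaults Amplify generated (cast in case the type marks the field required).
  (resources.dynamoDBTable as any).provisionedThroughput = undefined;
}
```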
Conclusion
At first, I found this new override process frustrating since it discourages direct updates to the generated CFT files. But I suppose this is a better way to abstract out your own customizations and track them separately. It’s also a good introduction to the AWS CDK, a powerful way to program your environment beyond declarative YAML files like CloudFormation templates.
Further reading and references:
DynamoDB On-Demand: When, why and how to use it in your serverless applications
Authentication – Override Amplify-generated Cognito resources – AWS Amplify Docs
Override Amplify-generated backend resources using CDK | Front-End Web & Mobile (amazon.com)
Top reasons why we use AWS CDK over CloudFormation – DEV Community
The video below is Part 2 of our 3-part series: Building and Securing Serverless Apps using AWS Amplify. In case you missed Part 1 – take a look at it here. Be sure to stay tuned for Part 3!
AWS Amplify is a set of tools that promises to make full-stack, cloud-native development quicker and easier. We’ve used it to build and deploy different products without getting bogged down by heavy infrastructure configuration. On one hand, Amplify gives you a rapid head start with services like Lambda functions, APIs, CI/CD pipelines, and CloudFormation/IaC templates. On the other hand, you don’t always know what it’s generating and how it’s securing your resources.
If you’re curious about rapid development tools that can get you started on the road to serverless but want to understand what’s being created, check out our series of videos.
We’ll take a front-end web app and incrementally build out authentication, API/function, and storage layers. Along the way, we’ll point out any gotchas or lessons learned from our experience.
Recently, I read an article titled, “Why Distributed Software Development Teams Work Infinitely Better”, by Boris Kontsevoi.
It’s a bit hyperbolic to say that distributed teams work infinitely better, but it’s something that any software development team should consider now that we’ve all been distributed for at least a year.
I’ve worked on Agile teams for 10-15 years and thought that they implicitly required co-located teams. I also experienced the benefits of working side-by-side with (or at least close to) other team members as we hashed out problems on whiteboards and had ad hoc architecture arguments.
But as Mr. Kontsevoi points out, Agile encourages face-to-face conversation, but not necessarily in the same physical space. The Principles behind the Agile Manifesto were written over 20 years ago, but they’re still very much relevant because they don’t prescribe exactly “how” to follow the principles. We can still have face-to-face conversations, but now they’re over video calls.
This brings me to a key point of the article: “dispersed teams outperform co-located teams and collaboration is key”. The Manifesto states that building projects around motivated individuals is a key Agile principle.
Translation: collaboration and motivated individuals are essential for a distributed team to be successful.
- You cannot be passive on a team that requires everyone to surface questions and concerns early so that you can plan appropriately.
- You cannot fade into the background on a distributed team, hoping that minimal effort is good enough.
- If you’re leading a distributed team, you must encourage active participation by having regular, collaborative team meetings. If there are team members that find it difficult to speak above the “din” of group meetings, seek them out for 1:1 meetings (also encouraged by Mr. Kontsevoi).
Luckily, today’s tools are vastly improved for distributed teams. They allow people to post questions on channels where relevant team members can respond, sparking ad hoc problem-solving sessions that can eventually lead to a video call.
Motivated individuals will always find a way to make a project succeed, whether they’re distributed, co-located, or somewhere in between. The days of tossing software development teams into a physical room to “work it out” are likely over. The new distributed paradigm is exciting and, yes, better – but the old principles still apply.
As 2020 has unfolded, our development team has been working on a brand new app: Pass2Play! Check out the video below to see all of its features and capabilities!
To learn more about Pass2Play, click here!
I have a deep interest in cybersecurity, and to keep up with the latest threats, policies, and security practices, I became a member of the ACT-IAC organization and enrolled in its Cybersecurity Community of Interest group. That is where I got the opportunity to work as a volunteer on the Zero Trust Architecture (ZTA) Phase 2 project. Here, I’d like to share the knowledge I gained about ZTA strategy and principles. I am planning to break my blog into a four-part series that follows the project’s progress:
- What is ZTA?
- Real world deployment scenarios
- ZTA core capabilities
- Vendors providing ZTA capabilities
What is ZTA and how did it come into existence?
Traditionally, perimeter-based security has been used to protect network infrastructure behind a firewall: once a user is authenticated, they can access all the resources behind the firewall, because all network users and devices are assumed to be trustworthy. This caused many security breaches across the globe, where attackers could move laterally and exploit resources they were not authorized to use. Attackers only had to get through the firewall and could then crawl across any resource available on the network, causing damage in the form of data loss and other financial consequences, such as those that come with ransomware attacks.
Today, an enterprise’s infrastructure spans several networks: cloud-based services, remote users connecting from their own networks on enterprise-owned or personal devices (laptops, mobile devices), and network locations that change based on where users and devices connect from (public Wi-Fi, internal enterprise networks, etc.). All these complex use cases drove the move away from perimeter-based security toward “perimeter-less” security (not confined to one network infrastructure), which led to the evolution of a new concept called “Zero Trust”, where you “trust no one, but verify”. The ZT approach is primarily based on data protection, but it can be applied across other enterprise assets like users, devices, applications, and infrastructure.
ZTA is basically an enterprise cybersecurity strategy that prevents data breaches and limits lateral movement within the network infrastructure. It assumes that any internal or external agent (user, device, application, infrastructure) that wants to access an enterprise resource (on the internal network or externally in the cloud) is not trustworthy and must be verified for each request before access is granted.
What does Zero Trust mean in a ZTA?

(Image courtesy: NIST SP 800-207 publication)
In the above diagram, the user trying to access a resource must go through the PDP/PEP (Policy Decision Point/Policy Enforcement Point). The PDP/PEP decides whether to grant the request based on enterprise policies (data/access/risk), user identity, device profile, location of the user, time of request, and any other attributes needed to gain enough confidence. Once access is granted, the user is in an “Implicit Trust Zone”, where they can access resources as the network infrastructure design allows. The “Implicit Trust Zone” is basically the boarding area in an airport, where all passengers are considered trustworthy once they have verified themselves through the immigration/security check.
You can still limit access to certain resources in the network using a concept called “Micro-Segmentation”. For example, after getting through the security check and reaching the boarding area, passengers are checked again at the boarding gate to make sure they are entering the flight they are authorized to board. This is what “Micro-Segmentation” means: resources are isolated into segments, and access requests are verified separately, in addition to the PDP/PEP.
Tenets of ZTA (as per the NIST SP 800-207 publication)
All resources, whether data or services, should communicate securely, irrespective of their network location. Each individual access request is verified before access to any resource is granted, based on the client’s identity, the device used to make the request, the type of application, location coordinates, and other behavioral attributes. Each access request that is granted is authenticated and authorized dynamically and strictly enforced. In addition, the enterprise should collect all activity information, log decisions, audit logs, and monitor the network infrastructure to improve its overall security posture.
What are the logical components of ZTA?

(Image courtesy: NIST SP 800-207 publication)
Policy Engine: Responsible for making and logging access decisions, based on enterprise policy and inputs from external sources (CDM, threat intelligence, etc.), to grant or deny a request.
Policy Administrator: Responsible for establishing or tearing down the communication path between the subject and the enterprise resource, based on the decision made by the PE. It can generate authentication tokens for the client to access the resource. The PA communicates with the PEP via the control plane.
Policy Enforcement Point: Responsible for enabling, monitoring, and terminating the connection between the subject and the enterprise resource. It can be used as a single logical component or broken into two components: a client-side agent and a resource-side gateway that controls access. Beyond the PEP is the “Implicit Trust Zone” for accessing enterprise resources.
Control Plane/Data Plane: The control plane is made up of components that receive and process requests from data plane components that wish to access network resources. The control and data planes are more like zones in the ZTA. Resources, devices, and users within the network can each have their own control plane component to decide whether data should be routed further or not. In this diagram, the split is simply used to explain how the control plane works for data plane components: the data plane passes packets around, and the control plane routes them appropriately based on the decisions made.
Note: The dotted line that you see in the image above is the hidden network that is used for communication between the various logical components.
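As a thought experiment (my own sketch, not part of the NIST publication), the decision logic a policy engine applies can be pictured as a confidence score built from several signals; the attributes, weights, and thresholds below are purely illustrative:

```typescript
// Toy policy-engine decision: score an access request from several signals and
// grant access only when confidence clears a threshold set by enterprise policy.
interface AccessRequest {
  userAuthenticated: boolean;   // identity verified (e.g., MFA passed)
  deviceCompliant: boolean;     // device posture meets policy
  fromTrustedLocation: boolean; // e.g., known network or geolocation
  resourceSensitivity: 'low' | 'high';
}

function policyEngineDecision(req: AccessRequest): 'grant' | 'deny' {
  let confidence = 0;
  if (req.userAuthenticated) confidence += 0.5;
  if (req.deviceCompliant) confidence += 0.3;
  if (req.fromTrustedLocation) confidence += 0.2;

  // Higher-sensitivity resources require more confidence before the
  // policy administrator is told to open a communication path.
  const threshold = req.resourceSensitivity === 'high' ? 0.9 : 0.6;
  return confidence >= threshold ? 'grant' : 'deny';
}

// Example: authenticated user on a compliant device, unknown location, high-sensitivity resource.
console.log(policyEngineDecision({
  userAuthenticated: true,
  deviceCompliant: true,
  fromTrustedLocation: false,
  resourceSensitivity: 'high',
})); // => 'deny'
```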
Why should organizations adopt ZTA?
When adopting a ZTA, organizations must weigh all the potential benefits, risks, costs, and ROI. Core ZT outcomes should be focused on creating secure networks, securing data that travels within the network or at rest, reducing impacts during breaches, improving compliance and visibility, reducing cybersecurity costs and improving the overall security posture of an organization.
Lost or stolen data, ransomware attacks, and network and application layer breaches cost organizations huge financial losses and damage to their market reputation. It takes a lot of time and money for an organization to return to normal after a severe breach. ZT adoption can help organizations avoid such breaches, which is key to surviving in today’s world, where state-funded hackers are always ahead of the game.
As with all technology changes, the biggest challenge to demonstrate higher ROI and lower cybersecurity costs is the time needed to deliver the desired results. Organizations should consider the following:
- Assess what components of ZTA pillars they currently have in their infrastructure. Integration of components with existing tools can reduce the overall investment needed to adopt ZTA.
- Consider including costs or impacts associated with risk levels and occurrences when doing ROI calculations.
- ZT adoption should simplify, and not complicate, the overall security strategy to reduce costs.
What are the threats to ZTA?
ZTA can reduce the overall risk exposure in an enterprise but there are some threats that can still occur in a ZTA environment.
- A wrongly or mistakenly configured PE or PA could cause disruptions for users trying to access resources. Access requests that previously would have been denied could now get through due to misconfiguration of the PE or PA by the security administrator, letting attackers or subjects access resources from which they were previously restricted.
- Denial-of-service attacks on the PA/PEP can disrupt enterprise operations. All access decisions are made by the PA and enforced by the PEP to establish a connection for a device trying to access a resource. If a DoS attack hits the PA, no subject can get access, as the service is unavailable under the flood of requests.
- Attackers could compromise an active user account using social engineering, phishing, or other techniques to impersonate the subject and access resources. Adaptive MFA may reduce the possibility of such attacks, but in traditional enterprises, with or without ZTA adoption, an attacker might still be able to access the resources to which the compromised account has access. Micro-segmentation may protect resources against these attacks by isolating or segmenting them using technologies like NGFW and SDP.
- Enterprise network traffic is inspected and analyzed by policy administrators via PEPs, but non-enterprise-owned assets can’t be monitored passively. Since their traffic is encrypted and deep packet inspection is difficult, an attack could come onto the network from non-enterprise-owned devices. ML/AI tools and techniques can help analyze this traffic to find anomalies and remediate them quickly.
- Vendors or ZT solution providers could cause interoperability issues if they don’t follow common standards or protocols when interacting. If one provider has a security issue or disruption, it could disrupt enterprise operations due to service unavailability or the time taken to switch to another provider, which can be very costly. Such disruptions can affect core business functions of an enterprise operating in a ZTA environment.
References
[ACT-IAC] American Council for Technology and Industry Advisory Council (2019) Zero Trust Cybersecurity Current Trends. Available at https://www.actiac.org/zero-trust-cybersecurity-current-trends
[NIST] Draft (2nd) NIST Special Publication 800-207, Zero Trust Architecture. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207-draft2.pdf
NIST Zero Trust Architecture Release: https://www.nccoe.nist.gov/projects/building-blocks/zero-trust-architecture
What is App Modernization?
Legacy application modernization is the process of updating existing, aging applications to a modern architecture to enhance features and capabilities. By migrating your legacy applications, you can include the latest functionalities that better align with what your business needs to succeed. Keeping legacy applications running smoothly while still meeting current-day needs can be a time-consuming and resource-intensive affair. That is doubly the case when software becomes so outdated that it may not even be compatible with modern systems.
A Quick Look at a Sample Legacy Monolithic Application
For this article, consider a legacy monolithic application that is roughly a decade and a half old, as depicted in the following diagram.
This depicts a traditional, n-tier architecture that was very common over the past 20 years or so. There are several shortcomings to this architecture, including the “big bang” deployment that had to be tightly managed when rolling out a release. Most of the resources on the team would sit idle while requirements and design were ironed out. Multiple source control branches had to be managed across the entire system, adding complexity and risk to the merge process. Finally, scalability applied to the entire system, rather than to smaller subsystems, increasing costs for hardware resources.
Why Modernize?
We define modernization as migrating from a monolithic system to many decoupled subsystems, or microservices.
The advantages are:
- Reduce cost
  - Costs can be reduced by redirecting computing power only to the subsystems that need it. This allows for more granular scalability.
- Avoid vendor lock-in
  - Each subsystem can be built with the technology for which it is best suited.
- Reduce operational overhead
  - Monolithic systems that are written in legacy technologies tend to stay that way, due to the increased cost of change. This requires resources with a specific skillset.
- De-coupling
  - Strong coupling makes it difficult to optimize the infrastructure budget.
  - De-coupling the subsystems makes it easier to upgrade components individually.
Finally, a modern, microservices architecture is better suited for Agile development methodologies. Since work effort is broken up into iterative chunks, each microservice can be upgraded, tested and deployed with significantly less risk to the rest of the system.
Legacy App Modernization Strategies
Legacy application modernization strategies can include re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems. Applications dating back decades may not be optimized for mobile experiences on smartphones or tablets, which could require an entire re-platforming. A lift-and-shift will not add any business value if you migrate legacy applications just for the sake of modernization. Instead, it’s about taking the bones, or DNA, of the original software and modernizing it to better represent current business needs.
Legacy Monolithic App Modernization Approaches
Having examined the challenges of continuing to maintain legacy monolithic applications, this article presents two application modernization strategies. Both are explained below in enough detail to give you a basic idea of which is feasible given the constraints you might have.
- Migrating to Microservices Architecture
- Migrating to Microservices Architecture with Realtime Data Movement (Aggregation/Deduping) to Data Lake
Microservices Architecture
In this section, we will look at how re-architecting, re-factoring, and re-coding per the microservices paradigm help avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you better understand microservices architecture – a leap forward from legacy monolithic architecture.
At a quick glance at the above diagram, you can see a big central piece called the API Gateway with a Discovery Client. This is comparable to a Façade in a monolithic application. The API Gateway is essentially the entry point for accessing the several microservices, which are comparable to modules in a monolithic application and are identified/discovered with the help of the Discovery Client. In this design, the API Gateway also acts as an API Orchestrator, since it resorts to a single database set via the Database Microservice shown in the diagram. In other words, the API Gateway/Orchestrator sequences calls based on the business logic and calls the Database Microservice, as individual microservices have no direct access to the database. One can also notice that this architecture supports various client systems such as mobile apps, web apps, IoT apps, MQTT apps, et al.
Although this architecture gives you an edge in using different technologies in different microservices, it leaves a heavy dependency on the API Gateway/Orchestrator. The Orchestrator is tightly coupled to the business logic and the object/data model, which requires it to be re-deployed and tested after each microservice change. This dependency prevents each microservice from having its own separate and distinct Continuous Integration/Continuous Delivery (CI/CD) pipeline. Still, this architecture is a huge step towards building heterogeneous systems that work in tandem to provide a complete solution, a goal that would otherwise be impossible with a monolithic legacy application architecture.
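To picture the gateway/orchestrator role in code, here is a minimal sketch assuming an Express-based gateway written in TypeScript; the service names, routes, and URLs are hypothetical, and a real Discovery Client would resolve them from a service registry rather than a hard-coded map:

```typescript
import express from 'express';

// Hypothetical stand-in for the Discovery Client: maps service names to locations.
const registry: Record<string, string> = {
  orders: 'http://orders-service:8080',
  database: 'http://database-service:8080',
};

const gateway = express();
gateway.use(express.json());

// The gateway orchestrates the sequence of calls: the orders microservice
// applies business logic, but only the database microservice may persist data.
gateway.post('/orders', async (req, res) => {
  const priced = await fetch(`${registry.orders}/price`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body),
  }).then((r) => r.json());

  const saved = await fetch(`${registry.database}/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(priced),
  }).then((r) => r.json());

  res.status(201).json(saved);
});

gateway.listen(3000, () => console.log('API Gateway listening on port 3000'));
```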
Microservices Architecture with Realtime Data Movement to Data Lake
In this section, we will look at how re-architecting, re-factoring, re-coding, re-building, re-platforming, re-hosting, or the replacement and retirement of your legacy systems per the microservices paradigm helps avoid much of the overhead of maintaining a legacy monolithic system. The following diagram helps you understand a more complete, advanced microservices architecture.
At the outset, most of the diagram for this approach looks like the previous one, but it adheres more closely to the true microservice paradigm. In this case, each microservice is independent and has its own micro database of whatever flavor best suits the business need, avoiding both the dependency on a dedicated database microservice and the overloading of the API Gateway as an orchestrator with business logic. The advantage of this approach is that each microservice can have its own CI/CD pipeline and release. In other words, one part of the application can be released on its own, with TDD/ATDD properly implemented, avoiding costs incurred for testing, deployment, and release management. This kind of architecture does not limit the overall solution to any particular technical stack; rather, it encourages quick solutions built with a variety of technical stacks, and it gives the flexibility to scale resources for heavily used microservices when necessary.
Besides this, the architecture encourages having a Realtime Engine (which can be a microservice itself) that reads data from the various databases asynchronously, applies data aggregation and de-duplication algorithms, and sends pristine data to a data lake. Advanced applications can then use the data from the data lake for machine learning and data analytics to cater to business needs.
Note: This article was not written with any particular cloud flavor in mind. This is a general app modernization microservices architecture that can run anywhere: on-prem, on OpenShift (private cloud), or on Azure, Google Cloud, or AWS.
It’s a scenario we’ve all been a part of before. To shake things up, your Agile teams are being restructured. After the initial shuffle, the team gets together for a first meeting to figure out how it is going to work. Introductions are made, experiences are shared. Maybe a team lead is named. It’s a heady time full of expectations. Following the cycle of Forming-Storming-Norming-Performing, phase one is off to a good start.
At the first team retro, a better understanding of what everyone brings to the team starts to take shape. Relationships and communications within the team, as well as other players within the organization, take root. The team also starts to get a sense of where there are some gaps. Maybe it’s a misunderstanding of how code reviews work, or how cards are pointed. Storming has happened, and the team is ready to begin the transition to the Norming phase.
I’d suggest that team norms, which tend to be prescriptive in nature, fall short of what stakeholders hope they will achieve. Instead, I’d suggest that a social contract is a better concept to work towards.
A social contract is a team-designed agreement on an aspirational set of values, behaviors, and social norms. It sets not only expectations but also responsibilities. Instead of focusing on how individual team members should approach the work of the team and organization, it lays out the responsibilities of the team members to each other. It also lays out the responsibilities and expectations between the team and the organization.
What would this type of contract look like? It should call out both sides of a relationship. An example of part of a social contract may look like this:
- The Team promises to place value through deliverable software as the highest goal to the organization, as defined by the Product Owner
- The Team promises to raise any obstacles preventing them from delivering value immediately
- The Organization promises to address and remove obstacles in a timely manner to the best of their ability
- The Organization promises to maintain reasonable stability of the team so that it has the opportunity to mature and reach its highest potential
In the spirit of the social contract, this should be discussed and brainstormed with open minds and constructive dialog with both sides of the social equation. In truly Agile fashion, it should also be considered an iterative process, and reviewed from time to time to ensure the social contract itself is providing value.
PowerApps Basics
PowerApps is one of the most recent additions to the Microsoft Office suite of products. PowerApps has been marketed as “programming for non-programmers”, but make no mistake: the seamless interconnectivity PowerApps has with other software products allows it to be leveraged in highly complex enterprise applications. Basic PowerApps functionality is included in an Office 365 license, but for additional features and advanced data connections, a plan must be purchased. When I was brought on to CC Pace as an intern to assist with organizational change regarding SharePoint, I assumed that the old way of using SharePoint Designer and InfoPath would be the framework I would be working with. However, as I began to learn more about PowerApps and CC Pace’s specific organizational structure and needs, I realized that it was essential to work with a framework geared towards the future.
Solutions with PowerApps
As Big Data and data warehousing become common practice, data analytics and organized data representation become more and more valuable. At the small-to-medium organizational scale, bringing together scattered data that is stored across a wealth of different business practices and software products has been extremely difficult. This is one of the areas where PowerApps can create immense value for your organization. Rather than forcing an extreme and expensive organizational change where everyone submits expense reports, recruitment forms, and software documentation to a brand-new custom database management system, PowerApps can be used to pull data from its varying locations and organize it.
PowerApps is an excellent solution for data entry applications, and this is the primary domain I’ve been working in. A properly designed PowerApp allows the end user to easily manipulate entries from all sorts of different database management systems. Create, Read, Update, Delete (CRUD) applications have been around and necessary for a long time, and PowerApps makes it easy to create these types of applications. Input validation and automated checks can even help to prevent mistakes and improve productivity. If your organization is constantly filling out their purchase orders with incorrectly calculated sales tax, a non-existent department code, or forgetting to add any number of fields, PowerApps allows some of those mistakes to be caught extremely early.
Integration with Flow (the upgraded version of SharePoint Designer workflows) allows for even greater flexibility in PowerApps. An approval email can be created to prevent mistakes from being entered into a database management system, and push notifications can be triggered when PowerApps actions are taken; the possibilities are (almost) endless.
Pros and Cons
There are both advantages and disadvantages to leveraging software that is still under active development in an enterprise solution. One of the disadvantages is that many of the features a user might expect to be included aren’t possible yet. While Flow integration with PowerApps is quite powerful, it is missing several key features, such as the ability to attach a document directly from PowerApps or to write data over multiple records at a time (i.e., add multiple rows to a SQL database). Additionally, I would not assume that PowerApps is an extremely simple, programming-free solution for business apps. Knowledge of the different data types as well as the use of functions gives PowerApps a steep learning curve. While you may not be writing any plaintext code other than HTML, PowerApps still requires a good amount of knowledge of technology and programming concepts.
The main advantage of PowerApps being new software is just that: it’s brand-new software. You may have heard that PowerApps is currently on track to replace, at least partially, the now-shelved InfoPath project. InfoPath may continue to work until 2026, but without any new updates to the program, it may become obsolete on newer environments well before that. Here at CC Pace, we focus on innovation and investing in the solutions of tomorrow, and using PowerApps internally rather than building on a soon-to-be-unsupported InfoPath framework was the right choice.
Author Bio
As a programmer and cybersecurity enthusiast, creating pieces of enterprise systems is something I never knew I would be so interested in. I’m Niels Verhoeven, a summer IT Intern at CC Pace Systems. I study Information Systems with a focus on Cybersecurity Informatics at University of Maryland, Baltimore County. My experiences at CC Pace and my programming background have given me quite a bit of insight into how users, systems, and business can fit together, improving productivity and quality of work.
I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.
While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.
I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true with data warehouse projects too, where a client’s database team wanted the entire schema designed up-front – even before the requirements for the reports the data warehouse would be supporting were identified. What appeared to be driving the design was not the business and their report priorities, but the database team and their desire to have a complete data model.
While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.
The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each, and, as mentioned, star schemas. The book stresses the 7-Ws when designing a data warehouse – who, what, where, when, why, how and how many. These are the questions to ask when talking to the business to come up with an appropriate design. “How many” applies to the fact tables, while the other questions drive dimension table design.
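To make that concrete, here’s a minimal sketch of one small star, expressed as C#-style classes rather than the book’s notation; the table and column names are purely illustrative and not taken from the book.

using System;

// One small star: the fact row answers "how many," and each dimension answers
// one of the other Ws for that measurement.
public class DateDim     { public int DateKey;     public DateTime CalendarDate; }  // when
public class CustomerDim { public int CustomerKey; public string Name; }            // who
public class ProductDim  { public int ProductKey;  public string Description; }     // what
public class StoreDim    { public int StoreKey;    public string City; }            // where

public class SalesFact  // how many
{
    // Foreign keys to the dimensions above
    public int DateKey;
    public int CustomerKey;
    public int ProductKey;
    public int StoreKey;

    // The measures the business wants to count and total
    public int QuantitySold;
    public decimal SalesAmount;
}

The warehouse as a whole is then just a growing collection of these small stars, added one business process at a time.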
Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.
I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.
“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt From: Barbara W. Tuchman. “The Guns of August.”
In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.
Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.
Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.
Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?
I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.
Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.
First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?
Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.
Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to deliver the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?
I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”
The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.
Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan, and be ready to go to your contingency plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.
Outside of my work at the MSRB for CC Pace, I enjoy working with community organizations in Fairfax County. After eight years of running the Jefferson Manor Civic Association, I was named Chairman of the Lee District Association of Civic Organizations (LDACO), an organization focused on improving communication between residents in the Lee District section of Fairfax County and the county’s elected officials and staff. I had been involved with the board of directors for several years and was bursting at the seams with ideas on how to build on the foundation I had inherited.
The nearly unlimited options quickly brought about a familiar end result – paralysis. Ideas ranged from simple house-keeping to epic public festivals. It was, to be honest, complete chaos.
Thankfully, at the time I was working on my understanding of Agile techniques and how they applied to my work situation. A key source for this was “The Nature of Software Development” by Ron Jeffries. As the pages flew by, the point of always prioritizing value became clear. How could this perspective be focused on running an advocacy group for civic associations? The clouds parted and the way forward became clear.
Prioritize by what would provide immediate value to the organization and its members. Again referring to Jeffries, I used the “Five Card Method” to determine what our first ‘epics’ to tackle would be. The idea is to pick three to five big-ticket items that will provide immediate value and focus on breaking those down into manageable pieces.
How do we determine what our members find valuable? Ask them. A review of LDACO’s contact list showed that it was incomplete and in some cases, outdated. We had no social media outreach, either. Improved, direct communication became epic #1.
Talking with leaders in other communities, as well as long-standing members of LDACO, I learned that folks needed a longer lead time to plan on attending our meetings. Epic #2 was to provide a calendar of events at least 90 days out.
Lastly, LDACO learned that our members wanted a district and county-wide focus for meetings and speakers. While having a very narrow topic may provide value for a single community, it did not translate to the diverse group as a whole. Epic #3 was to aim for big, broad topics with speakers who were involved in the decisions that impacted the largest number of communities.
These epics became the main focus of LDACO’s work. Each was broken down into smaller, achievable pieces, then worked on and completed. In the past year, we have grown our communication list, begun to grow on social media, and increased our attendance and membership through meetings with important stakeholders. All because we kept the focus on what provided value for our customers.
Picture this: you’ve recently been hired as the CIO of a start-up company. You’ve been tasked with producing the core software that will serve as the lynchpin allowing the company’s business concept to soar and create disruption in the market. (Think Amazon, Facebook, or Uber.) Lots of excitement combined with a huge amount of pressure to deliver. You’ve got many decisions to make, not the least of which is whether to build an in-house team to develop the core system or to outsource it to a software development consultancy. So, how do you decide, and to whom do you turn if you do opt to outsource?
CC Pace is one of the software development consultancies that a company might turn to, as we focus on the start-up market. Developing greenfield systems for an innovative new company is an environment that our development team members greatly enjoy.
A question that has been posed to me fairly frequently is: why would a start-up company outsource its software development? While I had my own impressions, I decided to pose the question to some CIOs of the start-up companies we’ve worked with, along with some CEOs who ultimately signed off on this critical decision.
The answers I received contained a common theme – neither approach is necessarily better, and the proper decision depends on your specific circumstances. With some of these circumstances being interrelated, here are the four primary factors I heard the decision should depend on:
- Time-to-Market – It takes time to assemble a quality team, often six months or more. Even then, it will take additional time for this group to jell and perform at its peak. As such, the shorter the time-to-market the business needs for the initial system, the more likely you are to lean toward an outsourced approach. Conversely, if there is less time sensitivity, it makes sense to build an in-house team. This team will be able not only to deliver the initial system but also to handle future support and development needs without requiring a hand-off.
- Workload Peak – For some new businesses, the bulk of the system requirements will be contained in the initial release(s), while others will have a steady, if not growing, stream of desired functionality. If the former, hiring up to handle the initial peak workload and then having to downsize is not desirable and can be avoided with the outsourced model. On the other hand, a steady stream of development requirements for the foreseeable future would cause you to lean toward building an in-house team from the start.
- Availability of Resources – While there is a scarcity of good IT resources seemingly everywhere, certain markets are definitely tighter than others. In addition, some CIOs have a greater network of talent that they know and can tap more easily than others. The scarcer your resource availability, the more likely you are to lean toward calling upon outsourced providers. Conversely, if you have ready access to quality talent, take advantage of that asset.
- CIO Preference – Finally, some CIOs just have a particular preference for one approach over the other. This may simply be a result of how they’ve worked predominantly in the past and what’s been successful for them. So, going down a path that’s been successful is a logical course to take. Interestingly, one CEO commented that a key factor in his choice of a CIO would be that person’s ability to make these types of decisions based upon the business needs and not personal preference.
I would love to hear from anyone who has been (or will be) involved in this type of decision, either from the start-up side or the consulting provider side, as to whether this jibes with your experience and thinking. The one variable that wasn’t mentioned by anyone as a factor was cost. That surprised me a lot, and I’d welcome any thoughts as to why it didn’t come up.
Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….
Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others? Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks. At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much. If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In. I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.
Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups forum. In the posting, Ron speculated that architecture could push us toward a particular style of TDD. That feels right to me. He also suggested that writing systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures that he’s used to from Smalltalk applications, and that complexity might make observing the behavior of objects more valuable. That idea puzzles me more.
My own TDD style, which is probably somewhere between the Detroit school, which leans towards writing tests that don’t rely on mocks, and the London school, which leans towards using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD. When I first started out, I was working on what I believe would count as a monolithic application, in that my team wrote all the code from the UI back to right before the database drivers. We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the data for which was in a database, and the actual data that would be displayed was stored across multiple tables. It was really quite painful to try to get all the data set up correctly, and we had to keep a lot more stuff in mind when we were trying to focus on getting the configurable part of the UI written. This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we did eventually light on the idea of putting in a mock object that was much easier to set up than the actual database. In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code. Could we have written that code more simply to avoid the complexity to start with? It was a long time ago and I can’t say whether or not I’d take the same approach now to solving that same problem, but I still do find a lot of advantages in using mocks.
I’ve been wanting to try using a NoSQL database, and this seemed like a good opportunity to both try that technology and, after I read Ron’s post, try writing it entirely outside-in, which I always do anyway, and without using mocks, which is unusual for me. I started out writing my front-end using TDD and got to the point that I wanted to connect a persistence mechanism. In a sense, I suppose the simplest thing that could possibly work here would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database. (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”) I therefore started with one of the NoSQL implementations for .NET. Everything seemed fine for my first few unit tests. Then one of my earlier tests failed after my latest test started passing. Okay, this happens. I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again. I backed out the last test I’d written, too. Now the failing test passed but a different one failed. After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly, without doing a lot of research into it) worked asynchronously, and it seemed that I’d just been lucky with timing before my tests started randomly failing. Okay, this is the point at which I’d normally turn to a mocking framework and isolate the problematic stuff to a single class that I could either put the effort into unit testing or else live with it being tested through automated customer tests.
Because I felt more strongly about experimenting with writing tests without using mocks than with using a particular NoSQL implementation, I switched to a different implementation. That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock into an individual class that hides the details of how the data is set up. Had I been following that precept now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree. The interesting thing here was that I had to radically change both the test and the production code to change the backing store. As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all. This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code so I could focus on one thing at a time instead of having to deal with the whole thing at once. I also ended up giving up on the NoSQL implementation for a more familiar relational database.
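To make that precept about isolating setup code concrete, here’s a minimal sketch of the kind of seam I mean, assuming C#; the names are hypothetical and the in-memory fake is just the simplest possible stand-in.

using System.Collections.Generic;

public class Customer { public string Id; public string Name; }

public interface ICustomerRepository { Customer FindById(string id); }

public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> customers;

    public InMemoryCustomerRepository(IEnumerable<Customer> customers)
    {
        this.customers = new List<Customer>(customers);
    }

    public Customer FindById(string id)
    {
        return customers.Find(c => c.Id == id);
    }
}

// Tests talk only to this builder. Whether it hands back an in-memory fake, a
// Rhino Mocks stub or a repository backed by a real database is hidden here,
// so swapping the backing store doesn't ripple through every test.
public class CustomerTestData
{
    private readonly List<Customer> customers = new List<Customer>();

    public CustomerTestData WithCustomer(string id, string name)
    {
        customers.Add(new Customer { Id = id, Name = name });
        return this;
    }

    public ICustomerRepository BuildRepository()
    {
        return new InMemoryCustomerRepository(customers);
    }
}

A test then reads something like new CustomerTestData().WithCustomer("42", "Ada").BuildRepository() and never mentions tables, documents or mock-setup calls, so changing the persistence mechanism means changing the builder rather than every test.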
So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve also found that they help isolate me from other random elements, like other developers running tests that happen to modify the same database tables my tests are modifying. I also felt like my tests became much more intention-revealing when I switched to mocks, because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about. This made me realize that the hard part was caused by the mismatch between the way the persistence mechanism (either a relational database or the document-oriented NoSQL database that I tried) represented the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file or used an object-oriented database instead of a document-oriented database, that complexity would go away. That’s for a future experiment, though. And, even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.
Ron also worried that the integration between the different components is not tested when using mocks. As Ron puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.” I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model. (This was back when one did have to write such code oneself.) It was just very easy to mistype names on either side and not realize it until actual user testing. This is actually the problem that led us to start doing automated customer testing. Although the automated customer tests don’t always have as much detail as the unit tests, I feel like they alleviate any concerns I might have that the wrong things are wired together or that the wiring doesn’t work.
It’s also worth mentioning that I really don’t like the style of using mocks that just checks whether a method was called rather than whether it was used correctly. Too often, I see test code like:
// stub Foo to return a value no matter what it's called with (the int arguments are illustrative)...
mock.Stub(m => m.Foo(Arg<int>.Is.Anything, Arg<int>.Is.Anything)).Return(0);
…
// ...then merely assert that it was called with something, anything
mock.AssertWasCalled(m => m.Foo(Arg<int>.Is.Anything, Arg<int>.Is.Anything));
I would never do something like this for a method that actually returns a value. I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method. The only time I’ll resort to asserting that a method was called (with all the correct parameters) is when that method exists only to generate a side-effect. Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than checking behavior. For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters. Lately, though, with Log4Net, I’ve been finding that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
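For anyone who hasn’t tried that approach, here’s a minimal sketch of what I mean, assuming NUnit; the Importer class and its “missing department code” warning are invented purely for the example, and depending on your log4net version, BasicConfigurator.Configure may want an explicit repository argument.

using System.Linq;
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using NUnit.Framework;

public class Importer
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Importer));

    public void Import(string departmentCode)
    {
        if (string.IsNullOrEmpty(departmentCode))
            Log.Warn("Missing department code");   // the side-effect we want to verify
    }
}

[TestFixture]
public class ImporterTests
{
    [Test]
    public void Missing_department_code_is_logged_as_a_warning()
    {
        // Route log output to an in-memory appender instead of mocking the logger.
        var memory = new MemoryAppender();
        BasicConfigurator.Configure(memory);

        new Importer().Import(departmentCode: null);

        // Assert on state (the captured events) rather than on which methods were called.
        var warnings = memory.GetEvents().Where(e => e.Level == Level.Warn);
        Assert.That(warnings.Any(e => e.RenderedMessage.Contains("department code")));
    }
}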
In his forum posting, Ron is surely right in saying about the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.” My own style has certainly changed over the years, and I hope it will continue to adapt to the circumstances in which I find myself working. Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to get set up correctly. Hopefully I’ll also be able to use mocks less as I find more projects that allow me to avoid the impedance mismatch between the code’s concept of the model and that of external systems.
Uber. Eight years ago, the company did not exist and the word was simply a rarely used adjective of German origin meaning “ultra”, like an uber intellectual. Today, Uber has become one of the most successful startups in history and the word has become a commonplace verb in English parlance. Transcending to “verb” status puts Uber in the highly exclusive class of innovative business disrupters like Google and FedEx whose business names and processes have become synonymous with an action that didn’t previously exist but is now done on a regular basis. Who today wouldn’t understand what actions you had taken if you said, “I quickly googled the address for the nearest drop-off spot and uber-ed over there so I could fed-ex my package out on time”?
Uber owns no cars, has no drivers, and has minimal fixed assets. Instead, they created incredibly user-friendly software that improves aspects of the taxi ride industry we didn’t know needed improvement. Not surprisingly, the full legal name is Uber Technologies, Inc. While the only technology typically found in a traditional taxi cab is the decades-old meter clicking away the increments of the cost of your ride, the Uber software provides new value to both the driver and the customer with useful information such as the location of both the driver and the customer, time estimate for pick-up, exact pricing, car options, driving directions, and much more.
By creating this simple way to get a ride, Uber has reached another pinnacle accomplishment whereby the creativity of its business model has become a noun: uber-fication. According to Dr. Paul Marsden in his Reading Room article, The Uberfication of Everything, “…the real genius of Uber lies in a deep understanding of convenience – what it is and why it matters. That’s what Uberfication is all about; pivoting your business to deliver on a core under-exploited consumer need – convenience”.
One thing that every startup has is a dream and a vision. But, let’s be honest, that simply isn’t enough to successfully build a booming new business like Uber. You need the right partners, you need money, and you need passion for the project at hand. We believe that we can help in all these areas, which led us to formalize an offering exclusively for startups.
When I formed CC Pace nearly 36 years ago, I was driven by a vision of a new model for a consulting company – one where integrity and the client’s best interest were ingrained in the firm’s culture, and successful delivery could almost be guaranteed by the quality, drive and teamliness of the employees who worked there. While my dream may not have been as wide-reaching as Uber’s, when I think back to that time, I just remember energy, excitement and that ‘anything is possible’ feeling. Over the years, we’ve been very fortunate to work with clients in all phases, from startups to Fortune 500 organizations – all of which we value a great deal. I get excited to work with clients of all sizes, but there is something about working with startups that brings about an energy you can’t replicate in other environments. Being a part of someone else’s vision coming to life brings me right back to where I stood over 35 years ago, and it’s an environment in which I’ve seen our project teams thrive.
Our experience working with startups, combined with our project teams’ passion, has led us to formalize an offering to help startups get off the ground with the right technology. To enable us to work on more of these types of efforts, we are officially launching a new risk/reward program for startups. Here, we are able to combine our technical prowess with our business acumen to deliver software that fully and effectively supports the start-up’s vision. The premise of our offering is to build the technological platform for your business with less cash required. In exchange for this discount, we agree upon a fair share of some downstream benefits of your startup, reflective of the risk we take.
If you like the idea of maintaining control of your vision while paying less up-front to get the results you need, then I would love to hear from you. Interesting companies with challenging technology needs have been a driver for us for over 35 years. For this reason, we are confident that we have the ability to help better enable your dream. After all, it’s only a matter of time before the next “Uber” shocks the world.
For more information on the risk/reward program, check out our offering here.
In my personal experience working on various software development projects, the concept of team energy often appears to be either undervalued or benignly ignored by management teams. The reasons are many. First of all, the term may be confused with “team velocity” which is a relative measurement of a team’s average output or productivity. If the velocity appears to be at either predictable or positive levels, the management team may choose to believe that team energy is also at satisfactory levels. Organizations may attempt to boost employee morale by putting together team-building exercises and outing events. This macro approach may result in generating a perceived positive effect on team energy, thus obscuring the need for focus on individual teams. So, what is team energy and why should managers consider devoting some attention to it?
When thinking in terms of team energy, one can look at it as building credit with each individual team, or as maintaining a rainy day fund. From management’s standpoint, it is important to keep team energy in positive territory. This makes it far more likely that the team will empower themselves to exceed expectations, as well as step up during times of crisis or high-pressure situations. I have worked in high-energy teams in which members voluntarily pushed themselves past regular working hours to produce deliverables. These cases did not involve any direct increase in compensation or promotion. People naturally wanted to succeed because they possessed enough energy to do so. I have also witnessed the opposite, where a team’s energy was low, deliverables were in a perpetual state of tardiness and the backlog was steadily accruing bugs. Developers and testers did not feel empowered to succeed and entered a cycle of doing the absolute bare minimum to “get the management off their back”.
Science behind building teams
Agile methodologies, whether Scrum or Kanban, prescribe various techniques focused on continuous improvement, which can positively affect team energy. Regardless of whether an organization has truly embraced Agile, it is difficult to find managers who would oppose efforts to improve a team’s ability to deliver faster and at higher value. After all, who is against a boost in productivity? There is a hidden psychological component to continuous improvement that has a causal effect on team energy. This component is more associated with the experience of individual team members.
Studies of team dynamics, such as those conducted by MIT’s Human Dynamics Laboratory and documented in Harvard Business Review, suggest that there is a science to building high-performing, high-energy teams. One of the keys is to focus on the human brain and the social dynamics of a group. Your teams may be composed of introverts as well as extroverts and a wide range of personalities, but there is a common factor that seems to persist. The studies show that humans feel good when they achieve their goals and overcome obstacles. A human brain actually rewards its owner with extra dopamine when a goal is achieved. When the team feels good more often than not, team energy goes up. When the opposite occurs, team energy goes down. Therefore, focusing on small, achievable goals not only helps the organization shift the focus of deliverables, but also fosters this psychological benefit of achievement for each individual team member.
Measurements
An organization may choose to periodically measure team energy. One way to achieve such measurement is through anonymous surveys. Usually this is done at a more enterprise level to gauge the overall organizational energy. There is certainly value in doing that, but the effort is not focused and may not necessarily apply to individual teams. Small teams may not produce very accurate results. There may be disincentives to be frank when answering a survey because team members may feel singled out and fear reprisals from management. In addition, more introverted team members may choose not to “rock the boat”. A more effective and team-focused approach is to have an Agile coach periodically take team energy measurements. An opportune time may be during team retrospectives, when a team is usually more receptive to being candid. Most importantly, these measurements do not need to be secretly stored in a manager’s vault but should be shared with the team. Adding transparency to the team building and management process will not only increase team energy, but also foster leadership skills among the more proactive and extroverted team members.
Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.
I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.
The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.
(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)
What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.
But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.
One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.
As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?
We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.
Senior IT managers starting a new project often have to answer the question: build or buy? Meaning, should we look for a packaged solution that does mostly what we need, or should we embark on a custom software development project?
Coders and application-level programmers also face a similar problem when building a software product. To get some part of the functionality completed, should we use that framework we read about, or should we roll our own code? If we write our own code, we know we can get everything we need and nothing we don’t – but it could take a lot of time that we may not have. So, how do we decide?
Your project may (and probably does) vary, but I typically make my decision by distinguishing between infrastructure and business logic.
I consider code to be infrastructure-related if it’s related to the technology required to implement the product. On the other hand, business logic is core to the business problem being solved. It is the reason the product is being built.
Think of it this way: a completely non-technical Product Owner wouldn’t care how you solve an infrastructure issue, but would deeply care about how you implement business logic. It’s the easiest way to distinguish between the two types of problems.
Examples of infrastructure issues: do I use a relational or non-relational database? How important are ACID transactions? Which database will I use? Which transactional framework will I use?
Examples of business logic problems: how do I handle an order file sent by an external vendor if there’s an XML syntax error? How important is it to find a partial match for a record if an exact match cannot be found? How do you define partial?
Note that a business logic question could be technical in nature (XML syntax error) but how you choose to solve it is critical to the Product Owner. And a seemingly infrastructure-related question might constitute business logic – for example, if you are a database company building a new product.
After this long preamble, finally my advice: Strongly favor using existing frameworks to solve infrastructure problems, but prefer rolling your own code for business logic problems.
My rationale is simple: you are (or should be) expert in solving the business logic problems, but probably not the infrastructure problems.
If you’re working on a system to match names against a data warehouse of records, your team knows or can figure out all the details of what that involves, because that’s what the system is fundamentally all about. Your Product Owner has a product idea that includes market differentiators and intellectual property, making it very unlikely that an existing matching framework will fulfill all requirements. (If an existing framework does meet all the requirements, why is the product being developed at all?)
Secondly, the worst thing you want to do as a developer is to use an existing business logic framework “to make things simple”, find that it doesn’t handle your Product Owner’s requirements, and then start pushing back on requirements because “our technology platform doesn’t allow X or Y”. For any software developer with professional pride: I’m sorry, but that’s just weak sauce. Again, the whole point of the project is to build a unique product. If you can’t deliver that to the Product Owner, you’re not holding up your end of the bargain.
On the other hand, you are very likely not experts on transactional frameworks, message buses, XML parsing technology, or elastic cloud clusters. Oracle, Microsoft, Amazon, etc., have large expert teams and have put their own intellectual property into their products, making it highly unlikely you’ll be able to build infrastructure that works as reliably and is as bug free.
Sometimes the choice is harder. You need to validate a custom file format. Should you use an existing framework to handle validations or roll your own code? It depends. It may not even be possible to tell when the need arises. You may need to use an existing framework and see how easy it is to extend and adapt. Later, if you find you’re spending more time extending and adapting than rolling your own optimized code, you can change the implementation of your validation subsystem. Such big changes are much easier if you’ve consistently followed Agile engineering practices such as Test Driven Design.
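As a sketch of what keeping that option open can look like, here’s one way to put the validation behind a seam you own, assuming C#; the interface name and the toy rules are hypothetical, not a recommendation for any particular framework.

using System.Collections.Generic;

// The business-facing contract lives in our code; callers never see whether a
// framework or hand-rolled rules sit behind it.
public interface IOrderFileValidator
{
    IReadOnlyList<string> Validate(string fileContents);
}

// First pass: hand-rolled rules we fully control (deliberately trivial here).
public class HandRolledOrderFileValidator : IOrderFileValidator
{
    public IReadOnlyList<string> Validate(string fileContents)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(fileContents))
            errors.Add("File is empty.");
        else if (!fileContents.TrimStart().StartsWith("<order"))
            errors.Add("File does not start with an <order> element.");
        return errors;
    }
}

// The rest of the system depends only on the interface, so replacing the
// hand-rolled rules with a framework-backed implementation (or vice versa)
// doesn't ripple beyond this seam.
public class OrderIntake
{
    private readonly IOrderFileValidator validator;

    public OrderIntake(IOrderFileValidator validator)
    {
        this.validator = validator;
    }

    public bool TryAccept(string fileContents, out IReadOnlyList<string> errors)
    {
        errors = validator.Validate(fileContents);
        return errors.Count == 0;
    }
}

Because the tests and the calling code depend only on IOrderFileValidator, swapping implementations later is a change to one class rather than to the whole validation subsystem.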
As always, apply a fundamental Agile principle to any such decision: how can I spend my programming time generating the most business value?
Recently, I had one of those rare moments when my son, a 3rd grader, seemed to understand at least part of what I do for work.
He was excited to tell me about the Code Studio site (studio.code.org) that they used during computer lab at school. The site introduces students to coding concepts in a fun and engaging way. Of course, it helps to sweeten the deal with games related to Star Wars, Minecraft and Frozen.
We chose Minecraft and started working through activities together. The left side of the screen is a game board showing the field of play: your character standing in a field surrounded by trees and other objects. The right side allows you to drag and drop colorful coding structures and operations related to the game (e.g. place block, destroy block, move forward) in a specific order. When you’re satisfied that you’ve built the program to complete the objective, you press the “Run” button and watch your character move through your instructions.
Anyone familiar with automated testing tools can relate to the joyful anticipation of seeing if your creation does the thing that it’s supposed to do (ok, maybe joy is a strong word, but it’s kind of fun). In our case, we anxiously watched as Steve, our Minecraft Hero, moved through our steps, avoiding lava and creepers along the way. If we failed and ended up taking a lava bath, we tried again. If we succeeded, the site would move us to the next objective, but also inform us that it’s possible to do it in fewer steps/lines of code (never good enough, huh?!).
Together, we were able to break down a complex problem into several smaller steps, which is a fundamental skill when building software incrementally. We also had to detect patterns in our code and determine the best way to reuse them given a limited set of statements and constructs. For my son, it was a fun combination of gaming and puzzle solving. For me, it was nice to return to the fundamentals of working through logic to solve a problem – no annoying XML configurations or git merges to deal with.
I’m a big supporter of the Hour of Code movement and similar initiatives that expose students to programming, but feel that there’s often an emphasis on funneling kids into STEM career paths. This is all well and good for those with the interest and aptitude, but coding also teaches you to patiently and steadfastly work through problems. This can be applied to many different careers. Chris Bosh, the NBA player, even wrote an article about his interest in coding.
I encourage students of all ages to check out the Code Studio site and Hour of Code events: https://hourofcode.com
Like Martin Fowler, I am a long-time Doctor Who fan. Although I haven’t actually gotten around to watching the new series yet, I’ve been going back through as much of the classic series as Netflix has available. Starting way back from An Unearthly Child several years ago, I’ve worked my way up through the late Tom Baker period. The quality of the special effects is often uppermost in descriptions of the classic series, but that doesn’t actually bother me that much. Truth be told, I feel that a lot of modern special effects have their issues, too. Sometimes modern effects are good and sometimes they’re bad; the bad effects are bad differently than those in the classic series, but at least the classic series didn’t substitute effects, good or bad, for story quality or acting ability. To be fair, the classic series also has its share of less than successful stories, but the quality on the whole has remained high enough that I keep going with it. (In my youth, I stopped watching shortly after Colin Baker became The Doctor, although I can’t remember if I became disenchanted with it or if my local PBS station stopped carrying it. I’m very curious to see what happens when I get past Peter Davison this time.)
I’ve also been very interested to see the special features that come with most of the discs. As a rule, I don’t bother with special features as I find them either inane or annoying (particularly directors talking over their movies), but the features on the Doctor Who discs have mostly been worth my time. Each disc generally has at least one extra that has some combination of the directors, producers, writers, designers and actors talking pretty frankly about what worked and didn’t work with the serial. I’ve learned a lot about making a television show at the BBC from these, and also about the constraints, both technological and budgetary, that affected the quality of the effects on the show. (Curiously, it also brought home the effects of inflation to me. I’d heard the stories of people going around with wheelbarrows full of cash in the Weimar Republic, but that didn’t have nearly as much impact on me as hearing someone talking about setting the budget for each show at the beginning of a season in the early 1970s and being able to get only half as much as they expected by the time they were working on the last serial of the season. Perhaps because I do create annual budgets, the scale is easier to relate to than the descriptions of hyperinflation.)
While I am sure there are those who are fascinated by what I choose to get from Netflix (you know who you are 😊), I actually had a point to make here. I recently watched the Tom Baker episode The Creature from the Pit, which has a decent storyline but was arguably let down by the design of the creature itself. (There are those who argue it was just a bad story to start with, and those who argue that it was written as an anti-capitalist satire that the director didn’t understand.) As I watched the special feature in which Mat Irvine, the person in charge of the visual effects including the creature, and some of his crew talked about the problems involved in the serial, I realized that this was a lovely example of why we should value customer collaboration over contract negotiation. Apparently the script described the creature as “a giant, feathered (or perhaps scaled) slug of no particular shape, but of a fairly repulsive grayish-purplish colour…unimaginably huge. Anything from a quarter of a mile to a mile in length.” (Quote from here.) I suspect someone trying to realize this would have a horrible time of it even in today’s world of CGI effects, but apparently at the BBC at the time you just got your requirement and implemented it as best you could, even though both the director and the designer were concerned about its feasibility and could point to a previous failure to create a massive creature in The Power of Kroll.
Alas, neither the designer nor the director felt empowered enough to work with the writer or script editor (or someone, I’m unclear on who could actually authorize a change) on the practical difficulties of creating a creature with such a vague description, resulting in what we tend to have a laugh at now. I don’t know what the result would have been if there had been more collaboration in the realization of this creature, but it seems to me that the size and shape of the creature were not really essential to the story. Various characters say that it eats people, although the creature vehemently denies this, and we see some evidence that it accidentally crushes people, but neither of these ideas requires a minimum size of a quarter of a mile. It seems like the design could have gone a different way, one that was easier to realize, if everyone involved had really been able to collaborate on the deliverable rather than having each group do the best that it could in a vacuum.
We recently ran into an issue with ASP.NET authentication that I thought I would share.
The Setup
We’re running an ASP.NET MVC 5 web application which uses Microsoft ASP.NET Identity for authentication and authorization. We let users check the built-in “Remember Me” option so they can log in automatically even after closing the browser. We set this to expire after 2 weeks, after which they are forced to log in again (unless you’re using Sliding Expiration: https://msdn.microsoft.com/en-us/library/dn385548(v=vs.113).aspx).
When we first implemented this feature, users would complain that they had to keep logging into the site, despite checking the “Remember Me” option. We first made sure that within the Startup configuration, the CookieAuthenticationOptions.ExpireTimeSpan was explicitly set to 2 weeks (even though that is the default).
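For reference, the relevant piece of the standard ASP.NET Identity OWIN startup looks roughly like the sketch below; treat it as an illustration rather than our exact production configuration.

using System;
using Microsoft.AspNet.Identity;
using Microsoft.Owin;
using Microsoft.Owin.Security.Cookies;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
            LoginPath = new PathString("/Account/Login"),

            // Persistent ("Remember Me") tickets last two weeks...
            ExpireTimeSpan = TimeSpan.FromDays(14),

            // ...and are not renewed on activity, so users must log in again
            // after two weeks regardless. Set to true for sliding expiration.
            SlidingExpiration = false
        });
    }
}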
After some more troubleshooting and, of course, a stackoverflow hint, we discovered the problem.
What we checked
- We checked to make sure the browser cookie that contains the encrypted authentication token had an expiration date (instead of “Session” or “When the browsing session ends” in Chrome).
In our case, the cookie contained the expected expiration date.
- We observed that the “Remember Me” automatic login worked throughout the day, but typically stopped working the next day.
So what happened every night that might trash or invalidate our authentication token? We discovered that the Application Pool for our IIS site was getting recycled nightly.
This alone did not seem to be the culprit, since a user should still be able to automatically login (if they choose to) even if the app pool restarts. Taken a step further, even if the server restarts, this functionality should still work. What about a load-balanced environment where the user doesn’t even hit the same web server? This should all be transparent and the “Remember Me” option should just work.
- We searched the web for scenarios where a user must relogin after a server restart and came across this article: http://stackoverflow.com/questions/29791804/asp-net-identity-2-relogin-after-deploy
This seemed to be the root of the problem – we needed to configure the machineKey settings in order to maintain consistent encryption and decryption keys, even after a server or app pool restart. If the machineKey is not set explicitly, new encryption/decryption keys are generated after each restart, which in turn invalidates all outstanding authentication tokens (and forces users to log in again).
The Solution
It turns out that IIS makes it easy to generate random keys and write the values directly into your web.config file.
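The generated entry lands in the <system.web> section of web.config and looks something like the following; the key values here are placeholders, so generate your own rather than copying these.

<system.web>
  <machineKey
    validationKey="GENERATED-VALIDATION-KEY-GOES-HERE"
    decryptionKey="GENERATED-DECRYPTION-KEY-GOES-HERE"
    validation="HMACSHA256"
    decryption="AES" />
</system.web>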
Now we can store consistent keys in our web.config file and not worry about invalidating a user’s authentication token, even if we restart, redeploy or recycle the app pool regularly.
In a load-balanced, web farm environment, this setting is critical: it allows users to bounce between servers with the same authentication token. You just have to make sure that the same encryption/decryption keys are used on each server in the farm (configured via the machineKey setting).
If you’re considering using Azure Web Apps, this is transparent since “Azure automatically manages the ASP.NET machineKey for services deployed using IIS”. More details here.
That is a subject for another day.
I once read a book, which shall remain nameless, that seemed to have a quota of illustrations per page: an average of one-third of each page covered by an illustration. It was a horrible read, made all the worse when I realized that the figures were often repeated to illustrate different concepts with only the captions changed.
Happily, although Ron Jeffries’ own illustrations manage to cover perhaps an average of a fifth of each page in his new book, The Nature of Software Development, they actually add value rather than just taking up space. Take the illustrations that Ron uses to introduce the idea of incremental software delivery. It’s very easy to see how people envision their final product in terms of the magic they believe it will be.
The very next illustration, though, helps us recast our thinking in terms of incremental delivery and how it helps us start deriving value from the product we’re building. Then another illustration shows how we can use the feedback we get from the earlier releases to produce something better than we’d originally imagined. Most of the time I honestly ignore illustrations and diagrams as useless, but Ron’s illustrations are generally thought-provoking, and I always found myself stopping to think about how they illuminated the text around them.
The first part of Nature covers “The Circle of Value”: understanding what value is, why we should try to deliver value incrementally and how we can build our product incrementally. The second part of Nature is entitled simply “Notes and Essays” and provides more detailed thoughts on some of the subjects touched on in the first part. One of my favorites was “Creating Teams That Thrive,” where Ron reminds us that when the Product Champion, the term Ron now favors over Product Owner, brings defined solutions to the team, the team is less likely to feel a sense of responsibility and pride in the result.
Another nice essay was “Whip the Ponies Harder,” where Ron reminds us that trying to pressure a team into delivering faster can have deleterious effects. But this brings up a point I wanted to make about the book itself rather than what it says: If you’ve followed along with what Ron has been thinking over the years, in the various discussion groups in which he participates, on xprogramming.com/ronjeffries.com or at the various presentations and classes he’s done, there may not be anything new for you in Nature. Even if you’re in that situation, it’s probably worth getting the book anyway. There’s always the possibility that there will be new thoughts for you there, and, even if the ideas themselves aren’t new to you, their presentation in Nature can spur (sorry, the horse/unicorn drawings may be doing something to my language) you to think about them more deeply and also give you a new way to explain those ideas to other people.
A final word of warning: early on, Ron says “[Y]our job is to think a lot, while I write very little.” This reminded me of the time someone told me they’d read XP Explained (the first edition) in a weekend and understood it all. After fifteen years, I’m still deepening my understanding of XP, mostly through trying to introduce its values to development groups that don’t necessarily understand what, if any, values they hold. Even though it’s a short book, give yourself time to really think about what is said in Nature, even after you’ve finished reading it. That’s when the most reward will come.
One of the basic Agile tenets that most people agree on – whether or not you’re an Agile enthusiast or supporter – is the value of early and continuous customer feedback. This early and often feedback is invaluable in helping software development projects quickly adapt to unexpected changes in customer demands and priorities, while concurrently minimizing risk and reducing ‘waste’ over the lifetime of a project.
Sprint Demo
In the Agile world, we traditionally view this customer feedback in the form of a formal product demonstration (i.e. sprint demo) which would occur during the closeout of a Sprint/Iteration. In short, the “team” – those building the software – presents features and functionality they’ve built in the current sprint to the “customer” – those who will be users or sellers of the product. Even though issues may be uncovered and discussed – usually resulting in action items for the following sprint – the ideal end result is both the team and the customer leaving the demo session (and in theory, closing the sprint) with a significant level of confidence in the current state and progress of the project. Sprint demos are subsequently repeated as an established norm over the project’s duration.
This simple, yet valuable Agile concept of working product demos can be expanded and applied effectively to other areas of software development projects:
- Why not take advantage of the proven benefits of product demos and actually apply them internally – within the actual development team – before the customer is even involved?
- Let’s also incorporate product demos more often – why wait two weeks (or sometimes even a month, depending on sprint duration)? We can add value to a software development project daily, if needed, without even involving the customer.
Expand the Benefit of Demos
One of the common practices where I have seen first-hand the effectiveness of product demos involves daily deployment of code into a testing or “QA” environment. Here’s the scenario:
- Testing uncovers five (5) “non-complex” defects (i.e. defects which were easily identified in testing – usually GUI-type defects including typos, navigation or flow issues, table alignments, etc.) which are submitted into a bug-tracking tool. Depending on the bug-tracking tool employed by the team, this process is sometimes quite tedious.
- Defects 1-5 are addressed by the development team and declared fixed (or “ready to test”) and these fixes are included in the latest deployment into a QA environment for retesting.
- Defects 1-5 are retested. Defects #1, #2 and #5 are confirmed fixed and closed.
- But there is a problem – it’s discovered EARLY in retesting that Defects #3 and #4 are not actually fixed, and, worse, the “fixes” have introduced two new defects – #6 and #7. For purposes of explanation, these new defects were also easily identified early in the retesting process.
- At this point, Defects 3-4 need to be resubmitted, with new Defects – #6 and #7 – also added to the tracking tool.
Where are we at this point? In summary, all of this time and effort has resulted in closing out just one net defect, and we’ve essentially lost a day’s worth of work. Not to mention, developers are wasting time fixing and re-fixing bugs (and in many cases, becoming increasingly frustrated) rather than contributing to the team’s true velocity. Simultaneously, testers are wasting time retesting and tracking easily identifiable defects, which increases risk by reducing the time they have to test more complex code and scenarios.
So there is our conundrum. In a nutshell, we’re wasting time and team members are unhappy. Check back for a follow-up post next week, which will provide a simple yet effective solution to this unfortunately all-too-common issue in many of today’s software development practices.