Social Contracts and the Agile Team – Part 1
It’s a scenario we’ve all been a part of before. To shake things up, your Agile teams are being restructured. After the initial shuffle, the team gets together for a first meeting to figure out how it is going to work. Introductions are made, experiences are shared. Maybe a team lead is named. It’s a heady time full of expectations. Following the cycle of Forming-Storming-Norming-Performing, phase one is off to a good start.
At the first team retro, a better understanding of what everyone brings to the team starts to take shape. Relationships and communications within the team, as well as other players within the organization, take root. The team also starts to get a sense of where there are some gaps. Maybe it’s a misunderstanding of how code reviews work, or how cards are pointed. Storming has happened, and the team is ready to begin the transition to the Norming phase.
I’d suggest that team norms, which tend to be prescriptive in nature, fall short of what stakeholders hope they will achieve. A social contract is a better concept to work towards.
A social contract is a team-designed agreement on an aspirational set of values, behaviors and social norms. It sets not only expectations but also responsibilities. Instead of focusing on how individual team members should approach the work of the team and organization, it lays out the responsibilities of the team members to each other. It also lays out the responsibilities and expectations between the team and the organization.
What would this type of contract look like? It should call out both sides of a relationship. An example of part of a social contract may look like this:
- The Team promises to treat delivering value through working software, as defined by the Product Owner, as its highest goal to the organization
- The Team promises to raise any obstacles preventing them from delivering value immediately
- The Organization promises to address and remove obstacles in a timely manner to the best of their ability
- The Organization promises to maintain reasonable stability of the team so that it has the opportunity to mature and reach its highest potential
In the spirit of the social contract, this should be discussed and brainstormed with open minds and constructive dialogue between both sides of the social equation. In truly Agile fashion, it should also be considered an iterative process, and reviewed from time to time to ensure the social contract itself is providing value.
PowerApps is one of the most recent additions to the Microsoft Office suite of products. PowerApps has been marketed as “programming for non-programmers”, but make no mistake: the seamless interconnectivity PowerApps has with other software products allows it to be leveraged in highly complex enterprise applications. Basic PowerApps functionality is included in an Office 365 license, but for additional features and advanced data connections, a plan must be purchased. When I was brought on to CC Pace as an intern to assist with organizational change regarding SharePoint, I assumed that the old way of using SharePoint Designer and InfoPath would be the framework I would be working with. However, as I began to learn more about PowerApps and CC Pace’s specific organizational structure and needs, I realized that it was essential to work with a framework geared towards the future.
Solutions with PowerApps
As Big Data and data warehousing become common practice, data analytics and organized data representation become more and more valuable. At the small to medium organizational scale, bringing together scattered data stored across a wealth of different business applications and software options has been extremely difficult. This is one of the areas where PowerApps can create immense value for your organization. Rather than forcing an extreme and expensive organizational change where everyone submits expense reports, recruitment forms, and software documentation to a brand-new custom database management system, PowerApps can be used to pull data from its varying locations and organize it.
PowerApps is an excellent solution for data entry applications, and this is the primary domain I’ve been working in. A properly designed PowerApp allows the end user to easily manipulate entries from all sorts of different database management systems. Create, Read, Update, Delete (CRUD) applications have been around and necessary for a long time, and PowerApps makes it easy to create these types of applications. Input validation and automated checks can even help to prevent mistakes and improve productivity. If your organization is constantly filling out their purchase orders with incorrectly calculated sales tax, a non-existent department code, or forgetting to add any number of fields, PowerApps allows some of those mistakes to be caught extremely early.
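PowerApps expresses checks like these in its own formula language, but the underlying validation logic is the same in any stack. Here is a minimal sketch of that logic in Python; the field names, department codes, and 6% tax rate are all hypothetical, chosen only to mirror the purchase-order example above.

```python
# Illustrative sketch of input validation for a purchase-order entry form.
# Field names, department codes, and the tax rate are hypothetical.
VALID_DEPT_CODES = {"FIN", "HR", "IT"}
TAX_RATE = 0.06

def validate_purchase_order(order: dict) -> list:
    """Return a list of validation errors; an empty list means the order is clean."""
    errors = []
    # Catch missing/blank fields first, like a required-field check on a form.
    for field in ("subtotal", "sales_tax", "dept_code"):
        if field not in order or order[field] in (None, ""):
            errors.append("missing field: " + field)
    if errors:
        return errors
    # Automated check: the entered sales tax must match the calculated amount.
    expected_tax = round(order["subtotal"] * TAX_RATE, 2)
    if abs(order["sales_tax"] - expected_tax) > 0.005:
        errors.append("sales tax should be " + str(expected_tax))
    # Automated check: the department code must actually exist.
    if order["dept_code"] not in VALID_DEPT_CODES:
        errors.append("unknown department code: " + order["dept_code"])
    return errors

print(validate_purchase_order(
    {"subtotal": 100.0, "sales_tax": 5.00, "dept_code": "ENG"}))
```

Running a rule set like this as the user types is what lets those mistakes get caught extremely early, before the record ever reaches the database.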
Integration with Flow (the successor to SharePoint Designer workflows) allows for even greater flexibility with PowerApps. An approval email can be generated to prevent mistakes from being entered into a database management system, push notifications can be triggered when PowerApps actions are taken; the possibilities are (almost) endless.
Pros and Cons
There are both advantages and disadvantages to leveraging software in an enterprise solution that is still under active development. One of the disadvantages is that many of the features a user might expect to be included aren’t possible yet. While Flow integration with PowerApps is quite powerful, it is missing several key features, such as the ability to attach a document directly from PowerApps, or to write data over multiple records at a time (i.e., add multiple rows to a SQL database). Additionally, I would not assume that PowerApps is an extremely simple, programming-free solution for business apps. Knowledge of the different data types as well as the use of functions gives PowerApps a steep learning curve. While you may not be writing any plaintext code, other than HTML, PowerApps still requires a good amount of knowledge of technology and programming concepts.
The main advantage of PowerApps being new software is just that: it’s brand-new software. You may have heard that PowerApps is currently on track to replace, at least partially, the now-shelved InfoPath project. InfoPath may continue to work until 2026, but without any new updates to the program, it may become obsolete on newer environments well before then. Here at CC Pace, we focus on innovation and investing in the solutions of tomorrow, and using PowerApps internally rather than building on a soon-to-be-unsupported InfoPath framework was the right choice.
As a programmer and cybersecurity enthusiast, creating pieces of enterprise systems is something I never knew I would be so interested in. I’m Niels Verhoeven, a summer IT Intern at CC Pace Systems. I study Information Systems with a focus on Cybersecurity Informatics at University of Maryland, Baltimore County. My experiences at CC Pace and my programming background have given me quite a bit of insight into how users, systems, and business can fit together, improving productivity and quality of work.
I’m in the process of reading a book on Agile data warehouse design titled, appropriately enough, Agile Data Warehouse Design, by Lawrence Corr.
While Agile methodologies have been around for some time – going on two decades – they haven’t permeated all aspects of software design and development at the same pace. It’s only in recent years that Agile has been applied to data warehouse design in any significant way.
I’m sure many Agile consultants have worked on projects in the past where they were asked to come up with a complete design up-front. That’s true of data warehouse projects too, where a client’s database team wanted the entire schema designed up-front – even before the requirements for the reports the data warehouse would be supporting were identified. What appeared to be driving the design was not the business and its report priorities, but the database team and its desire to have a complete data model.
While Agile Data Warehouse Design introduces some new methods, it emphasizes a common-sense approach that is present in all Agile methodologies. In this case, build the data warehouse or data mart one piece at a time. Instead of thinking of the data warehouse as one big star schema, think of it as a collection of smaller star schemas – each one consisting of a fact table and its supporting dimension tables.
The book covers the basics of data warehouse design, including an overview of fact tables, dimension tables, how to model each and, as mentioned, star schemas. The book stresses the 7 Ws when designing a data warehouse – who, what, where, when, why, how and how many. These are the questions to ask when talking to the business to come up with an appropriate design. “How many” applies to fact tables, while the other questions apply to dimension table design.
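To make the fact/dimension split concrete, here is a minimal star schema sketched with Python’s built-in sqlite3 module. The table and column names are my own invented illustration, not from the book: dim_customer answers “who”, dim_date answers “when”, and the fact table’s quantity column answers “how many”.

```python
import sqlite3

# One small star: a fact table plus its supporting dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, full_date TEXT);
    CREATE TABLE fact_sales (
        customer_id INTEGER REFERENCES dim_customer(customer_id),  -- who
        date_id     INTEGER REFERENCES dim_date(date_id),          -- when
        quantity    INTEGER,                                       -- how many
        amount      REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Acme Corp');
    INSERT INTO dim_date VALUES (20240101, '2024-01-01');
    INSERT INTO fact_sales VALUES (1, 20240101, 5, 49.95);
""")

# Reports join the central fact table out to each dimension.
row = conn.execute("""
    SELECT c.name, d.full_date, SUM(f.quantity)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    JOIN dim_date d     ON d.date_id = f.date_id
    GROUP BY c.name, d.full_date
""").fetchone()
print(row)  # ('Acme Corp', '2024-01-01', 5)
```

A full warehouse built this way is simply a collection of such stars, each delivered one piece at a time as the corresponding business process is modeled.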
Agile Data Warehouse Design stresses collaboration with the business stakeholders, keeping them fully engaged so that they feel like they are not just users, but owners of the data. Agile Data Warehouse Design focuses on modeling the business processes that the business owners want to measure, not the reports to be produced or the data to be collected.
I still have a way to go before I’ve finished the book and then applied what I’ve learned, but so far, it’s been a worthwhile learning experience.
“Your Majesty,” [German General Helmuth von] Moltke said to [Kaiser Wilhelm II] now, “it cannot be done. The deployment of millions cannot be improvised. If Your Majesty insists on leading the whole army to the East it will not be an army ready for battle but a disorganized mob of armed men with no arrangements for supply. Those arrangements took a whole year of intricate labor to complete”—and Moltke closed upon that rigid phrase, the basis for every major German mistake, the phrase that launched the invasion of Belgium and the submarine war against the United States, the inevitable phrase when military plans dictate policy—“and once settled it cannot be altered.”
Excerpt From: Barbara W. Tuchman. “The Guns of August.”
In my spare time, I try to read as much as I can. One of my favorite topics is history, and particularly the history of the 20th century as it played out in Europe with so much misery, bloodshed, and finally mass genocide on an industrial scale. Barbara Tuchman’s book, The Guns of August, deals with the factors that led to the outbreak of WWI. Her thesis is that the war was not in any way inevitable. Rather, it was forced on the major powers by the rigidity of their carefully drawn up war plans and an inability to adjust to rapidly changing circumstances. One by one, like dominos falling, the Great Powers executed their rigid war plans and went to war with each other.
Although the consequences are far less severe, I occasionally see the same thing happen on projects, and not just software projects. A lot of time, perhaps appropriately, perhaps not, is spent in planning. The output of the planning process is, of course, several plans. Inevitably, after the project runs for a short while, the plans begin to diverge from reality. Like the Great Powers in the summer of 1914, project leadership sees the plans as destiny rather than guides. At all costs, the plans must be executed.
Why is this? I believe it stems from the fallacy of sunk cost: we’ve spent so much time on planning and coming up with the plans, it would be too expensive now to re-plan. Instead, let’s try to force the project “back on plan”. Because of the sunk cost of generating the plans, too much weight is placed upon them.
Hang on, though. I’ve played up the last part of the quote above – the part that emphasizes the rigidity of thinking that von Moltke and the German General Staff displayed. What about the first part of his statement? Isn’t it true that “the deployment of millions cannot be improvised”? Indeed it is. And that’s true in any non-trivial project as well. You can’t just start a large software project and hope to “improvise” along the way. So now what?
I believe there’s great value in the act of planning, but far less value in the plans themselves. Going through a process of planning and thinking about how the project is supposed to unfold tells me several things. What are the risks? What happens at each major point of the project if things don’t go as planned? What will be the consequences? How do we mitigate those consequences? This kind of contingency planning is essential.
Here’s how I usually do contingency planning for a software development project. Note that I conduct all these activities with as much of the project team present as is feasible to gather. At a minimum, I include all the major stakeholders.
First, I start with assumptions, or constraints external to the project. What’s our budget? When must the product be delivered? Are there non-functional constraints? For example, an enterprise architecture we must embed within, or data standards, or data privacy laws?
Next, I begin to challenge the assumptions. What if we find ourselves going over budget? Are we prepared to extend the delivery deadline to return to budget? I explore how the constraints play off against each other. Essentially, I’m prioritizing the constraints.
Then comes release planning. I try to avoid finely detailed requirements at this point. Rather, we look at epics. We try to answer the question, “What epics must be implemented by what time to generate the Minimum Viable Product (MVP)?” Again, I challenge the plan with contingencies. What if X happens? What about Y? How will they affect the timeline? How will we react?
I don’t restrict this planning to timelines, budgets, etc. I do it at the technical level too. “We plan to implement security with this framework. What are the risks? Have we used the framework before? If not, what happens if we can’t get it to work? What’s our fallback?”
The key is to concentrate not just on coming up with a plan, but on knowing the lay of the land. Whatever the ideal plan that comes out of the planning session may be, I know that it will quickly run into trouble. So I don’t spend a lot of time coming up with an airtight plan. Instead, I build up a good idea of how the team will react when (not if) the plan runs aground. Even if something happens that I didn’t think of in the planning, I can begin to change my plan of attack while keeping the fixed constraints in mind. I have a framework for agility.
Never forget this: when the plan diverges from reality, it’s reality that will win, not the plan. Have no qualms about discarding at least parts of the plan and be ready to go to your contingent plans. Do not let “plans dictate policy”. And don’t stop planning – keep doing it throughout the project. No project ever becomes risk-free, and you always need contingencies.
Outside of my work at the MSRB for CC Pace, I enjoy working with community organizations in Fairfax County. After eight years of running the Jefferson Manor Civic Association, I was named Chairman of the Lee District Assoc. of Civic Orgs (LDACO). This is an organization focused on improving communications between residents in the Lee District section of Fairfax and the elected officials and staff of the county. I had been involved with the board of directors for several years, and was bursting at the seams with ideas on how to build on the foundation I had inherited.
The nearly unlimited options quickly brought about a familiar end result – paralysis. Ideas ranged from simple house-keeping to epic public festivals. It was, to be honest, complete chaos.
Thankfully, at the time I was working on my understanding of Agile techniques and how they applied to my work situation. A key source for this was “The Nature of Software Development” by Ron Jeffries. As the pages flew by, the point of always prioritizing value became clear. How could this perspective be applied to running an advocacy group for civic associations? The clouds parted and the way forward became clear.
Prioritize by what would provide immediate value to the organization and its members. Again referring to Jeffries, I used the “Five Card Method” to determine which ‘epics’ to tackle first. The idea is to pick three to five big-ticket items that will provide immediate value, and focus on breaking those down into manageable pieces.
How do we determine what our members find valuable? Ask them. A review of LDACO’s contact list showed that it was incomplete and in some cases, outdated. We had no social media outreach, either. Improved, direct communication became epic #1.
Talking with leaders in other communities, as well as long-standing members of LDACO, I learned that folks needed a longer lead time to plan on attending our meetings. Epic #2 was to provide a calendar of events at least 90 days out.
Lastly, LDACO learned that our members wanted a district and county-wide focus for meetings and speakers. While having a very narrow topic may provide value for a single community, it did not translate to the diverse group as a whole. Epic #3 was to aim for big, broad topics with speakers who were involved in the decisions that impacted the largest number of communities.
These three epics became the main focus of LDACO’s work for the past year. Each was broken down into smaller, achievable pieces, then worked on and completed. In the past year, we have grown our communication list, begun to build a social media presence, and increased our attendance and membership through meetings with important stakeholders. All because we kept the focus on what provided value for our customers.
Picture this: you’ve recently been hired as the CIO of a start-up company. You’ve been tasked with producing the core software that will serve as the lynchpin for allowing the company’s business concept to soar and create disruption in the market. (Think Amazon, Facebook, or Uber.) Lots of excitement combined with a huge amount of pressure to deliver. You’ve got many decisions to make, not the least of which is whether to build an in-house team to develop the core system or to outsource it to a software development consultancy. So, how do you decide, and to whom do you turn if you do opt to outsource?
CC Pace is one of the software development consultancies that a company might turn to, as we focus on the start-up market. Developing greenfield systems for an innovative new company is an environment that our development team members greatly enjoy.
A question that has been posed to me fairly frequently is: why would a start-up company outsource their software development? While I had my own impressions, I decided to pose the question to some CIOs of the start-up companies we’ve worked with, along with some CEOs who ultimately signed-off on this critical decision.
The answers I received contained a common theme – neither approach is necessarily better, and the proper decision depends on your specific circumstances. With some of these circumstances being inter-related, here are the four primary factors that I heard the decision should depend on:
- Time-to-Market – It takes time to assemble a quality team, often up to 6 months or more. Even then, it will take some additional time for this group to jell and perform at its peak. As such, the shorter the time-to-market that the business needs to have the initial system delivered, the more likely you would lean toward an outsourced approach. Conversely, if there is less time sensitivity, it makes sense to put in an in-house team. This team will be able to not only deliver the initial system, but they will also be able to handle future support and development needs without requiring a hand-off.
- Workload Peak – For some new businesses, the bulk of the system requirements will be contained in the initial release(s) while others will have a steady, if not growing, stream of desired functionality. If the former, hiring up to handle the initial peak workload and then having to down-size is not desirable and can be avoided with the outsourced model. On the other hand, a steady stream of development requirements for the foreseeable future would cause you to lean towards building an in-house team from the start.
- Availability of Resources – While there is a scarcity of good IT resources seemingly everywhere, certain markets are definitely tighter than others. In addition, some CIOs have a greater network of talent that they know and could more easily tap than others. The scarcer your resource availability, the more likely you would lean toward calling upon outsourced providers. Conversely, if you have ready access to quality talent, take advantage of that asset.
- CIO Preference – Finally, some CIOs just have a particular preference of one approach over the other. This may simply be a result of how they’ve worked predominately in the past and what’s been successful for them. So, going down a path that’s been successful is a logical course to take. Interestingly, one CEO commented that his decision in choosing a CIO would be that person’s ability to make these types of decisions based upon the business needs and not personal preference.
I would love to hear from anyone who has been (or will be) involved in this type of decision, either from the start-up side or the consulting provider side, as to whether this jibes with your experience and thinking. The one variable that wasn’t mentioned by anyone as a factor was cost. That surprised me a lot, and I’d also welcome any of your thoughts as to why this wasn’t mentioned as a factor.
Up, down, Detroit, charm, inside out, strange, London, bottom up, outside in, mockist, classic, Chicago….
Do you remember the questions on standardized tests where they asked you to pick the thing that wasn’t like the others? Well, this isn’t a fair example as there are really two distinct groups of things in that list, but the names of TDD philosophies have become as meaningless to me as the names of quarks. At first I thought I’d use this post to try to sort it all out, but then I decided that I’m not the Académie française of TDD school names and I really don’t care that much. If the names interest you, I can suggest you read TDD – From the Inside Out or the Outside In. I’m not convinced that the author has all the grouping right (in particular, I started learning TDD from Kent Beck, Ron Jeffries and Bob Martin in Chicago, which is about as classic as you can get, and it was always what she calls outside in but without using mocks), but it’s a reasonable introduction.
Still, it felt like it was time to think about TDD again, so instead I went back to Ron Jeffries’ Thoughts on Mocks and a comment he made on the subject in his Google Groups forum. In the posting, Ron speculated that architecture could push us toward a particular style of TDD. That feels right to me. He also suggested that writing systems that are largely “assemblies of OPC (Other People’s Code)” “are surely more complex” than the monolithic architectures that he’s used to from Smalltalk applications, and that complexity might make observing the behavior of objects more valuable. That idea puzzles me more.
My own TDD style, which is probably somewhere between the Detroit school, which leans towards writing tests that don’t rely on mocks, and the London school, which leans towards using mocks to isolate each unit of the application, definitely evolved as a way to deal with the complexity I faced in trying to write all my code using TDD. When I first started out, I was working on what I believe would count as a monolithic application in that my team wrote all the code from the UI back to right before the database drivers. We started mocking out the database not particularly to improve the performance of the tests, but because the screens were customizable per user, the data for which was in a database, and the actual data that would be displayed was stored across multiple tables. It was really quite painful to try to get all the data set up correctly, and we had to keep a lot more stuff in mind when we were trying to focus on getting the configurable part of the UI written. This was back in 1999 or 2000, and I don’t remember if someone saw an article on mocking, but we did eventually light on the idea of putting in a mock object that was much easier to set up than the actual database. In a sense, I think this is what Ron is talking about in the “Across an interface” section of his post, but it was all within our code. Could we have written that code more simply to avoid the complexity to start with? It was a long time ago and I can’t say whether or not I’d take the same approach now to solving that same problem, but I still do find a lot of advantages in using mocks.
I’ve been wanting to try using a NoSQL database, and this seemed like a good opportunity to both try that technology and, after I read Ron’s post, try writing it entirely outside-in, which I always do anyway, and without using mocks, which is unusual for me. I started out writing my front-end using TDD and got to the point that I wanted to connect a persistence mechanism. In a sense, I suppose the simplest thing that could possibly work here would have been to keep my data in a flat file or something like that, but part of my purpose was to experiment with a NoSQL database. (I think this corresponds to the reasonably common situation of “the enterprise has Oracle/MS SQL Server/whatever, so you have to use it.”) I therefore started with one of the NoSQL implementations for .NET. Everything seemed fine for my first few unit tests. Then one of my earlier tests failed after my latest test started passing. Okay, this happens. I backed out the code I’d just written to make sure the failing test started passing, but the same test failed again. I backed out the last test I’d written, too. Now the failing test passed but a different one failed. After some reading and experimentation, I found that the NoSQL implementation I’d picked (honestly without doing a lot of research into it) worked asynchronously, and it seemed that I’d just been lucky with timing before the tests started randomly failing. Okay, this is the point that I’d normally turn to a mocking framework and isolate the problematic stuff to a single class that I could either put the effort into unit testing or else live with it being tested through automated customer tests.
Because I felt more strongly about experimenting with writing tests without using mocks than with using a particular NoSQL implementation, I switched to a different implementation. That also proved to be a painful experience, largely because I hadn’t followed the advice I give to most people using mocks, which is to isolate the code for setting up the mock into an individual class that hides the details of how the data is set up. Had I been following that precept now that I was accessing a real persistence mechanism rather than a mock, I wouldn’t have needed to change my tests to the same degree. The interesting thing here was that I had to radically change both the test and the production code to change the backing store. As I worked through this, I found myself thinking that if only I’d used a mock for the data access part, I could have concentrated on getting the front-end code to do what I wanted without worrying about the persistence mechanism at all. This bothered me enough that I finally did end up decoupling the persistence mechanism entirely from the tests for the front-end code, letting me focus on one thing at a time instead of having to deal with the whole thing at once. I also ended up giving up on the NoSQL implementation for a more familiar relational database.
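The kind of isolation I’m describing can be sketched briefly. My actual code was on the .NET stack, so this is a hedged translation into Python using unittest.mock, with hypothetical class and method names; the point is only the shape of the test, not the specific API.

```python
from unittest.mock import Mock

# Hypothetical front-end class that depends on a repository abstraction
# rather than on any particular persistence mechanism.
class TaskListPresenter:
    def __init__(self, repository):
        self.repository = repository

    def open_task_titles(self):
        # High-level behavior under test: ask for open tasks, show titles.
        return [t["title"] for t in self.repository.find_tasks(status="open")]

# The mock stands in for the database, so the asynchronous timing quirks
# and setup pain of a real store can't intrude on the front-end test.
repo = Mock()
repo.find_tasks.return_value = [{"title": "Write blog post", "status": "open"}]

presenter = TaskListPresenter(repo)
assert presenter.open_task_titles() == ["Write blog post"]
repo.find_tasks.assert_called_once_with(status="open")
```

Notice that the test both checks the parameters sent to the collaborator and verifies that the return value was used correctly; swapping the backing store later would only require a new implementation of the repository, not changes to this test.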
So, where does all this leave my thoughts on mocks? Ron worried in his forum posting that using mocks creates more classes than testing directly and thus makes the system more complex. I certainly ended up with more classes than I could have, but that’s the lowest priority in Kent Beck’s criteria for simple design. Passing the tests is the highest priority, and that’s the one that became much easier when I switched back to using mocks. In this case, the mocks isolated me from the timing vagaries of the NoSQL implementations. In other cases, I’ve also found that they help isolate me from other random elements, like other developers running tests that happen to modify the same database tables my tests are using. I also felt like my tests became much more intention-revealing when I switched to mocks, because they talked in terms of the high-level concepts that the front-end code dealt with instead of the low-level representation of the data that the persistence code needed to know about. This made me realize that the hard part was caused by the mismatch between the way the persistence mechanism (either a relational database or the document-oriented NoSQL database that I tried) represented the data and the way I thought of the data in my code. I have a feeling that if I’d just serialized my object graph to a file or used an object-oriented database instead of a document-oriented database, that complexity would go away. That’s for a future experiment, though. And, even if it’s true, I don’t know how much I can do about it when I’m required to use an existing persistence mechanism.
Ron also worried that the integration between the different components is not tested when using mocks. As Ron puts it in his forum message: “[T]here seems to be a leap of faith made: it’s not obvious to me that if we know that A sends the right messages to Mock B, and B sends the right messages to Mock A, A and B therefore work. There’s an indirection in that logic that makes me nervous. I want to see A and B getting along.” I don’t think I’ve ever actually had a problem with A and B not getting along when I’m using mocks, but I do recall having a lot of problems with it when I had to map between submitted HTML parameters and an object model. (This was back when one did have to write such code oneself.) It was just very easy to mistype names on either side and not realize it until actual user testing. This is actually the problem that led us to start doing automated customer testing. Although the automated customer tests don’t always have as much detail as the unit tests, I feel like they alleviate any concerns I might have that the wrong things are wired together or that the wiring doesn’t work.
It’s also worth mentioning that I really don’t like the style of using mocks that really just checks whether a method was called rather than whether it was used correctly. Too often, I see test code like:
mock.Stub(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything)).Return(0);
mock.AssertWasCalled(m => m.Foo(Arg.Is.Anything, Arg.Is.Anything));
I would never do something like this for a method that actually returns a value. I’d much rather set up the mock so that I can recognize that the calling class both sent the right parameters and correctly used the return value, not just that it called some method. The only time I’ll resort to asserting a method was called (with all the correct parameters), is when that method exists only to generate a side-effect. Even with those types of methods, I’ve been looking for more ways to test them as state changes rather than checking behavior. For example, I used to treat logging operations as side-effects: I’d set up a mock logger and assert that the appropriate methods were called with the right parameters. Lately, though, with Log4Net, I’ve been finding that I prefer to set up the logger with a memory appender and then inspect its buffer to make sure that the message I wanted got logged at the level I wanted.
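The log4net memory-appender approach has a close analogue in Python’s standard logging module, so here’s a hedged sketch of the same state-based idea there. The handler class and logger name are my own; the point is asserting on captured state rather than on whether a logger method was called.

```python
import logging

# A tiny handler that captures log records in memory, playing the same role
# as log4net's memory appender: we assert on state, not on method calls.
class ListHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)

# Code under test would log somewhere in here.
logger.warning("disk space low")

# State-based check: the message we wanted was logged at the level we wanted.
assert handler.records[0].getMessage() == "disk space low"
assert handler.records[0].levelname == "WARNING"
```

Compared with mocking the logger and asserting that a method was called, this test keeps working even if the code under test switches logging calls around, as long as the right message still lands in the buffer at the right level.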
In his Forum posting, Ron is surely right in saying about the mocking versus non-mocking approaches to writing tests: “Neither is right or wrong, in my opinion, any more than I’m right to prefer BMW over Mercedes and Chet is wrong to prefer Mercedes over BMW. The thing is to have an approach to building software that works, in an effective and consistent way that the team evolves on its own.” My own style has certainly changed over the years and I hope it will continue to adapt to the circumstances in which I find myself working. Right now I find myself working with a lot of legacy code that would be extremely hard to get under test if I couldn’t break it up and substitute mocks for collaborators that are nearly impossible to get set up correctly. Hopefully I’ll also be able to use mocks less, as I find more projects that allow me to avoid the impedance between the code’s concept of the model and that of external systems.
Uber. Eight years ago, the company did not exist and the word was simply a rarely used adjective of German origin meaning “ultra”, like an uber intellectual. Today, Uber has become one of the most successful startups in history and the word has become a commonplace verb in English parlance. Transcending to “verb” status puts Uber in the highly exclusive class of innovative business disrupters like Google and FedEx whose business names and processes have become synonymous with an action that didn’t previously exist but is now done on a regular basis. Who today wouldn’t understand what actions you had taken if you said, “I quickly googled the address for the nearest drop-off spot and uber-ed over there so I could fed-ex my package out on time”?
Uber owns no cars, has no drivers, and has minimal fixed assets. Instead, they created incredibly user-friendly software that improves aspects of the taxi industry we didn’t know needed improvement. Not surprisingly, the company’s full legal name is Uber Technologies, Inc. While the only technology typically found in a traditional taxi cab is the decades-old meter clicking away the increments of the cost of your ride, the Uber software provides new value to both the driver and the customer with useful information such as the locations of both parties, a time estimate for pick-up, exact pricing, car options, driving directions, and much more.
By creating this simple way to get a ride, Uber has reached another pinnacle accomplishment whereby the creativity of its business model has become a noun: uber-fication. According to Dr. Paul Marsden in his Reading Room article, The Uberfication of Everything, “…the real genius of Uber lies in a deep understanding of convenience – what it is and why it matters. That’s what Uberfication is all about; pivoting your business to deliver on a core under-exploited consumer need – convenience”.
One thing that every startup has is a dream and a vision. But, let’s be honest, that simply isn’t enough to successfully build a booming new business like Uber. You need the right partners, you need money, and you need passion for the project at hand. We believe we can help in all these areas, which led us to formalize an offering exclusively for startups.
When I formed CC Pace nearly 36 years ago, I was driven by a vision of a new model for a consulting company – one where integrity and the client’s best interest were ingrained in the firm’s culture and successful delivery could almost be guaranteed by the quality, drive and teamliness of the employees who worked there. While my dream may not have been as wide-reaching as Uber, when I think back to that time, I just remember energy, excitement and that ‘anything is possible’ feeling. Over the years, we’ve been very fortunate to work with clients in all phases, from startups to Fortune 500 organizations – all of which we value a great deal. I get excited to work with clients of all sizes, but there is something about working with startups that brings about an energy you can’t replicate in other environments. Being a part of someone else’s vision coming to life brings me right back to where I stood over 35 years ago, and it is an environment in which I’ve seen our project teams thrive.
Our experience working with startups, combined with our project teams’ passion, has led us to formalize an offering to help startups get off the ground with the right technology. To enable us to work on more of these types of efforts, we are officially launching a new risk/reward program for startups. Here, we combine our technical prowess with our business acumen to deliver software that fully and effectively supports the startup’s vision. The premise of our offering is to build the technological platform for your business with less cash required up front. In exchange for this discount, we agree upon a fair share of some downstream benefits of your startup, reflective of the risk we take.
If you like the idea of maintaining control of your vision while paying less up front to get the results you need, then I would love to hear from you. Interesting companies with challenging technology needs have been a driver for us for over 35 years. For this reason, we are confident that we can help enable your dream. After all, it’s only a matter of time before the next “Uber” shocks the world.
For more information on the risk/reward program, check out our offering here.
In my personal experience working on various software development projects, the concept of team energy often appears to be either undervalued or benignly ignored by management teams. The reasons are many. First of all, the term may be confused with “team velocity” which is a relative measurement of a team’s average output or productivity. If the velocity appears to be at either predictable or positive levels, the management team may choose to believe that team energy is also at satisfactory levels. Organizations may attempt to boost employee morale by putting together team-building exercises and outing events. This macro approach may result in generating a perceived positive effect on team energy, thus obscuring the need for focus on individual teams. So, what is team energy and why should managers consider devoting some attention to it?
When thinking in terms of team energy, one can look at it as building credit with each individual team, or as maintaining a rainy-day fund. From a management standpoint, it is important to keep team energy in positive territory. This helps ensure that the team will empower itself to exceed expectations, as well as step up during times of crisis or high-pressure situations. I have worked on high-energy teams in which members voluntarily pushed themselves past regular working hours to produce deliverables. These cases did not involve any direct increase in compensation or promotion; people naturally wanted to succeed because they possessed enough energy to do so. I have also witnessed the opposite, where a team’s energy was low, deliverables were in a perpetual state of tardiness and the backlog was steadily accruing bugs. Developers and testers did not feel empowered to succeed and entered a cycle of doing the absolute bare minimum to “get management off their backs”.
Science behind building teams
Agile methodologies, whether Scrum or Kanban, prescribe various techniques focused on continuous improvement that may positively affect team energy. Regardless of whether an organization has truly embraced Agile, it is difficult to find managers who would oppose efforts to improve a team’s ability to deliver faster and at higher value. After all, who is against a boost in productivity? But there is a hidden psychological component to continuous improvement that has a causal effect on team energy, and it is tied to the experience of individual team members.
Studies of team dynamics, such as those conducted by MIT’s Human Dynamics Laboratory and documented in Harvard Business Review, suggest that there is a science to building high-performing, high-energy teams. One of the keys is to focus on the human brain and the social dynamics of a group. Your teams may be composed of introverts as well as extroverts and a wide range of personalities, but there is a common factor that seems to persist. The studies show that humans feel good when they achieve their goals and overcome obstacles; the human brain actually rewards its owner with extra dopamine when a goal is achieved. When the team feels good more often than not, team energy goes up; when the opposite occurs, it goes down. Therefore, focusing on small, achievable goals not only helps the organization break its deliverables into manageable pieces, but also fosters this psychological benefit of achievement for each individual team member.
An organization may choose to periodically measure team energy. One way to do so is through anonymous surveys. Usually this is done at the enterprise level to gauge overall organizational energy. There is certainly value in doing that, but the effort is not focused and may not necessarily apply to individual teams. Small teams may not produce very accurate results: there may be disincentives to be frank when answering a survey, because team members may feel singled out and fear reprisals from management, and more introverted team members may choose not to “rock the boat”. A more effective and team-focused approach is to have an Agile coach periodically take team energy measurements. An opportune time may be during team retrospectives, when a team is usually more receptive to being candid. Most importantly, these measurements do not need to be secretly stored in a manager’s vault but should be shared with the team. Adding transparency to the team-building and management process will not only increase team energy, but also foster leadership skills among the more proactive and extroverted team members.
Building a new software product is a risky venture – some might even say adventure. The product ideas may not succeed in the marketplace. The technologies chosen may get in the way of success. There’s often a lot of money at stake, and corporate and personal reputations may be on the line.
I occasionally see a particular kind of team dysfunction on software development teams: the unwillingness to share risk among all the different parts of the team.
The business or product team may sit down at the beginning of a project, and with minimal input from any technical team members, draw up an exhaustive set of requirements. Binders are filled with requirements. At some point, the technical team receives all the binders, along with a mandate: Come up with an estimate. Eventually, when the estimate looks good, the business team says something along the lines of: OK, you have the requirements, build the system and don’t bother us until it’s done.
(OK, I’m exaggerating a bit for effect – no team is that dysfunctional. Right? I hope not.)
What’s wrong with this scenario? The business team expects the technical team to accept a disproportionate share of the product risk. The requirements supposedly define a successful product as envisioned by the business team. The business team assumes their job is done, and leaves implementation to the technical team. That’s unrealistic: the technical team may run into problems. Requirements may conflict. Some requirements may be much harder to achieve than originally estimated. The technical team can’t accept all the risk that the requirements will make it into code.
But the dysfunction often runs the other way too. The technical team wants “sign off” on requirements. Requirements must be fully defined, and shouldn’t change very much or “product delivery is at risk”. This is the opposite problem: now the technical team wants the business team to accept all the risk that the requirements are perfect and won’t change. That’s also unrealistic. Market dynamics may change. Budgets may change. Product development may need to start before all requirements are fully developed. The business team can’t accept all the risk that their upfront vision is perfect.
One of the reasons Agile methodologies have been successful is that they distribute risk through the team, and provide a structured framework for doing so. A smoothly functioning product development team shares risk: the business team accepts that technical circumstances may need adjustment of some requirements, and the technical team accepts that requirements may need to change and adapt to the business environment. Don’t fall into the trap of dividing the team into factions and thinking that your faction is carrying all the weight. That thinking leads to confrontation and dysfunction.
As leaders in Agile software development, we at CC Pace often encourage our clients to accept this risk sharing approach on product teams. But what about us as a company? If you founded a startup and you’ve raised some money through venture capital – very often putting your control of your company on the line for the money – what risk do we take if you hire us to build your product? Isn’t it glib of us to talk about risk sharing when it’s your company, your money, and your reputation at stake and not ours?
We’ve been giving a lot of thought to this. In the very near future, we’ll launch an exciting new offering that takes these risk sharing ideas and applies them to our client relationships as a software development consultancy. We will have more to say soon, so keep tuning in.
Senior IT managers starting a new project often have to answer the question: build or buy? Meaning, should we look for a packaged solution that does mostly what we need, or should we embark on a custom software development project?
Coders and application-level programmers also face a similar problem when building a software product. To get some part of the functionality completed, should we use that framework we read about, or should we roll our own code? If we write our own code, we know we can get everything we need and nothing we don’t – but it could take a lot of time that we may not have. So, how do we decide?
Your project may (and probably does) vary, but I typically base my decision on the distinction between infrastructure and business logic.
I consider code to be infrastructure if it exists to support the technology required to implement the product. Business logic, on the other hand, is core to the business problem being solved; it is the reason the product is being built.
Think of it this way: a completely non-technical Product Owner wouldn’t care how you solve an infrastructure issue, but would deeply care about how you implement business logic. It’s the easiest way to distinguish between the two types of problems.
Examples of infrastructure issues: do I use a relational or non-relational database? How important are ACID transactions? Which database will I use? Which transactional framework will I use?
Examples of business logic problems: how do I handle an order file sent by an external vendor if there’s an XML syntax error? How important is it to find a partial match for a record if an exact match cannot be found? How do you define partial?
Note that a business logic question could be technical in nature (XML syntax error) but how you choose to solve it is critical to the Product Owner. And a seemingly infrastructure-related question might constitute business logic – for example, if you are a database company building a new product.
After this long preamble, finally my advice: Strongly favor using existing frameworks to solve infrastructure problems, but prefer rolling your own code for business logic problems.
My rationale is simple: you are (or should be) expert in solving the business logic problems, but probably not the infrastructure problems.
If you’re working on a system to match names against a data warehouse of records, your team knows or can figure out all the details of what that involves, because that’s what the system is fundamentally all about. Your Product Owner has a product idea that includes market differentiators and intellectual property, making it very unlikely that an existing matching framework will fulfill all requirements. (If an existing framework does meet all the requirements, why is the product being developed at all?)
Secondly, the worst thing you can do as a developer is to use an existing business-logic framework “to make things simple”, find that it doesn’t handle your Product Owner’s requirements, and then start pushing back on requirements because “our technology platform doesn’t allow X or Y”. For any software developer with professional pride: I’m sorry, but that’s just weak sauce. Again, the whole point of the project is to build a unique product. If you can’t deliver that to the Product Owner, you’re not holding up your end of the bargain.
On the other hand, you are very likely not an expert on transactional frameworks, message buses, XML parsing technology, or elastic cloud clusters. Oracle, Microsoft, Amazon, etc., have large expert teams and have put their own intellectual property into their products, making it highly unlikely you’ll be able to build infrastructure that works as reliably and is as bug-free.
Sometimes the choice is harder. You need to validate a custom file format. Should you use an existing framework to handle validations or roll your own code? It depends, and it may not even be possible to tell when the need first arises. You may need to start with an existing framework and see how easy it is to extend and adapt. Later, if you find you’re spending more time extending and adapting than it would take to roll your own optimized code, you can change the implementation of your validation subsystem. Such big changes are much easier if you’ve consistently followed Agile engineering practices such as Test-Driven Development.
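The split can be sketched concretely. In this hypothetical Python example, the order format and the Product Owner’s “partial match” rule are invented for illustration: the XML parsing (infrastructure) leans on an existing library, while the matching rule (business logic) is hand-rolled:

```python
import xml.etree.ElementTree as ET  # infrastructure: use the proven parser

ORDER_XML = "<order><name>ACME Corp</name><qty>5</qty></order>"

def parse_order(xml_text):
    # Infrastructure problem: rely on an existing framework rather than
    # writing our own XML parser.
    root = ET.fromstring(xml_text)
    return {"name": root.findtext("name"), "qty": int(root.findtext("qty"))}

def match_customer(name, known_customers):
    # Business logic problem: what "partial match" means is defined by the
    # Product Owner (here, assumed: case-insensitive first-word prefix),
    # so we roll our own code and keep full control of the rule.
    name = name.lower()
    exact = [c for c in known_customers if c.lower() == name]
    if exact:
        return exact[0]
    prefix = name.split()[0]
    partial = [c for c in known_customers if c.lower().startswith(prefix)]
    return partial[0] if partial else None

order = parse_order(ORDER_XML)
print(match_customer(order["name"], ["Acme Corporation", "Widgets Inc"]))
```

If the Product Owner later redefines “partial”, only `match_customer` changes; the parsing layer, being someone else’s well-tested infrastructure, stays untouched.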
As always, apply a fundamental Agile principle to any such decision: how can I spend my programming time generating the most business value?
Recently, I had one of those rare moments when my son, a 3rd grader, seemed to understand at least part of what I do for work.
He was excited to tell me about the Code Studio site (studio.code.org) that they used during computer lab at school. The site introduces students to coding concepts in a fun and engaging way. Of course, it helps to sweeten the deal with games related to Star Wars, Minecraft and Frozen.
We chose Minecraft and started working through activities together. The left side of the screen is a game board showing the field of play: your character in a field surrounded by trees and other objects. The right side allows you to drag and drop colorful coding structures and operations related to the game (e.g. place block, destroy block, move forward) in a specific order. When you’re satisfied that you’ve built the program to complete the objective, you press the “Run” button and watch your character move through your instructions.
Anyone familiar with automated testing tools can relate to the joyful anticipation of seeing if your creation does the thing that it’s supposed to do (ok, maybe joy is a strong word, but it’s kind of fun). In our case, we anxiously watched as Steve, our Minecraft Hero, moved through our steps, avoiding lava and creepers along the way. If we failed and ended up taking a lava bath, we tried again. If we succeeded, the site would move us to the next objective, but also inform us that it’s possible to do it in fewer steps/lines of code (never good enough, huh?!).
Together, we were able to break down a complex problem into several smaller steps, which is a fundamental skill when building software incrementally. We also had to detect patterns in our code and determine the best way to reuse them given a limited set of statements and constructs. For my son, it was a fun combination of gaming and puzzle solving. For me, it was nice to return to the fundamentals of working through logic to solve a problem – no annoying XML configurations or git merges to deal with.
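The drag-and-drop blocks map directly onto the same ideas in a text language. A toy sketch in Python (the command names are made up to mirror the site’s blocks) shows the pattern-reuse step we kept practicing:

```python
def run_program(steps):
    """Toy interpreter: each step either moves our hero one square forward
    or places a block, and we record the path taken."""
    position, path = 0, []
    for step in steps:
        if step == "move forward":
            position += 1
            path.append(position)
        elif step == "place block":
            path.append(f"block@{position}")
    return path

# Without pattern reuse: every instruction spelled out by hand.
long_form = ["move forward", "place block",
             "move forward", "place block",
             "move forward", "place block"]

# With pattern reuse: a loop expresses the repeated pair once.
pattern = []
for _ in range(3):
    pattern += ["move forward", "place block"]

# Both programs reach the objective the same way,
# but the loop version has fewer "lines of code".
assert run_program(long_form) == run_program(pattern)
print(run_program(pattern))
```

Spotting that `move forward, place block` repeats three times, and collapsing it into a loop, is exactly the “do it in fewer steps” challenge the site poses.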
I’m a big supporter of the Hour of Code movement and similar initiatives that expose students to programming, but feel that there’s often an emphasis on funneling kids into STEM career paths. This is all well and good for those with the interest and aptitude, but coding also teaches you to patiently and steadfastly work through problems. This can be applied to many different careers. Chris Bosh, the NBA player, even wrote an article about his interest in coding.
I encourage students of all ages to check out the Code Studio site and Hour of Code events: https://hourofcode.com
Like Martin Fowler, I am a long-time Doctor Who fan. Although I haven’t actually gotten around to watching the new series yet, I’ve been going back through as much of the classic series as Netflix has available. Starting way back from An Unearthly Child several years ago, I’ve worked my way up through the late Tom Baker period. The quality of the special effects is often uppermost in descriptions of the classic series, but that doesn’t actually bother me that much. Truth be told, I feel that a lot of modern special effects have their issues, too. Sometimes modern effects are good and sometimes they’re bad; the bad effects are bad differently than those in the classic series, but at least the classic series didn’t substitute effects, good or bad, for story quality or acting ability. To be fair, the classic series also has its share of less than successful stories, but the quality on the whole has remained high enough that I keep going with it. (In my youth, I stopped watching shortly after Colin Baker became The Doctor, although I can’t remember if I became disenchanted with it or if my local PBS station stopped carrying it. I’m very curious to see what happens when I get past Peter Davison this time.)
I’ve also been very interested to see the special features that come with most of the discs. As a rule, I don’t bother with special features, as I find them either inane or annoying (particularly directors talking over their movies), but the features on the Doctor Who discs have mostly been worth my time. Each disc generally has at least one extra with some combination of the directors, producers, writers, designers and actors talking pretty frankly about what worked and didn’t work with the serial. I’ve learned a lot from these about making a television show at the BBC, and also about the constraints, both technological and budgetary, that affected the quality of the effects on the show. (Curiously, it also brought home the effects of inflation to me. I’d heard the stories of people going around with wheelbarrows full of cash in the Weimar Republic, but that didn’t have nearly as much impact on me as hearing someone talk about setting the budget for each show at the beginning of a season in the early 1970s and being able to get only half as much as expected by the time they were working on the last serial of the season. Perhaps because I create annual budgets myself, the scale is easier to relate to than descriptions of hyperinflation.)
While I am sure there are those who are fascinated by what I choose to get from Netflix (you know who you are 😊), I actually had a point to make here. I recently watched the Tom Baker episode The Creature from the Pit, which has a decent storyline but was arguably let down by the design of the creature itself. (There are those who argue it was just a bad story to start with, and those who argue that it was written as an anti-capitalist satire that the director didn’t understand.) As I watched the special feature in which Mat Irvine, the person in charge of the visual effects including the creature, and some of his crew talked about the problems involved in the serial, I realized that this was a lovely example of why we should value customer collaboration over contract negotiation. Apparently the script described the creature as “a giant, feathered (or perhaps scaled) slug of no particular shape, but of a fairly repulsive grayish-purplish colour…unimaginably huge. Anything from a quarter of a mile to a mile in length.” (Quote from here.) I suspect someone trying to realize this would have a horrible time of it even in today’s world of CGI effects, but apparently at the BBC at the time you just got your requirement and implemented it as best you could, even though both the director and the designer were concerned about its feasibility and could point to a previous failure to create a massive creature in The Power of Kroll.
Alas, neither the designer nor the director felt empowered to work with the writer or script editor (or someone – I’m unclear on who could actually have authorized a change) on the practical difficulties of creating a creature with such a vague description, resulting in what we tend to have a laugh at now. I don’t know what the result would have been if there had been more collaboration in the realization of this creature, but it seems to me that the size and shape of the creature were not really essential to the story. Various characters say that it eats people, although the creature vehemently denies this, and we see some evidence that it accidentally crushes people, but neither of these ideas requires a minimum size of a quarter of a mile. It seems the design could have gone a different way, one that was easier to realize, if everyone involved had really been able to collaborate on the deliverable rather than each group doing the best it could in a vacuum.
We recently ran into an issue with ASP.NET authentication that I thought I would share.
We’re running an ASP.NET MVC 5 web application that uses Microsoft ASP.NET Identity for authentication and authorization. We allow users to choose the built-in “Remember Me” option, which lets them log in automatically even if they close the browser. We first set this to expire after 2 weeks, after which they are forced to log in again (unless you’re using sliding expiration: https://msdn.microsoft.com/en-us/library/dn385548(v=vs.113).aspx).
When we first implemented this feature, users would complain that they had to keep logging into the site, despite checking the “Remember Me” option. We first made sure that within the Startup configuration, the CookieAuthenticationOptions.ExpireTimeSpan was explicitly set to 2 weeks (even though that is the default).
After some more troubleshooting and, of course, a stackoverflow hint, we discovered the problem.
What we checked
- We checked to make sure the browser cookie that contains the encrypted authentication token had an expiration date (instead of “Session” or “When the browsing session ends” in Chrome).
In our case the cookie contained the expected expiration date.
- We observed that the “Remember Me” automatic login worked throughout the day, but typically stopped working the next day.
So what happened every night that might trash or invalidate our authentication token? We discovered that the Application Pool for our IIS site was getting recycled nightly.
This alone did not seem to be the culprit, since a user should still be able to automatically login (if they choose to) even if the app pool restarts. Taken a step further, even if the server restarts, this functionality should still work. What about a load-balanced environment where the user doesn’t even hit the same web server? This should all be transparent and the “Remember Me” option should just work.
- We searched the web for scenarios where a user must relogin after a server restart and came across this article: http://stackoverflow.com/questions/29791804/asp-net-identity-2-relogin-after-deploy
This seemed to be the root of the problem – we needed to configure the machinekey settings in order to maintain consistent encryption and decryption keys, even after a server or app pool restart. If this is not set explicitly, new encryption/decryption keys are generated after each restart, which in turn invalidates all outstanding authentication tokens (and forces users to login again).
It turns out that it is easy to generate random keys in IIS, which will set the values directly in your web.config file:
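The element that ends up in web.config looks roughly like the fragment below. The key values here are placeholders: generate your own (via IIS’s Machine Key feature or another cryptographically secure generator) and never reuse published ones.

```xml
<system.web>
  <!-- Placeholder values shown; generate real keys and keep them secret. -->
  <machineKey
    validationKey="…your-generated-validation-key…"
    decryptionKey="…your-generated-decryption-key…"
    validation="HMACSHA256"
    decryption="AES" />
</system.web>
```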
Now we can store consistent keys in our web.config file and not worry about invalidating users’ authentication tokens, even if we restart, redeploy or recycle the app pool on a regular basis.
In a load-balanced, web-farm environment, this setting is critical: it allows users to bounce between load-balanced servers with the same authentication token. You just have to make sure that the same encryption/decryption keys are used on each web server (configured via the machineKey setting).
If you’re considering using Azure Web Apps, this is transparent since “Azure automatically manages the ASP.NET machineKey for services deployed using IIS”. More details here.
That is a subject for another day.
I once read a book, which shall remain nameless, that seemed to have a quota of illustrations per page: an average of one-third of each page covered by an illustration. It was a horrible read, made all the worse when I realized that the figures were often repeated to illustrate different concepts with only the captions changed.
Happily, although Ron Jeffries’ own illustrations manage to cover perhaps an average of a fifth of each page in his new book, The Nature of Software Development, they actually add value rather than just taking up space. Take the illustrations that Ron uses to introduce the idea of incremental software delivery. It’s very easy to see how people envision their final product in terms of the magic they believe it will be.
The very next illustration, though, helps us recast our thinking in terms of incremental delivery and how it lets us start deriving value from the product we’re building; another illustration then shows how we can use the feedback from earlier releases to produce something better than we’d originally imagined. Most of the time I honestly ignore illustrations and diagrams as useless, but Ron’s illustrations are generally thought-provoking, and I always found myself stopping to think about how they illuminated the text around them.
The first part of Nature covers “The Circle of Value,” understanding what value is, why we should try to deliver value incrementally and how we can build our product incrementally. The second part of Nature is entitled simply “Notes and Essays” and provides more detailed thoughts on some of the subjects touched on in the first part. One of my favorites was “Creating Teams That Thrive” where Ron reminds us that when the Product Champion, the term Ron is now favoring over Product Owner, brings defined solutions to the team they are less likely to feel a sense of responsibility and pride in the result.
Another nice essay was “Whip the Ponies Harder,” where Ron reminds us that trying to pressure a team into delivering faster can have deleterious effects. But this brings up a point I wanted to make about the book itself rather than what it says: If you’ve followed along with what Ron has been thinking over the years, in the various discussion groups in which he participates, on xprogramming.com/ronjeffries.com or at the various presentations and classes he’s done, there may not be anything new for you in Nature. Even if you’re in that situation, it’s probably worth getting the book anyway. There’s always the possibility that there will be new thoughts for you there, and, even if the ideas themselves aren’t new to you, their presentation in Nature can spur (sorry, the horse/unicorn drawings may be doing something to my language) you to think about them more deeply and also give you a new way to explain those ideas to other people.
A final word of warning: Early on Ron says “[Y]our job is to think a lot, while I write very little.” This reminded me of the time when someone told me they’d read XP Explained (the first edition) in a weekend and understood it all. After fifteen years, I’m still deepening my understanding of XP, mostly through trying to introduce its values to development groups that don’t necessarily understand what, if any, values they hold. Even though it’s a short book, give yourself time to really think about what is said in Nature, even after you’ve finished reading it. That’s when the most reward will come.
One of the basic Agile tenets that most people agree on – whether or not you’re an Agile enthusiast or supporter – is the value of early and continuous customer feedback. This early and often feedback is invaluable in helping software development projects quickly adapt to unexpected changes in customer demands and priorities, while concurrently minimizing risk and reducing ‘waste’ over the lifetime of a project.
In the Agile world, we traditionally view this customer feedback in the form of a formal product demonstration (i.e. the sprint demo), which occurs at the close of a sprint/iteration. In short, the “team” – those building the software – presents the features and functionality they’ve built in the current sprint to the “customer” – those who will be users or sellers of the product. Even though issues may be uncovered and discussed – usually resulting in action items for the following sprint – the ideal end result is both the team and the customer leaving the demo session (and in theory, closing the sprint) with a significant level of confidence in the current state and progress of the project. Sprint demos are subsequently repeated as an established norm over the project’s duration.
This simple, yet valuable Agile concept of working product demos can be expanded and applied effectively to other areas of software development projects:
- Why not take advantage of the proven benefits of product demos and actually apply them internally – within the actual development team – before the customer is even involved?
- Let’s also incorporate product demos more often – why wait two weeks (or sometimes even a month, depending on sprint duration)? We can add value to a software development project daily, if needed, without even involving the customer.
Expand the Benefit of Demos
One of the common practices where I have seen first-hand the effectiveness of product demos involves daily deployment of code into a testing or “QA” environment. Here’s the scenario:
- Testing uncovers five (5) “non-complex” defects – that is, defects easily identified in testing, usually GUI-level issues such as typos, navigation or flow problems, and table alignments – which are submitted into a bug-tracking tool. Depending on the bug-tracking tool employed by the team, this process is sometimes quite tedious.
- Defects 1-5 are addressed by the development team and declared fixed (or “ready to test”) and these fixes are included in the latest deployment into a QA environment for retesting.
- Defects 1-5 are retested. Defects #1 and #2 are confirmed fixed and closed.
- But there is a problem – it’s discovered EARLY in retesting that Defects #3 and #4 are not actually fixed, and worse, the “fixes” have introduced two new defects – #6 and #7. For purposes of explanation, these two new defects were also easily identified early in the retesting process.
- At this point, Defects 3-4 need to be resubmitted, with new Defects – #6 and #7 – also added to the tracking tool.
Where are we at this point? In summary, all of this time and effort has resulted in closing out one net defect, and we’ve essentially lost a day’s worth of work. Not to mention, developers are wasting time fixing and re-fixing bugs (and in many cases, becoming increasingly frustrated) rather than contributing to the team’s true velocity. Simultaneously, testers are wasting time retesting and tracking easily identifiable defects, increasing risk by reducing the time they have to test more complex code and scenarios.
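The “one net defect” bookkeeping above can be tallied in a minimal sketch. The defect numbers mirror the scenario; note that the retest outcome of Defect #5 isn’t stated in the walkthrough, so treating it as fixed is an assumption (it’s the reading under which the net-one figure works out):

```python
# Hypothetical tally of the retest cycle described above.
# Defect #5 is ASSUMED to retest clean -- its outcome isn't stated.

initially_open = {1, 2, 3, 4, 5}   # defects found in initial testing
confirmed_fixed = {1, 2, 5}        # closed after retest (#5 assumed)
introduced = {6, 7}                # new defects caused by the "fixes"

# Still-open defects: everything not confirmed fixed, plus the new ones.
open_after_retest = (initially_open - confirmed_fixed) | introduced
net_closed = len(initially_open) - len(open_after_retest)

print(sorted(open_after_retest))   # [3, 4, 6, 7]
print(net_closed)                  # 1
```

A full day of fixing, deploying, and retesting moves the open-defect count from five to four – which is the waste the next post sets out to address.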
So there is our conundrum. In a nutshell, we’re wasting time and team members are unhappy. Check back for a follow-up post next week, which will provide a simple yet effective solution to this unfortunately all-too-common issue in many of today’s software development practices.