Tech Aside: What does it mean to be 'Agile' to me?

Introduction

Welcome to the first installment of a new series called 'Tech Aside', which will group all articles in which I focus on various non-technical aspects of my work.
 
I hope you'll agree that the key to a successful career in Software Development is not only sound technical and analytical skills, but also strong soft skills. We need to be able to work in a team, to organize our work, to communicate with team members, users and all the other people around us. To express our fears, our goals, our approach to solving problems. To be able to provide feedback, to explain technical aspects to non-technical folks, to defend our estimates and decisions against deadline pressure. Our work does not happen in a vacuum. A deep understanding of the business context we're in and the ability to work collectively are fundamental to our success as Software Developers and/or Engineers. In the end, we need to be able to deliver and not antagonize everyone around us in the process.
 

The Flow of Work


To be able to do that, we need some kind of workflow. A good set of habits and an organizational pattern that will aid us in delivering business requirements in a timely fashion. The simplest of them all is the dreaded 'Waterfall' methodology. It's pretty straightforward, but comes with lots of issues that generally surface and cause problems towards the end of the development cycle:
  • it usually becomes apparent that the original estimates were incorrect (and too optimistic) due to various factors, like constant changes to the requirements, changes in the development team, or unexpected technical challenges,
  • it can turn out that the finished product doesn't meet the original business requirements and/or is not user-friendly enough to make the users actually use it,
  • the business landscape can shift during the development window enough to make the product obsolete.

In general, all of the above can be traced back to the lack of a fast Feedback Loop. Once the requirements are complete and the development process begins, there's no good way of measuring the quality and accuracy of the developed software, at least until the first release hits the UAT (User Acceptance Tests) phase.

To address all of that, an "Agile Revolution" was born.

Agility

The Agile Manifesto was created with a set of 12 simple Principles to follow. I'm not going to pretend that I was there when it happened. It was 2001 and I was in middle school, just barely beginning to understand how computers work. By the time I started my professional career (2010), Agile was everywhere, and for good reason. Everyone wanted to produce Software on Scope, on Time and on Budget.
But, 20 years after its original conception, did it age like a fine wine? Or did it just turn into vinegar? The 'Agile at 20: The Failed Rebellion' retrospective doesn't leave too many illusions. Agile, in most cases and applications (mainly corporate ones), turned into just another tool and is viewed by many "as a means for management to extract unrealistic promises and push dev teams to work crazy hours".
 
That's why in the rest of the article I will try to convey what it means to be 'Agile' to me. From this point on, everything will revolve around one crucial aspect of developing software successfully. And that is...

Feedback

Software doesn't come from a void. It is an answer to a problem. That problem may occur when an existing process is faulty, error-prone, slow and/or not user-friendly. It may also occur when an unfulfilled desire or a new way of doing something (for example, communicating) gets discovered. In any case, for the software to successfully resolve a problem, it has to provide the best possible UX. Otherwise, even if it solves the original problem, Users won't want to use it. They may sometimes be forced to use it, but they won't like doing it.
The question arises - how do we develop Software that the Users will want to use? The only answer I know of is - you let the Users become an integral part of your development process. You adjust your entire development workflow to get User Feedback as frequently as possible. In an ideal scenario you should have a Feedback Loop tight enough to prevent you from losing more than one week of work. How to do that? Let's start with defining some

Team Roles

I would say that nothing kills a Dev Team more effectively than a sense of distributed responsibility. That is where I have to somewhat disagree with one of the original Agile Principles (or at least with its most common interpretation): "The best architectures, requirements, and designs emerge from self-organizing teams." I strongly believe that those things don't just emerge spontaneously out of the blue and that each Dev Team has to have two clearly defined Roles responsible for them:
  • Lead Developer/Architect - a person who
    • takes full responsibility for all technical aspects of the Product, including Tech Stack, Architecture, applying Patterns and Best Practices, Code Quality, Automated Testing, Performance, etc.,
      • can delegate parts of the decision making to other team members (for example Back-end Lead, Front-end Lead, DevOps Lead, etc.), but retains full responsibility for all those things towards all Stakeholders outside of the Dev Team,
    • serves not as a despotic ruler, forcing everyone to use particular technologies and practices at all costs, but as a definitive technical decision maker who takes the Dev Team's Feedback into account,
    • takes full responsibility for the Team's Estimates and declared Deadlines. Makes sure that the Requirements are in good enough shape to be properly estimated, without having to make wild guesses. Is able to recognize and reject Requirements that are too vague or incorrect,
    • is the go-to person in case of any technical issues with the Product,
  • Business Requirements Owner - a person who
    • is a source of knowledge about the business purpose of the Product,
    • can gather, define and describe functional and non-functional Requirements,
      • is able to come up with mock UI screens or to work closely with UX expert(s) to obtain them,
    • is able to attend the Sprint Demo and decide whether the Requirements were implemented correctly or not,
    • can communicate with the target Users and other Stakeholders to make sure that all interested parties are OK with how the Product is being developed,
    • is able to perform a Business Demo to a wider audience and collect meaningful Feedback,
    • can define, in collaboration with the Lead Developer, the Product Roadmap including Milestones, Epics and planned Releases,

Those are, in my opinion, the two crucial Roles that greatly contribute to either the success or failure of any Software Development effort. They lead it and bear the whole responsibility for it, shielding the Development Team from unnecessary stress and backlash related to not meeting Deadlines, poor Performance and/or an unsatisfactory User Experience.

'Cut my Scope into Pieces, this is my Last Resort' is what Papa Fowler would probably sing, if he were into rock music. Yeah. Defining Team Roles is a big step forward, but it's not enough to guarantee success. For that to happen, we need to divide the Scope into manageable pieces: Milestones, Epics and finally Stories. Stories that will then be assigned to...

Sprints

If we want to implement Business Requirements in the best possible way, we need to have a fast Feedback Loop. That's why I tend to organize my Team's work into one-week Sprints. Many people may argue that one week is not enough time to do any meaningful development. I understand that fear. For many years one-week Sprints seemed too extreme for me to even consider. Yet after many problematic initiatives that involved dividing work into two- or even three-week Sprints, I've decided to give one-week Sprints a try, and I know I will most probably never go back. Why?
  • They force Business Requirements Owners to create very small User Stories that can then be expanded into manageable sets of Use Cases (given/when/then). Such Stories are easy to estimate with a pretty good accuracy/confidence level. That means we can efficiently plan Sprint Scopes and rarely deal with Carryovers,
    • Having documented Use Cases promotes and encourages good programming practices, like BDD/TDD (see the sketch just after this list).
    • Creation of Use Cases serves as a crucial filter, helping to find out which Requirements make sense and can be implemented, and which ones do not, before even touching the Code.
    • Well-documented and estimated User Stories are a powerful weapon against those who'd like to endlessly add new Features to a predefined Release Date. Presenting them cuts pointless discussions about 'adding just that one new thing' short and leaves the decision-makers with a clear choice - either don't touch the Scope, or move the Deadline.
  • They allow for a Sprint Demo at the end of each week. The newly implemented Scope is small enough that eventual corrections will not hurt the Development Timeline too much, and the Team can end the week with a clean slate, not having to think about tasks in progress over the weekend.
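
To make the BDD/TDD point a bit more concrete, below is a minimal sketch of how a single documented Use Case can be turned almost verbatim into an automated test. The Use Case, the Cart class and the test are made-up stand-ins for this illustration, not taken from any real Product:

# A made-up Use Case: given an empty cart, when the user adds a product,
# then the cart contains exactly that product.

class Cart:
    """A tiny, hypothetical domain object used only for this illustration."""

    def __init__(self):
        self.items = []

    def add(self, product: str) -> None:
        self.items.append(product)


def test_adding_a_product_to_an_empty_cart():
    # given an empty cart
    cart = Cart()

    # when the user adds a product
    cart.add("SKU-123")

    # then the cart contains exactly that product
    assert cart.items == ["SKU-123"]

Run it with any test runner you like (pytest, for example). The point is that the given/when/then structure of the Use Case maps one-to-one onto the structure of the test, which is exactly what makes small, well-refined Stories so pleasant to work with.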
So what are the parts of such a short Sprint? Well, it can get busy, but this extreme iterative approach yields great results:
  • Sprint Planning and Retrospective 
    • Who?: The entire Dev Team
    • When?: First day of the Sprint, preferably at the beginning of the Business Day
    • What?
      • Assigning Tasks from the Backlog to the current Sprint
      • Analyzing what went well and what went badly in the previous Sprint, and applying the conclusions to the current Sprint.
         
  • Backlog Refinement
    • Who?: Lead Developer and Business Requirements Owner
    • When?: No fixed schedule, can occur on an as-needed basis
    • What?:
      • Reviewing new Requirements, deciding whether a Requirement makes sense and to which Milestone it should be assigned (prioritization)
      • Writing Requirements in the form of a User Story: as <Someone> I want to do <Something> so that I can have <a Result>.
      • Splitting Requirements into Use Cases: given <an initial condition> when <I perform an action> then <there is a result of that action> (a filled-in example of both templates appears a bit further below).
      • Analyzing functional and non-functional requirements, making sure they make sense and are applicable to the Business Context.
      • Analyzing UX impact, reviewing proposed UI designs and workflows
      • Updating existing Tasks and Creating new ones – applying Feedback from the previous Demo (if applicable).
         
  • Backlog Tasks Estimation
    • Who?: The entire Dev Team
    • When?: No fixed schedule, can occur on an as-needed basis
    • What?:
      • Estimating the Tasks that don’t have an estimate yet.
        • The only Tasks eligible for estimation are the ones that have a completed User Story and Use Cases.
  • Working on Sprint Tasks
    • Who?:  The entire Dev Team
    • When?: Every day
    • What?: Doing the actual work on the Stories assigned to the Sprint.
  • Daily Standup
    • Who?:  The entire Dev Team
    • When?: Every day
    • What?: Giving an update on what everyone is working on and how the work is progressing
  • Sprint Demo
    • Who?: The entire Dev Team and Business Requirements Owner
    • When?: Last day of the Sprint, preferably at the end of the Business Day
    • What?
      • Showing the Results of the work performed during the Sprint to all interested Parties: the Requirements Owner, potential Clients/Users, Senior Management.
      • Getting Feedback on the current work and noting it down to apply during the next Backlog Refinement.
         
That sure looks like a lot of additional stuff on top of actually doing the work. However, keep in mind that the Scope for a one-week Sprint will be pretty small, so each of those additional activities will not consume as much time as you might think. Also, not every activity requires the presence of all Dev Team members and there are some activities that don't have to happen every Sprint.
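
Speaking of Backlog Refinement - here is a minimal, hypothetical example of the two templates mentioned there (User Story and Use Case) filled in for an imaginary 'password reset' Requirement. The dataclasses are just a convenient way of writing the refined Story down for this illustration; in practice any issue tracker does the same job:

from dataclasses import dataclass, field
from typing import List


@dataclass
class UseCase:
    given: str
    when: str
    then: str


@dataclass
class UserStory:
    as_a: str        # as <Someone>
    i_want: str      # I want to do <Something>
    so_that: str     # so that I can have <a Result>
    use_cases: List[UseCase] = field(default_factory=list)


# A made-up, refined backlog item - not taken from any real Product.
password_reset = UserStory(
    as_a="registered user",
    i_want="to reset my password via an e-mailed link",
    so_that="I can regain access to my account without contacting support",
    use_cases=[
        UseCase(
            given="a registered user with a verified e-mail address",
            when="they request a password reset",
            then="a single-use reset link is e-mailed to them",
        ),
        UseCase(
            given="a reset link that has already been used",
            when="the user opens it again",
            then="the application rejects it and offers to send a new link",
        ),
    ],
)

A Story of this size is small enough to estimate with confidence and to fit comfortably into a one-week Sprint, and each of its Use Cases later turns into a BDD-style test like the one sketched earlier.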
OK, are we ready to conclude? Not yet. We still haven't talked about

Releases

Having short Sprints and frequent Sprint Demos is a great way of implementing Business Requirements in an Agile, iterative way. However, the reality is that nothing beats real-life usage with real-life scenarios. No amount of tests will generate the production load and production-specific edge cases that will verify our software on the battlefield of day-to-day work. That's why we should release as frequently as we can. Sadly, as that will greatly depend on the infrastructure, DevOps capabilities and formal procedures of your work environment, you might not get Releases as frequent as you'd like. Just remember: the smaller the Scope you're releasing at once, the fewer things can potentially go wrong and/or require changing.

Conclusion

I understand that some of the things described above may come off as extreme and/or naive. Many of you may think 'well, that won't work in my situation' and that may very well be the truth. I don't claim to know every possible scenario and real-life application of the Agile methodology. I don't even know if what I've described qualifies as an 'Agile methodology'. I'm not a qualified Scrum Master (nor have I worked with one) and I don't have any Agile Certificates to back up my claims. It's just an explanation of what it means to be 'Agile' to me. It's knowledge that I've been collecting over the years, hearing this and trying out that. And the important thing is that it really works. At least for me :)
 
Thanks for reading.
