The team’s process is derived from extreme programming and adapted for remote, distributed teams. It can be summarized as follows:
Our roadmap is a product backlog, owned by a Product Owner.
Each item is described in terms of:
A user story (U/S) following a Role-Feature-Reason template
Acceptance Criteria (A/C) written in Gherkin
Possible extra information or documents
Items in the backlog are sorted by priority.
When picked, U/S are estimated in terms of number of sprints. A story estimated at more than 3 sprints should be broken down into smaller stories.
As a stake pool operator
I want the pool ordering to be fair and not favor any particular pool, especially during the bootstrapping era
So that every pool has the same chance to be selected by users in the early stages.
Given that stake pools can be listed via https://input-output-hk.github.io/cardano-wallet/api/edge/#operation/listStakePools
And they are ordered by “apparent performance”
When I query stake pools during the first epoch (when little information about them is available)
Then pools are ordered arbitrarily
And the order is not necessarily the same between different wallets
And the order is consistent between successive calls within the same wallet.
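One way to read these criteria is as a property of the ordering function: arbitrary, not necessarily shared between wallets, but stable within one wallet. The sketch below is purely illustrative (it is not the wallet's actual implementation; `order_pools` and the per-wallet seeding are assumptions): seeding a pseudo-random shuffle with a wallet identifier gives exactly that behavior.

```python
import random

def order_pools(wallet_id: str, pools: list) -> list:
    """Return pools in an arbitrary but wallet-stable order.

    Hypothetical sketch only: seeding the shuffle with the wallet
    identifier keeps successive calls within one wallet consistent,
    while different wallets generally see different orders.
    """
    rng = random.Random(wallet_id)  # deterministic per-wallet seed
    shuffled = list(pools)
    rng.shuffle(shuffled)
    return shuffled

pools = ["pool-a", "pool-b", "pool-c", "pool-d"]

# Consistent between successive calls within the same wallet:
assert order_pools("wallet-1", pools) == order_pools("wallet-1", pools)
```

Different wallet identifiers generally (though not provably for every pair of seeds) produce different orders, which matches "not necessarily the same between different wallets".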
The project is divided into weekly iterations called sprints.
Releases happen at the beginning of every sprint, on Monday or Tuesday.
Every 3 sprints, the team does 1 week of recovery time (See Recovery Week below).
User stories are assigned to and owned by a single member of the team (a.k.a. the Pilot). Pilots are seconded by a Co-Pilot as follows:
| Mission | Role |
| --- | --- |
| Clarify product requirements as needed with the product owner(s) | Pilot |
| Break U/S into tasks (small, sizeable chunks of work) | Pilot |
| Estimate U/S in terms of # of sprints | Pilot |
| Implement each task of a U/S | Pilot |
| Challenge the task division and review it | Co-Pilot |
| Primary reviewer of the development tasks | Co-Pilot |
| Call for assistance from peers when needed | Co-Pilot |
| Challenge implementation decisions and technical choices | Co-Pilot |
Tasks and Pull Requests have a dedicated GitHub template:
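The exact templates live with the repository (GitHub conventionally stores them under `.github/`). As a purely illustrative sketch (the section names are assumptions, not the team's actual template), a task template could look like:

```markdown
# Task

## Context
<!-- Link to the parent U/S and any useful background -->

## Decisions
<!-- Notable implementation decisions taken while working on the task -->

## Definition of Done
<!-- What must be true for this task to move from "In Progress" to "QA" -->
```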
Tasks move across the following board (see Task template for transitions)
```
|*************|     |*************|     |*************|     |*************|
|   Backlog   |     | In Progress |     |     QA      |     |   Closed    |
|-------------|     |-------------|     |-------------|     |-------------|
|             |     |             |     |             |     |             |
|     ...   ----->      ...     ----->      ...     ----->       ...      |
|_____________|     |_____________|     |_____________|     |_____________|
```
Sprinters can’t run all the time. During sprints, we often accumulate technical debt (e.g. FIXMEs, code in need of refactoring).
During recovery weeks, the team has a dedicated moment to tackle some of this technical debt. This includes:
Reviewing and extending code documentation
Refactoring some potentially entangled parts of the code
Re-organizing modules and folder architecture
Addressing FIXMEs, or turning them into U/S
Identifying areas of the source code which need improvement
Recovery weeks happen instead of a sprint, and start with a retrospective meeting about the last 3 sprints.
The code is collectively owned; everyone is knowledgeable about every part of the code
The code is peer-reviewed
Code follows an agreed standard and style
Code is integrated and tested daily into the main branch (`master`) through PRs
The main branch should be releasable at any time and not contain broken features
No unplanned optimizations, features or unneeded abstractions are implemented
We favor simple, unbloated code and use refactoring techniques to add features
We test chunks of code as we submit and integrate them, maintaining high code coverage at all times
All code should be covered by tests (either unit, integration or manual).
We favor automated tests over manual testing.
Issues are closed by QA, once developers have convinced them that the added code works and is covered
Developers are expected to point QA to relevant automated or manual test procedures
Developers may also point to documentation or code details that ensure the reliability of the code
When a bug is found, regression tests are created to illustrate the failure, prior to fixing it
Tests are run daily in an integration environment.
Critical parts of the code have benchmarks to identify potential bottlenecks.
Code and more importantly public interfaces are well-documented and digestible.
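The regression-test rule above can be sketched as follows (the function, constants, and the bug itself are hypothetical, chosen purely for illustration): the failure is first pinned down by a test that reproduces it, and the fix is what makes that test pass.

```python
# Hypothetical bug: fee estimation used to return a negative amount
# for a transaction with no inputs. Before fixing it, we write a
# regression test that reproduces the reported failure.

def estimate_fee(tx_inputs: list) -> int:
    # Fixed implementation; the buggy version returned -1 for [].
    base_fee = 155_381   # illustrative constant, not the real fee policy
    per_input = 44       # illustrative per-input cost
    return base_fee + per_input * len(tx_inputs)

def test_empty_tx_fee_is_not_negative():
    # Regression test illustrating the (hypothetical) bug report.
    assert estimate_fee([]) >= 0

test_empty_tx_fee_is_not_negative()
```

Because the test is committed alongside the fix, the failure mode is covered by the daily integration runs from then on.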
When a potential bug is found, a Bug ticket is created with a `BUG?` label
Corresponding sections of the ticket are filled in (context, reproduction path, expected behavior…)
The bug is added to the following board in “Needs Triage”
The ticket is discussed on Slack with the team to confirm that it’s indeed a bug.
Once confirmed, the `BUG?` label is changed to `BUG:CONFIRMED` and the bug is given a priority (either low or high).
If dispelled, the bug ticket is closed without further ado.
When resolved, the bug is moved to the “QA” section of the bugs board.
We have a daily written, asynchronous stand-up on Slack, in a separate channel
Each Wednesday, an iteration meeting is held:
To do a retrospective on past U/S and estimations.
To assign new U/S to team members
To discuss important matters or change in the process
Every 3 sprints, the Wednesday meeting becomes a monthly retrospective where the team can discuss what went well, what didn’t and take actions to improve things (see also https://www.retrospected.com/)
Discussions happen on Slack in clear threads; decisions are documented on GitHub as comments on issues
Our GitHub wiki can be extended at any time with insights and details about the software
Reports and metrics about the project are available to anyone
Every week, we produce a technical & non-technical report (see weekly-reports) containing:
A brief, non-technical overview of the week
A list of the completed user stories and their business value
A list of known issues or debt accumulated during the iteration