Trying to hash out different ideas of how to attack the pre-tested commit feature (JENKINS-1682).
- Every personal build (PB) and team build gets a build number from the same counter. But PBs are invisible to everyone other than the submitter, so this causes gaps in the build number sequence. It's presumably done this way so that both kinds of builds can be handled uniformly internally.
- When I submit a PB, those changes also remain in the workspace. This is both good and bad: good because you can continue to make follow-up changes (say you are 50% done with a big refactoring, so you can submit a build and continue to work on the remaining 50%), bad because you can't move on to work on something totally different.
- My PB is tested against the tip of SCM, not against the revision I had locally.
- When I submit a PB, entire files are sent to TeamCity instead of just a diff. So if the tip of SCM contains a change from Bob in the same file, Bob's change is overwritten by mine. This situation is detected later when the IDE tries to commit the files, as it then notices that a file being committed is not the latest. This appears to be just a limitation of the implementation, as I don't see anything preventing them from sending a diff instead.
- The delayed commit correctly excludes all the additional changes that I made after submitting a PB. This appears to be done by (1) pushing my current files aside, (2) bringing back the files as they were when the PB was started, (3) running the SCM commit, then (4) bringing back the files set aside in step 1. I can see this process in the IDE editor panes, as their contents change.
Who writes IDE support?
TC implements this basically as a build with a patch as a build parameter, so that part is straightforward. OTOH, the delayed commit part is very tedious: if we do this outside the IDE, we need to write the SCM commit handling, and if we do it inside the IDE, we have to add this functionality to every IDE.
Don't reinvent patch management
For a developer to make progress while a PB is running, he has to come up with his own patch management mechanism, such as quilt. Whatever he does there isn't integrated with the delayed commit workflow, so delayed commits can't be used in a fire-and-forget style.
For example, say you want to (1) work on bug A, (2) do a delayed commit, (3) work on another bug B, (4) do a delayed commit, and so on. Between (2) and (3) you have to push your diff off somewhere, so that you can come back to it if the PB fails, and so that you can work on bug B without mixing in bug A. This is exactly the kind of thing SCMs are good at, so why not let the SCM do it (for example, by using branches)?
OTOH, this implementation hides those untested changes from the SCM, and for SCMs like CVS and Subversion that have rather poor merge capabilities, this simplifies change tracking and forensic analysis of the SCM history later.
"Branchy SCM" approach
This implementation approach is based on the idea that untested commits are OK as long as they aren't in the main branch of development. So the model here is to let people commit untested changes to new branches, then let Hudson build those branches and integrate them automatically.
We can also let Hudson prune branches that are "done": for example, by deleting branches that were successfully merged and have had no additional changes for N days, or by using a naming convention, metadata, or the like.
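A pruning policy like the one above fits in a few lines. This is a sketch only; the function name and the 14-day default are illustrative assumptions, not anything Hudson ships:

```python
from datetime import datetime, timedelta

def should_prune(merged: bool, last_change: datetime,
                 now: datetime, idle_days: int = 14) -> bool:
    """Prune a branch only if it was successfully merged upstream and
    has seen no additional changes for at least idle_days (the "N days"
    above). All names here are hypothetical."""
    return merged and (now - last_change) >= timedelta(days=idle_days)
```

A naming-convention or metadata check could be layered on top of (or replace) the time-based rule.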
User experience / Usage scenario
The following two scenarios show how the same feature can be used in different ways.
Working on multiple bug fixes in the fire-and-forget style
Joe wants to fix bug A and B in his project, which are independent.
- Joe checks out the workspace and comes up with a fix for A
- Joe commits this to a "bug-A" branch
- Joe goes back to the trunk and comes up with a fix for B
- Joe commits this to a "bug-B" branch
Hudson detects that those two branches are feature branches for automatic integration, so it merges them with the tip of the upstream branch, does a build/test, and if everything passes, commits the result to the upstream branch.
If the build or merge fails, Joe is notified and can work on those branches. Maybe he'll make a follow-up change, maybe he'll sync the branch with the upstream branch to resolve merge conflicts, or maybe he'll just delete the branch to abandon the change entirely.
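The merge/build/commit cycle described above can be sketched as a small driver. Here `merge`, `build`, `push`, and `notify` are hypothetical callables standing in for the real SCM and build operations:

```python
def integrate_branch(branch, merge, build, push, notify):
    """Try to integrate one feature branch into the upstream branch.

    merge(branch) -> bool  merge the branch with the tip of upstream
    build()       -> bool  build and test the merged tree
    push()                 commit the result to the upstream branch
    notify(branch, why)    tell the branch owner that integration failed
    """
    if not merge(branch):
        notify(branch, "merge conflict")
        return False
    if not build():
        notify(branch, "build/test failure")
        return False
    push()
    return True
```

The important property is that the failure paths never touch the upstream branch; the branch owner resolves the problem on his own branch and Hudson retries on the next change.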
Another benefit of this approach is that it can be used to organize workspaces into a multi-level hierarchy, instead of forcing everything into two levels as in the team build/personal build distinction.
Personal branch under a team branch
Alice, Bob, Charlie, and 23 other engineers are in a team that works on the same code base. Each person gets a branch off from the team branch.
- Alice checks out the workspace and makes some changes
- Hudson detects this change, merges it with the tip of the team branch, does a build/test, and if everything passes, commits it to the upstream branch.
- Alice keeps on changing her branch, while Hudson is doing all that asynchronously on the server
This requires SCMs with reasonably modern branching and merging support, such as Subversion 1.5, Git, and Mercurial. OTOH, this won't work at all with SCMs that lack decent merge support, such as CVS and earlier versions of Subversion.
Possible design decisions
How are testable branches identified?
There could be an explicit list of branches to test, or you could just have a name pattern.
Or you might wish to test branches only when explicitly requested, rather than having Hudson poll branches for changes. Or you might wish to have changes in branches trigger informational builds that are not merged unless requested.
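A name-pattern policy such as the one suggested above might look like this; the pattern and the explicit opt-in list are made-up examples, not a proposed convention:

```python
import re

# Illustrative policy: branches named bug-* or feature-* are picked up
# for automatic test-and-merge; anything else is ignored unless it is
# explicitly listed.
TESTABLE_PATTERN = re.compile(r"^(bug|feature)-[\w.]+$")
EXPLICIT_BRANCHES = {"experimental-gc"}

def is_testable(branch: str) -> bool:
    return branch in EXPLICIT_BRANCHES or bool(TESTABLE_PATTERN.match(branch))
```

The same predicate could also gate on metadata (for instance, a marker file committed on the branch) instead of the branch name.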
What constitutes a successful merge to trunk?
No file-level conflicts? No line-level conflicts?
(Mercurial supports both modes, FWIW.)
When to merge back from trunk?
It is possible for Hudson to attempt to merge trunk back into available branches immediately after accepting changesets for trunk. (If there is a merge conflict, just skip it - let the branch owner resolve the conflict when they are ready.)
Some sites may prefer that developers synchronize their branches with trunk manually. Of course, in this case the developers need to remember to do so reasonably frequently; otherwise the chance of their changes being unusable when merged with trunk, or failing to merge with trunk at all, increases steadily over time.
Should the same set of tests be used for all branches?
This is certainly simpler. But particular users or subteams might be interested in a specific set of tests that is too slow to run for every commit to the repository. At a minimum, there needs to be a lowest-common-denominator test set that is required for merging to trunk.
One possibility is to create a separate plugin that supplies the list of modified files to the build in an environment variable, so various tools and heuristics can decide which tests it is prudent to run. Or just require project admins to handle it as part of the build script, e.g. by running some additional tests based on the name of the branch.
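The file-list heuristic could be as simple as a prefix table. The directory and suite names below are invented for illustration; the baseline set plays the role of the lowest-common-denominator tests required for merge:

```python
# Always run the lowest-common-denominator suite required for merging.
BASELINE_SUITES = {"smoke"}

# Map path prefixes to the extra suites worth running when files under
# that prefix change. All names are hypothetical.
PREFIX_RULES = [
    ("core/", {"core-unit"}),
    ("web/",  {"web-unit", "ui"}),
    ("docs/", set()),              # doc-only changes need nothing extra
]

def select_tests(changed_files):
    suites = set(BASELINE_SUITES)
    for path in changed_files:
        for prefix, extra in PREFIX_RULES:
            if path.startswith(prefix):
                suites |= extra
    return suites
```

In the plugin variant, `changed_files` would come from the environment variable mentioned above; in the build-script variant, the same table would live in the project's own build logic.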
Test before merge, after merge, or both?
There are several possible topologies:
- Test in parallel, only before merge. Quick but unsafe since the merge could introduce a regression not present in the merge parents.
- Test only after merge. Safe but slow since testing of all branches must be serialized, say using a round-robin policy.
- Test branches in parallel, then test again after merge. Faster since the post-merge test will rarely fail.
- Test and merge in a more complex tree with some fan-in factor. Potentially fastest, though the heuristics for picking the tree layout can be complex.
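To get a feel for the trade-off, here is back-of-the-envelope arithmetic (not a scheduler) counting test/merge rounds for each topology as a function of the number of candidate branches:

```python
def merge_rounds(branches: int, fan_in: int) -> int:
    """Rounds of test-and-merge needed to fold `branches` candidates
    into trunk when each round merges groups of `fan_in` branches in
    parallel. fan_in=1 degenerates to fully serialized post-merge
    testing; a large fan_in approximates one big merge tested once."""
    if fan_in <= 1:
        return branches                    # one round per branch
    rounds = 0
    while branches > 1:
        branches = -(-branches // fan_in)  # ceiling division
        rounds += 1
    return rounds
```

So with 8 candidate branches, serialized post-merge testing needs 8 rounds, a binary fan-in tree needs 3, and a single big merge needs 1 (at the cost of coarse blame when it fails).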
How to deal with diachronic builds?
Some projects - notably Maven modulesets using non-snapshot dependencies - succeed or fail based on factors other than the currently checked-out SCM snapshot.
Probably punt on this and require such modulesets to be built as separate projects. Assume that all the necessary tests covering changes in a module are in fact present in that module (or at least in a snapshot-related sister module included in the same project).
Make it possible for folks to configure this on their own, and then we can incrementally improve IDE integration, automatic detection, etc. This is a variation of the "branchy SCM" approach, but deliberately punts on all the hard questions.
New SCM methods
Add new optional methods to the interface SCMs implement (probably via a new subinterface, to keep binary compatibility) with two calls:
New merge-branch plugin
This plugin would require two parameters on builds it is enabled on: a URL to merge, and a commit message. The plugin would simply call the new SCM methods, for the SCM in use on the build, before and after the build (and only on stable builds).
- Gets something up and running.
- Can work with any SCM that implements the interface. Note that you can do decent merges in old svn and even in CVS if you record appropriate metadata, and the SCM implementation can choose to do that.
- Few changes needed.
- May waste some builds on bzr/git/hg, because those SCMs reject commits made to a branch without updating the work area first.
- May commit some broken builds on cvs/svn for the converse reason: these SCMs permit out-of-date commits to succeed.
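Since the two new calls are not spelled out above, here is one hedged guess at their shape, inferred only from the description (a pre-build merge and a post-build commit, invoked only on stable builds); every name in this sketch is hypothetical:

```python
class MergeCapableSCM:
    """Hypothetical optional subinterface for SCMs that can merge."""

    def merge_from(self, url: str) -> bool:
        """Before the build: merge the given branch URL into the
        workspace. Return False on conflict."""
        raise NotImplementedError

    def commit_merge(self, message: str) -> None:
        """After a stable build: commit the merged, tested result."""
        raise NotImplementedError

def merge_branch_build(scm: MergeCapableSCM, url: str,
                       message: str, build) -> bool:
    """What the merge-branch plugin would do around one build."""
    if not scm.merge_from(url):
        return False          # merge conflict: don't start the build
    if not build():
        return False          # unstable build: never committed
    scm.commit_merge(message)
    return True
```

The real interface would be Java, of course; the point is only the before/after split and the stable-builds-only commit.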