Braidwood's current approach to background measurement and removal is to use the veto systems to reject events occurring within several neutron capture times of the passage of a muon through or near a detector. If the veto efficiency is high, these neutron events are effectively removed. There will, however, be residual correlated backgrounds from muons that are missed and from long-lived spallation products, such as 9Li, which are not removed by this cut. For these longer-lived contaminants, we will measure or estimate the shapes of their energy (and possibly position) distributions and fit out their contributions during the final fitting process. The final fit can apply soft constraints on the amplitudes of the background contributions using other information, such as the overall muon rate and the expected production cross sections, or measurements of the backgrounds made outside the neutrino energy regime of interest. If the residual backgrounds are small (say, fractionally less than 1% of the expected neutrino signal in the far detectors), then we can tolerate a relatively large uncertainty from the constraints and the fit. There is, at this time, no reason to believe that we will need to apply any spallation-related cuts to the data set other than the time-after-muon cut. That cut creates an effective deadtime in the detector, and as such should carry a very small uncertainty, dependent only on how well we measure time at the two sites. If the two sites have different overburdens, the deadtimes will differ, but unless we have a severe clock problem it is hard to see how the uncertainty here could be larger than 0.1% or so. Our understanding of the rate of spallation products, of the various species created, and of our own efficiencies is, of course, not absolute.
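As a rough illustration of the deadtime argument above, the livetime fraction lost to a time-after-muon veto can be estimated from the muon rate and the veto window alone. This is a minimal sketch; the rates and window below are assumed for illustration and are not Braidwood parameters.

```python
import math

def veto_deadtime_fraction(muon_rate_hz, veto_window_s):
    """Fraction of livetime lost to a time-after-muon veto.

    For Poisson-distributed muons at rate R, the probability that any
    given instant lies within a veto window T of the preceding muon is
    1 - exp(-R*T), which is the effective deadtime fraction.
    """
    return 1.0 - math.exp(-muon_rate_hz * veto_window_s)

# Illustrative (assumed) numbers: with a 1 ms veto window, a shallow
# detector seeing 10 muons/s loses ~1% of its livetime, while a deep
# detector seeing 0.5 muons/s loses ~0.05%.
shallow = veto_deadtime_fraction(10.0, 1e-3)
deep = veto_deadtime_fraction(0.5, 1e-3)
print(f"shallow: {shallow:.4%}, deep: {deep:.4%}")
```

The deadtimes differ when the overburdens differ, but each is a simple function of quantities (muon rate, clock time) that are measured very well, which is why the associated uncertainty should be small.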
Both Super-Kamiokande and KamLAND have found that, to remove the very complex zoo of spallation products from their data sets, they had to apply cuts around the reconstructed muon tracks. We have a much higher signal rate than they do, and are therefore less sensitive to these backgrounds (and hence unlikely to need such a cut). But until we actually build the experiment and begin our measurements, we do not know whether we, too, will have to introduce additional background cuts. These could be cuts around the muon tracks; cuts on R^3 positions, if we believe there is leakage from the outside; cuts on pulse shapes, timing, or hit pattern (for, say, the identification of events with or without Cerenkov light); cuts on the quality of the reconstruction (to remove events that reconstruct poorly because of "too many" secondary vertices, for example); or simply cuts on delta_r between "positron" and "neutron". Even if such a cut were perfectly efficient at removing background, it would still reduce the acceptance for signal events, and the uncertainty on that acceptance would then enter our overall sensitivity. If these cuts are tied to the rate of cosmics, they become a source of difference between the two detectors whenever that rate differs. For equal overburdens, the acceptances themselves nearly cancel, and any correlated part of the uncertainty cancels as well. Detectors with different overburdens do not have this luxury. Imagine that one finds that cuts around the muon tracks are needed to remove ill-understood spallation products. Such a cut is itself a fiducial-volume cut, and the fraction of the fiducial volume removed in this way (weighted by livetime, of course) will be very different if one detector is shallow and one is deep. One would then be back to the problem of having to measure fiducial volumes very accurately.
If the depths are the same, the average fiducial volume removed is very nearly the same, and the sensitivity to uncertainties is that much smaller.
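The scaling of the track-cut argument can be sketched numerically. Assuming (for illustration only) that each tagged muon blanks out a cylinder around its track for a fixed time, and that such cuts are rare enough not to overlap, the livetime-averaged fraction of fiducial volume removed scales linearly with the muon rate; none of the numbers below are Braidwood parameters.

```python
import math

def avg_fiducial_loss(muon_rate_hz, cut_time_s, cut_radius_m,
                      track_length_m, fiducial_volume_m3):
    """Livetime-averaged fraction of fiducial volume removed by
    cylindrical cuts around muon tracks.

    Assumes non-overlapping cuts, so losses add linearly: each muon
    blanks a cylinder of volume pi * r^2 * L for a time T, giving an
    average fractional loss of R * T * V_cyl / V_fid.
    """
    v_cyl = math.pi * cut_radius_m**2 * track_length_m
    return muon_rate_hz * cut_time_s * v_cyl / fiducial_volume_m3

# With identical cut parameters, the effective fiducial volume removed
# scales directly with the muon rate, so a shallow detector (here an
# assumed 10 muons/s) loses 20x more than a deep one (0.5 muons/s).
shallow = avg_fiducial_loss(10.0, 0.5, 1.0, 5.0, 500.0)
deep = avg_fiducial_loss(0.5, 0.5, 1.0, 5.0, 500.0)
print(f"shallow: {shallow:.3%}, deep: {deep:.3%}")
```

The point of the sketch is the ratio, not the absolute numbers: for equal depths the two detectors lose the same average fiducial volume and the uncertainty largely cancels, while for unequal depths the losses differ by the ratio of muon rates and must each be known accurately on its own.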