Wed Mar 19, 2014 2:58 pm by mleipold
My experience is that prep-to-prep variation among in-house conjugates can blur these lines a bit. Sometimes you have a good prep, sometimes you have a great prep, sometimes you have a borderline prep.
It all depends on your ultimate goal: in our case, we mainly just want to resolve our populations of interest, so as long as we can do that sufficiently well, we don't care quite so much about absolute staining intensities (i.e., that the median Dual counts for Population X are exactly the same from one batch of antibody to the next).
This does pose an issue for archival normalization (including your meta-analyses): if between one project and the next (or, even worse, one plate and the next) you switch the "lot" of only one antibody in your cocktail and its performance changes, normalization will NOT "fix" that, because the beads report instrument sensitivity, not antibody performance.
So, since many people still use a number of in-house conjugates in their panels, this will remain a limitation of normalization.
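To make the point concrete, here's a toy sketch (hypothetical numbers, not real data, and a deliberately simplified model of bead normalization as a per-channel gain correction): beads see only instrument drift, cells see both instrument drift and the antibody lot, so dividing out the bead-derived gain fixes the former but leaves the latter untouched.

```python
import numpy as np

true_cell_signal = np.array([100.0, 200.0])   # channels: [Ab_X, Ab_Y] (made-up values)
bead_reference   = np.array([500.0, 500.0])   # expected bead intensity per channel

def acquire(instrument_gain, ab_lot_factor):
    """Simulate observed median intensities for beads and cells on one day."""
    beads = bead_reference * instrument_gain                     # beads: instrument drift only
    cells = true_cell_signal * instrument_gain * ab_lot_factor   # cells: drift AND antibody lot
    return beads, cells

def bead_normalize(beads, cells):
    """Divide out the per-channel gain estimated from the beads."""
    gain = beads / bead_reference
    return cells / gain

# Day 1: nominal instrument sensitivity, nominal antibody lots
b1, c1 = acquire(instrument_gain=np.array([1.0, 1.0]),
                 ab_lot_factor=np.array([1.0, 1.0]))

# Day 2: instrument sensitivity down 30%, AND the Ab_Y lot changed (half as bright)
b2, c2 = acquire(instrument_gain=np.array([0.7, 0.7]),
                 ab_lot_factor=np.array([1.0, 0.5]))

n1 = bead_normalize(b1, c1)
n2 = bead_normalize(b2, c2)

print(n1)  # [100. 200.] -- matches the true signal
print(n2)  # [100. 100.] -- Ab_X recovered; the Ab_Y lot change survives normalization
```

The instrument drift affects beads and cells identically, so it cancels; the antibody-lot change never touches the beads, so no bead-based correction can see it.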
Re Nolan tool: it should allow you to normalize across all files that use that particular lot of beads.