Re: Helios Detector Lifetime/Service Plan Value
My answers below:
1. If your Xe level is high enough that things are not passing tuning under the normal setup, then yes, your Xe level is probably also high enough to shorten the detector lifetime. For example, Xe131 (the normal Xe isotope in the tuning process) makes up 21.18% of natural Xe, while Xe128 is only 1.92% of natural abundance... more than a 10x difference.
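Just to put a number on that abundance gap, a quick back-of-the-envelope check (the percentages are the natural-abundance figures quoted above):

```python
# Natural Xe isotopic abundances cited above (percent of natural Xe).
xe131_abundance = 21.18
xe128_abundance = 1.92

# Ratio of contaminating signal you'd expect at mass 131 vs. mass 128
# for the same total Xe contamination level.
ratio = xe131_abundance / xe128_abundance
print(f"Xe131 is ~{ratio:.1f}x more abundant than Xe128")  # ~11.0x
```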
What is your Xe128 signal intensity?
In IMC mode, for a good argon tank, our Xe131 is < 2000. We are currently using argon with a Xe131 level of around 20,000. Fluidigm told us that as long as Xe131 stays below 25,000, it won't have a large impact on detector lifetime. Granted, this is not ideal, but I don't think the high Xe level explains why we burned out a brand-new detector within a couple of months. We used Xe128 for tuning but are not recording its signal intensity directly.
2. I don't have experience with IMC, so I can't comment on how it tunes. However, I know that in regular suspension Helios tuning with the Tuning Solution, the maximum Xe131 signal allowed before failure is 400,000 Dual. When Stanford and the SF Bay Area were having Xe contamination problems, this check would often fail.
As I understand it, though, the issue isn't Xe131 itself, but its effect on other parts of the Mass Calibration. Remember, the Mass Calibration step uses Cs133 (lower bound) and Ir193 (upper bound) from the Tuning Solution to calculate the TOF acquisition windows for all the other mass channels. It basically looks for the brightest signal within two predefined TOF windows and locks them in as Cs133 (low) and Ir193 (high).
When Stanford had the problem, I was told that part of the issue with the high Xe levels was the chance that the Mass Calibration would lock onto a Xe isotope (probably Xe132 at 26.89%, but possibly Xe134 at 10.44%) *instead of* the correct Cs133. This would effectively miscalibrate the TOF acquisition windows for all the other masses. You could then lose part of your signal if the *actual* TOF arrival of, say, the Tb159 ion peak was shifted up or down from what the Tuning had calculated based on the Cs133/Ir193 Mass Calibration. This is similar to what happens over very long runtimes, as the TOF arrival of the ion peaks drifts toward higher values (part of why you should retune the instrument every 6 hours or so for very long runtimes, at least in suspension mode).
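To make the miscalibration effect concrete, here is a minimal sketch. It assumes the standard TOF relationship (arrival time proportional to sqrt(mass)) and a simple two-point fit between the low and high anchors; the slope, intercept, and time units are invented for illustration and are not actual Helios internals.

```python
import math

def fit_calibration(m_low, t_low, m_high, t_high):
    """Solve t = a*sqrt(m) + b from two anchor (mass, time) points."""
    a = (t_high - t_low) / (math.sqrt(m_high) - math.sqrt(m_low))
    b = t_low - a * math.sqrt(m_low)
    return a, b

def predicted_tof(mass, a, b):
    return a * math.sqrt(mass) + b

# Pretend the true instrument response is t = 2.0*sqrt(m) + 0.5 (microseconds).
a_true, b_true = 2.0, 0.5
t_cs = predicted_tof(133, a_true, b_true)  # true Cs133 arrival
t_ir = predicted_tof(193, a_true, b_true)  # true Ir193 arrival

# Correct calibration: the low anchor really is Cs133.
a_ok, b_ok = fit_calibration(133, t_cs, 193, t_ir)

# Miscalibration: the bright Xe132 peak (true mass 132, so it arrives
# slightly earlier) gets locked in and labeled as mass 133.
t_xe132 = predicted_tof(132, a_true, b_true)
a_bad, b_bad = fit_calibration(133, t_xe132, 193, t_ir)

# The Tb159 acquisition window is now shifted relative to where the
# Tb159 ions actually arrive, so part of the peak can fall outside it.
shift = predicted_tof(159, a_bad, b_bad) - predicted_tof(159, a_ok, b_ok)
print(f"Tb159 window shift: {shift * 1000:.1f} ns")
```

Because the false low anchor arrives earlier than the real Cs133 peak, the recalculated windows near the low end of the mass range shift earlier in time, which is exactly the kind of partial signal loss described above.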
My understanding is that at least one lab at Stanford *did* experience miscalibration for one sample set. The way we and some other labs addressed it was to go into the Tuning Manager and tighten the Cs133 TOF search window, so that it couldn't search the regions that might be affected by the Xe132 and Xe134 signals.
I don't know of a good way to retrospectively check whether a mass was miscalibrated... maybe looking at the raw (unnormalized) EQ bead signal intensities and comparing them to historical (pre-Xe-contamination) values?
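That retrospective check could be sketched as something like the following. The channel names, intensity values, and the 20% cutoff are all made up for illustration; in practice you'd pull the medians from your own historical EQ bead QC records.

```python
# Hypothetical raw (unnormalized) EQ bead median intensities.
# "historical" = pre-contamination baseline; "suspect_run" = the run in question.
historical = {"Ce140": 4200.0, "Eu151": 1800.0, "Ho165": 2500.0, "Lu175": 2100.0}
suspect_run = {"Ce140": 4100.0, "Eu151": 1750.0, "Ho165": 1200.0, "Lu175": 2050.0}

TOLERANCE = 0.20  # flag channels that dropped more than 20% (arbitrary cutoff)

flagged = {}
for channel, baseline in historical.items():
    drop = 1.0 - suspect_run[channel] / baseline
    if drop > TOLERANCE:
        flagged[channel] = round(drop, 2)

# Channels whose raw bead signal fell well below baseline, which could
# indicate that their TOF window was miscalibrated for that run.
print(flagged)
```

Normalized bead values wouldn't work for this, since normalization would partially mask exactly the kind of uniform sensitivity loss you're trying to detect.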
I don't know exactly how changing the Xe isotope helps solve the tuning problem, but maybe the instrument can more accurately identify a less abundant Xe isotope when the contamination level is high? How do you identify a mistuning? We looked at our data files after sample acquisition and the signals look like what they're supposed to be. And for imaging data there is a lot of variation between samples, so it's hard to say whether we're losing sensitivity or not...
Mike