{{DISPLAYTITLE:Overall internal validation}}
 
HFI validation is mostly modular: each part of the pipeline (e.g., timeline processing or mapmaking) validates the results of its work at each step of the processing, looking specifically for known issues. In addition, we validate overall system integrity by looking at generic differences between sets of maps, in which most problems, whether known or not, become apparent. Both of these approaches are described below.
  
 
==Expected systematics and tests (bottom-up approach)==
 
Like all experiments, Planck-HFI had a number of specific issues that needed to be tracked, in order to verify that they were not compromising the data. While these are discussed in the appropriate sections, here we gather them together to give brief summaries of the issues, referring the reader to the appropriate section for more details.
 
 
* Cosmic rays &ndash; unprotected by the atmosphere and more sensitive than previous bolometric experiments, HFI saw many more cosmic-ray hits than its predecessors. These were detected, the worst parts of the data were flagged as unusable, and the glitch "tails" were modelled and removed. This is described in [[TOI_processing#Glitch_statistics|the section on glitch statistics]], as well as in {{PlanckPapers|planck2013-p03e|1|the 2013 HFI glitch removal paper}}.
* "Elephants" &ndash; cosmic rays also hit the HFI 100-mK stage, causing its temperature to vary and thereby inducing small temperature, and thus noise, variations in the detectors. These "elephants" are removed together with the rest of the thermal fluctuations, as described directly below.
* Thermal fluctuations &ndash; HFI is an extremely stable instrument, but there are small thermal fluctuations. These are discussed in [[TOI_processing#Thermal_template_for_decorrelation|the timeline processing section on thermal decorrelation]].
* Random telegraphic signal (RTS) or "popcorn noise" &ndash; some channels were occasionally affected by what appears to be a baseline that abruptly changes between two levels, which has been variously called popcorn noise or random telegraphic signal. These data are usually flagged. This is described in [[TOI_processing#Noise_stationarity|the section on noise stationarity]].
* Jumps &ndash; similar to (but distinct from) popcorn noise, small jumps were occasionally found in the data streams. These jumps are usually corrected, as described in [[TOI_processing#jump_correction|the section on jump corrections]].
* 4-K cooler-induced EM noise &ndash; the 4-K cooler induces noise in the detectors with very specific frequency signatures, which can be filtered (a toy illustration of removing such narrow lines from a timestream is sketched after this list). This is described in {{PlanckPapers|planck2013-p03|1|the 2013 HFI DPC Paper}}; the stability of these lines is discussed in [[TOI_processing#4K_cooler_lines_variability|the section on 4-K cooler line stability]].
* Compression &ndash; on-board compression is used to overcome our telemetry bandwidth limitations. This is explained in {{PlanckPapers|planck2011-1-5}}.
* Noise correlations &ndash; correlations in noise between detectors seem to be negligible, except for the two polarization-sensitive detectors in the same horn. This is discussed in {{PlanckPapers|planck2013-p03e|1|the 2013 HFI glitch removal paper}}.
* Pointing &ndash; the final pointing reconstruction for Planck is accurate to near the arcsecond level. This is discussed in {{PlanckPapers|planck2013-p03|1|the 2013 HFI DPC Paper}}.
* Focal plane geometry &ndash; the relative positions of the different horns in the focal plane are reconstructed using planet observations. This is also discussed in {{PlanckPapers|planck2013-p03|1|the 2013 HFI DPC paper}}.
* Main beam &ndash; the main beams for HFI are discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 Beams and Transfer function paper}}.
* Ruze envelope &ndash; random surface imperfections, or dust on the mirrors, can mildly increase the size of the beam. This is discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 Beams and Transfer function paper}}.
* Dimpling &ndash; the mirror support structure causes a pattern of small imperfections in the beams, which generate small sidelobe responses outside the main beam. This is discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 Beams and Transfer function paper}}.
* Far sidelobes &ndash; small amounts of light can reach the detectors from just above the primary or secondary mirrors, or even from reflections off the baffles. While this signal is small, it can be detected in the highest-frequency channels when the Galactic centre is in the right position, and so it is removed from the data. This is discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 Beams and Transfer function paper}} and also in {{PlanckPapers|planck2013-pip88|1|the 2013 Zodiacal emission paper}}.
* Planet fluxes &ndash; comparing the known flux densities of planets with the calibration on the CMB dipole is a useful check of the calibration for the CMB channels, and planets are the primary calibration source for the submillimetre channels. This is done in {{PlanckPapers|planck2013-p03b|1|the 2013 Mapmaking and Calibration paper}}.
* Point source fluxes &ndash; as with the planet fluxes, we also compare the fluxes of known, bright point sources with the CMB dipole calibration. This is done in {{PlanckPapers|planck2013-p03b|1|the 2013 Mapmaking and Calibration paper}}.
* Time constants &ndash; the HFI bolometers do not react instantaneously to light; there are small time constants, discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 Beams and Transfer function paper}}.
* ADC correction &ndash; the HFI analogue-to-digital converters are not perfect, and are not used perfectly. Their effects on the calibration are discussed in {{PlanckPapers|planck2013-p03c|1|the 2013 Mapmaking and Calibration paper}}.
<!--* Gain changes with temperature changes-->
<!--* Optical cross-talk &ndash; this is negligible, as noted in [[#Optical_Cross-Talk|the optical cross-talk note]]. -->
* Bandpass &ndash; the transmission curves, or "bandpasses", have shown up in a number of places. This is discussed in {{PlanckPapers|planck2013-p03d|1|the 2013 spectral response paper}}.
<!--* Saturation &ndash; while this is mostly an issue only for Jupiter observations, it should be remembered that the HFI detectors cannot observe arbitrarily bright objects. This is discussed in [[#Saturation|the section below on saturation]].-->
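
As an illustration of the line removal mentioned in the 4-K cooler item above, the following sketch suppresses narrow spectral lines in a simulated timestream by replacing the affected Fourier bins with the neighbouring noise level. The sampling frequency, line frequencies, and bin widths are purely illustrative assumptions; this is not the DPC algorithm, which is described in the references given above.

<source lang="python">
import numpy as np

def remove_lines(toi, fsample, line_freqs, width_hz=0.02):
    """Suppress narrow spectral lines in a timestream (illustrative only).

    Fourier coefficients within +/- width_hz of each line frequency are
    replaced by coefficients drawn with the amplitude of neighbouring bins,
    so that the line power is reduced to the local noise level.
    """
    n = len(toi)
    coeffs = np.fft.rfft(toi)
    freqs = np.fft.rfftfreq(n, d=1.0 / fsample)
    rng = np.random.default_rng(0)
    for f0 in line_freqs:
        inside = np.abs(freqs - f0) < width_hz
        nearby = (np.abs(freqs - f0) < 5.0 * width_hz) & ~inside
        if inside.any() and nearby.any():
            amp = np.median(np.abs(coeffs[nearby]))
            phase = rng.uniform(0.0, 2.0 * np.pi, inside.sum())
            coeffs[inside] = amp * np.exp(1j * phase)
    return np.fft.irfft(coeffs, n)

# toy timestream: white noise plus two spurious lines (all numbers are made up)
fsample = 180.0                                  # Hz, illustrative sampling rate
t = np.arange(2**18) / fsample
toi = (np.random.default_rng(1).normal(size=t.size)
       + 0.5 * np.sin(2.0 * np.pi * 10.0 * t)
       + 0.3 * np.sin(2.0 * np.pi * 70.0 * t))
cleaned = remove_lines(toi, fsample, line_freqs=[10.0, 70.0])
</source>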
  
==Generic approach to systematics==

While we track and try to limit the individual effects listed above, and we do not believe that there are other large effects that might compromise the data, we test this using a suite of general difference tests. As an example, the first and second years of Planck observations used almost exactly the same scanning pattern (they differed by one arcminute at the Ecliptic plane). By differencing them, the fixed sky signal is almost completely removed, and we are left with only the time-variable signals, such as any gain variations and, of course, the statistical noise.

In addition, while Planck scans the sky twice per year, the orientations of the scans and of the optics differ between the first six months (the first survey) and the second six months (the second survey). Thus, besides having a similar sensitivity to the time-variable signals probed by the yearly difference, the difference between the two surveys also tests our understanding of, and sensitivity to, scan-dependent effects such as time-constant and beam asymmetries.
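
As a minimal sketch of such a difference test, assuming HEALPix maps and the healpy Python package, the code below forms the half-difference of two maps with nearly identical sky coverage and computes its spectrum; the file names and mask are placeholders, and anafast is used here as a simple stand-in for the masked spectrum estimation actually performed with tools such as Spice.

<source lang="python">
import numpy as np
import healpy as hp

# placeholder file names for two maps with (nearly) identical sky coverage,
# e.g. year 1 versus year 2, or Survey 1 versus Survey 2
map1 = hp.read_map("HFI_143GHz_year1.fits")
map2 = hp.read_map("HFI_143GHz_year2.fits")

# the half-difference cancels the fixed sky signal, leaving time-variable
# signals (gain drifts, scan-dependent residuals) plus the statistical noise
diff = 0.5 * (map1 - map2)

# crude Galactic cut, standing in for the masks used in the real analysis
nside = hp.get_nside(diff)
theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
mask = (np.abs(90.0 - np.degrees(theta)) > 20.0).astype(float)

# power spectrum of the masked difference map, with a crude f_sky correction;
# consistency with the noise expectation is the null-test criterion
cl = hp.anafast(diff * mask, lmax=2000) / mask.mean()
</source>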

Some null tests performed at the map level are described in {{PlanckPapers|planck2013-p03b}} and {{PlanckPapers|planck2014-a09}}, and further tests are described in the CMB power spectra and likelihood paper. Finally, detailed end-to-end simulations (from the lowest-level instrument behaviour up to the maps) are still ongoing for a more complete characterization; these will accompany the 100-217 GHz polarization maps when they are made available, probably by the summer of 2015.

These tests use the "Yardstick" simulations described below and culminate in the "probability to exceed" (PTE) tests described afterwards.

==Yardstick simulations==

The Yardstick allows us to gauge various effects, to see whether they need to be included in the Monte Carlo simulations used to describe the data. It also allows us to gauge the significance of validation tests on the data (e.g., whether a null test can be described by the model).

Yardstick 3.0, which characterizes the DX9 data, goes through the following steps.
 
  
#The input maps are computed using the Planck Sky Model.
#LevelS is used to project the input maps onto timelines, using the B-spline scanning beams and the DX9 pointing (called ptcor6). The real pointing is affected by aberration, which is corrected by the mapmaking; the Yardstick does not simulate aberration, so the difference between the projected pointing from the simulation and from DX9 is equal to the aberration.
 
#The simulated noise timelines, which are added to the projected signal, have the same spectrum (at both low and high frequencies) as the characterized noise (a toy illustration of generating such noise is sketched after this list). For Yardstick 3.0, no correlations in time or between detectors have been simulated.
 
#The simulation mapmaking step uses the DX9 sample flags.
 
#For the lower frequencies (100, 143, 217, and 353 GHz), the Yardstick outputs are calibrated using the same mechanism as DX9 (i.e., dipole fitting; a toy illustration of such a fit is sketched below). The calibration is not done for the higher frequencies (545 and 857 GHz).
 
#The official mapmaking is run on these timelines using the same parameters as for the real data.
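
As a toy illustration of step 3 above, the sketch below generates a noise timeline with a prescribed white-noise level plus a 1/f component by shaping Gaussian noise in Fourier space. The noise level, knee frequency, and slope are placeholders, not the characterized HFI noise parameters used in the actual Yardstick production.

<source lang="python">
import numpy as np

def simulate_noise(n, fsample, net, fknee, alpha=1.0, seed=0):
    """Gaussian noise timeline with one-sided PSD ~ net**2 * (1 + (fknee/f)**alpha).

    All parameters are placeholders; the real Yardstick uses the noise spectra
    characterized from flight data for each detector.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fsample)
    psd = net**2 * np.ones_like(freqs)
    psd[1:] *= 1.0 + (fknee / freqs[1:]) ** alpha
    psd[0] = 0.0                                   # no power at DC
    # Gaussian Fourier coefficients with variance set by the requested PSD
    # (normalization matching numpy's unnormalized rfft / 1/n irfft convention)
    amp = np.sqrt(psd * fsample * n / 4.0)
    coeffs = amp * (rng.standard_normal(freqs.size)
                    + 1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(coeffs, n)

# illustrative parameters only (not the characterized HFI noise)
noise = simulate_noise(n=2**20, fsample=180.0, net=1.0e-4, fknee=0.1)
</source>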
 
A Yardstick production comprises all survey maps (Survey 1, Survey 2, and nominal), as well as all detector-set ("detset") and channel maps. Yardstick 3.0 is based on five noise iterations for each map realization.
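
As a toy illustration of the dipole-fitting calibration of step 5, the sketch below fits a single gain and offset by linear regression of a simulated timeline against a dipole template. The template, noise level, and single-gain model are illustrative assumptions; the actual calibration procedure is described in the Mapmaking and Calibration paper.

<source lang="python">
import numpy as np

def fit_gain(toi, dipole_template):
    """Least-squares fit of toi = gain * dipole_template + offset (toy model)."""
    design = np.column_stack([dipole_template, np.ones_like(dipole_template)])
    (gain, offset), *_ = np.linalg.lstsq(design, toi, rcond=None)
    return gain, offset

# toy data: a made-up dipole template scanned sinusoidally, a "true" gain of
# 0.98, and white noise; none of these numbers is an HFI value
rng = np.random.default_rng(2)
template = 3.35e-3 * np.cos(np.linspace(0.0, 20.0 * np.pi, 100000))   # K_CMB
toi = 0.98 * template + rng.normal(scale=1.0e-4, size=template.size)
gain, offset = fit_gain(toi, template)
</source>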
 
  
==Simulations versus data results (including PTE)==

We perform consistency tests between DX9 and the Yardstick production. The Yardstick production contains sky realizations (generated with LevelS, starting from PSM177) and noise timeline realizations, processed with the official mapmaking. The DX9 production was regenerated with the same code, in order to remove possible differences that might arise from not running the official pipeline under the same conditions. We compare the statistical properties of the cross-spectra of null-test maps for the 100, 143, 217, and 353 GHz channels. The null-test maps can be survey null tests or half-focal-plane null tests, each with a specific goal: the Survey1-Survey2 difference aims at isolating transfer-function or pointing issues, while the half-focal-plane null test focuses on beam issues. By comparing cross-spectra we isolate the systematic effects from the noise, and we can check whether they are properly simulated or still need to be. The spectra are computed with Spice, masking either the DX9 point sources or the simulated point sources, and masking the Galactic plane with several mask widths; the sky fractions from which the spectra are computed are around 30%, 60%, and 80%.
 
  
Both DX9 and the Yardstick 3.0 realizations are binned in multipole. For each bin we compute the statistical parameters (mean and variance) of the Yardstick distribution. The following figure is a typical example of a consistency test: it shows the difference between the Yardstick 3.0 mean and DX9, in units of the Yardstick standard deviation. We also quote chi-square values, computed within larger bins ([0,20], [20,400], [400,1000], [1000,2000], and [2000,3000]) as the ratio of (DX9 - Y3.0 mean)^2 to the Y3.0 variance, summed within each bin. This binned chi-square is only indicative: it may not always be significant, since DX9 variations sometimes disappear when averaged within a bin, so that the mean is then at the same level as the Yardstick one.
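
A minimal sketch of this binned chi-square and the corresponding probability to exceed (PTE), assuming the DX9 spectrum and the Yardstick realizations are available as arrays of binned spectra (the variable names and bin edges simply follow the description above):

<source lang="python">
import numpy as np
from scipy import stats

def binned_chi2_pte(cl_dx9, cl_yardstick, ell,
                    edges=(0, 20, 400, 1000, 2000, 3000)):
    """Binned chi-square and PTE of a data spectrum against simulations.

    cl_dx9       : binned spectrum from the data (array of length n_bins)
    cl_yardstick : simulated spectra, shape (n_realizations, n_bins)
    ell          : effective multipole of each bin
    """
    mean = cl_yardstick.mean(axis=0)
    # with only a handful of realizations (5 for Yardstick 3.0) the variance,
    # and therefore the PTE, is itself quite uncertain and only indicative
    var = cl_yardstick.var(axis=0, ddof=1)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (ell >= lo) & (ell < hi)
        ndof = int(sel.sum())
        chi2 = np.sum((cl_dx9[sel] - mean[sel]) ** 2 / var[sel])
        pte = stats.chi2.sf(chi2, ndof)            # probability to exceed
        results.append((lo, hi, chi2, ndof, pte))
    return results
</source>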
 
  
[[File:DX9_Y3_consistency.png | 500px  | center | thumb | '''Example of a consistency test for the 143 GHz survey null-test maps.''']]

==References==

<References />

[[Category:HFI data processing|006]]
