{{DISPLAYTITLE:Overall internal validation}}
 
== Overview ==
 
Data validation is critical at each step of the analysis pipeline.  Much of the LFI data validation is based on null tests. Here we present some examples from the current release, with comments on relevant time scales and sensitivity to various systematics.

== Null tests approach ==

Null tests at map level are performed routinely, whenever changes are made to the mapmaking pipeline. These include differences at survey, year, 2-year, half-mission, and half-ring levels, for single detectors, horns, horn pairs, and full frequency complements. Where possible, map differences are generated in <i>I</i>, <i>Q</i>, and <i>U</i>.
For this release, we use the Full Focal Plane 8 (FFP8) simulations for comparison. We can use FFP8 noise simulations, identical to the data in terms of sky sampling and with matching time domain noise characteristics, to make statistical arguments about the likelihood of the noise observed in the actual data nulls.
In general, null tests are performed to highlight possible issues in the data related to instrumental systematic effects not properly accounted for within the processing pipeline, or related to known changes in the operational conditions (e.g., switch-over of the sorption coolers), or related to intrinsic instrument properties coupled with the sky signal, such as stray light contamination.
Such null tests can be performed using data on different time scales, ranging from 1 minute to 1 year of observations, at different unit levels (radiometer, horn, horn pair), within frequency and cross-frequency, both in total intensity and, when applicable, in polarization.
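
As a concrete illustration of how one of these difference maps is built, here is a minimal sketch (Python with healpy; the file names are hypothetical placeholders, not actual PLA product names) for a half-ring null map smoothed for visual inspection:

<source lang="python">
import numpy as np
import healpy as hp

# Hypothetical inputs: the two half-ring maps of one frequency channel.
m1 = hp.read_map("LFI_030_ringhalf_1.fits", field=(0, 1, 2))  # I, Q, U
m2 = hp.read_map("LFI_030_ringhalf_2.fits", field=(0, 1, 2))

# Half-difference: the sky signal cancels, leaving noise plus any residual
# systematics; the factor 1/2 keeps the noise level comparable to the full map.
null_iqu = [(a - b) / 2.0 for a, b in zip(m1, m2)]

# Smooth to 3 degrees FWHM, as done for the maps shown in Figure 1 below.
null_sm = hp.smoothing(null_iqu, fwhm=np.radians(3.0))
</source>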
  
=== Sample Null Maps ===

<gallery mode="nolines">
File:PlanckFig_map_Diff030full_ringhalf_1-full_ringhalf_2sm3I_Stokes_88mm.png
File:PlanckFig_map_Diff030full_ringhalf_1-full_ringhalf_2sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff030yr1+yr3-yr2+yr4sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff030yr1+yr3-yr2+yr4sm3I_Stokes_88mm.png
</gallery>
<gallery mode="nolines">
File:PlanckFig_map_Diff044full_ringhalf_1-full_ringhalf_2sm3I_Stokes_88mm.png
File:PlanckFig_map_Diff044full_ringhalf_1-full_ringhalf_2sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff044yr1+yr3-yr2+yr4sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff044yr1+yr3-yr2+yr4sm3I_Stokes_88mm.png
</gallery>
<gallery mode="nolines">
File:PlanckFig_map_Diff070full_ringhalf_1-full_ringhalf_2sm3I_Stokes_88mm.png
File:PlanckFig_map_Diff070full_ringhalf_1-full_ringhalf_2sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff070yr1+yr3-yr2+yr4sm3Q_Stokes_88mm.png
File:PlanckFig_map_Diff070yr1+yr3-yr2+yr4sm3I_Stokes_88mm.png
</gallery>
'''<small>Figure 1: Null map samples for 30GHz (top), 44GHz (middle), and 70GHz (bottom). From left to right: half-ring differences in <i>I</i> and <i>Q</i>; 2-year combination differences in <i>I</i> and <i>Q</i>. All maps are smoothed to 3&deg;.</small>'''
  
Three things are worth pointing out generally about these maps.
Firstly, there are no obvious "features" standing out at this resolution and noise level. Earlier versions of the data processing, where there were errors in calibration for instance, would show scan-correlated structures at much larger amplitude.
Secondly, the half-ring difference maps and the 2-year combination difference maps give a very similar overall impression. This is a very good sign, as these two types of null map cover very different time scales (1 hour to 2 years).
Thirdly, it is impossible to know how "good" the test is just by looking at the map. There is clearly noise, but determining if it is consistent with the noise model of the instrument, and further, if it will bias the scientific results, requires more analysis.

===Statistical Analyses===

The next level of data compression is to use the angular power spectra of the null tests and to compare them to simulations in a statistically robust way. We use two different approaches.
In the first, we compare pseudo-spectra of the null maps to the half-ring spectra, which are the most "free" of anticipated systematics.
In the second, we use noise Monte Carlos from the FFP8 simulations, where we run the mapmaking identically to the real data, over data sets with identical sampling to the real data but consisting of noise only, generated to match the per-detector LFI noise model.

Here we show examples comparing the pseudo-spectra of a set of 100 Monte Carlos to the real data. We mask both data and simulations to concentrate on residuals impacting CMB analyses away from the Galactic plane.
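
Schematically, the comparison looks like the following sketch (healpy-based; the mask and file names are hypothetical placeholders, and the simple 1/f<sub>sky</sub> correction is only approximate):

<source lang="python">
import glob
import numpy as np
import healpy as hp

mask = hp.read_map("galactic_mask.fits")  # placeholder: 1 = keep, 0 = cut
fsky = mask.mean()

def pseudo_cl(fname, lmax=64):
    # Masked pseudo-spectra (TT, EE, BB, TE, EB, TB), crudely fsky-corrected.
    iqu = hp.read_map(fname, field=(0, 1, 2))
    return np.array(hp.anafast([m * mask for m in iqu], lmax=lmax)) / fsky

# Null spectra of the real data, e.g., the year(1+3) - year(2+4) difference.
data_cl = pseudo_cl("null_070_yr13_minus_yr24.fits")

# The same null built from each of the 100 FFP8 noise-only Monte Carlos.
mc_cl = np.array([pseudo_cl(f)
                  for f in sorted(glob.glob("ffp8_null_070_mc*.fits"))])

# Per-multipole mean and scatter of the simulations, plotted against the
# data as in Figures 2-4 (the blue error bars).
mc_mean, mc_std = mc_cl.mean(axis=0), mc_cl.std(axis=0)
</source>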
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_TT_lin_88mm.png|thumb|center|400px]]
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_TT_log88mm.png|thumb|center|400px|'''Figure 2: Pseudo-spectrum comparison (70GHz <i>TT</i>) of 2-year data difference (Year(1+3)-Year(2+4) in green) to the FFP8 simulation distribution (blue error bars).''']]
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_EE_lin_88mm.png|thumb|center|400px]]
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_EE_log88mm.png|thumb|center|400px|'''Figure 3: Pseudo-spectrum comparison (70GHz <i>EE</i>) of 2-year data difference (Year(1+3)-Year(2+4) in green) to the FFP8 simulation distribution (blue error bars).''']]
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_BB_lin_88mm.png|thumb|center|400px]]
[[File:null_cl_ffp8_070yr1+yr3-yr2+yr4_BB_log88mm.png|thumb|center|400px|'''Figure 4: Pseudo-spectrum comparison (70GHz <i>BB</i>) of 2-year data difference (Year(1+3)-Year(2+4) in green) to the FFP8 simulation distribution (blue error bars).''']]

Finally, we can look at the distribution of noise power from the Monte Carlos "&#8467; by &#8467;" and check where the real data fall in that distribution, to see if it is consistent with noise alone.
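
In code, this amounts to an empirical probability to exceed (PTE) per multipole; a minimal sketch, assuming data_cl and mc_cl shaped as in the previous example:

<source lang="python">
import numpy as np

def pte(data_cl, mc_cl):
    # Fraction of noise-only Monte Carlos with more power than the data,
    # per multipole; values very close to 0 or 1 flag power that is
    # inconsistent with instrumental noise alone.
    #   data_cl : (n_ell,) null spectrum of the real data
    #   mc_cl   : (n_mc, n_ell) null spectra of the simulations
    return (mc_cl > data_cl).mean(axis=0)
</source>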
[[File:ffp8_dist_070full-survey_1survey_3_TT88mm_nt.png|350px|]]
[[File:ffp8_dist_070full-survey_1survey_3_EE88mm_nt.png|350px|]]
[[File:ffp8_dist_070full-survey_1survey_3_BB88mm_nt.png|350px|]]
'''<small>Figure 5: Sample 70GHz null test in comparison with the FFP8 null distribution, for multipoles from 2 to 4. From left to right we show <i>TT</i>, <i>EE</i>, and <i>BB</i>. In this case the null test is the full-mission map minus the (Survey 1 + Survey 3) combination. We report the probability to exceed (PTE) values for the data relative to the FFP8 noise-only distributions. All values for this example are very reasonable, suggesting that our noise model captures the important features of the data even at low multipoles.</small>'''

== Consistency checks ==
  
Full details of the consistency tests performed can be found in {{PlanckPapers|planck2013-p02}} and {{PlanckPapers|planck2014-a03||Planck-2015-A03}}.
  
===Intra-frequency consistency check===
We have tested the consistency between the 30, 44, and 70 GHz maps by comparing the power spectra in the multipole range around the first acoustic peak. In order to do so, we have removed the estimated contribution of unresolved point sources from the spectra. We have then built scatter plots for the three frequency pairs, i.e., 70 GHz versus 30 GHz, 70 GHz versus 44 GHz, and 44 GHz versus 30 GHz, and performed a linear fit, accounting for errors on both axes.
The results reported in Fig. 6 show that the three power spectra are consistent within the errors. Moreover, note that the current error budget does not account for foreground removal, calibration, and window function uncertainties. Hence, the observed agreement between spectra at different frequencies can be considered to be even more satisfactory.
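
A linear fit with errors on both axes can be carried out with orthogonal distance regression; the sketch below uses toy band powers (all numbers are illustrative placeholders, not actual LFI values):

<source lang="python">
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

# Toy band powers around the first acoustic peak for one frequency pair,
# standing in for, e.g., the 44 GHz (x) and 70 GHz (y) spectral estimates.
truth = np.linspace(2000.0, 5500.0, 20)          # muK^2, illustrative only
sx = 120.0 * np.ones_like(truth)                 # x-axis uncertainties
sy = 150.0 * np.ones_like(truth)                 # y-axis uncertainties
x = truth + rng.normal(0.0, sx)
y = truth + rng.normal(0.0, sy)

# Straight line y = alpha*x + beta, fitted accounting for both sx and sy.
line = odr.Model(lambda p, t: p[0] * t + p[1])
out = odr.ODR(odr.RealData(x, y, sx=sx, sy=sy), line, beta0=[1.0, 0.0]).run()
alpha, alpha_err = out.beta[0], out.sd_beta[0]   # check consistency with 1
</source>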
  
[[File:LFI_70vs44_DX11D_maskTCS070vs060_a.jpg|thumb|center|400px|]][[File:LFI_70vs30_DX11D_maskTCS070vs040_a.jpg|thumb|center|400px|]][[File:LFI_44vs30_DX11D_maskTCS060vs040_a.jpg|thumb|center|400px|'''Figure 6: Consistency between spectral estimates at different frequencies. From top to bottom: 70 GHz versus 44 GHz; 70 GHz versus 30 GHz; and 44 GHz versus 30 GHz. Solid red lines are the best fit of the linear regressions, whose angular coefficients &alpha; are consistent with unity within the errors.''']]

===70 GHz internal consistency check===

We use the Hausman test {{BibCite|polenta_CrossSpectra}} to assess the consistency of auto- and cross-spectral estimates at 70 GHz. We specifically define the statistic:
  
 
:<math>
H_{\ell}=\left(\hat{C_{\ell}}-\tilde{C_{\ell}}\right)/\sqrt{{\rm Var}\left\{ \hat{C_{\ell}}-\tilde{C_{\ell}}\right\} },
</math>
  
where <math>\hat{C_{\ell}}</math> and <math>\tilde{C_{\ell}}</math> represent auto- and cross-spectra, respectively. In order to combine information from different multipoles into a single quantity, we define
  
 
:<math>
B_{L}(r)=\frac{1}{\sqrt{L}}\sum_{\ell=2}^{[Lr]}H_{\ell}, \qquad r\in\left[0,1\right],
</math>
  
where square brackets denote the integer part. The distribution of <i>B<sub>L</sub></i>(<i>r</i>) converges (in a functional sense) to a Brownian motion process, which can be studied through the statistics <i>s</i><sub>1</sub>=sup<sub><i>r</i></sub><i>B<sub>L</sub></i>(<i>r</i>), <i>s</i><sub>2</sub>=sup<sub><i>r</i></sub>|<i>B<sub>L</sub></i>(<i>r</i>)|, and <i>s</i><sub>3</sub>=&#8747;<sub>0</sub><sup>1</sup><i>B<sub>L</sub></i><sup>2</sup>(<i>r</i>)d<i>r</i>. Using the FFP7 simulations, we derive empirical distributions for all three test statistics and compare them with the results obtained from the Planck data (see Fig. 7). We find that the Hausman test shows no statistically significant inconsistencies between the two spectral estimates.
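
In code, once the sequence of <i>H</i><sub>&#8467;</sub> values has been formed from the auto- and cross-spectra, the three statistics follow from its partial sums; a minimal numpy sketch (the array H is assumed precomputed):

<source lang="python">
import numpy as np

def hausman_stats(H):
    # s1, s2, s3 from the standardized differences H_ell, ell = 2..L.
    # B_L(r) is the partial sum of H up to [L*r], scaled by 1/sqrt(L);
    # here it is evaluated on the grid r = ell/L.
    L = len(H) + 1
    B = np.cumsum(H) / np.sqrt(L)
    s1 = max(B.max(), 0.0)        # sup_r B_L(r); note B_L(0) = 0
    s2 = np.abs(B).max()          # sup_r |B_L(r)|
    s3 = np.sum(B**2) / L         # Riemann sum for the integral of B_L^2
    return s1, s2, s3
</source>

The empirical distributions in Fig. 7 would then come from applying the same function to the <i>H</i><sub>&#8467;</sub> sequence of each simulation.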
  
[[File:cons2.jpg|thumb|center|800px|'''Figure 7: From left to right, the empirical distribution (estimated via FFP7) of the <i>s</i><sub>1</sub>, <i>s</i><sub>2</sub>, and <i>s</i><sub>3</sub> statistics (see text). The vertical line represents the 70 GHz data.''']]
  
As a further test, we have estimated the temperature power spectrum for each of the three horn-pair maps, and have compared the results with the spectrum obtained from all 12 radiometers shown above. In Fig. 8 we plot the difference between the horn-pair and the combined power spectra. Again, the error bars have been estimated from the FFP7 simulated data set. A &chi;<sup>2</sup> analysis of the residuals shows that they are compatible with the null hypothesis, confirming the strong consistency of the estimates.
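
The &chi;<sup>2</sup> statement can be reproduced schematically as follows (a sketch with hypothetical inputs; band-to-band correlations are neglected here):

<source lang="python">
import numpy as np
from scipy import stats

def residual_chi2(cl_pair, cl_full, sigma):
    # Chi^2 of the (horn-pair minus full 70 GHz) band powers against the
    # per-band dispersion sigma estimated from the FFP7 simulations.
    # Returns chi^2 and its probability to exceed under the null hypothesis.
    resid = cl_pair - cl_full
    chi2 = float(np.sum((resid / sigma) ** 2))
    return chi2, stats.chi2.sf(chi2, df=resid.size)
</source>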
  
[[File:cons3.jpg|thumb|center|500px|'''Figure 8: Residuals between the auto-power spectra of the horn-pair maps and the power spectrum of the full 70GHz frequency map. Error bars here are derived from FFP7 simulations.''']]
  
 
== References ==

# Planck 2013 results. II. Low Frequency Instrument data processing, Planck Collaboration, 2014, A&A, 571, A2.
# Planck 2015 results. II. LFI processing, Planck Collaboration, 2016, A&A, 594, A2.
# Unbiased estimation of an angular power spectrum, G. Polenta, D. Marinucci, A. Balbi, P. de Bernardis, E. Hivon, S. Masi, P. Natoli, & N. Vittorio, 2005, J. Cosmology Astropart. Phys., 11, 1.
