- 1 Introduction
- 2 HFI simulation pipeline
- 2.1 The Planck Sky Model
- 2.2 PSM maps to timelines
- 2.3 Instrument simulation
- 2.4 Important note about noise
- 2.5 TOI processing
- 2.6 Effects and processings not simulated
- 2.7 Mapmaking
- 2.8 Post-processing
- 3 Delivered Products
- 4 References
While the PR2-2015 simulations (FFP8) focused on reproducing the Gaussian noise power spectra of the flight data and their time variations, the new PR3-2018 simulations (FFP10) bring, for the first time, a simulation of the HFI instrumental effects. Moreover, these simulated systematic effects are corrected in the timelines with the same algorithms (and, when possible, the same codes) as used for the flight data.
The FFP10 dataset is made of several full-sky map sets in FITS format:
- 300 realizations of noise and systematic effect residuals per HFI frequency,
- 1000 realizations of lensed scalar CMB convolved with effective beams using the FEBeCoP software per HFI frequency,
- one fiducial simulation with full sky signal components: lensed scalar CMB, foregrounds, noise and systematic effect residuals, for all HFI frequencies,
- separated input sky components per HFI bolometer.
HFI simulation pipeline
The HFI end-to-end simulation pipeline uses several software components, which are described below in the order in which they are used, as shown in the following schematic.
The Planck Sky Model
The FFP10 simulation input sky is the coaddition of the following sky components, generated using the Planck Sky Model (PSM) package (Delabrouille et al. 2013). Each of these components is convolved with each HFI bolometer spectral response by the PSM software, using the same spectral responses as in the 2015 FFP8 simulations.
The FFP10 lensed CMB maps are generated in the same way as for the previous FFP8 release, as described in detail in Planck-2015-A12. The FFP10 simulations contain only the scalar part, lensed with independent lensing-potential realizations.
One "fiducial" realization is used as the input CMB for the full end-to-end pipeline, while 1000 other realizations are convolved with FEBeCoP effective beams to be combined with the 300 noise and systematic-residual maps.
The main cosmological parameters used are:
|Parameter|Symbol|PR2-2015 (FFP8.1)|PR3-2018 (FFP10)|
|---|---|---|---|
|Cold dark matter density| | | |
|Neutrino energy density| | | |
|Thomson optical depth through reionization| | | |
|Primordial curvature perturbation spectrum| | | |
Diffuse Galactic components
The dust model maps are built as follows. The Stokes I map at 353 GHz is the dust total-intensity Planck map obtained by applying the Generalized Needlet Internal Linear Combination (GNILC) method of Remazeilles et al. (2011) to the 2015 release of Planck HFI maps (PR2-2015), as described in Planck-2016-XLVIII, and subtracting the monopole of the cosmic infrared background (Planck-2015-A08). For the Stokes Q and U maps at 353 GHz, we started with one realization of the statistical model of Vansyngel et al. (2017). The portions of the simulated Stokes Q and U maps near the Galactic plane were replaced by the Planck 353-GHz PR2 data. The transition between data and simulation was made using a Galactic mask with a 5° apodization, which leaves 68% of the sky unmasked at high latitude. Furthermore, on the full sky, the large angular scales in the simulated Stokes Q and U maps were replaced by the Planck data: the first ten multipoles come from the Planck 353-GHz PR2 data, while the higher multipoles from the simulation were introduced smoothly.
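This low-multipole transition between data and simulation can be illustrated with a harmonic-space blending weight. The cosine taper below is a hypothetical stand-in; the actual FFP10 transition function is not reproduced here.

```python
import numpy as np

# Hypothetical smooth blending weight: 1 for the data-dominated low
# multipoles (ell <= ell_data), rolling off to 0 over a transition band.
# The actual FFP10 transition function is not reproduced here.
def blend_weight(ell, ell_data=10, width=10):
    """Weight given to the Planck data at multipole ell."""
    ell = np.asarray(ell, dtype=float)
    w = 0.5 * (1.0 + np.cos(np.pi * (ell - ell_data) / width))
    w = np.where(ell <= ell_data, 1.0, w)
    w = np.where(ell >= ell_data + width, 0.0, w)
    return w

# The blended harmonic coefficients would then be
#   alm_out(ell) = w(ell) * alm_data(ell) + (1 - w(ell)) * alm_sim(ell)
ells = np.arange(0, 40)
w = blend_weight(ells)
```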
To scale the dust Stokes maps from the 353-GHz templates to other Planck frequencies, we follow the FFP8 prescription (Planck-2015-A12). A different modified blackbody emission law is used for each HEALPix pixel. The dust spectral index used for scaling in frequency is different for frequencies above and below 353 GHz. For frequencies above 353 GHz, the parameters come from the modified blackbody fit of the dust spectral energy distribution (SED) for total intensity obtained by applying the GNILC method to the PR2 HFI maps (Planck-2016-XLVIII). These parameter maps have a variable angular resolution that decreases towards high Galactic latitudes. Below 353 GHz, we also use the dust temperature map from Planck-2016-XLVIII, but with a distinct map of spectral indices from Planck-2013-XI, which has an angular resolution of 30′. These maps introduce significant spectral variations over the sky at high Galactic latitudes, and between the dust SEDs for total intensity and polarization. The spatial variations of the dust SED for polarization in the FFP10 sky model are quantified in Planck Collaboration LIV (2017).
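This frequency scaling amounts to applying, pixel by pixel, a modified blackbody ratio to the 353-GHz templates. A minimal sketch, with illustrative (not Planck-fitted) parameter values:

```python
import numpy as np

h = 6.62607015e-34  # Planck constant [J s]
k = 1.380649e-23    # Boltzmann constant [J/K]

def planck_bb(nu_ghz, T):
    """Planck function B_nu(T), up to a constant prefactor."""
    x = h * nu_ghz * 1e9 / (k * T)
    return (nu_ghz ** 3) / np.expm1(x)

def mbb_scale(I_353, beta, T, nu_ghz):
    """Scale a 353-GHz dust template to nu_ghz with a per-pixel
    modified blackbody law: I_nu = I_353 (nu/353)^beta B_nu(T)/B_353(T)."""
    return (I_353 * (nu_ghz / 353.0) ** beta
            * planck_bb(nu_ghz, T) / planck_bb(353.0, T))

# Toy per-pixel parameter maps (illustrative values, not Planck fits)
I_353 = np.array([1.0, 2.0, 0.5])
beta = np.array([1.5, 1.6, 1.55])
T = np.array([19.0, 20.0, 21.0])
I_217 = mbb_scale(I_353, beta, T, 217.0)
```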
Synchrotron intensity is modelled by scaling in frequency the 408-MHz template map from Haslam et al. (1982), as reprocessed by Remazeilles et al. (2015) using a single power law per pixel. The pixel-dependent spectral index is derived from an analysis of WMAP data by Miville-Deschênes et al. (2008). The generation of synchrotron polarization follows the prescription of Delabrouille et al. (2013).
Other components
The free-free, spinning dust, and Galactic CO emission models are essentially the same as those used for the FFP8 sky model (Planck-2015-A12), but the actual synchrotron and free-free maps used for FFP10 are obtained with a different realization of the small-scale fluctuations of the intensity. The CO maps do not include small-scale fluctuations, and are generated from the spectroscopic survey of Dame et al. (2001). None of these three components is polarized in the FFP10 simulations.
Unresolved point sources and cosmic infrared background
Catalogues of individual radio and low-redshift infrared sources are generated in the same way as for the FFP8 simulations (Planck-2015-A12), but use a different seed for random number generation. Number counts for three types of galaxies (early-type proto-spheroids, and more recent spiral and starburst galaxies) are based on the model of Cai et al. (2013). The entire Hubble volume, out to high redshift, is cut into 64 spherical shells, and for each shell we generate a map of density contrast integrated along the line of sight across the shell, such that the statistics of these density-contrast maps (i.e., the power spectrum of linear density fluctuations, the cross-spectra between adjacent shells, and the cross-spectra with the CMB lensing potential) obey statistics computed using the Cosmic Linear Anisotropy Solving System (CLASS) code (Blas et al. 2011; Di Dio et al. 2013). For each type of galaxy, a catalogue of galaxies is randomly generated for each shell, following the appropriate number counts. These galaxies are then distributed in the shell to generate a single intensity map at a given reference frequency, which is scaled across frequencies using the prototype galaxy SED at the appropriate redshift.
A full-sky catalogue of galaxy clusters is generated based on number counts following the method of Delabrouille et al. (2002). The mass function of Tinker et al. (2008) is used to predict number counts. Clusters are distributed in redshift shells, proportionally to the density contrast in each pixel, with a bias in agreement with the linear bias model of Mo & White (1996). For each cluster, we assign a universal profile based on XMM observations, as described in Arnaud et al. (2010). Relativistic corrections are included to first order, following the expansion of Nozawa et al. (1998). To assign an SZ flux to each cluster, we use a mass bias chosen to match the actual cluster number counts observed by Planck for the best-fit cosmological model coming from CMB observations.
The kinematic SZ effect is computed by assigning to each cluster a radial velocity that is randomly drawn from a centred Gaussian distribution, with a redshift-dependent standard deviation that is computed from the power spectrum of density fluctuations. This neglects correlations between cluster motions, such as bulk flows or pairwise velocities of nearby clusters.
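As a sketch of this kinematic SZ prescription, the snippet below draws one uncorrelated radial velocity per cluster from a centred Gaussian. The redshift dependence of the standard deviation is a toy form, not the one computed from the FFP10 density power spectrum.

```python
import numpy as np

def draw_radial_velocities(z, sigma_v_of_z, rng=None):
    """Draw one uncorrelated radial velocity per cluster from a centred
    Gaussian with redshift-dependent standard deviation sigma_v(z) [km/s].
    In FFP10, sigma_v(z) is computed from the power spectrum of density
    fluctuations; a toy form is used in the example below."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z, dtype=float)
    sigma = sigma_v_of_z(z)
    return rng.normal(0.0, sigma)

# Toy sigma_v(z): linear-theory velocities decreasing towards high z
toy_sigma = lambda z: 300.0 / (1.0 + z)

# 10000 clusters, all placed at z = 0.5 for illustration
v = draw_radial_velocities(np.full(10000, 0.5), toy_sigma, rng=42)
```

As the last sentence of the text notes, drawing each velocity independently neglects correlated bulk flows between clusters.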
PSM maps to timelines
The LevelS software package (Reinecke et al. 2006) is used to convert the PSM maps to timelines for each bolometer.
- using conviqtv3, the PSM maps are convolved with the same scanning beams as for FFP8, which were produced by stacking intensity-only observations of planets, and to which a fake polarization was added using a simple model based on each bolometer's polarization angle and leakage.
- the convolved PSM maps are then scanned to timelines with multimod, using the same scanning strategy as the 2018 flight data release. As explained in the HFI DPC paper, the only difference between the 2018 scanning strategy and the 2015 one is that about 1000 stable pointing periods at the end of the mission are omitted in 2018, because it has been found that the data quality was significantly lower in this interval.
The main new aspect of FFP10 is the production of E2E simulations. These include all significant systematic effects, and are used to produce maps of noise plus systematic-effect residuals. The stim pipeline adds the modelled instrumental systematic effects at the timeline level. It includes noise only up to the time-response convolution step, after which the signal is added and the systematics are simulated. It was shown in appendix B.3.1 of Planck-2016-XLVI that including the CMB map in the inputs, or adding it after SRoll processing, leads to negligible differences in the power spectra of the CMB channels. This justifies the use of CMB swapping even when non-Gaussian systematic effects dominate over the TOI detector noise.
Here are the main systematic effects of these E2E simulations:
- White noise: the noise is based on a physical model composed of photon noise, phonon noise, and electronic noise. The time-transfer functions are different for these three noise sources. A timeline of noise only is created, with the level adjusted to agree with the observed TOI white noise after removal of the sky signal averaged in a ring.
- Bolometer signal time-response convolution: the photon white noise is convolved with the bolometer time response using the same code and same parameters as in the 2015 TOI processing. A second white noise contribution is added to the convolved photon white noise to simulate the electronics noise.
- Noise auto-correlation due to deglitching: the deglitching step in the TOI processing has been found to create noise auto-correlation by flagging samples that are synchronous with the sky. Since we do not simulate the cosmic-ray glitches, we mimic this behaviour by adjusting the noise of samples above a given threshold to simulate their flagging.
- Time response deconvolution: the timeline containing the photon and electronic noise contributions is then deconvolved with the bolometer time response and low-pass filtered to limit the amplification of the high-frequency noise, using the same parameters as in the 2015 data TOI processing.
The input sky signal timeline is added to the convolved/deconvolved noise timeline and is then put through the instrument simulation. Note that the sky signal is not convolved/deconvolved with the bolometer time response, since it is already convolved with the scanning beam extracted from the 2015 TOI processing output and thus already contains the low-pass filter and residuals associated with the time-response deconvolution.
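The noise convolution/deconvolution steps above can be sketched in the frequency domain. The single-pole time response and Gaussian low-pass filter below are illustrative stand-ins, not the actual HFI transfer functions or filter parameters:

```python
import numpy as np

def time_response(freqs, tau=0.01):
    """Single-pole bolometer time response in the frequency domain
    (a simplified stand-in for the actual HFI transfer function)."""
    return 1.0 / (1.0 + 2j * np.pi * freqs * tau)

def lowpass(freqs, f_cut=60.0):
    """Gaussian low-pass filter limiting high-frequency noise
    amplification after deconvolution (illustrative shape)."""
    return np.exp(-0.5 * (freqs / f_cut) ** 2)

fs = 180.0          # sampling frequency [Hz], close to HFI's ~180 Hz
n = 2 ** 14
rng = np.random.default_rng(0)
noise = rng.normal(size=n)                      # photon white noise
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Convolve the photon noise with the time response, add electronic
# white noise, then deconvolve and low-pass filter, as in stim.
convolved = np.fft.irfft(np.fft.rfft(noise) * time_response(freqs), n)
convolved += 0.1 * rng.normal(size=n)           # electronic noise
processed = np.fft.irfft(
    np.fft.rfft(convolved) / time_response(freqs) * lowpass(freqs), n)
```

Note that, as the text explains, only the noise goes through this convolution/deconvolution; the sky signal is added afterwards.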
- Simulation of the signal non-linearity: the first step of the electronics simulation is the conversion of the input sky-plus-noise signal from KCMB units to analog-to-digital units (ADU), using the detector response measured on the ground and assumed to be stable in time. The ADU signal is then fed through a simulator of a non-linear analogue-to-digital converter (ADCNL). This step introduces most of the complexity into the signal, inducing time variations of the response and causing gain differences with respect to the ground-based measurements. It corresponds to specific new correction steps in the mapmaking.
The ADCNL transfer-function simulation is based on the TOI processing, with correction from the ground measurements, combined with in-flight measurements carried out during the warm extension of the mission. A reference simulation is built for each bolometer, which minimizes the difference between the simulation and the data gain variations, measured in a first run of the SRoll mapmaking. Realizations of the ADCNL are then drawn to mimic the variable behaviour of the gains seen in the 2018 data.
- Compression/decompression: the signal is then compressed by the lossy algorithm required by the telemetry rate allocated to the HFI instrument. While very close to the compression algorithm used on-board, the one used in the simulation pipeline differs slightly, because cosmic-ray glitches are not simulated and because the average of the signal in the compression slice is used.
The number of compression steps, the signal mean of each compression slice and the step value for each sample are then used by the decompression algorithm to reconstruct the modulated signal.
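A minimal model of such slice-based lossy compression, assuming a fixed quantization step and the slice mean as reference (the real on-board slice length and step differ):

```python
import numpy as np

def compress(signal, slice_len=254, step=1.0):
    """Lossy slice compression: within each slice, samples are quantized
    in units of `step` around the slice mean (a simplified version of
    the on-board HFI scheme)."""
    means, codes = [], []
    for i in range(0, len(signal), slice_len):
        chunk = signal[i:i + slice_len]
        m = chunk.mean()
        means.append(m)
        codes.append(np.round((chunk - m) / step).astype(int))
    return means, codes

def decompress(means, codes, step=1.0):
    """Reconstruct the modulated signal from slice means and codes."""
    return np.concatenate([m + c * step for m, c in zip(means, codes)])

rng = np.random.default_rng(0)
sig = rng.normal(0.0, 5.0, 1000)
rec = decompress(*compress(sig))   # error bounded by step / 2
```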
Important note about noise
As stated in the introduction, the FFP10 focus is on the simulation and correction of the main instrumental effects and systematics. It thus uses a noise model that does not vary in time, in contrast to the FFP8 simulations, which used realizations of one noise power spectrum per stable pointing period and per detector. In FFP8, all systematic residuals are therefore treated as Gaussian noise, whose time variations follow the flight data more closely.
So if your interest is in noise variations and accuracy rather than in instrumental effects and systematics, you may prefer to use the FFP8 noise maps instead of the FFP10 ones. This is particularly true for 545 GHz and 857 GHz, which do not contain instrumental effects and systematics in FFP10.
TOI processing
The TOIs issued from the steps outlined above are then processed in the same way as the flight TOI data. Because of the granularity needed and the required computational performance, the TOI processing pipeline applied to the simulated data is highly optimized and slightly different from the one applied to the data. The specific steps are the following.
- ADCNL correction: the ADCNL correction is carried out with the same parameters as the 2015 data TOI processing, and with the same algorithm. The difference between the realizations of ADC transfer function used for simulation and the constant one used for TOI processing is tuned to reproduce the uncertainties and residuals found in 2015 processed TOI.
- Demodulation: signal demodulation is also performed in the same way as the flight TOI processing. First, the signal is converted from ADU to volts. Next, the signal is demodulated by subtracting from each sample the average of the modulated signal over 1 hour and then taking the opposite value for negative parity samples.
- Conversion to watts and thermal baseline subtraction: the demodulated signal is then converted to watts (ignoring the conversion non-linearity of the bolometers and amplifiers, which has been shown to be negligible). Finally, a thermal baseline is subtracted; this is derived from the flight signals of the two dark bolometers, smoothed over 1 minute.
- 1/f noise: a 1/f type noise component is added to each signal ring, with parameters (slope and knee frequency) adjusted on the flight data.
- Projection to HPR: the signal timeline is then projected and binned to HEALPix pixels for each stable pointing period (HEALPix rings, or HPR) after removal of flight-flagged data (unstable pointing periods, glitches, Solar system objects, planets, etc.).
- 4-K line residuals: an HPR of the 4-K line residuals for each bolometer, built by stacking the 2015 TOIs, is added to the simulation output HPRs.
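The HPR projection step above (binning flagged TOI samples into HEALPix pixels for each pointing period) can be sketched as follows. The pixel indices are taken as precomputed to keep the sketch dependency-free, where the real pipeline derives them from the detector pointing:

```python
import numpy as np

def bin_to_hpr(pixels, signal, flags, n_pix):
    """Bin flagged TOI samples into a HEALPix ring (HPR): average the
    unflagged samples falling in each pixel. `pixels` would come from
    healpy's ang2pix applied to the detector pointing; here it is
    taken as precomputed."""
    good = ~flags
    hits = np.bincount(pixels[good], minlength=n_pix)
    tot = np.bincount(pixels[good], weights=signal[good], minlength=n_pix)
    # Pixels with no valid hit are left undefined (NaN)
    return np.where(hits > 0, tot / np.maximum(hits, 1), np.nan), hits

# Toy TOI: 7 samples, last two flagged (glitch, planet crossing, ...)
pixels = np.array([0, 0, 1, 2, 2, 2, 4])
signal = np.array([1.0, 3.0, 5.0, 2.0, 2.0, 8.0, 7.0])
flags = np.array([False, False, False, False, False, True, True])
hpr, hits = bin_to_hpr(pixels, signal, flags, n_pix=6)
```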
Effects and processings not simulated
- no discrete point sources,
- no glitching/deglitching, only deglitching-induced noise auto-correlation,
- no 4-K line simulation and removal, only addition of their residuals,
- no bolometer volts-to-watts conversion non-linearity from the bolometers and amplifiers,
- no far sidelobes (FSLs) are added or removed,
- reduced simulation pipeline at 545 GHz and 857 GHz.
To be more specific about this last item, the 545-GHz and 857-GHz processing uses a reduced simulation pipeline without electronics simulation. It contains only photon and electronic noise, deglitching noise auto-correlation, time-response convolution/deconvolution, and 1/f noise. Bolometer-by-bolometer baseline addition and thermal baseline subtraction, compression/decompression, and 4-K line residuals are not included.
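The 1/f noise component mentioned here and in the TOI-processing step can be generated by shaping white noise in the frequency domain. The slope and knee values below are illustrative, not the flight-data fits:

```python
import numpy as np

def one_over_f_noise(n, fs, f_knee, alpha, sigma, rng=None):
    """Generate noise with PSD proportional to 1 + (f_knee/f)^alpha,
    i.e. white noise of rms sigma plus a 1/f component below the knee
    frequency. In the actual pipeline, the slope alpha and knee f_knee
    are adjusted on the flight data; values here are illustrative."""
    rng = np.random.default_rng(rng)
    white = rng.normal(0.0, sigma, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    shape = np.ones_like(freqs)
    shape[1:] = np.sqrt(1.0 + (f_knee / freqs[1:]) ** alpha)  # keep DC as-is
    return np.fft.irfft(np.fft.rfft(white) * shape, n)

noise = one_over_f_noise(2 ** 14, fs=180.0, f_knee=0.2, alpha=1.0,
                         sigma=1.0, rng=0)
```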
Mapmaking
The next stage is to use the SRoll mapmaking on the stim HPRs. The following SRoll parameters are all the same for the simulation mapmaking as for the data:
- thermal dust, CO, and free-free map templates,
- detector NEP and polarization parameters,
- detector pointings,
- bad-rings list and sample flagging.
The FSL removal performed in the SRoll destriper is not activated (since no FSL effect is included in the input). The total dipole removed by SRoll is the same as the one input in the sky TOIs generated by LevelS (whose exact amplitude and direction are given in the HFI DPC paper (REF!)).
Post-processing
- Noise alignment: an additional noise component is added to align the noise levels of the simulations with the noise estimate from the 2018 odd-even ring maps. Of course, this adjustment of the noise level may not satisfy all the other noise null tests. The alignment is different for temperature and for polarization maps, in order to simulate the effect of the noise correlation between detectors within a PSB.
- Monopole adjustment: a constant value is added to each simulated map to bring its monopole to the same value as the corresponding 2018 flight maps, given in the HFI DPC paper (REF!).
- Signal subtraction: from each map, the input sky (CMB and foregrounds) is subtracted to build the “noise and residual systematics frequency maps.” The systematics include additional noise and residuals induced by sky-signal distortion. These maps are part of the FFP10 data set.
Delivered Products
Input sky components
The separated input sky components generated by the Planck Sky Model are available for each HFI detector.
The 1000 CMB maps are convolved with the FEBeCoP effective beams computed using the 2015 scanning beams (Planck Collaboration VII 2016, appendix B), and the updated scanning strategy described in the "Beams and scanning strategy" section.
Noise and instrumental effect residual maps
References
- Planck 2015 results. XII. Full Focal Plane Simulations, Planck Collaboration, 2016, A&A, 594, A12.
- Planck intermediate results. XLVIII. Disentangling Galactic dust emission and cosmic infrared background anisotropies, Planck Collaboration, 2016, A&A, 596, A109.
- Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps, Planck Collaboration, 2016, A&A, 594, A8.
- Planck 2013 results. XI. All-sky model of thermal dust emission, Planck Collaboration, 2014, A&A, 571, A11.
- Planck intermediate results. XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth, Planck Collaboration, 2016, A&A, 596, A107.
(Planck) High Frequency Instrument
Flexible Image Transport System
Cosmic Microwave Background
Planck Sky Model
(Hierarchical Equal Area isoLatitude Pixelation of a sphere) HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelmann, ApJ, 622, 759-771, (2005).
Data Processing Center
Analog-to-Digital Converter
Noise Equivalent Power