- 1 Introduction
- 2 HFI Simulation Pipeline
- 3 Simulation Pipeline Schematic Figure
- 4 Delivered Products
While the PR2-2015 simulations focused on reproducing the Gaussian noise power spectra of the flight data and their time variations, this new PR3-2018 release brings, for the first time, the simulation of the HFI instrumental effects and their correction in the timelines, using the same algorithms (and, when possible, the same codes) as used for the flight data.
This release consists of several full-sky map sets in FITS format:
- 300 realizations of noise and systematic effects only per HFI frequency,
- 1000 realizations of lensed scalar CMB convolved with effective beams using the FEBeCoP software,
- one fiducial simulation with full sky signal components: lensed scalar CMB, foregrounds, noise and systematic effects, for all HFI frequencies,
- separate input sky components per detector.
HFI Simulation Pipeline
The HFI end-to-end simulation pipeline uses several software components, which are described below in the order in which they are used.
The input sky is generated using the Planck Sky Model (PSM) software.
Diffuse Galactic components
The dust model maps are built as follows. The Stokes I map at 353 GHz is the dust total intensity Planck map obtained by applying the Generalized Needlet Internal Linear Combination (GNILC) method of Remazeilles et al. (2011) to the 2015 release of Planck HFI maps (PR2), as described in Planck Collaboration Int. XLVIII (2016), and subtracting the monopole of the cosmic infrared background (Planck Collaboration VIII 2016). For the Stokes Q and U maps at 353 GHz, we started with one realization of the statistical model of Vansyngel et al. (2017). The portions of the simulated Stokes Q and U maps near the Galactic plane were replaced by the Planck 353-GHz PR2 data. The transition between data and simulations was made using a Galactic mask with a 5° apodization, which leaves 68 % of the sky unmasked at high latitude. Furthermore, on the full sky, the large angular scales in the simulated Stokes Q and U maps were replaced by the Planck data. Specifically, the first ten multipoles came from the Planck data, while over ℓ = 10–20 the simulations were introduced smoothly using the function (1 + sin[π(15 − ℓ)/10])/2.
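The ℓ-space transition described above can be sketched as follows (a minimal illustration with our own function names, not those of the PSM code; the weight applies to the harmonic coefficients of the data map, and its complement to those of the simulated map):

```python
import numpy as np

def data_weight(ell):
    """Weight given to the Planck data at multipole ell.

    Below ell = 10 only the data are used; over ell = 10-20 the
    simulations are introduced smoothly through the transition function
    (1 + sin[pi (15 - ell) / 10]) / 2; above ell = 20 only the
    simulated map contributes.
    """
    ell = np.asarray(ell, dtype=float)
    w = 0.5 * (1.0 + np.sin(np.pi * (15.0 - ell) / 10.0))
    return np.where(ell <= 10, 1.0, np.where(ell >= 20, 0.0, w))

def blend_alm_coeff(a_data, a_sim, ell):
    """Blend one harmonic coefficient of the data and simulated maps."""
    w = data_weight(ell)
    return w * a_data + (1.0 - w) * a_sim
```

The weight is 1 at ℓ = 10, 1/2 at ℓ = 15, and 0 at ℓ = 20, so the two maps cross over smoothly in the middle of the transition band.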
To scale the dust Stokes maps from the 353-GHz templates to other Planck frequencies, we follow the FFP8 prescription (Planck Collaboration XII 2016). A different modified blackbody emission law is used for each of the Nside = 2048 HEALPix pixels. The dust spectral index used for the scaling in frequency differs for frequencies above and below 353 GHz. For frequencies above 353 GHz, the parameters come from the modified blackbody fit of the dust spectral energy distribution (SED) for total intensity obtained by applying the GNILC method to the PR2 HFI maps (Planck Collaboration Int. XLVIII 2016). These parameter maps have a variable angular resolution that decreases towards high Galactic latitudes. Below 353 GHz, we also use the dust temperature map from Planck Collaboration Int. XLVIII (2016), but with a distinct map of spectral indices from Planck Collaboration XI (2014), which has an angular resolution of 30′. These maps introduce significant spectral variations over the sky at high Galactic latitudes, and between the dust SEDs for total intensity and polarization. The spatial variations of the dust SED for polarization in the FFP10 sky model are quantified in Planck Collaboration LIV (2017).
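As an illustration, the per-pixel modified-blackbody scaling of a 353-GHz template to another frequency can be sketched as below (brightness units only, hypothetical function names; the actual pipeline applies per-pixel β and T maps, plus unit conversions and bandpass integration):

```python
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_bnu(nu_ghz, temp_k):
    """Planck function B_nu(T), up to a constant factor (the 2h/c^2
    prefactor cancels in the ratio taken below)."""
    nu = nu_ghz * 1e9
    x = H * nu / (KB * temp_k)
    return nu**3 / np.expm1(x)

def mbb_scale(nu_ghz, beta, temp_k, nu_ref_ghz=353.0):
    """Factor scaling a dust brightness from nu_ref to nu for a modified
    blackbody with spectral index beta and temperature temp_k."""
    return ((nu_ghz / nu_ref_ghz) ** beta
            * planck_bnu(nu_ghz, temp_k) / planck_bnu(nu_ref_ghz, temp_k))
```

Applied pixel by pixel with the β and T maps described above, this gives a distinct emission law for each of the Nside = 2048 pixels.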
Synchrotron intensity is modelled by scaling in frequency the 408-MHz template map from Haslam et al. (1982), as reprocessed by Remazeilles et al. (2015) using a single power law per pixel. The pixel-dependent spectral index is derived from an analysis of WMAP data by Miville-Deschênes et al. (2008). The generation of synchrotron polarization follows the prescription of Delabrouille et al. (2013).
The free-free, spinning dust, and Galactic CO emission models are essentially the same as used for the FFP8 sky model (Planck Collaboration XII 2016), but the actual synchrotron and free-free maps used for FFP10 are obtained with a different realization of the small-scale fluctuations of the intensity. The CO maps do not include small-scale fluctuations, and are generated from the spectroscopic survey of Dame et al. (2001). None of these three components is polarized in the FFP10 simulations.
Unresolved point sources and the cosmic infrared background
Catalogues of individual radio and low-redshift infrared sources are generated in the same way as for the FFP8 simulations (Planck Collaboration XII 2016), but using a different seed for the random number generation. Number counts for three types of galaxies (early-type proto-spheroids, and more recent spiral and starburst galaxies) are based on the model of Cai et al. (2013). The entire Hubble volume out to redshift z = 6 is cut into 64 spherical shells, and for each shell we generate a map of density contrast integrated along the line of sight between z_min and z_max, such that the statistics of these density-contrast maps (i.e., the power spectrum of linear density fluctuations, the cross-spectra between adjacent shells, and the cross-spectra with the CMB lensing potential) match those computed using the Cosmic Linear Anisotropy Solving System (CLASS) code (Blas et al. 2011; Di Dio et al. 2013). For each type of galaxy, a catalogue of randomly-generated galaxies is produced for each shell, following the appropriate number counts. These galaxies are then distributed in the shell to generate a single intensity map at a given reference frequency, which is scaled across frequencies using the prototype galaxy SED at the appropriate redshift.
A full-sky catalogue of galaxy clusters is generated based on number counts following the method of Delabrouille et al. (2002). The mass function of Tinker et al. (2008) is used to predict the number counts. Clusters are distributed in redshift shells, proportionally to the density contrast in each pixel with a bias b(z, M), in agreement with the linear bias model of Mo & White (1996). For each cluster, we assign a universal profile based on XMM observations, as described in Arnaud et al. (2010). Relativistic corrections are included to first order, following the expansion of Nozawa et al. (1998). To assign an SZ flux to each cluster, we use a mass bias of M_Xray/M_true = 0.63 to match the actual cluster number counts observed by Planck for the best-fit cosmological model coming from CMB observations. We use the specific value σ_8 = 0.8159.
The kinematic SZ effect is computed by assigning to each cluster a radial velocity that is randomly drawn from a centred Gaussian distribution, with a redshift-dependent standard deviation that is computed from the power spectrum of density fluctuations. This neglects correlations between cluster motions, such as bulk flows or pairwise velocities of nearby clusters.
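The velocity assignment can be sketched as follows (the `sigma_v_of_z` callable is a stand-in: in the pipeline the redshift-dependent standard deviation is computed from the power spectrum of density fluctuations):

```python
import numpy as np

def draw_radial_velocities(redshifts, sigma_v_of_z, rng=None):
    """Draw uncorrelated line-of-sight velocities for a cluster catalogue.

    Each cluster gets a velocity from a zero-mean Gaussian whose standard
    deviation depends on redshift; sigma_v_of_z is any callable returning
    a velocity dispersion (e.g., in km/s).  Correlations between cluster
    motions (bulk flows, pairwise velocities) are neglected, as in the
    FFP10 kinematic SZ model.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(redshifts, dtype=float)
    sigma = np.asarray(sigma_v_of_z(z), dtype=float)
    return rng.normal(loc=0.0, scale=sigma)
```

The kinematic SZ signal of each cluster then follows from its optical depth and the drawn line-of-sight velocity.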
PSM Maps to Timelines
As for the 2015 data release, the simulated frequency maps are built using the LevelS software package and its modules conviqt and multimod. The generated TOIs are convolved with the same scanning beam as for the 2015 data release, but with an updated 2018 scanning strategy (omitting the 1000 pointing periods from the end of the mission; see Sect. 2.1.3). The scanning beams are the 2015 intensity-only scanning beams derived from the 2015 maps, to which a mock polarization is added, using a simple model based on each bolometer's polarization angle and leakage.
The main new aspect of the HFI 2018 simulations is the production of E2E simulations. These include all significant systematic effects, and are used to produce maps of noise plus systematic-effect residuals. The stim pipeline adds the modelled instrumental systematic effects at the timeline level. It includes noise only up to the time-response convolution step, after which the signal is added and the systematics are simulated. It was shown in Appendix B.3.1 of LowEll2016 that including the CMB map in the inputs, or adding it after the SRoll processing, leads to differences in the power spectra of the CMB channels below the 10⁻⁴ μK² level. This justifies the use of CMB swapping even when non-Gaussian systematic effects dominate over the TOI detector noise.
Here are the main systematic-effect ingredients of the E2E simulations.
White noise: the noise is based on a physical model composed of photon noise, phonon noise, and electronic noise. The time-transfer functions differ for these three noise sources. A noise-only timeline is created, with its level adjusted to agree with the TOI white noise observed after removal of the sky signal averaged in a ring.
Bolometer signal time-response convolution: the photon white noise is convolved with the bolometer time response using the same code and same parameters as in the 2015 TOI processing. A second white noise contribution is added to the convolved photon white noise to simulate the electronics noise.
Noise auto-correlation due to deglitching: the deglitching step in the TOI processing has been found to create noise auto-correlation by flagging samples that are synchronous with the sky. Since we do not simulate the cosmic-ray glitches, we mimic this behaviour by adjusting the noise of samples above a given threshold to simulate their flagging.
Time response deconvolution: the timeline containing the photon and electronic noise contributions is then deconvolved with the bolometer time response and low-pass filtered to limit the amplification of the high-frequency noise, using the same parameters as in the 2015 data TOI processing.
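The convolution and deconvolution steps above can be sketched in the Fourier domain as follows. This is a toy model: we assume a single-pole bolometer transfer function and a Gaussian low-pass filter, whereas the real HFI time response has several time constants and uses the 2015 TOI-processing filter parameters; the sampling frequency and cut-off below are illustrative values.

```python
import numpy as np

FSAMP = 180.0  # sampling frequency [Hz]; illustrative value

def single_pole_tf(freqs, tau):
    """Hypothetical single-pole bolometer transfer function with time
    constant tau; the real HFI time response has several poles."""
    return 1.0 / (1.0 + 2j * np.pi * freqs * tau)

def lowpass(freqs, fcut):
    """Gaussian low-pass limiting high-frequency noise amplification."""
    return np.exp(-0.5 * (freqs / fcut) ** 2)

def convolve_tf(signal, tau, fsamp=FSAMP):
    """Convolve a timeline with the bolometer time response."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fsamp)
    return np.fft.irfft(np.fft.rfft(signal) * single_pole_tf(freqs, tau),
                        n=signal.size)

def deconvolve_tf(signal, tau, fcut=60.0, fsamp=FSAMP):
    """Deconvolve the time response and apply the regularizing low-pass."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fsamp)
    kernel = lowpass(freqs, fcut) / single_pole_tf(freqs, tau)
    return np.fft.irfft(np.fft.rfft(signal) * kernel, n=signal.size)
```

Convolving and then deconvolving a slowly varying timeline returns it almost unchanged; only the power near the cut-off frequency is suppressed by the regularizing low-pass.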
The input sky signal timeline is added to the convolved/deconvolved noise timeline and is then put through the instrument simulation. Note that the sky signal is not convolved/deconvolved with the bolometer time response, since it is already convolved with the scanning beam extracted from the 2015 TOI processing output and thus already contains the low-pass filter associated with the time-response deconvolution.
Simulation of the signal non-linearity: the first step of the electronics simulation is the conversion of the input sky-plus-noise signal from K_CMB units to analogue-to-digital units (ADU), using the detector response measured on the ground and assumed to be very stable in time. The ADU signal is then fed through a simulator of a non-linear analogue-to-digital converter (ADCNL). This is the step that introduces complexity into the signal, inducing a time variation of the response and a gain difference with respect to the ground-based measurements; it corresponds to specific new correction modules in the mapmaking.
The ADCNL transfer-function simulation is based on the TOI processing, with correction from the ground measurements, combined with in-flight measurements carried out during the warm extension of the mission. A reference simulation is built for each bolometer, which minimizes the difference between the simulation and the data gain variations, measured in a first run of the SRoll mapmaking. Realizations of the ADCNL are then drawn to mimic the variable behaviour of the gains seen in the 2018 data.
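A non-linear ADC can be modelled by distorting the code transition levels of an ideal converter, as in the following toy sketch (our own construction; the actual per-bolometer ADCNL curves come from the ground and warm-phase measurements described above):

```python
import numpy as np

def adc_nonlinear(signal_adu, dnl, n_bits=16):
    """Apply a toy non-linear ADC to a signal already expressed in ADU.

    dnl is a per-code differential-non-linearity array (length 2**n_bits):
    the deviation of each quantization step from its nominal 1-ADU width.
    The cumulative sum of the step widths gives the actual code transition
    levels, so the effective transfer function deviates from the ideal
    staircase wherever dnl is non-zero.
    """
    n_codes = 2 ** n_bits
    # Actual transition levels: nominal integer levels distorted by the DNL.
    levels = np.cumsum(1.0 + dnl) - 1.0
    codes = np.searchsorted(levels, signal_adu, side="right") - 1
    return np.clip(codes, 0, n_codes - 1)
```

With `dnl = 0` this reduces to an ideal ADC (codes are the integer part of the input); a non-zero DNL entry shifts the digitized values around the corresponding code, which is the kind of distortion the ADCNL correction must undo.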
Compression/decompression: the signal is then compressed using the lossy algorithm required by the telemetry rate allocated to the HFI instrument. While very close to the compression algorithm used on board, the one used in the simulation pipeline differs slightly, owing to the absence of simulated cosmic-ray glitches and to the use of the average of the signal in the compression slice. The number of compression steps, the signal mean of each compression slice, and the step value for each sample are then used by the decompression algorithm to reconstruct the modulated signal.
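The slice-based lossy scheme can be sketched as follows (a simplified model with our own function names and an illustrative slice length; the real algorithm also encodes the number of steps per slice and handles the telemetry packing):

```python
import numpy as np

def compress_slices(signal, step, slice_len=254):
    """Compress a modulated timeline slice by slice (toy model).

    For each slice, the mean is stored, and each sample is encoded as the
    integer number of quantization steps away from that mean.  This is
    lossy: sample values are rounded to the nearest step.
    """
    out = []
    for start in range(0, signal.size, slice_len):
        chunk = signal[start:start + slice_len]
        mean = chunk.mean()
        steps = np.rint((chunk - mean) / step).astype(int)
        out.append((mean, steps))
    return out

def decompress_slices(slices, step):
    """Reconstruct the modulated signal from slice means and step counts."""
    return np.concatenate([mean + steps * step for mean, steps in slices])
```

The compression/decompression round trip changes each sample by at most half a quantization step, which is the noise floor this stage adds to the timeline.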
Important Note about Noise
As stated in the introduction, the PR3-2018 focus is on the simulation and correction of instrumental effects and systematics. It thus uses a noise model that does not vary in time, contrary to the PR2-2015 simulations, which use realizations of one noise power spectrum per ring and per detector. In PR2-2015, all systematic residuals are thus treated as Gaussian noise, whose time variations follow the flight data more closely.
So if your interest is in noise variations and accuracy rather than in instrumental effects and systematics, you may prefer the PR2-2015 noise simulations over those of PR3-2018. This is particularly true for 545 GHz and 857 GHz, which do not contain instrumental effects and systematics in PR3-2018.
The TOIs resulting from the steps outlined above are then processed in the same way as the 2018 TOI data. Because of the granularity needed and the required computational performance, the TOI processing pipeline applied to the simulated data is not exactly the same as the one applied to the data. The specific steps are the following.
ADCNL correction: the ADCNL correction is carried out with the same parameters as the 2015 data TOI processing, and with the same algorithm.
Demodulation: signal demodulation is also performed in the same way as the flight TOI processing. First, the signal is converted from ADU to volts. Next, the signal is demodulated by subtracting from each sample the average of the modulated signal over 1 hour and then taking the opposite value for negative parity samples.
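The demodulation step can be sketched as below. This is a simplification with our own function names: a moving average stands in for the 1-hour baseline estimate, and the parity array marks which samples of the square-wave modulation have negative sign.

```python
import numpy as np

def demodulate(signal_volts, parity, fsamp=180.0, window_s=3600.0):
    """Demodulate an AC-modulated bolometer timeline (sketch).

    From each sample, subtract the running average of the modulated
    signal over roughly one hour, then flip the sign of the samples with
    negative modulation parity.
    """
    n_avg = min(int(window_s * fsamp), signal_volts.size)
    kernel = np.ones(n_avg) / n_avg
    baseline = np.convolve(signal_volts, kernel, mode="same")
    demod = signal_volts - baseline
    demod[parity < 0] *= -1.0
    return demod
```

For a constant sky signal riding on the square-wave modulation, the baseline subtraction removes the offset and the parity flip recovers the same signal value for both modulation states.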
Conversion to watts and thermal baseline subtraction: the demodulated signal is then converted to watts (ignoring the conversion non-linearity of the bolometers and amplifiers, which has been shown to be negligible). Finally, a thermal baseline is subtracted; this is derived from the flight signals of the two dark bolometers, smoothed over 1 minute.
1/f noise: a 1/f -type noise component is then added to each signal ring, with parameters (slope and knee frequency) adjusted on the flight data.
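A 1/f-type noise realization with a given slope and knee frequency can be generated by shaping white noise in Fourier space, as in this sketch (our own construction; the pipeline adjusts slope and knee frequency per bolometer on the flight data):

```python
import numpy as np

def one_over_f_noise(n, fsamp, fknee, alpha, sigma_white, rng=None):
    """Generate a noise timeline with PSD sigma^2 [1 + (fknee/f)^alpha].

    The spectrum is white above the knee frequency and rises as
    1/f^alpha below it.
    """
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n, d=1.0 / fsamp)
    psd = np.ones_like(freqs)
    psd[1:] += (fknee / freqs[1:]) ** alpha
    psd[0] = 0.0  # no DC power
    # Shape a white-noise realization in Fourier space.
    white = rng.normal(size=n) * sigma_white
    spec = np.fft.rfft(white) * np.sqrt(psd)
    return np.fft.irfft(spec, n=n)
```

With a knee frequency well below the band, the output is essentially white; raising fknee adds the low-frequency excess that the destriping must later remove.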
Projection to HPR: the signal is then projected to HEALPix rings after removal of flight-flagged data (unstable pointing periods, glitches, Solar system objects, planets, etc.).
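The projection amounts to averaging the unflagged samples falling in each HEALPix pixel of the ring, which can be sketched as (our own function names; the pixel indices are assumed to come from the pointing reconstruction):

```python
import numpy as np

def project_to_rings(signal, pixels, flags, npix):
    """Average unflagged TOI samples into HEALPix pixels (sketch).

    pixels holds the HEALPix pixel index of each sample; flagged samples
    (unstable pointing, glitches, Solar-system objects, ...) are
    discarded.  Returns the per-pixel mean signal and the hit count.
    """
    good = ~flags
    hits = np.bincount(pixels[good], minlength=npix)
    total = np.bincount(pixels[good], weights=signal[good], minlength=npix)
    hpr = np.where(hits > 0, total / np.maximum(hits, 1), 0.0)
    return hpr, hits
```

Keeping the hit counts alongside the binned signal is what allows the later mapmaking to weight the rings properly.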
4-K line residuals: an HPR of the 4-K line residuals for each bolometer, built by stacking the 2015 TOIs, is added to the simulation output HPR.
List of modules and effects not included in the E2E simulations of TOI and HPR processing:
- no discrete point sources,
- no glitching/deglitching, only deglitching-induced noise auto-correlation,
- no 4-K line simulation and removal, only the simulation of their residuals,
- no bolometer volts-to-watts conversion non-linearity from the bolometers and amplifiers,
- no far sidelobes (FSLs) are added or removed,
- reduced simulation pipeline at 545 and 857 GHz.
To be more specific about this last item, the processing at 545 and 857 GHz uses a reduced simulation pipeline without the electronics simulation. It contains only photon and electronic noise, deglitching noise auto-correlation, time-response convolution/deconvolution, and 1/f noise. Bolometer-by-bolometer baseline addition and thermal baseline subtraction, compression/decompression, and 4-K line residuals are not included.
The TOIs are then input into the SRoll mapmaking. The following SRoll parameters are all the same for the simulation mapmaking as for the data:
- thermal dust, CO, and free-free map templates,
- detector NEP and polarization parameters,
- bad-rings list and sample flagging.
The FSL removal performed in the SRoll destriper is not activated (since no FSL effect is included in the input). The total dipole removed by SRoll is the same as the input in the sky TOIs generated by LevelS.
Noise alignment: an additional noise component is added to align the noise levels of the simulations with the noise estimate from the 2018 odd-even rings. Of course, this adjustment of the noise level does not satisfy all the other noise null tests (see Sect. 3.3.2). This alignment is different for temperature and for polarization maps, in order to simulate the effect of the noise correlation between detectors within a PSB.
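The level alignment can be sketched as adding just enough extra white noise to reach the target variance (our own function names; in the pipeline the target comes from the 2018 odd-even ring estimate, separately for temperature and polarization):

```python
import numpy as np

def align_noise(sim_map, sim_noise_var, target_noise_var, rng=None):
    """Add white noise so a simulated map matches a target noise level.

    The extra per-pixel variance is the (non-negative) difference between
    the target noise variance estimated from the data and the variance
    already present in the simulation.  Only the level is aligned; other
    noise null tests are not guaranteed to pass.
    """
    rng = np.random.default_rng() if rng is None else rng
    extra_var = np.clip(target_noise_var - sim_noise_var, 0.0, None)
    return sim_map + rng.normal(size=sim_map.shape) * np.sqrt(extra_var)
```

When the simulation already meets or exceeds the target level, no noise is added, since variance cannot be subtracted by adding an independent component.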
Monopole adjustment: a constant is added to each simulated map to bring its monopole to the same value as the corresponding 2018 maps.
Signal subtraction: from each map, the input sky (CMB and foreground) is subtracted to build the “noise and residual systematics frequency maps.” The systematics include additional noise and residuals induced by sky-signal distortion. Those maps are part of the FFP10 data set.
Simulation Pipeline Schematic Figure
[Schematic of the HFI end-to-end simulation pipeline, showing the input sky components, the noise and instrumental-effect residual maps, and the fiducial sky simulation.]
(Planck) High Frequency Instrument
Flexible Image Transport System
Cosmic Microwave Background
Planck Sky Model
(Hierarchical Equal Area isoLatitude Pixelation of a sphere; <ref name="Template:Gorski2005">HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelmann, ApJ, 622, 759-771, (2005).</ref>)
Noise Equivalent Power