Planck Added Value Tools


General information about the functionality offered[edit]

The PLA offers several functionalities for the user to manipulate the data products it contains. Some of this functionality is intended to allow users to modify the products themselves, while other parts are intended to allow users to use the existing products to create new data.

These are broken down into two separate categories, based on the synchronous or asynchronous nature of their implementation. They are summarised here and explained in more detail in later parts of the Explanatory Supplement.

Synchronous Operations[edit]

The following list of operations, belonging to the first category, have the characteristic that they take place in real time (or near real time), so that the user gets their answer back in a matter of seconds.

These operations include:

  1. Component subtraction, allowing a user to subtract from a map certain physical components;
  2. Unit conversion, allowing a user to convert the units of a map;
  3. Colour correction, allowing the user to convert a map to a different assumed spectral index;
  4. Bandpass transformation, allowing the user to convert a map to what it would look like if observed with a different bandpass;
  5. Masking, allowing a user to mask away parts of a map, using existing or user-defined masks;
  6. Bandpass Leakage Correction, allowing the user to undo leakage corrections that were applied by the Planck team.

All of the above functionalities are available both for map cutouts and for full sky maps.

In the case of the Full Map Operations, the responses are delivered in an asynchronous fashion (see next section), due to the size of the maps and the number of maps that may be selected.

The user can select one or more of these functionalities to apply to one or more full maps, or a cutout. If several functionalities are selected, they are applied in the order above, i.e., component subtraction takes place first, then unit conversion, colour correction, bandpass transformation, and finally masking. Note, however, that certain functionalities might invalidate others; for example, in order to perform colour correction, a map must be in (M)Jy/sr units, and if the user does not convert a map to this unit, colour correction will not take place.

Asynchronous Operations[edit]

In this category, the operations requested by the user are queued for execution by the PLA server, and once the execution has been completed the results are emailed back to the user in the form of download links. The user can then download their results at their convenience.

In this category the following operations are included:

  • Effective Beam Average, allowing a user to calculate the average of the Planck effective beams over a given cutout of the map;
  • Noise Map Cutout, creating a cutout of the noise maps in the PLA;
  • Map-Making, providing the user with several tools to create a map cutout from the time-ordered data;
  • Component Separation, providing the user with the possibility to use Planck data to produce maps of various astrophysical components.

Component Subtraction[edit]

This functionality allows the user to subtract either a CMB map or a parametric component map from a target frequency map. Details about the components that can be subtracted are given below. Generally, the user will select a number of "components", of which CMB maps can be a part, and the algorithm will use the spectral model of each component to extrapolate the component signal to the frequency of the target frequency map. Several components can be chosen, in which case what is subtracted from the target map is the sum of the extrapolated components. The subtraction takes place in harmonic space to avoid pixelization effects, and the user can select a resolution ("Output NSIDE") and a smoothing FWHM with which to smooth both the components ("Smoothing→Component") before subtraction and the map ("Smoothing→Target") after subtraction. If a component map contains a polarization map as well, this can also be subtracted from the target map (if the target map contains polarized components, and if the components to be subtracted have a polarized model). Some of the components come in several resolutions; for these, the user will be allowed to choose ("Available Nsides"), and the user is also informed of the Nside of the map they have chosen ("Input Nside"). Note, however, that some of the resolutions might not contain a polarization component, and selecting such a resolution will disable the possibility to subtract the polarization component from the target map.

CMB: All CMB maps are available as temperature-only Nside=2048 maps and temperature + polarization Nside=1024 maps. In addition, the Commander temperature+polarization CMB map is available at Nside=256. These maps, before subtraction, are converted into the same units as the target map using the appropriate bandpass (the one belonging to the target map).

Other components: The best-fit component maps from the 32-band run of this paper are available as Nside=256 maps. Each component is extrapolated to the frequency of the target map using the rules for the spectral behaviour of those components, detailed in the abovementioned paper. They are then converted to the same units as the target map before they are subtracted.

These components consist of the following.

Synchrotron, a low-frequency template (GALPROP z10LMPD_SUNfE from Orlando and Strong (2013)) with an amplitude and pivot frequency fit. The temperature best-fit parameters used for this component are contained in COM_CompMap_Synchrotron-commander_0256_R2.00.fits, while the polarization parameters are contained in COM_CompMap_SynchrotronPol-commander_0256_R2.00.fits.

Free-free, the Draine (2011) two-parameter bremsstrahlung model, parametrized by the emission measure and electron temperature. The map containing the best-fit parameters is COM_CompMap_freefree-commander_0256_R2.00.fits.

Spinning dust, two templates derived from SpDust2 (Ali-Haïmoud et al., 2009, Ali-Haïmoud 2010, Silsbee et al., 2011), each with a separate amplitude; the first template has a spatially varying peak frequency, while the second has a spatially constant peak frequency. The parameter file containing the best-fit parameters in this case is COM_CompMap_AME-commander_0256_R2.00.fits.

Thermal dust, three-parameter (including the amplitude) modified blackbody model, parametrized by the emissivity index and dust temperature. There are three available best-fit parameter maps for this component: The Nside=256 map from the 32-band solution, COM_CompMap_dust-commander_0256_R2.00.fits, and two high-resolution parameter maps from a run using only the Planck data at 143 GHz and above: The temperature + polarization Nside=1024 map, COM_CompMap_DustPol-commander_1024_R2.00.fits, and the temperature-only Nside=2048 map, COM_CompMap_ThermalDust-commander_2048_R2.00.fits.

SZ, the thermal Sunyaev-Zeldovich effect, parametrized by y_sz. The best-fit parameters of this component are contained in COM_CompMap_SZ-commander_0256_R2.00.fits.

Line emission, the CO lines (3→2, 2→1, 1→0) and the 94/100 GHz line emission signals. These are only available for the frequencies at which they make a contribution: for CO 1→0 this is the 100 GHz channel and its sub-detectors, for CO 2→1 the 217 GHz channel and its sub-detectors, and for CO 3→2 the 353 GHz channel and its sub-detectors. The 94/100 GHz emission is also present at the 100 GHz detectors. In the case of CO emission, the file containing the best-fit parameters is COM_CompMap_CO-commander_0256_R2.00.fits, while for 94/100 GHz, the file is COM_CompMap_xline-commander_0256_R2.00.fits.

For some of the above components (free-free, thermal dust, and SZ) we give the user the possibility to replace the best-fit parameters from Commander by a single global value.

Once the component maps are in the correct unit, they, along with the target map, are transformed into the harmonic domain. If more than one component map is chosen, they are added together, and the sum is subtracted from the target map. The user can then choose to smooth the final map before projecting it onto the desired resolution.
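
The harmonic-space subtraction described above can be illustrated with a short healpy sketch. This is a minimal, hypothetical example (function and argument names are placeholders, not the actual PLA implementation), assuming the component maps have already been converted to the units of the target map:

 # Minimal sketch of harmonic-space component subtraction, assuming healpy.
 import healpy as hp
 import numpy as np

 def subtract_components(target_map, component_maps, output_nside,
                         component_fwhm_arcmin=None, target_fwhm_arcmin=None):
     """Subtract the sum of component maps from a target map in harmonic space."""
     lmax = 3 * output_nside - 1

     # Transform the target map and each component map to harmonic coefficients.
     alm_target = hp.map2alm(target_map, lmax=lmax)
     alm_sum = np.zeros_like(alm_target)
     for comp in component_maps:
         alm_comp = hp.map2alm(comp, lmax=lmax)
         if component_fwhm_arcmin is not None:
             # Optional smoothing of the components before subtraction
             # ("Smoothing -> Component" in the interface).
             alm_comp = hp.smoothalm(alm_comp, fwhm=np.radians(component_fwhm_arcmin / 60.0))
         alm_sum += alm_comp

     # Subtract the summed components from the target in harmonic space.
     alm_out = alm_target - alm_sum

     if target_fwhm_arcmin is not None:
         # Optional smoothing of the result ("Smoothing -> Target").
         alm_out = hp.smoothalm(alm_out, fwhm=np.radians(target_fwhm_arcmin / 60.0))

     # Project back onto a HEALPix map at the requested output resolution.
     return hp.alm2map(alm_out, nside=output_nside)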

The output file will be of the same format as the input, except that it will have had the selected components subtracted from it, and will be of the resolution chosen by the user.

Factor Computations[edit]

The unit conversion, colour correction, and bandpass transformation functionalities all have in common that they calculate a factor and multiply either a full map or a map cutout by that factor. They also provide the option to calculate an error estimate of this factor in the cases where the bandpass responses have corresponding uncertainty values; only the HFI bandpasses have these, so for LFI bandpasses this functionality is disabled. The uncertainty is estimated by generating Monte Carlo samples, following the same procedure outlined here. These values, when this option is selected, are included in the header of the resulting map file.

Unit Conversion and Colour Correction[edit]

The unit conversion and colour correction functionalities are interfaces to the uc_cc code used internally in Planck, and whose documentation can be found here.

Bandpass Transformation[edit]

The bandpass transformation functionality follows the same structure as the unit conversion and colour correction functionalities, though the goal here is slightly different. Defining the monochromatic flux density [math]\tilde{S}[/math] as

[math] \int{R(\nu)S(\nu)d\nu}= \tilde{S}(\nu_0)\int{R(\nu)f(\nu)d\nu} [/math]

where R is the bandpass response of the bandpass in question, S is the actual flux density of the source, and f is the spectral behaviour of the reference spectrum (e.g. ν/ν0 in the case of the IRAS convention), the goal of bandpass transformation is to find the monochromatic flux density for the same source and reference spectrum given a different bandpass response. Since the ratio of the left hand and right hand side of the equation above should be 1 with any bandpass, we have

[math] \tilde{S}_{new}(\nu_{0, new})=\frac{\int{R_{old}(\nu)f(\nu,\nu_{0, old})d\nu}}{\int{R_{new}(\nu)f(\nu,\nu_{0, new})d\nu}}\cdot\frac{\int{R_{new}(\nu)S(\nu)d\nu}}{\int{R_{old}(\nu)S(\nu)d\nu}}\tilde{S}_{old}(\nu_{0, old}). [/math]

This equation forms the basis of the bandpass transformation algorithm. In order to use the functionality, it is therefore necessary to specify both the reference spectrum and the assumed spectral behaviour of the source. The reference spectra are either "thermodynamic", "iras", "ysz", or "brightness" (Rayleigh-Jeans or antenna temperature). The assumed spectral behaviours are either "powerlaw", which is a spectrum obeying this behaviour (in brightness units):

[math] S(\nu) = \bigl(\frac{\nu}{\nu_0}\bigr) ^ \alpha [/math]

or ‘modified blackbody’, which is a spectrum obeying this behaviour (in brightness units):

[math] S(\nu) = \frac{2 h \nu^3}{c^2 \left(\exp(x(\nu, T)) - 1\right)} \cdot \nu ^ \beta [/math]

where [math]x(\nu, T) = \frac{h \nu}{k_B T}[/math].

Uncertainty estimation is performed in the same way as in uc_cc: given uncertainty values of the bandpass response, we can generate Monte Carlo samples by repeatedly simulating bandpass responses and calculating the bandpass transformation factor for those responses. The standard deviation of these samples is then reported as the uncertainty of the factor.
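
As an illustration of the transformation equation and of the Monte Carlo uncertainty estimate, here is a small numpy sketch. It is purely illustrative: the reference spectrum f and source spectrum S are passed in as callables, so no particular convention is asserted, and the function names are hypothetical rather than the actual uc_cc interface:

 # Minimal sketch of the bandpass transformation factor and its Monte Carlo uncertainty.
 import numpy as np

 def transform_factor(nu, R_old, R_new, nu0_old, nu0_new, f, S):
     """Ratio tilde{S}_new / tilde{S}_old for the given bandpass responses."""
     ref = np.trapz(R_old * f(nu, nu0_old), nu) / np.trapz(R_new * f(nu, nu0_new), nu)
     src = np.trapz(R_new * S(nu), nu) / np.trapz(R_old * S(nu), nu)
     return ref * src

 def transform_factor_error(nu, R_old, dR_old, R_new, dR_new,
                            nu0_old, nu0_new, f, S, nsamples=1000, seed=0):
     """Monte Carlo error estimate: perturb the responses by their 1-sigma
     uncertainties, recompute the factor, and report the sample standard deviation."""
     rng = np.random.default_rng(seed)
     samples = []
     for _ in range(nsamples):
         Ro = R_old + rng.standard_normal(nu.size) * dR_old
         Rn = R_new + rng.standard_normal(nu.size) * dR_new
         samples.append(transform_factor(nu, Ro, Rn, nu0_old, nu0_new, f, S))
     return np.std(samples)

 # Example usage with a power-law source spectrum S(nu) = (nu / nu0)**alpha:
 # factor = transform_factor(nu, R_old, R_new, nu0_old, nu0_new,
 #                           f=lambda nu, nu0: nu / nu0,
 #                           S=lambda nu: (nu / 100e9) ** -1.0)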

Masking[edit]

This functionality gives the user the possibility to mask away areas of a sky map using either a PLA mask, a mask generated from source catalogues, or a custom uploaded mask. The basic masking operation is straightforward: For every masked pixel, that pixel in the target map is set to zero.

The first option is using a predefined mask stored in PLA, all of which can be found using the dropdown tool. Each mask file can contain several columns, each of which typically represents various threshold values chosen for the masks.

The second option is generating a mask from one or more source catalogues. The user will then choose the source catalogues to use.

The user then has to select the radius around each source they want to mask away. There are two different ways to define this radius: Either a fixed radius, defined by a value and a unit, or a dynamic radius that depends on the signal-to-noise ratio of the source measurement. This is the algorithm that was used in the generation of the LFI masks. The exact equation that is used is

[math] r = \frac{FWHM}{2 \sqrt{2\log(2)}}\cdot f [/math]

where

[math] f = \sqrt{2 \log\bigl(\frac{amp}{0.1 * noise}\bigr)} [/math]

Here, amp and noise are columns from the source catalogue and describe some flux measure and uncertainty measure connected to that flux, respectively. The user also defines the FWHM in the above equation.
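
For reference, the dynamic radius above can be evaluated directly; a minimal sketch (amp and noise are the catalogue columns described above, and FWHM is the user-supplied value):

 # Minimal sketch of the dynamic masking radius, assuming numpy.
 import numpy as np

 def mask_radius(fwhm, amp, noise):
     """Radius around a source, in the same units as fwhm."""
     sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to Gaussian sigma
     f = np.sqrt(2.0 * np.log(amp / (0.1 * noise)))      # signal-to-noise dependent factor
     return sigma * f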

After selecting the radius around each source, the user can define filters which take away sources that do not meet certain criteria. There are three types of filters: A) Those that filter away those sources where a given column is greater/lower than a given value, B) those that filter away those sources where the ratio of two given columns is greater/lower than a given value, and C) those that filter away those sources where a flag is set to False.

Several such filters can be created, and each of them is applied to a given source catalogue.

The third option is uploading a custom mask. This mask must be a FITS file containing a single extension (in addition to the primary extension), and the first column of this file will be used. The mask must be in HEALPix format, and if the NSIDE and/or ORDERING parameters of this mask differ from those of the target map, the mask will be automatically converted before application (when degrading the mask, we perform a simple average of the subpixels to determine whether the superpixel should be masked; if its value is >= 0.5, it will be). The FITS file can be zipped as a tar.gz file.
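
A hedged sketch of the automatic regridding described above, using healpy. The >= 0.5 threshold when degrading follows the text; the handling of upgrading and of the ordering conversion is an assumption about how it might be done, not the verified PLA behaviour:

 # Minimal sketch of adapting a custom mask to the target map, assuming healpy.
 import healpy as hp
 import numpy as np

 def adapt_mask(mask, mask_nside, mask_nested, target_nside, target_nested):
     """Convert a custom HEALPix mask to the NSIDE/ORDERING of the target map."""
     # Match the pixel ordering of the target map.
     if mask_nested and not target_nested:
         mask = hp.reorder(mask, n2r=True)
     elif target_nested and not mask_nested:
         mask = hp.reorder(mask, r2n=True)

     order = 'NESTED' if target_nested else 'RING'
     if mask_nside != target_nside:
         # ud_grade averages subpixels when degrading and replicates them when upgrading.
         regridded = hp.ud_grade(np.asarray(mask, dtype=float), target_nside,
                                 order_in=order, order_out=order)
         if mask_nside > target_nside:
             # Degrading: mask the superpixel if the subpixel average is >= 0.5.
             mask = (regridded >= 0.5).astype(int)
         else:
             # Upgrading: keep any non-zero pixel masked (an assumption of this sketch).
             mask = (regridded > 0).astype(int)
     return mask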

The output file will contain the input map, with the pixels indicated by the chosen mask set to zero. The mask itself will also be appended as a separate column at the end of the FITS extension.

Noise Map Cutout[edit]

The purpose of this functionality is to generate a noise map cutout from one of the simulated noise realizations available in the PLA using the same cutting settings (region of interest, resolution, and rotation angle) used to generate the associated frequency map cutout, at that frequency and for the same time coverage.

Effective Beam Average[edit]

The purpose of this functionality is to provide the user with an average over the effective beams for a map cutout. Given a cutout, the algorithm takes the beams whose centre pixels are the four corners of the cutout, as well as the central pixel of the cutout, and it then performs a weighted average of these five beams, where the weights are given by the number of hits in the centre pixel of each beam. The effective beams are defined in this document.
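
A minimal sketch of the weighted beam average described above. The five beam images and per-beam hit counts are assumed to have been extracted already; the names are illustrative:

 # Minimal sketch of the effective-beam average, assuming numpy.
 # beams: list of 5 beam images (four corner pixels + central pixel of the cutout),
 # hits:  number of hits in the centre pixel of each of those beams.
 import numpy as np

 def effective_beam_average(beams, hits):
     beams = np.asarray(beams, dtype=float)
     weights = np.asarray(hits, dtype=float)
     # Hit-count weighted average of the five effective beams.
     return np.tensordot(weights, beams, axes=1) / weights.sum()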

Map Making from Time-ordered data[edit]

The majority of the maps ingested in the PLA are those generated by the Planck Data Processing Centres (DPCs). In addition to maps, the PLA also provides access to Time-ordered data, which can be used to generate new maps. The new maps will be different from those in the PLA since the map-making method is different, and the selection of Time-ordered data may be different. We caution the user that the map-making methods used by PLA are much less sophisticated than the ones used in the Planck DPCs. The new maps generated should only be used as a rough initial approximation, for example to gauge the effect of deselecting part of the input time-ordered data.

Two map-making tools are provided by the PLA. The first, ring-based map making, provides an interface to the ring-based map-making software of Keihänen et al. (in preparation).

The option ‘Remove temperature monopole’ subtracts the average value of the map pixels from all pixels.

The output from the ring-based map making is three FITS files each containing one or more full-sky HEALPix maps: The map itself (I or IQU), the hit map (number of observations in each pixel) and the white noise covariance matrix (either 1 column if only temperature is selected, or six columns containing all IQU correlations if polarization is selected as well).

The second map-making option, called ‘Baseline-removed pixel averaging’, is a timeline-based mapmaker. It basically consists of binning individual samples into HEALPix pixels, and is therefore the simplest possible map-making. The user selects a region or timespan, along with instrumental parameters, and the timelines corresponding to the chosen parameters are processed in the following way (this assumes both temperature and polarization maps are requested; the temperature-only case is analogous, but simpler):

We first define [math]s_r[/math] and [math]s^2_r[/math] as the baseline-subtracted signal observed at sample [math]r[/math], and the signal observed at sample [math]r[/math] squared, respectively (to be clear: [math]s^2 \doteq (s_1^2, s_2^2\dots)[/math] etc., and not the alternative interpretation where [math]s^2 = s^Ts[/math]. Alternatively, one could say that [math]s^2 = diag(s^Ts)[/math]). These will be nsample-sized vectors.

In order to baseline-subtract the signal, we follow two different approaches for LFI and HFI, respectively:

LFI: In this case the subtraction is simple: Each signal timeline comes with a corresponding baseline, and we simply subtract the offset for the given detector from the signal for that detector.

HFI: In this case the offset data for all timelines is stored in a single file, and we must use the metadata of the signal timeline to find the offset array that corresponds to that timeline. The way this is done is by first extracting the 'BEGRING', 'ENDRING', 'BEGIDX' and 'ENDIDX' keywords from the signal timeline, and comparing these with the data in HFI_ROI_DestripeOffsets_R2.02_full.fits and HFI_ROI_GlobalParams_R2.02_full.fits in the following way:

First the 'start ring index' is found by subtracting the 'BEGRING' of the global parameter file from the 'BEGRING' of the science timeline. The 'end ring index' is similarly found by subtracting the 'BEGRING' of the global parameter file from the 'ENDRING' of the science timeline.

For each ring index between the start and end ring indices, we locate the sample indices inside the given ring by fetching all sample indices between that ring index and the ring index + 1. This is done by accessing the 'BEGIDX' field of the global parameter data at ring index and ring index + 1. We then map this set of sample indices to the actual offset data for that ring, which we do by going into the DestripeOffsets file and accessing the offsets at the given indices (belonging to that specific ring). We do this by accessing the ring index'th row of the DestripeOffsets data, also taking care to use the right detector name to access the correct field. We use the first element of the resulting array, which corresponds to the full-map offset (as opposed to the first and second half-ring offsets, which are the two other elements of that array).

We then have the offset data for all rings between BEGRING and ENDRING (in the signal timeline) mapped to sample indices. We can then access the offset data corresponding to the sample indices we actually need, which are the sample indices between BEGIDX and ENDIDX from the signal timeline. From these we create the final offset array and subtract it from the signal timeline.
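
The ring-to-sample bookkeeping for the HFI baselines can be sketched as follows. This is a hypothetical reading of the procedure above, assuming the relevant columns of the GlobalParams and DestripeOffsets files have already been read into arrays; the argument layouts are assumptions based on the description, not a verified file format:

 # Hedged sketch of building the per-sample HFI offset array, assuming numpy.
 import numpy as np

 def build_offset_array(sci_begring, sci_endring, sci_begidx, sci_endidx,
                        glob_begring, glob_begidx, destripe_offsets):
     """
     sci_*            : BEGRING/ENDRING/BEGIDX/ENDIDX keywords of the signal timeline
     glob_begring     : BEGRING keyword of the global parameter file
     glob_begidx      : per-ring BEGIDX array from the global parameter file
     destripe_offsets : per-ring offsets for this detector; the first element of
                        each row is the full-map offset (the half-ring offsets
                        are the other two elements)
     """
     start_ring = sci_begring - glob_begring
     end_ring = sci_endring - glob_begring

     # Map every sample index covered by these rings to the ring's full-map offset.
     offsets_by_sample = {}
     for ring in range(start_ring, end_ring + 1):
         first, last = glob_begidx[ring], glob_begidx[ring + 1]
         full_offset = destripe_offsets[ring][0]
         for idx in range(first, last):
             offsets_by_sample[idx] = full_offset

     # Keep only the samples actually present in the signal timeline
     # (the inclusive ENDIDX is an assumption of this sketch).
     return np.array([offsets_by_sample[i] for i in range(sci_begidx, sci_endidx + 1)])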

We then define the pointing matrix [math]A[/math] as the [math]N_{samples} \times N_{pix}[/math] matrix where the entry corresponding to sample [math]r[/math] and pixel [math]p[/math] is given by

[math] A_{r, p} = i(p, r) \cdot [1, \cos(2 \psi(r)), \sin(2 \psi(r))], [/math]

where [math]i[/math] is a function that is 1 if sample [math]r[/math] falls within pixel [math]p[/math] and 0 otherwise, and [math]\psi[/math] is the pointing angle of sample [math]r[/math]. This means that each element of [math]A[/math] embeds a 3-dimensional space containing the pointing information for that sample.

We then create the diagonal [math]N_{pix} \times N_{pix}[/math] matrix [math]W[/math], defined as

[math] W = A^TA, [/math]

where now each diagonal element of [math]W[/math] is a [math]3\times 3[/math] matrix. Each diagonal element of [math]W[/math] thus looks like this (we let [math]T[/math] be the set of all timelines, and [math]ns[/math] be the function giving the number of samples in a timeline):

[math] W_{p, p} = \sum_{q \in T} \sum_{r=1}^{ns(q)} i(p, r) \cdot \begin{pmatrix} 1 & \cos(2 \psi(r)) & \sin(2 \psi(r)) \\ \cos(2 \psi(r)) & \cos^2(2\psi(r)) & \cos(2 \psi(r))\sin(2\psi(r)) \\ \sin(2 \psi(r)) & \cos(2\psi(r))\sin(2\psi(r)) & \sin^2(2\psi(r)) \end{pmatrix} [/math]


The map, [math]m[/math], in pixel [math]p[/math] will then be given by

[math] m_p = (W_{p, p})^{-1} \cdot (A^Ts)_{p}, [/math]

where all the matrix operations now happen in the 3-dimensional sub-space mentioned above, except for [math]A^Ts[/math], which projects the [math]n_{samp}[/math]-sized vector onto the [math]n_{pix}[/math]-sized space.

Further, the rms of pixel [math]p[/math] is given by

[math] rms_p = \frac{1}{n_{samp}}\sqrt{\mathrm{diag}\left((W_{p, p})^{-1} \cdot (A^Ts^2)_{p}\right) - m_p^Tm_p}, [/math]

where again all matrix operations except for [math]A^Ts^2[/math] happen in the 3-dimensional sub-space, and [math]n_{samp}[/math] is the number of samples binned into pixel [math]p[/math]; this quantity is given by the [math](1, 1)[/math] element of [math]W_{p, p}[/math].

For temperature only, the 3-dimensional sub-space is reduced to 1 dimension, and all operations in that space reduce to simple scalar operations - i.e. weighted averaging of all the timeline samples falling in each pixel.

The output from this mapmaker is [math]m, rms[/math], and [math]W[/math].
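
A minimal numpy sketch of the binned (I, Q, U) solution described above. The inputs (per-sample pixel index, polarisation angle and baseline-subtracted signal) are assumed to have been prepared already, and the rms expression follows the formula above as literally as possible:

 # Minimal sketch of the baseline-removed pixel-averaging mapmaker, assuming numpy.
 import numpy as np

 UNSEEN = -1.6375e30   # healpy's UNSEEN sentinel, used here for empty pixels

 def bin_map_iqu(pix, psi, signal, npix):
     """pix, psi, signal: per-sample pixel index, polarisation angle and
     baseline-subtracted signal.  Returns the per-pixel map m, rms and W."""
     W = np.zeros((npix, 3, 3))   # per-pixel 3x3 matrix A^T A
     b = np.zeros((npix, 3))      # per-pixel A^T s
     b2 = np.zeros((npix, 3))     # per-pixel A^T s^2
     nhits = np.zeros(npix)

     for p, ang, s in zip(pix, psi, signal):
         v = np.array([1.0, np.cos(2.0 * ang), np.sin(2.0 * ang)])
         W[p] += np.outer(v, v)
         b[p] += v * s
         b2[p] += v * s * s
         nhits[p] += 1

     m = np.full((npix, 3), UNSEEN)
     rms = np.full((npix, 3), UNSEEN)
     for p in range(npix):
         if nhits[p] < 3:         # not enough samples (or angle coverage) to invert W
             continue
         m[p] = np.linalg.solve(W[p], b[p])
         # Literal reading of the rms formula: diag(W^-1 A^T s^2) - m^T m, scaled by 1/n;
         # np.abs guards against small negative values from noise.
         var = np.linalg.solve(W[p], b2[p]) - np.dot(m[p], m[p])
         rms[p] = np.sqrt(np.abs(var)) / nhits[p]
     return m, rms, W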

Component Separation[edit]

Component separation is the process of estimating the contribution of a specific source of emission to an observed map. The PLA provides a number of all-sky maps of components which have been derived from Planck observations by the Planck Data Processing Centres. These maps have been generated using specific algorithms optimized in certain ways, using certain input data, and targeted at producing all-sky maps. The PLA also allows the user to generate new estimates of components using Planck observations. The methods offered are very simplified versions of the DPC algorithms, but may offer certain advantages, as they allow the user to select the input data and to target small regions of the sky.

The PLA offers two methods to carry out component separation: an Internal Linear Combination (ILC) algorithm, and a parametric-model maximum-likelihood algorithm. The ILC method estimates the CMB only, whereas the parametric method estimates a set of sources defined by model SEDs. Both can be applied either to the entire sky or to a specified cutout.


ILC component separation[edit]

The ILC component separation algorithm follows the standard ILC procedure (e.g. Bennett, C. L. et al. 2003b, ApJS, 148, 97): Assume that for a map observed at a given channel [math]k[/math] we have

[math] T(\nu_k) = T_{cmb} + T_{res}(\nu_k), [/math]

where [math]T_{cmb}[/math] is the CMB contribution, which is assumed to be independent of channel as long as the map is in thermodynamic units, and [math]T_{res}(\nu_k)[/math] is the residual (non-CMB) contribution at that channel. We then form the combination

[math] T = \sum_k w_kT(\nu_k) [/math]

where [math]w_k[/math] is a weight applied to that particular channel, subject to the constraint that

[math] \sum_k w_k = 1. [/math]

With this constraint,

[math] T = T_{cmb} + \sum_k w_k \cdot T_{res}(\nu_k), [/math]

meaning that if we can choose weights [math]w_k[/math] that minimize the second sum, we end up with a map that is close to the CMB contribution. The weights that minimize this sum are given by

[math] w_k = \frac{\sum_j C_{k, j}^{-1}}{\sum_{i, j} C_{i, j}^{-1}}, [/math] where [math]C[/math] is the [math]n_{maps} \times n_{maps}[/math] sample covariance matrix of the input maps.
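
A minimal sketch of the ILC weight computation and combination, assuming numpy. The maps are assumed to have been converted to K_CMB and smoothed to a common resolution already, as described below; the function name is illustrative:

 # Minimal sketch of the ILC weights and solution, assuming numpy.
 # maps: array of shape (n_maps, n_pix), already in K_CMB at a common resolution.
 import numpy as np

 def ilc(maps):
     # Sample covariance matrix between the input channels (n_maps x n_maps).
     C = np.cov(maps)
     Cinv = np.linalg.inv(C)
     ones = np.ones(maps.shape[0])
     # w_k = sum_j Cinv_kj / sum_ij Cinv_ij, which enforces sum_k w_k = 1.
     w = Cinv @ ones / (ones @ Cinv @ ones)
     # ILC map and per-channel residuals (input minus ILC solution).
     ilc_map = w @ maps
     residuals = maps - ilc_map
     return ilc_map, residuals, w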


If polarization is enabled, there will be two sets of weights: one for the [math]I[/math] field, and one for the [math]Q[/math] and [math]U[/math] fields. Only those selected maps that have a polarization component will be used for calculating the [math]Q[/math] and [math]U[/math] weights.

The user can select a number of Planck maps and a selection of external maps, as well as a target resolution and degree of smoothing. The function will convert the maps to thermodynamic temperature and smooth all maps in harmonic space with the target smoothing FWHM before projecting them onto a map with the target [math]N_{side}[/math]. This is done regardless of whether the maps have different resolutions, and it is recommended to choose an FWHM that corresponds to the lowest-resolution map (or lower); the system does not attempt to sanity-check the FWHM that the user prescribes.

The output from the ILC functionality is a file containing, as its first 1, 2, or 3 columns (depending on whether I, QU or IQU was chosen, respectively), the ILC solution. The subsequent columns contain the residuals of each of the input maps, defined as the ILC solution subtracted from the input map at each channel (after the input maps have been converted to K_CMB). The number of residual columns is equal to n × (1, 2, or 3), where n is the number of input maps and the last factor again depends on whether I, QU or IQU was chosen.


Parametric model maximum likelihood component separation[edit]

This method uses the 32-band temperature data and parametric model described in the 2015 diffuse components paper. The user can select which of the 32 bands and which of the parametric components they wish to include in the analysis. They also must choose a region in which to perform the analysis.

Using this data we can define a chi-squared for each pixel as follows:

[math] \chi^2 = \sum_{i=1}^{n_{bands}} \frac{(m(i) - d_i) ^ 2}{rms_i^2}, [/math] where [math]m(i)[/math] is the model evaluated at band [math]i[/math], and [math]d_i[/math] and [math]rms_i[/math] are the signal observed by, and the rms corresponding to, that band, respectively.

We start from the parameter results from the abovementioned paper and run a Powell minimization algorithm to minimize the chi-squared until we have found parameter values for all pixels in the target region.
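
A hedged sketch of the per-pixel fit: the model function and parameter names here are placeholders (the actual parametric model is the one from the 2015 diffuse components paper), but the chi-squared and the Powell minimization step are as described above:

 # Hedged sketch of the per-pixel parametric fit, assuming numpy and scipy.
 import numpy as np
 from scipy.optimize import minimize

 def chi_squared(params, data, rms, model):
     """Chi-squared summed over bands for one pixel; `model` evaluates the
     parametric sky model at every band for the given parameter vector."""
     m = model(params)
     return np.sum((m - data) ** 2 / rms ** 2)

 def fit_pixel(data, rms, model, start_params):
     """Powell minimisation of the chi-squared, starting from the 2015 best fit."""
     result = minimize(chi_squared, start_params, args=(data, rms, model),
                       method='Powell')
     return result.x, result.fun   # best-fit parameters and chi-squared value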

Global parameters are fixed to their best-fit values from the 2015 paper, as are monopole and dipole values.

The output from this functionality consists of the best-fit parameter values of the components that the user has chosen to include, as well as the chi-squared value in each pixel. The user can also elect to receive the residual maps (the data minus the best-fit model) in a single file. As the data are given at [math]N_{side}=256[/math] and smoothed to 60 arcmin, the output residual maps and parameters will also have this resolution, although they will be reprojected onto the cutout the user has chosen.

Note that results from this will likely be poorer in regions of high activity where we have a limited understanding of the physical processes, such as close to the galactic center. See the masks and maps in the abovementioned paper to get an idea of which regions might be less contaminated and thus give better results.

Bandpass Leakage Correction[edit]

The purpose of this functionality is to add or subtract correction templates available in the PLA to or from the associated frequency maps, in order to “apply” or “remove” corrections that were introduced by the Planck Data Processing Centres, mainly to correct for systematic effects. This functionality is intended for advanced users of the PLA, and the Planck Collaboration recommends not using uncorrected maps for science analysis unless the user fully understands the impact of removing these corrections. Once the user selects the correction map to be applied or removed, based on the type of correction, frequency, and mission coverage, the system automatically determines to which map the correction should be applied, or from which map it should be removed, and the corresponding corrected or uncorrected map is generated.

Custom Bandpasses[edit]

There are several functionalities that allow the user to upload custom bandpasses: Unit conversion, colour correction, bandpass transformation, ILC component separation, and component subtraction. These bandpasses must be an ASCII file containing two or three columns. The first column contains the frequency at which the response is defined, given in Hz. The second contains the actual bandpass response, and the optional third column contains the uncertainty of the bandpass response, assumed to be one standard deviation. If this column is present, an uncertainty estimate will be provided (for those functionalities that provide such an estimate).

The currently available bandpasses are Planck, WMAP and Herschel SPIRE.

The SPIRE PLW and PMW bandpasses are based on the data found here. Note that the bandpasses we currently possess were created using lambda-squared weighting, which the SPIRE team has since changed, so they do not correspond to the current state of the SPIRE bandpasses. The WMAP bandpasses are those delivered by the Commander team from their 32-band run for the 2015 diffuse components paper.

A custom bandpass should be:

  • A text file
  • Comment lines must start with a # character
  • Comments are allowed only at the top of the file
  • There is no need for column headers
  • The columns can be space- or tab-separated
  • There is no need for the columns to be aligned vertically
  • The structure is a two- or three-column file with the following columns: frequency, response, and (optionally) standard_error
  • To prevent abuse of the system, the file may contain at most 50,000 lines
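
A minimal sketch of reading and validating a custom bandpass file that follows the rules above. The limits and column interpretation come from the list; the function itself is illustrative, not the PLA's actual validator:

 # Minimal sketch of reading/validating a custom bandpass ASCII file, assuming numpy.
 import numpy as np

 def read_custom_bandpass(path):
     data = np.loadtxt(path, comments='#', ndmin=2)
     if data.shape[0] > 50000:
         raise ValueError("bandpass file exceeds the 50,000-line limit")
     if data.shape[1] not in (2, 3):
         raise ValueError("expected 2 or 3 columns: frequency [Hz], response, [error]")
     freq, response = data[:, 0], data[:, 1]
     error = data[:, 2] if data.shape[1] == 3 else None   # optional 1-sigma uncertainty
     return freq, response, error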

Planck Sky Model[edit]

The current implementation of the Planck Sky Model in the PLA uses PSM version 2.0.7, released 3 July 2017.

The Planck Sky Model is an asynchronous operation, meaning that the job is submitted to the PLA and an email is sent to the user when the job is completed, with links to pick up the results. The waiting time is not fixed; it depends on the complexity of the submitted job and the current load of the PLA backend.

The execution of PSM follows a simple two step process that can be summarized in the following diagram.

PSM fig1.png: diagram of the two-step PSM execution process.

The user basically has three execution paths:

  • Generate a new Sky only
  • Generate a new Sky and perform a Sky Observation or
  • Use an existing Sky Map and perform a Sky Observation (currently not available).

Generate a New Sky[edit]

In this step a Planck Sky is generated according to the specifications of the user.

If this is the only step in the PSM execution path (that is, it is not followed by a Sky Observation), the generated Sky will be saved, for later use by any user of PLA. Saved PSM skies have a minimum lifespan of one week, after which they are removed automatically by the system.

The purpose of the saved Sky Maps is to give the user the opportunity to perform multiple Sky Observations (in subsequent PSM executions) without having to recreate the same Sky Map each time.

For the generation of the new Sky the user is asked to fill in the following parameters:

Info and Control
  • Precision: Floating point precision of the output maps
  • Fields: Model and process temperature only or both temperature and polarisation
  • Seed: Specify the seed for random number generation
  • Sky Pixel Window: Whether sky maps are sampled at pixel centers (0) or averaged over pixel areas (1)
Sky Model Parameters
  • Sky Resolution: Resolution of sky maps in arc-minutes (Gaussian)
  • Sky LMAX: Specify the harmonic band limit of the PSM simulation
  • HEALPix Nside: Nside parameter for sky HEALPix maps
Cosmological Parameters
  • T CMB: CMB temperature (Kelvin)
  • Hubble H100: Reduced Hubble parameter at the present time (h = H0/100, with H0 in km/s/Mpc)
  • ωm: Matter density parameter
  • ωb: Baryonic matter density parameter
  • ωk: Curvature density parameter Ωk. This sets the dark energy density parameter as ΩDE = 1 − Ωk − Ωm
  • σ8: Amplitude of matter perturbations at the scale of 8 h⁻¹ Mpc
  • ns : Scalar spectral index ns of primordial fluctuations
  • ns running: Running of the scalar spectral index ns
  • nt: Tensor spectral index nt of primordial fluctuations
  • nt running: Running of the tensor spectral index nt
  • r: Tensor to scalar ratio (primordial power at kpivot)
  • TAU reion: Reionisation optical depth
  • He fraction: Helium fraction (by mass)
  • nmassless nu: Number of massless (i.e. relativistic) neutrino species
  • W dark energy: w parameter for the equation of state of dark energy
  • K pivot scalar: The comoving scale kpivot (in Mpc⁻¹) at which the amplitudes of the initial scalar and tensor power spectra are defined
  • A: Amplitude of scalar modes
  • n non-cold dm: Number of non-cold (e.g. massive neutrino) dark matter species
  • M ncdm: Mass of each non-cold dark matter species (eV)
  • T ncdm: Temperature of the non-cold dark matter species, relative to the photon temperature
CMB, Lensing, and Cosmic Structure Power Spectra Parameters
  • CMB CL Source: Source for CMB Cl and lensing potential
  • Generate CAMB outputs: Whether to compute CMB and matter power spectra using CAMB
  • Generate CLASS outputs: Whether to compute CMB and matter power spectra using CLASS
  • Do Mattershells: Whether to compute matter shells
  • Matter LMax: Band limit for Large Scale Structure
  • Low Redshift NBins: Number of redshift shells between 0.01 and 0.1 for matter density perturbations with CLASS
  • Medium Redshift NBins: Number of redshift shells between 0.1 and 1 for matter density perturbations with CLASS
  • High Redshift NBins: Number of redshift shells between 1 and 5 for matter density perturbations with CLASS
  • Non Diagonal Mattershells: How many non-diagonal terms to include to model the correlation between adjacent redshift shells
  • Class Non Linear: Estimation of the non-linear P(k) and Cls with CLASS
Model Selection
  • CMB Monopole
  • Mean primordial y: Value of the mean primordial y parameter
  • Mean primordial mu: Value of the mean primordial mu parameter
  • Randomize: Whether to launch a generic or prediction model
  • CMB Dipole
  • Dipole glon: Dipole galactic longitude (degrees)
  • Dipole glat: Dipole galactic latitude (degrees)
  • Dipole ampl: Dipole amplitude (mK CMB)
  • Randomize: Whether to launch a generic or prediction model
  • Dipole glon error: Uncertainty (1σ) on dipole galactic longitude (degrees)
  • Dipole glat error: Uncertainty (1σ) on dipole galactic latitude (degrees)
  • Dipole ampl error: Uncertainty (1σ) on dipole amplitude (mK CMB)
  • Save dipole map: Whether to save the dipole map (in addition to alm) or not
  • CMB Anisotropies
  • CMB model:
  • Prediction: derived from a CMB map obtained from WMAP 5-year data using a needlet ILC component separation method.
  • Gaussian: the CMB map produced by the PSM is a random realisation of CMB harmonic coefficients alm drawn from Gaussian statistics defined by an input CMB power spectrum (temperature and polarisation).
  • Gaussian with shells: requires the matter shells to be computed for the CMB
  • nongaussian fnl: The PSM can produce simulated CMB maps with non-Gaussianity of the local type. Such CMB realisations have been precomputed and are stored in the PSM data repository, both at lmax=1024 (1000 realisations) and at lmax=3500 (100 realisations). The non-Gaussian CMB model assumes a linear-plus-quadratic model for Bardeen's gauge-invariant curvature potential, where the contribution of the quadratic term is given by a single parameter fnl.
  • The gaussian part should be extended at high ell: Whether to add gaussian fluctuations at l > 3500
  • Renormalise the spectrum of the maps: Whether the power spectrum of the simulated non-gaussian template should be readjusted to match the expectation for the input cosmological parameters
  • The non-linear factor fnl: Whether a specified or a randomly drawn value of fnl is used
  • Whether the map used is drawn at random among specified available maps: Whether the map number is specified or drawn at random among the available maps
  • CMB Lensing: Lensing of the CMB by large scale structure generates small shifts of the CMB temperature and polarisation patterns on the sky. This in turn changes the power spectrum of the temperature and polarisation anisotropies.
  • SZ Emission
  • SZ Model:
  • prediction: The SZ prediction model includes only expected signals from the clusters included in the catalogues specified with the SZ_INPUT_CAT parameter. The parameters CLUSTER_PROFILE, NORM_PROFILE, PROFILE_BOUNDS and CLUSTER_T_STAR are active for this model. This model generates only thermal SZ effect.
  • dmb: The SZ dmb model first generates a catalogue of galaxy clusters according to the mass function specified by the MASS_FUNCTION parameter. For each cluster, the expected SZ signal is computed from a physical model linking mass and redshift to electron density and temperature, using the spherically symmetric profile specified with the CLUSTER_PROFILE parameter. Clusters are distributed at random over the 4π of the sky, with uniform probability. Each cluster is assigned a velocity as a function of its redshift (assuming linear growth of structures); the 3-D velocity vector is drawn at random given the variance of the velocity field at that redshift, for the given cosmological parameters. This model accepts two additional parameters: SZ_CONSTRAINED and SZ_INCLUDE_POLARISED.
  • Include the thermal SZ: Whether to include thermal SZ effects in the sky model
  • Include the kinetic SZ: Whether to include kinetic SZ effects in the sky model
  • Include the polarised SZ: Whether to include polarised SZ effect
  • Relativistic corrections order: Order of relativistic corrections to the thermal SZ effect
  • Use density shells: Whether clusters are in CLASS redshift shells.
  • Cluster mass bias factor: Parameter that connects the X-ray mass estimate used in scaling relations to the true mass of the cluster (used in the mass function)
  • Mass function dN/dMdz: The mass function used to generate the catalogue
  • Lowest mass of included clusters: Lower mass limit of clusters included in the catalogue, in units of 10¹⁵ solar masses
  • Highest mass of included clusters: Upper mass limit of clusters included in the catalogue, in units of 10¹⁵ solar masses
  • Input catalogues: List of catalogues of known clusters to be included in the model sky emission
  • Catalogue contains only known clusters: Whether the catalogue contains real observed clusters
  • Contamination by point sources: Whether to include contamination by point sources
  • Galactic emission
  • Galactic model: There are at present two models of galactic emission, the prediction and simulation models. Both of them are based on the same input galactic templates, but the simulation model generates random small scale structure that is added to the synchrotron, free-free and thermal dust templates if the sky resolution of the PSM run is smaller than the resolution of the available template.
  • Galactic version: choice is among prelaunch and postlaunch
  • Galactic Components:
  • Synchrotron:
  • Synchrotron intensity template: Which synchrotron intensity template to use
  • Synchrotron scaling: choice is among powerlaw and curvpowerlaw
  • Synchrotron index model: choice is among mamd2008, giardino2002, uniform and random.
  • Free-free:
  • Main free-free template map: Template free-free map
  • Electron temperature: Temperature of free-free electrons (in K)
  • Spinning dust:
  • Spinning dust emission law: What emission law should be used to model spinning dust emission
  • Cold Neutral Medium (CNM): Proportion of cold neutral medium for spinning dust emission
  • Warm Neutral Medium (WNM): Proportion of warm neutral medium for spinning dust emission
  • Warm Ionised Medium (WIM): Proportion of warm ionised medium for spinning dust emission
  • Molecular clouds (MOL): Proportion of molecular clouds for spinning dust emission
  • Dark component (DRK): Proportion of dark gas for spinning dust emission
  • Reflection Nebulae (RN): Proportion of reflection nebulae for spinning dust emission
  • SP_DUST extra component (EXTRA): Proportion of the extra component for spinning dust emission
  • Spinning dust polarization fraction: Polarisation fraction assumed for spinning dust emission
  • Thermal dust:
  • Thermal dust model: Which dust model to use among template, ffp7, ffp8, ffp10
  • Thermal dust temperature: In case of a dust model of 'template', specify the dust temperature in K
  • Thermal dust spectral index: In case of a dust model of 'template', specify the dust spectral index
  • Thermal dust amplitude: which dust template to use at 100 micron among SFD, SFDnoHII, FFP6-JD
  • Thermal dust scaling: which dust scaling to use across frequencies among SFD-7, SFD-8
  • CO:
  • Template used for modelling CO lines: Template used for modelling CO lines among Dame, Planck-v1
  • Ratio between J=2-1 and J=1-0 lines: Ratio between J=2-1 and J=1-0 lines, in K.km/s
  • Ratio between J=3-2 and J=2-1 lines: Ratio between J=3-2 and J=2-1 lines, in K.km/s
  • PS emission
  • PS model: There are two point source models implemented in the PSM: prediction and simulation. As many radio sources are variable, the prediction model comprises only infrared sources and ultra-compact HII regions, modelled on the basis of extrapolations of real IRAS sources. The simulation model comprises fake (faint) infrared sources to homogenize the IRAS coverage, and extrapolations of radio sources observed at frequencies ranging from 850 MHz to 4.85 GHz.
  • Include radio sources: Whether to include radio sources in the sky model
  • Include IR sources: Whether to include infrared sources in the sky model
  • Use real sources: Whether to use real sources
  • Strong Sources to cat: Whether to make an observed catalogue of strong sources
  • Strong sources to map: Whether to make an observed map of strong sources
  • Strong source limit freq. (GHz): Set of frequencies used to separate between strong and faint sources
  • Strong source limit flux (Jy): Flux limits in Jy, above which sources are considered as strong (must be a list of same size as the list of corresponding frequencies above)
  • Mean IR Polar Degree: Typical degree of polarisation of the infrared sources
  • FIRB emission
  • FIRB model: There is at present one single model of emission for the far infrared background, due to a collection of blended high redshift infrared sources.
  • Polarized: Whether the simulations will contain the polarised FIRB
  • Mean Spheroid Polar Degree: Typical degree of polarisation of the proto-spheroids
  • Mean Spiral Polar Degree: Typical degree of polarisation of the spirals
  • Mean Starburst Polar Degree: Typical degree of polarisation of the starburst

Sky Observation[edit]

During this step the user is asked to select the instruments that they would like to use to observe the Sky. They can also define their own custom instruments, based on the provided Instrument templates.

The predefined list of Instruments and their parameters is:

  • LFI
  • Version: Version for LFI (default R2.50)
  • 30GHz Channels: List of 30 GHz channels for LFI frequencies {030-28S 030-28M 030-27S 030-27M} and/or F030 and/or alldet or none
  • 44GHz Channels: List of 44 GHz channels for LFI frequencies {044-26S 044-26M 044-25S 044-25M 044-24S 044-24M} and/or F044 or alldet or none
  • 70GHz Channels: List of 70 GHz channels for LFI frequencies {070-23S 070-23M 070-22S 070-22M 070-21S 070-21M 070-20S 070-20M 070-19S 070-19M 070-18S 070-18M}
  • Pix: Pixelisation for all LFI instruments, can be {sky instr}
  • Noise: Noise for LFI instrument, can be {nominal none}
  • HFI
  • Version: Version for HFI (default R2.00)
  • 100GHz Channels: List of 100 GHz channels for HFI frequencies {100_1a 100_1b 100_2a 100_2b 100_3a 100_3b 100_4a 100_4b} and/or F100 and/or alldet or none
  • 143GHz Channels: List of 143 GHz channels for HFI frequencies {143_1a 143_1b 143_2a 143_2b 143_3a 143_3b 143_4a 143_4b 143_5 143_6 143_7 143_8} and/or F143 and/or alldet or none
  • 217GHz Channels: List of 217 GHz channels for HFI frequencies {217_1 217_2 217_3 217_4 217_5a 217_5b 217_6a 217_6b 217_7a 217_7b 217_8a 217_8b} and/or F217 and/or alldet or none
  • 353GHz Channels: List of 353 GHz channels for HFI frequencies {353_1 353_2 353_3a 353_3b 353_4a 353_4b 353_5a 353_5b 353_6a 353_6b 353_7 353_8} and/or F353 and/or alldet or none
  • 545GHz Channels: List of 545 GHz channels for HFI frequencies {545_1 545_2 545_3 545_4} and/or F545 and/or alldet or none
  • 857GHz Channels: List of 857 GHz channels for HFI frequencies {857_1 857_2 857_3 857_4} and/or F857 and/or alldet or none
  • Pix: Pixelisation for all HFI instruments, can be {sky instr}
  • Noise: Noise for HFI instrument, can be {nominal none}
  • IRAS
  • Version: Version for IRAS {no_iras, ideal, v1}
  • Channels: List of channels for IRAS available {100micron, 60micron} or none
  • Channel units: Units for output map observations of IRAS instrument
  • Pix: Pixelisation for all IRAS instruments, can be {sky instr}
  • Noise: Noise for IRAS instrument, can be {nominal none}
  • WMAP
  • Version: Version for WMAP {no_wmap, 9yr}
  • Channels: List of channels for WMAP in frequency list {K, Ka, Q, V, W} or none
  • Channel units: Units for output map observations of WMAP instrument
  • Pix: Pixelisation for all WMAP instruments, can be {sky instr}
  • Noise: Noise for WMAP instrument, can be {nominal none}
  • COrEplus
  • Bandshape: Type of instrument bands ('TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Noise: Noise for PSM instrument (nominal)
  • Customize: selection of frequencies
  • LiteBIRD
  • Bandshape: Type of instrument bands ('TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Stokes: TQU
  • Observation Units: K_RJ/sr
  • Noise: Noise for PSM instrument (nominal)
  • Customize: selection of frequencies
  • LiteCOrE120
  • Bandshape: Type of instrument bands ('TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Stokes: TQU
  • Observation Units: K_RJ/sr
  • Noise: Noise for PSM instrument (nominal)
  • Customize: selection of frequencies
  • CMBS4_SouthPole
  • Bandshape: Type of instrument bands ('TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Stokes: TQU
  • Observation Units: K_RJ/sr
  • Noise: Noise for PSM instrument (nominal)
  • Customize: selection of frequencies
  • CMBS4_Atacama
  • Bandshape: Type of instrument bands ('TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Stokes: TQU
  • Observation Units: K_RJ/sr
  • Noise: Noise for PSM instrument (nominal)
  • Customize: selection of frequencies

The predefined Instrument templates used to define a custom Instrument require the following parameters:

  • Name: Name of the instrument
  • Bandshape: Type of instrument bands (can be 'DIRAC', 'TOPHAT')
  • Bandwidth: Bandwidths Delta_nu/nu (for TOPHAT bands)
  • BeamType: beam type (can be GAUSSIAN)
  • Stokes: can be {TQU, T}
  • Observation Units: {K_RJ/sr, Jy/sr, K_CMB, K/CMB, y_sz, W/m2/sr/Hz}
  • Noise: Noise for PSM instrument (can be nominal or none)
  • Customize: selection of frequencies
  • Number of Frequencies (GHz)
  • Central Frequencies (GHz)
  • Beam (arcminutes)
  • Temperature Noise Level (uK_CMB.arcmin)
  • Polarization Noise Level (uK_CMB.arcmin)

Use Existing Planck Sky (Currently not available)[edit]

As mentioned before, the generated Planck Sky Maps are preserved for repeated use.

In addition to the list of user-generated Planck Sky Maps, the PLA also offers a list of previously PLA-generated Sky Maps.

The list of PLA-provided Sky Maps will include the Full Focal Plane Simulation 10 (FFP10).

Once the user has elected to use a pre-existing Sky, the only operation they can subsequently perform is a Sky Observation.

References[edit]

Planck Legacy Archive

Cosmic Microwave Background

Full-Width-at-Half-Maximum

Sunyaev-Zel'dovich

(Planck) High Frequency Instrument

(Planck) Low Frequency Instrument

Flexible Image Transport System

Hierarchical Equal Area isoLatitude Pixelization of a sphere; HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelmann, ApJ, 622, 759-771 (2005).

Ring-Ordered Information (DMC group/object)

Data Processing Center

Planck Sky Model