Planck Added Value Tools Documentation
- 1 General information about the functionality offered
- 2 Component Subtraction
- 3 Factor Computations
- 4 Noise Map Cutout
- 5 Effective Beam Average
- 6 Map Making from Time-ordered data
- 7 Component Separation
- 8 Bandpass Leakage Correction
- 9 Custom Bandpasses
- 10 Planck Sky Model
- 11 References
General information about the functionality offered
The PLA offers several functionalities for the user to manipulate the data products it contains. Some of this functionality is intended to allow users to modify the products themselves, while other parts are intended to allow users to use the existing products to create new data.
These functionalities are broken down into two categories, based on the synchronous or asynchronous nature of their implementation. They are summarised here and explained in detail in later parts of the Explanatory Supplement.
The operations in the first category take place in real time (or near-real time), and the user gets back their answer in a matter of seconds.
These operations include:
- Component subtraction, allowing a user to subtract from a map certain physical components;
- Unit conversion, allowing a user to convert the units of a map;
- Colour correction, allowing the user to convert a map to a different assumed spectral index;
- Bandpass transformation, allowing the user to convert a map to what it would look like if observed with a different bandpass;
- Masking, allowing a user to mask away parts of a map, using existing or user-defined masks;
- Bandpass Leakage Correction, allowing the user to undo leakage corrections that were applied by the Planck team.
All of the above functionalities are available both for map cutouts and for full sky maps.
In the case of the Full Map Operations, the responses are asynchronous (see next section), due to the size and the potentially large number of maps selected.
The user can select one or more of these functionalities to apply to one or more full maps, or a cutout. If several functionalities are selected, they are applied in the order above, i.e., component subtraction takes place first, then unit conversion, colour correction, bandpass transformation, and finally masking. Note, however, that certain functionalities might invalidate others; for example, in order to perform colour correction, a map must be in (M)Jy/sr units, and if the user does not convert a map to this unit, colour correction will not take place.
In the second category, the operations performed by the user are queued for execution by the PLA server, and once execution has completed the results are emailed back to the user in the form of download links. The user can then download their results at their convenience.
In this category the following operations are included:
- Effective Beam Average, allowing a user to calculate the average of the Planck effective beams over a given cutout of the map;
- Noise Map Cutout, creating a cutout of the noise maps in the PLA;
- Map-Making, providing the user with several tools to create a map cutout from the time-ordered data;
- Component Separation, providing the user with the possibility to use Planck data to produce maps of various astrophysical components.
Component Subtraction
This functionality allows the user to subtract either a CMB map or a parametric component map from a target frequency map. Details about the components that can be subtracted are given below. Generally, the user selects a number of "components", of which CMB maps can be a part, and the algorithm uses the spectral model of each component to extrapolate the component signal to the frequency of the target map. Several components can be chosen, in which case the sum of the extrapolated components is subtracted from the target map. The subtraction takes place in harmonic space to avoid pixelization effects, and the user can select a resolution ("Output NSIDE") as well as the smoothing FWHM applied to the components before subtraction ("Smoothing→Component") and to the map after subtraction ("Smoothing→Target"). If a component map also contains a polarization map, this can likewise be subtracted from the target map (provided the target map contains polarized components, and the components to be subtracted have a polarized model). Some of the components come in several resolutions; for these, the user can choose ("Available Nsides") and is informed of the Nside of the map they have chosen ("Input Nside"). Note, however, that some resolutions might not contain a polarization component; selecting such a resolution disables the possibility of subtracting the polarization component from the target map.
CMB: All CMB maps are available as temperature-only Nside=2048 maps and temperature + polarization Nside=1024 maps. In addition, the Commander temperature+polarization CMB map is available at Nside=256. These maps, before subtraction, are converted into the same units as the target map using the appropriate bandpass (the one belonging to the target map).
Other components: The best-fit component maps from the 32-band run of the 2015 diffuse components paper are available as Nside=256 maps. Each component is extrapolated to the frequency of the target map using the rules for the spectral behaviour of those components, detailed in that paper. They are then converted to the same units as the target map before they are subtracted.
These components consist of the following.
Synchrotron, a low-frequency template (GALPROP z10LMPD_SUNfE from Orlando and Strong (2013)) with an amplitude and pivot frequency fit. The temperature best-fit parameters used for this component are contained in COM_CompMap_Synchrotron-commander_0256_R2.00.fits, while the polarization parameters are contained in COM_CompMap_SynchrotronPol-commander_0256_R2.00.fits.
Free-free, the Draine (2011) two-parameter bremsstrahlung model, parametrized by the emission measure and electron temperature. The map containing the best-fit parameters is COM_CompMap_freefree-commander_0256_R2.00.fits.
Spinning dust, two templates derived from SpDust2 (Ali-Haïmoud et al., 2009, Ali-Haïmoud 2010, Silsbee et al., 2011), each with a separate amplitude; the first template has a spatially varying peak frequency, while the second has a spatially constant peak frequency. The parameter file containing the best-fit parameters in this case is COM_CompMap_AME-commander_0256_R2.00.fits.
Thermal dust, three-parameter (including the amplitude) modified blackbody model, parametrized by the emissivity index and dust temperature. There are three available best-fit parameter maps for this component: The Nside=256 map from the 32-band solution, COM_CompMap_dust-commander_0256_R2.00.fits, and two high-resolution parameter maps from a run using only the Planck data at 143 GHz and above: The temperature + polarization Nside=1024 map, COM_CompMap_DustPol-commander_1024_R2.00.fits, and the temperature-only Nside=2048 map, COM_CompMap_ThermalDust-commander_2048_R2.00.fits.
SZ, the thermal Sunyaev-Zeldovich effect, parametrized by y_sz. The best-fit parameters of this component are contained in COM_CompMap_SZ-commander_0256_R2.00.fits.
Line emission, the CO lines (3→2, 2→1, 1→0) and the 94/100 GHz line emission signals. These are only available for the frequencies at which they make a contribution: for CO 1→0 this is the 100 GHz channel and its sub-detectors, for CO 2→1 the 217 GHz channel and its sub-detectors, and for CO 3→2 the 353 GHz channel and its sub-detectors. The 94/100 GHz emission is likewise available for the 100 GHz detectors. In the case of CO emission, the file containing the best-fit parameters is COM_CompMap_CO-commander_0256_R2.00.fits, while for 94/100 GHz the file is COM_CompMap_xline-commander_0256_R2.00.fits.
For some of the above components (free-free, thermal dust, and SZ) we give the user the possibility to replace the best-fit parameters from Commander by a single global value.
Once the component maps are in the correct unit, they, along with the target map, are transformed into the harmonic domain. If more than one component map is chosen, they are added together, and the sum is subtracted from the target map. The user can then choose to smooth the final map before projecting it onto the desired resolution.
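The harmonic-space pipeline above can be sketched with a toy model in which each map is represented by one coefficient per multipole (a real implementation would operate on full healpy alm arrays; the function names here are illustrative assumptions, not the PLA code):

```python
import numpy as np

def gauss_beam(lmax, fwhm_rad):
    # Gaussian beam transfer function b_l = exp(-l(l+1) sigma^2 / 2),
    # with sigma = FWHM / sqrt(8 ln 2)
    sigma = fwhm_rad / np.sqrt(8.0 * np.log(2.0))
    ell = np.arange(lmax + 1)
    return np.exp(-0.5 * ell * (ell + 1) * sigma**2)

def subtract_components(target_alm, component_alms, lmax,
                        comp_fwhm=None, target_fwhm=None):
    # Smooth each extrapolated component ("Smoothing->Component"), sum them,
    # subtract the sum from the target, then optionally smooth the result
    # ("Smoothing->Target").
    total = np.zeros_like(target_alm)
    for alm in component_alms:
        if comp_fwhm is not None:
            alm = alm * gauss_beam(lmax, comp_fwhm)
        total = total + alm
    result = target_alm - total
    if target_fwhm is not None:
        result = result * gauss_beam(lmax, target_fwhm)
    return result
```

In the real tool the final step would be an inverse harmonic transform onto the chosen Output NSIDE.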
The output file will be of the same format as the input, except that it will have had the selected components subtracted from it, and will be of the resolution chosen by the user.
Factor Computations
The unit conversion, colour correction, and bandpass transformation functionalities all have in common that they calculate a factor and multiply either a full map or a map cutout by that factor. They also provide the option to calculate an error estimate of this factor, in the cases where the bandpass responses have corresponding uncertainty values; only the HFI bandpasses have these, so for LFI bandpasses this functionality is disabled. The uncertainty is estimated by generating Monte Carlo samples, following the same procedure outlined here. These values, when this option is selected, are included in the header of the resulting map file.
Unit Conversion and Colour Correction
The unit conversion and colour correction functionalities are interfaces to the uc_cc code used internally in Planck, and whose documentation can be found here.
Bandpass Transformation
The bandpass transformation functionality follows the same structure as the unit conversion and colour correction functionalities, though the goal here is slightly different. Defining the monochromatic flux density S(ν0) through

    ∫ R(ν) S(ν) dν = S(ν0) ∫ R(ν) f(ν) dν,

where R is the bandpass response of the bandpass in question, S is the actual flux density of the source, and f is the spectral behaviour of the reference spectrum (e.g. ν/ν0 in the case of the IRAS convention), the goal of bandpass transformation is to find the monochromatic flux density for the same source and reference spectrum given a different bandpass response. Since the ratio of the left-hand and right-hand sides of the equation above should be 1 for any bandpass, we have

    S2(ν0) / S1(ν0) = [∫ R2(ν) S(ν) dν / ∫ R2(ν) f(ν) dν] × [∫ R1(ν) f(ν) dν / ∫ R1(ν) S(ν) dν].
This equation forms the basis of the bandpass transformation algorithm. In order to use the functionality, it is therefore necessary to specify both the reference spectrum and the assumed spectral behaviour of the source. The reference spectra are either "thermodynamic", "iras", "ysz", or "brightness" (Rayleigh-Jeans or antenna temperature). The assumed spectral behaviours are either "powerlaw", a spectrum obeying (in brightness units)

    S(ν) ∝ (ν/ν0)^α,

or "modified blackbody", a spectrum obeying (in brightness units)

    S(ν) ∝ (ν/ν0)^β B(ν, T),

where B(ν, T) is the Planck function.
Uncertainty estimation is performed in the same way as in uc_cc: given uncertainty values of the bandpass response, we can generate Monte Carlo samples by repeatedly simulating bandpass responses and calculating the bandpass transformation factor for those responses. The standard deviation of these samples is then reported as the uncertainty of the factor.
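The factor and its Monte Carlo uncertainty can be sketched as follows (the function names and the simple trapezoidal quadrature are assumptions for illustration, not the uc_cc implementation):

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal quadrature (keeps the sketch numpy-version independent)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bp_factor(freq, r1, r2, source, ref):
    # S(nu0) = int(R S dnu) / int(R f dnu) for each bandpass; the
    # transformation factor is the ratio of this quantity for R2 versus R1.
    def mono(r):
        return _trapz(r * source, freq) / _trapz(r * ref, freq)
    return mono(r2) / mono(r1)

def bp_factor_error(freq, r1, s1, r2, s2, source, ref, nsamp=500, seed=0):
    # Monte Carlo uncertainty: perturb each response by its 1-sigma errors
    # and report the standard deviation of the resulting factors.
    rng = np.random.default_rng(seed)
    draws = [bp_factor(freq,
                       r1 + s1 * rng.standard_normal(freq.size),
                       r2 + s2 * rng.standard_normal(freq.size),
                       source, ref)
             for _ in range(nsamp)]
    return float(np.std(draws))
```

With identical input and output bandpasses the factor is exactly 1, which is a useful sanity check.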
Masking
This functionality gives the user the possibility to mask away areas of a sky map using either a PLA mask, a mask generated from source catalogues, or a custom uploaded mask. The basic masking operation is straightforward: for every masked pixel, the corresponding pixel in the target map is set to zero.
The first option is using a predefined mask stored in the PLA, all of which can be found using the dropdown tool. Each mask file can contain several columns, each typically representing a different threshold value chosen for the mask.
The second option is generating a mask from one or more source catalogues. The user will then choose the source catalogues to use.
The user then selects the radius around each source that they want to mask away. There are two ways to define this radius: either a fixed radius, defined by a value and a unit, or a dynamic radius that depends on the signal-to-noise ratio of the source measurement. The latter is the algorithm that was used in the generation of the LFI masks. The equation used involves the ratio amp/noise and the beam FWHM: amp and noise are columns from the source catalogue and describe a flux measure and the uncertainty connected to that flux, respectively, while the FWHM is supplied by the user.
After selecting the radius around each source, the user can define filters which remove sources that do not meet certain criteria. There are three types of filters: (a) those that remove sources where a given column is greater/less than a given value; (b) those that remove sources where the ratio of two given columns is greater/less than a given value; and (c) those that remove sources where a flag is set to False.
Several such filters can be created, and each of them is applied to a given source catalogue.
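The fixed-radius option can be sketched as a brute-force angular-separation test over pixel centres (a real implementation would use HEALPix disc queries; the function name and argument layout are illustrative):

```python
import numpy as np

def mask_sources(lon, lat, src_lon, src_lat, radius_rad):
    # Mark every map pixel whose angular separation from any catalogue
    # source is within the fixed radius (haversine formula; all angles
    # in radians).
    masked = np.zeros(lon.size, dtype=bool)
    for sl, sb in zip(src_lon, src_lat):
        sep = 2.0 * np.arcsin(np.sqrt(
            np.sin((lat - sb) / 2.0) ** 2 +
            np.cos(lat) * np.cos(sb) * np.sin((lon - sl) / 2.0) ** 2))
        masked |= sep <= radius_rad
    return masked
```

Catalogue filters of types (a)-(c) above would simply shrink the src_lon/src_lat lists before this step.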
The third option is uploading a custom mask. This mask must be a FITS file containing a single extension (excepting the primary extension), and the first column of this file will be used. The mask must be in HEALPix format, and if the NSIDE and/or ORDERING parameters of this mask are different from those of the target map, the mask will be automatically converted before application (when downgrading the mask, we perform a simple average of the subpixels to determine whether the superpixel should be masked; if its value is >= 0.5, it will be). The FITS file can be zipped as a tar.gz file.
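The downgrading rule can be sketched as follows, assuming NESTED ordering (so that the children of each superpixel are contiguous) and the convention that a mask value of 1 means "masked"; both are assumptions of this sketch:

```python
import numpy as np

def downgrade_mask(mask, factor):
    # In NESTED ordering the factor**2 children of each superpixel are
    # contiguous, so a reshape groups them; the superpixel is masked
    # (value 1) when the mean of its children is >= 0.5.
    children = np.asarray(mask).reshape(-1, factor * factor)
    return (children.mean(axis=1) >= 0.5).astype(int)
```

A production version would use a HEALPix library routine (e.g. ud_grade-style regridding) and handle RING ordering by converting first.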
The output file will contain the input map, with the pixels indicated by the chosen mask set to zero. The mask itself will also be appended as a separate column at the end of the FITS extension.
Noise Map Cutout
The purpose of this functionality is to generate a noise map cutout from one of the simulated noise realizations available in the PLA, using the same cutout settings (region of interest, resolution, and rotation angle) used to generate the associated frequency map cutout, at the same frequency and for the same time coverage.
Effective Beam Average
The purpose of this functionality is to provide the user with an average over the effective beams for a map cutout. Given a cutout, the algorithm takes the beams whose centre pixels lie at the four corners of the cutout, as well as at the central pixel of the cutout, and then performs a weighted average of these five beams, where the weights are given by the number of hits in the centre pixel of each beam. The effective beams are defined in this document.
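The weighted average can be sketched as follows (the function name and array layout are assumptions for illustration):

```python
import numpy as np

def effective_beam_average(beams, centre_hits):
    # Weighted average of the five effective beams (four cutout corners
    # plus the centre), weighted by the hit count at each beam's centre
    # pixel.
    w = np.asarray(centre_hits, dtype=float)
    stack = np.asarray(beams, dtype=float)   # shape (5, ny, nx)
    return np.tensordot(w, stack, axes=1) / w.sum()
```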
Map Making from Time-ordered data
The majority of the maps ingested in the PLA are those generated by the Planck Data Processing Centres (DPCs). In addition to maps, the PLA also provides access to Time-ordered data, which can be used to generate new maps. The new maps will be different from those in the PLA since the map-making method is different, and the selection of Time-ordered data may be different. We caution the user that the map-making methods used by PLA are much less sophisticated than the ones used in the Planck DPCs. The new maps generated should only be used as a rough initial approximation, for example to gauge the effect of deselecting part of the input time-ordered data.
Two map-making tools are provided by the PLA: the first, ring-based map making, provides an interface to the ring-based map-making software of Keihänen et al. (in preparation).
The option ‘Remove temperature monopole’ subtracts the average value of the map pixels from all pixels.
The output from the ring-based map making is three FITS files each containing one or more full-sky HEALPix maps: The map itself (I or IQU), the hit map (number of observations in each pixel) and the white noise covariance matrix (either 1 column if only temperature is selected, or six columns containing all IQU correlations if polarization is selected as well).
The second map-making option, called ‘Baseline-removed pixel averaging’, is a timeline-based mapmaker. It basically consists of binning individual samples into HEALPix pixels, and is therefore the simplest possible map-making. The user selects a region or timespan, along with instrumental parameters, and the timelines corresponding to the parameters chosen are processed in the following way (this assumes both temperature and polarization maps are requested; the temperature-only case is analogous, but simpler):
We first define d and d² as the vectors whose elements d_i and (d²)_i = (d_i)² are the baseline-subtracted signal observed at sample i and its square, respectively (to be clear, d² is the element-wise square of d, not a matrix product). These will be nsample-sized vectors.
In order to baseline-subtract the signal, we follow two different approaches for LFI and HFI, respectively:
LFI: In this case the subtraction is simple: Each signal timeline comes with a corresponding baseline, and we simply subtract the offset for the given detector from the signal for that detector.
HFI: In this case the offset data for all timelines is stored in a single file, and we must use the metadata of the signal timeline to find the offset array that corresponds to that timeline. The way this is done is by first extracting the 'BEGRING', 'ENDRING', 'BEGIDX' and 'ENDIDX' keywords from the signal timeline, and comparing these with the data in HFI_ROI_DestripeOffsets_R2.02_full.fits and HFI_ROI_GlobalParams_R2.02_full.fits in the following way:
First the 'start ring index' is found by subtracting the 'BEGRING' of the global parameter file from the 'BEGRING' of the science timeline. The 'end ring index' is similarly found by subtracting the 'BEGRING' of the global parameter file from the 'ENDRING' of the science timeline.
For each ring index between the start and end ring indices, we locate the sample indices inside the given ring by fetching all sample indices between ring index and ring index + 1. This is done by accessing the 'BEGIDX' field of the global parameter data at ring index and at ring index + 1. We then map this set of sample indices to the actual offset data for that ring, which we do by going into the DestripeOffsets file and accessing the offsets at the given indices (belonging to that specific ring). We do this by accessing the ring index'th row of the DestripeOffsets data, also taking care to use the right detector name to access the correct field. We use the first element of the resulting array, which corresponds to the full-map offset (as opposed to the first and second half-ring offsets, which are the other two elements of that array).
We then have the offset data for all rings between BEGRING and ENDRING (in the signal timeline) mapped to a sample index. We can then access the offset data corresponding to the sample indices we actually need, namely those between BEGIDX and ENDIDX of the signal timeline. From these we create the final offset array and subtract it from the signal timeline.
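The ring-index bookkeeping described above can be sketched as follows (the argument names are illustrative stand-ins for the FITS fields; this is not the PLA implementation):

```python
import numpy as np

def hfi_offsets_for_timeline(begring, endring, begidx, endidx,
                             glob_begring, glob_begidx, destripe_offsets):
    # 'start/end ring index' relative to the global parameter file
    start = begring - glob_begring
    end = endring - glob_begring
    # one offset value per sample between BEGIDX and ENDIDX
    offsets = np.empty(endidx - begidx)
    for ring in range(start, end + 1):
        # samples in this ring: [BEGIDX(ring), BEGIDX(ring + 1))
        lo, hi = glob_begidx[ring], glob_begidx[ring + 1]
        off = destripe_offsets[ring][0]   # element 0 = full-map offset
        a = max(lo, begidx) - begidx
        b = min(hi, endidx) - begidx
        if b > a:
            offsets[a:b] = off
    return offsets
```

The returned array is then subtracted element-wise from the signal timeline.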
We then define the pointing matrix P as the matrix where the entry corresponding to sample i and pixel p is given by

    P_ip = δ(i, p) · (1, cos 2ψ_i, sin 2ψ_i),

where δ(i, p) is a function that is 1 if sample i falls within pixel p and 0 otherwise, and ψ_i is the pointing angle of sample i. This means that inside each element of P is embedded a 3-dimensional space containing the pointing information for that element.
We then create the diagonal matrix M = PᵀP, where now each diagonal element of M is a 3×3 matrix. Each diagonal element M_pp thus looks like this (we let T be the set of all timelines, and n(t) be the function giving the number of samples in timeline t):

    M_pp = Σ_{t∈T} Σ_{i=1}^{n(t)} δ(i, p) ×
           [ 1           cos 2ψ_i             sin 2ψ_i            ]
           [ cos 2ψ_i    cos² 2ψ_i            cos 2ψ_i sin 2ψ_i   ]
           [ sin 2ψ_i    cos 2ψ_i sin 2ψ_i    sin² 2ψ_i           ]
The map, m, in pixel p will then be given by

    m_p = (M_pp)⁻¹ (Pᵀ d)_p,

where all the matrix operations now happen in the 3-dimensional sub-space mentioned above, except Pᵀ, which projects the nsample-sized vector onto the npix-sized space.
Further, the rms of pixel p is given by

    σ_p = sqrt( [ (M_pp)⁻¹ (Pᵀ d²)_p − m_p² ] / n_p ),

where again all matrix operations except Pᵀ happen in the 3-dimensional sub-space, m_p² is taken element-wise, and n_p is the number of samples binned into pixel p. This quantity is given by the (1, 1) element of M_pp.
For temperature only, the 3-dimensional sub-space is reduced to 1 dimension, and all operations in that space reduce to simple scalar operations - i.e. weighted averaging of all the timeline samples falling in each pixel.
The output from this mapmaker is m, σ, and the hit map n.
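The binning scheme above can be sketched as follows (a minimal per-pixel accumulation of the 3×3 systems; the hit-count guard against singular pixels is a simplification, and the function name is illustrative):

```python
import numpy as np

def bin_map(pix, psi, signal, npix):
    # Accumulate M_pp = sum_i a_i a_i^T and b_p = sum_i a_i d_i, with
    # a_i = (1, cos 2psi_i, sin 2psi_i), then solve m_p = M_pp^{-1} b_p.
    M = np.zeros((npix, 3, 3))
    b = np.zeros((npix, 3))
    for p, ang, d in zip(pix, psi, signal):
        a = np.array([1.0, np.cos(2 * ang), np.sin(2 * ang)])
        M[p] += np.outer(a, a)
        b[p] += a * d
    m = np.zeros((npix, 3))
    hits = M[:, 0, 0]              # n_p is the (1,1) element of M_pp
    for p in range(npix):
        if hits[p] >= 3:           # need >= 3 well-spread angles per pixel
            m[p] = np.linalg.solve(M[p], b[p])
    return m, hits
```

For temperature only, a_i reduces to the scalar 1 and the solve becomes a plain weighted average of the samples in each pixel.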
Component Separation
Component separation is the process of estimating the contribution of a specific source of emission to an observed map. The PLA provides a number of all-sky maps of components which have been derived from Planck observations by the Planck Data Processing Centres. These maps have been generated using specific algorithms optimized in certain ways, using certain input data, and targeted to produce all-sky maps. The PLA also allows the user to generate new estimates of components using Planck observations.
The PLA offers two methods to carry out component separation: an Internal Linear Combination algorithm (ILC), and a parametric model maximum likelihood algorithm. The ILC method allows the user to estimate the CMB only, whereas the parametric method allows the user to estimate a set of sources defined by model SEDs. Both are very simplified versions of the DPC algorithms, but may offer certain advantages, as they allow the user to select the input data and to target small regions of the sky. Both can be applied either to the entire sky or to a specified cutout.
ILC component separation
The ILC component separation algorithm follows the standard ILC procedure (e.g. Bennett, C. L. et al. 2003b, ApJS, 148, 97): assume that for a map m_c observed at a given channel c we have

    m_c = s + n_c,

where s is the CMB contribution, assumed to be independent of channel as long as our maps are in thermodynamic units, and n_c is the non-CMB contribution (foregrounds and noise) in that channel. We then form the combination

    m_ILC = Σ_c w_c m_c,

where w_c is some weight applied to that particular channel, under the constraint that

    Σ_c w_c = 1.

With this constraint,

    m_ILC = s + Σ_c w_c n_c,

meaning that if we can choose weights that minimize the second term, we end up with a map that is close to the CMB contribution. The weights that minimize its variance are given by

    w = C⁻¹ 1 / (1ᵀ C⁻¹ 1),

where C is the sample covariance matrix of the maps.
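The weight computation can be sketched as follows (assuming the maps have already been converted to thermodynamic (K_CMB) units; the function names are illustrative):

```python
import numpy as np

def ilc_weights(maps):
    # maps: (nchan, npix) array in thermodynamic units.
    # w = C^-1 1 / (1^T C^-1 1), with C the channel-channel sample
    # covariance of the maps.
    C = np.cov(maps)
    cinv_one = np.linalg.solve(C, np.ones(maps.shape[0]))
    return cinv_one / cinv_one.sum()

def ilc_map(maps):
    # Unit weight sum preserves the channel-independent CMB term.
    return ilc_weights(maps) @ maps
```

Channels with less foreground and noise contamination naturally receive larger weights.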
If polarization is enabled, there will be two sets of weights: one for the I field, and one for the Q and U fields. Only those selected maps that have a polarization component will be used for calculating the Q and U weights.
The user can select a number of Planck maps and a selection of external maps, as well as a target resolution and degree of smoothing. The function will convert the maps to thermodynamic temperature and smooth all maps in harmonic space with the target smoothing FWHM before projecting them onto a map with the target Nside. This is done regardless of whether the maps have different resolutions, and it is recommended to choose an FWHM that corresponds to the lowest-resolution map (or lower). The system does not attempt to sanity-check the FWHM that the user prescribes.
The output from the ILC functionality is a file containing, as its first 1, 2, or 3 columns (depending on whether I, QU, or IQU was chosen, respectively), the ILC solution. The subsequent columns contain the residuals of each of the input maps, defined as the ILC solution subtracted from the input map at each channel (after the input maps have been converted to K_CMB). The number of residual columns is equal to n × (1, 2, or 3), where n is the number of input maps and the last factor again depends on whether I, QU, or IQU was chosen.
Parametric model maximum likelihood component separation
This method uses the 32-band temperature data and parametric model described in the 2015 diffuse components paper. The user can select which of the 32 bands and which of the parametric components they wish to include in the analysis. They also must choose a region in which to perform the analysis.
Using these data we can define a chi-squared for each pixel p as follows:

    χ²_p = Σ_b (d_{b,p} − m_{b,p}(θ))² / σ²_{b,p},

where m_{b,p}(θ) is the model evaluated at band b and pixel p, and d_{b,p} and σ_{b,p} are the signal observed by that band and the rms corresponding to that band, respectively.
We start from the parameter results from the abovementioned paper and run a Powell minimization algorithm to minimize the chi-squared until we have found parameter values for all pixels in the target region.
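A per-pixel fit of this kind can be sketched with SciPy's Powell minimizer (the two-parameter power-law model and band set below are illustrative placeholders, not the Commander model):

```python
import numpy as np
from scipy.optimize import minimize

def pixel_chisq(theta, d, sigma, model):
    # chi^2_p = sum_b (d_b - m_b(theta))^2 / sigma_b^2
    return np.sum((d - model(theta)) ** 2 / sigma ** 2)

def fit_pixel(theta0, d, sigma, model):
    # Powell minimization, started from the published best-fit values
    res = minimize(pixel_chisq, theta0, args=(d, sigma, model),
                   method="Powell")
    return res.x
```

In the real tool this loop runs over every pixel in the target region, with global parameters, monopoles, and dipoles held fixed.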
Global parameters are fixed to their best-fit values from the 2015 paper, as are monopole and dipole values.
The output from this functionality is the best-fit parameter values of the components that the user has chosen to include, as well as the chi-squared value in each pixel. The user can also elect to receive the residual maps (the data minus the best-fit model) in a single file. As the data are given at Nside=256 and smoothed to 60 arcmin, the output residual maps and parameters will also have this resolution, although they will be reprojected onto the cutout the user has chosen.
Note that results from this will likely be poorer in regions of high activity where we have a limited understanding of the physical processes, such as close to the galactic center. See the masks and maps in the abovementioned paper to get an idea of which regions might be less contaminated and thus give better results.
Bandpass Leakage Correction
The purpose of this functionality is to add or subtract the correction templates available in the PLA from the associated frequency maps, to "apply" or "remove" corrections that were introduced by the Planck Data Processing Centres, mainly to correct for systematic effects. This functionality is intended for advanced users of the PLA; the Planck Collaboration recommends not using uncorrected maps for science analysis unless the user fully understands the impact of removing these corrections. Once the user selects the correction map to be applied or removed, based on the type of correction, frequency, and mission coverage, the system automatically determines to which map the correction should be applied, or from which map it should be removed, and generates the resulting (corrected or uncorrected) map.
Custom Bandpasses
There are several functionalities that allow the user to upload custom bandpasses: unit conversion, colour correction, bandpass transformation, ILC component separation, and component subtraction. These bandpasses must be ASCII files containing two or three columns. The first column contains the frequency at which the response is defined, given in Hz. The second contains the actual bandpass response, and the optional third column contains the uncertainty of the bandpass response, assumed to be one standard deviation. If this column is present, an uncertainty estimate will be provided (for those functionalities that provide such an estimate).
The currently available bandpasses are Planck, WMAP and Herschel SPIRE.
The SPIRE PLW and PMW bandpasses are based on the data found here. Note that the bandpasses we currently possess were created using lambda-squared weighting, which the SPIRE team has since changed, so these do not correspond to the current state of the SPIRE bandpasses. The WMAP bandpasses are those delivered by the Commander team from their 32-band run for the 2015 diffuse components paper.
A custom bandpass should be:
- A text file
- Comment lines must start with a # character
- Comments are allowed only at the top of the file
- Column headers are not needed
- The columns can be space- or tab-separated
- The columns do not need to be aligned vertically
- The structure is a two- or three-column file with the columns: frequency, response, and (optionally) standard_error
- To prevent abuse of the system, the file may contain at most 50,000 lines
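A parser obeying these rules can be sketched as follows (an illustration, not the PLA's actual validator):

```python
import numpy as np

def parse_bandpass(lines):
    # Accept '#' comment lines at the top, then 2 or 3 whitespace-separated
    # columns: frequency [Hz], response, and optional 1-sigma uncertainty.
    rows = []
    for n, line in enumerate(lines):
        if n >= 50000:
            raise ValueError("file exceeds the 50,000-line limit")
        if line.lstrip().startswith("#") or not line.strip():
            continue
        rows.append([float(x) for x in line.split()])
    arr = np.array(rows, dtype=float)
    if arr.ndim != 2 or arr.shape[1] not in (2, 3):
        raise ValueError("expected 2 or 3 columns")
    return arr
```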
Planck Sky Model
The current implementation of the Planck Sky Model in the PLA uses PSM v2.0.7, released 3 July 2017.
The Planck Sky Model is an asynchronous operation, meaning that the job is submitted to the PLA archive and an email is sent to the user when the job is completed, with links to pick up the results. The waiting time is not fixed in advance; it depends on the complexity of the submitted job and on the current load of the PLA backend.
The execution of the PSM follows a simple two-step process that can be summarized in the following diagram.
The user basically has three execution paths:
- Generate a new Sky only
- Generate a new Sky and perform a Sky Observation or
- Use an existing Sky Map and perform a Sky Observation (currently not available).
Generate a New Sky
In this step a Planck Sky is generated according to the specifications of the user.
If this is the only step in the PSM execution path (that is, it is not followed by a Sky Observation), the generated Sky will be saved, for later use by any user of PLA. Saved PSM skies have a minimum lifespan of one week, after which they are removed automatically by the system.
The purpose of the saved Sky Maps, is to give the user the opportunity to perform multiple Sky Observations (in subsequent PSM executions) without having to recreate the same Sky Map each time.
For the generation of the new Sky the user is asked to fill in the following parameters:
- Info and Control
- Precision: Floating point precision of the output maps
- Fields: Model and process temperature only or both temperature and polarisation
- Seed: Specify the seed for random number generation
- Sky Pixel Window: Whether sky maps are sampled at pixel centers (0) or averaged over pixel areas (1)
- Sky Model Parameters
- Sky Resolution: Resolution of sky maps in arc-minutes (Gaussian)
- Sky LMAX: Specify the harmonic band limit of the PSM simulation
- HEALPix Nside: Nside parameter for sky HEALPix maps
- Cosmological Parameters
- T CMB: CMB temperature (Kelvin)
- Hubble H100: Reduced Hubble parameter at the present time (h = H0/100, with H0 in km/s/Mpc)
- ωm: Matter density parameter
- ωb: Baryonic matter density parameter
- ωk: Curvature parameter Ωk. This sets the dark energy density parameter as ΩDE = 1 − Ωk − Ωm
- σ8: Amplitude of matter perturbations at the scale of 8 h⁻¹ Mpc
- ns : Scalar spectral index ns of primordial fluctuations
- ns running: Running of the scalar spectral index ns
- nt: Tensor spectral index nt of primordial fluctuations
- nt running: Running of the tensor spectral index nt
- r: Tensor to scalar ratio (primordial power at kpivot)
- TAU reion: Reionisation optical depth
- He fraction: Helium fraction (by mass)
- nmassless nu: Number of massless (i.e. relativistic) neutrino species
- W dark energy: w parameter for the equation of state of dark energy
- K pivot scalar: The comoving scale kpivot (in Mpc⁻¹) at which the amplitudes of the initial scalar and tensor power spectra are defined
- A: Amplitude of scalar modes
- n non-cold dm: Number of non-cold dark matter (e.g. massive neutrino) species
- M ncdm: Mass of each non-cold dark matter species
- T ncdm: Temperature of each non-cold dark matter species
- CMB, Lensing, and Cosmic Structure Power Spectra Parameters
- CMB CL Source: Source for CMB Cl and lensing potential
- Generate CAMB outputs: Whether to compute CMB and matter power spectra using CAMB
- Generate CLASS outputs: Whether to compute CMB and matter power spectra using CLASS
- Do Mattershells: Whether to compute matter shells
- Matter LMax: Band limit for Large Scale Structure
- Low Redshift NBins: Number of redshift shells between 0.01 and 0.1 for matter density perturbations with CLASS
- Medium Redshift NBins: Number of redshift shells between 0.1 and 1 for matter density perturbations with CLASS
- High Redshift NBins: Number of redshift shells between 1 and 5 for matter density perturbations with CLASS
- Non Diagonal Mattershells: How many non-diagonal terms to include for the correlation between adjacent redshift shells
- Class Non Linear: Estimate of the non-linear P(k) and Cls for CLASS
- Model Selection
- CMB Monopole
- Mean primordial y: Value of the mean primordial y parameter
- Mean primordial mu: Value of the mean primordial mu parameter
- Randomize: Whether to launch a generic or prediction model
- CMB Dipole
- Dipole glon: Dipole galactic longitude (degrees)
- Dipole glat: Dipole galactic latitude (degrees)
- Dipole ampl: Dipole amplitude (mK CMB)
- Randomize: Whether to launch a generic or prediction model
- Dipole glon error: Uncertainty (1σ) on dipole galactic longitude (degrees)
- Dipole glat error: Uncertainty (1σ) on dipole galactic latitude (degrees)
- Dipole ampl error: Uncertainty (1σ) on dipole amplitude (mK CMB)
- Save dipole map: Whether to save the dipole map (in addition to alm) or not
- CMB Anisotropies
- CMB model:
- Prediction: the CMB map is derived from a map obtained from WMAP 5-year data using a needlet ILC component separation method.
- Gaussian: the CMB map produced by the PSM is a random realisation of CMB harmonic coefficients alm, drawn according to Gaussian statistics defined by an input CMB power spectrum (temperature and polarisation).
- Gaussian with shells: a Gaussian CMB for which the matter shells must also be computed
- nongaussian fnl: The PSM can produce simulated CMB maps with non-Gaussianity of the local type. Such CMB realisations have been precomputed and are stored in the PSM data repository, either at lmax=1024 (1000 realisations) or at lmax=3500 (100 realisations). The non-Gaussian CMB model assumes a linear-plus-quadratic model for Bardeen's gauge-invariant curvature potential, Φ = φ + fnl (φ² − ⟨φ²⟩), where φ is Gaussian and the contribution of the quadratic term is set by the single parameter fnl
- The gaussian part should be extended at high ell: Whether to add gaussian fluctuations at l > 3500
- Renormalise the spectrum of the maps: Whether the power spectrum of the simulated non-gaussian template should be readjusted to match the expectation for the input cosmological parameters
- The non-linear factor fnl: Whether the value of fnl is specified or drawn at random
- Whether the map used is drawn at random among available maps: If not, a specified map number is used
- CMB Lensing: Lensing of the CMB by large-scale structure generates small shifts of the CMB temperature and polarisation patterns on the sky. This in turn changes the power spectra of the temperature and polarisation anisotropies.
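The "Gaussian" CMB model above draws harmonic coefficients alm from an input power spectrum. A minimal numpy-only sketch of such a draw (real PSM runs use the full temperature and polarisation covariance and HEALPix tooling; the flat toy spectrum here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_alm(cl):
    """Draw Gaussian a_lm with <|a_lm|^2> = C_l.

    m = 0 is real; m > 0 coefficients are complex, with the variance C_l
    split equally between the real and imaginary parts.  cl[l] is the
    power spectrum; returns a list indexed by l, each entry holding the
    m = 0..l coefficients."""
    alm = []
    for ell, c in enumerate(cl):
        sigma = np.sqrt(c)
        a_m0 = rng.normal(0.0, sigma)
        re = rng.normal(0.0, sigma / np.sqrt(2.0), ell)
        im = rng.normal(0.0, sigma / np.sqrt(2.0), ell)
        alm.append(np.concatenate([[a_m0], re + 1j * im]))
    return alm

cl = 1e-2 * np.ones(64)          # toy flat spectrum for l = 0..63
alm = synthesize_alm(cl)

# Realised spectrum: C_hat_l = (|a_l0|^2 + 2 sum_{m>0} |a_lm|^2) / (2l + 1)
c_hat = [(abs(a[0])**2 + 2 * np.sum(np.abs(a[1:])**2)) / (2 * ell + 1)
         for ell, a in enumerate(alm)]
```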
- SZ Emission
- SZ Model:
- prediction: The SZ prediction model includes only expected signals from the clusters included in the catalogues specified with the SZ_INPUT_CAT parameter. The parameters CLUSTER_PROFILE, NORM_PROFILE, PROFILE_BOUNDS and CLUSTER_T_STAR are active for this model. This model generates only thermal SZ effect.
- dmb: The SZ dmb model first generates a catalogue of galaxy clusters according to the mass function specified by the MASS_FUNCTION parameter. For each cluster, the expected SZ signal is computed from a physical model linking mass and redshift to electron density and temperature, using the spherically symmetric profile specified with the CLUSTER_PROFILE parameter. Clusters are distributed at random over the full 4π of the sky, with uniform probability. Each cluster is assigned a velocity as a function of its redshift (assuming linear growth of structures); the 3-D velocity vector is drawn at random given the variance of the velocity field at that redshift, for the given cosmological parameters. This model accepts two additional parameters: SZ_CONSTRAINED and SZ_INCLUDE_POLARISED.
- Include the thermal SZ: Whether to include thermal SZ effects in the sky model
- Include the kinetic SZ: Whether to include kinetic SZ effects in the sky model
- Include the polarised SZ: Whether to include polarised SZ effect
- Relativistic corrections order: Order of relativistic corrections to the thermal SZ effect
- Use density shells: Whether clusters are in CLASS redshift shells.
- Cluster mass bias factor: Parameter that connects the X-ray mass estimate used in scaling relations to the true mass of the cluster (used in the mass function)
- Mass function dN/dMdz: The mass function used to generate the catalogue
- Lowest mass of included clusters: Lower mass limit of clusters included in the catalogue, in units of 10¹⁵ solar masses
- Highest mass of included clusters: Upper mass limit of clusters included in the catalogue, in units of 10¹⁵ solar masses
- Input catalogues: List of catalogues of known clusters to be included in the model sky emission
- Catalogue contains only known clusters: Whether the catalogue contains real observed clusters
- Contamination by point sources: Whether to include contamination by point sources
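The thermal SZ effect included above has a characteristic frequency dependence: a decrement below roughly 217 GHz and an increment above it. A sketch of the standard non-relativistic spectral function g(x) = x coth(x/2) − 4, with x = hν/(k T_CMB):

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
K_BOLTZ = 1.380649e-23      # J / K
T_CMB = 2.725               # K

def tsz_spectral_function(freq_ghz):
    """Non-relativistic thermal SZ frequency dependence g(x) = x*coth(x/2) - 4,
    with x = h*nu / (k*T_CMB).  The thermal SZ distortion is
    DeltaT/T = g(x) * y; the sign change near 217 GHz is why the effect is
    a decrement at low frequencies and an increment at high frequencies."""
    x = H_PLANCK * freq_ghz * 1e9 / (K_BOLTZ * T_CMB)
    return x / np.tanh(x / 2.0) - 4.0

print(tsz_spectral_function(100.0) < 0)   # decrement below ~217 GHz → True
print(tsz_spectral_function(353.0) > 0)   # increment above ~217 GHz → True
```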
- Galactic emission
- Galactic model: There are at present two models of galactic emission, the prediction and simulation models. Both are based on the same input galactic templates, but the simulation model generates random small-scale structure that is added to the synchrotron, free-free, and thermal dust templates when the resolution of the PSM run is finer than that of the available template.
- Galactic version: choice is among prelaunch and postlaunch
- Galactic Components:
- Synchrotron intensity template: Which synchrotron intensity template to use
- Synchrotron scaling: choice is among powerlaw and curvpowerlaw
- Synchrotron index model: choice is among mamd2008, giardino2002, uniform and random.
- Main free-free template map: Template free-free map
- Electron temperature: Temperature of free-free electrons (in K)
- Spinning dust:
- Spinning dust emission law: What emission law should be used to model spinning dust emission
- Cold Neutral Medium (CNM): Proportion of cold neutral medium for spinning dust emission
- Warm Neutral Medium (WNM): Proportion of warm neutral medium for spinning dust emission
- Warm Ionised Medium (WIM): Proportion of warm ionised medium for spinning dust emission
- Molecular clouds (MOL): Proportion of molecular clouds for spinning dust emission
- Dark component (DRK): Proportion of dark gas for spinning dust emission
- Reflection Nebulae (RN): Proportion of reflection nebulae for spinning dust emission
- SP_DUST extra component (EXTRA): Proportion of extra component for spinning dust emission
- Spinning dust polarisation fraction: Polarisation fraction of the spinning dust emission
- Thermal dust:
- Thermal dust model: Which dust model to use among template, ffp7, ffp8, ffp10
- Thermal dust temperature: In case of a dust model of 'template', specify the dust temperature in K
- Thermal dust spectral index: In case of a dust model of 'template', specify the dust spectral index
- Thermal dust amplitude: which dust template to use at 100 micron among SFD, SFDnoHII, FFP6-JD
- Thermal dust scaling: which dust scaling to use across frequencies among SFD-7, SFD-8
- Template used for modelling CO lines: Template used for modelling CO lines among Dame, Planck-v1
- Ratio between J=2-1 and J=1-0 lines: Ratio between the J=2-1 and J=1-0 line intensities (each in K.km/s)
- Ratio between J=3-2 and J=2-1 lines: Ratio between the J=3-2 and J=2-1 line intensities (each in K.km/s)
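The synchrotron scaling ("powerlaw") and the "template" thermal dust model above extrapolate templates across frequency. A sketch of the two standard emission laws; the reference frequency, spectral indices, and dust temperature below are illustrative values, not the PSM defaults:

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
K_BOLTZ = 1.380649e-23      # J / K
C_LIGHT = 2.99792458e8      # m / s

def synchrotron_scale(freq_ghz, ref_ghz=0.408, beta=-3.0):
    """Power-law scaling in brightness temperature,
    T(nu) = T(nu_ref) * (nu / nu_ref)**beta.
    The 408 MHz reference and beta = -3.0 are illustrative only."""
    return (freq_ghz / ref_ghz) ** beta

def modified_blackbody(freq_ghz, temp_k=18.0, beta=1.6):
    """Modified ('grey') blackbody SED nu^beta * B_nu(T), as in a one-component
    'template' dust model; the temperature and index defaults are assumptions."""
    nu = freq_ghz * 1e9
    x = H_PLANCK * nu / (K_BOLTZ * temp_k)
    b_nu = 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(x)
    return nu**beta * b_nu

# Doubling the frequency lowers synchrotron brightness by 2^-3 = 1/8.
print(round(synchrotron_scale(0.816), 6))  # → 0.125
```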
- PS emission
- PS model: There are two point source models implemented in the PSM: prediction and simulation. As many radio sources are variable, the prediction model comprises only infrared sources and ultra-compact HII regions, modelled on the basis of extrapolations of real IRAS sources. The simulation model comprises fake (faint) infrared sources to homogenise the IRAS coverage, and extrapolations of radio sources observed at frequencies ranging from 850 MHz to 4.85 GHz.
- Include radio sources: Whether to include radio sources in the sky model
- Include IR sources: Whether to include infrared sources in the sky model
- Use real sources: Whether to use real sources
- Strong Sources to cat: What to do with strong sources: make observed catalogue
- Strong sources to map: What to do with strong sources: make observed map
- Strong source limit freq. (GHz): Set of frequencies used to separate strong from faint sources
- Strong source limit flux (Jy): Flux limits in Jy, above which sources are considered as strong (must be a list of same size as the list of corresponding frequencies above)
- Mean IR Polar Degree: Typical degree of polarisation of the infrared sources
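The strong-source limit flux above splits the source population into "strong" sources (handled individually, e.g. written to a catalogue or map) and "faint" ones. A trivial sketch of such a flux cut; the 1 Jy default is illustrative, not the PSM default:

```python
import numpy as np

def split_strong_faint(flux_jy, limit_jy=1.0):
    """Split a source flux list into strong (>= limit) and faint (< limit),
    mirroring the strong-source limit flux parameter.  The 1 Jy default
    is an assumed value for illustration."""
    flux_jy = np.asarray(flux_jy, dtype=float)
    strong = flux_jy >= limit_jy
    return flux_jy[strong], flux_jy[~strong]

strong, faint = split_strong_faint([0.2, 5.0, 0.8, 1.5])
print(strong.tolist())  # → [5.0, 1.5]
print(faint.tolist())   # → [0.2, 0.8]
```

In the PSM, the limit is given per frequency (the list of limits must match the list of limit frequencies), so this cut would be applied once per entry of the frequency list.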
- FIRB emission
- FIRB model: There is at present one single model of emission for the far infrared background, due to a collection of blended high redshift infrared sources.
- Polarized: Whether the simulations will contain the polarised FIRB
- Mean Spheroid Polar Degree: Typical degree of polarisation of the proto-spheroids
- Mean Spiral Polar Degree: Typical degree of polarisation of the spirals
- Mean Starburst Polar Degree: Typical degree of polarisation of the starburst
- PLA: Planck Legacy Archive
- CMB: Cosmic Microwave Background
- HFI: (Planck) High Frequency Instrument
- LFI: (Planck) Low Frequency Instrument
- FITS: Flexible Image Transport System
- HEALPix: Hierarchical Equal Area isoLatitude Pixelization of a sphere<ref name="Template:Gorski2005">HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelmann, ApJ, 622, 759-771, (2005).</ref>
- ROI: Ring-Ordered Information (DMC group/object)
- DPC: Data Processing Center
- PSM: Planck Sky Model