Planck Added Value Tools Documentation

From Planck PLA 2015 Wiki. Revision as of 12:59, 28 February 2018.

General information about the functionality offered

The PLA offers several functionalities for the user to manipulate the data products it contains. Some of this functionality is intended to allow users to modify the products themselves, while other parts are intended to allow users to use the existing products to create new data.

These are broken down into two separate categories, based on the synchronous or asynchronous nature of their implementation. They are summarised here and explained in more detail in later parts of the Explanatory Supplement.

Synchronous Operations

The operations in the first category take place in real time (or near-real time), and the user gets their answer back in a matter of seconds.

These operations include:

  1. Component subtraction: Allowing a user to subtract certain physical components from a map.
  2. Unit conversion: Allowing a user to convert the units of a map.
  3. Colour correction: Allowing the user to convert a map to a different assumed spectral index.
  4. Bandpass transformation: Allowing the user to convert a map to what it would look like if observed with a different bandpass.
  5. Masking: Allowing a user to mask away parts of a map, using existing or user-defined masks.
  6. Bandpass leakage correction: Allowing the user to undo leakage corrections that were applied by the Planck team.

All of the above functionalities are available both for map cutouts and for full sky maps.

In the case of full-map operations, the responses are asynchronous (see the next section) because of the size and number of maps selected.

The user can select one or more of these functionalities to apply to one or more full maps, or a cutout. If several functionalities are selected, they are applied in the order above, i.e. component subtraction takes place first, then unit conversion, colour correction, bandpass transformation and finally masking. Note, however, that certain functionalities might invalidate others; for example, in order to perform colour correction, a map must be in (M)Jy/sr units, and if the user does not convert a map to this unit, colour correction will not take place.
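As an illustration of this fixed ordering and the invalidation rule, the selection logic can be sketched as follows. This is a minimal sketch; the operation names and the unit check are illustrative, not the PLA's actual code:

```python
# Hypothetical sketch of the fixed operation ordering described above.
ORDER = ["component_subtraction", "unit_conversion", "colour_correction",
         "bandpass_transformation", "masking"]

def plan_pipeline(selected, unit):
    """Return the operations that will actually run, in the fixed order.

    Colour correction is skipped unless the map is (or has been converted
    to) MJy/sr, mirroring the invalidation rule described above."""
    steps = []
    for op in ORDER:
        if op not in selected:
            continue
        if op == "unit_conversion":
            unit = "MJy/sr"          # assume the user converts to MJy/sr
        if op == "colour_correction" and unit != "MJy/sr":
            continue                 # invalidated: wrong input unit
        steps.append(op)
    return steps
```

For example, selecting colour correction alone on a map in K_CMB produces no applicable steps, while adding unit conversion re-enables it.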

Asynchronous Operations

In this category, the operations requested by the user are queued for execution by the PLA server, and the results are emailed back to the user in the form of download links once execution has completed. The user can then download their results at their convenience.

In this category the following operations are included:

  1. Effective beam average: Allowing a user to calculate the average of the Planck effective beams over a given cutout of the map.
  2. Noise map cutout: Creating a cutout of the noise maps in the PLA.
  3. Planck Sky Model: Providing the user with an interface to the PSM software, allowing users to create sky models using their own parameters.
  4. Map-making: Providing the user with several tools to create a map cutout from the time-ordered data.
  5. Component separation: Providing the user with the possibility to use Planck data to produce maps of various astrophysical components.


Component Subtraction

This functionality allows the user to subtract either a CMB map or a parametric component map from a target frequency map. Details about the components that can be subtracted are given below.

Generally, the user selects a number of ‘components’, of which CMB maps can be a part, and the algorithm uses the spectral model of each component to extrapolate the component signal to the frequency of the target frequency map. If several components are chosen, the sum of the extrapolated components is subtracted from the target map. The subtraction takes place in harmonic space to avoid pixelization effects, and the user can select a resolution (‘Output NSIDE’) and smoothing FWHMs with which to smooth the components (‘Smoothing->Component’) before subtraction and the map (‘Smoothing->Target’) after subtraction.

If a component map contains a polarization map as well, this can also be subtracted from the target map (provided the target map contains polarized components, and the components to be subtracted have a polarized model). Some of the components come in several resolutions; for these, the user can choose among them (‘Available Nsides’) and is also informed of the Nside of the map they have chosen (‘Input Nside’). Note, however, that some of the resolutions might not contain a polarization component, and selecting such a resolution will disable the possibility to subtract the polarization component from the target map.

CMB: All CMB maps are available as temperature-only [math]N_{side}=2048[/math] maps and temperature + polarization [math]N_{side}=1024[/math] maps. In addition, the Commander temperature + polarization CMB map is available at [math]N_{side}=256[/math]. These maps, before subtraction, are converted into the same unit as the target map using the appropriate bandpass (the one belonging to the target map).

Other components: The best-fit component maps from the 32-band run of this paper are available as [math]N_{side}=256[/math] maps. Each component is extrapolated to the frequency of the target map using the rules for the spectral behaviour of those components, detailed in the abovementioned paper. They are then converted to the same unit as the target map before they are subtracted.

These components consist of:

“Synchrotron”: A low-frequency template (GALPROP z10LMPD_SUNfE from Orlando and Strong (2013)) with an amplitude and pivot frequency fit. The temperature best-fit parameters used for this component are contained in COM_CompMap_Synchrotron-commander_0256_R2.00.fits, while the polarization parameters are contained in COM_CompMap_SynchrotronPol-commander_0256_R2.00.fits.

“Free-free”: The Draine (2011) two-parameter bremsstrahlung model, parametrized by the emission measure and electron temperature. The map containing the best-fit parameters is COM_CompMap_freefree-commander_0256_R2.00.fits.

“Spinning dust”: Two templates derived from SpDust2 (Ali-Haïmoud et al., 2009, Ali-Haïmoud 2010, Silsbee et al., 2011), each with separate amplitudes, and the first template with a spatially varying peak frequency, while the second template has a spatially constant peak frequency. The parameter file containing the best-fit parameters in this case is COM_CompMap_AME-commander_0256_R2.00.fits.

“Thermal dust”: Three-parameter (including the amplitude) modified blackbody model, parametrized by the emissivity index and dust temperature. There are three available best-fit parameter maps for this component: The Nside=256 map from the 32-band solution, COM_CompMap_dust-commander_0256_R2.00.fits, and two high-resolution parameter maps from a run using only the Planck data at 143 GHz and above: The temperature + polarization Nside 1024 map, COM_CompMap_DustPol-commander_1024_R2.00.fits, and the temperature-only Nside 2048 map, COM_CompMap_ThermalDust-commander_2048_R2.00.fits.

“SZ”: The thermal Sunyaev-Zeldovich effect, parametrized by y_sz. The best-fit parameters of this component are contained in COM_CompMap_SZ-commander_0256_R2.00.fits.

“Line emission”: The CO lines (3->2, 2->1, 1->0) and the 94/100 GHz line emission signals. These are only available for the frequencies at which they make a contribution. For CO 1->0 this is the 100 GHz channel and its sub-detectors, for CO 2->1 the 217 GHz channel and its sub-detectors, and for CO 3->2 the 353 GHz channel and its sub-detectors. The 94/100 GHz emission is likewise available at the 100 GHz detectors. In the case of CO emission, the file containing the best-fit parameters is COM_CompMap_CO-commander_0256_R2.00.fits, while for 94/100 GHz, the file is COM_CompMap_xline-commander_0256_R2.00.fits.

For some of the above components (free-free, thermal dust, and SZ) we give the user the possibility to replace the best-fit parameters from Commander by a single global value.

Once the component maps are in the correct unit, they, along with the target map, are transformed into the harmonic domain. If more than one component map is chosen, they are added together, and the sum is subtracted from the target map. The user can then choose to smooth the final map before projecting it onto the desired resolution.

The output file will be of the same format as the input, except that it will have had the selected components subtracted from it, and will be of the resolution that the user chose.


Figure: The HFI 100 GHz full frequency map, before component subtraction.


Figure: The HFI 100 GHz full frequency map after subtracting the CMB component as estimated by SMICA.

Factor Computations

The unit conversion, colour correction, and bandpass transformation functionalities all have in common that they calculate a factor and multiply either a full map or a map cutout by that factor. They also provide the option to calculate an error estimate of this factor, in the cases where the bandpass responses have corresponding uncertainty values; only the HFI bandpasses have these, so for LFI bandpasses this functionality is disabled. The uncertainty is estimated by generating Monte Carlo samples, following the same procedure outlined here. When this option is selected, these values are included in the header of the resulting map file.

Unit Conversion and Colour Correction

The unit conversion and colour correction functionalities are interfaces to the uc_cc code used internally in Planck, and whose documentation can be found here.

Bandpass Transformation

The bandpass transformation functionality follows the same structure as the unit conversion and colour correction functionalities, though the goal here is slightly different. Defining the monochromatic flux density [math]\tilde{S}[/math] as

[math] \int{R(\nu)S(\nu)d\nu}= \tilde{S}(\nu_0)\int{R(\nu)f(\nu)d\nu} [/math]

where R is the bandpass response of the bandpass in question, S is the actual flux density of the source, and f is the spectral behaviour of the reference spectrum (e.g. [math]\nu/\nu_0[/math] in the case of the IRAS convention), the goal of bandpass transformation is to find the monochromatic flux density for the same source and reference spectrum given a different bandpass response. Since the ratio of the left hand and right hand side of the equation above should be 1 with any bandpass, we have

[math] \tilde{S}_{new}(\nu_{0, new})=\frac{\int{R_{old}(\nu)f(\nu,\nu_{0, old})d\nu}}{\int{R_{new}(\nu)f(\nu,\nu_{0, new})d\nu}}\cdot\frac{\int{R_{new}(\nu)S(\nu)d\nu}}{\int{R_{old}(\nu)S(\nu)d\nu}}\tilde{S}_{old}(\nu_{0, old}). [/math]

This equation forms the basis of the bandpass transformation algorithm. In order to use the functionality, it is therefore necessary to specify both the reference spectrum and the assumed spectral behaviour of the source. The reference spectra are either ‘thermodynamic’, ‘iras’, ‘ysz’, or ‘brightness’ (Rayleigh-Jeans or antenna temperature). The assumed spectral behaviours are either ‘powerlaw’, which is a spectrum obeying this behaviour (in brightness units):

[math] S(\nu) = \bigl(\frac{\nu}{\nu_0}\bigr) ^ \alpha [/math]

or ‘modified blackbody’, which is a spectrum obeying this behaviour (in brightness units):

[math] S(\nu) = \frac{2 h \nu^3}{c^2 \left(\exp(x(\nu, T)) - 1\right)} \cdot \nu ^ \beta [/math]

where [math]x(\nu, T) = \frac{h \nu}{k_B T}[/math].
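As an illustration, the transformation factor in the equation above can be evaluated by straightforward numerical integration of the tabulated bandpasses. The sketch below (pure Python, using a power-law source spectrum; the function names are ours, not the PLA's) assumes the two bandpasses are sampled on a common frequency grid:

```python
def trapz(y, x):
    """Simple trapezoidal integration over tabulated samples."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def powerlaw(nu, nu0, alpha=-1.0):
    # Assumed spectral behaviour (the exponent is a free parameter).
    return (nu / nu0) ** alpha

def bp_transform_factor(nu, R_old, R_new, nu0_old, nu0_new, f, S):
    """Factor multiplying S_tilde_old to obtain S_tilde_new.

    nu: common frequency grid; R_old/R_new: bandpass responses;
    f(nu, nu0): reference spectrum; S(nu): assumed source spectrum."""
    ref = (trapz([Ro * f(v, nu0_old) for Ro, v in zip(R_old, nu)], nu) /
           trapz([Rn * f(v, nu0_new) for Rn, v in zip(R_new, nu)], nu))
    src = (trapz([Rn * S(v) for Rn, v in zip(R_new, nu)], nu) /
           trapz([Ro * S(v) for Ro, v in zip(R_old, nu)], nu))
    return ref * src
```

With identical old and new bandpasses and reference frequencies the factor reduces to 1, which makes a convenient sanity check.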

Uncertainty estimation is performed in the same way as in uc_cc: given uncertainty values of the bandpass response, we can generate Monte Carlo samples by repeatedly simulating bandpass responses and calculating the bandpass transformation factor for those responses. The standard deviation of these samples is then reported as the uncertainty of the factor.
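A minimal sketch of this Monte Carlo procedure, assuming independent Gaussian uncertainties per frequency bin (an assumption on our part; the exact simulation scheme is the one described in the uc_cc documentation):

```python
import random
import statistics

def mc_factor_uncertainty(compute_factor, R, R_sigma, n_samples=1000, seed=0):
    """Standard deviation of the factor over simulated bandpass responses.

    compute_factor(R_sim) -> float computes the factor for one simulated
    response; R_sigma holds the 1-sigma uncertainty per frequency bin."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        # Perturb each bin independently; clip to non-negative response.
        R_sim = [max(r + rng.gauss(0.0, s), 0.0) for r, s in zip(R, R_sigma)]
        samples.append(compute_factor(R_sim))
    return statistics.stdev(samples)
```

With zero uncertainties the simulated responses are identical and the reported uncertainty is exactly zero.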

Masking

This functionality gives the user the possibility to mask away areas of a sky map using either a PLA mask, a mask generated from source catalogues, or a custom uploaded mask. The basic masking operation is straightforward: For every masked pixel, that pixel in the target map is set to zero.

The first option is using a predefined mask stored in PLA, all of which can be found using the dropdown tool. Each mask file can contain several columns, each of which typically represents various threshold values chosen for the masks.

The second option is generating a mask from one or more source catalogues. The user will then choose the source catalogues to use.

The user then has to select the radius around each source they want to mask away. There are two different ways to define this radius: Either a fixed radius, defined by a value and a unit, or a dynamic radius that depends on the signal-to-noise ratio of the source measurement. This is the algorithm that was used in the generation of the LFI masks. The exact equation that is used is

[math] r = \frac{FWHM}{2 \sqrt{2\log(2)}}\cdot f [/math]

where

[math] f = \sqrt{2 \log\bigl(\frac{amp}{0.1 * noise}\bigr)} [/math]

Here, amp and noise are columns from the source catalogue and describe some flux measure and uncertainty measure connected to that flux, respectively. The user also defines the FWHM in the above equation.
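Combining the two equations above, the dynamic radius can be computed as follows. This is a sketch; the guard for sources with amp <= 0.1·noise (where the logarithm would be non-positive) is our addition, and the PLA's handling of such sources may differ:

```python
import math

def mask_radius(fwhm, amp, noise):
    """Dynamic masking radius from the equations above.

    fwhm is in any angular unit; the result is in the same unit.
    amp and noise are the catalogue flux and its uncertainty."""
    snr_term = amp / (0.1 * noise)
    if snr_term <= 1.0:
        return 0.0                # guard added here: no mask for faint sources
    f = math.sqrt(2.0 * math.log(snr_term))
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0))) * f
```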

After selecting the radius around each source, the user can define filters which take away sources that do not meet certain criteria. There are three types of filters: A) Those that filter away those sources where a given column is greater/lower than a given value, B) those that filter away those sources where the ratio of two given columns is greater/lower than a given value, and C) those that filter away those sources where a flag is set to False.

Several such filters can be created, and each of them is applied to a given source catalogue.

The third option is uploading a custom mask. This mask must be a FITS file containing a single extension (excepting the primary extension), and the first column of this file will be used. The mask must be in HEALPix format, and if the NSIDE and/or ORDERING parameters of this mask differ from those of the target map, the mask will be converted automatically before application (when downgrading the mask, we take the simple average of the sub-pixels to determine whether the super-pixel should be masked: if the average is >= 0.5, it will be). The FITS file may be zipped as a tar.gz file.
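The downgrading rule can be sketched as follows, assuming NESTED ordering so that the sub-pixels of each super-pixel are contiguous (in RING ordering an index conversion would be needed first):

```python
def downgrade_mask(mask, factor):
    """Downgrade a binary mask by averaging sub-pixels.

    mask: flat list of 0/1 values in NESTED ordering.
    factor: (nside_in // nside_out) ** 2 sub-pixels per super-pixel.
    A super-pixel is masked when the sub-pixel average is >= 0.5."""
    out = []
    for i in range(0, len(mask), factor):
        sub = mask[i:i + factor]
        out.append(1 if sum(sub) / len(sub) >= 0.5 else 0)
    return out
```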

The output file will contain the input map, with the pixels indicated by the chosen mask set to zero. The mask itself will also be appended as a separate column at the end of the FITS extension.

Figure: The LFI 44 GHz full sky map with a dynamic radius mask from the 44 GHz source catalogue.

Noise Map Cutout

The purpose of this functionality is to generate a noise map cutout from one of the simulated noise realizations available in the PLA using the same cutting settings (region of interest, resolution, and rotation angle) used to generate the associated frequency map cutout, at that frequency and for the same time coverage.

Effective Beam Average

The purpose of this functionality is to provide the user with an average over the effective beams for a map cutout. Given a cutout, the algorithm takes the beams whose centre pixels are the four corners of the cutout, as well as the central pixel of the cutout, and then performs a weighted average of these five beams, where the weights are given by the number of hits in the centre pixel of each beam. The effective beams are defined in this document.

Figure: The average of the effective 545 GHz beams in the Crab nebula region.
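The hit-weighted average of the five beams can be sketched as follows (beams represented as flattened pixel arrays; an illustration, not the PLA code):

```python
def beam_average(beams, hits):
    """Hit-weighted average of beams (four corners + centre of the cutout).

    beams: list of equal-length flattened beam images.
    hits: number of hits at each beam's centre pixel (the weights)."""
    total = sum(hits)
    n = len(beams[0])
    return [sum(b[i] * h for b, h in zip(beams, hits)) / total
            for i in range(n)]
```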


Planck Sky Model

This functionality provides an interface to the Planck Sky Model (PSM), the software used internally in Planck to simulate sky maps and instrumental observations. More detailed documentation can be found here.

Map Making from Time-ordered data

The majority of the maps ingested in the PLA are those generated by the Planck Data Processing Centres (DPCs). In addition to maps, the PLA also provides access to Time-ordered data, which can be used to generate new maps. The new maps will be different from those in the PLA since the map-making method is different, and the selection of Time-ordered data may be different. We caution the user that the map-making methods used by PLA are much less sophisticated than the ones used in the Planck DPCs. The new maps generated should only be used as a rough initial approximation, for example to gauge the effect of deselecting part of the input time-ordered data.

Two map-making tools are provided by the PLA: the first, ring-based map-making, provides an interface to the ring-based map-making software written by Keihanen et al. More documentation about this software can be found here.

The option ‘Remove temperature monopole’ subtracts the average value of the map pixels from all pixels.

The output from the ring-based map-making consists of three FITS files, each containing one or more full-sky HEALPix maps: the map itself (I or IQU), the hit map (number of observations in each pixel), and the white-noise covariance matrix (either one column if only temperature is selected, or six columns containing all IQU correlations if polarization is selected as well).


Figure: Example of ring-based map-making choosing a four-month observing period.


The second map-making option, called ‘Baseline-removed pixel averaging’, is a timeline-based mapmaker. It basically consists of binning individual samples into HEALPix pixels, and is therefore the simplest possible map-making method. The user selects a region or timespan, along with instrumental parameters, and the timelines corresponding to the parameters chosen are processed in the following way (this assumes both temperature and polarization maps are requested; the temperature-only case is analogous, but simpler):

We first define [math]s_r[/math] and [math]s^2_r[/math] as the baseline-subtracted signal observed at sample [math]r[/math], and its square, respectively (to be clear, [math]s^2 \doteq (s_1^2, s_2^2,\dots)[/math], the elementwise square, not the scalar [math]s^Ts[/math]; equivalently, [math]s^2 = diag(ss^T)[/math]). These will be nsample-sized vectors.

In order to baseline-subtract the signal, we follow two different approaches for LFI and HFI, respectively:

LFI: In this case the subtraction is simple: Each signal timeline comes with a corresponding baseline, and we simply subtract the offset for the given detector from the signal for that detector.

HFI: In this case the offset data for all timelines is stored in a single file, and we must use the metadata of the signal timeline to find the offset array that corresponds to that timeline. The way this is done is by first extracting the 'BEGRING', 'ENDRING', 'BEGIDX' and 'ENDIDX' keywords from the signal timeline, and comparing these with the data in HFI_ROI_DestripeOffsets_R2.02_full.fits and HFI_ROI_GlobalParams_R2.02_full.fits in the following way:

First the 'start ring index' is found by subtracting the 'BEGRING' of the global parameter file from the 'BEGRING' of the science timeline. The 'end ring index' is similarly found by subtracting the 'BEGRING' of the global parameter file from the 'ENDRING' of the science timeline.

For each ring index between the start and end ring indices, we locate the sample indices inside the given ring by fetching all sample indices between that ring index and ring index + 1; this is done by accessing the 'BEGIDX' field of the global parameter data at ring index and ring index + 1. We then map this set of sample indices to the actual offset data for that ring, by going into the DestripeOffsets file and accessing the offsets at the given indices (belonging to that specific ring). We do this by accessing the ring index'th row of the DestripeOffsets data, taking care to use the right detector name to access the correct field. We use the first element of the resulting array, which corresponds to the full-map offset (as opposed to the first and second half-ring offsets, which are the two other elements of that array).

We then have the offset data for all rings between BEGRING and ENDRING (in the signal timeline) mapped to a sample index. We can then access the offset data corresponding to the sample indices we actually need, which are the sample indices between BEGIDX and ENDIDX from the signal timeline. From these we create the final offset array and subtract it from the signal timeline.
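The ring-index bookkeeping above can be sketched as follows. This is an illustrative reconstruction only: the argument names and in-memory array layout are hypothetical, and the real data live in the FITS files named above (HFI_ROI_DestripeOffsets_R2.02_full.fits and HFI_ROI_GlobalParams_R2.02_full.fits):

```python
def hfi_offsets_for_timeline(begring, endring, begidx, endidx,
                             gp_begring, gp_begidx, destripe_offsets):
    """Sketch of the HFI offset lookup described above.

    gp_begring            : 'BEGRING' of the global parameter file.
    gp_begidx[i]          : first sample index of ring i (global params).
    destripe_offsets[i][0]: full-map offset for ring i (element 0 of 3;
                            elements 1 and 2 are the half-ring offsets).
    Returns one offset value per sample in [begidx, endidx)."""
    start_ring = begring - gp_begring
    end_ring = endring - gp_begring
    offset_by_sample = {}
    for ring in range(start_ring, end_ring + 1):
        # Samples of this ring span [gp_begidx[ring], gp_begidx[ring + 1]).
        for s in range(gp_begidx[ring], gp_begidx[ring + 1]):
            offset_by_sample[s] = destripe_offsets[ring][0]
    return [offset_by_sample[s] for s in range(begidx, endidx)]
```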

We then define the pointing matrix [math]A[/math] as the [math]N_{samples} \times N_{pix}[/math] matrix where the entry corresponding to sample [math]r[/math] and pixel [math]p[/math] is given by

[math] A_{r, p} = i(p, r) \cdot [1, \cos(2 \psi(r)), \sin(2 \psi(r))], [/math]

where [math]i[/math] is a function that is 1 if the sample falls within the pixel [math]p[/math] and 0 otherwise, and [math]\psi[/math] is the pointing angle of sample [math]r[/math]. This means that inside each element of [math]A[/math] is embedded a 3-dimensional space containing pointing information for that element.

We then create the diagonal [math]N_{pix} \times N_{pix}[/math] matrix [math]W[/math], defined as

[math] W = A^TA, [/math]

where each diagonal element of [math]W[/math] is a [math]3\times 3[/math] matrix. Each diagonal element thus looks like this (we let [math]T[/math] be the set of all timelines, and [math]ns[/math] the function giving the number of samples in a timeline):

[math] W_{p, p} = \sum_{q \in T} \sum_{r=1}^{ns(q)} i(p, r) \cdot \begin{pmatrix} 1 & \cos(2 \psi(r)) & \sin(2 \psi(r)) \\ \cos(2 \psi(r)) & \cos^2(2\psi(r)) & \cos(2 \psi(r))\sin(2\psi(r)) \\ \sin(2 \psi(r)) & \cos(2\psi(r))\sin(2\psi(r)) & \sin^2(2\psi(r)) \end{pmatrix} [/math]


The map, [math]m[/math], in pixel [math]p[/math] will then be given by

[math] m_p = W_{p, p}^{-1} (A^T s)_p, [/math]

where the matrix inversion happens in the [math]3 \times 3[/math] sub-space mentioned above, while [math]A^Ts[/math] projects the [math]n_{samp}[/math]-sized vector onto the [math]n_{pix}[/math]-sized space.

Further, the rms of pixel [math]p[/math] is given by

[math] rms_p = \frac{1}{n_{samp}}\sqrt{W_{p, p}^{-1} (A^T s^2)_p - m_p \circ m_p}, [/math]

where again all matrix operations except [math]A^Ts^2[/math] happen in the 3-dimensional sub-space, [math]\circ[/math] denotes the elementwise product (so the subtraction and square root are also taken elementwise), and [math]n_{samp}[/math] is the number of samples binned into pixel [math]p[/math]; this quantity is given by the [math](1, 1)[/math] element of [math]W_{p, p}[/math].

For temperature only, the 3-dimensional sub-space is reduced to 1 dimension, and all operations in that space reduce to simple scalar operations - i.e. weighted averaging of all the timeline samples falling in each pixel.

The output from this mapmaker is [math]m, rms[/math], and [math]W[/math].
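For the temperature-only case, the scheme above reduces to plain averaging of the samples falling in each pixel, which can be sketched as follows (a minimal illustration, not the PLA implementation):

```python
def bin_map(pixels, signal, npix):
    """Temperature-only binned map: average of all (baseline-subtracted)
    samples falling in each pixel, plus the hit count (the scalar W).

    pixels: HEALPix pixel index of each sample; signal: sample values."""
    m = [0.0] * npix
    hits = [0] * npix
    for p, s in zip(pixels, signal):
        m[p] += s
        hits[p] += 1
    # Unobserved pixels are left at 0.0 here; a real mapmaker would
    # typically flag them instead.
    return [mi / h if h else 0.0 for mi, h in zip(m, hits)], hits
```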




Component Separation

Component separation is the process of estimating the contribution of a specific source of emission to an observed map. The PLA provides a number of all-sky maps of components which have been derived from Planck observations by the Planck Data Processing Centres. These maps have been generated using specific algorithms optimized in certain ways, using certain input data, and targeted at producing all-sky maps. The PLA also allows users to generate new estimates of components from Planck observations. The methods offered are very simplified versions of the DPC algorithms, but may offer certain advantages, as they allow the user to select the input data and to target small regions of the sky.

The PLA offers two methods to carry out component separation: an Internal Linear Combination (ILC) algorithm, and a parametric model maximum likelihood algorithm. The ILC method allows the user to estimate the CMB only, whereas the parametric method allows the user to estimate a set of sources defined by model SEDs. Both can be applied either to the entire sky or to a specified cutout.


ILC component separation

The ILC component separation algorithm follows the standard ILC procedure (e.g. Bennett, C. L. et al. 2003b, ApJS, 148, 97): Assume that for a map observed at a given channel [math]k[/math] we have

[math] T(\nu_k) = T_{cmb} + T_{res}(\nu_k), [/math]

where [math]T_{cmb}[/math] is the CMB contribution, which is assumed to be independent of channel as long as our map is in thermodynamic units. We then form the combination

[math] T = \sum_k w_kT(\nu_k) [/math]

where [math]w_k[/math] is a weight applied to that particular channel, subject to the constraint that

[math] \sum_k w_k = 1. [/math]

With this constraint,

[math] T = T_{cmb} + \sum_k w_k \cdot T_{res}(\nu_k), [/math]

meaning that if we can choose [math]w_k[/math] to minimize the second sum, we end up with a map that is close to the CMB contribution. The weights that minimize this sum are given by

[math] w_k = \frac{\sum_j C_{k, j}^{-1}}{\sum_{i, j} C_{i, j}^{-1}}, [/math]

where [math]C[/math] is the [math]n_{maps} \times n_{maps}[/math] sample covariance matrix.
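Since this is equivalent to [math]w = C^{-1}\mathbf{1} / (\mathbf{1}^T C^{-1}\mathbf{1})[/math], the weights can be computed by solving a single linear system. A minimal pure-Python sketch (the solver is a naive Gauss-Jordan elimination, adequate for the small [math]n_{maps} \times n_{maps}[/math] matrices involved; not the PLA's actual code):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting: solve A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ilc_weights(C):
    """Minimum-variance ILC weights: w = C^{-1} 1 / (1^T C^{-1} 1)."""
    x = solve(C, [1.0] * len(C))   # x = C^{-1} 1
    s = sum(x)
    return [xi / s for xi in x]
```

By construction the weights sum to 1, satisfying the constraint above.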


If polarization is enabled, there will be two sets of weights: one for the [math]I[/math] field, and one for the [math]Q[/math] and [math]U[/math] fields. Only those selected maps that have a polarization component will be used for calculating the [math]Q[/math] and [math]U[/math] weights.

The user can select a number of Planck maps and a selection of external maps, as well as a target resolution and degree of smoothing. The function converts the maps to thermodynamic temperature and smooths all maps in harmonic space with the target smoothing FWHM before projecting them onto a map with the target [math]N_{side}[/math]. This is done regardless of whether the maps have different resolutions, and it is recommended to choose an FWHM that corresponds to the lowest-resolution map (or lower); the system does not attempt to sanity-check the FWHM that the user prescribes.

The output from the ILC functionality is a file containing, as its first 1, 2, or 3 columns (depending on whether I, QU or IQU was chosen, respectively), the ILC solution. The subsequent columns contain the residuals of each of the input maps, defined as the ILC solution subtracted from the input map at each channel (after the input maps have been converted to K_CMB). The number of residual columns is equal to n * (1, 2, or 3), where n is the number of input maps and the last factor again depends on whether I, QU or IQU was chosen.


Parametric model maximum likelihood component separation

This method uses the 32-band temperature data and parametric model described in the 2015 diffuse components paper. The user can select which of the 32 bands and which of the parametric components they wish to include in the analysis. They also must choose a region in which to perform the analysis.

Using this data we can define a chi-squared for each pixel as follows:

[math] \chi^2 = \sum_{i=1}^{n_{bands}} \frac{(m(i) - d_i) ^ 2}{rms_i^2}, [/math]

where [math]m(i)[/math] is the model evaluated at band [math]i[/math], and [math]d_i[/math] and [math]rms_i[/math] are the signal observed in, and the rms corresponding to, that band, respectively.
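The per-pixel chi-squared can be written down directly from the equation above (a sketch; the model is passed as a callable over band indices, and the names are illustrative):

```python
def chi_squared(model, data, rms):
    """Per-pixel chi-squared over the selected bands.

    model(i): parametric model evaluated at band i (1-based);
    data, rms: observed signal and rms per band."""
    return sum((model(i) - d) ** 2 / r ** 2
               for i, (d, r) in enumerate(zip(data, rms), start=1))
```

In the PLA this quantity is minimized per pixel with a Powell algorithm, starting from the published best-fit parameters.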

We start from the parameter results from the abovementioned paper and run a Powell minimization algorithm to minimize the chi-squared until we have found parameter values for all pixels in the target region.

Global parameters are fixed to their best-fit values from the 2015 paper, as are monopole and dipole values.

The output from this functionality consists of the best-fit parameter values of the components the user has chosen to include, as well as the chi-squared value in each pixel. The user can also elect to receive the residual maps (the data minus the best-fit model) in a single file. As the data are given at [math]N_{side}=256[/math] and smoothed to 60 arcmin, the output residual maps and parameters will also have this resolution, although they will be reprojected onto the cutout the user has chosen.

Note that results from this will likely be poorer in regions of high activity where we have a limited understanding of the physical processes, such as close to the galactic center. See the masks and maps in the abovementioned paper to get an idea of which regions might be less contaminated and thus give better results.

Figure: Example output from the parametric model maximum likelihood component separation on a cutout: the thermal dust amplitude.

Figure: Example output from the parametric model maximum likelihood component separation on a cutout: the thermal dust spectral index.


Bandpass Leakage Correction

The purpose of this functionality is to add or subtract correction templates, available in the PLA, to or from the associated frequency maps, in order to “apply” or “remove” corrections that were introduced by the Planck Data Processing Centres mainly to correct for systematic effects. This functionality is intended for advanced users of the PLA, and the Planck Collaboration recommends against using uncorrected maps for science analysis unless the user fully understands the impact of removing these corrections. Once the user selects the correction map to be applied or removed, based on the type of correction, frequency, and mission coverage, the system automatically determines which map to apply the correction to, or remove it from, and generates the resulting map.



Custom Bandpasses

There are several functionalities that allow the user to upload custom bandpasses: unit conversion, colour correction, bandpass transformation, ILC component separation, and component subtraction. Each such bandpass must be an ASCII file containing two or three columns. The first column contains the frequency at which the response is defined, given in Hz. The second contains the actual bandpass response, and the optional third column contains the uncertainty of the bandpass response, assumed to be one standard deviation. If this column is present, an uncertainty estimate will be provided (for those functionalities that provide such an estimate).


The currently available bandpasses are

The SPIRE PLW and PMW bandpasses, based on the data found here. Note that the bandpasses we currently possess were created using lambda-squared weighting, which the SPIRE team has since changed, so these do not correspond to the current state of the SPIRE bandpasses.

The WMAP bandpasses, as delivered by Ingunn Wehus and Hans Kristian Eriksen from their 32-band Commander run for the 2015 diffuse components paper.


A custom bandpass should be:

  • A text file
  • Comment lines must start with a # character
  • Comments are allowed only at the top of the file
  • No column headers are needed
  • The columns can be space- or tab-separated
  • The columns need not be aligned vertically
  • The file has two or three columns: frequency, response, and (optionally) standard_error
  • To prevent abuse of the system, the file may contain at most 50,000 lines
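A validator for this format might look as follows (a sketch under the rules listed above; the PLA's actual validation code may differ in details such as error messages and comment handling):

```python
def parse_bandpass(text, max_lines=50000):
    """Parse and validate a custom bandpass in the ASCII format above.

    Returns (freq_hz, response, sigma_or_None); raises ValueError on
    format violations."""
    lines = text.splitlines()
    if len(lines) > max_lines:
        raise ValueError("more than 50,000 lines")
    rows, seen_data = [], False
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            if seen_data:
                raise ValueError("comments allowed only at the top")
            continue
        seen_data = True
        cols = line.split()           # space- or tab-separated
        if len(cols) not in (2, 3):
            raise ValueError("expected 2 or 3 columns")
        rows.append([float(c) for c in cols])
    ncols = len(rows[0])
    if any(len(r) != ncols for r in rows):
        raise ValueError("inconsistent column count")
    freq = [r[0] for r in rows]
    resp = [r[1] for r in rows]
    sigma = [r[2] for r in rows] if ncols == 3 else None
    return freq, resp, sigma
```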


References

Planck Legacy Archive

Planck Sky Model

Cosmic Microwave background

Full-Width-at-Half-Maximum

Sunyaev-Zel'dovich

(Planck) High Frequency Instrument

(Planck) Low Frequency Instrument

Flexible Image Transport System

HEALPix (Hierarchical Equal Area isoLatitude Pixelization of a sphere): K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelmann, “HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere”, ApJ, 622, 759-771 (2005)

Ring-Ordered Information (DMC group/object)

Data Processing Center