TOI processing LFI
Overview
The LFI Level2 pipeline analyzes each horn of the instrument separately, one pointing period at a time, and stores the results in objects the length of an OD. Each diode of the horn is corrected for systematic effects, differentiated, and then combined with its complementary diode in the same radiometer. The photometric calibration is then applied to the combined horn data.
Pre-processing
Before the Level2 pipeline is run, and to simplify the subsequent analysis, the mission information and the data sampling divisions are stored in the database.
The mission information is a set of objects, one for each Operational Day (OD, as defined in REFERENCE??), in which the pointing period data are stored: DPC pointing ID (where 1 is the first pointing of the nominal mission), PSO pointing ID, start OBT of the pointing maneuver, start OBT of the stable pointing period, end OBT of the pointing, and spin axis ecliptic longitude and latitude.
The sampling information is a set of objects, one for each LFI frequency, in which the following are stored for each pointing ID: start OBT of the pointing maneuver, start OBT of the stable pointing period, end OBT of the pointing, number of samples in the pointing, number of stable samples, first sample of the stable pointing period, and sample number from the start of the nominal mission. Valid samples and OBTs are defined wherever any of the radiometers of that frequency cohort contains valid data.
ADC Correction
During analysis it became apparent that the white noise and the calibration were affected by a common systematic. It turned out to be a non-linearity in the analog-to-digital converters (ADCs) on board. More information can be found in P02 and P02a.
Evaluation
The mathematical model represents the digital ADC output as:

$$X = (V - O_{\rm DAE})\,G_{\rm DAE}$$

where $V$ is the voltage input, $G_{\rm DAE}$ is the DAE gain, $O_{\rm DAE}$ is the DAE offset and $X$ is the digitized output. We can model the non-linearity as a function of the input voltage, $R(V)$. So we have the apparent inferred voltage $V'$, and we can link it to the actual input voltage with:

$$X = \left(R(V)\,V - O_{\rm DAE}\right)G_{\rm DAE}$$

so that:

$$V' = \frac{X}{G_{\rm DAE}} + O_{\rm DAE} = R(V)\,V$$

Since the DAE gain and offset are accurately known, we can use the much simpler relation:

$$V' = R(V)\,V$$

and we expect $R(V)$ to be very near to unity for all $V$.

To find the response curve we have only the apparent voltage to work with, so we had to use the inverse response function and replace the real input voltage with the system temperature $T_{\rm sys}$ times the time varying gain factor $G(t)$:

$$V = G(t)\,T_{\rm sys}$$

If we introduce a small signal $\delta T$ on top of $T_{\rm sys}$, it leads to an increased detected voltage $\delta V$ and a corresponding apparent voltage increment:

$$\delta V' = \frac{dV'}{dV}\,\delta V$$

so carrying out the differentiation with respect to $V$ of the relation between true and apparent signal voltage leads to:

$$\delta V' = \left(R(V) + V\,\frac{dR(V)}{dV}\right)\delta V$$

We now assume $T_{\rm sys}$ and $\delta T$ are fixed, and that the variations are due to slow drifts in the gain. So we can isolate the gain terms:

$$V = G(t)\,T_{\rm sys}, \qquad \delta V = G(t)\,\delta T$$

Combining the equations through the gain factor to remove it:

$$\delta V = \frac{\delta T}{T_{\rm sys}}\,V$$

Rearranging and putting $a = \delta T / T_{\rm sys}$:

$$\delta V' = a\left(R(V) + V\,\frac{dR(V)}{dV}\right)V = a\left(V' + V^2\,\frac{dR(V)}{dV}\right)$$

So there is the expected direct proportionality of $\delta V'$ to $V'$, due to the assumption that the variations in voltage are due to overall gain drift, so the amplitudes of voltage and signal vary together. Then there is the additional differential term, which pulls the signal amplitude away from the linear relationship. So if we plot measured white noise or dipole gain factor against recovered voltage, we should see this linear curve with variations due to local slope changes at particular voltages. The linear part can be taken out and the differential part fitted; the fit is then numerically integrated to obtain the inverse response curve, which is what we need to convert the measured voltages to corrected voltages.

Application
For each of the 44 LFI diodes there is a corresponding object in the database. Each object contains 4 columns: the input voltages coming from the sky channel with the corresponding linearized outputs, and the input voltages coming from the reference channel with the corresponding linearized outputs.
Data loaded by the module are used to initialize two different interpolators using CSPLINE and the functions from the GSL (GNU Scientific Library). The interpolators are then used to correct each sample.
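The per-diode correction is essentially a table lookup with interpolation. A minimal sketch of the idea in Python, using linear interpolation as a stand-in for GSL's cubic CSPLINE, with an invented two-column correction table (`volt_in`, `volt_lin`):

```python
from bisect import bisect_right

def make_adc_corrector(volt_in, volt_lin):
    """Return a function mapping measured voltage -> linearized voltage.

    volt_in  : sorted measured (apparent) voltages from the correction table
    volt_lin : corresponding linearized (true) voltages
    Linear interpolation here stands in for GSL's CSPLINE interpolator.
    """
    def correct(v):
        # clamp to the table range, then interpolate between bracketing knots
        if v <= volt_in[0]:
            return volt_lin[0]
        if v >= volt_in[-1]:
            return volt_lin[-1]
        i = bisect_right(volt_in, v) - 1
        frac = (v - volt_in[i]) / (volt_in[i + 1] - volt_in[i])
        return volt_lin[i] + frac * (volt_lin[i + 1] - volt_lin[i])
    return correct

# one corrector per channel (sky and reference), applied sample by sample
sky_correct = make_adc_corrector([0.0, 1.0, 2.0], [0.0, 1.02, 2.0])
linearized = [sky_correct(v) for v in [0.5, 1.5]]
```

In the real pipeline one such interpolator is built for the sky channel and one for the reference channel of each diode.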
Spikes Removal
Some of the LFI receivers exhibit a small artifact with an exactly 1 second repetition, which is visible in the power spectra as a set of spikes at 1 Hz and its harmonics. The spurious signal is very well modeled and is removed from the timelines. More information can be found in P02 and P02a.
Modeling
The cause of the spikes at 1 Hz and harmonics is a tiny 1 second square wave embedded in the affected channels. The method to estimate the 1 Hz signal is to build a template in the time domain, synchronized with the spurious signal. The first step is dividing each second of data into time bins using the OBT. The number of bins is computed using:

$$n_{\rm bins} = f_{\rm samp} \times \sqrt{3}\ {\rm s}$$

where $f_{\rm samp}$ is the sampling frequency, and $n_{\rm bins}$ is 136 at 70 GHz, 80 at 44 GHz and 56 at 30 GHz. Then the bins vector is initialized with the time intervals. To avoid aliasing effects, the template resolution is $\sqrt{3}$ times the sampling rate.

We can write the process adding an index to the time sample: the lower index $i$ denotes the particular time sample, while the upper index $j$ labels the bin into which the sample falls. The linear filter can be written as:

$$s_i^j = a^j\left(1 - w_i^j\right) + a^{j+1}\,w_i^j$$

where $a^j$ is the template amplitude of bin $j$. Here $w_i^j$ is the filter weight, which is determined by where within the bin the sample lies. If we use $t^j$, with only an upper index, to denote the start time of each bin, then we can write the filter weight as follows:

$$w_i^j = \frac{t_i^j - t^j}{t^{j+1} - t^j}$$

In other words, the filter weight is the time sample value minus the start of the bin, divided by the width of the bin.
We must estimate the bin amplitudes $a^j$ from the data $d_i$. With the assumption that the instrument has stable noise properties, we can use a least squares algorithm to estimate the bin values, minimizing:

$$\chi^2 = \sum_i \left(d_i - s_i^j\right)^2$$

This can be represented as a matrix equation:

$$\mathbf{M}\,\mathbf{a} = \mathbf{v}$$

with the following definitions:

$$M_{j,j} = \sum_{i \in j}\left(1-w_i^j\right)^2 + \sum_{i \in j-1}\left(w_i^{j-1}\right)^2$$

$$M_{j,j+1} = M_{j+1,j} = \sum_{i \in j}\left(1-w_i^j\right)w_i^j$$

$$v_j = \sum_{i \in j}\left(1-w_i^j\right)d_i + \sum_{i \in j-1} w_i^{j-1}\,d_i$$

where $i \in j$ means the sum runs over the samples falling in bin $j$. With these definitions we have to make use of periodic boundary conditions to obtain the correct results, such that if $j < 1$ then $j \to j + n_{\rm bins}$, and if $j > n_{\rm bins}$ then $j \to j - n_{\rm bins}$. Once this is done, we have a symmetric tridiagonal matrix with additional values at the upper right and lower left corners. The matrix is solved with LU decomposition. In order to be certain of the numerical accuracy of the result, we perform a simple iterative refinement of the solution. The solution of the linear system and the iterative improvement are implemented as suggested in Numerical Recipes.

Application
For each of the 44 LFI diodes there is a corresponding object in the database. Because of the amplitude of the spikes, we chose to apply the correction only to the 44 GHz radiometers. Each object contains 3 columns: the bin start times, the sky amplitudes and the reference amplitudes.
For each sample the value to be subtracted is computed using:

$$s_i = a^k\left(1 - w_i^k\right) + a^{k+1}\,w_i^k$$

where $k$ is the index of the bin at the given sample time.
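The template estimation and subtraction described above can be sketched in Python as follows (function names and the dense solver are illustrative; the pipeline solves the cyclic tridiagonal system with LU decomposition and iterative refinement):

```python
def solve_dense(M, v):
    """Small dense solver (Gaussian elimination with partial pivoting);
    stands in for the LU decomposition used in the pipeline."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][j] * x[j] for j in range(r + 1, n))
        x[r] = (A[r][n] - s) / A[r][r]
    return x

def estimate_spike_template(t, d, nbins):
    """Least-squares estimate of the 1-second periodic spike template.

    Each sample at time t falls in bin k = floor(phase) with weight w,
    and is shared between bins k and (k+1) % nbins (periodic boundary).
    """
    M = [[0.0] * nbins for _ in range(nbins)]
    v = [0.0] * nbins
    for ti, di in zip(t, d):
        phase = (ti % 1.0) * nbins
        k = int(phase) % nbins
        w = phase - int(phase)            # filter weight within the bin
        kn = (k + 1) % nbins              # periodic boundary conditions
        M[k][k] += (1 - w) ** 2
        M[kn][kn] += w * w
        M[k][kn] += (1 - w) * w
        M[kn][k] += (1 - w) * w
        v[k] += (1 - w) * di
        v[kn] += w * di
    return solve_dense(M, v)              # bin amplitudes a^j

def subtract_spike_template(t, d, a):
    """Evaluate the template at each sample time and subtract it."""
    nbins = len(a)
    out = []
    for ti, di in zip(t, d):
        phase = (ti % 1.0) * nbins
        k = int(phase) % nbins
        w = phase - int(phase)
        out.append(di - (a[k] * (1 - w) + a[(k + 1) % nbins] * w))
    return out
```

With a template estimated from a long stretch of data, the residual after subtraction is dominated by the radiometer noise.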
Gaps Filling
During the mission some of the data packets were lost (see P02). Moreover, in two different and very peculiar situations, LFI was shut down and restarted, giving inconsistencies in the data sampling. None of these data are used for scientific purposes, but to avoid discrepancies in the data analysis all of the radiometers at the same frequency must have the same samples.
To accomplish this, the length of the data stream to be reduced in a specific pointing period is compared with the information stored in the sampling information object. If the lengths do not match, the OBT vector is filled with the missing sample times, the data vector is filled with zeros, and the gap bit is raised in the flag column.
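A minimal sketch of this bookkeeping in Python (field names and the flag bit position are invented for illustration):

```python
GAP_FLAG = 1 << 0   # hypothetical bit position for the "gap" flag

def fill_gaps(obt, data, flags, expected_obt):
    """Pad a pointing period so it matches the expected sample grid.

    obt, data, flags : the reduced stream (parallel lists)
    expected_obt     : full OBT grid from the sampling information object
    Missing samples get a zero datum and the gap bit raised.
    """
    have = dict(zip(obt, zip(data, flags)))
    out_obt, out_data, out_flags = [], [], []
    for t in expected_obt:
        if t in have:
            d, f = have[t]
            out_obt.append(t); out_data.append(d); out_flags.append(f)
        else:
            out_obt.append(t); out_data.append(0.0)
            out_flags.append(GAP_FLAG)   # mark the sample as a gap
    return out_obt, out_data, out_flags
```

After this step every radiometer at a given frequency carries the same OBT grid, as required by the later differencing and mapmaking steps.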
Gain Modulation Factor
The pseudo-correlation design of the LFI radiometers dramatically reduces the 1/f noise when the sky and the reference load outputs are differenced. The two streams are slightly unbalanced, since one looks at the 2.7 K sky and the other looks at the ~4.5 K reference load. To force the mean of the difference to zero, the load signal is multiplied by the Gain Modulation Factor (R). For each pointing period the factor is computed using:

$$R = \frac{\langle V_{\rm sky}\rangle}{\langle V_{\rm load}\rangle}$$

Then the data are differenced using:

$$\Delta V_i(t) = V_{{\rm sky},i}(t) - R_i\,V_{{\rm load},i}(t)$$
This value for R minimizes the 1/f and the white noise in the difference timestream. The i index represents the diode and can be 0 or 1.
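The differencing step can be sketched as follows (a toy example in Python; variable names are illustrative):

```python
def gain_modulation_factor(v_sky, v_load):
    """R = <V_sky> / <V_load>, computed over one pointing period."""
    return (sum(v_sky) / len(v_sky)) / (sum(v_load) / len(v_load))

def difference(v_sky, v_load):
    """Difference the two streams; by construction of R the mean of the
    result is zero over the pointing period."""
    r = gain_modulation_factor(v_sky, v_load)
    return [s - r * l for s, l in zip(v_sky, v_load)]
```

Multiplying the load stream by this R before subtraction removes the sky/load imbalance while preserving the common-mode rejection of the pseudo-correlation scheme.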
At this point the maneuver flag bit is set, using the information stored in the sampling information object, to identify the samples to be ignored in the next steps of the pipeline.
The R values are stored in the database. At the same time the mean values $\langle V_{\rm sky}\rangle$ and $\langle V_{\rm load}\rangle$ are stored, in order to be used in other steps of the analysis.

Diode Combination
The two complementary diodes of each radiometer are combined. The relative weights of the diodes in the combination are chosen for optimal noise. We assign relative weights to the uncalibrated diode streams based on their first order calibrated noise.
Evaluation
From the first order calibration we compute the absolute gains $G_0$ and $G_1$, subtract an estimated sky signal, and calculate the calibrated white noise levels $\sigma_0$ and $\sigma_1$ for the pair of diodes. The weights for the two diodes ($i = 0$ or $1$) are:

$$W_i = \frac{\sigma_i^{-2}}{\sigma_0^{-2} + \sigma_1^{-2}}\,\frac{\tilde{G}}{G_i}$$

where the weighted calibration constant is given by:

$$\tilde{G} = \left[\frac{G_0^{-1}\,\sigma_0^{-2} + G_1^{-1}\,\sigma_1^{-2}}{\sigma_0^{-2} + \sigma_1^{-2}}\right]^{-1}$$
The weights are fixed to a single value per diode for the entire dataset. Small variations in the relative noise of the diodes would in principle suggest recalculating the weights on shorter timescales; however, we decided that a time varying weight could induce subtle systematic effects, so we chose a single best estimate of the weight for each diode of the pair.
Detector ID | Weight |
---|---|
LFI18M-00 | 0.567304963 |
LFI18M-01 | 0.432695037 |
LFI18S-10 | 0.387168785 |
LFI18S-11 | 0.612831215 |
LFI19M-00 | 0.502457723 |
LFI19M-01 | 0.497542277 |
LFI19S-10 | 0.55143474 |
LFI19S-11 | 0.44856526 |
LFI20M-00 | 0.523020094 |
LFI20M-01 | 0.476979906 |
LFI20S-10 | 0.476730576 |
LFI20S-11 | 0.523269424 |
LFI21M-00 | 0.500324722 |
LFI21M-01 | 0.499675278 |
LFI21S-10 | 0.563712153 |
LFI21S-11 | 0.436287847 |
LFI22M-00 | 0.536283158 |
LFI22M-01 | 0.463716842 |
LFI22S-10 | 0.553913461 |
LFI22S-11 | 0.446086539 |
LFI23M-00 | 0.508036034 |
LFI23M-01 | 0.491963966 |
LFI23S-10 | 0.36160661 |
LFI23S-11 | 0.63839339 |
LFI24M-00 | 0.602269189 |
LFI24M-01 | 0.397730811 |
LFI24S-10 | 0.456037835 |
LFI24S-11 | 0.543962165 |
LFI25M-00 | 0.482050606 |
LFI25M-01 | 0.517949394 |
LFI25S-10 | 0.369618239 |
LFI25S-11 | 0.630381761 |
LFI26M-00 | 0.593126369 |
LFI26M-01 | 0.406873631 |
LFI26S-10 | 0.424268188 |
LFI26S-11 | 0.575731812 |
LFI27M-00 | 0.519877701 |
LFI27M-01 | 0.480122299 |
LFI27S-10 | 0.484831449 |
LFI27S-11 | 0.515168551 |
LFI28M-00 | 0.553227696 |
LFI28M-01 | 0.446772304 |
LFI28S-10 | 0.467677355 |
LFI28S-11 | 0.532322645 |
Application
The weights in the table above are used in the formula:

$$d = W_0\,d_0 + W_1\,d_1$$

where $d_0$ and $d_1$ are the differenced data streams of the two diodes.
Planet Flagging

Samples in which a planet crosses the beam carry a strong transient signal; they are flagged so that they can be excluded from the subsequent steps of the analysis.
Extraction Method
The planet temperatures have been estimated from the affected chunks of samples, plus a surrounding region, projected onto a grid (microstripes), assuming an elliptical Gaussian beam with parameters taken from the instrument database.
Microstripes are a way to extract and store the relevant samples for planet detection. The relevant samples are those affected by the planet plus the samples in its neighborhood. The search radius used to select samples is 5 degrees around the planet position, computed at the pointing period mid time. For each sample we store the SCET (Spacecraft Event Time), the pointing direction and the calibrated temperature. Destriping is applied when the microstripes are used.
Random errors are estimated by taking the variance of the samples entering each micromap pixel. This is fast, but has known problems: near a bright source the noise estimate is larger, and the correlation matrix is difficult to extract. As a result the noise is overestimated by about a factor of two, which in this situation is not a major drawback.
The apparent position of a planet as seen from Planck at a given time is derived from JPL Horizons. Positions are sampled in tables at steps of 15 minutes and then linearly interpolated at the sampling frequency of each detector. The JPL Horizons tables also allow other quantities to be derived, such as the planet-Planck distance, the planet-Sun distance and the planet angular diameter affecting the apparent brightness of the planet.
The antenna temperature is a function of the dilution factor, according to:

$$T_{\rm ant,obs} = D\,T_{\rm ant,red}, \qquad D = \left(\frac{\theta_{\rm p}}{\theta_{\rm fwhm}}\right)^2$$

where $T_{\rm ant,obs}$ and $T_{\rm ant,red}$ are the observed and reduced antenna temperatures, $\theta_{\rm p}$ the instantaneous planet angular diameter and $\theta_{\rm fwhm}$ the beam full width at half maximum.

With the above definition, $T_{\rm ant,red}$ could be considered as the $T_{\rm ant,obs}$ for a planet with $\theta_{\rm p} = \theta_{\rm fwhm}$, but a more convenient view is to take a reference dilution factor $D_0$, defined as the dilution factor for a standardized planet angular diameter $\theta_{\rm p,0}$ and beam FWHM $\theta_{\rm fwhm,0}$:

$$D_0 = \left(\frac{\theta_{\rm p,0}}{\theta_{\rm fwhm,0}}\right)^2$$

leading to the following definition of a standardized antenna temperature:

$$T_{\rm ant,0} = D_0\,T_{\rm ant,red} = \frac{D_0}{D}\,T_{\rm ant,obs}$$

with the advantage of removing variations among different detectors and transits, while keeping the value of $T_{\rm ant,0}$ similar to that seen by the instrument, allowing a prompt comparison of signals and sensitivities.

Application
The OBT vectors found by the search are saved in a set of objects, one for each horn. In the Level2 pipeline those OBTs are compared with the OBT vector of the data, to raise the planet bit flag where needed.
Photometric Calibration
Photometric calibration is the procedure used to convert the data from volts to kelvin. The source of the calibration is the well known CMB dipole, caused by the motion of the Solar System with respect to the CMB reference frame. To this signal we add the modulation induced by the orbital motion of Planck around the Sun. The resulting signal is then convolved with the horn beam to get the observed dipole.
Beam Convolved Dipole
In computing the beam convolved dipole we used an elegant algorithm to save time and computing power. In computing the cosmological dipole signal it is usually assumed that the beam is pencil-like, acting as a Dirac delta. In this case a dipole timeline is defined as:

$$\Delta T(t) = \mathbf{P}(t)\cdot\mathbf{D}$$

where $\mathbf{P}(t)$ is the pointing direction in the observer reference frame and $\mathbf{D}$ is the dipole axis scaled by the dipole amplitude, in the same reference frame.

In general the true signal has to be convolved with the beam pattern of the given radiometer, usually described as a fixed map in the beam reference frame or as a time dependent map in the observer reference frame. It is easiest to describe the convolution in the beam reference frame, since there the function to be convolved is described by a single vector.

Denoting with $\mathsf{U}(t)$ the matrix converting pointings from the observer to the beam reference frame, so that:

$$\mathbf{P}_{\rm beam}(t) = \mathsf{U}(t)\,\mathbf{P}(t)$$

the instantaneous dipole direction in the beam reference frame is:

$$\mathbf{D}_{\rm beam}(t) = \mathsf{U}(t)\,\mathbf{D}$$

By denoting with $\mathbf{x}$ a pointing direction in the beam reference frame, the beam convolved dipole is then:

$$\Delta T(t) = N \int B(\mathbf{x})\left[\mathbf{x}\cdot\mathbf{D}_{\rm beam}(t)\right]d\Omega$$

where $N = \left[\int B(\mathbf{x})\,d\Omega\right]^{-1}$ is a normalization constant. Denoting with $x$, $y$, $z$ the three Cartesian components of $\mathbf{x}$, the integral of the dot product can be decomposed into three independent integrals:

$$\mathbf{S} = N\left(\int B\,x\,d\Omega,\ \int B\,y\,d\Omega,\ \int B\,z\,d\Omega\right)$$

These integrals define a time independent vector, characteristic of each radiometer and constant over the mission.
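The characteristic vector and the per-sample dot product can be sketched in Python, with an invented discretized beam map (in the real pipeline the beam and pixel solid angles come from the instrument database):

```python
def characteristic_vector(directions, beam, weights):
    """S = N * (sum B*x*dOmega, sum B*y*dOmega, sum B*z*dOmega).

    directions : unit vectors (x, y, z) of the pixel centers (beam frame)
    beam       : beam response B at each pixel
    weights    : pixel solid angles dOmega
    """
    norm = sum(b * w for b, w in zip(beam, weights))   # 1/N = integral of B
    s = [0.0, 0.0, 0.0]
    for (x, y, z), b, w in zip(directions, beam, weights):
        s[0] += b * x * w
        s[1] += b * y * w
        s[2] += b * z * w
    return [c / norm for c in s]

def convolved_dipole(s_vec, d_beam):
    """Beam convolved dipole: one dot product S . D_beam(t) per sample."""
    return sum(a * b for a, b in zip(s_vec, d_beam))
```

For a delta-like beam the characteristic vector reduces to the beam axis, and the dot product recovers the pencil-beam dipole timeline.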
By using this characteristic vector, the calculation of the convolved dipole reduces to a dot product of $\mathbf{S}$ with the dipole axis rotated into the beam reference frame:

$$\Delta T(t) = \mathbf{S}\cdot\mathbf{D}_{\rm beam}(t)$$

Binning
In order to simplify the computation and to reduce the amount of data used in the calibration procedure, the data are phase binned into maps at HEALPix Nside 256. The low resolution is sufficient for the purpose, because the dipole signal varies over large angular scales. During phase binning, all the data flagged for maneuvers, planets and gaps, and the ones flagged in the Level1 analysis as not recoverable, are discarded.
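Phase binning is an average of unflagged samples by sky pixel; a minimal sketch (integer pixel indices here stand in for HEALPix Nside 256 pixels):

```python
def phase_bin(pixels, data, flags, npix):
    """Average unflagged samples falling in each pixel of a low-res map."""
    sums = [0.0] * npix
    hits = [0] * npix
    for p, d, f in zip(pixels, data, flags):
        if f == 0:                      # keep only unflagged samples
            sums[p] += d
            hits[p] += 1
    return [s / h if h else 0.0 for s, h in zip(sums, hits)]
```

The fit described below is then performed on these binned values rather than on the full-rate timeline.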
Fit
The first order calibration values are given by a least squares fit between the signal and the dipole. For each pointing period a gain ($G_k$) and an offset ($O_k$) are computed by minimizing:

$$\chi^2 = \sum_i \left[\Delta V(t_i) - G_k\,\Delta T_{\rm dip}(t_i) - O_k\right]^2$$

The sum includes only the samples outside a Galactic mask.
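The per-pointing fit is ordinary least squares with two parameters, which has a closed-form solution; sketched in Python (names are illustrative):

```python
def fit_gain_offset(dipole, signal):
    """Least squares fit of signal = G * dipole + O over one pointing period."""
    n = len(dipole)
    mx = sum(dipole) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in dipole)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dipole, signal))
    gain = sxy / sxx                 # G_k
    offset = my - gain * mx          # O_k
    return gain, offset
```

In the pipeline the inputs are the phase-binned, masked samples and the beam convolved dipole evaluated at the same pixels.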
Mademoiselle
The largest source of error in the fit arises from unmodeled sky signal, mainly CMB anisotropy. To correct for this we iteratively project the calibrated data (with the dipole removed) onto a map, scan this map to produce a new TOD with the astrophysical signal removed, and finally run a simple destriping algorithm to find corrections to the gain and offset factors.
To reduce the impact of noise during the iterative procedure, the sky estimation is built using data from both radiometers of the same horn.
Smoothing
To improve the accuracy of the solution from the iterative algorithm and to remove noise, a smoothing algorithm is applied. We used two different algorithms: OSG for the 44 and 70 GHz radiometers, and 4 K total-power with the fix (DV/V Fix) for the 30 GHz radiometers. The reasons behind this choice can be found in P02b.
OSG
OSG is a Python code that performs smoothing with a 3 step algorithm.
The first step is a moving average window: the gain and offset factors are streams containing one value for each pointing period, which we call dipole fit raw streams. The optimized window has a length of 600 pointing periods.
The second step is a wavelet algorithm, using the pywt (Discrete Wavelet Transform in Python) library. Both the dipole fit raw streams and the averaged streams are denoised using wavelets of the Daubechies family, extending the signals with symmetric padding.
The third step is the combination of the dipole fit raw streams and the averaged, denoised streams, using knowledge about the instrument performance during the mission.
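The first and third steps can be sketched as follows; the wavelet denoising itself is done with pywt in the pipeline, and the combination weight `alpha` here is an invented placeholder for the instrument-knowledge-driven blending:

```python
def moving_average(stream, window=600):
    """Centered moving average over pointing periods (600-period window)."""
    half = window // 2
    out = []
    for i in range(len(stream)):
        lo = max(0, i - half)
        hi = min(len(stream), i + half + 1)
        out.append(sum(stream[lo:hi]) / (hi - lo))
    return out

def combine(raw, averaged, alpha=0.9):
    """Blend raw and averaged streams; alpha is a placeholder weight."""
    return [alpha * a + (1.0 - alpha) * r for r, a in zip(raw, averaged)]
```

In the real code the blending varies with mission events (e.g. thermal instabilities), favoring the raw stream where the gain changes quickly.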
4 K Total-Power and Fix
For the 30 GHz channels we used the 4 K total-power output to track the gain changes. The theory and the explanation of this choice can be found in P02b.
The algorithm uses the $\langle V_{\rm load}\rangle$ mean values computed during differentiation and the raw gains as they are after the iterative calibration, performing a linear weighted fit between the two streams, using as weight the dipole variance within each pointing period. The fit is a single parameter fit, so the offsets are set to zero in this smoothing method. It uses the GSL libraries.

In addition to the smoothing, a fix algorithm is implemented to better follow sudden gain changes due to instrument configuration changes. The first step is the application of the 4 K total-power smoothed gains to the data and the production of single radiometer maps in the periods between events. The resulting maps are then fitted with dipole maps covering the same period of time, producing two factors for each radiometer: $f_{\rm M}$ is the result of the fit using the main radiometer and $f_{\rm S}$ the one coming from the side radiometer. The correction applied to the gain values is then computed from these two factors.

Gain Application
The last step in TOI processing is the creation of the calibrated stream. For each sample we have:

$$T(t) = \frac{\Delta V(t) - O_k}{G_k} - \Delta T_{\rm dip}(t)$$

where $t$ is the time, $k$ is the pointing period and $\Delta T_{\rm dip}$ is the CMB dipole convolved with the beam.

Glossary

LFI: (Planck) Low Frequency Instrument
OD: Operational Day. The definition is geometric-visibility driven, as an OD runs from the start of a DTCP (satellite Acquisition Of Signal) to the start of the next DTCP. Given the different ground stations and the station handover schedule, the OD duration varies, but it is basically one day.
DPC: Data Processing Center
PSO: Planck Science Office
OBT: On-Board Time
ADC: Analog-to-Digital Converter
DAE: LFI Data Acquisition Electronics
CMB: Cosmic Microwave Background