# TOI processing

## Overview

The Level 2 pipeline analyzes data from each horn of the instrument separately, one pointing period at a time, and stores the results in an object the length of an Operational Day. Each diode of the horn is corrected for systematic effects. Next, measurements of the sky and the 4K load are differenced, then the signals from one diode are combined with signals from the complementary diode in the same radiometer. Finally, photometric calibration is applied for each horn.

### Pre-processing

Before the Level 2 pipeline is run, the Mission information and the data sampling divisions are stored in the database, in order to speed up the subsequent analysis.

The Mission information is a set of objects, one for each Operational Day (OD, as defined in the glossary), in which the pointing period data are stored: the pointing ID (where 1 is the first pointing of the nominal mission), the start of the pointing maneuver, the start of the stable pointing, the end of the stable pointing, and the spin axis ecliptic longitude and latitude.

The sampling information is a set of objects, one for each frequency, in which are stored for each pointing ID: start of the pointing maneuver, start of the stable pointing, end of the pointing, number of samples of the pointing, number of stable samples of the pointing, start sample of the stable pointing and sample number from the start of the nominal mission. Valid samples and OBTs are defined to be those where any of the radiometers in a frequency band contain valid data.

## ADC Correction

During the analysis of the data, we discovered a systematic effect common to both the white noise and the gain. It turned out to be a non-linearity in the Analogue-to-Digital Converter (ADC) on board. More detail is supplied in Planck-2013-II [1], Planck-2013-III [2], Planck-2015-A03 [3] and Planck-2015-A04 [4].

### Evaluation

The mathematical model represents the digital output as:

$X = (V - \Delta) \gamma + x_0$

where $V$ is the voltage input, $\gamma$ is the gain, $\Delta$ is the offset and $x_0$ is the $T_{zero}$.

We can model the non-linearity as a function of the input voltage $R(V)$. So we have the apparent inferred voltage $V'$ and we can link it to the actual input voltage with:

$((V - \Delta) \gamma + x_0) R(V) = X = (V' - \Delta) \gamma + x_0$

so that:

$R(V) = {(V' - \Delta) \gamma + x_0 \over (V - \Delta) \gamma + x_0}$

Since $V \gg \Delta$ and $V \gamma \gg x_0$ we can use the much simpler relation:

$R(V) = \frac{V'}{V}$

and we expect it to be very near to unity for all $V$.

To find the response curve we have only the apparent voltage to work with, so we had to use the inverse response function $R'(V')$ and replace the real input voltage with $T_{sys}$ times the time varying gain factor $G(t)$.

$V = V'R'(V') = G(t)T_{sys}$

If we introduce a small signal on top of $T_{sys}$ which leads to increased detected voltage and corresponding apparent voltage increment:

$V + \delta V = (V' + \delta V') R' (V' + \delta V') = G(t) (T_{sys} + \delta T)$

so carrying out the differentiation with respect to $V'$ to the relation between true and apparent signal voltage leads to:

$\delta V = \left( V' {dR'(V') \over dV'} + R'(V') \right) \delta V' = G(t) \delta T$

We now assume $T_{sys}$ and $\delta T$ are fixed and that the variations are due to slow drifts in the gain. So we can isolate the terms:

$V' = {G(t) T_{sys} \over R'(V')}$
$\delta V' = {G(t) \delta T \over V' {dR'(V') \over dV'} + R'(V')}$

Combining the equations through the gain factor to remove it:

${V' R'(V') \over T_{sys}} = {\delta V' \left( V' {dR'(V') \over dV'} + R'(V') \right) \over \delta T }$

Rearranging and putting $a = {\delta T \over T_{sys}}$

${dR'(V') \over dV' } = \left( {a \over \delta V'} - {1 \over V'} \right) R'(V')$

So there is the expected direct proportionality of $\delta V'$ to $V'$, based on the assumption that the variations in voltage are due to overall gain drift, so the amplitudes of voltage and signal vary together. The additional differential term then pulls the signal amplitude away from this linear relationship. So if we plot the measured white noise or dipole gain factor against the recovered voltage, we should see a linear curve with variations due to local slope changes at particular voltages. The linear part can be taken out and the differential part fitted, then numerically integrated to obtain the inverse response curve, which we then use to convert the measured voltages to corrected voltages.
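The fitting-and-integration step above can be sketched numerically. The snippet below is a minimal illustration, not the DPC code; the function name and the toy input are assumptions. It integrates $d\ln R'/dV' = a/\delta V' - 1/V'$ with trapezoidal quadrature to recover the inverse response curve from binned signal amplitudes:

```python
import numpy as np

def inverse_response(vp, dvp, a):
    """Recover the inverse response curve R'(V') from binned signal amplitudes.

    vp  : apparent voltages V' (sorted)
    dvp : measured signal amplitude deltaV' in each voltage bin
    a   : deltaT / Tsys, the fixed relative signal amplitude
    """
    # dR'/dV' = (a/deltaV' - 1/V') R'  =>  d(ln R')/dV' = a/deltaV' - 1/V'
    dlnr = a / dvp - 1.0 / vp
    # cumulative trapezoidal integral, anchored at R'(vp[0]) = 1
    lnr = np.concatenate(
        ([0.0], np.cumsum(0.5 * (dlnr[1:] + dlnr[:-1]) * np.diff(vp)))
    )
    return np.exp(lnr)

# Toy check: for a perfectly linear ADC the measured increment is
# deltaV' = a * V', so the recovered response curve is flat (R' = 1).
a = 1e-3
v = np.linspace(0.1, 0.2, 64)
r_linear = inverse_response(v, a * v, a)
```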

### Application

For each of the 44 diodes there is the corresponding corrected object in the Database. Each object contains 4 columns: the input voltages coming from the sky channel and the corresponding linearized output, the input voltages coming from the 4K reference channel and the corresponding linearized output.

Data loaded by the module are used to initialize two cubic-spline (CSPLINE) interpolators, using functions from the GSL (GNU Scientific Library). The interpolators are then used to correct each sample.
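As a rough sketch of this step (the pipeline uses GSL's CSPLINE routines; here SciPy's `CubicSpline` is used as a stand-in, and the correction table is invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical four-column correction table for one diode: input voltages
# and their linearized counterparts (the real tables live in the database).
sky_in = np.linspace(0.10, 0.20, 32)
sky_out = sky_in * (1.0 + 5e-4 * np.sin(40 * np.pi * sky_in))    # toy non-linearity
load_in = np.linspace(0.15, 0.25, 32)
load_out = load_in * (1.0 + 5e-4 * np.sin(40 * np.pi * load_in))

# One cubic-spline interpolator per channel, mirroring the CSPLINE setup
sky_corr = CubicSpline(sky_in, sky_out)
load_corr = CubicSpline(load_in, load_out)

# Each raw sample is corrected by evaluating the interpolator
raw_sky = np.array([0.12, 0.15, 0.18])
corrected_sky = sky_corr(raw_sky)
```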

## Spikes Removal

Some of the receivers exhibit a small artifact with exactly 1 second period, which produces effects visible in the power spectra as a set of spikes at 1 Hz and its harmonics. The spurious signal is very well modeled and is removed from the timelines. More information can be found in Planck-2013-II [1], Planck-2013-III [2], Planck-2015-A03 [3] and Planck-2015-A04 [4].

### Modeling

The cause of the spikes at 1 Hz and harmonics is a tiny 1 second square wave embedded in the signals from the affected channels. The method to estimate the 1 Hz signal is to build a template in the time domain synchronized with the spurious signal. The first step is dividing each second of data into time bins. The number of bins is computed using:

$nbins = fsamp * template\_resolution$

where $fsamp$ is the sampling frequency: 136 Hz at 70 GHz, 80 Hz at 44 GHz and 56 Hz at 30 GHz. Then the bins vector is initialized with time intervals. To avoid aliasing effects the template resolution is $\sqrt{3}$. We can describe the process by adding two indices to each time sample: the lower index denotes the particular time sample, while the upper index labels the bin into which the sample falls. The linear filter can be written as:

$s(t_{i}^{j}) = a_j \left(1- \Delta x (t_{i}^{j}) \right) + a_{j+1} \Delta x (t_{i}^{j})$

Here $\Delta x (t_{i}^{j})$ is the filter weight which is determined by where within the bin the sample lies. If we use $t^j$ with only an upper index to denote the start of each bin, then we can write the filter weight as follows:

$\Delta x (t_{i}^{j}) = {{{t_i^j - t^j} \over {t^{j+1} - t^j}}}$

In other words, the filter weight is the time sample value minus the start of the bin divided by the width of the bin.
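A minimal sketch of evaluating such a template (the bin layout, amplitudes and function name are illustrative, not from the pipeline):

```python
import numpy as np

def template_value(t, bin_starts, amps):
    """Evaluate the 1 s periodic template at time t (seconds).

    bin_starts : start time t^j of each bin within the 1 s period (sorted)
    amps       : template amplitude a_j at each bin start (periodic)
    """
    phase = np.asarray(t) % 1.0
    j = np.searchsorted(bin_starts, phase, side="right") - 1
    jp1 = (j + 1) % len(bin_starts)
    width = (bin_starts[jp1] - bin_starts[j]) % 1.0
    # filter weight: offset into the bin divided by the bin width
    dx = (phase - bin_starts[j]) / width
    return amps[j] * (1.0 - dx) + amps[jp1] * dx

bins = np.linspace(0.0, 1.0, 4, endpoint=False)   # toy: 4 bins per second
amps = np.array([0.0, 1.0, 2.0, 3.0])             # toy bin amplitudes a_j
val = template_value(0.125, bins, amps)           # halfway into the first bin
```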

We must estimate the parameters $a_j$ from the data. With the assumption that the instrument has stable noise properties, we can use a least square algorithm to estimate the bin values:

${\partial \over \partial a_k} \sum_{i,j} \left( s(t_i^j) - d_i^j \right)^2 = 0$

This can be represented as a matrix equation:

$M_{jk}a_k = b_j$

with the following definitions:

$M_{k,k-1} = \sum_i (1 - \Delta x (t_i^{k-1})) \Delta x (t_i^{k-1})$
$M_{k,k} = \sum_i \left[ (1 - \Delta x (t_i^k))^2 + \Delta x (t_i^{k-1})^2 \right]$
$M_{k,k+1} = \sum_i (1 - \Delta x (t_i^k)) \Delta x (t_i^k)$
$M_{k,k+n} (|n| \gt 1) = 0$
$b_k = \sum_i d_i^k (1- \Delta x (t_i^k)) + d_i^{k-1}\Delta x (t_i^{k-1})$

With these definitions we must apply periodic boundary conditions to obtain the correct results, such that if $k = 0$ then $k-1 = n-1$, and if $k = n-1$ then $k+1 = 0$. Once this is done, we have a symmetric tridiagonal matrix with additional values at the upper right and lower left corners. The matrix is solved with LU decomposition, and to be certain of the numerical accuracy of the result we perform a simple iterative refinement. The solution of the linear system and the iterative improvement are implemented as suggested in Numerical Recipes.
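The construction and solution of the normal equations can be sketched as follows (here with NumPy's dense solver rather than the tridiagonal LU plus iterative refinement used in the pipeline; names and the toy check are assumptions):

```python
import numpy as np

def fit_template(d, j, dx, nbins):
    """Least-squares estimate of the bin amplitudes a_j.

    d  : data samples, j : bin index of each sample, dx : filter weights.
    Builds the cyclic tridiagonal normal matrix densely and solves it;
    the pipeline instead uses LU decomposition with iterative refinement.
    """
    M = np.zeros((nbins, nbins))
    b = np.zeros(nbins)
    jp1 = (j + 1) % nbins                 # periodic boundary conditions
    w0, w1 = 1.0 - dx, dx
    np.add.at(M, (j, j), w0 * w0)
    np.add.at(M, (jp1, jp1), w1 * w1)
    np.add.at(M, (j, jp1), w0 * w1)
    np.add.at(M, (jp1, j), w0 * w1)
    np.add.at(b, j, w0 * d)
    np.add.at(b, jp1, w1 * d)
    return np.linalg.solve(M, b)

# Toy check: data generated exactly from a known template are recovered.
nbins = 8
a_true = np.sin(2 * np.pi * np.arange(nbins) / nbins)
t = np.linspace(0.0, 1.0, 400, endpoint=False)
j = (t * nbins).astype(int)
dx = t * nbins - j
d = a_true[j] * (1 - dx) + a_true[(j + 1) % nbins] * dx
a_fit = fit_template(d, j, dx, nbins)
```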

### Application

For each of the 44 diodes there is the corresponding object in the Database. After studying the amplitude of the spikes at other frequencies, we chose to apply the correction only to the 44 GHz radiometers. Each object contains 3 columns: the bins start time vector, the sky amplitudes and the reference amplitudes.

For each sample the value to be subtracted is computed using:

$V = skyAmp_k (1 - \Delta x (t_k)) + skyAmp_{k+1} \Delta x (t_k)$

where $k$ is the index of the bin at the given time.

## Filling Gaps in the Data

During the mission, a small number of data packets were lost (Planck-2013-II [1], Planck-2015-A03 [3]). Moreover, in two different and very peculiar situations, the instrument was shut down and restarted, giving inconsistencies in data sampling. None of these data are used for scientific purposes, but to avoid discrepancies in the data analysis all of the radiometers at the same frequency must have the same samples.

To ensure this, we compare the length of the data stream to be reduced in a specific pointing period to the data stored in the sample information object. If the length is not the same, the vector is filled with missing sample times, the data vector is filled with zeros, and in the flag column the bit for gap is entered.

## Gain Modulation Factor

The pseudo-correlation design of the radiometers allows a dramatic reduction of $1/f$ noise when the $V_{sky}$ and $V_{load}$ outputs are differenced (see LFI instrument description). The two streams are slightly unbalanced, as one looks at the 2.7 K sky and the other looks at the ~4.5 K reference load. To force the mean of the difference to zero, the load signal is multiplied by the Gain Modulation Factor (R). For each pointing period this factor is computed using (see eq. (3) in LFI description):

$R = { \langle V_{sky}\rangle \over \langle V_{load}\rangle}$

Then the data are differenced using:

$TOI_{diff}^i = V_{sky}^i - R\, V_{load}^i$

This value of $R$ minimizes both the 1/f noise and the white noise in the difference timestream. The index $i$ represents the diode and can be 0 or 1.
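A minimal sketch of the differencing for one diode and one pointing period (toy voltages, not the pipeline code):

```python
import numpy as np

def difference_diode(v_sky, v_load):
    """Difference sky and load streams for one diode over one pointing period.

    R = <Vsky>/<Vload> forces the mean of the difference to zero.
    """
    r = v_sky.mean() / v_load.mean()
    return v_sky - r * v_load, r

rng = np.random.default_rng(0)
v_load = 1.50 + 0.001 * rng.standard_normal(10000)   # toy ~4.5 K load stream (V)
v_sky = 0.90 + 0.001 * rng.standard_normal(10000)    # toy ~2.7 K sky stream (V)
toi_diff, r = difference_diode(v_sky, v_load)
```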

At this point, we also set the maneuver flag bit to identify which samples have missing data, using the information stored in the sampling information object. This identifies which data to ignore in the next step of the Pipeline.

The R values are stored in the database. At the same time the mean values of $V_{sky}$ and $V_{load}$ are stored so they can be used in other steps of the analysis.

## Diode Combination

The two complementary diodes of each radiometer are combined. The relative weights of the diodes in the combination are chosen for optimal noise. We assign relative weights to the uncalibrated diode streams based on their first order calibrated noise.

### Evaluation

From first-order calibration we compute absolute gains $G_0$ and $G_1$, subtract an estimated sky signal, and calculate the calibrated white noise levels $\sigma_0$ and $\sigma_1$ for the pair of diodes (see eq. (6) in the LFI description). The weights for the two diodes ($i$ = 0 or 1) are:

$W_i = {\sigma_i^2 \over G_{01}} {1 \over {\sigma_0^2 + \sigma_1^2}}$

where the weighted calibration constant is given by:

$G_{01} = {1 \over {\sigma_0^2 + \sigma_1^2}} [G_0 \sigma_1^2 + G_1 \sigma_0^2]$

The weights are fixed to a single value per diode for the entire dataset. Small variations in the relative noise of the diodes would in principle suggest recalculating the weights on shorter timescales, however, we decided a time varying weight could possibly induce more significant subtle systematics, so chose a single best estimate for the weights for each diode pair.

Weights used in combining diodes

### Application

The weights in the table above are used in the formula:

$TOI_{diff} = w_0 TOI_{diff0} + w_1 TOI_{diff1}$
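A sketch of the weight computation and combination, following the $W_i$ and $G_{01}$ definitions above (the gain and noise values are invented for illustration):

```python
def diode_weights(g0, g1, s0, s1):
    """Weights W_i and weighted calibration constant G_01 for a diode pair,
    following the definitions above (s_i: calibrated white noise sigma_i,
    g_i: first-order absolute gains)."""
    g01 = (g0 * s1**2 + g1 * s0**2) / (s0**2 + s1**2)
    w0 = (s0**2 / g01) / (s0**2 + s1**2)
    w1 = (s1**2 / g01) / (s0**2 + s1**2)
    return w0, w1, g01

# Example: equal noise levels give equal weights, and the weights of a
# pair always sum to 1/G_01 (toy gain and noise values).
w0, w1, g01 = diode_weights(g0=25.0, g1=27.0, s0=0.5, s1=0.5)
```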

## Planet Flagging

### Extraction Method

Measurements of planets are formed from samples containing flux from the planet, plus a surrounding region, projected onto a grid (microstripes), assuming an elliptical Gaussian beam with parameters from the instrument database.

Microstripes are a way to extract and store relevant samples for planet detection. Relevant samples are samples affected by the planet plus samples in the neighborhood (to establish a background level). The search radius to select samples as relevant is 5 deg around the planet's position, computed at the pointing period mid time. For each sample we store SCET (Spacecraft Event Time), pointing directions and calibrated temperature. Destriping is applied.

Random errors are estimated by taking the variance of samples entering each micromap pixel. This is fast but not exact, since the variance is larger near a bright source, causing the noise there to be overestimated. Given the large S/N of planetary observations, this is not a major drawback.

The apparent position of a planet as seen from Planck at a given time is derived from JPL Horizons. Positions are tabulated in steps of 15 minutes and then linearly interpolated at the sampling frequency of each detector. The JPL Horizons tables also allow other quantities to be derived, such as the planet-Planck distance, the planet-Sun distance and the planet's angular diameter, which affect the apparent brightness of the planet.

The antenna temperature is a function of the dilution factor, according to:

$T_{ant,obs} = 4 \log 2\, T_{ant,1} \left( {\theta \over b_{fwhm} } \right) ^2$

where $T_{ant,obs}$ and $T_{ant,1}$ are the observed and reduced $T_{ant}$, $\theta$ the instantaneous angular diameter of the planet and $b_{fwhm}$ the beam full width at half maximum.

With the above definition $T_{ant,1}$ could be considered the $T_{ant}$ for a planet with $\theta = b_{fwhm}$, but a more convenient view is to take a reference dilution factor $D_0$, defined as the dilution factor for a standardized planet angular diameter $\theta_0$ and a fiducial beam FWHM $b_{fwhm,0}$:

$D_0 = \left( {\theta_0 \over b_{fwhm,0} } \right) ^2$

leading to the following definition of a standardized $T_{ant}$:

$T_{ant,obs} = 4 \log 2\, T_{ant,0} \left( {b_{fwhm,0} \over b_{fwhm}} {\theta \over \theta_0} \right) ^2$

with the advantage of removing variations among different detectors and transits while keeping the value of $T_{ant}$ similar to that seen by the instrument and then allowing a prompt comparison of signals and sensitivities.
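A small helper illustrating the standardization, assuming the logarithm in the $4 \log 2$ factor is natural (names and values are illustrative):

```python
import math

def t_ant_standardized(t_ant_obs, b_fwhm, theta, b_fwhm0, theta0):
    """Invert the relation above to get the standardized T_ant,0 from the
    observed antenna temperature (all angles in the same units; the log in
    the 4 log 2 factor is assumed to be natural)."""
    dilution = (b_fwhm0 / b_fwhm * theta / theta0) ** 2
    return t_ant_obs / (4.0 * math.log(2.0) * dilution)

# With theta = theta0 and b_fwhm = b_fwhm0 the dilution factor is 1 and
# only the 4 log 2 normalization remains (toy values, arcmin).
t0 = t_ant_standardized(1.0, 13.0, 30.0, 13.0, 30.0)
```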

### Application

The vectors found by the search are saved in a set of objects, one for each horn. In the Level 2 pipeline those OBTs are compared with the vector of the data to set the planet bit flag where needed.

## Photometric Calibration

Photometric calibration is the procedure used to convert data from volts to kelvin. The source of the calibration is the well known CMB dipole, caused by the motion of the Solar System with respect to the CMB rest frame. To this signal we add the modulation induced by the orbital motion of Planck around the Sun. The resulting signal is then convolved with the horn beam to get the observed dipole. For details refer to Planck-2015-A06 [5].

### Beam Convolved Dipole

In computing the beam convolved dipole we used an elegant algorithm to save time and computing power. In computing the cosmological dipole signal it is common to assume a pencil-like beam acting as a Dirac delta function. In this case a dipole timeline is defined as:

$\Delta T_{D,\delta}(t) = \mathbf{P}_E(t) \cdot \mathbf{D}_E$

where $\mathbf{P}_E(t)$ is the pointing direction, in the observer reference frame and $\mathbf{D}_E$ is the dipole axis scaled by the dipole amplitude again in the same reference frame.

In general the true signal would have to be convolved with the beam pattern of the given radiometer, usually described as a fixed map in the beam reference frame or as a time dependent map in the observer reference frame. In this case it is easiest to describe the convolution in the beam reference frame, since the function to be convolved then can be described by a single vector.

Denoting with $\mathcal{U}(t)$ the matrix converting from the observer to the beam reference frame, so that:

$\mathcal{U}(t) \mathbf{P}_E(t) = \mathbf{e}_z$

the instantaneous dipole direction in the beam reference frame is:

$\mathbf{D}(t) = \mathcal{U}(t) \mathbf{D}_E$

By denoting with $\mathbf{P}$ a pointing direction in the beam reference frame then:

$\Delta T_D(t) = N \int_{4\pi} B(\mathbf{P})\mathbf{P} \cdot \mathbf{D}(t) d^3\mathbf{P}$

where $N$ is a normalization constant:

$N^{-1} = \int_{4\pi} B(\mathbf{P}) d^3\mathbf{P}$

Denoting with $P_x$, $P_y$, $P_z$ the three Cartesian components of $\mathbf{P}$, the integral of the dot product can be decomposed into three independent integrals:

$S_x = N \int_{4\pi} B(\mathbf{P}) P_x d^3\mathbf{P}$
$S_y = N \int_{4\pi} B(\mathbf{P}) P_y d^3\mathbf{P}$
$S_z = N \int_{4\pi} B(\mathbf{P}) P_z d^3\mathbf{P}$

These integrals define a time-independent vector, characteristic of each radiometer and constant over the mission.

| Detector ID | $S_x$ | $S_y$ | $S_z$ |
|---|---|---|---|
| LFI18S | 4.6465302801434851e-03 | -1.3271241134658798e-03 | 9.9577083251002374e-01 |
| LFI18M | 2.8444949872572308e-03 | -8.9511701920330633e-04 | 9.9735308499993403e-01 |
| LFI19S | 4.2948823053610341e-03 | -1.4037771886936600e-03 | 9.9612844031547154e-01 |
| LFI19M | 4.5097043389558588e-03 | -1.6255119872940569e-03 | 9.9573258887316529e-01 |
| LFI20S | 5.2797843121430788e-03 | -1.9479348764780897e-03 | 9.9514557074549859e-01 |
| LFI20M | 4.7420554683343680e-03 | -1.8462064956921880e-03 | 9.9548302435765323e-01 |
| LFI21S | 5.2377151034493320e-03 | 1.9337533884005056e-03 | 9.9517552967578515e-01 |
| LFI21M | 4.3713123245201109e-03 | 1.7075534076511198e-03 | 9.9585142295482576e-01 |
| LFI22S | 3.6272964935149194e-03 | 1.2030860597901591e-03 | 9.9679007501368400e-01 |
| LFI22M | 3.1817723013115090e-03 | 1.1383448292256770e-03 | 9.9705527860728405e-01 |
| LFI23S | 3.2582522957523585e-03 | 9.1080217792813213e-04 | 9.9706066300815721e-01 |
| LFI23M | 2.6384956780749246e-03 | 8.0545189089709709e-04 | 9.9755224385088148e-01 |
| LFI24S | 1.2007669925034473e-03 | -2.9877396130232452e-07 | 9.9897924004634964e-01 |
| LFI24M | 1.1966952022302408e-03 | -1.3590311231894260e-06 | 9.9894545519286426e-01 |
| LFI25S | -2.1006042848582498e-04 | 4.4287041964311393e-04 | 9.9965683952381934e-01 |
| LFI25M | -2.5133433631240997e-04 | 5.5803053337715499e-04 | 9.9954614333864866e-01 |
| LFI26S | -1.9766749938836072e-04 | -4.2202164851420096e-04 | 9.9966955259454382e-01 |
| LFI26M | -2.5426738260127387e-04 | -5.7426678736881309e-04 | 9.9952104084618332e-01 |
| LFI27S | 5.8555501689062294e-03 | 1.8175261751324087e-03 | 9.9448302844644199e-01 |
| LFI27M | 4.9962842224387377e-03 | 1.5766304210447924e-03 | 9.9529240281463061e-01 |
| LFI28S | 6.3750745068891614e-03 | -1.9617207193203534e-03 | 9.9396633709784610e-01 |
| LFI28M | 4.7877703093970429e-03 | -1.5442874049905993e-03 | 9.9551694709996730e-01 |

By using this characteristic vector the calculation of the convolved dipole is simply defined by a dot product of the vector $\mathbf{S}$ and the dipole axis rotated in the beam reference frame.

$\Delta T (t) = \mathbf{S}^T \mathcal{U}(t) \mathbf{D}_E$
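The final dot product is cheap to evaluate per sample; a minimal sketch (the rotation matrix, dipole amplitude, and pencil-beam $\mathbf{S}$ are toy values, not mission values):

```python
import numpy as np

def convolved_dipole(S, U, D_E):
    """Beam-convolved dipole Delta T = S^T U(t) D_E, with S the characteristic
    vector of the radiometer, U(t) the observer-to-beam rotation matrix and
    D_E the dipole axis scaled by the dipole amplitude."""
    return S @ (U @ D_E)

# Toy check: a pencil beam (S = e_z) with the beam frame aligned to the
# observer frame and the dipole along z returns the full dipole amplitude.
S = np.array([0.0, 0.0, 1.0])
U = np.eye(3)                        # identity rotation
D_E = np.array([0.0, 0.0, 3.35e-3])  # ~3.35 mK dipole along z (toy)
dt = convolved_dipole(S, U, D_E)
```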

### Binning

In order to simplify the computation and reduce the amount of data used in the calibration procedure, the data are phase-binned into a map with $N_{side} = 256$. During phase binning all the data flagged for maneuvers, planets and gaps, as well as the samples flagged in the Level 1 analysis as not recoverable, are ignored.

### Fit

The first-order calibration values are given by a least squares fit between the signal and the dipole. For each pointing period, gain ($g_k$) and offset ($b_k$) values are computed by minimizing:

$\chi^2 = \sum_{i \in k} {[ \Delta V (t_i) - \Delta V_m (t_i|g_k, b_k)]^2 \over rms_i^2 }$

The sum includes only samples outside a Galactic mask.
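The per-pointing fit is an ordinary weighted linear least squares problem; a minimal sketch (the Galactic masking and the dipole model construction are omitted; names are illustrative):

```python
import numpy as np

def fit_gain_offset(dV, dV_model, rms):
    """Weighted linear least squares of the measured signal against the
    dipole model for one pointing period: dV ~ g * dV_model + b."""
    w = 1.0 / rms**2
    A = np.column_stack([dV_model, np.ones_like(dV_model)])
    # weighted normal equations (A^T W A) x = A^T W y
    ATA = A.T @ (w[:, None] * A)
    ATy = A.T @ (w * dV)
    g, b = np.linalg.solve(ATA, ATy)
    return g, b

# Toy check: data built from a known gain and offset are recovered.
dip = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
rms = np.full(100, 0.01)
g, b = fit_gain_offset(2.5 * dip + 0.3, dip, rms)
```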

### DaCapo

The largest source of error in the fit arises from unmodeled sky signal $\Delta T_a$ from anisotropy.

The procedure adopted to correct this effect is described below.

The uncalibrated TOI $y_{i}$ belonging to a pointing period $k$ is modeled as

$y_{i}=g_{k}(s_{i}+d_{i})+b_{k}+n_{i}$

where the unknowns $s_{i}$, $g_{k}$ and $b_{k}$ are respectively the sky signal, the gain per pointing ID (PID) and the offset per PID; $d_{i}$ is the total dipole and $n_{i}$ the noise.

This is a non-linear $\chi^{2}$ minimization problem. It can be linearized with an iterative procedure: at each iteration step a linearized model TOI is built as

$y_{i,model}=g_{k}(-1)\,(s_{i}-s_{i}(-1))+g_{k}\,(d_{i}+s_{i}(-1))+b_{k}$

where $g_{k}(-1)$ and $s_{i}(-1)$ are the gain and sky signal computed at the previous step of iteration.

A linear fit is performed between $y_{i}$ and $y_{i,model}$ to get the new $g_{k}$ and $s_{i}$. The procedure is iterated until convergence.

To reduce the impact of the noise during the iterative procedure the sky estimation is built using data from both radiometers of the same horn.

### Smoothing

To improve the accuracy of the iterative algorithm and remove noise from the solution, a smoothing algorithm is applied.

#### OSGTV

OSGTV is a 3 step smoothing algorithm, implemented with a C++ code.

The gain reconstructed with DaCapo can be expressed as

$G_{raw}(t,T(t))=G_{true}(t,T(t))+G_{noise}(t)$

where the $T(t)$ function represents temperature fluctuations of the Focal Plane. The time dependence of the “real” gain $G_{true}$ is modeled as the superposition of a “slow” component, with a time scale of ~3 months, and a “fast” component, with a time scale of a few PIDs:

$G_{true}=G_{slow}+G_{fast}$.

The slow component takes into account the seasonal variations of the thermal structure of the spacecraft due to the orbital motion, while the “fast” component describes the thermal effects of the electronics and compressors, as well as single events like the sorption cooler switchover.

To disentangle the components of $G_{true}$, we need a parametric “hybrid” approach in three steps.

Step 1. For each PID $i$, we used the 4K total-power $V_{load}(i)$ and the signal from temperature detectors in the focal plane $T_{i}$, subsampled at 1 sample/PID, to track gain changes. This is implemented through a linear fit between $V_{load}(i) * G_{raw}$ and $T_{i}$:

$V_{load}(i)*G_{raw}=K_{0}+K_{1}*(T_{i}-\langle T_{MW}\rangle)$

where the window length $MW$ of the moving average is proportional to the variance of the dipole in the considered PID. The resulting gain is:

$G_{i} = \frac{ K_{0} + K_{1} * (T_{i} - \langle T_{MW} \rangle ) }{ V_{load}(i) }$

Step 2. The "fast" component $G_{fast}$ is recovered as follows: we define a maximum moving average window length $WL_{max}$, and for each PID we compute the variance $\sigma_{i}^2$ of $G_{i}$ in this window. We define a percentile on the ordered variance array and we compute the corresponding value of the variance $\sigma_{p}^2$. The window length for each PID is then computed as

$WL_{i}=\frac{\sigma_{p}^2*WL_{max}}{\sigma_{i}^2}$.

If $WL_{i}\gt WL_{max}$ we impose $WL_{i}=WL_{max}$. With these window lengths a moving window average is performed on $G_{i}$. The averaged gain vector is subtracted from the raw hybrid gain to get $G_{fast}$.

Step 3. We perform a moving window average on $G_{i}$ and we compute the variances of the smoothed array. The window length is computed with a linear interpolation between a minimum length $WL_{min}$ defined in the dipole minima and a maximum $WL_{max}$ defined in the dipole maxima. The array of gain variances is weighted with the variance of the dipole. We set a percentile on the variance array and we find the corresponding variance value $\sigma_{p}^2$. Around this value we search for local maxima of the variance array, and we split the domain of the gain in subsets between consecutive maxima. For each subset we perform a moving average with the corresponding window length. The "slow" component $G_{slow}$ is given by the union of these subsets.
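Step 2's variance-adaptive moving average can be sketched as follows (a simplified illustration of the idea, not the OSGTV C++ implementation; the percentile handling is reduced to its essentials and all names are assumptions):

```python
import numpy as np

def adaptive_smooth(g, wl_max, percentile=50.0):
    """Variance-adaptive moving average: noisy stretches of the gain stream
    get short windows, quiet stretches get windows up to wl_max."""
    n = len(g)
    half = wl_max // 2
    # local variance of the gain in the maximum window around each PID
    var = np.array([np.var(g[max(0, i - half):i + half + 1]) for i in range(n)])
    var = np.maximum(var, 1e-30)             # guard against zero variance
    var_p = np.percentile(var, percentile)   # reference variance sigma_p^2
    wl = np.minimum(wl_max, np.maximum(1, (var_p * wl_max / var).astype(int)))
    # per-PID moving average with the adapted window length
    out = np.empty(n)
    for i in range(n):
        h = wl[i] // 2
        out[i] = g[max(0, i - h):i + h + 1].mean()
    return out

g_raw = np.ones(50)                  # toy: perfectly stable gain stream
g_smooth = adaptive_smooth(g_raw, wl_max=10)
```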

### Gain Application

The last step in TOI processing is the creation of the calibrated stream. For each sample we have:

$TOI_{cal}(t) = (TOI_{diff}(t) - offset(k)) \cdot g(k) \, - \, convDip(t)\, - \, T_{stray}(t)$

where t is the time and k is the pointing period, $convDip$ is the Dipole convolved with the beam, and $T_{stray}$ is the straylight.

## Noise

This pipeline step aims at the reconstruction of the noise parameters from calibrated flight TOI. The goal is two-fold. On the one hand we need to know the actual noise properties of our instrument in order to properly take them into account, especially during later processing and analysis steps like map-making and power spectrum estimation. On the other hand evaluation of noise properties during the instrument life-time is a way to track down possible variations, anomalies and general deviations from the expected behaviour.

### Operations

Noise estimation is performed on calibrated data; since we would like to track possible noise variations during the mission lifetime, we select data in chunks of 5 ODs (Operational Days). These data are processed by the ROMA Iterative Generalized Least Squares (IGLS) map-making algorithm, which includes a noise estimation tool. In general IGLS map-making is quite costly in terms of time and resources. However the length of the data chunks is such that it can run on the cluster in a very short time (~1-2 minutes).

The method implemented can be summarized as follows. We model the calibrated TOI as

$\mathbf{\Delta T} = \mathbf{P} \mathbf{m} + \mathbf{n}$

where $\mathbf{n}$ is the noise vector and $\mathbf{P}$ is the pointing matrix that links a pixel in the map $\mathbf{m}$ with a sample in the TOI $\mathbf{\Delta T}$. The zero-th order estimate of the signal is obtained by simply rebinning the TOI into a map. Then an iterative approach follows, in which both signal and noise are estimated according to

$\mathbf{\hat{n}_i} = \mathbf{\Delta T} - \mathbf{P\hat{m}_i}$
$\mathbf{\hat{m}_{i+1}} = \mathbf{(P^T\hat{N}_i^{-1}P)^{-1}P^T\hat{N}_i^{-1}\Delta T}$

where $\mathbf{\hat{N}_i}$ is the noise covariance matrix in the time domain resulting from iteration $i$. After three iterations, convergence is achieved.
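The zero-th order rebinning and the resulting noise residual can be sketched as follows (pixel indices stand in for the pointing matrix $\mathbf{P}$; data are toy values):

```python
import numpy as np

def rebin_map(toi, pix, npix):
    """Zero-th order signal estimate: for diagonal noise, (P^T P)^-1 P^T d
    reduces to a per-pixel average of the TOI."""
    m = np.zeros(npix)
    hits = np.zeros(npix)
    np.add.at(m, pix, toi)
    np.add.at(hits, pix, 1.0)
    return m / np.maximum(hits, 1.0)

pix = np.array([0, 0, 1, 1, 2])            # toy pointing: pixel hit per sample
toi = np.array([1.0, 3.0, 2.0, 2.0, 5.0])  # toy calibrated TOI
m = rebin_map(toi, pix, 3)
n_hat = toi - m[pix]                       # noise estimate: TOI minus signal
```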

We then perform an FFT (Fast Fourier Transform) on the noise time stream from the iterative approach and fit the resulting spectrum.

### Fitting Pipeline

As already done in the 2013 release, we estimate the noise parameters (i.e. white noise level, knee-frequency and slope of the low-frequency noise component) by means of an MCMC approach. On the spectra just described we first compute the white noise level, taking the mean of the last 10% of the data (at 30 GHz, due to the higher knee-frequency, this fraction is reduced to 5%). Once the white noise level has been determined we proceed with the actual fitting of the knee-frequency and slope. The values reported are the medians of the fitted values over our 5-OD chunks along the whole mission lifetime.

| Horn | White Noise M [$\mu$K s$^{1/2}$] | White Noise S [$\mu$K s$^{1/2}$] | Knee-frequency M [mHz] | Knee-frequency S [mHz] | Slope M | Slope S |
|---|---|---|---|---|---|---|
| 18 | 513.0$\pm$2.1 | 467.2$\pm$2.3 | 14.8$\pm$2.5 | 17.8$\pm$1.5 | -1.06$\pm$0.10 | -1.18$\pm$0.13 |
| 19 | 579.6$\pm$2.2 | 555.0$\pm$2.2 | 11.7$\pm$1.2 | 13.7$\pm$1.3 | -1.21$\pm$0.26 | -1.11$\pm$0.14 |
| 20 | 587.3$\pm$2.1 | 620.5$\pm$2.7 | 8.0$\pm$1.9 | 5.7$\pm$1.5 | -1.20$\pm$0.36 | -1.30$\pm$0.41 |
| 21 | 451.0$\pm$1.7 | 560.1$\pm$2.0 | 37.9$\pm$5.2 | 13.3$\pm$1.5 | -1.25$\pm$0.09 | -1.21$\pm$0.09 |
| 22 | 490.8$\pm$1.5 | 531.3$\pm$2.3 | 9.7$\pm$2.3 | 14.8$\pm$6.7 | -1.42$\pm$0.23 | -1.24$\pm$0.30 |
| 23 | 504.3$\pm$1.8 | 539.7$\pm$1.8 | 29.7$\pm$1.1 | 59.0$\pm$1.4 | -1.07$\pm$0.03 | -1.21$\pm$0.02 |
| 24 | 463.0$\pm$1.4 | 400.7$\pm$1.3 | 26.8$\pm$1.3 | 88.3$\pm$8.9 | -0.94$\pm$0.01 | -0.91$\pm$0.01 |
| 25 | 415.3$\pm$1.5 | 395.4$\pm$2.9 | 20.1$\pm$0.7 | 46.4$\pm$1.8 | -0.85$\pm$0.01 | -0.90$\pm$0.01 |
| 26 | 483.0$\pm$1.9 | 423.2$\pm$2.5 | 64.4$\pm$1.9 | 68.2$\pm$9.5 | -0.92$\pm$0.01 | -0.76$\pm$0.01 |
| 27 | 281.5$\pm$2.1 | 303.2$\pm$1.8 | 174.5$\pm$2.9 | 108.8$\pm$2.5 | -0.93$\pm$0.01 | -0.91$\pm$0.01 |
| 28 | 317.1$\pm$2.4 | 286.5$\pm$2.3 | 130.1$\pm$4.4 | 43.1$\pm$2.4 | -0.93$\pm$0.01 | -0.90$\pm$0.02 |
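The two-stage estimation (white-noise level from the high-frequency tail, then knee-frequency and slope) can be sketched with a simple least-squares fit standing in for the MCMC (the model form, toy values, and SciPy usage are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def noise_model(f, sigma2, fknee, slope):
    """P(f) = sigma^2 * (1 + (f/fknee)^slope), with slope < 0."""
    return sigma2 * (1.0 + (f / fknee) ** slope)

# Toy spectrum with known parameters (no scatter)
f = np.logspace(-3, 1, 200)                    # Hz
p = noise_model(f, 1.0, 0.05, -1.1)

# Step 1: white-noise level from the mean of the last 10% of the spectrum
sigma2_0 = p[-len(f) // 10:].mean()

# Step 2: fit knee-frequency and slope with the white-noise level held fixed
popt, _ = curve_fit(
    lambda ff, fk, sl: noise_model(ff, sigma2_0, fk, sl),
    f, p, p0=[0.01, -1.0], bounds=([1e-4, -3.0], [1.0, -0.1]),
)
fknee_fit, slope_fit = popt
```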

Time variations of the noise parameters are a good tracer of possible modifications in the instrument behaviour, and we know that some events capable of affecting it happened during the mission. Variations of the physical temperature of the instrument, due to the transition in operations from the first sorption cooler to the second one, as well as the observed degradation in the performance of the first cooler, clearly show their fingerprints in the variation of the noise spectra.

In the following figure we report a representative sample of noise spectra, one for each frequency channel (from left to right LFI 18M, 25S and 27M), covering the whole mission lifetime. The white noise is extremely stable, at the level of 0.3%. As already noted in the 2013 release, knee-frequencies and slopes are stable until OD 326, after which they show clear variations and deviations from the simple one knee-frequency, one slope model. This is a sign of the degradation of the first cooler, which induced thermal variations with a characteristic knee-frequency (different from the radiometric one) and with a steeper slope. Once the second cooler became operational and performed as expected, the noise spectra gradually returned to their shape at the beginning of the mission. This behaviour is visible, although not identical, at each frequency, for several reasons, e.g. intrinsic thermal susceptibility and position on the focal plane, which determines the actual thermal transfer function.

## References

1. Planck 2013 results. II. Low Frequency Instrument data processing, Planck Collaboration, 2014, A&A, 571, A2.
2. Planck 2013 results. III. Low Frequency Instrument systematic uncertainties, Planck Collaboration, 2014, A&A, 571, A3.
3. Planck 2015 results. II. LFI processing, Planck Collaboration, 2016, A&A, 594, A2.
4. Planck 2015 results. III. LFI systematics, Planck Collaboration, 2016, A&A, 594, A3.
5. Planck 2015 results. V. LFI calibration, Planck Collaboration, 2016, A&A, 594, A5.