Planck Legacy Archive Wiki - User contributions [en-gb] (Atom feed: https://wiki.cosmos.esa.int/planck-legacy-archive/api.php?action=feedcontributions&user=Stechene&feedformat=atom), retrieved 2024-03-29T13:08:03Z, MediaWiki 1.31.6

File:DX9 Y3 consistency.png, 2012-10-19T16:56:46Z (https://wiki.cosmos.esa.int/planck-legacy-archive/index.php?title=File:DX9_Y3_consistency.png&diff=2554)<p>Stechene: DX9, Y3 consistency</p>
<hr />
<div>DX9, Y3 consistency</div>Stechene

HFI-Validation, 2012-10-19T16:56:08Z (https://wiki.cosmos.esa.int/planck-legacy-archive/index.php?title=HFI-Validation&diff=2553)<p>Stechene: </p>
<hr />
<div>The HFI validation is mostly modular. That is, each part of the pipeline, be it timeline processing, map-making, or any other, validates the results of its work at each step of the processing. In addition, we perform validation with an eye towards overall system integrity. These tests are described below. <br />
<br />
==Expected systematics and tests (bottom-up approach)==<br />
<br />
{{:HFI-bottom_up}}<br />
<br />
==Generic approach to systematics==<br />
<br />
While we track and try to limit the individual effects listed above, and we do not believe there are other large effects which might compromise the data, we test this belief using a suite of general difference tests. As an example, the first and second years of Planck observations used almost exactly the same scanning pattern (they differed by one arc-minute at the Ecliptic plane). By differencing them, the fixed sky signal is almost completely removed, and we are left with only time-variable signals, such as any gain variations and, of course, the statistical noise. <br />
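The logic of such a difference test can be sketched with a toy example (pure numpy; the map size and noise levels below are illustrative, not Planck values): two observations of the same fixed sky with independent noise, whose half-difference cancels the sky exactly and leaves only the time-variable part, here just the statistical noise.<br />

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000

sky = rng.normal(0.0, 100.0, npix)      # fixed sky signal (arbitrary units)
noise1 = rng.normal(0.0, 5.0, npix)     # year-1 statistical noise
noise2 = rng.normal(0.0, 5.0, npix)     # year-2 statistical noise

year1 = sky + noise1
year2 = sky + noise2

half_diff = 0.5 * (year1 - year2)       # the fixed sky cancels here

# The difference map retains no trace of the sky: its rms is set by
# the noise alone (~ 5 / sqrt(2)), and it is uncorrelated with the sky.
print(np.std(half_diff))
print(np.corrcoef(half_diff, sky)[0, 1])
```

Any residual structure in such a difference map (beyond the expected noise level) would point to a time-variable systematic.<br />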
<br />
In addition, while Planck scans the sky twice a year, during the first six months (the first survey) and the second six months (the second survey), the orientations of the scans and optics are actually different. Thus, in addition to a similar sensitivity to the time-variable signals probed by the yearly test, the difference between these two surveys also tests our understanding of, and sensitivity to, scan-dependent effects such as time constants and beam asymmetries. <br />
<br />
These tests use the "Yardstick" simulations below and culminate in the "Probabilities to Exceed" tests just after. <br />
<br />
==Yardstick simulations (Delouis)==<br />
<br />
The Yardstick allows gauging various effects to see whether they need to be included in the Monte Carlo simulations used to describe the data. It also allows gauging the significance of validation tests on the data (e.g. can a null test be described by the model?).<br />
<br />
Yardstick 3.0, which characterizes the DX9 data, goes through the following steps:<br />
<br />
#The input maps are computed using the Planck Sky Model.<br />
#LevelS is used to project the input maps onto timelines, using the B-Spline scanning beam and the DX9 pointing (called ptcor6). The real pointing is affected by aberration, which is corrected during map-making; the Yardstick does not simulate aberration. Consequently, the difference between the projected pointing from the simulation and from DX9 is equal to the aberration.<br />
#The simulated noise timelines, which are added to the projected signal, have the same spectrum (low and high frequency) as the characterized noise. For Yardstick 3.0, no correlations in time or between detectors have been simulated.<br />
#The simulation map-making step uses the DX9 sample flags.<br />
#For the low frequencies (100, 143, 217, and 353 GHz), the Yardstick outputs are calibrated using the same mechanism (e.g. dipole fitting) as DX9. No calibration is done for the higher frequencies (545 and 857 GHz).<br />
#The official map-making is run on these timelines using the same parameters as for the real data.<br />
A Yardstick production is composed of all survey maps (1, 2, and nominal), and all detector, Detset, and channel maps. Yardstick 3.0 is based on 5 noise iterations for each map realization.<br />
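Step 3 above, drawing noise timelines whose spectrum matches the characterized low- and high-frequency behaviour, can be sketched by shaping white noise in Fourier space. This is a minimal sketch: the sampling rate, knee frequency, and spectral slope below are placeholders, not the characterized HFI noise parameters.<br />

```python
import numpy as np

def simulate_noise(n, fsamp, sigma_white, f_knee, alpha, seed=None):
    """Noise timeline with PSD ~ sigma_white^2 * (1 + (f_knee/f)^alpha):
    white at high frequency, with a 1/f-like excess below f_knee."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fsamp)
    shape = np.ones_like(freqs)
    shape[1:] += (f_knee / freqs[1:]) ** alpha   # low-frequency excess
    shape[0] = 0.0                               # remove the DC mode
    white = rng.normal(0.0, sigma_white, n)
    # Shape a white realization by the target amplitude spectrum.
    return np.fft.irfft(np.fft.rfft(white) * np.sqrt(shape), n=n)

# Placeholder parameters, for illustration only.
noise_tod = simulate_noise(n=2**16, fsamp=180.0, sigma_white=1.0,
                           f_knee=0.1, alpha=1.0, seed=1)
# In the Yardstick, such a timeline would be added to the projected sky signal.
```

Correlations in time beyond this spectrum, or between detectors, would require a correlated generalization of this draw; as stated above, they are not simulated in Yardstick 3.0.<br />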
<br />
===Sysiphe summary, including what can be neglected (Montier)===<br />
<br />
==Simulations versus data results (including PTE) (Techene)==<br />
We make consistency tests between the DX9 and Yardstick productions. The Yardstick production contains sky realisations (generated with LevelS, starting from PSM177) and noise timeline realisations processed with the official map-making. The DX9 production was regenerated with the same code, in order to eliminate differences that might arise from not running the official pipeline under the same conditions. We compare the statistical properties of the cross spectra of null-test maps for the 100, 143, 217, and 353 GHz channels. Null-test maps can be survey null tests or half-focal-plane null tests, each serving a specific goal: the survey1-survey2 difference aims at isolating transfer-function or pointing issues, while the half-focal-plane null test focuses on beam issues. By comparing cross spectra we isolate systematic effects from the noise, and can check whether they are properly simulated or still need to be. Spectra are computed with Spice, masking either the DX9 point sources or the simulated point sources, and masking the galactic plane with several mask widths; the sky fractions from which spectra are computed are around 30%, 60%, and 80%.<br />
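The masked-spectrum step can be illustrated on a one-dimensional toy "sky" (the real analysis runs Spice on HEALPix maps; this pure-numpy pseudo-spectrum with a first-order fsky correction only sketches the principle):<br />

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
sky = rng.normal(0.0, 1.0, n)            # toy sky ring with a white spectrum

mask = np.ones(n)
mask[: n // 3] = 0.0                     # toy "galactic plane" cut
fsky = mask.mean()                       # retained sky fraction (~2/3 here)

map_a = sky + rng.normal(0.0, 0.3, n)    # two maps of the same sky
map_b = sky + rng.normal(0.0, 0.3, n)    # with independent noise

# Cross pseudo-spectrum of the masked maps; dividing by fsky undoes,
# to first order, the power lost to the mask.
fa = np.fft.rfft(mask * map_a)
fb = np.fft.rfft(mask * map_b)
cross = (fa * np.conj(fb)).real / (n * fsky)

# Independent noise averages away in the cross spectrum, so the mean
# band power estimates the common sky power (variance 1 here).
print(cross[1:].mean())
```

In the auto spectrum of either map the noise power would not cancel, which is why the cross spectrum of the two productions is the quantity compared.<br />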
<br />
DX9 and the Y3.0 realisations are binned. For each bin we compute the statistical parameters (mean and variance) of the Yardstick distribution. The following figure is a typical example of a consistency test: it shows the difference between the Y3.0 mean and DX9, relative to the standard deviation of the Yardstick. We also indicate chi-square values, computed within larger bins, [0,20], [20,400], [400,1000], [1000,2000], and [2000,3000], as the sum over each bin of the ratio between (DX9 - Y3.0 mean)^2 and the Y3.0 variance. This binned chi-square is only indicative: it may not always be significant, since DX9 variations sometimes disappear when averaged within a bin, the mean then being at the same scale as that of the Yardstick.<br />
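The "Probabilities to Exceed" behind such a binned chi-square can be sketched as follows (pure numpy; the spectra below are toy stand-ins for the real DX9 and Yardstick products, while the bin edges are those quoted above):<br />

```python
import numpy as np

rng = np.random.default_rng(3)
ell = np.arange(3000)

# Stand-ins for the real products: Yardstick realizations scatter around a
# mean spectrum, and "DX9" here is one draw consistent with that dispersion.
y_mean = 1.0 / (1.0 + ell / 500.0)
y_std = 0.1 * y_mean + 1e-4
dx9 = y_mean + rng.normal(0.0, 1.0, ell.size) * y_std

edges = [0, 20, 400, 1000, 2000, 3000]   # the bands quoted in the text
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (ell >= lo) & (ell < hi)
    ndof = int(sel.sum())
    chisq = float((((dx9[sel] - y_mean[sel]) / y_std[sel]) ** 2).sum())
    # Probability to exceed, estimated by Monte Carlo over chi-square draws;
    # with a full covariance one would draw from the Yardstick ensemble instead.
    pte = float((rng.chisquare(ndof, 20000) > chisq).mean())
    print(f"ell in [{lo},{hi}): chi2/ndof = {chisq / ndof:.2f}, PTE = {pte:.2f}")
```

A PTE very close to 0 flags an excess in the data over the simulation; a value very close to 1 flags an over-estimated Yardstick dispersion.<br />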
<br />
[[File:DX9_Y3_consistency.png | 500px | center | thumb | '''Example of consistency test for 143 survey null test maps.''']]</div>Stechene