{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n",
"## Introduction\n", "\n", "This thread illustrates how to reprocess Observation Data Files (ODFs) to obtain calibrated and concatenated event lists. \n", "\n", "## Expected Outcome\n", "\n", "The user will obtain calibrated and concatenated event lists which can be directly used to generate scientific products (images, spectra, light curves) through the SAS tasks evselect or xmmselect. \n", "\n", "## SAS Tasks to be Used\n", "\n", "## Prerequisites\n", "\n", "
## Useful Links\n", "\n", "As an alternative to following this thread, note that the pipeline products already contain calibrated and concatenated event lists. Both ODFs and pipeline products can be downloaded from the XMM-Newton Science Archive. \n", "\n", "
Last Reviewed: 25 May 2023, for SAS v21\n", "\n", "Last Updated: 15 March 2021\n",
"
Run the EPIC reduction meta-tasks.
\n", "\n", "```\n", "emproc\n", "epproc\n", "```\n", "\n", "That's it! The default values of these meta-tasks are appropriate for most practical cases. You may have a look at the next section in this thread to learn how to perform specific reduction sub-tasks using emproc or epproc.
\n", "The files produced by epproc are the following:
\n", "The files produced by emproc are conceptually the same. The main difference in the naming convention is that the string EPN is replaced by EMOS1 and EMOS2 for each EPIC-MOS camera, respectively.
\n", "In order to run the following blocks, SAS must be initialized in the terminal from which the Jupyter notebook is opened. This has to be done prior to launching the notebook. Follow the steps provided in the SAS Startup Thread in Python.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import the following Python libraries:
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pysas.wrapper import Wrapper as w\n", "import os\n", "import os.path\n", "from os import path\n", "import subprocess\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from astropy.io import fits\n", "from astropy.table import Table\n", "from matplotlib.colors import LogNorm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we start, let's see what we have already defined in terms of SAS variables:
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inargs = []" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t = w('sasver', inargs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "t.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We still need to define the SAS_ODF and SAS_CCF variables (make sure the SAS_CCFPATH is also defined). Use the SAS task startsas as described in the SAS Startup Thread in Python to download an ODF. For this thread, we will assume that an ODF has already been downloaded. We will use the ODF with id 0104860501 as an example and assume that there is already a directory (sas_file in the block below) with an existing CIF and Summary File. In the block below, introduce the absolute path to the working directory (ending with '/') and the location of the CIF and SAS Summary File,
\n", "\n", "Note: the path to the CIF and SAS Summary File must be an absolute path beginning with '/'." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "work_dir = 'absolute_path_to_wrk_directory'\n", "sas_file = 'absolute_path_to_cifSUMSAS_directory'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inargs = [f'sas_ccf={sas_file}ccf.cif', f'sas_odf={sas_file}0466_0104860501_SCX00000SUM.SAS', f'workdir={work_dir}']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w('startsas', inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This process should have defined the SAS_CCF and SAS_ODF variables.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following blocks will produce EPIC-pn and EPIC-MOS event files.
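" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of preparing the two meta-tasks for the pySAS wrapper, following the pattern used elsewhere in this thread. The empty argument list reflects the default parameters, which are appropriate for most cases; SAS must already be initialized." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# EPIC meta-tasks to be executed with default parameters (sketch)\n", "cmds = ['epproc', 'emproc']  # EPIC-pn and EPIC-MOS reduction meta-tasks\n", "\n", "# Arguments of the SAS commands (defaults are appropriate for most cases)\n", "inargs = []\n", "\n", "for cmd in cmds:\n", "    print('SAS command to be executed:', cmd, 'with arguments:', inargs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each task would then be run with w(cmd, inargs).run(), as in the examples below.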
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "emproc and epproc are highly flexible tasks that allow the user to perform a wide range of customized reductions. Some emproc examples are listed below. The same customizations can be applied to the EPIC-pn as well, simply by substituting epproc for emproc in the commands.
\n", "\n", "Please be aware that if you want to supply coordinates for the analysis of the EPIC-MOS Timing mode, the command is slightly different, e.g.:
\n", "\n", "emproc withsrccoords=yes srcra=34.65646 srcdec=-12.876546
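" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Timing-mode command above can be prepared for the pySAS wrapper as follows (a sketch; the coordinate values are the example ones from this thread):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Build the argument list for emproc with source coordinates (sketch)\n", "cmd = 'emproc'\n", "inargs = ['withsrccoords=yes', 'srcra=34.65646', 'srcdec=-12.876546']\n", "\n", "print('SAS command to be executed:', cmd, 'with arguments:', inargs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The task would then be run with w(cmd, inargs).run().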
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parameters can be combined to accomplish two or more of the above tasks simultaneously in the same run.
\n", "\n", "The user is referred to the on-line documentation of emproc and epproc for a complete list of the available options.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most exposures in EPIC-pn Timing Mode are affected by X-ray Loading (XRL; cf. Sect. 3.1 in Guainazzi et al., 2013, XMM-SOC-CAL-TN-0083). Furthermore, a residual dependence of the energy scale on the total count rate is corrected through the \"Rate-Dependent PHA\" correction (Guainazzi, 2014, XMM-CCF-REL-312). In order to correct for these effects, a set of default calibration settings has been identified. As of SAS v14.0, this is controlled by a single parameter within the tasks epproc and epchain. This parameter is called withdefaultcal and is set to yes by default. Setting withdefaultcal=yes implies runepreject=yes withxrlcorrection=yes runepfast=no withrdpha=yes. The EPIC-pn reduction meta-tasks should therefore be run as follows:
\n", "\n", "epproc
\n", "\n", "or:
\n", "\n", "epchain datamode=TIMING
\n", "\n", "For more information please refer to the documentation of epproc and epchain.
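" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Timing-mode reduction can be launched from this notebook with the pySAS wrapper, mirroring the Burst-mode cells below (a sketch; assumes SAS has been initialized):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = 'epchain'  # SAS task to be executed\n", "\n", "# Arguments of SAS Command\n", "inargs = ['datamode=TIMING']\n", "\n", "print('SAS command to be executed:', cmd, 'with arguments:', inargs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The task would then be run with w(cmd, inargs).run(), as in the Burst-mode example below.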
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most exposures in EPIC-pn Burst Mode are affected by X-ray Loading (XRL; cf. Sect. 3.1 in Guainazzi et al., 2013, XMM-SOC-CAL-TN-0083). Furthermore, a residual dependence of the energy scale on the total count rate is corrected through the \"Rate-Dependent CTI\" correction. In order to correct for these effects, a set of default calibration settings has been identified. As of SAS v14.0, this is controlled by a single parameter within the tasks epproc and epchain. This parameter is called withdefaultcal and is set to yes by default. Setting withdefaultcal=yes implies runepreject=yes withxrlcorrection=yes runepfast=yes withrdpha=no. The EPIC-pn reduction meta-tasks should therefore be run as follows:
\n", "\n", "epproc burst=yes
\n", "\n", "Notice the inclusion of the extra parameter burst=yes in the call to epproc. The meta-task epchain also needs an extra parameter:
\n", "\n", "epchain datamode=BURST
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"epchain\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = ['datamode=BURST']\n", "\n", "print(\"SAS command to be executed: \"+cmd+\", with arguments: \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w(cmd,inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Concatenated and calibrated EPIC event lists are already available in the PPS Pipeline Products. These are produced with the most up-to-date software and calibration available at the time the ODFs are generated, normally a couple of weeks after the observation is performed. There is therefore no need to regenerate the EPIC event lists yourself, unless substantial changes in the software and/or calibration occurred between the time the Pipeline Products were generated and the moment you are analyzing the data. There is no general answer or recipe for deciding if/when this is the case. In order to collect all the elements necessary to formulate your judgment, follow these steps:
\n", "\n", "If you feel lost or unsure and prefer to stay on the safe side, it is probably best to regenerate your EPIC calibrated event lists.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 2 }