xun4> heasoft
xun4> ciao
xun4> sasstart
For example, in my analysis of cluster J1226, I created a directory called j1226 and unpacked my CD there, giving me:
xun4> ls
odf/          x_20010727_0000003189_01_02.dsk*
pipeprod/     x_20010727_0000003189____02.set*
voldesc.sfd*
For example, while still in j1226, I typed:
xun4> upcase.csh odf
--------------------UPCASE.CSH version 1---------------------
Input was /exgal1/bjm/scripts/upcase.csh odf
Filenames in odf have been uppercased, uppercasing script written to rename.csh
-------------------------------------------------------------

where rename.csh is the script that actually renames the files (just a list of 'mv name.fit NAME.FIT' commands).
Note also that if you start working in a new terminal, or logout, and come back to this data, you'll need to set the environment variables again. To do this, simply run xmmsetup.csh as before, in the odf directory. If files called *SUM.SAS and ccf.cif exist, then the script won't run odfingest or cifbuild respectively. Some datasets may come with SUM.SAS or CIF files already created, so you will need to rename these if you want xmmsetup.csh to create your own for you.
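The skip logic described above is easy to picture. Here is a hypothetical sketch (plain sh, with echo placeholders standing in for the real cifbuild and odfingest calls; this is not the actual xmmsetup.csh):

```shell
# Hypothetical sketch of xmmsetup.csh's guard logic: only build the
# calibration index or ODF summary if it doesn't already exist.
setup_odf() {
    cd "$1" || return 1
    if [ -f ccf.cif ]; then
        echo "ccf.cif found, skipping cifbuild"
    else
        echo "No ccf.cif file found, so running cifbuild..."   # would run: cifbuild
    fi
    if ls ./*SUM.SAS >/dev/null 2>&1; then
        echo "SUM.SAS found, skipping odfingest"
    else
        echo "No SUM.SAS file found, so running odfingest..."  # would run: odfingest
    fi
}
```

This is why renaming a pre-existing SUM.SAS or CIF file out of the way forces the script to rebuild them for you.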
So I did:
xun4> cd odf/
xun4> source /xmm2/bjm/scripts/xmmsetup.csh
----------------------XMMSETUP.CSH version 1.1---------------------
No ccf.cif file found, so running cifbuild...
No SUM.SAS file found, so running odfingest...
-------------------------------------------------------------------
xun4> setenv | grep SAS
SAS_DIR=/soft/xmm/solaris/xmmsas_20010917_1110
SAS_PATH=/soft/xmm/solaris/xmmsas_20010917_1110
SAS_CCF=/xmm2/bjm/xmm/newj1226/odf/ccf.cif
SAS_CCFPATH=/xmm2/caldb/xmm/ccf
SAS_IMAGEVIEWER=ds9
SAS_ODF=/xmm2/bjm/xmm/newj1226/odf/0279_0070340501_SCX00000SUM.SAS
SAS_CCFFILES=/xmm2/caldb/xmm/ccf
SAS_VERBOSITY=1
SAS_SUPRESS_WARNING=3
Move up a level, and make directories called epchain and emchain, then move into one, say emchain. For these stages, it's useful to keep a log of the outputs of emchain and epchain...
xun4> ls
emchain/      rename.csh*
epchain/      voldesc.sfd*
odf/          x_20010727_0000003189_01_02.dsk*
pipeprod/     x_20010727_0000003189____02.set*
xun4> cd emchain/
xun4> nice +19 emchain |& tee emchain_log.txt

This last line starts emchain running (nicely!), which will output LOADS of info to the screen and take a while (30-60 mins). The tee command keeps a log of everything that goes to the screen in the file emchain_log.txt (the & in |& also logs the standard error output).
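The `|&` is csh syntax for piping both stdout and stderr; in a Bourne-type shell the equivalent uses `2>&1`. A small illustrative wrapper (the function name is mine, not part of SAS):

```shell
# Run a long task at low priority and log everything it prints.
# sh/bash equivalent of the csh idiom:  nice +19 emchain |& tee emchain_log.txt
run_logged() {
    logfile="$1"; shift
    # 2>&1 merges stderr into stdout, so tee captures error messages as well
    nice -n 19 "$@" 2>&1 | tee "$logfile"
}
```

Usage would be `run_logged emchain_log.txt emchain`.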
Next you want to move to the epchain directory, and run epchain
xun4> cd ../epchain/
xun4> nice +19 epchain |& tee epchain_log.txt

This will output even more, and take even longer (a couple of hours).
Note here that these chains will already have been run on your data to produce the files in the products/ directory on your CD or in the archive. I personally prefer to run them myself and create all my own products from the ODF, but note that this may just duplicate work that has already been done in the products/ directory; the files there can be used for science, and are certainly useful for a quick look at the data.
xun4> ls emchain/
atthk.dat          P0070340501M1S001MIEVLI0000.FIT
emchain_log.txt    P0070340501M2S002MIEVLI0000.FIT
xun4> ls epchain/
atthk.dat    epchain_log.txt    P0070340501PNS003PIEVLI0000.FIT

I then rename these to something obvious in my j1226 directory, like:
xun4> cp emchain/atthk.dat atthk.dat
xun4> cp emchain/P0070340501M1S001MIEVLI0000.FIT mos1_raw_evt.fits
xun4> cp emchain/P0070340501M2S002MIEVLI0000.FIT mos2_raw_evt.fits

Then I zip up the contents of emchain/ for safe keeping, and do the same for the PN events, so now in j1226, I have:
xun4> ls
atthk.dat            pipeprod/
emchain/             pn_raw_evt.fits
epchain/             rename.csh*
mos1_raw_evt.fits    voldesc.sfd*
mos2_raw_evt.fits    x_20010727_0000003189_01_02.dsk*
odf/                 x_20010727_0000003189____02.set*
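The "zip up for safe keeping" above is just file compression; one way to sketch it (assuming gzip is available — compress each chain directory in place, leaving the renamed working copies untouched):

```shell
# Compress every regular file in a chain output directory in place;
# the renamed working copies (mos1_raw_evt.fits etc.) stay uncompressed.
archive_dir() {
    for f in "$1"/*; do
        [ -f "$f" ] && gzip "$f"
    done
}
```

Then `archive_dir emchain` and `archive_dir epchain` tidy both directories.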
Make a directory called cleaning, move into it, and make links to your raw events files, e.g.:
xun4> ln -s ../mos1_raw_evt.fits

Now run xmmlight_clean.csh, a script which recursively cleans the data by removing all bins with counts > 3 sigma from the mean, finding the new mean and new sigma, and repeating until it is stable. It is a bit sensitive to binsize, so vary this to see if there is a large variation in the amount of time removed. Note that by default the script makes a lightcurve in the range 10-15 keV and uses this as the basis for the cleaning.
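The recursive cleaning loop is straightforward to sketch. This hypothetical awk illustration (not the real script, which works on FITS lightcurve bins) clips two-sided at 3 sigma and iterates until no more bins are removed, mirroring the "step N reduction in mean rate" messages in the log:

```shell
# Hypothetical sketch of recursive 3-sigma lightcurve cleaning
# (reads one counts-per-bin value per line from stdin).
clip_3sigma() {
    awk '
    { c[NR] = $1; keep[NR] = 1 }
    END {
        changed = 1
        while (changed) {
            changed = 0
            # mean and standard deviation of the surviving bins
            n = 0; s = 0; ss = 0
            for (i = 1; i <= NR; i++)
                if (keep[i]) { n++; s += c[i]; ss += c[i] * c[i] }
            mean = s / n
            var = ss / n - mean * mean
            sigma = (var > 0) ? sqrt(var) : 0
            # drop bins more than 3 sigma from the mean, then repeat
            for (i = 1; i <= NR; i++)
                if (keep[i] && (c[i] > mean + 3*sigma || c[i] < mean - 3*sigma)) {
                    keep[i] = 0; changed = 1
                }
        }
        printf "mean=%g sigma=%g kept=%d of %d\n", mean, sigma, n, NR
    }'
}
```

Bins surviving the final pass define the good time intervals; with a coarser or finer binsize the outliers smear differently, which is why the result is sensitive to it.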
This step can be combined with some data quality filtering in one step, and it is easier, and probably better, to do this, but I explain it separately here to illustrate a couple of points. See the note below.
xun4> xmmlight_clean.csh mos1_raw_evt.fits none 50 m1lclean_ | tee m1lclean_log.txt
--------------------XMMLIGHT_CLEAN.CSH version 2---------------------
Input was /exgal1/bjm/scripts/xmmlight_clean.csh mos1_raw_evt.fits none 50 m1lclean_
Creating lightcurve histogram...
histogram written to m1lclean_lc_hist.fits
Recursively cleaning the data until the mean counts per bin is constant
step 1 reduction in mean rate = 107.675 counts per bin
step 2 reduction in mean rate = 81.0015 counts per bin
step 3 reduction in mean rate = 57.4504 counts per bin
step 4 reduction in mean rate = 43.4806 counts per bin
step 5 reduction in mean rate = 19.1034 counts per bin
step 6 reduction in mean rate = 20.5706 counts per bin
step 7 reduction in mean rate = 14.1304 counts per bin
step 8 reduction in mean rate = 19.104 counts per bin
step 9 reduction in mean rate = 20.9666 counts per bin
step 10 reduction in mean rate = 20.2208 counts per bin
step 11 reduction in mean rate = 21.3565 counts per bin
step 12 reduction in mean rate = 29.4227 counts per bin
step 13 reduction in mean rate = 12.8295 counts per bin
step 14 reduction in mean rate = 7.25295 counts per bin
step 15 reduction in mean rate = 8.90328 counts per bin
step 16 reduction in mean rate = 8.59943 counts per bin
step 17 reduction in mean rate = 4.99702 counts per bin
step 18 reduction in mean rate = 8.01233 counts per bin
step 19 reduction in mean rate = 3.08048 counts per bin
step 20 reduction in mean rate = 1.49638 counts per bin
step 21 reduction in mean rate = 0 counts per bin
Cleaned histogram written to m1lclean_lc_hist_clean.fits
Creating GTI file for time bins with (-2793.21 < counts < 1042.72) (<20.8544 counts/s)...
GTI written to m1lclean_gti.fits
Applying GTI to events...
Cleaned events written to m1lclean_clean_evt.fits with mean rate 6.80458 counts/s and a standard deviation of 4.6833 counts/s
Old LIVETIME was 2.97136378364563E+04s, xmmlight_clean removed 7244.12s, leaving a LIVETIME of 2.24695147724152E+04s
Lightcurve plotted to m1lclean_lc.eps and chipsscript written to m1lclean_chipsscript.txt
----------------------------------------------------------------------
Repeat the cleaning for MOS2 and PN, then move the cleaned, filtered data back up a level, and delete any unwanted intermediate files. In cleaning/ you should keep all the rootchipsscript.txt, rootlc_hist.fits, rootlc_hist_clean.fits and rootlc.eps files (where root is the output prefix you gave, e.g. m1lclean_), and of course, your logs!
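Moving the keepers and deleting intermediates can be scripted; a sketch (the function name is mine, and the intermediate filenames are assumed from the prefixes above):

```shell
# Move the cleaned events up a level under an obvious name and delete
# the intermediate GTI file; the lightcurve histograms, chips script,
# plot and log are kept for reference.
finish_cleaning() {
    prefix="$1"     # e.g. m1lclean_
    dest="$2"       # e.g. ../mos1_clean_evt.fits
    mv "${prefix}clean_evt.fits" "$dest"
    rm -f "${prefix}gti.fits"
}
```

For example, `finish_cleaning m1lclean_ ../mos1_clean_evt.fits`, and likewise for MOS2 and PN.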
xun4> fstatistic mos1_raw_evt.fits flag -
The sum of the selected column is 2785304.0
The mean of the selected column is 5.2464014
The standard deviation of the selected column is 103.41005
The minimum of selected column is 0.
The maximum of selected column is 2050.0000
The number of points used in calculation is 530898

So before I do any cleaning, I have flags up to 2050.
We will also want to filter on pattern - events are given a pattern depending on how many pixels they are detected in, i.e. single pixels, double pixels...
The pattern definitions for MOS (see file:///soft/xmm/linux/xmmsas_20010618_1522/doc/emevents/node4.html) and PN (see http://xmm.vilspa.esa.es/sas/current/doc/epevents/node10.html) are:
pattern 0     = single event
patterns 1-4  = double event
patterns 5-8  = triple event
patterns 9-12 = quadruple event

Note, though, that the spectral responses are well calibrated for patterns <= 4 for PN, and patterns <= 12 for MOS. My approach has been to filter the events lists into these ranges for each camera at this stage. However, higher patterns should be okay for imaging (though not quite as reliable), so if you're short of photons you may want to apply less strict filtering for imaging work.
So I'm going to filter my MOS data with flag #XMMEA_EM and pattern <= 12.
To do this, I use a script called xmmfilter.csh which simply acts as a user-friendly front end for the SAS tool evselect. (NOTE: This step can be combined with lightcurve cleaning, but is included as an example of using xmmfilter.csh - the next step explains how to do this.)
xun4> xmmfilter.csh m1lclean_clean_evt.fits events mos1_P12Flag_evt.fits '#XMMEA_EM&&(PATTERN <= 12)' none
--------------------xmmfilter.csh version 1---------------------
Input was /exgal1/bjm/scripts/xmmfilter.csh m1lclean_clean_evt.fits events mos1_P12Flag_evt.fits #XMMEA_EM&&(PATTERN <= 12) none
Creating new events file...
evselect table=m1lclean_clean_evt.fits withfilteredset=yes filteredset=mos1_P12Flag_evt.fits filtertype=expression expression='#XMMEA_EM&&(PATTERN <= 12)' destruct=yes keepfilteroutput=yes writedss=yes
Output events written to mos1_P12Flag_evt.fits
There are 529034 events in the filter.
-----------------------------------------------------------------

This step is repeated for MOS2 and PN, though for PN my filter is '#XMMEA_EP&&(PATTERN <= 4)'.
To combine the filtering with the lightcurve cleaning in one step, I would instead have made the link:

xun4> ln -s ../mos1_raw_evt.fits

and then used:

xun4> xmmlight_clean.csh mos1_raw_evt.fits '#XMMEA_EM&&(PATTERN <= 12)' 50 m1lclean_ | tee m1lclean_log.txt
So in j1226/ I now have:
xun4> ls
atthk.dat odf/
cleaning/ pipeprod/
emchain/ pn_clean_evt.fits
epchain/ pn_raw_evt.fits
mos1_clean_evt.fits rename.csh*
mos1_raw_evt.fits voldesc.sfd*
mos2_clean_evt.fits x_20010727_0000003189_01_02.dsk*
mos2_raw_evt.fits x_20010727_0000003189____02.set*
Right, now we've done the data prep, we can do some analysis. This will
fall into the two largely separate categories of imaging analysis and spectral analysis.