ALMA Imaging Pipeline Reprocessing for Manually Calibrated Data
About This Guide
This guide provides examples of creating interferometric imaging products with the ALMA Cycle 5 Pipeline for manually calibrated data.
Note that the scripts described in this guide have only been tested on Linux.
How to Restore a Manually Calibrated Measurement Set
If the delivered data have been manually calibrated, the PI should be able to restore the calibrated data by running scriptForPI.py (named member.<uid_name>.scriptForPI.py, located in the /script directory).
Follow the instructions in your README for restoring the calibrated data using scriptForPI.py. NOTE: the SPACESAVING parameter cannot be larger than 1.
Once the restore completes, the following files and directories will be present; points relevant to pipeline re-imaging are noted:
- This directory contains one or more files called <uid_name>.ms.split.cal (one for each execution in the MOUS). These files have been split to contain the calibrated uv-data in the DATA column and only the science spectral windows (spws). Importantly, the spws have been re-indexed to start at zero, so they will not match the spw ids listed in the pipeline weblog or in other pipeline products such as the science target flag template files (*.flagtargetstemplate.txt) or the continuum frequency ranges (cont.dat). Although files of this type have traditionally been the starting point for manual ALMA imaging, ms.split.cal files CANNOT BE USED DIRECTLY IN THE EXAMPLES GIVEN IN THIS GUIDE.
- Provided that the restore is done with SPACESAVING=1, the calibrated directory contains a "working" directory holding the <uid_name>.ms files (i.e. no split has been run on them), which are in the form expected as the starting point of the ALMA imaging pipeline. This directory also contains a *.flagtargetstemplate.txt file for each execution, which can be used for science-target-specific flagging. This is the best location in which to do ALMA pipeline imaging reprocessing.
- This directory contains the <mous_name>.cont.dat file, which lists the frequency ranges identified by the pipeline as likely to contain only continuum emission. If a file named cont.dat (i.e. with the <mous_name> prefix stripped off) is present in the directory where the pipeline imaging tasks are run, it will be used.
- This directory contains <mous_name>.casa_commands.log, which records the equivalent CASA commands run during pipeline processing, in particular the tclean commands used to make the image products.
- The original pipeline image products
- The original pipeline weblog
- The raw asdm(s)
- A file containing information about the QA2 assessment for the MOUS, which may include specific notes about image quality.
- Contains the scriptForPI.py script.
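As a hypothetical illustration of the spw re-indexing noted above (the spw ids 17, 19, 21, and 23 below are invented for the example, not taken from any particular dataset), the following sketch shows how re-indexed ms.split.cal spw ids relate to the original science spw ids used in the weblog and cont.dat:

```python
# Hypothetical example: suppose the original MS has science spws 17, 19, 21, 23
# (these ids are invented for illustration).
original_science_spws = [17, 19, 21, 23]

# After split, spws are re-indexed to start at zero, in the same order.
reindexed_to_original = {new: old for new, old in enumerate(original_science_spws)}

# Invert the mapping: a cont.dat or weblog entry referring to spw 19
# therefore corresponds to spw 1 in the ms.split.cal file.
original_to_reindexed = {old: new for new, old in reindexed_to_original.items()}
print(original_to_reindexed[19])  # → 1
```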
Getting and Starting CASA
If you do not already have CASA installed on your machine, you will have to download and install it.
Download and installation instructions are available here:
CASA 5.1.1 or later is required to reprocess ALMA Cycle 5 data using the scripts in this guide.
NOTE: To use pipeline tasks, you must start CASA with the pipeline option, i.e. casa --pipeline
Re-imaging Examples with Parameters Automatically Chosen by the Pipeline
# General
__rethrow_casa_exceptions = True
vislist = ['uid___A002_Xbe2ed7_Xb524.ms']
pipelinemode = "automatic"
try:
    # Initialize the pipeline
    h_init()
    # Load the data
    hifa_importdata(vis=vislist, dbservice=False, pipelinemode=pipelinemode)
    # Check product size limits and mitigate imaging parameters
    hif_checkproductsize(maxcubesize=30.0, maxcubelimit=40.0, maxproductsize=400.0)
    # Split out the target data
    hif_mstransform(pipelinemode=pipelinemode)
    # Flag the target data
    hifa_flagtargets(pipelinemode=pipelinemode)
    # Make a list of expected targets to be cleaned in mfs mode
    # (used for continuum subtraction)
    hif_makeimlist(specmode='mfs', pipelinemode=pipelinemode)
    # Find continuum frequency ranges
    hif_findcont(pipelinemode=pipelinemode)
    # Fit the continuum using frequency ranges from hif_findcont
    hif_uvcontfit(pipelinemode=pipelinemode)
    # Subtract the continuum fit
    hif_uvcontsub(pipelinemode=pipelinemode)
    # Make clean mfs images for the selected targets
    hif_makeimages(pipelinemode=pipelinemode)
    # Make a list of expected targets to be cleaned in cont
    # (aggregate over all spws) mode
    hif_makeimlist(specmode='cont', pipelinemode=pipelinemode)
    # Make clean cont images for the selected targets
    hif_makeimages(pipelinemode=pipelinemode)
    # Make a list of expected targets to be cleaned in
    # continuum-subtracted cube mode
    hif_makeimlist(pipelinemode=pipelinemode)
    # Make clean continuum-subtracted cube images for the selected targets
    hif_makeimages(pipelinemode=pipelinemode)
    # Make a list of expected targets to be cleaned in
    # continuum-subtracted repBW cube mode
    hif_makeimlist(specmode='repBW', pipelinemode=pipelinemode)
    # Make the clean continuum-subtracted repBW cube
    hif_makeimages(pipelinemode=pipelinemode)
finally:
    h_save()
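One common way to run a script like the one above is to save it in the working directory (the filename imaging_rerun.py below is just an example, not prescribed) and execute it from within a CASA session started with pipeline tasks enabled:

```python
# Inside a CASA session started with the pipeline option,
# run the saved script (filename is an example):
execfile('imaging_rerun.py')
```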
The above example script produces three types of images: a multifrequency synthesis (mfs) image for each spw without continuum subtraction, an aggregate continuum image over all spws excluding the line channels identified by the hif_findcont task, and a continuum-subtracted line cube for each spw. Depending on the data volume, the channel binning, image size, cell size, and the number of sources and spectral windows to be imaged can be mitigated by hif_checkproductsize.
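As a rough illustration of why hif_checkproductsize mitigation matters, a cube of 32-bit floats occupies about nx × ny × nchan × 4 bytes, so binning channels by a factor of two roughly halves the cube size. The formula below is a back-of-the-envelope estimate (with made-up image dimensions), not the pipeline's actual mitigation heuristics:

```python
def cube_size_gb(nx, ny, nchan):
    """Rough size of a 32-bit float image cube in gigabytes.

    Back-of-the-envelope estimate only; not the actual
    hif_checkproductsize algorithm.
    """
    return nx * ny * nchan * 4 / 1e9

# Example dimensions are invented for illustration.
full = cube_size_gb(3000, 3000, 1920)   # unbinned cube
binned = cube_size_gb(3000, 3000, 960)  # channels binned by 2
print(round(full, 1))    # → 69.1
print(round(binned, 1))  # → 34.6
```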
For customized imaging with different imaging parameters, please consult the link below, which introduces several use cases. Note that the continuum range file cont.dat, whether determined by the hif_findcont task above or set manually by the user, must exist before running customized imaging.
Also, if the user is not satisfied with the clean mask produced by the automasking algorithm, the automasking parameters in hif_makeimages can be changed by setting pipelinemode='interactive'. For detailed instructions and guides on using automasking, please consult the link below.
The relevant pipeline parameters corresponding to those in the automasking guide are hm_sidelobethreshold, hm_noisethreshold, hm_lownoisethreshold, hm_negativethreshold, hm_minbeamfrac, and hm_growiterations.
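Since the imaging tasks pick up a file literally named cont.dat in the directory where they are run, one way to make the pipeline-determined continuum ranges available is to copy the <mous_name>.cont.dat file to that name. A minimal sketch, where the helper name install_cont_dat is our own invention:

```python
import os
import shutil

def install_cont_dat(workdir, mous_cont_dat):
    """Copy a <mous_name>.cont.dat file to cont.dat in workdir.

    The pipeline imaging tasks use a file named exactly cont.dat
    in the directory where they are run; an existing cont.dat is
    left untouched.
    """
    target = os.path.join(workdir, 'cont.dat')
    if not os.path.exists(target):
        shutil.copy(mous_cont_dat, target)
    return target
```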
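For example, these parameters can be passed directly to hif_makeimages in interactive mode; the threshold values below are placeholders for illustration only, not recommendations:

```python
# Illustrative values only -- consult the automasking guide for
# settings appropriate to your data.
hif_makeimages(pipelinemode='interactive',
               hm_sidelobethreshold=3.0,
               hm_noisethreshold=5.0,
               hm_lownoisethreshold=1.5,
               hm_negativethreshold=0.0,
               hm_minbeamfrac=0.3,
               hm_growiterations=75)
```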