ALMA Imaging Pipeline Reprocessing for Manually Calibrated Data (Cycle 10)

About This Guide

Last checked on CASA Version 6.5.4

This guide provides examples of creating the interferometric imaging products with the ALMA Cycle 10 Pipeline for manually calibrated data. It does NOT work on a concatenated measurement set, often named calibrated_final.ms.

Note that the scripts described in this guide have only been tested in Linux.

How to Restore Manually Calibrated Measurement Set

If the delivered data have been manually calibrated, the PI should be able to restore the calibrated data by running scriptForPI.py (named member.<uid_name>.scriptForPI.py in the script/ directory).

Follow the instructions in your README for restoring the calibrated data with scriptForPI.py. NOTE: the SPACESAVING parameter cannot be larger than 1.
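
A minimal sketch of what this typically looks like is shown below; it assumes the restore is started inside CASA from the script/ directory, and the member name is a placeholder for the one in your delivery (follow your README if it differs):

# Sketch only: run inside CASA from the script/ directory.
# The member name below is a placeholder; use the file name from your delivery.
SPACESAVING = 1    # must not be larger than 1 for the re-imaging described here
execfile('member.uid___A001_Xxxxx_Xxxx.scriptForPI.py')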

Once the restoration is complete, the following files and directories will be present; points relevant to pipeline re-imaging are noted for each:

  • calibrated/
    • This directory contains one or more files called <uid_name>.ms.split.cal (one for each execution in the MOUS). These files have been split so that they contain the calibrated uv-data in the DATA column and only the science spectral windows (spws); importantly, the spws have been re-indexed to start at zero, so they will not match the spw ids listed in the pipeline weblog or in other pipeline-produced products such as the science target flag template files (*.flagtargetstemplate.txt) or the continuum ranges (cont.dat). Although this type of file has traditionally been the starting point for manual ALMA imaging, ms.split.cal files CANNOT BE DIRECTLY USED IN THE EXAMPLES GIVEN IN THIS GUIDE.
    • Provided that the restore is done with SPACESAVING=1, the calibrated directory also contains a "working" directory holding the <uid_name>.ms files (i.e. no split has been run on them), which is the form expected as the starting point of the ALMA imaging pipeline. This directory also contains the *.flagtargetstemplate.txt file for each execution, which can be used for science-target-specific flagging. This is the best location for ALMA imaging pipeline reprocessing. (Older data sets may have a <uid_name>.calibration directory rather than "working".) Place your edited copy of the sample script below into the "working" directory to run it.
  • calibration/
    • This directory contains the <mous_name>.cont.dat file with the frequency ranges identified by the pipeline as likely to contain only continuum emission. If a file called cont.dat (i.e. with the mous_name stripped off) is present in the directory where the pipeline imaging tasks are run, it will be used (see the example after this list for one way to put the delivered file in place). Otherwise, the pipeline task hif_findcont will generate a cont.dat file (which can also be edited if necessary) in the "calibrated/working" directory when it is run.
  • log/
    • This directory contains the <mous_name>.casa_commands.log which contains all the equivalent casa commands run during the course of the pipeline processing, in particular the tclean commands to make the image products.
  • product/
    • The original pipeline image products
  • qa/
    • The original pipeline weblog
  • raw/
    • The raw asdm(s)
  • README
    • File containing information about the QA2 process for the MOUS; it may contain specific notes about the image quality.
  • script/
    • Contains the scriptForPI.py
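
If you want the imaging tasks to use the delivered continuum ranges rather than re-deriving them with hif_findcont, one option (a sketch only; the mous_name value below is a placeholder for the actual name in your calibration/ directory) is to copy the file into the working directory under the plain name cont.dat:

# Sketch only: make the delivered continuum ranges visible to the imaging tasks.
# Run from the top level of the delivery (the directory containing calibrated/
# and calibration/); mous_name below is a placeholder.
import shutil
mous_name = 'member.uid___A001_Xxxxx_Xxxx'
shutil.copy('calibration/' + mous_name + '.cont.dat', 'calibrated/working/cont.dat')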

Getting and Starting CASA

If you do not already have CASA installed on your machine, you will have to download and install it.

Download and installation instructions are available here:

http://casa.nrao.edu/casa_obtaining.shtml

CASA 6.5.4 or later is required to reprocess ALMA data using the scripts in this guide.

NOTE: To use pipeline tasks, you must start CASA with

casa --pipeline

If you are using an NRAO machine, however, you'll need

casa-alma --pipeline

as casa alone will call the latest stable release of CASA, 6.6, rather than the 6.5.4 version needed for the Cycle 10 pipeline.

Re-imaging Example with Parameters Chosen Automatically by the Pipeline

# Be sure to edit mymss!
import os
import sys
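# make pipeline tasks raise exceptions so the script stops on errors
# (h_save still runs via the finally block)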
__rethrow_casa_exceptions = True

mymss = ['uid___A002_Xbe2ed7_Xb524.ms']

for myms in mymss:
    if not os.path.exists(myms+'.flagversions'):
        print('Not found: '+myms+'.flagversions')
        sys.exit('ERROR: you must provide the flagversions files for all input MSs.')
try:
    # initialize the pipeline
    h_init()
    
    # load the data
    hifa_importdata(vis=mymss,dbservice=False)

    # if you do not have a check source, comment out these two stages:
    hif_makeimlist(intent='CHECK')
    hif_makeimages()

    # imageprecheck selects the robust parameter for tclean
    hifa_imageprecheck()

    # you can change these parameters (or comment out the step entirely) 
    #  based on your computing resources, and this
    #  stage will mitigate the imaging parameters accordingly
    hif_checkproductsize(maxcubesize=40.0, maxcubelimit=100.0, maxproductsize=500.0)

    # splits out the target data
    hif_mstransform()

    # flag the target data
    hifa_flagtargets()

    # make a list of expected targets to be cleaned in mfs mode (used for
    #   continuum subtraction)
    hif_makeimlist(specmode='mfs')

    # find continuum frequency ranges
    hif_findcont()

    # fit and subtract the continuum
    hif_uvcontsub()

    # make clean mfs images for the selected targets
    hif_makeimages()

    # make a list of expected targets to be cleaned in cont 
    #  (aggregate over all spws) mode, and make the images
    hif_makeimlist(specmode='cont')
    hif_makeimages()

    # make a list of expected targets to be cleaned in continuum subtracted 
    #  cube mode and make the images (comment out if you have cont-only data)
    hif_makeimlist(specmode='cube')
    hif_makeimages()

    # Selfcal:  skip these stages if you know your target cannot be self-calibrated, 
    #  otherwise the pipeline will decide whether selfcal improves the data, and 
    #  remake the mfs, cont, and cube images if successful 
    hif_selfcal()
    hif_makeimlist(specmode='mfs', datatype='selfcal')
    hif_makeimages()
    hif_makeimlist(specmode='cont', datatype='selfcal')
    hif_makeimages()
    hif_makeimlist(specmode='cube', datatype='selfcal')
    hif_makeimages()
    

    # export the images to fits files (only needed if one wants fits files)
    hifa_exportdata()

finally:
    h_save()

The above example script produces three types of images: a multifrequency synthesis (specmode='mfs') image of each spw without continuum subtraction, a continuum (specmode='cont') image aggregated over all spws excluding the line channels identified by the hif_findcont task, and a continuum-subtracted line cube (specmode='cube') for each spw. Depending on the data volume, the channel binning, image size, cell size, and the number of sources and spectral windows to be imaged may be mitigated by hif_checkproductsize.

For customized imaging with different imaging parameters, please consult the link below, which introduces several use cases. Note that the continuum range file, cont.dat, whether determined by the hif_findcont task above or constructed manually by the user, must exist before the customized imaging is run (a sketch of its layout is given after the link below).

Common Re-Imaging Examples from the ALMA Cycle 10 Imaging Pipeline Reprocessing Guide
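
For reference, cont.dat is a plain-text file. Assuming the typical layout (the field name, spw id, and frequency ranges below are purely illustrative; check the delivered <mous_name>.cont.dat for the exact syntax used for your data), an entry looks like:

Field: MySource

SpectralWindow: 25
230.000~230.440GHz LSRK
231.100~231.280GHz LSRK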

Also, if you are not satisfied with the clean mask produced by the automasking algorithm, it is possible to change the automasking parameters in hif_makeimages by setting pipelinemode='interactive'. For detailed instructions on using automasking, please consult the link below.

Automasking Guide

The relevant pipeline parameters corresponding to the ones in the automasking guide are hm_sidelobethreshold, hm_noisethreshold, hm_lownoisethreshold, hm_negativethreshold, hm_minbeamfrac, and hm_growiterations.
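
As an illustrative sketch only (the numerical values below are placeholders rather than recommendations, and depending on the pipeline version pipelinemode='interactive' may also need to be set as noted above), these parameters are passed directly to the hif_makeimages call, e.g. for the cube imaging step of the script above:

# Sketch only: override selected automasking parameters for the cube images.
# The values are illustrative placeholders, not recommendations; depending on
# the pipeline version, pipelinemode='interactive' may also be required.
hif_makeimlist(specmode='cube')
hif_makeimages(hm_sidelobethreshold=2.0, hm_noisethreshold=4.25,
               hm_lownoisethreshold=1.5, hm_negativethreshold=15.0,
               hm_minbeamfrac=0.3, hm_growiterations=75)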