ALMA Cycle 4 Imaging Pipeline Reprocessing

From CASA Guides
Revision as of 10:31, 25 October 2016 by Cbrogan (talk | contribs)

About This Guide

This guide describes a few options for refining the imaging products from the ALMA Cycle 4 Pipeline. After Section #Restore Pipeline Calibration and Prepare for Re-imaging, each reprocessing option is self-contained.

Note that the scripts described in this guide have only been tested in Linux.

Getting and Starting CASA

If you do not already have CASA installed on your machine, you will have to download and install it.

Download and installation instructions are available here:

http://casa.nrao.edu/casa_obtaining.shtml

CASA 4.7.0 or later is required to reprocess ALMA Cycle 4 data using the scripts in this guide.

NOTE: To use pipeline tasks, you must start CASA with

casa --pipeline

Restore Pipeline Calibration and Prepare for Re-imaging (all Options)

STEP 1: Follow the instructions in your README for restoring the pipeline-calibrated data using scriptForPI.py. NOTE: the SPACESAVING parameter cannot be larger than 1.

STEP 2: Change to the directory containing the calibrated data (i.e. *.ms), called "calibrated/working" after the pipeline restore, and start CASA 4.7.0 or later.

casa --pipeline

STEP 3: Run the following command in CASA to copy the pipeline file containing the frequency ranges used to create the continuum images and to perform the continuum subtraction into the directory you will be working in.

os.system('cp ../../calibration/uid*cont.dat ./cont.dat')
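The copied cont.dat file lists, per source and spectral window, the frequency ranges the pipeline identified as line-free continuum. The layout below is only illustrative; the field name, spw number, and frequency ranges are placeholders, not values from your data:

```
Field: MySource

SpectralWindow: 17
230.000~230.440GHz LSRK
231.100~231.280GHz LSRK
```

Editing these ranges (or removing the file entirely) is what drives the reprocessing options in the sections that follow.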

Restore Pipeline Continuum Subtraction and Manually Make Image Products

Restore Pipeline Continuum Subtraction and Make Pipeline Aggregate Continuum Image With All Channels

## Edit the USER SET INPUTS section below and then execute
## this script (note it must be run from the 'calibrated/working' directory).

import os
import glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()

###########################################################
## USER SET INPUTS

## Select a title for the weblog
context.project_summary.proposal_code='NEW AGGREGATE CONT'

############################################################

## Move cont.dat to another name if it exists
os.system('mv cont.dat original.cont.dat')

## Make a list of all calibrated measurement sets (*.ms)
MyVis=glob.glob('*.ms')

try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called
    ## *target.ms) and apply science target specific flags
    hif_mstransform(pipelinemode=pipelinemode)
    hifa_flagtargets(pipelinemode=pipelinemode)

    ## Skip the continuum subtraction steps and make an aggregate 
    ## continuum image with all unflagged channels (file named 
    ## cont.dat should NOT be present in directory).
    hif_makeimlist(specmode='cont',pipelinemode=pipelinemode)
    hif_makeimages(pipelinemode=pipelinemode)

    ## Export new images to fits format if desired.
    hif_exportdata(pipelinemode=pipelinemode)

finally:
    h_save()
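The script above renames cont.dat to original.cont.dat so the pipeline skips continuum subtraction. If you later want to rerun with the pipeline-derived continuum ranges, move the file back first. A minimal sketch (the helper name is ours; file names are those used in the script above):

```python
import os

def restore_cont_dat(workdir='.'):
    """Move original.cont.dat back to cont.dat if no cont.dat is present.

    Returns True if the file was restored, False otherwise.
    """
    src = os.path.join(workdir, 'original.cont.dat')
    dst = os.path.join(workdir, 'cont.dat')
    if os.path.exists(src) and not os.path.exists(dst):
        os.rename(src, dst)
        return True
    return False
```

Run it from (or point workdir at) 'calibrated/working' before starting a new pipeline session.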

Revise the cont.dat Before Pipeline Continuum Subtraction and Remake Pipeline Images

## Edit the cont.dat file(s) for the spw(s) you want 
## to change the continuum subtraction for. In this example 
## spw 17 was changed.

## Edit the USER SET INPUTS section below and then execute
## this script (note it must be run from the 'calibrated/working' directory).

import os
import glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()

###########################################################
## USER SET INPUTS

## Select a title for the weblog
context.project_summary.proposal_code = 'NEW CONTSUB' 

## Select spw(s) that have new cont.dat parameters
MySpw='17'

############################################################

## Make a list of all calibrated measurement sets (*.ms)
MyVis=glob.glob('*.ms')

try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called
    ## *target.ms) and apply science target specific flags
    hif_mstransform(pipelinemode=pipelinemode)
    hifa_flagtargets(pipelinemode=pipelinemode)

    ## Fit and subtract the continuum using revised cont.dat for all spws
    hif_uvcontfit(pipelinemode=pipelinemode)
    hif_uvcontsub(pipelinemode=pipelinemode)

    ## Make new per spw continuum for revised spw(s) and new aggregate cont
    hif_makeimlist(specmode='mfs',spw=MySpw,pipelinemode=pipelinemode)
    hif_makeimages(pipelinemode=pipelinemode)
    hif_makeimlist(specmode='cont',pipelinemode=pipelinemode) 
    hif_makeimages(pipelinemode=pipelinemode)    

    ## Make new continuum subtracted cube for revised spw(s)
    hif_makeimlist(specmode='cube',spw=MySpw,pipelinemode=pipelinemode) 
    hif_makeimages(pipelinemode=pipelinemode)

    ## Export new images to fits format if desired.
    hif_exportdata(pipelinemode=pipelinemode)

finally:
    h_save()
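When hand-editing cont.dat it is easy to leave a spectral window with no continuum ranges at all, which the pipeline will then image without continuum subtraction. Assuming the usual cont.dat layout shown earlier ('SpectralWindow: N' headers, each followed by frequency-range lines such as '230.0~230.5GHz LSRK'), a hypothetical checker you could run on the edited file:

```python
import re

def spw_ranges(cont_dat_text):
    """Map each spw number to its list of continuum frequency-range lines.

    Assumes the common cont.dat layout: 'SpectralWindow: N' headers,
    each followed by frequency-range lines containing '~'.
    """
    ranges = {}
    current = None
    for line in cont_dat_text.splitlines():
        m = re.match(r'SpectralWindow:\s*(\d+)', line.strip())
        if m:
            current = int(m.group(1))
            ranges.setdefault(current, [])
        elif current is not None and '~' in line:
            ranges[current].append(line.strip())
    return ranges
```

Any spw that maps to an empty list has no continuum ranges defined and deserves a second look before rerunning hif_uvcontfit.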

Restore Pipeline Continuum Subtraction for Subset of SPWs and Fields and Use Channel Binning for Cubes

## Edit the USER SET INPUTS section below and then execute
## this script (note it must be run from the 'calibrated/working' directory).

import os
import glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()

###########################################################
## USER SET INPUTS

## Select a title for the weblog
context.project_summary.proposal_code = 'SUBSET CUBE IMAGING' 

## Select spw(s) to image and the channel binning for each specified
## spw. Every spw listed in MySpw must have a corresponding MyNbins
## entry, even if it is 1 for no binning.
MySpw='17,23'
MyNbins='17:8,23:2'

## Select subset of sources to image by field name.
## To select all fields, set MyFields=''
MyFields='CoolSource1,CoolSource2'

## Select Briggs Robust factor for data weighting (affects angular 
## resolution of images)
MyRobust=1.5

############################################################

## Make a list of all calibrated measurement sets (*.ms)
MyVis=glob.glob('*.ms')

try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called
    ## *target.ms) and apply science target specific flags
    ## In this example we split off all science targets and science 
    ## spws, however these steps could also contain the spw and field
    ## selections
    hif_mstransform(pipelinemode=pipelinemode)
    hifa_flagtargets(pipelinemode=pipelinemode)

    ## Fit and subtract the continuum using existing cont.dat
    ## for selected spws and fields only.
    hif_uvcontfit(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)
    hif_uvcontsub(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)   

    ## Make new continuum subtracted cube for selected spw(s) and fields
    hif_makeimlist(specmode='cube',spw=MySpw,nbins=MyNbins,field=MyFields,
                   pipelinemode=pipelinemode) 
    hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode)

    ## Export new images to fits format if desired.
    hif_exportdata(pipelinemode=pipelinemode)

finally:
    h_save()
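Because every spw listed in MySpw must have a corresponding MyNbins entry, a mismatch between the two strings is a common source of failed runs. A small sketch of that consistency check (the helper name is ours; the string formats are those used in the script above):

```python
def check_nbins(my_spw, my_nbins):
    """Verify every spw in a MySpw string has an entry in MyNbins.

    my_spw is comma-separated spw ids (e.g. '17,23'); my_nbins pairs
    each spw with a binning factor as 'spw:nbins' (e.g. '17:8,23:2').
    Raises ValueError naming any spw without a binning entry.
    """
    spws = {s.strip() for s in my_spw.split(',') if s.strip()}
    binned = {pair.split(':')[0].strip()
              for pair in my_nbins.split(',') if pair.strip()}
    missing = sorted(spws - binned, key=int)
    if missing:
        raise ValueError('MyNbins is missing entries for spw(s): %s'
                         % ','.join(missing))
```

For example, check_nbins('17,23', '17:8,23:2') passes silently, while check_nbins('17,23', '17:8') raises an error naming spw 23.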