ALMA Cycle 5 Imaging Pipeline Reprocessing
'''This guide presents examples for improving the interferometric imaging products from the ALMA Cycle 5 Pipeline.''' If your data were manually imaged by ALMA, you should instead consult the scriptForImaging.py delivered with your data.

The Section [[#Restore Pipeline Calibration and Prepare for Re-imaging (all Options)|Restore Pipeline Calibration and Prepare for Re-imaging]] describes the first steps to perform. After that, the individual sections are self-contained (and they typically assume the "Restore" has been performed). The exception is [[#CASA pipescript to fully reproduce the pipeline products|Use casa_pipescript to fully reproduce the pipeline products]] -- this section is fully self-contained and does not presuppose that the "Restore" has been run. It illustrates how to completely re-run the pipeline from beginning to end in order to reproduce the pipeline run done at your ARC.

Additional documentation on the Cycle 5 pipeline can be found at [https://almascience.nrao.edu/processing/science-pipeline the ALMA Science Portal].

Note that the scripts described in this guide have only been tested in Linux.
== How to Decide Whether to Reprocess Pipeline Images ==

In order to decide whether reprocessing will benefit your project, you should examine the results of the pipeline imaging via the delivered weblog and note any imaging-specific comments in your QA2 report. The details of the various imaging pipeline stages, as well as examples of weblog output, can be found in the [https://almascience.nrao.edu/processing/documents-and-tools/alma-science-pipeline-users-guide-casa-5-1.1 Users Guide]; Sections 8 and 9 are especially useful in this regard.

For Cycle 4, the primary goal of the imaging pipeline was to produce images of sufficient quality that QA2 can be successfully performed, and that give users a good idea of what the data contain. While this is still true in Cycle 5, significant improvements have been made in the quality of pipeline imaging, notably through the introduction of an adaptive CLEAN auto-masking algorithm. In some cases these images are suitable for science, while in others significant benefits may be obtained by re-imaging with particular science goals in mind.

Typical reasons for re-imaging include:

* Imaging improvements to be gained from interactively generating an emission-specific clean mask and cleaning more deeply. The Cycle 5 pipeline currently uses an automated clean mask created by an auto-masking algorithm based on noise thresholds. In cases where this automated clean mask is not optimal, users may benefit from cleaning with interactive clean masking. For peak S/N > about 100, the images can also often be improved by self-calibration coupled with deeper cleaning using manual clean masks.
* Non-optimal continuum ranges. The pipeline uses heuristics that attempt to correctly identify continuum channels over a very broad range of science target line properties. Particularly for strong line forests (hot cores) and occasionally for TDM continuum projects, the pipeline ranges can be non-optimal -- too much in the first case and too little in the second.
* Other science goal driven reprocessing needs may include
<pre style="background-color: #fffacd;">
casa --pipeline
</pre>
== CASA pipescript to fully reproduce the pipeline products ==

Although a user can restore their data by running the scriptForPI.py (named member.<uid_name>.scriptForPI.py in the /script directory), a full CASA pipeline script that reproduces all pipeline products is also provided in the package. This script, named member.<uid_name>.hifa_calimage.casa_pipescript.py, is in the /script directory; it is shown below as an example.

If the user executes

<source lang="python">
execfile('member.<uid_name>.hifa_calimage.casa_pipescript.py')
</source>

the script runs all necessary pipeline tasks to reproduce the calibration and imaging results produced by the pipeline. The workflows in the following sections reproduce parts of, and/or variants of, what is produced by this script. The subsequent workflows all start from the simple "restore" that is done by the scriptForPI.py.
<source lang="python">
from recipes.almahelpers import fixsyscaltimes # SACM/JAO - Fixes
__rethrow_casa_exceptions = True
context = h_init()
context.set_state('ProjectSummary', 'proposal_code', '2017.1.00XXX.S')
context.set_state('ProjectSummary', 'piname', 'unknown')
context.set_state('ProjectSummary', 'proposal_title', 'unknown')
context.set_state('ProjectStructure', 'ous_part_id', 'X749565832')
context.set_state('ProjectStructure', 'ous_title', 'Undefined')
context.set_state('ProjectStructure', 'ppr_file', 'PPR_uid___A002_Xc24c3f_X2bb.xml')
context.set_state('ProjectStructure', 'ps_entity_id', 'uid://A002/Xc24c3f/X2b6')
context.set_state('ProjectStructure', 'recipe_name', 'hifa_calimage')
context.set_state('ProjectStructure', 'ous_entity_id', 'uid://A002/Xc24c3f/X2b1')
context.set_state('ProjectStructure', 'ousstatus_entity_id', 'uid://A002/Xc24c3f/X2ba')
try:
    hifa_importdata(vis=['uid___A002_Xc3412f_X53ff'], dbservice=False, session=['session_1'])
    fixsyscaltimes(vis = 'uid___A002_Xc3412f_X53ff.ms') # SACM/JAO - Fixes
    h_save() # SACM/JAO - Finish weblog after fixes
    h_init() # SACM/JAO - Restart weblog after fixes
    hifa_importdata(vis=['uid___A002_Xc3412f_X53ff'], dbservice=False, session=['session_1'])
    hifa_flagdata(pipelinemode="automatic")
    hifa_fluxcalflag(pipelinemode="automatic")
    hif_rawflagchans(pipelinemode="automatic")
    hif_refant(pipelinemode="automatic")
    h_tsyscal(pipelinemode="automatic")
    hifa_tsysflag(pipelinemode="automatic")
    hifa_antpos(pipelinemode="automatic")
    hifa_wvrgcalflag(pipelinemode="automatic")
    hif_lowgainflag(pipelinemode="automatic")
    hif_setmodels(pipelinemode="automatic")
    hifa_bandpassflag(pipelinemode="automatic")
    hifa_spwphaseup(pipelinemode="automatic")
    hifa_gfluxscaleflag(pipelinemode="automatic")
    hifa_gfluxscale(pipelinemode="automatic")
    hifa_timegaincal(pipelinemode="automatic")
    hif_applycal(pipelinemode="automatic")
    hifa_imageprecheck(pipelinemode="automatic")
    hif_makeimlist(intent='PHASE,BANDPASS,CHECK')
    hif_makeimages(pipelinemode="automatic")
    hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
    hifa_exportdata(pipelinemode="automatic")
    hif_mstransform(pipelinemode="automatic")
    hifa_flagtargets(pipelinemode="automatic")
    hif_makeimlist(specmode='mfs')
    hif_findcont(pipelinemode="automatic")
    hif_uvcontfit(pipelinemode="automatic")
    hif_uvcontsub(pipelinemode="automatic")
    hif_makeimages(pipelinemode="automatic")
    hif_makeimlist(specmode='cont')
    hif_makeimages(pipelinemode="automatic")
    hif_makeimlist(specmode='cube')
    hif_makeimages(pipelinemode="automatic")
    hif_makeimlist(specmode='repBW')
    hif_makeimages(pipelinemode="automatic")
finally:
    h_save()
</source>
The relevant tasks for the imaging pipeline reprocessing described in this CASA guide are hifa_importdata, hif_mstransform, hifa_flagtargets, hif_checkproductsize, hif_uvcontfit, hif_uvcontsub, hif_makeimlist, and hif_makeimages.

'''Note''': One of the important features of the ALMA pipeline is to check the final imaging product size and make any necessary adjustments to the channel binning, cell size, image size, and possibly the number of fields to be imaged, in order to avoid creating large images and cubes that take up significant computing resources and are not necessary for the user's science goals. The hif_checkproductsize task does this job, and we insert it in all of the imaging example scripts below. We recommend that users copy the hif_checkproductsize task from the provided casa_pipescript.py without changing the parameters maxcubelimit, maxproductsize, and maxcubesize. However, users can comment it out if they do not want this size mitigation, or they can explicitly specify the nbins, hm_imsize, and hm_cell parameters in the hif_makeimlist task.

For reference, the description of pipeline tasks for interferometric and single dish data reduction can be found in the [https://almascience.nrao.edu/documents-and-tools/alma-science-pipeline-reference-manual-4-7.2 CASA 4.7.2 Pipeline Reference Manual]
== Restore Pipeline Calibration and Prepare for Re-imaging (all Options) ==

'''STEP 1:''' Follow the instructions in your QA2 report for restoring pipeline calibrated data using the scriptForPI.py. NOTE: the SPACESAVING parameter cannot be larger than 1, and for pipeline calibrated and imaged data, scriptForPI.py does not automatically split science spectral windows.

Once completed, the following files and directories will be present, with specific things about pipeline re-imaging noted:

* calibrated/
** In cases where the PI explicitly set "DOSPLIT=True" before running scriptForPI.py, this directory contains file(s) called <uid_name>.ms.split.cal (one for each execution in the MOUS) -- these files have been split to contain the calibrated pipeline uv-data in the DATA column, and only the science spectral window ids (spws) matching the spws listed in the pipeline weblog and other pipeline products such as the science target flag template files (*.flagtargetstemplate.txt) or continuum ranges (cont.dat). Though this type of file has been the starting point for manual ALMA imaging, ms.split.cal files CANNOT BE DIRECTLY USED IN THE EXAMPLES GIVEN IN THIS GUIDE.
** Provided that the restore is done with SPACESAVING=1, within the "calibrated" directory there is a "working" directory which contains the <uid_name>.ms (i.e. no split has been run on them) in the form expected as the starting point of the ALMA imaging pipeline. This directory also contains the *.flagtargetstemplate.txt for each execution, which can be used to do science target specific flagging. This is the best location to do ALMA pipeline image reprocessing.
* calibration/
** This directory contains a continuum range file named "cont.dat", with the frequency ranges identified by the pipeline as being likely to contain only continuum emission. If the cont.dat is present in the "calibrated/working" directory where the pipeline imaging tasks are run, it will be used.
* log/
** This directory contains the <mous_name>.hifa_calimage.casa_commands.log, which contains all the equivalent casa commands run during the course of the pipeline processing, in particular the tclean commands used to make the image products.
* product/
** The original pipeline image products
* raw/
** The raw asdm(s)
* README and QA2 report
** The README file contains information about the contents of the package and a link to obtain the QA2 report in SnooPI.
** The QA2 report contains a summary of the scheduling block (SB), calibration, and imaging results.
* script/
** Contains the scriptForPI.py (named member.<uid_name>.scriptForPI.py), which internally runs member.<uid_name>.hifa_calimage.casa_piperestorescript.py and other tasks necessary to restore the data.
** Also contains member.<uid_name>.hifa_calimage.casa_pipescript.py, a full CASA pipeline script that reproduces all pipeline products.
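A quick way to extract the original tclean() calls from the casa_commands.log for hand editing is a simple grep (the log-name pattern follows the delivery naming described above; adjust the glob to your actual file name):

```shell
# List the tclean commands the pipeline ran, with line numbers,
# so they can be copied and modified for manual re-imaging.
grep -n 'tclean(' *.casa_commands.log
```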
'''STEP 2:''' Change to the directory that contains the calibrated data suitable for running pipeline imaging tasks (i.e. *.ms), called "calibrated/working" after the pipeline restore, and start CASA 5.1.1 or later.

<pre style="background-color: #fffacd;">
casa --pipeline
</pre>

'''STEP 3:''' Run the following command in CASA to copy the cont.dat file, which contains the frequency ranges used to create the continuum images and the continuum subtraction, to the directory you will be working in.

<source lang="python">
os.system('cp ../../calibration/cont.dat ./cont.dat')
</source>
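For reference, cont.dat lists the continuum frequency ranges per field and spectral window. An illustrative fragment is shown below; the field name, spw ids, and frequency values are placeholders, not from a real delivery -- consult the cont.dat in your own package for the actual layout:

```text
Field: MySource

SpectralWindow: 17
100.15~100.54GHz LSRK
101.02~101.74GHz LSRK

SpectralWindow: 19
102.30~103.10GHz LSRK
```

Edits to these ranges (e.g. to exclude a line the pipeline missed) will be picked up by the continuum subtraction and continuum imaging steps that follow.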
==== Option A: Re-determine and Apply Pipeline Continuum Subtraction using Pipeline Tasks ("Old" method) ====

The following script splits off the calibrated science target data for all spws and fields for each execution, applies any flagging commands found in the <uid_name>_flagtargetstemplate.txt file(s) (one for each execution), and uses the existing cont.dat file to fit and subtract the continuum emission, leaving the result in the CORRECTED column. Before running this script, you can manually modify both the <uid_name>_flagtargetstemplate.txt file(s) and the cont.dat file to add flag commands or change the cont.dat frequency ranges. Once you are satisfied with the script, you can run it in a CASA session (that was started with the --pipeline option) using execfile(script_name).

<source lang="python">

try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called
<pre>
cd ../../calibration/
gunzip -c member.uid*.hifa_calimage.auxcaltables.tgz | tar xvf -
</pre>

There will be one such table for each execution, with file names like member.(MOUS id).session_2.auxcaltables.tgz, member.(MOUS id).session_3.auxcaltables.tgz, etc.; repeat the above steps for each.
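When there are several sessions, the unpack step above can be wrapped in a small shell loop (the file-name glob is assumed from the delivery naming convention shown above; verify it matches your files first):

```shell
# Unpack every per-session auxiliary calibration tarball found in
# the calibration/ directory into the current directory.
for tgz in member.*.auxcaltables.tgz; do
    gunzip -c "$tgz" | tar xvf -
done
```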
'''STEP 2:'''
Copy the auxiliary calibration tables into the working directory
<pre>
cp -r *uvcontfit*uvcont.tbl ../calibrated/working
</pre>

While in the calibration/ directory, take a look at the files called *auxcalapply*txt. There will be one file for each execution, with names like (EB uid_name)_target.ms.auxcalapply.txt. These contain the applycal() statements with which, with minor modifications, you will apply the uv continuum subtraction as a calibration. You will use these in the following step. The contents will look something like this:

<source lang="python">
applycal(vis='/lustre/naasc/sciops/comm/amcnicho/pipeline/root/2013.1.00722.S_2017_09_13T14_30_33.955/SOUS_uid___A001_X145_X134/GOUS_uid___A001_X145_X135/MOUS_uid___A001_X145_X136/working/uid___A002_X9fddd8_Xc52_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~35', gaintable='/lustre/naasc/sciops/comm/amcnicho/pipeline/root/2013.1.00722.S_2017_09_13T14_30_33.955/SOUS_uid___A001_X145_X134/GOUS_uid___A001_X145_X135/MOUS_uid___A001_X145_X136/working/uid___A002_X9fddd8_Xc52_target.ms.hif_uvcontfit.s27_3.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
</source>

Looking ahead, the changes you will make to these statements will be to eliminate the long "full-path" prefix, since you will be working with all required files in the "working" directory, which is where you will run the pipeline.
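When there are many such statements, the path edit can also be done programmatically. The helper below is our own sketch (strip_path_prefixes is not a CASA function): it reduces every quoted absolute path in an applycal() line to its bare file name, which is what you want when all files sit in calibrated/working:

```python
import re

def strip_path_prefixes(applycal_text):
    """Replace quoted absolute paths (e.g. the vis= and gaintable= values)
    in an applycal() statement with their bare file names."""
    # Match a quoted string that starts with '/', keep only the part
    # after the last '/' inside the quotes.
    return re.sub(r"'/[^']*/([^'/]+)'", r"'\1'", applycal_text)
```

For example, applying it to a line containing vis='/lustre/.../uid___A002_X9fddd8_Xc52_target.ms' leaves just vis='uid___A002_X9fddd8_Xc52_target.ms'; quoted values that are not paths (spw, antenna selections) are untouched.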
MySpw=''

# PLEASE NOTE that for this use case you will also need to edit in
# the applycal() statements for continuum subtraction in the section
# indicated below.

############################################################

## Make a list of all uv-datasets appended with *.ms
try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called

    #hif_uvcontsub(pipelinemode=pipelinemode)

    # For CASA 5.0 and 5.1, the use of a
    # uvcontfit table in an applycal command is not yet compliant with
    # the new underlying "VI2" infrastructure, requiring an environment
    # variable "VI1CAL" be set. This limitation will be removed in the
    # future, obviating the need for this workaround.
    try:
        vi1cal = os.environ['VI1CAL']
'''STEP 4:'''
* verify that each of the gaintables referenced (*uvcontfit*uvcont.tbl) in the applycal() commands you inserted exists in the working directory.
* verify that each of the ms's referenced exists, except you do not need the _target suffix (the _target.ms files are produced by the hif_mstransform() step). In this example we are looking to verify that the files uid___A002_X9f54f7_X183.ms and uid___A002_X9fddd8_Xc52.ms exist in 'working', which should be the case if you successfully ran the restore ('''STEP 1''').
* in a CASA pipeline session, execute the script using execfile()

----
'''Result'''

The result of following either of the above procedures for continuum subtraction (Option A or Option B) will be a measurement set called (MOUS UID name)_target.ms. The DATA column of this MS will have the fully calibrated but not continuum subtracted visibilities. The CORRECTED column has the fully calibrated and also continuum subtracted visibilities. This is the standard format _target.ms file that the Cycle 4 and 5 pipelines produce. Only science spectral windows and science targets (not calibrators) are included in this target MS.

----

==== Make Images Manually ====

To manually clean your data at this stage, there are two options:

# Use modified versions of the relevant {{tclean}} commands from the "logs/<MOUS_name>.hifa_calimage.casa_commands.log". These are the exact commands originally run by the imaging pipeline to produce your imaging products.
#* They will contain within them the frequency ranges (from the cont.dat) used for making the various images.
#* There will be two {{tclean}} commands per image product: the first, with an image name containing '''iter0''', only makes a dirty image, while the second, with '''iter1''', makes a cleaned image.
#* For example, to make the aggregate continuum image but with interactive clean masking, simply copy the corresponding '''iter1''' command (it will contain all of the spw numbers in its name), but set interactive=True, calcres=True, calcpsf=True, restart=False. Additionally set mask=''. If you are using the *.target.ms file(s) you can keep datacolumn='DATA'.
#* Note that if you are trying to save the model, i.e. for self-calibration, you must also set savemodel='modelcolumn' (or 'virtual'). Also be aware that exiting from interactive clean using the red X symbol in the interactive viewer does not save the model in 4.7.0 {{tclean}}. To fill the model after stopping this way, rerun the same clean command (being careful not to remove existing files) except set restart=True, calcpsf=False, calcres=False, niter=0, interactive=False. This re-run only takes a couple of minutes with these settings.
#* If you have split off the data of interest for self-calibration (as recommended above), you will first need to image the DATA column. After applying a self-calibration table, you will want to image the CORRECTED column. This should happen by default in typical data reduction use cases, since tclean defaults to using the CORRECTED column (when it exists) for imaging and automatically falls back to the DATA column if it does not exist.
# Use the examples on the casaguide page [[TCLEAN_and_ALMA]] to formulate your own special purpose commands.
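The parameter changes described in option 1 can be collected as override dictionaries to apply on top of the iter1 command copied from the log. The dictionary names below are ours, for illustration only; the values come from the instructions above, and all other tclean parameters should stay exactly as they appear in the casa_commands.log:

```python
# Overrides for re-running the pipeline's iter1 tclean() command with
# interactive clean masking (all other parameters unchanged from the log).
interactive_overrides = dict(
    interactive=True,  # draw the clean mask by hand
    calcres=True,      # recompute the residual from scratch
    calcpsf=True,      # recompute the PSF from scratch
    restart=False,     # do not continue from the existing pipeline products
    mask='',           # discard the pipeline automask
)

# Overrides for the short follow-up run that fills the MODEL column after
# exiting interactive clean (e.g. before self-calibration).
fill_model_overrides = dict(
    restart=True,
    calcpsf=False,
    calcres=False,
    niter=0,
    interactive=False,
    savemodel='modelcolumn',
)
```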
try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called

    hif_mstransform(pipelinemode=pipelinemode)
    hifa_flagtargets(pipelinemode=pipelinemode)

    ## check the imaging product size and adjust the relevant
    ## imaging parameters (channel binning, cell size and image size)
    ## Users can comment this out if they don't want size mitigation.
    hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)

    ## Skip the continuum subtraction steps and make an aggregate

    ## Export new images to fits format if desired.
    hifa_exportdata(pipelinemode=pipelinemode)

finally:
try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called

    hif_uvcontfit(pipelinemode=pipelinemode)
    hif_uvcontsub(pipelinemode=pipelinemode)

    ## check the imaging product size and adjust the relevant
    ## imaging parameters (channel binning, cell size and image size)
    ## Users can comment this out if they don't want size mitigation.
    hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)

    ## Make new per spw continuum for revised spw(s) and new aggregate cont

    ## Export new images to fits format if desired.
    hifa_exportdata(pipelinemode=pipelinemode)

finally:
try:
    ## Load the *.ms files into the pipeline
    hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)

    ## Split off the science target data into its own ms (called

    hif_uvcontfit(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)
    hif_uvcontsub(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)

    ## check the imaging product size and adjust the relevant
    ## imaging parameters (channel binning, cell size and image size)
    ## Users can comment this out if they don't want size mitigation.
    hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)

    ## Make new continuum subtracted cube for selected spw(s) and fields

    ## Export new images to fits format if desired.
    hifa_exportdata(pipelinemode=pipelinemode)

finally:
==== Using uvcont table ====

This example uses the uvcont table to remake the cubes for a subset of spws and fields with channel binning and a more naturally-weighted Briggs robust parameter. It assumes you have performed the steps in the preceding section ([https://casaguides.nrao.edu/index.php/ALMA_Cycle_5_Imaging_Pipeline_Reprocessing#Option_B:_Restore_Pipeline_Continuum_Subtraction_using_UVCONT_Table_.28.22new.22_method.29 Option B]) to unpack the UVCONT calibration table and retrieve the corresponding applycal() statements.

<source lang="python">
## Edit the USER SET INPUTS section below and then execute

## resolution of images)
MyRobust=1.5

############################################################
Line 486: | Line 578: | ||
try: | try: | ||
## Load the *.ms files into the pipeline | ## Load the *.ms files into the pipeline | ||
hifa_importdata(vis=MyVis, pipelinemode=pipelinemode) | hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode) | ||
## Split off the science target data into its own ms (called | ## Split off the science target data into its own ms (called | ||
Line 493: | Line 585: | ||
hifa_flagtargets(pipelinemode=pipelinemode) | hifa_flagtargets(pipelinemode=pipelinemode) | ||
# For CASA 5.0 and 5.1, applycal | # For CASA 5.0 and 5.1, | ||
# uvcontfit table in applycal commmand is not yet compliant with | |||
# the new underlying "VI2" infrastructure, requiring an environment | # the new underlying "VI2" infrastructure, requiring an environment | ||
# variable "VI1CAL" be set. this limitation will be removed in the | # variable "VI1CAL" be set. this limitation will be removed in the | ||
# future, obviating the need for this workaround. | # future, obviating the need for this workaround. | ||
try: | try: | ||
vi1cal = os.environ['VI1CAL'] | vi1cal = os.environ['VI1CAL'] | ||
Line 518: | Line 611: | ||
else: | else: | ||
os.environ['VI1CAL']=vi1cal | os.environ['VI1CAL']=vi1cal | ||
## check the imaging product size and adjust the relevent | |||
## imaging parameters (channel binning, cell size and image size) | |||
## User can comment this out if they don't want size mitigation. | |||
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0) | |||
## Make new per spw continuum for revised spw(s) and new aggregate cont | ## Make new per spw continuum for revised spw(s) and new aggregate cont | ||
hif_makeimlist(specmode='mfs',spw=MySpw) | hif_makeimlist(specmode='mfs',spw=MySpw,field=MyFields) | ||
hif_makeimages(pipelinemode=pipelinemode) | hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode) | ||
hif_makeimlist(specmode='cont', | hif_makeimlist(specmode='cont',field=MyFields) | ||
hif_makeimages(pipelinemode=pipelinemode) | hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode) | ||
## Make new continuum subtracted cube for revised spw(s) | ## Make new continuum subtracted cube for revised spw(s) | ||
hif_makeimlist(specmode='cube',spw=MySpw,pipelinemode=pipelinemode) | hif_makeimlist(specmode='cube',spw=MySpw,nbins=MyNbins,field=MyFields, | ||
hif_makeimages(pipelinemode=pipelinemode) | pipelinemode=pipelinemode) | ||
hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode) | |||
## Export new images to fits format if desired. | ## Export new images to fits format if desired. |
Latest revision as of 21:36, 15 December 2021
About This Guide
This guide describes some examples for perfecting the interferometric imaging products from the ALMA Cycle 5 Pipeline. If your data were manually imaged by ALMA, you should instead consult the scriptForImaging.py delivered with your data.
The Section Restore Pipeline Calibration and Prepare for Re-imaging describes the first steps to take. After that, the individual sections are self-contained (and they typically assume the "Restore" has been performed). The exception is Use casa_pipescript to fully reproduce the pipeline products -- this section is fully self-contained and does not presuppose that the "Restore" has been run. It illustrates how to completely re-run the pipeline from beginning to end in order to reproduce the pipeline run done at your ARC.
Additional documentation on the Cycle 5 pipeline can be found at the ALMA Science Portal.
Note that the scripts described in this guide have only been tested in Linux.
How to Decide Whether to Reprocess Pipeline Images
In order to decide whether reprocessing will be beneficial for your project, you should examine the results of the pipeline imaging via the delivered weblog and note any imaging-specific comments in your QA2 report. The details of the various imaging pipeline stages, as well as examples of weblog output, can be found in the Users Guide; Sections 8 and 9 are especially useful in this regard.
For Cycle 4, the primary goal of the imaging pipeline was to produce images that are of sufficient quality that QA2 can be successfully performed, and that give users a good idea of what the data contain. While this is still true in Cycle 5, significant improvements have been made in the quality of pipeline imaging, notably through the introduction of an adaptive CLEAN auto-masking algorithm. In some cases these images are fine for doing science while in others, significant benefits may be obtained by re-imaging with particular science goals in mind.
Typical reasons for re-imaging include:
- Imaging improvements to be gained from interactively generating emission-specific clean masks and cleaning more deeply. The Cycle 5 pipeline uses an automated clean mask created by an auto-masking algorithm based on noise thresholds. In cases where this automated clean mask is not optimal, users may benefit from cleaning with interactive masking. For peak S/N > about 100, the images can also often be improved by self-calibration coupled with deeper cleaning using manual clean masks.
- Non-optimal continuum ranges. The pipeline uses heuristics that attempt to correctly identify continuum channels over a very broad range of science target line properties. Particularly for strong line forests (hot cores), and occasionally for TDM continuum projects, the pipeline ranges can be non-optimal -- too much bandwidth identified as continuum in the first case and too little in the second.
- Other science-goal-driven reprocessing needs may include:
- Desire to bin channels in the imaging stage to increase the S/N of cubes
- Desire to use a different Briggs Robust image weighting than the default of robust=0.5 (smaller robust = more toward uniform weighting, smaller beam, poorer S/N; larger robust = more toward natural weighting, larger beam, better S/N)
- Desire to uv-taper images to focus on extended emission (only available manually presently)
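For orientation, the weighting, binning, and tapering choices above map onto tclean parameters roughly as follows. This is a minimal sketch only: the MS and image names are hypothetical placeholders, and the values are illustrative, not recommendations -- in practice you would adapt one of the full commands from your casa_commands.log.

```python
# Sketch of how the re-imaging knobs above map onto tclean parameters.
# All names and values here are hypothetical placeholders.
tclean_params = dict(
    vis=['uid___A002_Xexample_target.ms'],  # hypothetical calibrated MS
    imagename='coolsource.spw17.cube.manual',
    specmode='cube',
    width=2,                 # channel binning to raise per-channel S/N
    weighting='briggs',
    robust=1.5,              # > 0.5: toward natural weighting, larger beam, better S/N
    uvtaper=['1.0arcsec'],   # taper to emphasize extended emission
    interactive=True,        # draw your own clean mask
    niter=1000,
)
# In a CASA session one would then run: tclean(**tclean_params)
```

Lowering robust below 0.5 would instead push toward uniform weighting (smaller beam, poorer S/N).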
The examples below demonstrate some of the more common ways that users may wish to perfect their imaging products to meet their science goals.
Getting and Starting CASA
If you do not already have CASA installed on your machine, you will have to download and install it.
Download and installation instructions are available here:
http://casa.nrao.edu/casa_obtaining.shtml
CASA 5.1.1 or later is required to reprocess ALMA Cycle 5 data using the scripts in this guide.
NOTE: To use pipeline tasks, you must start CASA with
casa --pipeline
CASA pipescript to fully reproduce the pipeline products
Although a user can restore their data by running scriptForPI.py (named member.<uid_name>.scriptForPI.py in the /script directory), a full CASA pipeline script that reproduces all pipeline products is also provided in the package. This script, named member.<uid_name>.hifa_calimage.casa_pipescript.py, is in the /script directory; it is shown below as an example. If the user executes
execfile('member.<uid_name>.hifa_calimage.casa_pipescript.py')
the script runs all necessary pipeline tasks to reproduce the calibration and imaging results produced by the pipeline. The workflows in the following sections reproduce parts of, and/or variants of, what is produced by this script. The subsequent workflows all start from the simple "restore" that is done by scriptForPI.py.
from recipes.almahelpers import fixsyscaltimes # SACM/JAO - Fixes
__rethrow_casa_exceptions = True
context = h_init()
context.set_state('ProjectSummary', 'proposal_code', '2017.1.00XXX.S')
context.set_state('ProjectSummary', 'piname', 'unknown')
context.set_state('ProjectSummary', 'proposal_title', 'unknown')
context.set_state('ProjectStructure', 'ous_part_id', 'X749565832')
context.set_state('ProjectStructure', 'ous_title', 'Undefined')
context.set_state('ProjectStructure', 'ppr_file', 'PPR_uid___A002_Xc24c3f_X2bb.xml')
context.set_state('ProjectStructure', 'ps_entity_id', 'uid://A002/Xc24c3f/X2b6')
context.set_state('ProjectStructure', 'recipe_name', 'hifa_calimage')
context.set_state('ProjectStructure', 'ous_entity_id', 'uid://A002/Xc24c3f/X2b1')
context.set_state('ProjectStructure', 'ousstatus_entity_id', 'uid://A002/Xc24c3f/X2ba')
try:
hifa_importdata(vis=['uid___A002_Xc3412f_X53ff'], dbservice=False, session=['session_1'])
fixsyscaltimes(vis = 'uid___A002_Xc3412f_X53ff.ms')# SACM/JAO - Fixes
h_save() # SACM/JAO - Finish weblog after fixes
h_init() # SACM/JAO - Restart weblog after fixes
hifa_importdata(vis=['uid___A002_Xc3412f_X53ff'], dbservice=False, session=['session_1'])
hifa_flagdata(pipelinemode="automatic")
hifa_fluxcalflag(pipelinemode="automatic")
hif_rawflagchans(pipelinemode="automatic")
hif_refant(pipelinemode="automatic")
h_tsyscal(pipelinemode="automatic")
hifa_tsysflag(pipelinemode="automatic")
hifa_antpos(pipelinemode="automatic")
hifa_wvrgcalflag(pipelinemode="automatic")
hif_lowgainflag(pipelinemode="automatic")
hif_setmodels(pipelinemode="automatic")
hifa_bandpassflag(pipelinemode="automatic")
hifa_spwphaseup(pipelinemode="automatic")
hifa_gfluxscaleflag(pipelinemode="automatic")
hifa_gfluxscale(pipelinemode="automatic")
hifa_timegaincal(pipelinemode="automatic")
hif_applycal(pipelinemode="automatic")
hifa_imageprecheck(pipelinemode="automatic")
hif_makeimlist(intent='PHASE,BANDPASS,CHECK')
hif_makeimages(pipelinemode="automatic")
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
hifa_exportdata(pipelinemode="automatic")
hif_mstransform(pipelinemode="automatic")
hifa_flagtargets(pipelinemode="automatic")
hif_makeimlist(specmode='mfs')
hif_findcont(pipelinemode="automatic")
hif_uvcontfit(pipelinemode="automatic")
hif_uvcontsub(pipelinemode="automatic")
hif_makeimages(pipelinemode="automatic")
hif_makeimlist(specmode='cont')
hif_makeimages(pipelinemode="automatic")
hif_makeimlist(specmode='cube')
hif_makeimages(pipelinemode="automatic")
hif_makeimlist(specmode='repBW')
hif_makeimages(pipelinemode="automatic")
finally:
h_save()
The relevant tasks for imaging pipeline reprocessing described in this CASA guide are hifa_importdata, hif_mstransform, hifa_flagtargets, hif_checkproductsize, hif_uvcontfit, hif_uvcontsub, hif_makeimlist, hif_makeimages.
Note: One important feature of the ALMA pipeline is that it checks the final imaging product size and makes any necessary adjustments to the channel binning, cell size, image size, and possibly the number of fields to be imaged, in order to avoid creating large images and cubes that take up significant computing resources but are not necessary for the user's science goals. The hif_checkproductsize task does this job, and we insert this task in all the imaging example scripts below. We recommend that users copy the hif_checkproductsize call from the provided casa_pipescript.py without changing the parameters maxcubelimit, maxproductsize, and maxcubesize. However, users can comment it out if they do not want this size mitigation, or they can explicitly specify the nbins, hm_imsize, and hm_cell parameters in the hif_makeimlist task.
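As a sketch, the two alternatives described in the note above would appear in an imaging script like this. The hif_checkproductsize values are the ones used throughout this guide; the hif_makeimlist values below are purely illustrative (the spw, binning, cell, and image sizes are hypothetical).

```python
# Option 1: let the pipeline mitigate the product size (values as used
# in the example scripts in this guide):
#
#   hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
#
# Option 2: skip mitigation and set the size-related hif_makeimlist
# parameters yourself. The values below are illustrative only.
makeimlist_params = dict(
    specmode='cube',
    nbins='17:4',            # bin spw 17 by 4 channels (hypothetical spw)
    hm_cell=['0.2arcsec'],   # explicit cell size
    hm_imsize=[512, 512],    # explicit image size in pixels
)
# In a CASA pipeline session: hif_makeimlist(**makeimlist_params)
```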
For reference, descriptions of the pipeline tasks for interferometric and single dish data reduction can be found in the CASA 4.7.2 Pipeline Reference Manual
Restore Pipeline Calibration and Prepare for Re-imaging (all Options)
STEP 1: Follow the instructions in your QA2 report for restoring pipeline-calibrated data using scriptForPI.py. NOTE: the SPACESAVING parameter cannot be larger than 1, and, for pipeline calibrated and imaged data, scriptForPI.py does not automatically split science spectral windows.
Once completed, the following files and directories will be present, with specific things about pipeline re-imaging noted:
- calibrated/
- In the case where the PI explicitly set "DOSPLIT=True" before running scriptForPI.py, this directory contains files called <uid_name>.ms.split.cal (one for each execution in the MOUS). These files have been split to contain the calibrated pipeline uv-data in the DATA column and only the science spectral window ids (spws) that match the spws listed in the pipeline weblog and other pipeline-produced products like the science target flag template files (*.flagtargetstemplate.txt) or continuum ranges (cont.dat). Though this type of file has been the traditional starting point for manual ALMA imaging, ms.split.cal files CANNOT BE DIRECTLY USED IN THE EXAMPLES GIVEN IN THIS GUIDE.
- Provided that the restore is done with SPACESAVING=1, the "calibrated" directory contains a "working" directory with the <uid_name>.ms files (i.e. no split has been run on them), which are of the form expected as the starting point of the ALMA imaging pipeline. This directory also contains the *.flagtargetstemplate.txt file for each execution, which can be used to do science-target-specific flagging. This is the best location to do ALMA pipeline image reprocessing.
- calibration/
- This directory contains a continuum range file named "cont.dat", with the frequency ranges identified by the pipeline as being likely to only contain continuum emission. If the cont.dat is present in the "calibrated/working" directory where pipeline imaging tasks are run, it will be used.
- log/
- This directory contains the <mous_name>.hifa_calimage.casa_commands.log, which contains all the equivalent casa commands run during the course of the pipeline processing, in particular the tclean commands used to make the image products.
- product/
- The original pipeline image products
- qa/
- The original pipeline weblog
- raw/
- The raw asdm(s)
- README and QA2 report
- The README file contains information about the content of the package and a link to obtain the QA2 report in SnooPI.
- The QA2 report contains a summary of the scheduling block (SB), calibration, and imaging results.
- script/
- Contains the scriptForPI.py (named member.<uid_name>.scriptForPI.py), which internally runs member.<uid_name>.hifa_calimage.casa_piperestorescript.py and other tasks necessary to restore the data.
- Also contains member.<uid_name>.hifa_calimage.casa_pipescript.py, a full CASA pipeline script that reproduces all pipeline products.
STEP 2: Change to the directory that contains the calibrated data suitable for running pipeline imaging tasks (i.e. the *.ms files), called "calibrated/working" after the pipeline restore, and start CASA 5.1.1 or later.
casa --pipeline
STEP 3: Run the following command in CASA to copy the cont.dat file that contains the frequency ranges used to create the continuum images and the continuum subtraction to the directory you will be working in.
os.system('cp ../../calibration/cont.dat ./cont.dat')
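If you plan to inspect or edit cont.dat (as in the examples later in this guide), it can help to read the ranges programmatically first. The sketch below assumes the typical cont.dat layout of "Field:" and "SpectralWindow:" headers followed by "<f0>~<f1>GHz LSRK" range lines; the field and frequency values are invented for illustration, so verify the layout against your own file before relying on it.

```python
# Minimal sketch of reading cont.dat-style continuum ranges.
# The block layout assumed here ("Field:", "SpectralWindow:", then
# "<f0>~<f1>GHz LSRK" lines) should be checked against your own cont.dat.
import io

EXAMPLE = """\
Field: CoolSource1

SpectralWindow: 17
350.694~350.966GHz LSRK
351.150~351.400GHz LSRK
"""

def read_cont_ranges(fh):
    """Return {(field, spw): [(f0_GHz, f1_GHz), ...]}."""
    ranges, field, spw = {}, None, None
    for line in fh:
        line = line.strip()
        if line.startswith('Field:'):
            field = line.split(':', 1)[1].strip()
        elif line.startswith('SpectralWindow:'):
            spw = line.split(':', 1)[1].strip()
        elif '~' in line:
            f0, f1 = line.split('GHz')[0].split('~')
            ranges.setdefault((field, spw), []).append((float(f0), float(f1)))
    return ranges

ranges = read_cont_ranges(io.StringIO(EXAMPLE))
```

With a real file you would pass `open('cont.dat')` instead of the in-memory example.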
Common Re-imaging Examples
Next, choose the example below that best fits your use case. Due to the need to preserve the indentation of the python commands, the examples will work best if you copy the entire block of python commands (orange-shaded regions) for a particular example into its own python script, check that the indentation is preserved, edit the USER SET INPUTS section, and then execute the file.
Restore Pipeline Continuum Subtraction and Manually Make Image Products
Starting in Cycle 5, ALMA pipeline-calibrated data will be delivered with a calibration table which describes the continuum subtraction the pipeline did. This provides two options to perform the pipeline-determined continuum subtraction: applying this calibration table (the new method); or re-running the pipeline stages that determine and perform the continuum subtraction. Both methods are equivalent. The first (new) method will be quicker and will exactly reproduce the pipeline continuum subtraction under a wider range of circumstances (for instance, with a newer CASA version). The second (older) method is somewhat more time consuming, but more readily allows tweaking the continuum range selection.
Option A: Re-determine and Apply Pipeline Continuum Subtraction using Pipeline Tasks ("Old" method)
The following script splits off the calibrated science target data for all spws and fields for each execution, applies any flagging commands found in the <uid_name>_flagtargetstemplate.txt file(s) (one for each execution), uses the existing cont.dat file to fit and subtract the continuum emission, leaving the result in the CORRECTED column. Before running this script, you can manually modify both the <uid_name>_flagtargetstemplate.txt file(s) and cont.dat file to add flag commands or change the cont.dat frequency ranges. Once you're happy with the script, you can run it in a CASA session (that was started with the --pipeline option) using execfile(script_name).
## Edit the USER SET INPUTS section below and then execute
## this script (note it must be in the 'calibrated/working' directory).
import glob as glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code='Restore Continuum Subtraction'
############################################################
## Make a list of all uv-datasets appended with *.ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
## Fit and subtract the continuum using the cont.dat for all spws all fields
hif_uvcontfit(pipelinemode=pipelinemode)
hif_uvcontsub(pipelinemode=pipelinemode)
finally:
h_save()
Option B: Restore Pipeline Continuum Subtraction using UVCONT Table ("new" method)
STEP 1:
Unpack the auxiliary calibration tables, which contain a description of the continuum subtraction.
cd ../../calibration/
gunzip -c member.uid*.hifa_calimage.auxcaltables.tgz | tar xvf -
There will be one such table for each execution, with file names like member.(MOUS id).session_2.auxcaltables.tgz, member.(MOUS id).session_3.auxcaltables.tgz, etc.; repeat the above steps for each.
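The per-session unpacking can also be scripted; the sketch below is a minimal helper (the function name is ours, and the glob pattern assumes the member.uid*...auxcaltables.tgz naming shown above).

```python
# Sketch: unpack every per-session auxiliary caltable archive in the
# current directory. The glob pattern assumes the
# member.uid*...auxcaltables.tgz naming convention shown above.
import glob
import tarfile

def unpack_aux_archives(pattern='member.uid*auxcaltables.tgz', dest='.'):
    """Extract each matching .tgz archive; return the list of archives unpacked."""
    unpacked = []
    for path in sorted(glob.glob(pattern)):
        with tarfile.open(path, 'r:gz') as tf:  # .tgz = gzipped tar
            tf.extractall(dest)
        unpacked.append(path)
    return unpacked

unpacked = unpack_aux_archives()
```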
STEP 2:
Copy the auxiliary calibration tables into the working directory
cp -r *uvcontfit*uvcont.tbl ../calibrated/working
While in the /calibration directory, take a look at the files called *auxcalapply*txt. There will be one file for each execution, with names like (EB uid_name)_target.ms.auxcalapply.txt. These contain the applycal() statements with which, with minor modifications, you will apply the uv continuum subtraction as a calibration. You will use these in the following step. The contents will look something like this:
applycal(vis='/lustre/naasc/sciops/comm/amcnicho/pipeline/root/2013.1.00722.S_2017_09_13T14_30_33.955/SOUS_uid___A001_X145_X134/GOUS_uid___A001_X145_X135/MOUS_uid___A001_X145_X136/working/uid___A002_X9fddd8_Xc52_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~35', gaintable='/lustre/naasc/sciops/comm/amcnicho/pipeline/root/2013.1.00722.S_2017_09_13T14_30_33.955/SOUS_uid___A001_X145_X134/GOUS_uid___A001_X145_X135/MOUS_uid___A001_X145_X136/working/uid___A002_X9fddd8_Xc52_target.ms.hif_uvcontfit.s27_3.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
Looking ahead, the changes you will make to these statements will be to eliminate the long "full-path" prefix since you will be working with all required files in the "working" directory, which is where you will run the pipeline.
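Rather than editing each statement by hand, the prefix can also be stripped programmatically. A minimal sketch (the helper name is ours, not part of the pipeline, and the path below is a hypothetical stand-in for the long delivery paths):

```python
# Sketch: reduce a full-path vis/gaintable argument from an auxcalapply.txt
# statement to a bare file name that resolves in 'calibrated/working'.
import os

def strip_path(value):
    """'/long/delivery/path/working/file.ms' -> 'file.ms'."""
    return os.path.basename(value.rstrip('/'))

# Hypothetical full-path argument like those in the auxcalapply.txt files:
vis = '/some/long/delivery/path/working/uid___A002_X9fddd8_Xc52_target.ms'
print(strip_path(vis))   # uid___A002_X9fddd8_Xc52_target.ms
```

The same helper works for the gaintable argument, since the uvcont.tbl tables were copied into the working directory in STEP 2.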
Finally, go back to the working directory
cd ../calibrated/working
STEP 3:
Edit the applycal() statements into the following script in the indicated place:
## Edit the USER SET INPUTS section below and then execute
## this script (note it must be in the 'calibrated/working' directory).
import glob as glob
import os
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code = 'PIPELINE CONTSUB'
# If you wish, for some reason, to restrict the SPWs that are imaged
MySpw=''
# PLEASE NOTE that for this use case you will also need to edit in
# the applycal() statements for continuum subtraction in the section
# indicated below.
############################################################
## Make a list of all uv-datasets appended with *.ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
## Fit and subtract the continuum using revised cont.dat for all spws
# we are skipping these in favor of the applycal() which is faster.
#hif_uvcontfit(pipelinemode=pipelinemode)
#hif_uvcontsub(pipelinemode=pipelinemode)
# For CASA 5.0 and 5.1, using a
# uvcontfit table in an applycal command is not yet compliant with
# the new underlying "VI2" infrastructure, requiring that an environment
# variable "VI1CAL" be set. This limitation will be removed in the
# future, obviating the need for this workaround.
try:
vi1cal = os.environ['VI1CAL']
except KeyError:
vi1cal = None
finally:
os.environ['VI1CAL']= '1'
#### PUT THE AUXCALAPPLY.TXT STATEMENTS HERE####
# the vis and gaintables need to be edited to a valid absolute or relative path.
#
applycal(vis='uid___A002_X9f54f7_X183_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~36', gaintable='uid___A002_X9f54f7_X183_target.ms.hif_uvcontfit.s27_1.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
applycal(vis='uid___A002_X9fddd8_Xc52_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~35', gaintable='uid___A002_X9fddd8_Xc52_target.ms.hif_uvcontfit.s27_3.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
#
#### END AUXCALAPPLY STATEMENTS ####
finally:
h_save()
STEP 4:
- verify that each of the gaintables (*uvcontfit*uvcont.tbl) referenced in the applycal() commands you inserted exists in the working directory.
- verify that each of the MSs referenced exists, except that you do not need the _target suffix (the _target.ms files are produced by the hif_mstransform() step). In this example we want to verify that the files uid___A002_X9f54f7_X183.ms and uid___A002_X9fddd8_Xc52.ms exist in 'working', which should be the case if you successfully ran the restore (STEP 1 of the Restore section).
- in a CASA pipeline session, execute the script using execfile()
Result
The result of following either of the above procedures for continuum subtraction (Option A or Option B) will be a measurement set called (MOUS UID name)_target.ms. The DATA column of this MS will have the fully calibrated but not continuum-subtracted visibilities. The CORRECTED column has the fully calibrated and continuum-subtracted visibilities. This is the standard format of the _target.ms file that the Cycle 4 and 5 pipelines produce. Only science spectral windows and science targets (not calibrators) are included in this target MS.
Make Images Manually
At this point you will have created a *target.ms for each execution of your SB. Each of these measurement sets contains the original calibrated continuum + line data in the DATA column and the calibrated continuum-subtracted data in the CORRECTED column. The CASA task for imaging, tclean (which is used by the ALMA Pipeline), allows the user to select which column to use for imaging. tclean also accepts a list for the vis parameter, so it is not necessary to concat the data before imaging.
NOTE: If you think you might want to self-calibrate your data using either the continuum or line emission, it is ESSENTIAL that you first split off the column that you want to operate on before imaging. Otherwise, the CORRECTED column containing the continuum-subtracted data will be overwritten when applycal is run during the self-calibration process. Users of CASA 5.1.x using tclean() with multi-scale should also be aware that there is a known issue in which the MODEL column is not correctly written under some circumstances. The issue and the work-around for it are described at [1]
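The recommended column split before self-calibration uses CASA's split task; as a sketch (the MS names below are hypothetical placeholders):

```python
# Sketch: preserve the continuum-subtracted data before self-calibration
# by splitting the CORRECTED column into a new MS (names hypothetical).
split_params = dict(
    vis='uid___A002_Xexample_target.ms',
    outputvis='uid___A002_Xexample_target.line.ms',
    datacolumn='corrected',  # continuum-subtracted data -> DATA column of new MS
)
# In a CASA session: split(**split_params)
```

Self-calibration applycal() runs on the new MS can then no longer overwrite the continuum-subtracted visibilities in the original.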
To manually clean your data at this stage, there are two options:
- Use modified versions of the relevant tclean commands from "log/<MOUS_name>.hifa_calimage.casa_commands.log". These are the exact commands originally run by the imaging pipeline to produce your imaging products.
- They will contain within them the frequency ranges (from the cont.dat) used for making the various images.
- There will be two tclean commands per image product: the first, with an image name containing iter0, only makes a dirty image, while the second, with iter1, makes a cleaned image.
- For example, to make the aggregate continuum image but with interactive clean masking, simply copy the corresponding iter1 command (it will contain all of the spw numbers in its name), but set interactive=True, calcres=True, calcpsf=True, restart=False. Additionally set mask='' so the pipeline mask is not reused. If you are using the *target.ms file(s) you can keep datacolumn='DATA'.
- Note: if you are trying to save the model, i.e. for self-calibration, you must also set savemodel='modelcolumn' (or 'virtual'). Also be aware that exiting from interactive clean using the red X symbol in the interactive viewer does not save the model in CASA 4.7.0 tclean. To fill the model after stopping this way, rerun the same clean command (being careful not to remove existing files) but set restart=True, calcpsf=False, calcres=False, niter=0, interactive=False. This re-run only takes a couple of minutes with these settings.
- If you have split off the data of interest for self-calibration (as recommended above), you will first need to image with datacolumn='DATA'. After applying a self-calibration table, you will want to image datacolumn='CORRECTED'. This happens by default in typical data reduction use cases, since tclean uses the CORRECTED column when it exists and automatically falls back to the DATA column when it does not.
- Use examples on the casaguide page TCLEAN_and_ALMA to formulate your own special purpose commands.
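Putting the iter1 modifications above together, an interactive aggregate-continuum re-clean would be parameterized roughly as follows. The image and MS names below are hypothetical -- start from the actual iter1 command in your casa_commands.log and change only the listed parameters.

```python
# Sketch of the parameter changes to an iter1 tclean command for
# interactive re-cleaning of the aggregate continuum.
# All names and values are hypothetical placeholders.
tclean_params = dict(
    vis=['uid___A002_Xexample_target.ms'],
    imagename='coolsource.spw17_19_21_23.mfs.I.manual',
    specmode='mfs',
    datacolumn='DATA',   # *target.ms: calibrated continuum+line data live in DATA
    interactive=True,    # draw the clean mask yourself
    mask='',             # discard the pipeline automask
    calcres=True,
    calcpsf=True,
    restart=False,
    niter=10000,
)
# In a CASA session: tclean(**tclean_params)
```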
Make Pipeline Aggregate Continuum Image With All Channels
This example moves the cont.dat file to a backup name so that it is not picked up by the pipeline, in which case all unflagged channels are used to make an aggregate continuum image with no continuum subtraction and default pipeline cleaning. This may be beneficial for continuum-only projects for which the hif_findcont stage of the weblog shows that more continuum bandwidth is possible than was identified (e.g. due to noise spikes).
## Edit the USER SET INPUTS section below and then execute
## this script (note it must be in the 'calibrated/working' directory).
import glob as glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code='NEW AGGREGATE CONT'
############################################################
## Move cont.dat to another name if it exists
os.system('mv cont.dat original.cont.dat')
## Make a list of all uv-datasets appended with *.ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
## Check the imaging product size and adjust the relevant
## imaging parameters (channel binning, cell size and image size).
## Comment this out if you do not want size mitigation.
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
## Skip the continuum subtraction steps and make an aggregate
## continuum image with all unflagged channels (file named
## cont.dat should NOT be present in directory).
hif_makeimlist(specmode='cont',pipelinemode=pipelinemode)
hif_makeimages(pipelinemode=pipelinemode)
## Export new images to fits format if desired.
hifa_exportdata(pipelinemode=pipelinemode)
finally:
h_save()
Revise the Continuum Ranges (cont.dat) Before Pipeline Continuum Subtraction and Remake Pipeline Images
This example uses the pipeline imaging tasks to remake the pipeline imaging products for one spw (17 in the example) after manually editing the cont.dat file.
## Edit the cont.dat file(s) for the spw(s) you want
## to change the continuum subtraction for. In this example
## spw 17 was changed.
## Edit the USER SET INPUTS section below and then execute
## this script (note it must be in the 'calibrated/working' directory).
import glob as glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code = 'NEW CONTSUB'
## Select spw(s) that have new cont.dat parameters
## If all spws have changed use MySpw=''
MySpw='17'
############################################################
## Make a list of all uv-datasets appended with *.ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
## Fit and subtract the continuum using revised cont.dat for all spws
hif_uvcontfit(pipelinemode=pipelinemode)
hif_uvcontsub(pipelinemode=pipelinemode)
## Check the imaging product size and adjust the relevant
## imaging parameters (channel binning, cell size and image size).
## Comment this out if you do not want size mitigation.
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
## Make new per spw continuum for revised spw(s) and new aggregate cont
hif_makeimlist(specmode='mfs',spw=MySpw)
hif_makeimages(pipelinemode=pipelinemode)
hif_makeimlist(specmode='cont',pipelinemode=pipelinemode)
hif_makeimages(pipelinemode=pipelinemode)
## Make new continuum subtracted cube for revised spw(s)
hif_makeimlist(specmode='cube',spw=MySpw,pipelinemode=pipelinemode)
hif_makeimages(pipelinemode=pipelinemode)
## Export new images to fits format if desired.
hifa_exportdata(pipelinemode=pipelinemode)
finally:
h_save()
Restore Pipeline Continuum Subtraction for Subset of SPWs and Fields and Use Channel Binning for Pipeline Imaging of Cubes
Using Pipeline Tasks
This example uses the pipeline imaging tasks to remake the cubes for a subset of spws and fields with channel binning and a more naturally-weighted Briggs robust parameter.
## Edit the USER SET INPUTS section below and then execute
## this script (note it must be in the 'calibrated/working' directory).
import glob as glob
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code = 'SUBSET CUBE IMAGING'
## Select spw(s) to image and the channel binning for each specified
## spw. All spws listed in MySpw must have a corresponding MyNbins
## entry, even if it is 1 for no binning.
MySpw='17,23'
MyNbins='17:8,23:2'
## Select subset of sources to image by field name.
## To select all fields, set MyFields=''
MyFields='CoolSource1,CoolSource2'
## Select Briggs Robust factor for data weighting (affects angular
## resolution of images)
MyRobust=1.5
############################################################
## Make a list of all uv-datasets whose names end in .ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
## In this example we split off all science targets and science
## spws, however these steps could also contain the spw and field
## selections
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
## Fit and subtract the continuum using existing cont.dat
## for selected spws and fields only.
hif_uvcontfit(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)
hif_uvcontsub(spw=MySpw,field=MyFields,pipelinemode=pipelinemode)
## Check the imaging product size and adjust the relevant
## imaging parameters (channel binning, cell size and image size).
## Comment this out if you do not want size mitigation.
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
## Make new continuum subtracted cube for selected spw(s) and fields
hif_makeimlist(specmode='cube',spw=MySpw,nbins=MyNbins,field=MyFields,
pipelinemode=pipelinemode)
hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode)
## Export new images to FITS format if desired.
hifa_exportdata(pipelinemode=pipelinemode)
finally:
h_save()
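The requirement that every spw in MySpw has a matching MyNbins entry can be checked before launching the pipeline. A sketch in plain Python (the check_nbins helper is our own name for illustration, not a pipeline task):

```python
def check_nbins(myspw, mynbins):
    """Verify every spw in a CASA spw selection string has an nbins entry.

    myspw is e.g. '17,23'; mynbins is e.g. '17:8,23:2'.
    Raises ValueError on a missing or malformed entry.
    """
    spws = {s.strip() for s in myspw.split(',') if s.strip()}
    binned = {}
    for entry in mynbins.split(','):
        spw, _, nbins = entry.partition(':')
        if not nbins.strip().isdigit():
            raise ValueError("malformed MyNbins entry: %r" % entry)
        binned[spw.strip()] = int(nbins)
    missing = spws - set(binned)
    if missing:
        raise ValueError("no MyNbins entry for spw(s): %s" % sorted(missing))
    return binned

print(check_nbins('17,23', '17:8,23:2'))  # {'17': 8, '23': 2}
```

Running this on the USER SET INPUTS values catches a mismatch immediately, rather than partway through an hours-long pipeline run.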
=== Using uvcont table ===
This example uses the uvcont table to remake the cubes for a subset of spws and fields, with channel binning and a Briggs robust parameter closer to natural weighting. It assumes you have performed the steps in the preceding section to unpack the UVCONT calibration table and retrieve the corresponding applycal() statements.
## Edit the USER SET INPUTS section below and then execute
## this script (note: it must be run from the 'calibrated/working' directory).
import glob
import os
__rethrow_casa_exceptions = True
pipelinemode='automatic'
context = h_init()
###########################################################
## USER SET INPUTS
## Select a title for the weblog
context.project_summary.proposal_code = 'PIPELINE CONTSUB'
## Select spw(s) to image and channel binning for each specified
## spw in MySpw. All spws listed in MySpw must have a corresponding
## MyNbins entry, even if it is 1 for no binning.
MySpw='17,23'
MyNbins='17:8,23:2'
## Select subset of sources to image by field name.
## To select all fields, set MyFields=''
MyFields='CoolSource1,CoolSource2'
## Select Briggs Robust factor for data weighting (affects angular
## resolution of images)
MyRobust=1.5
############################################################
## Make a list of all uv-datasets whose names end in .ms
MyVis=glob.glob('*.ms')
try:
## Load the *.ms files into the pipeline
hifa_importdata(vis=MyVis, dbservice=False, pipelinemode=pipelinemode)
## Split off the science target data into its own ms (called
## *target.ms) and apply science target specific flags
hif_mstransform(pipelinemode=pipelinemode)
hifa_flagtargets(pipelinemode=pipelinemode)
    # For CASA 5.0 and 5.1, applying a uvcontfit table with the
    # applycal command is not yet compliant with the new underlying
    # "VI2" infrastructure, requiring that the environment variable
    # "VI1CAL" be set. This limitation will be removed in the future,
    # obviating the need for this workaround.
    # Save any pre-existing VI1CAL value (None if unset) so it can be
    # restored after the applycal() calls, then enable the workaround.
    vi1cal = os.environ.get('VI1CAL')
    os.environ['VI1CAL'] = '1'
    #### PUT THE AUXCALAPPLY.TXT STATEMENTS HERE ####
    # The vis and gaintable arguments need to be edited to valid
    # absolute or relative paths.
#
applycal(vis='uid___A002_X9f54f7_X183_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~36', gaintable='uid___A002_X9f54f7_X183_target.ms.hif_uvcontfit.s27_1.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
applycal(vis='uid___A002_X9fddd8_Xc52_target.ms', field='', intent='', spw='17,19,21,23', antenna='0~35', gaintable='uid___A002_X9fddd8_Xc52_target.ms.hif_uvcontfit.s27_3.SPT0346-52.uvcont.tbl', gainfield='', spwmap=[], interp='', calwt=False)
#
#### END AUXCALAPPLY STATEMENTS ####
    # Restore VI1CAL to its original state (unset it if it was not
    # set before the workaround).
if vi1cal is None:
del os.environ['VI1CAL']
else:
os.environ['VI1CAL']=vi1cal
## Check the imaging product size and adjust the relevant
## imaging parameters (channel binning, cell size and image size).
## Comment this out if you do not want size mitigation.
hif_checkproductsize(maxcubelimit=40.0, maxproductsize=400.0, maxcubesize=30.0)
## Make new per spw continuum for revised spw(s) and new aggregate cont
hif_makeimlist(specmode='mfs',spw=MySpw,field=MyFields)
hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode)
hif_makeimlist(specmode='cont',field=MyFields)
hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode)
## Make new continuum subtracted cube for revised spw(s)
hif_makeimlist(specmode='cube',spw=MySpw,nbins=MyNbins,field=MyFields,
pipelinemode=pipelinemode)
hif_makeimages(robust=MyRobust,pipelinemode=pipelinemode)
## Export new images to FITS format if desired.
hifa_exportdata(pipelinemode=pipelinemode)
finally:
h_save()
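The save-set-restore dance around the applycal() calls above can also be wrapped in a context manager, so the environment variable is restored even if an applycal() raises an exception. A sketch (set_env is our own illustrative name, not part of the pipeline or CASA):

```python
import os
from contextlib import contextmanager

@contextmanager
def set_env(name, value):
    """Temporarily set an environment variable, restoring its prior state."""
    previous = os.environ.get(name)  # None if it was unset
    os.environ[name] = value
    try:
        yield
    finally:
        if previous is None:
            del os.environ[name]
        else:
            os.environ[name] = previous

# Usage, in place of the explicit save/restore block:
# with set_env('VI1CAL', '1'):
#     applycal(vis=..., gaintable=..., calwt=False)
```

This keeps the workaround scoped to exactly the statements that need it, which matters inside the script's outer try/finally.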