Guide To Processing ALMA Data for Cycle 0

This guide describes the steps you can use to process ALMA data, from locating and downloading your data in the public archive to making science-ready images.

We will use a sample data set from ALMA Cycle 0 in this guide. Data for Cycle 1 and beyond will be delivered in a different format and will require a separate guide.

In Cycle 0, ALMA data are delivered as a set of calibrated data files and a sample of reference images. The data were calibrated and imaged by an ALMA scientist at one of the ALMA Regional Centers (ARCs). The user can start with the data supplied and do full imaging. In many cases, the imaging can be dramatically improved by including "self-calibration" steps. Self-calibration is the process of using the detected signal in the target source itself to tune the phase (and, to a lesser extent, amplitude) calibrations as a function of time.
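As a preview, a single phase-only self-calibration cycle in CASA might look like the sketch below. This is illustrative only: the image names, solution interval, and reference antenna are placeholders, not values taken from this project's scripts.

# Sketch of one phase-only self-calibration cycle (placeholder values).
clean(vis='calibrated.ms', imagename='TWHya_initial',
      niter=500, interactive=False)                  # initial CLEAN model
gaincal(vis='calibrated.ms', caltable='selfcal_p1.gcal',
        solint='30s', calmode='p', refant='DV06')    # phase-only solutions
applycal(vis='calibrated.ms', gaintable=['selfcal_p1.gcal'])
clean(vis='calibrated.ms', imagename='TWHya_selfcal1',
      niter=1000, interactive=False)                 # re-image corrected data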

The data package includes the calibration scripts used by the ARC scientist to perform the initial calibration and imaging steps. In most cases, users will not need to modify the calibration, but in some cases tuning the calibration steps can improve the final images.

Typically, users interested in doing science with Cycle 0 data from the ALMA archive will take the following steps:

  1. Download the data from the Archive
  2. Inspect the Quality Assessment plots and data files
  3. Inspect the reference images supplied with the data package
  4. Combine the calibrated data sets into a single calibrated measurement set
  5. Self-calibrate and image the combined data set
  6. Generate moment maps and other analysis products (see the sketch after this list)
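For the last step, moment maps can be made with the CASA immoments task. A minimal sketch, assuming a hypothetical CLEANed cube named after the delivered products:

# Moment-0 (integrated intensity) map from a hypothetical image cube:
immoments(imagename='TWHya.N2H+.image', moments=[0],
          outfile='TWHya.N2H+.mom0')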

Interested users may wish to review the calibration steps in detail, make modifications to the calibration script, and generate new calibrated data sets.

About the Sample Data: H2D+ in TW Hya

The data for this example come from ALMA Project 2011.0.00340.S, "Searching for H2D+ in the disk of TW Hya v1.5", for which the PI is Chunhua Qi. Part of the data for this project has been published in Qi et al. 2013 (http://adsabs.harvard.edu/abs/2013Sci...341..630Q).

The observation was set up with two spectral windows, covering the H2D+ and N2H+ lines imaged in the delivered products described below.

The project required three executions of the scheduling block; each execution produces its own data set, and the three calibrated data sets are combined into a single measurement set during processing.

Prerequisites: Computing Requirements

ALMA data sets can be very large and require significant computing resources for efficient processing. The data set used in this example begins with a download of 176 GB of data files. A description of recommended computing resources is given at http://casa.nrao.edu/casa_hardware-requirements.shtml. Those who do not have sufficient computing power may wish to arrange a visit to one of the ARCs to use the computing facilities at these sites. To arrange a visit to an ARC, submit a ticket to the ALMA Helpdesk (https://help.almascience.org/).

Getting the Data: The ALMA Data Archive

The ALMA data archive is part of the ALMA Science Portal. A copy of the archive is stored at each of the ARCs, and you can connect to the nearest archive through these links:

  1. North America: https://almascience.nrao.edu/aq/
  2. Europe: https://almascience.eso.org/aq/
  3. East Asia: https://almascience.nao.ac.jp/aq/


[Figure: The ALMA Archive Query page (Archive_interface.png).]

Upon entry into the ALMA Archive Query page, set the "Results View" option to "project" (see the red highlight #1 in the figure) and set the Project Code to 2011.0.00340.S (red highlight #2). Note that if you leave the "Results View" set to "raw data", you will see three rows of data sets in the results page. These correspond to three executions of the observing script. In fact, for Cycle 0 data these rows contain copies of the same data set, so take care not to download the (large!) data set three times. By setting "Results View" to "project", you see just one entry, and that is the one you'd like to download.

You can download the data through the Archive GUI. For more control over the download process, you can use the Unix shell script provided on the Request Handler page. This script has a name like "downloadRequest84998259script.sh". Put this file into a directory with ample disk space and execute it in your shell. For example, in bash:

% chmod +x downloadRequest84998259script.sh
% ./downloadRequest84998259script.sh

Unpacking the data

The data you have downloaded include 17 tar files. Unpack them using the following command:

% for i in *.tar; do tar -xvf "$i"; done

At this point you will have a directory called "2011.0.00340.S" with the full data distribution.

Overview of Delivered Data and Products

All of the data files are several directories deep in the data distribution. To get to the relevant directory, do:

% cd 2011.0.00340.S/sg_ouss_id/group_ouss_id/member_ouss_2012-12-05_id

Here you will find the following entries:

% ls
calibrated  calibration  log  product  qa  raw  README  script

The README file describes the files in the distribution and includes notes from the ALMA scientist who performed the initial calibration and imaging.

The directories contain:

  calibrated/
    uid___A002_X554543_X207.ms.split.cal
    uid___A002_X554543_X3d0.ms.split.cal
    uid___A002_X554543_X667.ms.split.cal

These are the calibrated data sets, ready to be combined and imaged.
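Before combining the data sets, it can be useful to write a summary of each one with the CASA listobs task; the output file name here is just an example:

# Summarize one calibrated data set to a text file:
listobs(vis='uid___A002_X554543_X207.ms.split.cal',
        listfile='X207.listobs.txt')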

  calibration/
    uid___A002_X554543_X207.calibration
    uid___A002_X554543_X207.calibration.plots
    uid___A002_X554543_X3d0.calibration
    uid___A002_X554543_X3d0.calibration.plots
    uid___A002_X554543_X667.calibration
    uid___A002_X554543_X667.calibration.plots

The "calibration" directories contain auxiliary measurement sets generated in the calibration process.

The "calibration.plots" directories contain (a few hundred) plots generated during the calibration process. These can be useful for the expert user to assess the quality of the calibration at each step.

  log/
    340.log
    Imaging.log
    uid___A002_X554543_X207.calibration.log
    uid___A002_X554543_X3d0.calibration.log
    uid___A002_X554543_X667.calibration.log

These are the CASA log files produced while the data were processed: one calibration log per execution, along with logs from the imaging and other processing steps.

  product/
    TWHya.continuum.fits
    TWHya.continuum.mask
    TWHya.H2D+.mask
    TWHya.N2H+.fits
    TWHya.N2H+.mask

These files are the final products of the initial calibration and imaging process. They are "reference" images used to assess the quality of the observation, but they are not necessarily science-ready. They are useful for initial inspection.
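For a quick look at a delivered reference image, you can display it with the CASA viewer, for example:

# Display one of the reference images (path relative to the member_ouss directory):
imview(raster='product/TWHya.continuum.fits')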

  qa/
    uid___A002_X554543_X207__qa2_part1.png
    uid___A002_X554543_X207__qa2_part2.png
    uid___A002_X554543_X207__qa2_part3.png
    uid___A002_X554543_X207__textfile.txt
    uid___A002_X554543_X3d0__qa2_part1.png
    uid___A002_X554543_X3d0__qa2_part2.png
    uid___A002_X554543_X3d0__qa2_part3.png
    uid___A002_X554543_X3d0__textfile.txt
    uid___A002_X554543_X667__qa2_part1.png
    uid___A002_X554543_X667__qa2_part2.png
    uid___A002_X554543_X667__qa2_part3.png
    uid___A002_X554543_X667__textfile.txt

The data from each execution of the scheduling block go through a quality assessment ("QA2", hence the file names). The files in this directory give the results of this assessment. All data delivered to the public ALMA Archive have passed the quality assessment. It is worthwhile to review the plots and text files contained here. You will find plots of the antenna configuration, UV coverage, calibration results, Tsys, and so on.

  raw/
    uid___A002_X554543_X207.ms.split
    uid___A002_X554543_X3d0.ms.split
    uid___A002_X554543_X667.ms.split

These are the "raw" data files. In fact, these data files already have certain a priori calibrations applied. If you consider the calibration scripts provided, these files have steps 0-6 applied. This includes a priori flagging, and application of WVR, Tsys, and antenna position corrections. If you would like to tune or refine the calibration, these files will be the starting point.

  script/
    import_data.py
    scriptForFluxCalibration.py
    scriptForImaging.py
    uid___A002_X554543_X207.ms.scriptForCalibration.py
    uid___A002_X554543_X3d0.ms.scriptForCalibration.py
    uid___A002_X554543_X667.ms.scriptForCalibration.py

These are the scripts developed and applied by the ALMA scientist to calibrate the data and generate the reference images. These scripts cannot be applied directly to the raw data provided in this data distribution, but they serve as a valuable reference for the steps needed to reprocess the data, should you choose to do so.





CASA 3.4 vs. 4.2

The typical approach is to use the delivered calibrated data; the user then only needs to do self-calibration and imaging.

Those who want to tune the calibration can instead go through the calibration steps themselves, as described under "Refining the Calibration" below.

A Recommended Course of Action

  1. Inspect the reference data in the "product" directory
  2. Make initial images from the calibrated data
  3. Self-calibrate
  4. Make final images

Refining the Calibration

The per-execution calibration scripts are:

  uid___A002_X554543_X207.ms.scriptForCalibration.py
  uid___A002_X554543_X3d0.ms.scriptForCalibration.py
  uid___A002_X554543_X667.ms.scriptForCalibration.py

The delivered data already have calibration steps 0-6 (the a priori calibration) completed, so a user refining the calibration can pick up at step 7.
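The delivered calibration scripts are driven by a list of step numbers. A minimal sketch of rerunning the later steps inside CASA, assuming the standard ALMA script convention (check the step list at the top of each script, since the exact numbering may differ):

# Run the remaining calibration steps for one execution; the upper
# bound of the step range here is illustrative only.
mysteps = range(7, 20)
execfile('uid___A002_X554543_X207.ms.scriptForCalibration.py')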

Flux Calibration

Note that the imaging script refers to the combined data set. The flux calibration script performs the flux calibration and then combines the three calibrated executions:

concat(vis = ['uid___A002_X554543_X207.ms.split.cal',
              'uid___A002_X554543_X3d0.ms.split.cal',
              'uid___A002_X554543_X667.ms.split.cal'],
       concatvis = 'calibrated.ms')

The imaging script will then work with calibrated.ms.

Imaging
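A continuum image of the combined data set can be made with the CASA clean task. The call below is a minimal sketch with placeholder parameter values; scriptForImaging.py contains the parameters actually used for this project:

# Illustrative continuum imaging call (placeholder parameters):
clean(vis='calibrated.ms', imagename='TWHya_continuum',
      mode='mfs', imsize=[256, 256], cell='0.1arcsec',
      niter=1000, threshold='1.0mJy', interactive=False)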