VLA Self-calibration Tutorial-CASA5.7.0
  
 
__TOC__
== Introduction ==
After calibrating a data set using the observed calibrator sources (standard calibration), there may be residual phase and/or amplitude errors in the calibrated data of the target source that degrade the image quality. Self-calibration (selfcal) is the process of using a model of the target source to reduce the phase and amplitude errors in the visibilities of the same target source. While some other CASA guides include a self-calibration step, this guide describes the process in greater detail, including how to optimize parameters to achieve the best result.
Fundamentally, self-calibration is almost identical to standard calibration. 
Both standard calibration and selfcal work by comparing the visibility data with a model to solve for calibration solutions. With standard calibration, we are usually provided a model of our calibrator source by the observatory (e.g., VLA Flux-density calibrators) or we adopt a simple model (e.g., a 1 Jy point source at the phase center is a common assumption for VLA phase calibrators). With self-calibration we need to set a model for our target source, e.g., by imaging the target visibilities.  Then for both standard calibration and selfcal we solve for calibration solutions after making choices about the solution interval, signal-to-noise, etc. When applying the standard calibration solutions we use interpolation to correct the target data, but for selfcal we apply the calibration solutions directly to the target field from which they were derived. For additional details about self-calibration, see [https://ui.adsabs.harvard.edu/abs/1999ASPC..180..187C/abstract Lecture 10] of Synthesis Imaging in Radio Astronomy II (eds. Taylor, Carilli & Perley).
In this guide, we will create a model using the target data (by running {{tclean}}) and use this model to solve for and apply calibration solutions (by running {{gaincal}} and {{applycal}}). This is the most common procedure, but there are other variants that are outside the scope of this guide.  For example, your initial model for the target may come from fitting a model to the visibilities instead of imaging, or may be based on ''a priori'' knowledge of the target field.
Each "round" of self-calibration presented here will follow the same general procedure:
# Create an initial model by conservatively cleaning the target field (see Section [[#The Initial Model|The Initial Model]]).
# Use '''{{gaincal}}''' with an initial set of parameters to calculate a calibration table (see Section [[#Solving for the First Self-Calibration Table|Solving for the First Self-Calibration Table]]).
# Inspect the calibration solutions using '''{{plotms}}''' (see Section [[#Plotting the First Self-Calibration Table|Plotting the First Self-Calibration Table]]).
# Optimize the calibration parameters (see Sections [[#Examples of Various Solution Intervals|Examples of Various Solution Intervals]] and [[#Comparing the Solution Intervals|Comparing the Solution Intervals]]).
# Use '''{{applycal}}''' to apply the table of solutions to the data (see Section [[#Applying the First Self-Calibration Table|Applying the First Self-Calibration Table]]).
# Use '''{{tclean}}''' to produce the self-calibrated image (see Section [[#Imaging the Self-calibrated Data|Imaging the Self-calibrated Data]]).

The data set in this guide is a VLA observation of a massive galaxy cluster, MOO J1506+5137, at z=1.09, which is part of the Massive and Distant Clusters of ''WISE'' Survey (MaDCoWS: [https://ui.adsabs.harvard.edu/abs/2019ApJS..240...33G/abstract Gonzalez et al. 2019]). MOO J1506+5137 stands out in the MaDCoWS sample due to its high radio activity. From the 1300 highest-significance MaDCoWS clusters in the FIRST footprint, a sample of ~50 clusters with extended radio sources was identified, where "extended" is defined as having at least one FIRST source with a deconvolved size exceeding 6.5" within 1' of the cluster center. This sample was observed with the VLA (PI: Gonzalez, 16B-289, 17B-197; PI: Moravec, 18A-039) as a part of a larger study ([https://ui.adsabs.harvard.edu/abs/2020ApJ...888...74M/abstract Moravec et al. 2020a]). Through these follow-up observations, it was discovered that MOO J1506+5137 had high radio activity compared to other clusters in the sample, with five radio sources, of which three had complex structure and two were bent-tail sources. The scientific question at hand is: why does this cluster have such high radio activity? The VLA data showcased in this tutorial, combined with other data sets, suggest that the exceptional radio activity among the massive galaxy population is linked to the dynamical state of the cluster ([https://ui.adsabs.harvard.edu/abs/2020ApJ...898..145M/abstract Moravec et al. 2020b]).
== When to Use Self-calibration ==
There are instances in which selfcal can improve your image quality and others in which it will not.

A couple of typical cases in which selfcal can help improve the image of the target source:
* Extensive artifacts from the source of interest due to calibration errors
* Extensive artifacts from a background source due to direction-independent calibration errors

Some cases in which selfcal will '''*not*''' improve the image of the target source:
* When the image artifacts are due to errors in creating the image (e.g., ignoring wide-field effects)
* When the image artifacts are due to errors in deconvolution (e.g., ignoring wide-band effects)
* When the image artifacts are due to unflagged RFI in the target visibilities
* When the image artifacts are due to insufficient UV coverage (e.g., missing short spacings, snapshot synthesis)
* When the source(s) are too weak to achieve sufficient signal-to-noise in the calibration solutions
* When there is a bright outlying source with direction-dependent calibration errors

It can be difficult to determine the origin of an image artifact based solely on its appearance, especially without a lot of experience in radio astronomy. But generally speaking, the errors that selfcal will help address will be convolutional in nature and direction-independent.  This means that every source of real emission in the image will have an error pattern of the same shape, and the brightness of the error pattern will scale with the brightness of the source. If the error pattern is symmetric (an even function) then it is most likely dominated by an error in visibility amplitude, and if the error pattern is asymmetric (an odd function) then it is probably due to an error in visibility phase. Selfcal can address both amplitude and phase errors. For a more complete discussion on error recognition, see [https://ui.adsabs.harvard.edu/abs/1999ASPC..180..321E/abstract Lecture 15] of Synthesis Imaging in Radio Astronomy II (eds. Taylor, Carilli & Perley).
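The even/odd distinction above can be illustrated with a toy one-dimensional numpy experiment. This is illustrative Python only, not CASA code: the visibility function, the error shapes, and the `odd_fraction` helper are all invented for the demonstration. A point source at the phase center has flat, real visibilities; corrupting them with an even amplitude error leaves the dirty image symmetric, while an odd phase error produces an asymmetric artifact pattern.

```python
import numpy as np

n = 256
u = np.fft.fftfreq(n)                      # toy baseline (u) coordinate

# Ideal visibilities of a point source at the phase center: V(u) = 1
v_true = np.ones(n, dtype=complex)

# Even (symmetric) amplitude error vs. odd (antisymmetric) phase error;
# both keep V Hermitian, so the dirty images stay real
amp_err = 1.0 + 0.3 * np.cos(20 * np.pi * u)
phs_err = np.exp(1j * 0.3 * np.sin(20 * np.pi * u))

img_amp = np.fft.fftshift(np.fft.ifft(v_true * amp_err).real)
img_phs = np.fft.fftshift(np.fft.ifft(v_true * phs_err).real)

def odd_fraction(img):
    """RMS of the antisymmetric part of img about the center pixel,
    relative to the total RMS of the image."""
    c = len(img) // 2
    k = np.arange(1, c)
    odd = img[c + k] - img[c - k]
    return np.sqrt(np.mean(odd**2) / np.mean(img**2))

# amplitude error -> symmetric artifacts (odd fraction ~ 0);
# phase error -> clearly asymmetric artifacts
print(odd_fraction(img_amp), odd_fraction(img_phs))
```

The same logic carries over to 2-D images: an amplitude-dominated error pattern looks the same on both sides of the source, while a phase-dominated pattern flips sign across it.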
In the case of this guide, we believed that these data were a good candidate for selfcal because there were extensive artifacts centered on the source of interest (something very closely resembling Figure 4A) after an initial cleaning. These errors manifested as strong sidelobes radiating out from the sources of strong emission and with a shape that resembles the VLA dirty beam (i.e., a shape that is related to the observation's UV coverage). The artifacts did not lessen as we cleaned more deeply but instead appeared stronger relative to the residual image. Therefore, because phase and/or amplitude calibration errors could be a potential cause for the artifacts, and because the target source is relatively bright, we thought that selfcal could help improve the image quality.
  
 
  
 
== Data for this Tutorial ==
 
=== Obtaining the Data ===
  
The original observation has been calibrated using the VLA CASA Pipeline. Therefore, the measurement set (MS) we will be downloading will contain both the raw (uncalibrated) visibilities and the calibrated visibilities, which will appear in the 'DATA' and 'CORRECTED_DATA' columns of the MS, respectively. The raw data alone is 11 GB, and this will grow to 21 GB after applying the calibration.  
 
 
You may download the calibrated MS directly here:
[https://casa.nrao.edu/Data/EVLA/17B-197/17B-197.sb34290063.eb34589992.58039.86119096065.tar '''17B-197.sb34290063.eb34589992.58039.86119096065.tar (21 GB)''']
  
 
=== Observation Details ===
First, we will start CASA in the directory containing the data and then collect some basic information about the observation. The task {{listobs}} can be used to display the individual scans comprising the observation, the frequency setup, source list, and antenna locations.  
  
 
<source lang="python">
# in CASA
listobs(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms')
</source>
A portion of the listobs output appears below:
  
 
<pre style="background-color: #fffacd;">
================================================================================
           MeasurementSet Name:  17B-197.sb34290063.eb34589992.58039.86119096065.ms      MS Version 2
================================================================================
 
   Observer: Prof. Anthony H. Gonzalez    Project: uid://evla/pdb/34052589   
Observation: EVLA
Data records: 5290272      Total elapsed time = 2853 seconds
   Observed from  13-Oct-2017/20:40:09.0  to  13-Oct-2017/21:27:42.0 (UTC)
  
  ObservationID = 0        ArrayID = 0
  Date        Timerange (UTC)          Scan  FldId FieldName            nRows    SpwIds  Average Interval(s)    ScanIntent
  13-Oct-2017/20:40:09.0 - 20:45:03.0    1      0 J1549+5038              550368  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [SYSTEM_CONFIGURATION#UNSPECIFIED]
              20:45:06.0 - 20:50:03.0    2      0 J1549+5038              555984  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              20:50:06.0 - 20:59:27.0    3      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              20:59:30.0 - 21:00:51.0    4      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:00:54.0 - 21:10:15.0    5      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              21:10:18.0 - 21:11:39.0    6      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:11:42.0 - 21:21:03.0    7      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              21:21:06.0 - 21:22:27.0    8      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:22:30.0 - 21:27:03.0    9      2 3C286                  511056  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_BANDPASS#UNSPECIFIED,CALIBRATE_FLUX#UNSPECIFIED]
              21:27:06.0 - 21:27:42.0    10      2 3C286                    67392  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_BANDPASS#UNSPECIFIED,CALIBRATE_FLUX#UNSPECIFIED]
          (nRows = Total number of rows per scan)
 
Fields: 3
   ID  Code Name                RA              Decl          Epoch  SrcId      nRows
 
   14    EVLA_C#B0D0#14    64  TOPO    6256.000      2000.000    128000.0  6319.0000      15  RR  RL  LR  LL
   15    EVLA_C#B0D0#15    64  TOPO    6384.000      2000.000    128000.0  6447.0000      15  RR  RL  LR  LL
Antennas: 27 'name'='station'
 
  ID=  0-3: 'ea01'='W12', 'ea02'='W04', 'ea03'='W28', 'ea04'='E24',
 
  ID=  4-7: 'ea05'='E04', 'ea06'='N20', 'ea07'='N16', 'ea08'='W16',
 
  ID=  8-11: 'ea09'='N12', 'ea10'='E08', 'ea11'='N28', 'ea12'='E28',
 
  ID= 12-15: 'ea13'='E16', 'ea14'='E36', 'ea15'='N32', 'ea16'='W24',
 
  ID= 16-19: 'ea17'='N04', 'ea18'='W36', 'ea19'='W20', 'ea20'='N24',
 
  ID= 20-23: 'ea21'='E20', 'ea22'='W32', 'ea23'='E12', 'ea24'='W08',
 
  ID= 24-26: 'ea25'='N08', 'ea26'='E32', 'ea27'='N36'
 
  
 
</pre>
  
=== Initial Data Inspection ===
Since we have obtained the calibrated visibilities for the calibrator fields, we can now take this opportunity to investigate the phase stability of these observations. It is easier to do this inspection on a bright calibrator field where the signal-to-noise is high, and we will assume that the same degree of stability is present throughout the observation. In this section, we will characterize the magnitude and timescale of the phase fluctuations that we will be trying to correct for with selfcal.
Looking at the output of {{listobs}} we see that there is a long scan on the amplitude calibrator, 3C 286 (field ID 2). A feature of the VLA CASA pipeline is that it only applies scan-averaged calibration solutions to the calibrator fields, so it will not have corrected for any variations within a scan.  We will plot the calibrated phase vs. time on a single baseline:

<source lang="python">
# in CASA
plotms(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms', xaxis='time', yaxis='phase', ydatacolumn='corrected', field='2', antenna='ea05', correlation='RR,LL', avgchannel='64', iteraxis='baseline', coloraxis='spw')
</source>
[[Image:Selfcal 3c286 phase.png|200px|thumb|right|Figure 1: The phase vs. time on the ea04-ea05 baseline for field 2.]]
* '' vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms' '': To plot visibilities from the pipeline calibrated MS.
* '' xaxis='time', yaxis='phase' '': To set time as the x-axis and phase as the y-axis of the plot.
* '' ydatacolumn='corrected' '': To plot the calibrated data (from the CORRECTED_DATA column).
* '' field='2' '': To select visibilities from field ID 2, i.e., the amplitude calibrator 3C 286.
* '' antenna='ea05', iteraxis='baseline' '': To view a single baseline at a time. Any single antenna can be chosen here.
* '' correlation='RR,LL' '': To plot both parallel-hand correlation products.
* '' avgchannel='64' '': To average all channels in each SPW to increase signal-to-noise. Since the bandpass solutions have been applied to these data the channels will average coherently.
* '' coloraxis='spw' '': To plot each SPW as a different color, which will make it easier to distinguish them.

Use the 'Next Iteration' button of the {{plotms}} GUI to cycle through additional baselines. You should see plots that look similar to the example image of the ea04-ea05 baseline (see Figure 1). The plotted data have a mean of zero phase because the pipeline calibration solutions have already been applied. The phase is seen to vary with time over a large range (in some cases more than +/- 100 degrees) and the variations appear to be smooth over time scales of a few integrations.  For a given baseline, we can see that all of the spectral windows and both correlations approximately follow the same trend with time. Additionally, the magnitude of the phase variations is larger for the higher frequency spectral windows, a pattern that is consistent with changes in atmospheric density.
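As an aside, the reason ''avgchannel='64' '' helps is that coherent averaging of N channels reduces the noise, and hence the phase scatter, by roughly a factor of sqrt(N). A small numpy simulation sketches this (illustrative only; the sample count and noise level are arbitrary choices, not properties of these data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_chan, sigma = 4000, 64, 0.2

# Simulated bandpass-corrected visibilities: a unit point-source signal
# plus independent complex Gaussian noise in each of 64 channels
noise = sigma * (rng.standard_normal((n_samples, n_chan))
                 + 1j * rng.standard_normal((n_samples, n_chan)))
vis = 1.0 + noise

phase_one_chan = np.degrees(np.angle(vis[:, 0]))         # single channel
phase_averaged = np.degrees(np.angle(vis.mean(axis=1)))  # 64-channel average

# The phase scatter should drop by roughly sqrt(64) = 8
ratio = phase_one_chan.std() / phase_averaged.std()
```

The same sqrt(N) argument drives the choice of solution interval later in the guide: longer intervals average more data and give higher signal-to-noise solutions, at the cost of time resolution.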
'''Optional extra steps:'''  Create and inspect similar plots using scan 2 of the phase calibrator field (J1549+5038).  Repeat for baselines to other antennas.
=== Splitting the Target Visibilities ===
CASA calibration tasks always operate by comparing the visibilities in the DATA column to the source model, where the source model is given by either the MODEL_DATA column, a model image or component list, or the default model of a 1 Jy point source at the phase center. For example, the calibration pipeline used the raw visibilities in the DATA column to solve for calibration tables and then created the CORRECTED_DATA column by applying these tables to the DATA column. With this context in mind, '''an essential step for self-calibration is to split the calibrated visibilities for the target we want to self-calibrate,''' meaning that the visibilities of the target source get copied from the CORRECTED_DATA column of the pipeline calibrated MS to the DATA column of a new measurement set. Self-calibration will work in the same way as the initial calibration, i.e., by comparing the pipeline calibrated visibilities (which are now in the DATA column of the new split MS) to a model, solving for self-calibration tables, and then creating a new CORRECTED_DATA column by applying the self-calibration tables. If we did not split the data, we would need to constantly re-apply all of the calibration tables from the pipeline (both on-the-fly when computing the self-calibration solutions and then again when applying the self-calibration), which would make the process much more cumbersome.
  
 
 
<source lang="python">
# in CASA
split(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms', datacolumn='corrected', field='1', correlation='RR,LL', outputvis='obj.ms')
</source>
* ''vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms' '': The input visibilities for split. Here, these are the visibilities produced by the pipeline.
* ''datacolumn='corrected' '': To copy the calibrated visibilities from the input MS.
* ''field='1' '': The field ID of the target we want to self-calibrate.
* ''correlation='RR,LL' '': To select only the parallel-hand correlations. This will make the output data set smaller by about a factor of two.
* ''outputvis='obj.ms' '': The name of the new measurement set that split will create.

== The Initial Model ==
Now that we understand the data a bit better and know that we need to apply self-calibration, we will begin to work our way through the steps outlined in the Introduction (create an initial model, solve for a calibration table, inspect the solutions, determine the best solution interval, apply the solutions, and image the self-calibrated data). We first begin with creating an initial model which '''{{gaincal}}''' will compare to the data in order to create the calibration table that will be applied to the data in the first round of self-calibration.
=== Preliminary Imaging ===
Prior to solving for self-calibration solutions we need to make an initial model of the target field, which we will generate by deconvolving the target field using the task {{tclean}}. There are several imaging considerations that we should address when making this model (discussed below).  See the [https://casaguides.nrao.edu/index.php?title=VLA_CASA_Imaging-CASA5.5.0 VLA CASA Guide on Imaging] for more details about these parameters.
'''Image field-of-view''':  Ideally, we want our self-calibration model to include all of the sources present in the data (e.g., sources near the edge of the primary beam or in the first sidelobe).  This is typically achieved by making an image large enough to encompass all of the apparent sources, or by making a smaller image of the target plus one or more outlier fields. We will start with a large dirty image of the entire primary beam (PB) in order to better understand the sources in the galaxy cluster plus any background sources that will need to be cleaned. A rule of thumb for the VLA is that the FWHM of the PB in arcminutes is approximately 42 * (1 GHz / nu).  At the center frequency of our C-band observations (5.5 GHz) the VLA primary beam is ~8' FWHM. In order to image the entire PB and the first sidelobe we need an image field of view that is about four times larger, so we will choose a 32' field-of-view for our initial image (see [https://casa.nrao.edu/casadocs/casa-6.1.0/imaging/synthesis-imaging/wide-field-imaging-full-primary-beam casadocs] and the [https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/fov VLA OSS] for further discussion of primary beams).
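The field-of-view arithmetic above can be written out explicitly. This is plain Python, not a CASA call; the helper name is ours, and the 42'/nu rule is just the approximation quoted above:

```python
def vla_pb_fwhm_arcmin(freq_ghz):
    """Approximate VLA primary beam FWHM in arcmin: ~42' * (1 GHz / freq)."""
    return 42.0 / freq_ghz

fwhm = vla_pb_fwhm_arcmin(5.5)   # ~7.6' at the 5.5 GHz C-band center
fov = 4.0 * fwhm                 # ~30.5', rounded up to the 32' used here
```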
'''Primary beam mask''':  By default, the 'pblimit' parameter will add an image mask everywhere the value of the primary beam is less than 10%. We want to turn off this mask, as it would prevent us from viewing the image over our desired field of view.  This mask is turned off by setting the magnitude of the 'pblimit' parameter to be negative. The actual value of this parameter is unimportant for the imaging we will be doing (i.e., the 'standard', 'widefield' and 'wproject' gridders).
'''Image cell size''':  There are a few different ways to estimate the synthesized beam size for these C-band observations taken in the B configuration. For one, we can use [https://science.nrao.edu/facilities/vla/docs/manuals/oss2017B/performance/resolution this table in the VLA Observational Status Summary], which gives a resolution of 1.0".  It is recommended to choose a cell size that will result in at least 5 image pixels across the FWHM of the synthesized beam, so we require a cell size of 0.20"/pixel or smaller.
'''Image size in pixels''': We can convert our desired field-of-view to pixels using the cell size:  32' * (60" / 1') * (1 pixel / 0.20") = 9600 pixels.
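Putting the cell-size and image-size rules together (again plain Python with illustrative helper names, not CASA tasks):

```python
def cell_size_arcsec(beam_arcsec, pixels_per_beam=5):
    """Cell size giving at least `pixels_per_beam` pixels across the beam."""
    return beam_arcsec / pixels_per_beam

def imsize_pixels(fov_arcmin, cell_arcsec):
    """Number of pixels needed to span the desired field of view."""
    return round(fov_arcmin * 60.0 / cell_arcsec)

cell = cell_size_arcsec(1.0)      # 0.2 arcsec/pixel for a 1.0" beam
npix = imsize_pixels(32.0, cell)  # 9600 pixels for a 32' field of view
```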
'''Wide-field effects''':  Large images may require additional consideration due to non-coplanar baselines (the W-term). In CASA, this is usually addressed by turning on the W-project algorithm.  [https://casa.nrao.edu/casadocs/casa-5.4.1/synthesis-imaging/wide-field-imaging-full-primary-beam See this link to CASAdocs for a more detailed discussion.]
We can estimate whether our image requires W-projection by calculating the recommended number of w-planes using this formula taken from page 392 of the [https://ui.adsabs.harvard.edu/abs/1999ASPC..180..383P/abstract NRAO 'white book'],

<math>
N_{wprojplanes} = \left ( \frac{I_{FOV}}{\theta_{syn}} \right ) \times \left ( \frac{I_{FOV}}{1\, \mathrm{radian}} \right )
</math>

where I_FOV is the image field-of-view (32') and theta_syn is the synthesized beam size (1.0"). Working in units of arcseconds ((32 x 60) / 1.0) * ((32 x 60) / 206265) evaluates to wprojplanes ~ 18 so we will choose to turn on the w-project algorithm with ''gridder='widefield''' and set ''wprojplanes=18''.  If the recommended number of planes had been <= 1 then we would not have needed to turn on the wide-field gridder, and we could have used ''gridder='standard' '' instead.
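This estimate is easy to script (plain Python; the function name is ours, and 206265 is the number of arcseconds per radian):

```python
ARCSEC_PER_RADIAN = 206265.0

def n_wprojplanes(fov_arcmin, beam_arcsec):
    """Recommended number of w-planes: (FOV / beam) * (FOV in radians)."""
    fov_arcsec = fov_arcmin * 60.0
    return (fov_arcsec / beam_arcsec) * (fov_arcsec / ARCSEC_PER_RADIAN)

n_large = n_wprojplanes(32.0, 1.0)  # ~18 for the 32' image: use gridder='widefield'
n_small = n_wprojplanes(3.0, 1.0)   # <1 for a 3' image: gridder='standard' suffices
```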
We will now create a preliminary dirty image using these parameters.
  
 
 
<source lang="python">
# in CASA
tclean(vis='obj.ms', imagename='obj.dirty.9600pix', datacolumn='data', imsize=9600, cell='0.2arcsec', pblimit=-0.1, gridder='widefield', wprojplanes=18)
</source>
*'' datacolumn='data' '': To image the visibilities in the measurement set's DATA column.
*''imsize=9600'': The number of pixels across one side of the (square) image.
*''cell='0.2arcsec''': The size of an image pixel (see above).
*''pblimit=-0.1'': We set this to a small negative number to turn off the PB mask, allowing us to view the entire image.
*''gridder='widefield''': To turn on the w-project algorithm.
*''wprojplanes=18'': The number of w-planes to use for w-projection.

Figure 2A below shows the resulting dirty image, and Figure 2B shows a zoom-in of the central region of the image. Several outlying sources are detectable; the four brightest are marked with magenta circles.

{|
|-  valign="top" ! scope="row" |
|| [[Image:MOO_1506+3156_dirty.png|200px|thumb|Figure 2A: The 32' dirty image. Scaling Power Cycles has been set to -2 in the viewer's 'Data Display Options' to stretch the color scale. The locations of the four brightest far-field sources are marked with magenta circles.]]
|| [[Image:MOO_1506+3156_dirty_zoom.png|200px|thumb|Figure 2B: The same image as in Figure 2A after zooming in on the central objects.]]
|}
  
We have a few options for how to deal with these outlying sources:

* Proceed with the self-calibration procedure using a large field-of-view that includes all the outlying sources.

* Proceed with the self-calibration procedure using a small field-of-view that includes only the central sources, and add an outlier field on each of the outlying sources.

* Proceed with the self-calibration procedure using a small field-of-view that includes only the central sources, ignoring the outlying sources.

In this guide, we will choose to ignore the outlying sources in order to present a simplified self-calibration procedure. This is also what was chosen for the scientific image and analysis, because the artifacts from the outlying sources did not strongly affect the area of scientific interest (the inner 3'). For more information about the other options described above, see the [https://casaguides.nrao.edu/index.php?title=VLA_CASA_Imaging-CASA5.5.0 CASAguide on VLA Imaging] and the [https://casa.nrao.edu/casadocs/casa-6.1.0/global-task-list/task_tclean/about CASAdocs page for the tclean task]. More advanced techniques, such as UV-subtraction, peeling, or direction-dependent calibration, are sometimes used for outlying sources, but these are outside the scope of this guide.

=== Creating the Initial Model ===

After deciding how to deal with the outlying sources, our next step is to make the initial model that we will use for self-calibration. There are a few additional parameters that apply to this step, discussed below.

'''Image field-of-view''': For this science case we are only interested in sources within a ~1.5' radius of the cluster center. Since we have chosen to ignore the outlying sources at this stage, we will proceed with an image field-of-view of 3'.

'''Wide-field effects''': We repeat the calculation of wprojplanes from the Initial Imaging section using our new field-of-view of 3'. This results in wprojplanes ~ 1, so we turn off the correction for non-coplanar baselines by setting '' gridder='standard' ''.

'''Wide-band imaging''': Our images will combine data from all spectral windows, spanning a frequency range of about 4.5-6.5 GHz (a fractional bandwidth of about 36%). Each source's amplitude may vary substantially over this frequency range, due to the source's intrinsic spectral variation and/or the frequency dependence of the VLA's primary beam. To mitigate these errors during deconvolution we will use '' deconvolver='mtmfs' '' and ''nterms=2''. For further discussion of wide-band imaging, see the [https://casa.nrao.edu/casadocs/latest/imaging/synthesis-imaging/wide-band-imaging CASA documentation for wide-band imaging] and the [https://casaguides.nrao.edu/index.php/VLA_CASA_Imaging-CASA5.4.0 VLA Imaging CASAguide].
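The fractional bandwidth quoted above can be checked with a couple of lines of plain Python (the helper name is ours, not a CASA function):

```python
# Fractional bandwidth of these observations: bandwidth over center frequency.
def fractional_bandwidth(f_lo_ghz, f_hi_ghz):
    center = 0.5 * (f_lo_ghz + f_hi_ghz)
    return (f_hi_ghz - f_lo_ghz) / center

print('{0:.0%}'.format(fractional_bandwidth(4.5, 6.5)))   # 36%
```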

'''Imaging weights''': When constructing the initial model, especially when there are large image artifacts, it is recommended to use "robust" imaging weights. In CASA, this is enabled with '' weighting='briggs' '' and a value of the ''robust'' parameter between -2 and +2. Values of robust near +2 (approximately natural weighting) often result in large positive PSF sidelobes, while values near -2 (approximately uniform weighting) often produce large negative PSF sidelobes. Since many types of image artifacts scale with PSF sidelobe levels, a reasonable compromise is often around ''robust=0''.

'''Image deconvolution''': We will need to deconvolve (clean) this image in order to produce a model of the field. We will want to control the cleaning depth and masking interactively, so we set ''interactive=True''. We also must choose the number of clean iterations with the ''niter'' parameter. A suggested starting value is ''niter=1000'', but this can be changed interactively after we start cleaning.

'''Saving the model''': After deconvolution, there are a couple of options for how to write the model back to the measurement set, controlled by the ''savemodel'' parameter. The default is '' 'none' '', which will not save the model. '''It is essential that this default is changed or else the selfcal procedure will not work properly.''' The option '' savemodel='virtual' '' will save a copy of the image to the SOURCE subtable of the measurement set, to be used later for on-the-fly model visibility prediction; this option is sometimes used for very large data sets, but is not generally recommended. The other option, '' savemodel='modelcolumn' '', is the recommended setting and the one that we will use in this guide. It will predict the model visibilities after cleaning and save the result to the MODEL_DATA column.

Now we are ready to create our first clean image. This image will provide the starting model that is required by the calibration routines, and it will showcase why we need self-calibration for these data.
 
<source lang="python">
# in CASA
tclean(vis='obj.ms', imagename='obj.prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')
</source>
*'' datacolumn='data' '': To image the visibilities in the measurement set's DATA column.
* ''imsize=900'': To create an image with a 3' field-of-view.
* ''cell='0.2arcsec''': The size of an image pixel.
* ''pblimit=-0.1'': To remove the PB mask; see previous sections.
* ''gridder='standard''': We select the default gridder (assumes coplanar baselines).
* ''deconvolver='mtmfs''': To turn on the wide-band deconvolution algorithm, as discussed above.
* ''nterms=2'': The number of Taylor terms for wide-band deconvolution.
* ''niter=1000'': Set a relatively large number of iterations as a starting point.
* ''interactive=True'': So we can interactively place the mask.
* ''weighting='briggs''': Turn on 'robust' image weighting.
* ''robust=0'': Set the value of the robust weighting.
* ''savemodel='modelcolumn''': To enable writing the MODEL_DATA column to the MS after imaging. '''**important**'''
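As a quick sanity check on the chosen cell size, a common rule of thumb is to sample the synthesized beam with roughly 5 pixels (values below come from this guide; the check runs outside CASA):

```python
# Pixels across the synthesized beam for the chosen cell size (plain Python).
beam_arcsec = 1.2   # approximate C-band B-configuration beam from this guide
cell_arcsec = 0.2   # the tclean cell size used above
print(round(beam_arcsec / cell_arcsec))   # 6 pixels per beam, near the ~5-pixel guideline
```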
  
During the interactive cleaning, place conservative circular masks around each of the strong sources in turn, starting with the brightest:

* First mask the rightmost source (Figure 3A), press the green circle arrow in the CASA viewer to perform one cycle of cleaning, and wait for focus to return to the viewer. The viewer will then show the current residual image (i.e., the image after subtracting some flux from within the first mask).

* Then mask the leftmost double-lobed source (Figure 3B) and press the green circle arrow again. This will perform the next cleaning cycle, after which focus will return to the viewer. Cleaning has now taken place inside the masks of both sources, and the brightest source in the new residual image will be in the middle.

* Finally, mask the middle source (Figure 3C), press the green circle arrow again, and wait for CASA to complete the next cycle of cleaning.

At this point, the residual emission is at about the same level as the artifacts, so we stop cleaning (press the red X in the CASA viewer; see Figure 3D for an example of the artifacts). '''It is strongly recommended to mask and clean only emission that is believed to be real, so as not to include artifacts in the model.'''

{|
|-  valign="top" ! scope="row" |
|| [[Image:prelim_clean_1.png|200px|thumb|right|Figure 3A: The mask for the rightmost source.]] || [[Image:prelim_clean_2.png|200px|thumb|right|Figure 3B: The mask for the left double-lobed source.]] || [[Image:prelim_clean_3_source.png|200px|thumb|right|Figure 3C: The mask for the middle source.]] || [[Image:prelim_clean_4.png|200px|thumb|right|Figure 3D: The rightmost source after 3 iterations of adding masks. The emission around the current mask is characteristic of artifacts.]]
|}

Figure 4A below shows the resulting clean image (obj.prelim_clean.3arcmin.image.tt0) that we will try to improve through the use of self-calibration.
For reference, we will measure some simple image figures of merit to compare with the image after self-calibration.
Specifically, we measure the peak intensity (maximum pixel value; see Figure 4B) in the image to be 6.67 mJy and the image noise (RMS of pixel values; see Figure 4C) in a source-free region to be 18.2 uJy. This gives a ratio between the maximum and the noise of 366, which is called the dynamic range.
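The dynamic-range figure quoted above is simple arithmetic on the two measured values (plain Python, no CASA needed):

```python
# Dynamic range from the figures of merit measured above.
peak_jy = 6.67e-3   # peak intensity: 6.67 mJy
rms_jy  = 18.2e-6   # source-free RMS: 18.2 uJy
dynamic_range = peak_jy / rms_jy
print(round(dynamic_range))   # 366
```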
 
{|
|-  valign="top" ! scope="row" |
|| [[Image:prelim_clean.png|200px|thumb|left|Figure 4A: The preliminarily cleaned image. We have used Scaling Power Cycles -2 and divided the minimum of the data range by 10 to emphasize the artifacts.]]
|| [[Image:prelim_clean_max.png|300px|thumb|left|Figure 4B: Measuring the maximum value by drawing a rectangular region over the entire image.]]
|| [[Image:prelim_clean_rms.png|300px|thumb|left|Figure 4C: Measuring the source-free RMS by drawing a rectangular region near the source of interest, large enough to measure unbiased statistics (i.e., many synthesized beams) but avoiding any obvious real sources of emission.]]
|}

=== Verifying the Initial Model ===

There have been reported instances where CASA fails to save the model visibilities when using interactive clean. It is crucial that the model is saved correctly; otherwise self-calibration will use the 'default' model of a 1 Jy point source at the phase center. The default model may be very different from your target field, and we do not want to carry out the selfcal procedure with this incorrect model. Therefore, it is recommended to verify that the model has been saved correctly by inspecting the model visibilities.

<source lang="python">
# in CASA
plotms(vis='obj.ms', xaxis='UVwave', yaxis='amp', ydatacolumn='model', avgchannel='64', avgtime='300')
</source>
[[Image:Moo_model_uvwave.png|200px|thumb|right| Figure 5: The model visibilities.]]
* '' vis='obj.ms' '': To plot visibilities from the split MS.
* '' xaxis='UVwave', yaxis='amp' '': To set UV-distance in wavelengths as the x-axis and amplitude as the y-axis of the plot.
* '' ydatacolumn='model' '': To plot the model visibilities (from the MODEL_DATA column).
* '' avgchannel='64' '': To average all channels per SPW.
* '' avgtime='300' '': To average in time in chunks of 300 seconds.

The resulting plot should resemble Figure 5 on the right. It shows that some baselines see up to 15 mJy of flux, but that the source becomes resolved on the longer baselines. Note that the visibilities plotted here are for correlations RR and LL, since we dropped RL and LR with the {{split}} task. However, had we retained RL and LR, they would be zero, since we only made a Stokes I model image.

The plotted model is clearly not the default model of a 1 Jy point source (if it were, all amplitudes would be at 1 Jy), so we have verified that {{tclean}} has correctly written the MODEL_DATA column of the MS.

== First Round of Self-Calibration ==
=== Solving for the First Self-Calibration Table ===

For this first round of selfcal, we will compare the model that we just created with the data in order to solve for a table of corrections to apply to the data.
We will explore various parameters of the task {{gaincal}} in order to learn more about the data and settle on optimal values.
The most relevant parameters are discussed below.

'''Solution interval''': This is controlled with the ''solint'' parameter and is one of the most fundamental parameters for self-calibration. Its value can range from '' 'int' '', which stands for "integration" and corresponds to the time of a single integration (3 seconds for this data set), up to '' 'inf' '' for "infinite" (meaning either an entire scan or the entire observation, depending on the value of the ''combine'' parameter). We typically want to choose the shortest solution interval for which we can achieve adequate signal-to-noise in the calibration solutions.

'''Data combination''': The data can be combined in multiple ways to improve signal-to-noise, but if the target source is bright enough to obtain good calibration solutions on a short timescale without data combination, these options are not necessary. If low signal-to-noise messages appear across antennas, times, and SPWs, the parallel-hand correlations, if present, can be combined by setting '' gaintype='T' '' instead of '' gaintype='G' '', which will generally increase the signal-to-noise by a factor of root 2. If gaincal still produces many low signal-to-noise messages, one can try combining multiple SPWs with '' combine='spw' '' if the SPWs are at similar frequencies; this will generally increase the solution signal-to-noise by the square root of the number of SPWs combined. Combining scans during self-calibration is not usually recommended.
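The expected signal-to-noise gains from these combinations can be sketched numerically (the starting SNR is hypothetical, and we assume independent, equal-weight data; the 16-SPW count matches the SPWs 0~15 discussed later in this guide):

```python
import math

# Rough SNR gains from data combination (independent, equal-weight data assumed).
snr_G = 3.0                          # hypothetical per-polarization solution SNR
snr_T = snr_G * math.sqrt(2)         # combining RR+LL with gaintype='T'
snr_T_spw = snr_T * math.sqrt(16)    # additionally combining all 16 SPWs with combine='spw'
print(round(snr_T, 2), round(snr_T_spw, 2))
```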

'''Amplitude and phase correction''': Because large phase errors will result in incoherent averaging and lead to lower amplitudes, we always want to start with phase-only self-calibration. We achieve this by setting '' calmode='p' ''. In later rounds of selfcal, after the phases have been well corrected, we can try '' calmode='ap' '' to include an amplitude component in the solutions. When solving for amplitudes, we may also want to consider normalizing them with the ''solnorm'' parameter.

'''Reference antenna''': As with standard calibration, we want to choose a reference antenna for the calibration solutions. It is generally recommended to choose one that is near the center of the array but not heavily flagged. To determine which one to use, run ''plotants'' to plot the positions of the antennas and choose one near the center. To find the percentage of data flagged per antenna, you can run ''flagdata'' with '' mode='summary' ''.

'''Signal-to-noise ratio (SNR)''': The default minimum SNR in {{gaincal}} is 3.0, but this can be adjusted with the ''minsnr'' parameter. Solutions below this minimum are flagged in the output calibration table. Sometimes we want to increase this minimum, e.g., to 5.0, to reject noisy solutions. Alternatively, we may want to lower this minimum, e.g., to zero, usually for inspection purposes.
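To illustrate how the ''minsnr'' threshold interacts with the SNR distribution of the solutions, here is a small standalone sketch using synthetic (hypothetical) SNR values rather than a real calibration table:

```python
import numpy as np

# Synthetic illustration: fraction of solutions each minsnr choice would flag.
# The SNR distribution below is made up for demonstration purposes.
rng = np.random.default_rng(0)
snr = rng.normal(loc=5.7, scale=2.0, size=10000)   # fake per-solution SNR values
for minsnr in (3.0, 5.0, 6.0):
    frac_flagged = float(np.mean(snr < minsnr))
    print('minsnr={0}: {1:.0%} flagged'.format(minsnr, frac_flagged))
```

Raising ''minsnr'' rejects more noisy solutions at the cost of flagging a larger fraction of the data, which is the trade-off explored later in this guide.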

We will now create our initial self-calibration table. This will not be the final table for the first round of self-calibration, but rather a temporary table that we will inspect to help determine the optimal parameters.
<source lang="python">
# in CASA
gaincal(vis='obj.ms', caltable='selfcal_initial.tb', solint='int', refant='ea24', calmode='p', gaintype='G', minsnr=0)
</source>
*''caltable='selfcal_initial.tb' '': Name the calibration tables something intuitive in order to distinguish each one.
*''solint='int' '': We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
*''refant='ea24' '': The chosen reference antenna.
*''calmode='p' '': To start with phase-only calibration.
*''gaintype='G' '': To solve for the two polarizations separately.
*''minsnr=0'': To turn off flagging of low-SNR solutions, so that we can inspect all the solutions.

You may see several messages printed to the terminal while {{gaincal}} is running, e.g.,

<pre style="background-color:lightgrey;">
Found no unflagged data at:  (time=2017/10/13/20:50:07.5 field=0 spw=0 chan=0)
</pre>

This means that all of the input data were flagged for that solution interval. This is generally harmless unless there are far fewer solutions in the output table than you were expecting.

It is recommended to check the logger messages written by {{gaincal}} to find the total number of solution intervals, i.e.,

<pre style="background-color:#fffacd;">
INFO gaincal Calibration solve statistics per spw:  (expected/attempted/succeeded):
INFO gaincal   Spw 0: 561/522/522
INFO gaincal   Spw 1: 561/522/522
...
</pre>

This shows that {{gaincal}} successfully found solutions for most of the solution intervals.
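The "expected/attempted/succeeded" counts in the log can be turned into a quick success fraction (plain Python, using the Spw 0 numbers from the log above):

```python
# Success fraction from the 'expected/attempted/succeeded' log line above.
expected, attempted, succeeded = 561, 522, 522
print('{0:.0%} of expected intervals solved'.format(succeeded / expected))
```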

=== Plotting the First Self-Calibration Table ===

To view these solutions, we use {{plotms}}.
[[Image:Selfcal_initial_plotms1.png|200px|thumb|right|Figure 6: The phase solutions vs. time for the first 9 antennas, colored by polarization.]]
<source lang="python">
# in CASA
plotms(vis='selfcal_initial.tb', xaxis='time', yaxis='phase', iteraxis='antenna', gridrows=3, gridcols=3, coloraxis='corr')
</source>
*''xaxis='time' & yaxis='phase' '': View the phase variations over time with respect to the reference antenna, ea24.
*''iteraxis='antenna' '': Create separate plots of the corrections for each antenna.
*''gridrows=3 & gridcols=3'': It can be helpful to view multiple plots at once, as we will be stepping through several plots. In this case, 9 plots per page.
*''coloraxis='corr' '': To use different colors when plotting different polarizations (R and L will be black and red, respectively).

Iterate through these plots using the 'Next Iteration' button (green triangle) to inspect the solutions for all antennas. Some noteworthy observations include:

* There are large, coherent phase changes of more than 100 degrees.
* The timescale of these changes is fairly short, about 20 seconds (zoom in on the variations for a particular antenna to see this).
* The scatter in these solutions is high (a few tens of degrees), indicating low signal-to-noise.
* The phase changes in the two polarizations appear to match each other.
* There is some interesting behavior in the 3rd scan for antennas ea15 and ea27.
''Note:'' The plot of the reference antenna, ea24, is not unusual; the solutions for the reference antenna are consistent with zero phase.

''' ''Since the solutions for the two polarizations are very similar, it is apparent from these plots that we can combine polarizations to improve the solution signal-to-noise ratio.'' '''

The next thing we want to understand is whether we can combine SPWs, and if so, which ones. We can plot the previous solutions in a slightly different way to help answer this question: we will view them again using {{plotms}}, but this time coloring the solutions by SPW.
[[Image:Selfcal_initial_plotms2.png|200px|thumb|right|Figure 7: The phase solutions vs. time, colored by spectral window, second iteration.]]
<source lang="python">
# in CASA
plotms(vis='selfcal_initial.tb', xaxis='time', yaxis='phase', iteraxis='antenna', gridrows=3, gridcols=3, coloraxis='spw')
</source>
*''xaxis='time' & yaxis='phase' '': View the phase variations over time with respect to the reference antenna, ea24.
*''iteraxis='antenna' '': Create separate plots of the corrections for each antenna.
*''gridrows=3 & gridcols=3'': To view multiple plots at once. In this case, 9 plots per page.
*''coloraxis='spw' '': To use different colors when plotting different SPWs.

Iterate again through these plots using the 'Next Iteration' button (green triangle) to inspect the solutions for all antennas. When you get to ea15, it should be clear that the solutions are not the same for all SPWs. This is also true, but less obvious, for ea27, due to the limited number of colors available to {{plotms}}.

We can inspect this further with the following plot:
[[Image:Selfcal_initial_plotms3.png|200px|thumb|right|Figure 8: The phase solutions vs. time for antenna ea15, colored by scan, second iteration.]]
<source lang="python">
# in CASA
plotms(vis='selfcal_initial.tb', xaxis='time', yaxis='phase', antenna='ea15', iteraxis='spw', gridrows=3, gridcols=3, coloraxis='scan')
</source>
*''xaxis='time' & yaxis='phase' '': View the phase variations over time with respect to the reference antenna, ea24.
*''antenna='ea15' '': To select only antenna ea15.
*''iteraxis='spw' '': Create separate plots of the corrections for each SPW.
*''gridrows=3 & gridcols=3'': To view multiple plots at once. In this case, 9 plots per page.
*''coloraxis='scan' '': To use different colors when plotting different scans.

The first set of 9 plots should show a similar pattern. On the next iteration, this pattern should continue for SPWs 9~11, but then change for SPWs 12~15. The signal-to-noise for SPW 13 is also noticeably lower. If we create these plots for ea27, we will see similar behavior, only this time the pattern is constant over SPWs 0~5 and then changes to a new pattern that is constant for SPWs 6~15. For both ea15 and ea27, the change happens only in the third of the three scans.

''' ''Unfortunately, since all SPWs do not show the same phase solutions, it will not be trivial to combine them to increase the signal-to-noise ratio of the solutions. Therefore, we will continue without combining SPWs.'' '''

=== Examples of Various Solution Intervals ===

Now that we have decided how to handle SPW combination, we will move on to consider time averaging. We observed that the previous solutions display large, coherent phase changes but also have significant scatter due to low signal-to-noise. We could increase the signal-to-noise by root 2 for each doubling of the solution interval, but it does not make sense to average over timescales longer than the characteristic time over which the phase remains constant (approximately 20 seconds for these data). In this section, we will demonstrate these effects by creating and plotting tables over a range of solution intervals. We will also combine both polarizations (with '' gaintype='T' '') to improve the solution signal-to-noise ratio, since we observed that the two polarizations measure approximately the same phase changes.

These commands will create 6 new tables having solution intervals of 3, 6, 12, 24, 48 and 96 seconds (1, 2, 4, 8, 16 and 32 times the data's integration time):
<source lang="python">
# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_3.tb',solint='int',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_6.tb',solint='6s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_12.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_24.tb',solint='24s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_48.tb',solint='48s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_96.tb',solint='96s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
</source>

These commands will plot each of the newly created tables. Run the commands sequentially and use the plotms GUI to iterate through the plots of additional antennas.
<source lang="python">
# in CASA
plotms(vis='selfcal_combine_pol_solint_3.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_6.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_12.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_24.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_48.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_96.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
</source>
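The root-2-per-doubling scaling mentioned above can be tabulated for these six solution intervals (the base SNR here is a hypothetical starting value, and the scaling holds only while the phase is stable over the interval):

```python
import math

# Sketch: expected SNR vs. solution interval, assuming SNR grows as sqrt(solint).
# This breaks down once solint exceeds the ~20 s phase-stability timescale.
base_snr, base_solint = 5.7, 3.0   # hypothetical SNR at the 3 s integration time
for solint in (3, 6, 12, 24, 48, 96):
    est = base_snr * math.sqrt(solint / base_solint)
    print('{0:>2}s -> SNR ~ {1:.1f}'.format(solint, est))
```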

The following figures show examples of the first iteration of each of the above plots.
{|
|-  valign="top" ! scope="row" |
|| [[Image:Selfcal_solint_3s.png|250px|thumb|right|Figure 9A: The phase solutions vs. time of the solint=3s table, first four antennas, colored by SPW.]]
|| [[Image:Selfcal_solint_6s.png|250px|thumb|right|Figure 9B: The phase solutions vs. time of the solint=6s table, first four antennas, colored by SPW.]]
|| [[Image:Selfcal_solint_12s.png|250px|thumb|right|Figure 9C: The phase solutions vs. time of the solint=12s table, first four antennas, colored by SPW.]]
|-
|| [[Image:Selfcal_solint_24s.png|250px|thumb|right|Figure 9D: The phase solutions vs. time of the solint=24s table, first four antennas, colored by SPW.]]
|| [[Image:Selfcal_solint_48s.png|250px|thumb|right|Figure 9E: The phase solutions vs. time of the solint=48s table, first four antennas, colored by SPW.]]
|| [[Image:Selfcal_solint_96s.png|250px|thumb|right|Figure 9F: The phase solutions vs. time of the solint=96s table, first four antennas, colored by SPW.]]
|}
 +
 
 +
=== Comparing the Solution Intervals ===
 +
 
 +
We can see from plotting these solutions that the shortest timescale solutions capture the structure of the phase variations, but with a large dispersion.  If we were to apply these low signal-to-noise ratio (SNR) solutions then we would, on average, be correcting for the large phase changes but we would also introduce random phase errors that could reduce the sensitivity of our observation. Another concern with using low-SNR solutions is that they can overfit the noise in the visibilities, leading to biases in the self-calibrated image. In this guide we adopt a conservative minimum SNR of 6 in order to guard against these biases. 
 +
 
 +
Let's take a closer look at the SNR of the table with the 3 second integration time.  We will use the table toolkit ('''tb''') to extract the SNR of each solution which will return the SNR values as a '''numpy ndarray''' (and the '''numpy ravel''' method will flatten the result into a 1-dimensional array). Then we use numpy and scipy to print some statistical quantities and matplotlib to make a histogram.
 +
[[Image:Selfcal_3s_SNR_hist.png|200px|thumb|right|Figure 10: Distribution of signal-to-noise ratios for the selfcal table with solint = 3s.]]
 +
<source lang="python">
# in CASA
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

tb.open( 'selfcal_combine_pol_solint_3.tb' )
snr_3s = tb.getcol( 'SNR' ).ravel()
tb.close()

plt.hist( snr_3s, bins=50 )

print( 'median = {0}'.format( np.median( snr_3s ) ) )
print( 'P(<=6) = {0}'.format( stats.percentileofscore( snr_3s, 6 ) ) )
</source>

We can see from the output that the median SNR is about 5.7 for this table, and that enforcing our desired minimum SNR of 6 would flag 61% of the solutions.  We want to avoid flagging such a high fraction of solutions, so we need to consider longer solution intervals to raise the SNR.

[[Image:Selfcal_compare_3s_96s.png|200px|thumb|right|Figure 11: The phase solutions vs. time of the solint=3s table (blue) and 96s table (red), for antenna 0, spw 1 and scan 7.]]
The longest solution interval in our examples (96 seconds) has the opposite problem: the intrinsic phase varies faster than the solution interval, so the solutions no longer do a good job of capturing the changes that we are trying to correct. Applying such long-timescale solutions may lead to some improvement in the image, but residual phase errors would remain in the corrected data.  This is particularly obvious if we overplot the two sets of solutions using {{plotms}}. This cannot be done from the command line, but it can be done from the {{plotms}} GUI: plot the first calibration table, click the 'Add Plot' button in the lower left corner of the GUI window, enter the parameters for your second calibration table, and click the 'Plot' button.  The figure to the right shows one such example, comparing the 3 second and 96 second solution intervals.

''' ''Given these considerations, the optimal selfcal parameters use the shortest solution interval for which the signal-to-noise is still sufficient.'' ''' Having identified shortcomings with the shortest (3s) and longest (96s) solution intervals in our set of example tables, we now take a closer look at the SNR of the intermediate tables. The following code compares the SNR histograms and computes the fraction of solutions with SNR below 6.

[[Image:Selfcal_snr_comparison.png|200px|thumb|right|Figure 12: The SNR distribution for several example selfcal tables.]]
<source lang="python">
# in CASA
import matplotlib.pyplot as plt
from scipy import stats

tb.open( 'selfcal_combine_pol_solint_6.tb' )
snr_6s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_12.tb' )
snr_12s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_24.tb' )
snr_24s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_48.tb' )
snr_48s = tb.getcol( 'SNR' ).ravel()
tb.close()

plt.hist( snr_6s, bins=50, normed=True, histtype='step', label='6 seconds' )
plt.hist( snr_12s, bins=50, normed=True, histtype='step', label='12 seconds' )
plt.hist( snr_24s, bins=50, normed=True, histtype='step', label='24 seconds' )
plt.hist( snr_48s, bins=50, normed=True, histtype='step', label='48 seconds' )
plt.legend( loc='upper right' )
plt.xlabel( 'SNR' )

print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_6s, 6 ), '6s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_12s, 6 ), '12s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_24s, 6 ), '24s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_48s, 6 ), '48s' ) )
</source>

These results show that imposing a minimum SNR of 6 on the 6 second table would flag more than 11% of the solutions, while the same cut would flag less than 2% of the solutions in the 12 second table. Based on these numbers, we conclude that the 12 second table provides a reasonable compromise between time resolution and signal-to-noise.

We now re-derive this table, this time imposing our desired minimum SNR of 6.
<source lang="python">
# in CASA

gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_12_minsnr_6.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)
</source>

You may see messages printed to the terminal during the gaincal execution about solutions being flagged, e.g.,

<pre style="background-color:lightgrey;">
3 of 27 solutions flagged due to SNR < 6 in spw=0 at 2017/10/13/20:59:21.0
</pre>

These messages report how many solutions fell below the minimum signal-to-noise ratio set by the ''minsnr'' parameter and were flagged.  You will not have seen them in the first execution of {{gaincal}} because we set ''minsnr=0'', but you are likely to see them in subsequent executions now that ''minsnr=6''. One message is printed per SPW and per solution interval (time bin) whenever one or more solutions are flagged.  In our example there are 27 solutions per interval (one per antenna); had we not combined the polarizations, there would be 54 (one per antenna per polarization).  If you see a large number of these messages, check whether they cluster on a particular antenna, SPW or time range, as this may indicate the presence of bad data. If the messages appear across antennas, times and SPWs, then the signal-to-noise is likely too low overall and more data needs to be combined (see the recommendations above in '''data combination''').
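If many of these messages appear, a quick per-antenna tally of the flagged solutions can reveal whether a single antenna dominates. The sketch below assumes the FLAG and ANTENNA1 columns of the new table have been read with the '''tb''' tool; small mock arrays stand in for the tb.getcol output so that the grouping logic is clear.

<source lang="python">
import numpy as np

# In CASA the arrays would come from the caltable, e.g.:
#   tb.open('selfcal_combine_pol_solint_12_minsnr_6.tb')
#   flags = tb.getcol('FLAG').ravel()
#   ants  = tb.getcol('ANTENNA1')
#   tb.close()
# Mock arrays for illustration: 4 antennas over 3 solution intervals.
ants  = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3])
flags = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0], dtype=bool)

# Tally flagged solutions per antenna; one dominant antenna suggests
# bad data rather than a globally low signal-to-noise ratio.
for ant in np.unique(ants):
    print('antenna {0}: {1} flagged'.format(ant, np.count_nonzero(flags[ants == ant])))
</source>

With real tables, a count concentrated on one antenna (here antenna 2) is a cue to inspect and possibly flag that antenna's data before re-running gaincal.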

=== Applying the First Self-Calibration Table ===

Now that we have converged on a table of self-calibration solutions, we are ready to apply it to our data.
The default {{applycal}} parameters are adequate for applying this table.
<source lang="python">
# in CASA
applycal(vis='obj.ms',gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb')
</source>

Note: if you apply solutions that were created by combining all the spectral windows together (''combine='spw' ''), then in {{applycal}} you must set ''spwmap'' to a list of zeros with one entry per spectral window in order to tell CASA to apply the combined solution to all of the spectral windows -- in this case ''spwmap'' = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0].
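Rather than typing the list by hand, the ''spwmap'' argument can be built programmatically. A minimal sketch (the count of 16 spectral windows matches this data set; the table name in the comment is illustrative):

<source lang="python">
# Number of spectral windows (16 for this data set); this could also be
# queried from the measurement set rather than hard-coded.
nspw = 16

# Map every spectral window to the single combined solution in spw 0.
spwmap = [0] * nspw
print(spwmap)

# The list would then be passed to applycal, e.g. (table name illustrative):
# applycal(vis='obj.ms', gaintable='selfcal_combine_spw.tb', spwmap=spwmap)
</source>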

Check the CASA Logger to confirm the amount of data flagged in this stage.  You will see a message like:
<pre style="background-color:#fffacd;">
INFO applycal   T Jones: In: 86701414 / 403273728  (21.4993955669%) --> Out: 97559200 / 403273728  (24.1918065141%) (selfcal_combine_pol_solint_12_minsnr_6.tb)
</pre>

This means that 21.5% of the data were already flagged before running {{applycal}}, and that a total of 24.2% of the data are flagged after application of the self-calibration solutions.  This is reasonable given our previous calculation of 1.8% flagged solutions, bearing in mind that the percentages reported in the logger count baseline-based flags: a single flagged antenna solution flags every baseline to that antenna.
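As a sanity check, the percentages in the logger line are simply the flagged visibility counts divided by the total:

<source lang="python">
# Numbers taken from the applycal logger message above.
total       = 403273728   # total visibility flags counted
flagged_in  = 86701414    # flagged before applycal
flagged_out = 97559200    # flagged after applycal

pct_in  = 100.0 * flagged_in / total
pct_out = 100.0 * flagged_out / total
print('{0:.1f}% -> {1:.1f}%'.format(pct_in, pct_out))  # 21.5% -> 24.2%
</source>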

Another important line to note in the CASA Logger is this one:
<pre style="background-color:#fffacd;">
INFO FlagVersion Creating new backup flag file called applycal_1
</pre>

This tells you that a copy of the initial flag state (21.5% flagged) was saved.  Should you ever wish to undo this step, you can restore these initial flags using the task {{flagmanager}}.
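For example, restoring the pre-applycal flag state might look like the following sketch (the version name applycal_1 is taken from the logger message above; check the actual name with mode='list' first):

<source lang="python">
# in CASA
# List the flag versions saved for this measurement set:
flagmanager(vis='obj.ms', mode='list')

# Restore the flag state that applycal saved automatically:
flagmanager(vis='obj.ms', mode='restore', versionname='applycal_1')
</source>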

=== Summary of First Round of Self-Calibration ===
1) Create a calibration table with ''solint='int' '' and ''minsnr=0''. Inspect the solutions, focusing on the timescale of the phase variations.
2) Create several calibration tables using multiples of the integration time.
3) Inspect these tables and choose the one that best balances SNR against capturing the real variations in the phase.
4) Apply the chosen solutions.


== Imaging the Self-calibrated Data ==

The next step is to create a new image to assess the effects of the first round of self-calibration.
We do this by running {{tclean}} with similar parameters as before, with two notable exceptions: (1) we instruct {{tclean}} to read visibilities from the CORRECTED_DATA column, since that is the column to which the self-calibration solutions have been applied, and (2) we turn off saving of the model visibilities so as not to overwrite the current MODEL_DATA column.

<source lang="python">
# in CASA
tclean(vis='obj.ms',imagename='obj.selfcal1_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>
*'' datacolumn='corrected' '': To image the visibilities in the measurement set's CORRECTED_DATA column.
*'' savemodel='none' '': To disable writing of the MODEL_DATA column.

During the interactive cleaning, we will again place circular masks around each of the strong sources.
First, mask the rightmost source and click the green arrow. Comparing the sidelobe pattern around the rightmost source with Figure 3A, you may notice that the pattern is now more symmetric.  This is a good sign that the deconvolution will improve.
When focus returns to the viewer window, mask the leftmost double-lobed source and click the green arrow. Comparing this source with Figure 3B shows a similar improvement.  When focus returns to the viewer, proceed to mask the central source, and also check the masks on the first two sources: depending on how you drew the mask for the leftmost source, you may want to enlarge it to include additional low-level emission. Then click the green arrow.

We have now reached the point where we stopped cleaning the initial image, because its residual image appeared to be dominated by artifacts and noise.  That is clearly not the case for this self-calibrated image: there is more convincingly real emission that needs to be cleaned. Specifically, there is a new source about 15 arcsec to the southeast of the central source, and the masks around the leftmost and central sources need to be expanded to include additional emission.  Continue the process of cleaning and masking a few more times until the residual image appears dominated by artifacts and noise, then click the red X to finish cleaning.

=== Inspecting the Self-calibrated Image ===
We can compare the initial and self-calibrated images by loading them both in the viewer.  It is a good idea to set an identical data range and scaling power cycles in the viewer's data display options. This is made easier by checking the box 'Global Color Settings' at the bottom of the 'Data Display Options' window; after checking this box, any adjustments to the data range and scaling power cycles apply to all loaded images. We find that a data range of [-5e-05, 0.01] and scaling power cycles of -2.5 provide a good comparison.

Switching between the two images (shown below) reveals a dramatic improvement in image quality.  The artifact patterns centered on the bright sources have been almost entirely eliminated; around the rightmost source some artifacts remain, but at a level that is no longer problematic for scientific analysis of the central source. This improvement is primarily the result of the better phase calibration, with a possible additional contribution from the flagging of the low-SNR solutions (which may have corresponded to bad data). Re-measuring our fundamental image statistics, we find that the peak flux density has increased from 6.67 to 9.10 mJy while the image RMS has decreased from 18.2 to 8.24 uJy.  This is an improvement in dynamic range of a factor of 3, from 366 to 1104!
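The quoted dynamic ranges follow directly from the peak flux densities and RMS values:

<source lang="python">
# Dynamic range = peak flux density / image RMS (same units).
peak_before, rms_before = 6.67e-3, 18.2e-6   # Jy/beam
peak_after,  rms_after  = 9.10e-3, 8.24e-6   # Jy/beam

dr_before = peak_before / rms_before
dr_after  = peak_after / rms_after
print('DR before: {0:.0f}, after: {1:.0f}'.format(dr_before, dr_after))  # ~366 and ~1104
</source>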

Note: when multiple images have been loaded in the viewer, the displayed statistics may refer to an image other than the one currently shown; you can cycle the statistics through the loaded images independently of the displayed image by using the 'Next' button.

{| 
|-  valign="top" ! scope="row" |
|| [[Image:preliminary_image_before_selfcal.png|350px|thumb|Figure 13A: The 3' clean image made before self-calibration. This is the same image as in Figure 4A but with a different color mapping. ]]
|| [[Image:comparison_image_after_selfcal.png|350px|thumb|Figure 13B: The 3' clean image after applying self-calibration. The color scale matches that of Figure 13A.]]
|}


== Possible Next Steps ==

The items in this section provide guidance on how to proceed after the first round of self-calibration.  Code examples are included in some subsections for added clarity.  Please note that the blocks of code are not intended to be run sequentially; instead, this section should be treated as a decision tree.

=== Decide to Stop ===
We have now completed the first round of self-calibration and seen a dramatic improvement in image quality. The first question to ask is: ''' ''is this good enough to meet our scientific requirements?'' ''' The pursuit of a 'perfect'-looking image is typically unnecessary and can be a large time sink, so continue trying to improve the image only if your science requires it.  If the image is usable as-is, then stop here. In the specific case of the data set used in this tutorial, one round of self-calibration was enough to achieve the scientific goals (determining the morphology of the radio sources in this galaxy cluster).

=== Modify Image Parameters ===
We may accept that the self-calibrated data are good enough for our science requirements, but revisit the imaging parameters used to make the final image.  For example, the 3 arcmin image suffers at a low level from a source outside the image's field of view (the western source circled in Figure 2A). We could address this by repeating the final imaging with an outlier field on this source, or by increasing the image size until the field of view is large enough to include it.  Other parameters worth considering when re-imaging include: using multi-scale deconvolution, changing the value of the Briggs robust weighting, changing the number of terms used for wide-band deconvolution, and changing the gridder (e.g., using awproject instead of standard).
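As an illustration of the outlier-field approach, {{tclean}} accepts an ''outlierfile'' parameter that points to a text file defining one or more additional small images. The sketch below is hypothetical: the file name, outlier image name and phase center are placeholders, not values measured from this data set.

<source lang="python">
# in CASA
# Contents of a hypothetical 'outliers.txt' (phase center is a placeholder):
#   imagename=obj.outlier1
#   imsize=[128,128]
#   phasecenter=J2000 00h00m00.0s +00d00m00.0s

tclean(vis='obj.ms', imagename='obj.selfcal1_clean.3arcmin.outlier', datacolumn='corrected', outlierfile='outliers.txt', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>

The outlier field is deconvolved jointly with the main image, so its sidelobes are removed from the 3 arcmin field without having to enlarge the full image.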

=== Freeze in the Self-calibration Solutions ===
We may want to make a copy of the self-calibrated visibilities by running the task {{split}}.  The calibrated data live in the measurement set's CORRECTED_DATA column; if we select this column with {{split}}, the task writes the calibrated data into the DATA column of the output MS.  This can greatly reduce the size of the measurement set, since two of the three large columns of visibilities (MODEL_DATA and CORRECTED_DATA) will not be present in the output MS.  Running {{split}} in this manner is a good way to archive the final self-calibrated visibilities, prepare the data for combination with other observations, prepare the data for further rounds of self-calibration, etc. Below is an example of this step.
 
<source lang="python">
# in CASA

split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')
</source>

=== Further Rounds of Phase-only Self-calibration ===
There are a few options for how to continue trying to improve the calibration with additional rounds of selfcal. Each of these options is equally valid, but they differ in how you keep track of the data and the solutions.

The fundamental idea behind multiple rounds of selfcal is that we can continue to create a better model and therefore solve for more accurate solutions.  Think back to our initial image, which involved a shallow, conservative clean that modeled only the brightest peaks of a few sources. Now that we have improved the calibration and can clean deeper, we can make a more complete model of the field and feed it back to solve for better solutions.  Typically, successive rounds of selfcal suffer from diminishing returns: the first round makes the largest corrections and yields the most noticeable image improvement. However, there are cases where several rounds are necessary to obtain the desired image quality. If you are unsure whether multiple rounds are needed, one approach is simply to try another round and then compare the image dynamic range and artifacts with your previous result.

===== Option 1 =====
This option involves running {{split}} as above to freeze in the previous self-calibration solutions.  Subsequent iterations of selfcal therefore produce incremental corrections on top of those that have already been applied. This has the advantage that inspecting these 2nd order corrections can help you decide whether to continue or stop further rounds of selfcal. Specifically, if the 2nd order solutions are noise-like, with no discernible structure or offsets from zero phase, then applying them to the visibilities is unlikely to improve the image.

To proceed with this variant of selfcal, simply run {{split}} as above to freeze in the first round of solutions, then proceed with split's output measurement set in the same way we used '''obj.ms''' in this guide, i.e., set the model with {{tclean}}, then run {{gaincal}}, {{applycal}} and finally {{tclean}} again to produce the round 2 self-calibrated image. Below is an example of this procedure.
  
<source lang="python">
# in CASA

split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj_selfcal.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)

applycal(vis='obj_selfcal.ms',gaintable='selfcal_round2.tb')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>

===== Option 2 =====
This option avoids running {{split}} and continues to work with the original measurement set, i.e., '''obj.ms'''.  First run {{tclean}} to set the model, then run {{gaincal}}.  Unlike Option 1, these solutions will not be incremental.  Instead, you are re-solving for the 1st order corrections in the same way as in round 1, except with an updated model.  Then run {{applycal}} and apply the new calibration table instead of the round 1 table.  Finally, run {{tclean}} again to produce the round 2 self-calibrated image. There will not be any incremental solutions to inspect to help decide when to stop, so you will need to compare the selfcal tables and/or the final image properties. Below is an example of this procedure.

<source lang="python">
# in CASA

tclean(vis='obj.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)

applycal(vis='obj.ms',gaintable='selfcal_round2.tb')

tclean(vis='obj.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>

===== Option 3 =====
This option also avoids running {{split}} and continues to work with the original measurement set but, unlike Option 2, allows you to create incremental solutions.  First, run {{tclean}} to set the model. Then run {{gaincal}} and use the ''gaintable'' parameter to provide a list of all previous selfcal tables. In our example, if this were round 2, we would set ''gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb' ''. This pre-applies the round 1 selfcal table 'on the fly' and writes only incremental solutions to the output table.  Then run {{applycal}} and provide a list of all previous selfcal tables plus the new table created by gaincal. Finally, run {{tclean}} again to produce the next self-calibrated image. This variant shares some benefits of Options 1 and 2, but can become unwieldy if you build up long lists of tables over several iterations. Below is an example of this procedure.

<source lang="python">
# in CASA

tclean(vis='obj.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6, gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb')

applycal(vis='obj.ms',gaintable=['selfcal_combine_pol_solint_12_minsnr_6.tb','selfcal_round2.tb'])

tclean(vis='obj.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>

=== Amplitude Self-calibration ===

When phase-only selfcal can no longer improve the image, and you suspect that calibration errors remain, you may want to try amplitude selfcal. Amplitude selfcal works fundamentally the same way as the phase-only examples, except that in {{gaincal}} you set ''calmode='ap' ''.  It is important that the amplitude corrections are solved for incrementally on top of the phase-only corrections, so as not to derive amplitude corrections that compensate for decorrelation; that means using either Option 1 or Option 3 described above, but not Option 2. Additionally, amplitude solutions require fitting an extra parameter, so their SNR may be lower than that of the phase-only solutions. It is generally recommended to normalize the amplitudes by setting ''solnorm=True'', which forces the mean (or median, for better outlier rejection) gain over times and antennas to be 1.0; this helps preserve the flux-density scale, especially after multiple iterations of amplitude selfcal.  Below is an example based on the procedure in Option 1, where we increase the solution interval to 48s to compensate for the intrinsically lower SNR.

<source lang="python">
# in CASA

split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj_selfcal.ms',caltable='selfcal_amplitude.tb',solint='48s',refant='ea24',calmode='ap', solnorm=True, normtype='median', gaintype='T', minsnr=6)

applycal(vis='obj_selfcal.ms',gaintable='selfcal_amplitude.tb')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal_amplitude_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
</source>

Questions about this tutorial? Please contact the [https://science.nrao.edu/observing/helpdesk NRAO Helpdesk].

{{Checked 5.7.0}}

<!--
=== Direction-dependent Self-calibration ===

CASA has limited ability to solve for direction-dependent calibration solutions.
The classical 'peeling' algorithm can be run in CASA, but it can be tedious, and scripting the procedure is highly recommended.
Alternatively, there are several software packages compatible with the measurement set data format that you could use instead.
-->


<!--
Our goal is to create a better model of the source using the data itself to create a better image. The process of calculating corrections and applying them has (if done correctly) reduced the errors, allowing us to make a better image and create a better model. {{Applycal}} applies the solutions to the (raw) DATA column and writes the calibrated data into the CORRECTED_DATA column, thus for determining the next order of corrections during the next round of calibration we need to make the corrected data column the data column.

<source lang="python">
split(vis='obj.ms',outputvis='sc1_obj.ms')
</source>
-->

<!--
 
== Second Round of Self-Calibration ==  
 
== Second Round of Self-Calibration ==  
 
We will go through the same process again: image, gaincal, applycal, and image. This time the image we create at first will be an indication if the solutions we applied made a difference.  
 
We will go through the same process again: image, gaincal, applycal, and image. This time the image we create at first will be an indication if the solutions we applied made a difference.  
Line 223: Line 735:
 
*''solint='15s':'' Since we found a good interval of 15 seconds previously, we move forward with 15s.
 
*''solint='15s':'' Since we found a good interval of 15 seconds previously, we move forward with 15s.
  
Josh: do we want to suggest/show other solution intervals? Do we want to comment on using various intervals -- when is it useful to do several rounds with one interval versus several rounds with each a different solint?
+
'''EM''': Once someone mentioned to me applying solutions of various intervals. Do we want to comment on this -- when is it useful to do several rounds with one interval versus several rounds with each a different solint?
  
 
View the solutions.
 
View the solutions.
Line 236: Line 748:
 
Notes about amplitude corrections (?) (mode='ap' or 'a').
 
Notes about amplitude corrections (?) (mode='ap' or 'a').
  
When self-cal helped, but didn't improve the image enough: (suggestions for other things to do)
+
When selfcal helped, but didn't improve the image enough: (suggestions for other things to do)
=== Preliminary Clean ===
<source lang="python">
# in CASA
tclean(vis='obj.ms',imagename='obj.sc_init',imsize=750,cell='0.24arcsec',weighting='briggs',deconvolver='mtmfs',niter=500,savemodel='modelcolumn',interactive=True)
</source>
We keep the same parameters as before, but reduce the number of iterations and include ''savemodel='modelcolumn' '', which is crucial: it ensures that the model is explicitly saved to the MODEL_DATA column so that {{gaincal}} can compare the data against it. You can verify that the model was saved by plotting the model column in {{plotms}} (''ydatacolumn='model' '').
 
=== Choose a Solution Interval ===

== Appendix ==

=== PB calculations ===

=== PSF calculations ===

=== Variant of the Selfcal Procedure ===
'''Note''': There is an alternative and equally valid variant of the self-calibration procedure that is outlined below but not used in this guide.  This variant avoids the use of {{split}} to freeze-in the selfcal solutions and instead builds up a list containing the calibration table(s) from each round of selfcal.
# Conservatively clean the target and save the model.
# Use {{gaincal}} with a set of parameters to calculate a calibration table, pre-applying all previous selfcal tables 'on-the-fly' with the ''gaintable'' parameter.
# Inspect the new calibration solutions using {{plotms}}.
# Determine if the solutions should be applied, re-calculated with different parameters, or if it is time to stop.
# Append the new solutions to the list of calibration tables.
# Use {{applycal}} to apply the list of tables to the data.
# Start the next round of selfcal.

=== Using combine='spw' and spwmap ===

When the signal-to-noise in a single SPW is too low to obtain good solutions, multiple SPWs can be combined in {{gaincal}} with ''combine='spw' ''; the combined solutions must then be mapped back onto the individual SPWs with the ''spwmap'' parameter of {{applycal}}. This appendix demonstrates the technique.

From listobs, the three target scans are 3, 5, and 7.

We can use ''combine='spw' '' for scans 3 and 5:
<source lang="python">
# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_spw_scan_3and5.tb',solint='int',refant='ea24',calmode='p',gaintype='T',combine='spw',scan='3,5')
</source>
*''caltable='selfcal_combine_spw_scan_3and5.tb':'' Name the calibration tables something intuitive to distinguish each one.
*''solint='int':'' We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
*''refant='ea24':'' The chosen reference antenna.
*''calmode='p':'' To start with phase-only calibration.
*''gaintype='T':'' To combine polarizations.
*''combine='spw' '': To also combine SPWs.
*''scan='3,5' '': To select only scans 3 and 5.
 
For scan 7, the solutions are similar over SPWs 0~5 and over SPWs 12~15, so these two groups can each be combined, while SPWs 6~11 are solved individually.

We can do this by running {{gaincal}} multiple times with ''append=True'', so that each call adds its solutions to the same table.

<source lang="python">
# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_spw_scan7.tb',solint='int',refant='ea24',calmode='p',gaintype='T',combine='spw',spw='0~5',scan='7',append=True)
</source>
*''caltable='selfcal_combine_spw_scan7.tb':'' Name the calibration tables something intuitive to distinguish each one.
*''solint='int':'' We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
*''refant='ea24':'' The chosen reference antenna.
*''calmode='p':'' To start with phase-only calibration.
*''gaintype='T':'' To combine polarizations.
*''combine='spw' '': To combine SPWs.
*''spw='0~5' '': To select only SPWs 0~5.
*''scan='7' '': To select only scan 7.
*''append=True'': To append solutions to the existing table of the same name.
 
<source lang="python">
# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_spw_scan7.tb',solint='int',refant='ea24',calmode='p',gaintype='T',combine='',spw='6~11',scan='7',append=True)
</source>
*''caltable='selfcal_combine_spw_scan7.tb':'' Name the calibration tables something intuitive to distinguish each one.
*''solint='int':'' We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
*''refant='ea24':'' The chosen reference antenna.
*''calmode='p':'' To start with phase-only calibration.
*''gaintype='T':'' To combine polarizations.
*''combine='' '': To turn off combination of SPWs, so that SPWs 6~11 each get their own solutions.
*''spw='6~11' '': To select only SPWs 6~11.
*''scan='7' '': To select only scan 7.
*''append=True'': To append solutions to the existing table of the same name.
 
<source lang="python">
# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_spw_scan7.tb',solint='int',refant='ea24',calmode='p',gaintype='T',combine='spw',spw='12~15',scan='7',append=True)
</source>
*''caltable='selfcal_combine_spw_scan7.tb':'' Name the calibration tables something intuitive to distinguish each one.
*''solint='int':'' We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
*''refant='ea24':'' The chosen reference antenna.
*''calmode='p':'' To start with phase-only calibration.
*''gaintype='T':'' To combine polarizations.
*''combine='spw' '': To turn combination of SPWs back on.
*''spw='12~15' '': To select only SPWs 12~15.
*''scan='7' '': To select only scan 7.
*''append=True'': To append solutions to the existing table of the same name.
 
Note that when combining SPWs, only one set of solutions is written.  These solutions are indexed by the lowest-numbered SPW that was used.  So the solutions created by combining SPWs 0~5 were written out as the solutions of SPW 0, and the solutions created by combining SPWs 12~15 were written out as the solutions of SPW 12. If we were to apply these solutions in the typical fashion, {{applycal}} would not see solutions for SPWs 1~5 or 13~15 and those data would not be corrected.  Therefore, we must use the ''spwmap'' parameter of {{applycal}} to map the combined solutions back onto all the SPWs from which they were derived. The following calls to {{applycal}} demonstrate this.

First, apply the solutions from scans 3 and 5, which combined all of the SPWs.
<source lang="python">
# in CASA
applycal(vis='obj.ms',gaintable='selfcal_combine_spw_scan_3and5.tb',interp='nearest',scan='3,5',spw='',spwmap=16*[0])
</source>
*''gaintable='selfcal_combine_spw_scan_3and5.tb':'' The calibration table to apply.
*''interp='nearest':'' Correct the visibilities using the solution from the closest time interval.
*''scan='3,5' '': To select scans 3 and 5 for correction.
*''spw='' '': To select all SPWs for correction.
*''spwmap=16*[0]'': To map the solutions from SPW 0 to all 16 SPWs.
 
Then apply the solutions from scan 7. We will need to map the solutions from SPW 0 to SPWs 0~5, the solutions from SPWs 6~11 to themselves, and the solutions from SPW 12 to SPWs 12~15.
<source lang="python">
# in CASA
applycal(vis='obj.ms',gaintable='selfcal_combine_spw_scan7.tb',interp='nearest',scan='7',spw='',spwmap=6*[0]+list(range(6,12))+4*[12])
</source>
*''gaintable='selfcal_combine_spw_scan7.tb':'' The calibration table to apply.
*''interp='nearest':'' Correct the visibilities using the solution from the closest time interval.
*''spw='' '': To select all SPWs for correction.
*''scan='7' '': To select scan 7 for correction.
*''spwmap=6*[0]+list(range(6,12))+4*[12]'': To map the SPW 0 solutions to SPWs 0~5, the SPW 6~11 solutions to themselves, and the SPW 12 solutions to SPWs 12~15. (The explicit ''list()'' keeps the expression valid in both Python 2 and Python 3.) -->

Latest revision as of 14:40, 18 October 2020



Introduction

After calibrating a data set using the observed calibrator sources (standard calibration), there may be residual phase and/or amplitude errors in the calibrated data of the target source that degrade the image quality. Self-calibration (selfcal) is the process of using a model of the target source to reduce the phase and amplitude errors in the visibilities of the same target source. While some other CASA guides include a self-calibration step, this guide describes the process in greater detail, including how to optimize parameters to achieve the best result.

Fundamentally, self-calibration is almost identical to standard calibration. Both standard calibration and selfcal work by comparing the visibility data with a model to solve for calibration solutions. With standard calibration, we are usually provided a model of our calibrator source by the observatory (e.g., VLA Flux-density calibrators) or we adopt a simple model (e.g., a 1 Jy point source at the phase center is a common assumption for VLA phase calibrators). With self-calibration we need to set a model for our target source, e.g., by imaging the target visibilities. Then for both standard calibration and selfcal we solve for calibration solutions after making choices about the solution interval, signal-to-noise, etc. When applying the standard calibration solutions we use interpolation to correct the target data, but for selfcal we apply the calibration solutions directly to the target field from which they were derived. For additional details about self-calibration, see Lecture 10 of Synthesis Imaging in Radio Astronomy II (eds. Taylor, Carilli & Perley).

In this guide, we will create a model using the target data (by running tclean) and use this model to solve for and apply calibration solutions (by running gaincal and applycal). This is the most common procedure, but there are other variants that are outside the scope of this guide. For example, your initial model for the target may come from fitting a model to the visibilities instead of imaging, or may be based on a priori knowledge of the target field.

Each "round" of self-calibration presented here will follow the same general procedure:

  1. Create an initial model by conservatively cleaning the target field (see Section The Initial Model).
  2. Use gaincal with an initial set of parameters to calculate a calibration table (see Section Solving for the First Self-Calibration Table).
  3. Inspect the calibration solutions using plotms (see Section Plotting the First Self-Calibration Table).
  4. Optimize the calibration parameters (see Sections Examples of Various Solution Intervals and Comparing the Solution Intervals).
  5. Use applycal to apply the table of solutions to the data (see Section Applying the First Self-Calibration Table).
  6. Use tclean to produce the self-calibrated image (see Section Imaging the Self-calibrated Data).

The data set in this guide is a VLA observation of a massive galaxy cluster, MOO J1506+5137, at z=1.09, and is part of the Massive and Distant Clusters of WISE Survey (MaDCoWS: Gonzalez et al. 2019). MOO J1506+5137 stands out in the MaDCoWS sample due to its high radio activity. From the 1300 highest-significance MaDCoWS clusters in the FIRST footprint, a sample of ~50 clusters with extended radio sources was identified, where "extended" means having at least one FIRST source with a deconvolved size exceeding 6.5" within 1' of the cluster center. This sample was observed with the VLA (PI: Gonzalez, 16B-289, 17B-197; PI: Moravec, 18A-039) as part of a larger study (Moravec et al. 2020a). Through these follow-up observations, it was discovered that MOO J1506+5137 had high radio activity compared to other clusters in the sample, with five radio sources, of which three had complex structure and two were bent-tail sources. The scientific question at hand is: why does this cluster have such high radio activity? The VLA data showcased in this tutorial, combined with other data sets, suggest that the exceptional radio activity among the massive galaxy population is linked to the dynamical state of the cluster (Moravec et al. 2020b).


When to Use Self-calibration

There are instances in which selfcal can improve your image quality and others in which it will not.

A couple of typical cases in which selfcal can help improve the image of the target source:

  • Extensive artifacts from the source of interest due to calibration errors
  • Extensive artifacts from a background source due to direction-independent calibration errors

Some cases in which selfcal will *not* improve the image of the target source:

  • When the image artifacts are due to errors in creating the image (e.g., ignoring wide-field effects)
  • When the image artifacts are due to errors in deconvolution (e.g., ignoring wide-band effects)
  • When the image artifacts are due to unflagged RFI in the target visibilities
  • When the image artifacts are due to insufficient UV coverage (e.g., missing short spacings, snapshot synthesis)
  • When the source(s) are too weak to achieve sufficient signal-to-noise in the calibration solutions
  • When there is a bright outlying source with direction-dependent calibration errors

It can be difficult to determine the origin of an image artifact based solely on its appearance, especially without a lot of experience in radio astronomy. But generally speaking, the errors that selfcal will help address will be convolutional in nature and direction-independent. This means that every source of real emission in the image will have an error pattern of the same shape, and the brightness of the error pattern will scale with the brightness of the source. If the error pattern is symmetric (an even function) then it is most likely dominated by an error in visibility amplitude, and if the error pattern is asymmetric (an odd function) then it is probably due to an error in visibility phase. Selfcal can address both amplitude and phase errors. For a more complete discussion on error recognition, see Lecture 15 of Synthesis Imaging in Radio Astronomy II (eds. Taylor, Carilli & Perley).

In the case of this guide, we believed that these data were a good candidate for selfcal because there were extensive artifacts centered on the source of interest (something very closely resembling Figure 4A) after an initial cleaning. These errors manifested as strong sidelobes radiating out from the sources of strong emission and with a shape that resembles the VLA dirty beam (i.e., a shape that is related to the observation's UV coverage). The artifacts did not lessen as we cleaned more deeply but instead appeared stronger relative to the residual image. Therefore, because phase and/or amplitude calibration errors could be a potential cause for the artifacts, and because the target source is relatively bright, we thought that selfcal could help improve the image quality.


Data for this Tutorial

Obtaining the Data

The original observation has been calibrated using the VLA CASA Pipeline. Therefore, the measurement set (MS) we will be downloading will contain both the raw (uncalibrated) visibilities and the calibrated visibilities, which will appear in the 'DATA' and 'CORRECTED_DATA' columns of the MS, respectively. The raw data alone is 11 GB, and this will grow to 21 GB after applying the calibration.

You may download the calibrated MS directly here: 17B-197.sb34290063.eb34589992.58039.86119096065.tar (21 GB)

Observation Details

First, we will start CASA in the directory containing the data and then collect some basic information about the observation. The task listobs can be used to display the individual scans comprising the observation, the frequency setup, source list, and antenna locations.

# in CASA
listobs(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms')

A portion of the listobs output appears below:

================================================================================
           MeasurementSet Name:  17B-197.sb34290063.eb34589992.58039.86119096065.ms      MS Version 2
================================================================================
   Observer: Prof. Anthony H. Gonzalez     Project: uid://evla/pdb/34052589  
Observation: EVLA
Data records: 5290272       Total elapsed time = 2853 seconds
   Observed from   13-Oct-2017/20:40:09.0   to   13-Oct-2017/21:27:42.0 (UTC)

   ObservationID = 0         ArrayID = 0
  Date        Timerange (UTC)          Scan  FldId FieldName             nRows     SpwIds   Average Interval(s)    ScanIntent
  13-Oct-2017/20:40:09.0 - 20:45:03.0     1      0 J1549+5038              550368  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [SYSTEM_CONFIGURATION#UNSPECIFIED]
              20:45:06.0 - 20:50:03.0     2      0 J1549+5038              555984  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              20:50:06.0 - 20:59:27.0     3      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              20:59:30.0 - 21:00:51.0     4      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:00:54.0 - 21:10:15.0     5      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              21:10:18.0 - 21:11:39.0     6      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:11:42.0 - 21:21:03.0     7      1 MOO_1506+5136          1050192  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [OBSERVE_TARGET#UNSPECIFIED]
              21:21:06.0 - 21:22:27.0     8      0 J1549+5038              151632  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_AMPLI#UNSPECIFIED,CALIBRATE_PHASE#UNSPECIFIED]
              21:22:30.0 - 21:27:03.0     9      2 3C286                   511056  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_BANDPASS#UNSPECIFIED,CALIBRATE_FLUX#UNSPECIFIED]
              21:27:06.0 - 21:27:42.0    10      2 3C286                    67392  [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]  [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] [CALIBRATE_BANDPASS#UNSPECIFIED,CALIBRATE_FLUX#UNSPECIFIED]
           (nRows = Total number of rows per scan) 
Fields: 3
  ID   Code Name                RA               Decl           Epoch   SrcId      nRows
  0    NONE J1549+5038          15:49:17.468534 +50.38.05.78820 J2000   0        1561248
  1    NONE MOO_1506+5136       15:06:20.353700 +51.36.53.63460 J2000   1        3150576
  2    NONE 3C286               13:31:08.287984 +30.30.32.95886 J2000   2         578448
Spectral Windows:  (16 unique spectral windows and 1 unique polarization setups)
  SpwID  Name           #Chans   Frame   Ch0(MHz)  ChanWid(kHz)  TotBW(kHz) CtrFreq(MHz) BBC Num  Corrs          
  0      EVLA_C#A0C0#0      64   TOPO    4488.000      2000.000    128000.0   4551.0000       12  RR  RL  LR  LL
  1      EVLA_C#A0C0#1      64   TOPO    4616.000      2000.000    128000.0   4679.0000       12  RR  RL  LR  LL
  2      EVLA_C#A0C0#2      64   TOPO    4744.000      2000.000    128000.0   4807.0000       12  RR  RL  LR  LL
  3      EVLA_C#A0C0#3      64   TOPO    4872.000      2000.000    128000.0   4935.0000       12  RR  RL  LR  LL
  4      EVLA_C#A0C0#4      64   TOPO    5000.000      2000.000    128000.0   5063.0000       12  RR  RL  LR  LL
  5      EVLA_C#A0C0#5      64   TOPO    5128.000      2000.000    128000.0   5191.0000       12  RR  RL  LR  LL
  6      EVLA_C#A0C0#6      64   TOPO    5256.000      2000.000    128000.0   5319.0000       12  RR  RL  LR  LL
  7      EVLA_C#A0C0#7      64   TOPO    5384.000      2000.000    128000.0   5447.0000       12  RR  RL  LR  LL
  8      EVLA_C#B0D0#8      64   TOPO    5488.000      2000.000    128000.0   5551.0000       15  RR  RL  LR  LL
  9      EVLA_C#B0D0#9      64   TOPO    5616.000      2000.000    128000.0   5679.0000       15  RR  RL  LR  LL
  10     EVLA_C#B0D0#10     64   TOPO    5744.000      2000.000    128000.0   5807.0000       15  RR  RL  LR  LL
  11     EVLA_C#B0D0#11     64   TOPO    5872.000      2000.000    128000.0   5935.0000       15  RR  RL  LR  LL
  12     EVLA_C#B0D0#12     64   TOPO    6000.000      2000.000    128000.0   6063.0000       15  RR  RL  LR  LL
  13     EVLA_C#B0D0#13     64   TOPO    6128.000      2000.000    128000.0   6191.0000       15  RR  RL  LR  LL
  14     EVLA_C#B0D0#14     64   TOPO    6256.000      2000.000    128000.0   6319.0000       15  RR  RL  LR  LL
  15     EVLA_C#B0D0#15     64   TOPO    6384.000      2000.000    128000.0   6447.0000       15  RR  RL  LR  LL


Initial Data Inspection

Since we have obtained the calibrated visibilities for the calibrator fields, we can now take this opportunity to investigate the phase stability in these observations. It is easier to do this inspection on a bright calibrator field where the signal-to-noise is high, and we will assume that the same degree of stability is present throughout the observation. In this section, we will characterize the magnitude and timescale of the phase fluctuations that we will be trying to correct for with selfcal.

Looking at the output of listobs we see that there is a long scan on the amplitude calibrator, 3C 286 (field ID 2). A feature of the VLA CASA pipeline is that it only applies scan-averaged calibration solutions to the calibrator fields, so it will not have corrected for any variations within a scan. We will plot the calibrated phase vs. time on a single baseline:

# in CASA
plotms(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms', xaxis='time', yaxis='phase', ydatacolumn='corrected', field='2', antenna='ea05', correlation='RR,LL', avgchannel='64', iteraxis='baseline', coloraxis='spw')
Figure 1: The phase vs. time on the ea04-ea05 baseline for field 2.
  • vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms' : To plot visibilities from the pipeline calibrated MS.
  • xaxis='time', yaxis='phase' : To set time as the x-axis and phase as the y-axis of the plot.
  • ydatacolumn='corrected' : To plot the calibrated data (from the CORRECTED_DATA column).
  • field='2' : To select visibilities from field ID 2, i.e., the amplitude calibrator 3C 286.
  • antenna='ea05', iteraxis='baseline' : To view a single baseline at a time. Any single antenna can be chosen here.
  • correlation='RR,LL' : To plot both parallel-hand correlation products.
  • avgchannel='64' : To average all channels in each SPW to increase signal-to-noise. Since the bandpass solutions have been applied to these data the channels will average coherently.
  • coloraxis='spw' : To plot each SPW as a different color, which will make it easier to distinguish them.


Use the 'Next Iteration' button of the plotms GUI to cycle through additional baselines. You should see plots that look similar to the example image of the ea04-ea05 baseline (see Figure 1). The plotted data have a mean of zero phase because the pipeline calibration solutions have already been applied. The phase is seen to vary with time over a large range (in some cases more than +/- 100 degrees) and the variations appear to be smooth over time scales of a few integrations. For a given baseline, we can see that all of the spectral windows and both correlations approximately follow the same trend with time. Additionally, the magnitude of the phase variations is larger for the higher frequency spectral windows, a pattern that is consistent with changes in atmospheric density.

Optional extra steps: Create and inspect similar plots using scan 2 of the phase calibrator field (J1549+5038). Repeat for baselines to other antennas.

Splitting the Target Visibilities

CASA calibration tasks always operate by comparing the visibilities in the DATA column to the source model, where the source model is given by either the MODEL_DATA column, a model image or component list, or the default model of a 1 Jy point source at the phase center. For example, the calibration pipeline used the raw visibilities in the DATA column to solve for calibration tables and then created the CORRECTED_DATA column by applying these tables to the DATA column. With this context in mind, an essential step for self-calibration is to split the calibrated visibilities for the target we want to self-calibrate, meaning that the visibilities of the target source get copied from the CORRECTED_DATA column of the pipeline calibrated MS to the DATA column of a new measurement set. Self-calibration will work in the same way as the initial calibration, i.e., by comparing the pipeline calibrated visibilities (which are now in the DATA column of the new split MS) to a model, solving for self-calibration tables, and then creating a new CORRECTED_DATA column by applying the self-calibration tables. If we did not split the data, we would need to constantly re-apply all of the calibration tables from the pipeline (both on-the-fly when computing the self-calibration solutions and then again when applying the self-calibration), which would make the process much more cumbersome.

# in CASA
split(vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms',datacolumn='corrected',field='1', correlation='RR,LL', outputvis='obj.ms')
  • vis='17B-197.sb34290063.eb34589992.58039.86119096065.ms' : The input visibilities for split. Here, these are the visibilities produced by the pipeline.
  • datacolumn='corrected' : To copy the calibrated visibilities from the input MS.
  • field='1' : The field ID of the target we want to self-calibrate.
  • correlation='RR,LL' : To select only the parallel hand correlations. This will make the output data set smaller by about a factor of two.
  • outputvis='obj.ms' : The name of the new measurement set that split will create.


The Initial Model

Now that we understand the data a bit better and know that we need to apply self-calibration, we will begin to work our way through the steps outlined in the Introduction (create initial model, create calibration table, inspect solutions, determine best solution interval, applycal, split, next round). We first begin with creating an initial model which gaincal will compare to the data in order to create the calibration table that will be applied to the data in the first round of self-calibration.

Preliminary Imaging

Prior to solving for self-calibration solutions we need to make an initial model of the target field, which we will generate by deconvolving the target field using the task tclean. There are several imaging considerations that we should address when making this model (discussed below). See the VLA CASA Guide on Imaging for more details about these parameters.

Image field-of-view: Ideally, we want our self-calibration model to include all of the sources present in the data (e.g., sources near the edge of the primary beam or in the first sidelobe). This is typically achieved by making an image large enough to encompass all of the apparent sources, or by making a smaller image of the target plus one or more outlier fields. We will start with a large dirty image of the entire primary beam (PB) in order to better understand the sources in the galaxy cluster plus any background sources that will need to be cleaned. A rule of thumb for the VLA is that the FWHM of the PB in arcminutes is approximately 42 * (1 GHz / nu). At the center frequency of our C-band observations (5.5 GHz) the VLA primary beam is ~8' FWHM. In order to image the entire PB and the first sidelobe we need an image field of view that is about four times larger, so we will choose a 32' field-of-view for our initial image (see casadocs and the VLA OSS for further discussion of primary beams).
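As a quick sanity check, the rule of thumb above can be evaluated in plain Python (outside CASA); the variable names are our own, and the numbers simply reproduce the estimates quoted in the text:

```python
# VLA primary-beam rule of thumb: FWHM [arcmin] ~ 42 * (1 GHz / nu)
nu_ghz = 5.5                      # C-band center frequency used in this guide
pb_fwhm_arcmin = 42.0 / nu_ghz    # ~7.6', i.e., roughly 8'
fov_arcmin = 4 * pb_fwhm_arcmin   # PB plus first sidelobe -> ~30.5'; we round up to 32'
print(pb_fwhm_arcmin, fov_arcmin)
```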

Primary beam mask: By default, the 'pblimit' parameter will add an image mask everywhere the value of the primary beam is less than 10%. We want to turn off this mask, as it would prevent us from viewing the image over our desired field of view. This mask is turned off by setting the 'pblimit' parameter to a negative value. The actual value of this parameter is unimportant for the imaging we will be doing (i.e., the 'standard', 'widefield' and 'wproject' gridders).

Image cell size: There are a few different ways to estimate the synthesized beam size for these C-band observations taken in the B-configuration. For one, we can use this table in the VLA Observational Status Summary, which gives a resolution of 1.0". It is recommended to choose a cell size that will result in at least 5 image pixels across the FWHM of the synthesized beam, therefore we require a cell size of 0.20"/pixel or smaller.

Image size in pixels: We can convert our desired field-of-view to pixels using the cell size: 32' * (60" / 1') * (1 pixel / 0.20") = 9600 pixels.
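The cell-size and image-size arithmetic can be checked the same way (plain Python, with our own variable names):

```python
# At least 5 pixels across the synthesized beam sets the cell size,
# and the field of view divided by the cell size sets the image size.
beam_arcsec = 1.0                                 # B-configuration C-band beam (OSS table)
cell_arcsec = beam_arcsec / 5.0                   # -> 0.2 arcsec/pixel
fov_arcmin = 32.0                                 # chosen field of view
imsize = round(fov_arcmin * 60.0 / cell_arcsec)   # 32' * 60"/' / 0.2"/pix = 9600 pixels
print(cell_arcsec, imsize)
```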

Wide-field effects: Large images may require additional consideration due to non-coplanar baselines (the W-term). In CASA, this is usually addressed by turning on the W-project algorithm. See this link to CASAdocs for a more detailed discussion.

We can estimate whether our image requires W-projection by calculating the recommended number of w-planes using this formula taken from page 392 of the NRAO 'white book',

[math] N_{wprojplanes} = \left ( \frac{I_{FOV}}{\theta_{syn}} \right ) \times \left ( \frac{I_{FOV}}{1\, \mathrm{radian}} \right ) [/math]

where I_FOV is the image field-of-view (32') and theta_syn is the synthesized beam size (1.0"). Working in units of arcseconds ((32 x 60) / 1.0) * ((32 x 60) / 206265) evaluates to wprojplanes ~ 18 so we will choose to turn on the w-project algorithm with gridder='widefield' and set wprojplanes=18. If the recommended number of planes had been <= 1 then we would not have needed to turn on the wide-field gridder, and we could have used gridder='standard' instead.
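The arithmetic above can be reproduced in plain Python (variable names are ours):

```python
# Recommended number of w-projection planes for a 32' image with a 1.0" beam:
# N = (FOV / theta_syn) * (FOV / 1 radian), everything in arcseconds.
fov_arcsec = 32.0 * 60.0      # image field of view
beam_arcsec = 1.0             # synthesized beam
arcsec_per_radian = 206265.0
nplanes = (fov_arcsec / beam_arcsec) * (fov_arcsec / arcsec_per_radian)
wprojplanes = round(nplanes)  # ~18; since this is > 1, W-projection is recommended
print(wprojplanes)
```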

We will now create a preliminary dirty image using these parameters.

# in CASA
tclean(vis='obj.ms',imagename='obj.dirty.9600pix', datacolumn='data', imsize=9600, cell='0.2arcsec', pblimit=-0.1, gridder='widefield', wprojplanes=18 )
  • datacolumn='data' : To image the visibilities in the measurement set's DATA column.
  • imsize=9600: The number of pixels across one side of the (square) image.
  • cell='0.2arcsec': The size of an image pixel (see above).
  • pblimit=-0.1: We set this to a small negative number to turn off the PB mask, allowing us to view the entire image.
  • gridder='widefield': To turn on the w-project algorithm.
  • wprojplanes=18: The number of w-planes to use for w-projection.

Figure 2A below shows the resulting dirty image, and Figure 2B shows a zoom-in of the central region of the image. Several outlying sources are detectable; the four brightest are marked with magenta circles.

Figure 2A: The 32' dirty image. Scaling Power Cycles has been set to -2 in the viewer's 'Data Display Options' to stretch the color scale. The locations of the four brightest far-field sources are marked with magenta circles.
Figure 2B: The same image as in Figure 2A after zooming in on the central objects.

We have a few options about how to deal with these outlying sources:

  • Proceed with the self-calibration procedure using a large field-of-view that includes all the outlying sources.
  • Proceed with the self-calibration procedure using a small field-of-view that includes only the central sources, and add an outlier field on each of the outlying sources.
  • Proceed with the self-calibration procedure using a small field-of-view that includes only the central sources, ignoring the outlying sources.

In this guide, we will first choose to ignore the outlying sources in order to present a simplified self-calibration procedure. This is also what was chosen for the scientific image and analysis because the artifacts from the outlying sources did not strongly affect the area of scientific interest (inner 3'). For more information about the other options described above, see the casaguide on VLA Imaging and the CASAdocs page for the tclean task. Sometimes, more advanced techniques are used for outlying sources such as UV-subtraction, peeling or direction-dependent calibration, but these are outside the scope of this guide.

Creating the Initial Model

After deciding how to deal with the outlying sources, our next step is to make the initial model that we will use for self-calibration. There are a few additional parameters that apply to this step, discussed below.

Image field-of-view: For this science case we are only interested in sources within ~1.5' radius of the cluster center. Since we have chosen to ignore the outlying sources at this stage, we will proceed with an image field-of-view of 3'.
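As a quick sanity check, the number of pixels needed for this field of view follows directly from dividing the field of view by the pixel size (a sketch using the 3' field of view and 0.2" cell adopted in this guide):

```python
# in CASA
# Sketch: pixels per side for a 3 arcmin field of view at 0.2 arcsec per pixel.
fov_arcsec = 3 * 60.0          # 3 arcmin field of view, in arcsec
cell_arcsec = 0.2              # pixel size (tclean's cell parameter)
imsize = int(fov_arcsec / cell_arcsec)
print(imsize)                  # 900
```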

Wide-field effects: We repeat the calculation of wprojplanes from the Initial Imaging section using our new field of view of 3'. This results in wprojplanes ~ 1, so we turn off the correction for non-coplanar baselines by setting gridder='standard' .

Wide-band imaging: Our images will combine data from all spectral windows, spanning a frequency range of about 4.5-6.5 GHz (a fractional bandwidth of about 36%). Each source's amplitude may vary substantially over this frequency range, due to either the source's intrinsic spectral variation and/or the frequency dependence of the VLA's primary beam. To mitigate these errors during deconvolution we will use deconvolver='mtmfs' and nterms=2. For further discussion of wide-band imaging, see the CASA documentation for wide-band imaging and the VLA Imaging CASAguide.
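The fractional bandwidth quoted above can be verified with a quick calculation (the 4.5 and 6.5 GHz band edges are approximate):

```python
# in CASA
# Fractional bandwidth = total bandwidth divided by the band center frequency.
f_lo = 4.5e9                       # approximate lower band edge, Hz
f_hi = 6.5e9                       # approximate upper band edge, Hz
f_center = 0.5 * (f_lo + f_hi)     # 5.5 GHz
frac_bw = (f_hi - f_lo) / f_center
print(round(100 * frac_bw))        # ~36 percent
```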

Imaging weights: When constructing the initial model, especially when there are large image artifacts, it is recommended to use "robust" imaging weights. In CASA, this is enabled with weighting='briggs' , and then choosing a value for the robust parameter between -2 and +2. Values of robust near +2 (approximately natural weighting) often result in large positive PSF sidelobes while robust near -2 (approximately uniform weighting) often produce large negative PSF sidelobes. Since many types of image artifacts scale with PSF sidelobe levels, a reasonable compromise is often around robust=0.

Image deconvolution: We will need to deconvolve (clean) this image in order to produce a model of the field. We will want to control the cleaning depth and masking interactively, so we set interactive=True. We also must choose the number of clean iterations with the niter parameter. A suggested starting value is niter= 1000 iterations, but this can be changed interactively after we start cleaning.

Saving the model: After deconvolution, there are a couple of options for how to write the model back to the measurement set, controlled by the savemodel parameter. The default is 'none', which does not save the model. It is essential to change this default, or else the selfcal procedure will not work properly. The option savemodel='virtual' saves a copy of the image to the SOURCE subtable of the measurement set, to be used later for on-the-fly model visibility prediction; this option is sometimes used for very large data sets but is not generally recommended. The other option, savemodel='modelcolumn', is the recommended setting and the one we will use in this guide: it predicts the model visibilities after cleaning and saves the result to the MODEL_DATA column.

Now we are ready to create our first clean image. This image will provide the starting model that is required by the calibration routines, and it will showcase why we need self-calibration for these data.

# in CASA
tclean(vis='obj.ms',imagename='obj.prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')
  • datacolumn='data' : To image the visibilities in the measurement set's DATA column.
  • imsize=900: To create an image with a 3' field-of-view.
  • cell='0.2arcsec': The size of an image pixel.
  • pblimit=-0.1: To remove the PB mask; see previous sections.
  • gridder='standard': We select the default gridder (assumes coplanar baselines).
  • deconvolver='mtmfs': We will turn on the wide-band deconvolution algorithm as discussed above.
  • nterms=2: The number of Taylor terms for wide-band deconvolution.
  • niter=1000: Set a relatively large number of iterations as a starting point.
  • interactive=True: So we can interactively place the mask.
  • weighting='briggs': Turn on 'robust' image weighting.
  • robust=0: Set the value of the robust weighting.
  • savemodel='modelcolumn': To enable writing the MODEL_DATA column to the MS after imaging. **important**

During the interactive cleaning, place conservative circular masks around each of the strong sources in turn, starting with the brightest:

  • First mask the rightmost source (Figure 3A), press the green circle arrow in the CASA viewer to perform one cycle of cleaning, and wait for focus to return to the viewer. The viewer will then show you the current residual image (i.e., the image after subtracting some flux from within the first mask).
  • Then mask the leftmost double-lobed source (Figure 3B) and press the green circle arrow in the CASA viewer. This will perform the next cleaning cycle, after which focus will return to the viewer. Cleaning has now taken place inside the masks of both sources and the brightest source in the new residual image will be in the middle.
  • Finally, mask the middle source (Figure 3C), press the green circle arrow again, and wait for CASA to complete the next cycle of cleaning.

At this point, the residual emission is at about the same level as the artifacts so we stop cleaning (press the red X in the CASA viewer; see Figure 3D for an example of the artifacts). It is strongly recommended to only mask and clean emission that is believed to be real so as not to include artifacts in the model.

Figure 3A: The mask for rightmost source.
Figure 3B: The mask for the left double-lobed source.
Figure 3C: The mask for the middle source.
Figure 3D: The rightmost source after 3 iterations of adding masks. The emission around the current mask is characteristic of artifacts.

Figure 4A below shows the resulting clean image (obj.prelim_clean.3arcmin.image.tt0) that we will try to improve through the use of self-calibration. For reference, we will measure some simple image figures of merit to compare with the image after self-calibration. Specifically, we measure the peak intensity (maximum pixel value; see Figure 4B) in the image to be 6.67 mJy, and the image noise (RMS of pixel values; see Figure 4C) in a source-free region to be 18.2 uJy. The ratio between the peak and the noise, 366, is called the dynamic range.
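To make the arithmetic explicit, the dynamic range quoted above can be reproduced from the two measurements (values in Jy/beam):

```python
# in CASA
# Dynamic range = peak intensity divided by the source-free RMS noise.
peak = 6.67e-3                 # peak intensity, Jy/beam (6.67 mJy)
rms = 18.2e-6                  # source-free RMS, Jy/beam (18.2 uJy)
dynamic_range = peak / rms
print(round(dynamic_range))    # ~366
```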

Figure 4A: The preliminarily cleaned image. We have used Scaling Power Cycles -2 and divided the minimum of the data range by 10 to emphasize the artifacts.
Figure 4B: Measuring the maximum value by drawing a rectangular region over the entire image.
Figure 4C: Measuring the source-free RMS by drawing a rectangular region near the source of interest and large enough to measure unbiased statistics (i.e., many synthesized beams), but avoiding any obvious real sources of emission.

Verifying the Initial Model

There have been reported instances where CASA fails to save the model visibilities when using interactive clean. It is crucial that the model is saved correctly, otherwise self-calibration will use the 'default' model of a 1 Jy point source at the phase center. The default model may be very different from your target field and we do not want to carry out the selfcal procedure with this incorrect model. Therefore, it is recommended to verify that the model has been saved correctly by inspecting the model visibilities.

# in CASA
plotms(vis='obj.ms', xaxis='UVwave', yaxis='amp', ydatacolumn='model', avgchannel='64', avgtime='300')
Figure 5: The model visibilities.
  • vis='obj.ms' : To plot visibilities from the split MS.
  • xaxis='UVwave', yaxis='amp' : To set UV-distance in wavelengths as the x-axis and amplitude as the y-axis of the plot.
  • ydatacolumn='model' : To plot the model visibilities (from the MODEL_DATA column).
  • avgchannel='64' : To average all channels per SPW.
  • avgtime='300' : To average in time in chunks of 300 seconds.

The resulting plot should resemble Figure 5 on the right. This plot shows that some baselines see up to 15 mJy of flux, but that the source becomes resolved on the longer baselines. Note that the visibilities plotted here are for correlations RR and LL since we dropped RL and LR with the split task. However, had we retained RL and LR, they would equal zero since we only made a Stokes I model image.

This model that has been plotted is clearly not the default model of a 1 Jy point source (if it was, all amplitudes would be at 1 Jy) and so we have verified that tclean has correctly written the MODEL_DATA column of the MS.


First Round of Self-Calibration

Solving for the First Self-Calibration Table

For this first round of selfcal we will use the model that we just created above and compare it to the data in order to create a table of corrections to apply to the data. We are now ready to solve for these first selfcal solutions. We will explore various parameters of the task gaincal in order to learn more about the data and settle on the optimal parameters. The most relevant parameters are discussed below:

Solution interval: This is controlled with the solint parameter and is one of the most fundamental parameters for self-calibration. The value of this parameter can vary between 'int' which stands for integration and will be the time of a single integration for that data set (corresponding to 3 seconds for this data set) up to 'inf' for infinite (meaning either an entire scan or the entire observation, depending on the value of the combine parameter). We typically want to choose the shortest solution interval for which we can achieve adequate signal-to-noise in the calibration solutions.

Data combination: The data can be combined in multiple ways to improve signal-to-noise, but if the target source is bright enough to obtain good calibration solutions on a short timescale without data combination, then these options are not necessary. If low signal-to-noise messages appear across antennas, times, and SPWs, the parallel-hand correlations (if present) can be combined by setting gaintype='T' instead of gaintype='G', which generally increases the signal-to-noise by a factor of √2. If gaincal still produces many low signal-to-noise messages, one can try combining multiple SPWs with combine='spw' (provided the SPWs are at similar frequencies), which generally increases the solution signal-to-noise by the square root of the number of SPWs combined. Combining scans during self-calibration is not usually recommended.
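The expected SNR gains from these combinations can be sketched as follows (the starting SNR of 3.0 is a hypothetical value for illustration; 16 SPWs as in this data set):

```python
# in CASA
import math

# Hypothetical per-solution SNR with gaintype='G' (illustrative value only).
snr_G = 3.0
# Combining the two parallel-hand polarizations (gaintype='T') gains ~sqrt(2).
snr_T = snr_G * math.sqrt(2)
# Further combining all 16 SPWs (combine='spw') gains ~sqrt(16) on top of that.
snr_spw = snr_T * math.sqrt(16)
print('%.2f %.2f' % (snr_T, snr_spw))
```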

Amplitude and phase correction: Because large phase errors will result in incoherent averaging and lead to lower amplitudes, we always want to start with phase-only self-calibration. We achieve this by setting calmode='p' . In later rounds of selfcal, after the phases have been well corrected, we can try calmode='ap' to include an amplitude component in the solutions. When solving for amplitudes, we may also want to consider normalizing them with the solnorm parameter.

Reference antenna: As with standard calibration, we want to choose a reference antenna for the calibration solutions. It is generally recommended to choose one that is near the center of the array but not heavily flagged. In order to determine which one to use, use plotants to plot the positions of the antennas and choose one near the center. To find the percent data flagged per antenna, you could run flagdata with mode='summary'.

Signal-to-noise ratio (SNR): The default minimum SNR in gaincal is 3.0, but this can be adjusted with the minsnr parameter. Solutions below this minimum are flagged in the output calibration table. Sometimes we want to increase this minimum, e.g., to 5.0, to reject noisy solutions. Alternatively, we may want to lower this minimum, e.g., to zero, usually for inspection purposes.

We will now create our initial self-calibration table. This will not be the final table for the first round of self-calibration, but rather, a temporary table that we will inspect to help determine the optimal parameters.

# in CASA
gaincal(vis='obj.ms',caltable='selfcal_initial.tb',solint='int',refant='ea24',calmode='p',gaintype='G',minsnr=0)
  • caltable='selfcal_initial.tb': Name the calibration tables something intuitive to distinguish each one.
  • solint='int': We choose a solution interval equal to the integration time (3 seconds) in order to get a sense of the structure and timescale of the variations.
  • refant='ea24': The chosen reference antenna.
  • calmode='p': To start with phase only calibration.
  • gaintype='G': To solve for the polarizations separately.
  • minsnr=0: To turn off flagging of low-SNR solutions, so that we can inspect all the solutions.

You may see several messages printed to the terminal while gaincal is running, e.g.,

Found no unflagged data at:   (time=2017/10/13/20:50:07.5 field=0 spw=0 chan=0)

This means that all the input data was flagged for this solution interval. This is generally harmless unless there are far fewer solutions in the output table than you were expecting.

It is recommended to check the logger messages written by gaincal to find the total number of solution intervals, i.e.,

INFO gaincal	Calibration solve statistics per spw:  (expected/attempted/succeeded):
INFO gaincal	  Spw 0: 561/522/522
INFO gaincal	  Spw 1: 561/522/522
...  ...          ...

This shows that gaincal successfully found solutions for most of the solution intervals.

Plotting the First Self-Calibration Table

To view these solutions, we use plotms.

Figure 6: The phase solutions vs. time for the first 9 antennas, colored by polarization.
# in CASA
plotms(vis='selfcal_initial.tb',xaxis='time',yaxis='phase',iteraxis='antenna',gridrows=3, gridcols=3, coloraxis='corr')
  • xaxis='time' & yaxis='phase' : View the phase variations over time with respect to antenna 24.
  • iteraxis='antenna' : Create separate plots of the corrections for each antenna.
  • gridrows=3 & gridcols=3: It can be helpful to view multiple plots at once, as we will be stepping through several plots. In this case, 9 plots per page.
  • coloraxis='corr' : To use different colors when plotting different polarizations (R and L will be black and red, respectively).

Iterate through these plots using the 'Next Iteration' button (green triangle) to inspect the solutions for all antennas. Some noteworthy observations include:

  • There are large, coherent phase changes of more than 100 degrees.
  • The timescale of these changes is fairly short, about 20 seconds (zoom in on the variations for a particular antenna to see this).
  • The scatter in these solutions is large (a few tens of degrees), indicating low signal-to-noise.
  • The phase changes in the two polarizations appear to match each other.
  • There is some interesting behavior in the 3rd scan for antennas ea15 and ea27.

Note: The plot of the reference antenna, ea24, is not unusual and should be considered to be consistent with zero phase.

It is apparent from these plots that we can combine polarizations to improve the solution signal-to-noise ratio, since we observed that the solutions for the two polarizations were very similar.

The next thing we want to understand is if we can combine SPWs, and if so, which ones. We can plot the previous solutions in a slightly different way to help answer this question. We will view these solutions again using plotms, but this time we will color the solutions by SPW.

Figure 7: The phase solutions vs. time, colored by spectral window, second iteration.
# in CASA
plotms(vis='selfcal_initial.tb',xaxis='time',yaxis='phase',iteraxis='antenna',gridrows=3, gridcols=3, coloraxis='spw')
  • xaxis='time' & yaxis='phase' : View the phase variations over time with respect to antenna 24.
  • iteraxis='antenna' : Create separate plots of the corrections for each antenna.
  • gridrows=3 & gridcols=3: It can be helpful to view multiple plots at once, as we will be stepping through several plots. In this case, 9 plots per page.
  • coloraxis='spw' : To use different colors when plotting different SPWs.

Iterate again through these plots using the 'Next Iteration' button (green triangle) to inspect the solutions for all antennas. When you get to ea15 it should be clear that the solutions are not the same for all SPWs. This is also true but less obvious for ea27 due to the limited number of colors available to plotms.

We can inspect this further in the following plot:

Figure 8: The phase solutions vs. time for antenna ea15, colored by scan, second iteration.
# in CASA
plotms(vis='selfcal_initial.tb',xaxis='time',yaxis='phase',antenna='ea15',iteraxis='spw',gridrows=3, gridcols=3, coloraxis='scan')
  • xaxis='time' & yaxis='phase' : View the phase variations over time with respect to antenna 24.
  • antenna='ea15' : To select only antenna ea15.
  • iteraxis='spw' : Create separate plots of the corrections for each SPW.
  • gridrows=3 & gridcols=3: To view multiple plots at once. In this case, 9 plots per page.
  • coloraxis='scan' : To use different colors when plotting different scans.

The first set of 9 plots should have a similar pattern. On the next iteration, this pattern should continue for SPWs 9~11, but then change for SPWs 12~15. The signal-to-noise for SPW 13 is also noticeably lower. If we create these plots for ea27 we will see a similar pattern, only this time the pattern is constant over SPWs 0~5 and then it changes to a new pattern that is constant for SPWs 6~15. For both ea15 and ea27, the change only happens in the third of the three scans.

Unfortunately, since all SPWs do not show the same phase solutions, it will not be trivial to combine them to increase the signal-to-noise ratio of the solutions. Therefore, we will continue without combining SPWs.

Examples of Various Solution Intervals

Now that we have decided how to handle SPW combination, we will move on to consider time averaging. We observed the previous solutions to display large, coherent phase changes but also to have significant scatter due to low signal-to-noise. We could increase the signal-to-noise by √2 for each doubling of the solution interval, but it does not make sense to average over timescales longer than the characteristic time over which the phase remains constant (approximately 20 seconds for these data). In this section, we will demonstrate these effects by creating and plotting tables over a range of solution intervals. We will also combine both polarizations (with gaintype='T' ) to improve the solution signal-to-noise ratio, since we observed the two polarizations to measure approximately the same phase changes.
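The square-root scaling above can be sketched for the solution intervals we are about to try (the base SNR is arbitrary; only the relative improvement factors matter):

```python
# in CASA
import math

# Sketch: solution SNR grows as the square root of the solution interval.
base_snr = 1.0                           # SNR at the 3 s integration time (arbitrary)
for solint in [3, 6, 12, 24, 48, 96]:
    factor = math.sqrt(solint / 3.0)     # improvement relative to solint='int'
    print('%ds: %.2fx' % (solint, factor * base_snr))
```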

These commands will create 6 new tables having solution intervals of 3, 6, 12, 24, 48 and 96 seconds (1, 2, 4, 8, 16 and 32 times the data's integration time).

# in CASA
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_3.tb',solint='int',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_6.tb',solint='6s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_12.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_24.tb',solint='24s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_48.tb',solint='48s',refant='ea24',calmode='p',gaintype='T', minsnr=0)
gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_96.tb',solint='96s',refant='ea24',calmode='p',gaintype='T', minsnr=0)

These commands will plot each of the newly created tables. Run the commands sequentially and use the plotms GUI to iterate through the plots of additional antennas.

# in CASA
plotms(vis='selfcal_combine_pol_solint_3.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_6.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_12.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_24.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_48.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')
plotms(vis='selfcal_combine_pol_solint_96.tb',yaxis='phase',iteraxis='antenna',gridrows=2, gridcols=2, coloraxis='spw')

The following figures show examples of the first iteration of each of the above plots.

Figure 9A: The phase solutions vs. time of the solint=3s table, first four antennas, colored by SPW.
Figure 9B: The phase solutions vs. time of the solint=6s table, first four antennas, colored by SPW.
Figure 9C: The phase solutions vs. time of the solint=12s table, first four antennas, colored by SPW.
Figure 9D: The phase solutions vs. time of the solint=24s table, first four antennas, colored by SPW.
Figure 9E: The phase solutions vs. time of the solint=48s table, first four antennas, colored by SPW.
Figure 9F: The phase solutions vs. time of the solint=96s table, first four antennas, colored by SPW.

Comparing the Solution Intervals

We can see from plotting these solutions that the shortest timescale solutions capture the structure of the phase variations, but with a large dispersion. If we were to apply these low signal-to-noise ratio (SNR) solutions then we would, on average, be correcting for the large phase changes but we would also introduce random phase errors that could reduce the sensitivity of our observation. Another concern with using low-SNR solutions is that they can overfit the noise in the visibilities, leading to biases in the self-calibrated image. In this guide we adopt a conservative minimum SNR of 6 in order to guard against these biases.

Let's take a closer look at the SNR of the table with the 3 second solution interval. We will use the table toolkit (tb) to extract the SNR of each solution, which will return the SNR values as a numpy ndarray (the numpy ravel method flattens the result into a 1-dimensional array). Then we use numpy and scipy to print some statistical quantities and matplotlib to make a histogram.

Figure 10: Distribution of signal-to-noise ratios for the selfcal table with solint = 3s.
# in CASA
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

tb.open( 'selfcal_combine_pol_solint_3.tb' )
snr_3s = tb.getcol( 'SNR' ).ravel()
tb.close()

plt.hist( snr_3s, bins=50 )

print( 'median = {0}'.format( np.median( snr_3s ) ) )
print( 'P(<=6) = {0}'.format( stats.percentileofscore( snr_3s, 6 ) ) )

We can see from the output that the median SNR is about 5.7 for this table and that enforcing our desired minimum SNR of 6 would flag 61% of the solutions. We want to avoid flagging such a high fraction of solutions and so we need to consider longer solution intervals to raise the SNR.

Figure 11: The phase solutions vs. time of the solint=3s table (blue) and 96s table (red), for antenna 0, spw 1 and scan 7.

But the longest solution interval in our examples (96 seconds) has a different problem. Specifically, we can see that the intrinsic phase is varying faster than the solution interval, so the solutions no longer do a good job of capturing the changes that we are trying to correct for. Applying such long timescale solutions may lead to some improvement in the image, but we would be leaving residual phase errors in the corrected data. This is particularly obvious if we overplot the solutions using plotms. This cannot be done from the command line but can be done from the plotms GUI: plot the first calibration table, click the 'Add Plot' button in the lower left corner of the GUI window, enter the parameters for your second calibration table and click the 'Plot' button. The figure to the right shows one such example, comparing the 3 second solution intervals with the 96 second solution intervals.

Given these considerations, we suggest that the optimal selfcal parameters will use the shortest possible interval for which the signal-to-noise is also sufficient. Having identified shortcomings with the shortest (3s) and longest (96s) solution intervals in our set of example tables, we will now take a closer look at the SNR of the intermediate tables. The following code will compare the SNR histograms and compute the fraction of solutions less than a SNR of 6.

Figure 12: The SNR distribution for several example selfcal tables.
# in CASA
import matplotlib.pyplot as plt
from scipy import stats

tb.open( 'selfcal_combine_pol_solint_6.tb' )
snr_6s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_12.tb' )
snr_12s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_24.tb' )
snr_24s = tb.getcol( 'SNR' ).ravel()
tb.close()

tb.open( 'selfcal_combine_pol_solint_48.tb' )
snr_48s = tb.getcol( 'SNR' ).ravel()
tb.close()

plt.hist( snr_6s, bins=50, density=True, histtype='step', label='6 seconds' )
plt.hist( snr_12s, bins=50, density=True, histtype='step', label='12 seconds' )
plt.hist( snr_24s, bins=50, density=True, histtype='step', label='24 seconds' )
plt.hist( snr_48s, bins=50, density=True, histtype='step', label='48 seconds' )
plt.legend( loc='upper right' )
plt.xlabel( 'SNR' )

print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_6s, 6 ), '6s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_12s, 6 ), '12s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_24s, 6 ), '24s' ) )
print( 'P(<=6) = {0}  ({1})'.format( stats.percentileofscore( snr_48s, 6 ), '48s' ) )

These results show that imposing a minimum SNR of 6 on the 6 second table will flag more than 11% of the solutions and that the same restriction will flag less than 2% of the 12 second table's solutions. Based on these numbers, we conclude that the 12 second table provides a reasonable compromise between time resolution and signal-to-noise.

We now regenerate this table, this time imposing our desired minimum SNR condition of 6.

# in CASA

gaincal(vis='obj.ms',caltable='selfcal_combine_pol_solint_12_minsnr_6.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)

You may see messages printed to the terminal during the gaincal execution about solutions being flagged, e.g.,

3 of 27 solutions flagged due to SNR < 6 in spw=0 at 2017/10/13/20:59:21.0

This is telling us how many solutions have been flagged for being below the minimum signal-to-noise ratio set by the minsnr parameter. You will not have seen these messages in the first execution of gaincal because we set minsnr=0, but you are likely to see them in subsequent executions now that minsnr=6. One such message is printed per SPW and per solution interval (time bin) if one or more of the solutions is flagged. In our example, there are 27 total solutions (one per antenna), but had we not combined polarizations there would be 54 total solutions (one per antenna per polarization). If you see a large number of these messages, it can be useful to determine whether they correspond to the same antenna, SPW, or time, as this may indicate the presence of bad data. If these messages appear across antennas, times, and SPWs, then the signal-to-noise is likely too low and more data needs to be combined (see the recommendations under data combination above).

Applying the First Self-Calibration Table

Now that we have converged on a table of self-calibration solutions we are ready to apply these to our data. The default applycal parameters are adequate to apply this table.

# in CASA
applycal(vis='obj.ms',gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb')

Note: If you decide to apply solutions that were created by combining all the spectral windows together (combine='spw'), then in applycal you will have to set spwmap to a list containing one zero per spectral window, in this case spwmap=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], in order to tell CASA to apply the combined solution to all of the spectral windows.
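Rather than typing out the list, the spwmap can be built programmatically (a sketch for the 16 SPWs in this data set):

```python
# in CASA
# Sketch: build the spwmap list for solutions combined across all SPWs.
nspw = 16                # number of spectral windows in this data set
spwmap = [0] * nspw      # map every SPW onto the combined solution (SPW 0)
print(spwmap)
```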

Check the CASA Logger to confirm the amount of data flagged in this stage. You will see a message like:

INFO applycal	   T Jones: In: 86701414 / 403273728   (21.4993955669%) --> Out: 97559200 / 403273728   (24.1918065141%) (selfcal_combine_pol_solint_12_minsnr_6.tb)

This means that 21.5% of the data were already flagged prior to running applycal, and that a total of 24.2% of data are now flagged after application of the self-calibration solutions. This is reasonable, given our previous calculation of 1.8% flagged solutions and since the percentages reported in the logger are based on counting baseline-based flags.
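The percentages in the log line can be reproduced from the reported visibility counts:

```python
# in CASA
# Reproduce the percentages from the applycal log line above.
flagged_before = 86701414       # flagged visibilities before applycal ("In")
flagged_after = 97559200        # flagged visibilities after applycal ("Out")
total = 403273728               # total visibilities

print(round(100.0 * flagged_before / total, 1))                    # 21.5
print(round(100.0 * flagged_after / total, 1))                     # 24.2
print(round(100.0 * (flagged_after - flagged_before) / total, 1))  # ~2.7 newly flagged
```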

Another important line from the CASA Logger to pay attention to is this one:

INFO FlagVersion	Creating new backup flag file called applycal_1

This is telling you that a copy of the initial (21.5%) flags was saved. You can restore the initial state of the flags using the task flagmanager should you ever wish to undo this step.

Summary of First Round of Self-Calibration

1) Create a calibration table with solint='int' and minsnr=0. Inspect the solutions, focusing on the timescale of the phase variations.
2) Create several calibration tables using multiples of the integration time.
3) Inspect these tables and choose the one that best balances SNR against capturing the real variations in the phase.
4) Apply the chosen solutions.


Imaging the Self-calibrated Data

The next thing that you may want to do is create a new image to assess the effects of the first round of self-calibration. We can do this by running tclean with parameters similar to those used previously, with two notable exceptions: (1) we instruct tclean to read visibilities from the CORRECTED_DATA column, since that is the column to which the self-calibration solutions have been applied, and (2) we turn off saving of the model visibilities so as not to overwrite the current MODEL_DATA column.

# in CASA
tclean(vis='obj.ms',imagename='obj.selfcal1_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
  • datacolumn='corrected' : To image the visibilities in the measurement set's CORRECTED_DATA column.
  • savemodel='none': To disable writing the MODEL_DATA column.

During the interactive cleaning, we will again place circular masks around each of the strong sources. First we mask the rightmost source and click the green arrow. Comparing the sidelobe pattern around the rightmost source with Figure 3A, you may notice that the pattern is now more symmetric. This is a good sign that the deconvolution will improve. When focus returns to the viewer window, mask the leftmost double-lobed source and click the green arrow. Compare this source with Figure 3B and you will notice a similar improvement. When focus returns to the viewer, proceed to mask the central source. Also check on the masks of the first two sources. Depending on how you drew the mask for the leftmost source, you may want to enlarge the mask to include additional, low-level emission. Then click the green arrow.

We have now reached the point where we stopped cleaning the initial image, because the residual image appeared to be dominated by artifacts and noise. This is clearly not the case for this self-calibrated image, as there is now more convincingly real emission that needs to be cleaned. Specifically, there is a new source about 15 arcsec to the southeast of the central source, and the masks around the leftmost and central sources need to be expanded to include additional emission. Continue the process of cleaning and masking a couple more times until the residual image appears dominated by artifacts and noise. Then click the red X to finish cleaning.

== Inspecting the Self-calibrated Image ==

We can compare the initial and self-calibrated images by loading them both in the viewer. It is a good idea to set an identical data range and scaling power cycles in the viewer's data display options. This is made easier by checking the box 'Global Color Settings' at the bottom of the 'Data Display Options' window: after checking this box, any adjustments to the data range and scaling power cycles will apply to all loaded images. We find that a data range of [-5e-05, 0.01] and scaling power cycles of -2.5 provide a good comparison.

Switching between the two images (shown below) reveals a dramatic improvement in image quality. Specifically, the artifact patterns centered on the bright sources have been almost entirely eliminated; around the rightmost source, the remaining artifacts have been reduced to a level that is no longer problematic for scientific analysis of the central source. This is primarily the result of the improved phase calibration, and potentially also of the extra flagging of the low-SNR solutions (which may have corresponded to bad data). Re-measuring our fundamental image statistics, we see that the peak flux density has increased from 6.67 to 9.10 mJy and the image RMS has decreased from 18.2 to 8.24 uJy. This is an improvement in dynamic range of a factor of ~3, from 366 to 1104!
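The dynamic-range numbers quoted above follow directly from the measured peaks and RMS values. As a quick sanity check in plain Python (this is simple arithmetic, not a CASA task):

```python
# Dynamic range = peak brightness / image RMS, using the values
# measured from the two images in the viewer's statistics panel.
peak_before, rms_before = 6.67e-3, 18.2e-6   # Jy/beam
peak_after,  rms_after  = 9.10e-3, 8.24e-6   # Jy/beam

dr_before = peak_before / rms_before
dr_after  = peak_after / rms_after

print(round(dr_before), round(dr_after))            # 366 1104
print(round(dr_after / dr_before, 1))               # 3.0
```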

Note: when multiple images are loaded in the viewer, the statistics panel may refer to an image other than the one currently displayed; use the 'Next' button to cycle the statistics through the loaded images independently of the displayed image.

Figure 13A: The 3' clean image made before self-calibration. This is the same image as in Figure 4A but with a different color mapping.
Figure 13B: The 3' clean image after applying self-calibration. The color scale matches that of Figure 13A.


== Possible Next Steps ==

The items in this section are intended to provide some guidance on how to proceed after the first round of self-calibration. Code examples are included in some subsections for added clarity. Please note that the code blocks are not intended to be run sequentially; instead, this section should be treated as a decision tree.

=== Decide to Stop ===

We have now completed the first round of self-calibration and seen a dramatic improvement in image quality. The first question to ask is: is this good enough to meet our scientific requirements? The pursuit of a 'perfect'-looking image is typically unnecessary and can be a large time sink, so it is important to continue trying to improve the image only if necessary. If the image is usable as-is, then we should stop here. In this specific case (with the data set used in this tutorial), one round of self-calibration was enough to achieve the scientific goals (determining the morphology of the radio sources in this galaxy cluster).

=== Modify Image Parameters ===

We may want to accept that the self-calibrated data are good enough for our science requirements, but revisit the imaging parameters used to make our final image. For example, the 3 arcmin image suffers at a low level from a source outside the image's field of view (the western source circled in Figure 2A). We could address this by repeating the final imaging with an outlier field on this source, or by increasing the image size until the field of view is large enough to include the source. Other parameters we might want to revisit if we decide to re-image include: using multi-scale deconvolution, changing the value of the Briggs robust weighting, changing the number of terms used for wide-band deconvolution, and changing the gridder (e.g., using awproject instead of standard).

=== Freeze in the Self-calibration Solutions ===

We may want to make a copy of the self-calibrated visibilities by running the task split. The calibrated data is in the measurement set's CORRECTED_DATA column, and if we select this column with the split task then it will place the calibrated data in the DATA column of the output MS. This can greatly reduce the size of the measurement set since two of the three large columns of visibilities (MODEL_DATA and CORRECTED_DATA) will not be present in the output MS. Running split in this manner can be a great way to archive the final self-calibrated visibilities, prepare the data for combination with other observations, prepare the data for further rounds of self-calibration, etc. Below is an example of this step.

# in CASA
split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')

=== Further Rounds of Phase-only Self-calibration ===

There are a few options for how to continue trying to improve the calibration with additional rounds of selfcal. Each of these options is considered to be equally valid, but there are differences in how one keeps track of the data and the solutions.

The fundamental idea behind multiple rounds of selfcal is that we can continue to create a better model and therefore solve for more accurate solutions. Think back to our initial image, which involved a shallow, conservative clean that only modeled the brightest peaks of a few sources. Now that we have improved the calibration and can clean deeper, we can make a more complete model of the field and feed this back to solve for better solutions. Typically, successive rounds of selfcal suffer from diminishing returns, wherein the first round makes the largest corrections and yields the most noticeable image improvement. However, there are cases where several rounds are necessary to obtain the desired image quality. If you are unsure if multiple rounds are needed, one approach is just to try another round and then compare the image dynamic range and artifacts to your previous result.
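As a rough bookkeeping aid for the "try another round and compare" approach, one could record the dynamic range after each round and continue only while it keeps improving. This is a plain-Python sketch, not a CASA task, and the 5% threshold is an arbitrary illustrative choice:

```python
def dynamic_range(peak_jy, rms_jy):
    """Dynamic range of an image: peak brightness over RMS noise."""
    return peak_jy / rms_jy

def worth_another_round(dr_prev, dr_new, min_gain=1.05):
    """Heuristic: continue only if the latest round improved the
    dynamic range by more than min_gain (here, 5%). The threshold
    is an arbitrary choice for illustration; artifact inspection
    by eye should always accompany this kind of check."""
    return dr_new / dr_prev > min_gain

# Round 1 in this guide took the DR from ~366 to ~1104:
print(worth_another_round(366, 1104))   # True: a clear improvement
```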

==== Option 1 ====

This option involves running split as above to freeze in the previous self-calibration solutions. Subsequent iterations of selfcal therefore produce incremental corrections on top of those already applied. This has the advantage that inspecting these second-order corrections can help when deciding whether to continue or stop further rounds of selfcal: if the second-order solutions are noise-like, with no discernible structure or offset from zero phase, then applying them to the visibilities is unlikely to improve the image.

To proceed with this variant of selfcal, simply run split as above to freeze in the first round of solutions, then proceed with split's output measurement set in the same way we used obj.ms in this guide, i.e., set the model with tclean, then run gaincal, applycal and finally tclean again to produce the round 2 self-calibrated image. Below is an example of this procedure.

# in CASA

split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj_selfcal.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)

applycal(vis='obj_selfcal.ms',gaintable='selfcal_round2.tb')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
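To illustrate the "noise-like" criterion described above, here is a plain-Python sketch (not a CASA task): if the incremental round 2 phase solutions scatter about zero with no systematic offset, further rounds are unlikely to help. In practice you would judge this by eye by plotting the calibration table, as done earlier in this guide; the 2-degree threshold below is an arbitrary illustrative choice.

```python
def phases_look_noise_like(phases_deg, max_mean_offset=2.0):
    """Crude illustration: incremental selfcal phase solutions whose
    mean is consistent with zero (here, within an arbitrary 2 deg)
    carry little remaining correction. Real inspection should be
    done by eye on the calibration table, not with a test like this."""
    mean = sum(phases_deg) / len(phases_deg)
    return abs(mean) < max_mean_offset

print(phases_look_noise_like([-1.2, 0.8, 0.3, -0.5, 0.9]))   # True: scatter about zero
print(phases_look_noise_like([9.0, 11.0, 10.5, 9.5, 10.2]))  # False: systematic offset
```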

==== Option 2 ====

This option avoids running split and continues to work with the original measurement set, i.e., obj.ms. First run tclean to set the model, then run gaincal. Unlike Option 1, these solutions will not be incremental. Instead, you are basically re-solving for the 1st order corrections the same way as in round 1, except you are using an updated model to do so. Then you run applycal and apply the new calibration table instead of the round 1 table. Finally, run tclean again to produce the round 2 self-calibrated image. There will not be any incremental solutions to inspect to help decide when to stop, so you will need to compare the selfcal tables and/or the final image properties. Below is an example of this procedure.

# in CASA

tclean(vis='obj.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6)

applycal(vis='obj.ms',gaintable='selfcal_round2.tb')

tclean(vis='obj.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')

==== Option 3 ====

This option also avoids running split and continues to work with the original measurement set, but unlike Option 2, allows you to create incremental solutions. First, run tclean to set the model. Then run gaincal and use the gaintable parameter to provide a list of all previous selfcal tables. In our example, if this was round 2, we would set gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb'. This pre-applies the round 1 selfcal table and writes only incremental solutions to the output table. Then run applycal and provide a list of all previous selfcal tables plus the new table created by gaincal. Finally, run tclean again to produce the next self-calibrated image. This variant shares some benefits of Options 1 and 2, but can become unwieldy if you build up long lists of tables after several iterations. Below is an example of this procedure.

# in CASA

tclean(vis='obj.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj.ms',caltable='selfcal_round2.tb',solint='12s',refant='ea24',calmode='p',gaintype='T', minsnr=6, gaintable='selfcal_combine_pol_solint_12_minsnr_6.tb')

applycal(vis='obj.ms',gaintable=['selfcal_combine_pol_solint_12_minsnr_6.tb','selfcal_round2.tb'])

tclean(vis='obj.ms',imagename='obj.selfcal2_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')

=== Amplitude Self-calibration ===

When phase-only selfcal is no longer able to improve the image, and if you think there are still remaining calibration errors, then you may want to try amplitude selfcal. Amplitude selfcal works fundamentally the same as the phase-only examples, except that in gaincal you set calmode='ap'. It is important that the amplitude corrections be solved for incrementally to the phase-only corrections, so as not to apply amplitude corrections that compensate for decorrelation. That means using either Option 1 or Option 3 described above, but not Option 2. Additionally, amplitude solutions require fitting an extra parameter, so their SNR may be lower than that of the phase-only solutions. It is generally recommended to include amplitude normalization by setting solnorm=True, which forces the mean (or median, for improved outlier rejection) gain over times and antennas to be 1.0; this typically helps preserve the flux-density scale, especially after multiple iterations of amplitude selfcal. Below is an example based on the procedure in Option 1, where we increase the solution interval to 48s to compensate for the intrinsically lower SNR.

# in CASA

split(vis='obj.ms', datacolumn='corrected', outputvis='obj_selfcal.ms')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal1_prelim_clean.3arcmin', datacolumn='data', imsize=900, cell='0.2arcsec', pblimit=-0.1, gridder='standard', deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='modelcolumn')

gaincal(vis='obj_selfcal.ms',caltable='selfcal_amplitude.tb',solint='48s',refant='ea24',calmode='ap', solnorm=True, normtype='median', gaintype='T', minsnr=6)

applycal(vis='obj_selfcal.ms',gaintable='selfcal_amplitude.tb')

tclean(vis='obj_selfcal.ms',imagename='obj.selfcal_amplitude_clean.3arcmin', datacolumn='corrected', gridder='standard', cell='0.2arcsec', imsize=900, pblimit=-0.1, deconvolver='mtmfs', nterms=2, niter=1000, interactive=True, weighting='briggs', robust=0, savemodel='none')
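The effect of solnorm=True with normtype='median' can be sketched in plain Python. This mimics the idea only, not CASA's actual implementation: the gain amplitudes are divided by their median, so the normalized solutions scatter around 1.0 and the overall flux-density scale is preserved.

```python
import statistics

def normalize_gains(amps):
    """Divide gain amplitudes by their median so the normalized
    solutions scatter around 1.0 (a sketch of the idea behind
    solnorm=True with normtype='median'; not CASA's actual code)."""
    med = statistics.median(amps)
    return [a / med for a in amps]

amps = [0.95, 1.00, 1.05, 1.20, 0.90]
print(normalize_gains(amps))                       # median of the result is 1.0
print(statistics.median(normalize_gains(amps)))    # 1.0
```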



Questions about this tutorial? Please contact the NRAO Helpdesk.

Last checked on CASA Version 5.7.0.