MG0414+0534 P-band Spectral Line Tutorial - CASA 5.0.0

From CASA Guides
Revision as of 17:46, 16 April 2018 by Akapinsk (talk | contribs) (Adding structure to the guide (sections, subsections))

This CASA Guide is for Version 5.0.0 of CASA.


This tutorial describes how to use CASA 5.0.0 to reduce spectral-line data in the low-frequency P-band of the VLA (230 – 470 MHz). The goal is to make an image cube containing HI 21cm absorption against the strong radio continuum of the gravitationally lensed radio galaxy MG0414+0534. As a result of the high redshift of z=2.6365, the HI absorption signal in MG0414+0534 is redshifted to an observed frequency of 390.597 MHz. The HI absorption in MG0414+0534 was previously imaged with the VLA by Moore, Carilli & Menten 1999 (ApJ, 510, 87), before the upgrade to the WIDAR system.
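As a quick sanity check, the observed frequency quoted above follows directly from the HI 21cm rest frequency and the redshift (a short Python snippet, runnable outside CASA):

```python
# Observed frequency of the redshifted HI 21cm line toward MG0414+0534:
#   nu_obs = nu_rest / (1 + z)
nu_rest = 1420.405751   # MHz, rest frequency of the HI 21cm line
z = 2.6365              # redshift of MG0414+0534

nu_obs = nu_rest / (1.0 + z)
print("Observed HI frequency: %.3f MHz" % nu_obs)  # ~390.597 MHz
```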

Observing strategy

To perform P-band spectroscopy, there are two important considerations for planning the observations:

  • Use a bandpass calibrator that is strong enough to accurately calibrate the frequency-dependent gain variations. This is particularly important for most HI 21cm absorption projects, which are typically performed against strong radio continuum sources. As a rule-of-thumb, use tcal > tobj × (Sobj/Scal)², where t is the exposure time and S the source flux density. This rule-of-thumb serves to avoid introducing excessive additional noise in the spectrum. In addition, if very high spectral dynamic range is needed (i.e., when expecting a ratio between the detection limit and the radio continuum of about 1:10,000 or more), consider observing a bandpass calibrator several times during your run to be able to correct for time-varying bandpass changes, which scale with the continuum emission in the target field. See this page for more information.
  • Use a bandwidth that is wide enough to perform accurate self-calibration. A wide bandwidth is particularly important for fields with relatively weak continuum sources. For strong continuum sources, a narrower bandwidth can be used to avoid excessive RFI.
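The rule-of-thumb above is easy to evaluate numerically. In the sketch below, the target flux density is taken from this tutorial (3.3 Jy for MG0414+0534), while the calibrator flux density and target exposure time are illustrative placeholders:

```python
# Minimum bandpass-calibrator exposure time from the rule-of-thumb
#   t_cal > t_obj * (S_obj / S_cal)**2
t_obj = 3600.0   # s, exposure time on the target (illustrative)
S_obj = 3.3      # Jy, target continuum flux density (MG0414+0534 at P-band)
S_cal = 42.0     # Jy, bandpass-calibrator flux density (illustrative)

t_cal_min = t_obj * (S_obj / S_cal) ** 2
print("Minimum calibrator exposure: %.1f s" % t_cal_min)
```

A calibrator much stronger than the target thus needs only a short scan to avoid adding noticeable extra noise to the target spectrum.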

The P-band test-data on MG0414+0534 that we use in this tutorial were obtained using a large bandwidth. This is to ensure that good bandpass solutions could be obtained, and that self-calibration could be performed accurately. However, due to the strong radio continuum of MG0414+0534 (3.3 Jy at 390 MHz) and the large amounts of RFI across the entire band, we only use a small fraction of the total band for data reduction and analysis in this tutorial.

Obtaining the raw data

Given the large volume of the data (142 Gb), you can skip to Section "Initial flagging" to download a 15 Gb subset to work on.

We will use test-data that was taken in a hybrid configuration when the VLA array was moved from B-config to A-config. The data set can be downloaded from the NRAO archive by searching for the following Project Code:


This returns a long list with test-data that are publicly available. Our observations were performed on 14 Sept 2016:


Note that this observation was duplicated on 15 September 2016. This second day of observations can be found under filename: TSUB0001.sb32720781.eb32773507.57646.24443702547

For the purpose of this tutorial, we only reduce and image the first data set. The second data set can be reduced in an identical way (apart from the flagging, given that the RFI conditions most likely changed). During the imaging stage you can then combine both reduced data sets.

To download the data, fill in your email, select either the "SDM-BDF (all files)" or “MS” option, and check the box next to the data sets that you want. The practical difference between selecting the MeasurementSet (MS) and Science Data Model (SDM) is that for MS the first step in the data reduction described below, namely reading in the SDM data to save it as a MeasurementSet, is already done by the archival engine. Note that you can also opt to download the data as a tar-file by clicking the appropriate box.

Optionally, you can request to discard data marked as ‘bad’ by clicking the box “Apply Telescope Flags”, which gets rid of data taken during times of instrument calibration, shadowing, slewing, etc. However, it also applies all flags on the science data that were automatically created during the observations. Therefore, a safer option is to not apply any flags before downloading the data, but first inspect and subsequently apply the flags generated during the observations using flagcmd, as part of the data reduction plan.

Clicking "Get My Data" forwards you to the next page, where you should choose the delivery method (either downloading over the internet, or having a hard drive with the data shipped to you). If you opt to retrieve the data over the internet, wait until you get an email confirming that the data is ready for download.

Examining and Editing the Data

Loading data into CASA

Start CASA by typing

casa

on the command line. This should start an interactive CASA Python (iPython) session, and open a separate log window. To guarantee that the data reduction and imaging procedure described below works, make sure you are using CASA version 5.0.0. While older versions may work as well for the purpose of this tutorial, it is good to read the instructions on how to download and install the latest version of CASA. We will begin by importing our data from the binary format (SDM-BDF) into the MeasurementSet format, which is the standard for CASA data. For this, we use importevla:

importevla(asdm='TSUB0001.sb32720781.eb32763188.57645.263958564814', vis='', flagpol=False, applyflags=False, savecmds=True, outfile='flagfile.txt')

In this case, we do not apply the flags that were created as part of the observations, but we will write them out to a flagfile.txt file. We will inspect and apply the flags as follows:

flagcmd(vis='', action='plot')
flagcmd(vis='', action='apply')

As an alternative to flagcmd, given that we saved the flags to flagfile.txt, we can also use

flagdata(vis='', mode='list', inpfile='flagfile.txt', action='apply')

to apply the flags.

One of the flag commands clips all data points that have a 0-value, as these most likely contain no recorded science data. This is normally done with the settings correlation='ABS_RR' and correlation='ABS_LL'. Because the VLA P-band and 4-band systems use linear XX and YY polarization, rather than the circular RR and LL polarization of the other VLA bands, you will encounter an error message when running flagcmd. We can get around this issue by manually clipping the 0-data for the XX and YY polarizations with flagdata:

flagdata(vis='', mode='clip', correlation='ABS_XX,ABS_YY', clipzeros=True, action='apply')

Note: It is good practice to also carefully read the Operator Log that was created by the operator on duty during the observations. This can provide additional information on data that should be flagged manually during the data reduction stage. These Operator logs can be found here.

Now that the data is read in and the first flagging is performed, we will inspect the content of our data set using listobs

listobs(vis='', listfile='listobs.txt')

This normally prints an overview of the data to the CASA logger, but with the parameter listfile='listobs.txt' this information is also written to the file listobs.txt.

Initial flagging: calibrator data

When plotting the data in plotms (plotting amplitude against frequency), it is immediately clear that our observing band contains lots of Radio Frequency Interference (RFI), which is unavoidable at these low frequencies. The situation actually looks worse than it is, because ‘Gibbs ringing’ causes the blending of strong RFI signal into adjacent channels (Fig.1). To reduce the effect of Gibbs ringing, we first Hanning smooth the data (please note that this can take a while to run on the entire 142 Gb data set):

hanningsmooth(vis='', datacolumn='data', outputvis='')
Fig.1 Reducing the effects of Gibbs ringing by Hanning smoothing the data. Shown is only a small fraction of the band, both before and after Hanning smooth. The amplitude scale is the same in both plots.
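The smoothing that hanningsmooth applies along the frequency axis is a three-point convolution with weights 1/4, 1/2, 1/4. A minimal sketch of its effect on a single-channel RFI spike (edge channels left unchanged for simplicity):

```python
# Three-point Hanning smoothing of a spectrum along the channel axis.
def hanning_smooth(spectrum):
    """Convolve with the [0.25, 0.5, 0.25] kernel; edge channels are kept as-is."""
    out = list(spectrum)
    for i in range(1, len(spectrum) - 1):
        out[i] = 0.25 * spectrum[i - 1] + 0.5 * spectrum[i] + 0.25 * spectrum[i + 1]
    return out

spike = [0.0, 0.0, 1.0, 0.0, 0.0]   # strong RFI confined to one channel
print(hanning_smooth(spike))        # the spike spreads into the two adjacent channels only
```

The trade-off is a loss of spectral resolution by roughly a factor of two, which is why Hanning smoothing is done once, up front.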

Upon further inspection in “plotms” (using ea01, field 0, corr xx,yy, and coloraxis baseline), it is clear that there is a lot of RFI across most of the 64 MHz (or 43,000 km/s) band that we used for the observations. Luckily, the region around the expected HI absorption line at 390.6 MHz is relatively clean of RFI. Because our target is strong enough in the continuum to perform a good self-calibration, we select this relatively clean part of the band, centered around the expected HI line. Note that there were two IFs with different spectral resolution included in the observations. We need only one of the two IFs, and choose the one with the highest spectral resolution. Moreover, we only need the sources 3C48 (which we will use for bandpass, gain and flux calibration) and our target MG0414+0534, hence we select field='3,4'. This approach has the additional advantage that it greatly reduces our data volume, bringing it down to about 15 Gb.

split(vis='', outputvis='', datacolumn='data', field='3,4', spw='17:20~485', keepflags=False)

This 15 Gb data set can be downloaded by clicking here (add the ".tgz" extension if it is missing from the downloaded filename), and the file can be unpacked by typing tar xzf followed by the filename.

We now have to perform a more in-depth flagging of our bad data. Our strategy is to first focus on the calibrator source (in this case only 3C48). If the calibrator data are flagged well, we should be able to perform an accurate calibration of the target data. Later in this document we will then flag the calibrated target-data using an automated flagger.

plotms(vis='',antenna='ea01',xaxis='chan',yaxis='amp',correlation='xx,yy',coloraxis='baseline', scan='7',averagedata=True, avgtime='1e9')
Fig.2 Flagging of calibrator 3C48. Shown are the XX and YY polarization on antenna ea01, both before and after flagging the bad channels. The colors represent the different baselines with ea01.

When plotting amplitude against frequency in plotms, we see that there are still two ranges of channels filled with RFI throughout the observing run. We will flag those data with the task “flagdata”. Note that, because we are flagging channels on all scans of the bandpass calibrator 3C48, these channels cannot be calibrated accurately, so we also flag these channels in the data of the target!

flagdata(vis='', mode='manual', spw='0:148~155', action='apply')
flagdata(vis='', mode='manual', spw='0:51~59', action='apply')

The data on our bandpass+gain+flux calibrator 3C48 now look clean, with no dead or misbehaving antennas (plotted using the same plotms call as above). We therefore now start with the calibration. Many of the steps below are based on the CASA tutorial on reducing VLA P-band continuum data.


A priori Antenna Position Corrections

We start the calibration by obtaining the latest set of antenna-position corrections relative to the stored values that were derived at the start of the observing period. For this you need to run gencal with caltype='antpos' while being connected to the internet:

gencal(vis='', caltype='antpos', caltable='')

The output is a list of antenna position corrections that are written to the calibration table

2017-07-25 19:54:51 INFO gencal	 Determine antenna position offests from the baseline correction database
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea01 :  0.00200   0.00000   0.00200
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea02 :  0.00210   0.01800   0.00430
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea03 : -0.00700   0.01600  -0.02000
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea06 :  0.00200  -0.00200   0.00500
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea08 : -0.01290   0.01280   0.02920
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea09 : -0.01200  -0.01300   0.02400
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea15 : -0.00500  -0.01000   0.01100
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea16 :  0.01100  -0.01500  -0.01200
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea17 : -0.01300   0.00000   0.05000
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea18 : -0.00600   0.02200  -0.02000
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea22 :  0.00600   0.00000   0.00000
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea25 : -0.00140   0.00140  -0.00140
2017-07-25 19:54:52 INFO gencal	offsets for antenna ea27 : -0.00400  -0.00500   0.03100
2017-07-25 19:54:52 INFO calibrater	Beginning specifycal-----------------------
2017-07-25 19:54:52 INFO  	Creating KAntPos Jones table from specified parameters.
2017-07-25 19:54:52 WARN  	NB: This EVLA dataset appears to fall within the period
2017-07-25 19:54:52 WARN +	  of semester 16B during which the online tropospheric
2017-07-25 19:54:52 WARN +	  delay model was mis-applied.
2017-07-25 19:54:52 WARN  	A correction for the online tropospheric delay model error WILL BE APPLIED!
2017-07-25 19:54:52 WARN  	Marking antpos caltable to turn ON the trop delay correction.

Note: these observations were taken during a period in which the atmospheric delay terms were calculated incorrectly. In CASA 5.0.0 (and in fact in CASA versions 4.7.1 and up), a correction for this is taken into account automatically when running gencal with caltype='antpos'.

Ionospheric calibration

Next we correct for ionospheric effects, which are important at frequencies below 5 GHz. For this, you need to be online. CASA’s strategy is to obtain information on the total electron content (TEC) for the date of the observations. This information is based on data from the global navigation satellite system (GNSS). A series of CASA images, in the form of a 24h movie of the TEC as a function of longitude and latitude, is then generated and stored to disk, along with a corresponding TEC error movie (both movies can be viewed in CASA’s viewer):

from recipes import tec_maps

tec_image, tec_rms_image = tec_maps.create(vis='', doplot=True)

# If you are using CASA 5.1.0 use
tec_image, tec_rms_image, plotname = tec_maps.create(vis='', doplot=True)

Note that the information regarding the TEC is sparse, and either active or direction-dependent ionospheric conditions may not be corrected very well. Also, the online TEC information improves with time, hence the quality of these TEC-data is best about two weeks after the observations.

Fig.3 Total Electron Content (TEC) as a function of time for the day and location of the observations. The time of the observing run is shown in red.

Subsequently, an ionosphere correction table is generated using gencal, in which the projected line-of-sight TEC (which depends on the zenith angle) is sampled for all times in the observation and stored in a standard CASA caltable. Fig.3 shows a plot that is generated for this caltable. Note: if you previously ran plotants, this TEC plot is not generated. To get around this issue, exit CASA before calculating the ionospheric effects.

gencal(vis='', caltype='tecim', caltable='',infile=tec_image)
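The zenith-angle dependence of the projected TEC mentioned above can be illustrated with the simplest flat-slab approximation, in which the line-of-sight TEC scales with the secant of the zenith angle (gencal uses a more complete geometry; the numbers below are illustrative):

```python
import math

# Flat-slab approximation for the line-of-sight (slant) TEC:
#   TEC_los = TEC_vertical / cos(z)
def slant_tec(tec_vertical, zenith_angle_deg):
    """Project a vertical TEC value onto the line of sight (flat-slab model)."""
    return tec_vertical / math.cos(math.radians(zenith_angle_deg))

tec_vert = 10.0  # TECU, illustrative vertical TEC
print(slant_tec(tec_vert, 0.0))    # at zenith: unchanged
print(slant_tec(tec_vert, 60.0))   # at 60 deg zenith angle: doubled
```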

See the CASA Cookbook for a detailed description on ionosphere corrections in CASA (thanks are due to Jason Kooi, UIowa for his contributions).

Calibration of requantizer gains

Next we will calibrate the requantizer gain levels, which are the visibility amplitudes that were set as the input of the WIDAR correlator. Requantizer scans are added to a P-band observing schedule to optimize the digital power in each spectral window, which maximizes the signal-to-noise in each window. Because there is a significant variation in power across the 240 MHz of the P-band, and because some spectral windows may experience a high power due to strong RFI, setting the requantizer levels during the observations may improve the quality of the data, including the shape of the bandpass across multiple spectral windows. As part of the test-observations of MG0414+0534, the very strong source Cygnus-A was observed at the start of the run. Re-setting the requantizer levels both before observing Cygnus-A and before targeting the other (much weaker) sources was essential to optimize the digital power and avoid correlation errors. Although we do not use Cygnus-A as part of this tutorial, and we use only a single spectral window, performing calibration of the requantizer gains is still good practice for P-band spectral line reduction:

gencal(vis='', caltype='rq', caltable='')

Initial absolute flux density calibration

The next step is to set the absolute flux levels of our flux calibrator 3C48 (field='0'), for which a well-known model exists:

setjy(vis='', field='0', scalebychan=True, standard='Scaife-Heald 2012', listmodels=False, usescratch=False)

Note: it is crucial for spectral-line work to set scalebychan=True, to ensure that the absolute flux level is calculated per channel and correctly interpolated across the observing band. If scalebychan=False, then only a single value per spectral window is calculated, resulting in a step-function in flux between spectral windows.
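The step-function behaviour described above can be illustrated with a toy power-law calibrator spectrum (the flux density, frequencies, and spectral index below are illustrative, not the actual Scaife-Heald 2012 model of 3C48):

```python
# Toy illustration of scalebychan=True (per-channel flux) versus
# scalebychan=False (one flux value per spectral window).
S0, nu0, alpha = 40.0, 390.0, -0.9    # Jy, MHz, spectral index (all illustrative)

def flux(nu):
    """Power-law spectrum S(nu) = S0 * (nu/nu0)**alpha."""
    return S0 * (nu / nu0) ** alpha

# Two adjacent 16-MHz spectral windows with 1-MHz channels:
spw1 = [flux(nu) for nu in range(370, 386)]   # per-channel fluxes, scalebychan=True
spw2 = [flux(nu) for nu in range(386, 402)]

# With scalebychan=False, each spectral window gets a single average value,
# producing a discontinuity at the spectral-window boundary:
step = sum(spw1) / len(spw1) - sum(spw2) / len(spw2)
print("Flux step at the spw boundary: %.2f Jy" % step)
```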

Delay calibration

We then determine the delay of each antenna, for each polarization and each spectral window. Doing this on a short scan of the primary calibrator is generally sufficient, and will correct for internal (e.g., electronics, cables) and external (e.g., ionosphere) effects. We choose ea04 as reference antenna, as it produced good data, has baselines to all other used antennas in the array, and is located in the center of the array. We also need to apply ‘on the fly’ the calibration tables for the antenna positions, the total electron content and the requantizer levels.

gaincal(vis='', caltable='', field='0', selectdata=True, timerange='06:43:48~06:48:42', solint='inf', refant='ea04', gaintype='K', gaintable=['','',''])

The calibration solutions can be plotted using:

plotcal(caltable='', xaxis='antenna', yaxis='delay', field='0')
Fig.4. Delays for each antenna and each polarization. Plot made with ‘plotcal’.

Note that any delays deviating by more than 30 ns should be treated with caution, and if needed, flagged.
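To see why residual delays at this level matter, a delay translates into a phase slope across the band; a quick estimate (the channel width below is illustrative, the channel count follows from the spw selection used in the split above):

```python
# A residual delay tau produces a frequency-dependent phase:
#   phi(nu) = 360 * nu * tau  [degrees]
# so the phase changes by 360 * delta_nu * tau per channel.
tau = 30e-9        # s, the 30 ns cautionary threshold quoted above
delta_nu = 125e3   # Hz, channel width (illustrative)
n_chan = 466       # channels 20~485 of the selected spectral window

phase_per_channel = 360.0 * delta_nu * tau       # degrees per channel
phase_across_band = phase_per_channel * n_chan   # degrees across the band
print("Phase slope: %.2f deg/channel, %.0f deg across the band"
      % (phase_per_channel, phase_across_band))
```

An uncorrected 30 ns delay would thus wind the phase through more than a full turn across the band, which is exactly what the delay calibration removes.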

The next step is to calibrate the bandpass, which corrects for the frequency-dependent gain variations. Before determining the frequency-dependent gains, we first calibrate the phases in time, deriving a solution for each integration, to make sure that we can derive accurate bandpass solutions when integrating over a full scan on the bandpass calibrator 3C48:

gaincal(vis='', caltable='bpphase.gcal', field='0', spw='0:250~300', refant='ea04', calmode='p', solint='int', minsnr=2.0, gaintype='G', gaintable=['','','',''])

We use these phase-solutions to perform the actual bandpass calibration. In this case, we have scans of our bandpass calibrator 3C48 taken at three different times during the observations. By interpolating the bandpass solutions in time, we try to minimize any possible time-varying bandpass effects and improve our spectral dynamic range.

bandpass(vis='', caltable='', field='0', spw='0', solint='inf', combine='', refant='ea04', solnorm=False, bandtype='B', gaintable=['','','','','bpphase.gcal'])
Fig.5 Effects of bandpass calibration on the frequency-dependent amplitudes (top) and phases (bottom) across the observing band. The corrections were applied using the task “applycal” (we do not need to do this until we have obtained all remaining calibration tables -- see below for details). The various colors show the different baselines with reference antenna ea04. The plots were made in ‘plotms’, choosing under “Axes” the option “Data Column: data” or “Data Column: corrected”.

Note: To interpolate between bandpass scans, it is essential to specify combine=''. The reason is that the default, combine='scan', in combination with the parameter solint='inf', would cross the scan boundaries to form one single solution for the bandpass. Only when combining solint='inf' with combine='' is a bandpass solution obtained for each individual scan on 3C48.

plotms(vis='',field='0',antenna='ea04', xaxis='frequency',yaxis='amp',scan='7',correlation='xx,yy', coloraxis='baseline')
plotms(vis='',field='0',antenna='ea04', xaxis='frequency',yaxis='phase',scan='7',correlation='xx,yy', coloraxis='baseline')

After the bandpass calibration, we perform a phase calibration of our data. Because we will apply the bandpass solutions on the fly, we can use the full bandwidth to determine the phase solutions. We will create two different calibration tables to correct for the time-varying phases. First, we calibrate the phases on each integration interval with the option solint='int'.

# In CASA 
gaincal(vis='', caltable='intphase.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='int', minsnr=2.0, gaintable=['','','','',''])

The sole purpose of generating phase solutions on each 10s interval is to be able to accurately correct for the time-varying amplitudes, as we will see below. However, this method is not ideal for interpolating the phase solutions in time. For that, we do a second phase calibration, this time averaging the signal across the phase-calibrator scans with solint='inf':

# In CASA 
gaincal(vis='', caltable='scanphase.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='inf', combine='', minsnr=2.0, gaintable=['','','','',''])

Note that an alternative approach to deriving phase solutions per scan is to use the task ‘smoothcal’ to average the phase corrections obtained every 10s with solint='int' over the full duration of a single scan.

Fig. 6 Effects of phase calibration. The corrections were applied using the task "applycal” (we do not need to do this until we have obtained all remaining calibration tables -- see below for details). The various colors show the different baselines with reference antenna ea04.
plotms(vis='',field='0',antenna='ea04', correlation='xx,yy',avgchannel='1e8', xaxis='time',yaxis='phase',coloraxis='baseline')

We now perform a calibration of the amplitude variations in time. For this, we will apply the phase solution obtained in each 10s interval:

gaincal(vis='', caltable='amp.gcal', field='0', spw='0', selectdata=False, solint='inf', combine='', refant='ea04', calmode='ap', minsnr=2.0, gaintable=['','','','','','intphase.gcal'])

Normally, this procedure only calibrates the relative amplitudes, and the task fluxscale is required to place the data on an absolute flux scale. However, because our flux calibrator is also our bandpass and gain calibrator, the absolute flux scale is already correct, so we skip this step.

Fig.7 Effects of amplitude and flux calibration. The corrections were applied using the task “applycal” (we do not need to do this until we have obtained all remaining calibration tables -- see below for details). The various colors show the different baselines with reference antenna ea04. The green points are the YY products on the offending antenna ea17, which we will flag in our target data.
plotms(vis='',field='0',antenna='ea04', correlation='xx,yy',avgchannel='1e8', xaxis='time',yaxis='amp',coloraxis='baseline')

We have now finished the calibration of the data, and will apply the calibration tables that we derived from 3C48 to both the calibrator 3C48 itself and our target MG0414+0534. The task applycal creates a corrected data column in which the calibrated data are stored.

applycal(vis='', field='0', gaintable=['','','','','','scanphase.gcal','amp.gcal'], gainfield=['','','','0','0','0','0'], parang=True, calwt=False, applymode='calflagstrict', flagbackup=True)
applycal(vis='', field='1', gaintable=['','','','','','scanphase.gcal','amp.gcal'], gainfield=['','','','0','0','0','0'], calwt=False, applymode='calflagstrict', flagbackup=True)

After calibrating the data, it is often easier to see offending antennas or baselines. Replotting with ydatacolumn='corrected' produces the corrected calibration plots seen above. In this case, the calibrated data of the primary calibrator 3C48 show that the YY amplitudes of ea17 are off during the second part of the run. Because calibration solutions are antenna based, to get the best calibration, flag any poor data and re-run the calibration process:

flagdata(vis='', mode='manual', antenna='ea17', action='apply')

Then run the calibration process again.

Note: a currently known issue is that gaincal will not find good solutions if any correlation (including cross-correlations) in the data is completely flagged. Thus, in this case there is no difference between flagging only the YY polarization of ea17 or the entire antenna.

Now we split the calibrated data of our target into a new measurement set, including only the XX and YY polarization products. Using ‘split’, the calibrated target data are placed into the ‘data’ column of the new measurement set:

split(vis='', outputvis='', datacolumn='corrected', field='1', correlation='xx,yy', keepflags=False)

We now have a calibrated data set that has all calibration applied, except for possible self-calibration. Self-calibration relies on the target itself to better calibrate the antenna-based gains (phases and amplitudes) as a function of time. It can be applied to targets that are strong in the continuum, or have a strong enough spectral line, to derive good phase (and, if desired, also amplitude) solutions on short (order of minutes) timescales. The procedure relies on an initial model of the target, which you then use iteratively to improve the gains, and hence the quality of the image itself. Here we show the various steps in self-calibrating our data based on the strong continuum emission of MG0414+0534. We refer to a dedicated CASA guide on self-calibration for further details.

Note: if strong RFI is present in the target data, it may be necessary to flag your data before proceeding with self-calibration. In the case of MG0414+0534 this is not needed.

First, we create an initial model of our target field using tclean.

Note: ‘tclean’ was introduced in CASA 4.7 to supersede the task ‘clean’. When running a previous version of CASA, you can use the task ‘clean’ instead, but keep in mind that the names of some input parameters were altered with the introduction of ‘tclean’.

tclean(vis='', datacolumn='data', imagename='MG0414_d1_cont_R03', spw='0', specmode='mfs', niter=100, threshold='1.5Jy', imsize=[256,256], weighting='briggs', robust=0.3, savemodel='modelcolumn')

Note: the information in the MODEL column consists of visibilities that represent a model of the target field. tclean also produces a ‘.model’ image of the model that you can visualize in the VIEWER. It is essential that only components that you know for sure represent the continuum in the target field are taken into account in self-calibration. If this is not the case, you probably cleaned too deep and included noise features or artifacts (e.g., from the beam pattern) in the model. Self-calibration would permanently lock these features into your data, so this should be avoided at all cost! On the other hand, it is equally crucial to ensure that the model is not empty, as an empty model would result in your target being shifted to the phase center.

Next we derive phase corrections to this model. Because our target is strong enough, we should get enough S/N per baseline on a 60 sec interval:

gaincal(vis='', caltable='sc1.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='60s', combine='', minsnr=2.0)

This creates a new phase-calibration table, which we call sc1.gcal. We now apply this phase-only calibration table to the calibrated data:

applycal(vis='', field='0', gaintable=['sc1.gcal'], interp='linear')

Because self-calibration is an iterative process, we want to repeat this step again, in order to further improve the gain calibration. For that, we first split the calibrated data, with the self-calibration solution applied, to the DATA column of a new measurement set:

split(vis='', outputvis='', datacolumn='corrected', field='0', keepflags=False)
Fig.8. Continuum image of MG0414+0534 before (left) and after (right) the first iteration of self-calibration, in which we only calibrated the phases. Subsequent iterations may further improve the phases and amplitudes. This image was made in the VIEWER, and the scaling is the same in both plots.

Now we can use this newly created measurement set to make a better continuum image and subsequently perform the next round of self-calibration:

tclean(vis='', datacolumn='data', imagename='MG0414_d1_cont_sc1.R03', spw='0', specmode='mfs', niter=500, threshold='500mJy', imsize=[256,256],weighting='briggs',robust=0.3,savemodel='modelcolumn')
gaincal(vis='', caltable='sc2.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='30s', combine='', minsnr=2.0)
applycal(vis='', field='0', gaintable=['sc2.gcal'], interp='linear')
split(vis='', outputvis='', datacolumn='corrected', field='0', keepflags=False)
tclean(vis='', datacolumn='data', imagename='MG0414_d1_cont_sc2_R03', spw='0', specmode='mfs', niter=1000, threshold='100mJy', imsize=[256,256],weighting='briggs',robust=0.3,savemodel='modelcolumn')

Now that we have a good model of the continuum in the field of MG0414+0534, we attempt a final self-calibration using both the phases and amplitudes:

gaincal(vis='', caltable='sc3ap.gcal', field='0', spw='0', refant='ea04', calmode='ap', solint='30s', combine='', minsnr=2.0)
applycal(vis='', field='0', gaintable=['sc3ap.gcal'], interp='linear')
split(vis='', outputvis='', datacolumn='corrected', field='0', keepflags=False)
tclean(vis='', datacolumn='data', imagename='MG0414_d1_cont_sc3ap_R03', spw='0', specmode='mfs', niter=1000, threshold='100mJy', imsize=[256,256],weighting='briggs',robust=0.3,savemodel='modelcolumn')

The output continuum image from the last iteration of tclean is again an improvement over the previous imaging, but also shows that we are not likely to gain further improvement from additional rounds of self-calibration. This indicates that we now have a calibrated data set of MG0414+0534 that is almost ready for final imaging.

However, before imaging, we need to flag the target data to get rid of poor-quality data. First, we run an automated flagging routine:

flagdata(vis='', mode='rflag', datacolumn='data')

After running this automated flagging, we inspect the data in the viewer by displaying ‘time’ vs. ‘channel’ for the different baselines, or ‘time’ vs. ‘baseline’ for the different channels. Keep in mind that when the data are properly calibrated, it is easier to recognize low-level imperfections that may have to be flagged to improve the image quality. In this case, the viewer shows that antennas ea09 and ea14 have significantly higher noise than the other antennas, something that was not picked up by the automated flagging routine. In addition, the data quality looks relatively poor for a number of baselines. We flag these antennas and baselines before imaging the data.

flagdata(vis='', mode='manual', antenna='ea09', datacolumn='data')
flagdata(vis='', mode='manual', antenna='ea14', datacolumn='data')

To flag offending individual baselines, you can use antenna='...&...' in flagdata.

Note: flagging data can be an iterative process. Often, the better the data is calibrated, the easier it is to recognize low-level imperfections in the data. To optimize the data reduction, you can script the calibration process, and opt to flag the data also in the original data set (i.e., before calibration), after which you run the entire calibration again.

When you flag these antennas you must go back and rerun the calibration before the final imaging.

Finally, we image the data using the task tclean (or clean for earlier CASA versions):

tclean(vis='', datacolumn='data', imagename='MG0414_d1_line_sc3ap_vel_R03', spw='0', specmode='cube', outframe='bary', veltype='optical', restfreq='390.600MHz', niter=100, threshold='1Jy', imsize=[256,256], pbcor=True, weighting='briggs', robust=0.3, savemodel='none')
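The velocity axis of the cube follows the optical convention selected with veltype='optical'; the frequency-to-velocity mapping can be checked by hand (the example channel frequency below is illustrative):

```python
# Optical velocity convention used by tclean with veltype='optical':
#   v = c * (nu_rest - nu_obs) / nu_obs
c = 299792.458      # km/s, speed of light
nu_rest = 390.600   # MHz, the restfreq passed to tclean

def optical_velocity(nu_obs):
    """Convert an observed frequency (MHz) to optical velocity (km/s)."""
    return c * (nu_rest - nu_obs) / nu_obs

# A channel 0.1 MHz below the rest frequency lies at a positive velocity:
print("%.1f km/s" % optical_velocity(390.500))
```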

Fig.9. Test-data of the redshifted HI 21cm absorption line on top of the radio continuum in MG 0414+0534; see Moore, Carilli & Menten 1999 (ApJ, 510, 87) for the original detection.

We can inspect the resulting line-data cube in the viewer. The data cube contains both continuum and line emission, which can optionally be separated using the CASA task uvcontsub. The HI absorption feature is clearly present (Fig. 9).

To see the spectral line, open the final image cube, go to a channel that shows the source, and zoom in on it. Click the "Point marking" icon, find the brightest pixel, and click on it. Then click the spectral profile tool button to get the single-point spectrum. The spectrum shows two dips that drop to zero; these are the channel ranges we flagged earlier. Because of the auto-scaling, you will need to zoom in to see the spectral line: click and drag on the graph to create a yellow zoom box, then release.

Last checked on CASA Version 5.0.0