MG0414+0534 P-band Spectral Line Tutorial - CASA 5.0.0
[[File:Pband_hanning.png|400px|thumb|right|Fig.1 Reducing the effects of Gibbs ringing by Hanning smoothing the data. Shown is only a small fraction of the band, both before and after Hanning smooth. The amplitude scale is the same in both plots.]]
Revision as of 13:31, 15 March 2018
This CASA Guide is for Version 5.0.0 of CASA.
This tutorial describes how to use CASA 5.0.0 to reduce spectral-line data in the low-frequency P-band of the VLA (230 – 470 MHz). The goal is to make an image cube containing HI 21cm absorption against the strong radio continuum of the gravitationally lensed radio galaxy MG0414+0534. As a result of the high redshift of z=2.6365, the HI absorption signal in MG0414+0534 is redshifted to an observed frequency of 390.597 MHz. The HI absorption in MG0414+0534 was previously imaged with the VLA by Moore, Carilli & Menten 1999 (ApJ, 510, 87), before the upgrade to the WIDAR system.
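As a quick sanity check on the quoted observed frequency, the redshifted line frequency follows directly from f_obs = f_rest / (1 + z); a minimal sketch:

```python
# Observed frequency of a redshifted spectral line: f_obs = f_rest / (1 + z).
HI_REST_MHZ = 1420.405751  # rest frequency of the HI 21cm line

def observed_freq_mhz(rest_mhz, z):
    """Redshift a rest frequency to its observed value."""
    return rest_mhz / (1.0 + z)

# For MG0414+0534 at z = 2.6365 the line lands in the VLA P-band:
print(round(observed_freq_mhz(HI_REST_MHZ, 2.6365), 3))  # ~390.597 MHz
```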
To perform P-band spectroscopy, there are three important considerations for planning the observations:
- Use a bandpass calibrator that is strong enough to accurately calibrate the frequency-dependent gain variations. This is particularly important for most HI 21cm absorption projects, which are typically performed against strong radio continuum sources. As a rule-of-thumb, use t_cal > t_obj × (S_obj / S_cal)², with "t" the exposure time and "S" the source flux density. This rule-of-thumb serves to avoid introducing excessive additional noise in the spectrum. In addition, if very high spectral dynamic range is needed (i.e., when expecting a ratio between the detection limit and the radio continuum of about 1:10,000 or more), consider observing a bandpass calibrator several times during your run to be able to correct for time-varying bandpass changes, which scale with the continuum emission in the target field. See this page for more information.
- Use a bandwidth that is wide enough to perform accurate self-calibration. Using a wide bandwidth for self-calibration is particularly important for fields with relatively weak continuum sources. For strong continuum sources, a narrower bandwidth can be used to avoid excessive RFI.
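As a worked example of the calibrator rule-of-thumb above, here is a minimal sketch; the exposure time and flux densities are illustrative placeholders, not values from this observation:

```python
# Rule-of-thumb calibrator exposure time for HI absorption work:
# t_cal > t_obj * (S_obj / S_cal)**2, so that the noise introduced by
# the bandpass solution stays below the noise in the target spectrum.
def min_cal_time(t_obj, s_obj, s_cal):
    """Minimum bandpass-calibrator exposure time (same units as t_obj)."""
    return t_obj * (s_obj / s_cal) ** 2

# Hypothetical numbers: a 60-minute target integration on a 3.3 Jy
# source, with a much brighter 42 Jy bandpass calibrator:
print(round(min_cal_time(60.0, 3.3, 42.0), 2))  # minutes
```

Because the requirement scales with the square of the flux ratio, a calibrator much brighter than the target needs only a short scan.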
The P-band test-data on MG0414+0534 that we use in this tutorial were obtained using a large bandwidth. This is to ensure that good bandpass solutions could be obtained and that self-calibration could be performed accurately. However, due to the strong radio continuum of MG0414+0534 (3.3 Jy at 390 MHz) and the large amounts of RFI across the entire band, we only use a small fraction of the total band for data reduction and analysis in this tutorial.
Obtaining the raw data
Given the large volume of the data (142 Gb), you can skip to Section "Initial flagging" to download a 15 Gb subset to work on.
We will use test-data that was taken in a hybrid configuration when the VLA array was moved from B-config to A-config. The data set can be downloaded from the NRAO archive by searching for the following Project Code:
This returns a long list with test-data that are publicly available. Our observations were performed on 14 Sept 2016:
Note that this observation was duplicated on 15 September 2016. This second day of observations can be found under filename: TSUB0001.sb32720781.eb32773507.57646.24443702547
For the purpose of this tutorial, we only reduce and image the first data set. The second data set can be reduced in an identical way (except for the flagging, given that the RFI conditions most likely changed). During the imaging stage you can then combine both reduced data sets.
To download the data, fill in your email, select either the "SDM-BDF (all files)" or “MS” option, and check the box next to the data sets that you want. The practical difference between selecting the MeasurementSet (MS) and Science Data Model (SDM) is that for MS the first step in the data reduction described below, namely reading in the SDM data to save it as a MeasurementSet, is already done by the archival engine. Note that you can also opt to download the data as a tar-file by clicking the appropriate box.
Optionally, you can request to discard data marked as ‘bad’ by clicking the box “Apply Telescope Flags”, which gets rid of data taken during times of instrument calibration, shadowing, slewing, etc. However, it also applies all flags on the science data that were automatically created during the observations. Therefore, a safer option is to not apply any flags before downloading the data, but first inspect and subsequently apply the flags generated during the observations using flagcmd, as part of the data reduction plan.
Clicking "Get My Data" will forward you to the next page, where you should choose the delivery method (either downloading over the internet, or having a hard drive with the data shipped to you). If you opt to retrieve the data over the internet, wait until you get an email confirming that the data are ready for download.
Loading data into CASA
Start CASA by typing
on the command line. This should start a CASA interactive Python (iPython) session, and open a separate log window. To guarantee that the data reduction and imaging procedure described below works, make sure you are using CASA version 5.0.0. While older versions may work as well for the purpose of this tutorial, it is good practice to read the instructions on how to download and install the latest version of CASA. We will begin by importing our data from the binary format (SDM-BDF) into the MeasurementSet format, which is the standard for CASA data. For this, we use importevla:
# In CASA importevla(asdm='TSUB0001.sb32720781.eb32763188.57645.263958564814', vis='MG0414_d1.ms', flagpol=False, applyflags=False, savecmds=True, outfile='flagfile.txt')
In this case, we do not apply the flags that were created as part of the observations, but we will write them out to a flagfile.txt file. We will inspect and apply the flags as follows:
# In CASA
flagcmd(vis='MG0414_d1.ms', action='plot')
flagcmd(vis='MG0414_d1.ms', action='apply')
As an alternative to flagcmd, given that we saved the flags to flagfile.txt, we can also use
# In CASA flagdata(vis='MG0414_d1.ms', mode='list', inpfile='flagfile.txt', action='apply')
to apply the flags.
One of the flag commands is to clip all data points that have a 0-value, as these most likely have not recorded any science data. This is normally done with the settings correlation='ABS_RR' and correlation='ABS_LL'. Because the VLA P-band and 4-band systems use linear XX and YY polarization, rather than the circular RR and LL polarization of the other VLA bands, you will encounter an error message when running flagcmd. We can get around this issue by manually clipping the 0-data for the XX and YY polarizations with flagdata:
# In CASA flagdata(vis='MG0414_d1.ms', mode='clip', correlation='ABS_XX,ABS_YY', clipzeros=True, action='apply')
Note: It is good practice to also carefully read the Operator Log that was created by the operator on duty during the observations. This can provide additional information on data that should be flagged manually during the data reduction stage. These Operator logs can be found here.
Now that the data is read in and the first flagging is performed, we will inspect the content of our data set using listobs
# In CASA listobs(vis='MG0414_d1.ms', listfile='listobs.txt')
This normally prints an overview of the data in the CASA logger, but with the parameter listfile='listobs.txt' this information is written out to the file listobs.txt.
Initial flagging: calibrator data
When plotting the data in plotms (plotting amplitude against frequency), it is immediately clear that our observing band contains lots of Radio Frequency Interference (RFI), which is unavoidable at these low frequencies. The situation actually looks worse than it is, because ‘Gibbs ringing’ causes the blending of strong RFI signal into adjacent channels (Fig.1). To reduce the effect of Gibbs ringing, we first Hanning smooth the data (please note that this can take a while to run on the entire 142 Gb data set):
# In CASA hanningsmooth(vis='MG0414_d1.ms', datacolumn='data', outputvis='MG0414_d1_hanning.ms')
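Hanning smoothing convolves each spectrum with the three-channel kernel [0.25, 0.5, 0.25], which damps the ringing sidelobes of sharp RFI spikes at the cost of spectral resolution. A minimal sketch of the operation (ignoring edge-channel handling):

```python
# Hanning smoothing: convolve each spectrum with [0.25, 0.5, 0.25].
# This suppresses the Gibbs-ringing sidelobes around narrow RFI spikes,
# at the cost of roughly a factor of two in spectral resolution.
def hanning_smooth(spectrum):
    out = list(spectrum)  # edge channels left as-is in this sketch
    for i in range(1, len(spectrum) - 1):
        out[i] = 0.25 * spectrum[i - 1] + 0.5 * spectrum[i] + 0.25 * spectrum[i + 1]
    return out

# A single-channel spike is spread into its neighbours instead of
# ringing across many adjacent channels:
print(hanning_smooth([0.0, 0.0, 1.0, 0.0, 0.0]))  # [0.0, 0.25, 0.5, 0.25, 0.0]
```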
Upon further inspection in plotms (using ea01, field 0, corr xx,yy, and coloraxis baseline), it is clear that there is a lot of RFI across most of the 64 MHz (or 43,000 km/s) band that we used for the observations. Luckily, the region around the expected HI absorption line at 390.6 MHz is relatively clean of RFI. Because our target is strong enough in the continuum to perform a good self-calibration, we select this relatively clean part of the band centered around the expected HI line. Note that there were two IFs with different spectral resolution included in the observations. We need only one of the two IFs, and choose the one with the highest spectral resolution. Moreover, we only need the sources 3C48 (which we will use for bandpass, gain and flux calibration) and our target MG0414+0534, hence we select field='3,4'. This approach has the additional advantage that we reduce our data volume a lot, bringing it down to about 15 Gb.
# In CASA split(vis='MG0414_d1_hanning.ms', outputvis='MG0414_d1_data.ms', datacolumn='data', field='3,4', spw='17:20~485', keepflags=False)
This 15 Gb data set can be downloaded by clicking here. The filename should be MG0414_d1_data.ms.tgz (add the ".tgz" if this is missing), and the file can be unpacked by typing tar xzf MG0414_d1_data.ms.tgz
We now have to perform a more in-depth flagging of our bad data. Our strategy is to first focus on the calibrator source (in this case only 3C48). If the calibrator data are flagged well, we should be able to perform an accurate calibration of the target data. Later in this document we will then flag the calibrated target-data using an automated flagger.
When plotting frequency against amplitude in plotms, we see that there are still two ranges of channels filled with RFI throughout the observing run. We will flag those data with the task flagdata. Note that, because we are flagging channels on all scans of the bandpass calibrator 3C48, these channels cannot be calibrated accurately, so we must also flag these channels in the data of the target!
# In CASA
flagdata(vis='MG0414_d1_data.ms', mode='manual', spw='0:148~155', action='apply')
flagdata(vis='MG0414_d1_data.ms', mode='manual', spw='0:51~59', action='apply')
The data on our bandpass+gain+flux calibrator 3C48 now look clean, with no dead or misbehaving antennas. We therefore now start with the calibration. Many of the steps below are based on the CASA tutorial on reducing VLA P-band continuum data.
We start the calibration by obtaining the latest set of antenna-position corrections compared to the stored values that were derived at the start of the observing period. For this you need to run gencal with caltype='antpos' while being connected to the internet:
# In CASA gencal(vis='MG0414_d1_data.ms', caltype='antpos', caltable='antpos.cal')
The output is a list of antenna position corrections that are written to the calibration table antpos.cal:
2017-07-25 19:54:51 INFO gencal Determine antenna position offsets from the baseline correction database
2017-07-25 19:54:52 INFO gencal offsets for antenna ea01 :  0.00200  0.00000  0.00200
2017-07-25 19:54:52 INFO gencal offsets for antenna ea02 :  0.00210  0.01800  0.00430
2017-07-25 19:54:52 INFO gencal offsets for antenna ea03 : -0.00700  0.01600 -0.02000
2017-07-25 19:54:52 INFO gencal offsets for antenna ea06 :  0.00200 -0.00200  0.00500
2017-07-25 19:54:52 INFO gencal offsets for antenna ea08 : -0.01290  0.01280  0.02920
2017-07-25 19:54:52 INFO gencal offsets for antenna ea09 : -0.01200 -0.01300  0.02400
2017-07-25 19:54:52 INFO gencal offsets for antenna ea15 : -0.00500 -0.01000  0.01100
2017-07-25 19:54:52 INFO gencal offsets for antenna ea16 :  0.01100 -0.01500 -0.01200
2017-07-25 19:54:52 INFO gencal offsets for antenna ea17 : -0.01300  0.00000  0.05000
2017-07-25 19:54:52 INFO gencal offsets for antenna ea18 : -0.00600  0.02200 -0.02000
2017-07-25 19:54:52 INFO gencal offsets for antenna ea22 :  0.00600  0.00000  0.00000
2017-07-25 19:54:52 INFO gencal offsets for antenna ea25 : -0.00140  0.00140 -0.00140
2017-07-25 19:54:52 INFO gencal offsets for antenna ea27 : -0.00400 -0.00500  0.03100
2017-07-25 19:54:52 INFO calibrater Beginning specifycal-----------------------
2017-07-25 19:54:52 INFO Creating KAntPos Jones table from specified parameters.
2017-07-25 19:54:52 WARN NB: This EVLA dataset appears to fall within the period
2017-07-25 19:54:52 WARN + of semester 16B during which the online tropospheric
2017-07-25 19:54:52 WARN + delay model was mis-applied.
2017-07-25 19:54:52 WARN A correction for the online tropospheric delay model error WILL BE APPLIED!
2017-07-25 19:54:52 WARN Marking antpos caltable to turn ON the trop delay correction.
Note: these observations were taken during a period in which the atmospheric delay terms were calculated incorrectly. In CASA 5.0.0 (and in fact in CASA versions 4.7.1 and up), a correction for this is taken into account automatically when running gencal with caltype='antpos'.
Next we correct for ionospheric effects, which are important at frequencies below 5 GHz. For this, you need to be online. CASA’s strategy is to obtain information on the total electron content (TEC) for the date of observations. This information is based on data from the global navigation satellite system (GNSS). A series of CASA images, in the form of a 24h movie of the TEC as function of longitude and latitude, is then generated and stored as filename.IGS_TEC.im, with a corresponding TEC error movie named filename.IGS_RMS_TEC.im (these movies can be viewed in CASA’s viewer):
# In CASA
from recipes import tec_maps
tec_image, tec_rms_image = tec_maps.create(vis='MG0414_d1_data.ms', doplot=True)
If you are using CASA 5.1.0, the last line should read: tec_image, tec_rms_image, plotname = tec_maps.create(vis='MG0414_d1_data.ms', doplot=True)
Note that the information regarding the TEC is sparse, and either active or direction-dependent ionospheric conditions may not be corrected very well. Also, the online TEC information improves with time, hence the quality of these TEC-data is best about two weeks after the observations.
Subsequently, an ionosphere correction table is generated using gencal, in which the projected line-of-sight TEC (which depends on the zenith angle) is sampled for all times in the observation and stored in a standard CASA caltable. Fig.3 shows a plot that is generated for this caltable. Note: if you previously ran plotants, this TEC plot is not generated. To get around this issue, restart CASA before calculating the ionospheric corrections.
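The zenith-angle dependence mentioned above can be sketched with the standard thin-shell ionosphere approximation, in which the vertical TEC from the maps is scaled to the slant (line-of-sight) TEC. This is an illustrative geometry, not necessarily the exact mapping function CASA uses internally; the 450 km shell height is a commonly assumed value:

```python
import math

# Thin-shell approximation: the slant TEC along the line of sight is the
# vertical TEC divided by the cosine of the zenith angle at the
# ionospheric pierce point (assumed shell height ~450 km).
R_EARTH_KM = 6371.0
SHELL_KM = 450.0  # assumed single-layer ionosphere height

def slant_tec(vertical_tec, zenith_angle_deg):
    """Project a vertical TEC (in TECU) onto the line of sight."""
    z = math.radians(zenith_angle_deg)
    sin_zp = R_EARTH_KM / (R_EARTH_KM + SHELL_KM) * math.sin(z)
    return vertical_tec / math.sqrt(1.0 - sin_zp ** 2)

# The correction grows toward the horizon: 10 TECU at the zenith
# corresponds to a substantially larger slant TEC at 60 deg:
print(round(slant_tec(10.0, 60.0), 2))
```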
See the CASA Cookbook for a detailed description on ionosphere corrections in CASA (thanks are due to Jason Kooi, UIowa for his contributions).
Next we will calibrate the requantizer gain levels, which are the visibility amplitudes that were set as the input of the WIDAR correlator. Requantizer scans are added to a P-band observing schedule to optimize the digital power in each spectral window, which maximizes the signal-to-noise in each window. Because there is a significant variation in power across the 240 MHz of the P-band, and because some spectral windows may experience a high power due to strong RFI, setting the requantizer levels during the observations may improve the quality of the data, including the shape of the bandpass across multiple spectral windows. As part of the test-observations of MG0414+0534, the very strong source Cygnus-A was observed at the start of the run. Re-setting the requantizer levels both before observing Cygnus-A and before targeting the other (much weaker) sources was essential to optimize the digital power and avoid correlation errors. Although we do not use Cygnus-A as part of this tutorial, and we use only a single spectral window, performing calibration of the requantizer gains is still good practice for P-band spectral line reduction:
# In CASA gencal(vis='MG0414_d1_data.ms', caltype='rq', caltable='rq.cal')
The next step is to set the absolute flux levels of our flux calibrator 3C48, for which a well-known model exists:
# In CASA setjy(vis='MG0414_d1_data.ms', field='0', scalebychan=True, standard='Scaife-Heald 2012', listmodels=False, usescratch=False)
Note: it is crucial for spectral-line work to set scalebychan=True, to ensure that the absolute flux level is calculated per channel and correctly interpolated across the observing band. If scalebychan=False, then only a single value per spectral window is calculated, resulting in a step-function in flux between spectral windows.
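The effect of scalebychan can be illustrated with a generic power-law calibrator spectrum; the flux density, reference frequency, and spectral index below are illustrative placeholders, not the actual Scaife-Heald model for 3C48:

```python
# Why scalebychan=True matters: with a power-law calibrator spectrum
# S(nu) = S0 * (nu/nu0)**alpha, a per-channel model follows the curve,
# while one value per spectral window produces a step function.
# All numbers here are illustrative, not the 3C48 flux model.
def flux_jy(nu_mhz, s0=42.0, nu0=390.0, alpha=-0.7):
    return s0 * (nu_mhz / nu0) ** alpha

chans = [370.0, 380.0, 390.0, 400.0, 410.0]  # hypothetical channel freqs
per_channel = [round(flux_jy(nu), 2) for nu in chans]     # scalebychan=True
per_spw = round(flux_jy(sum(chans) / len(chans)), 2)      # one value per spw
print(per_channel, per_spw)
```

With a steep low-frequency spectrum the per-channel values change noticeably across even one spectral window, which is why the single per-spw value introduces flux steps between windows.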
We then determine the delay of each antenna, for each polarization and each spectral window. Doing this on a short scan of the primary calibrator is generally sufficient, and will correct for internal (e.g., electronics, cables) and external (e.g., ionosphere) effects. We choose ea04 as reference antenna, as it produced good data, has baselines to all other used antennas in the array, and is located in the center of the array. We also need to apply ‘on the fly’ the calibration tables for the antenna positions, the total electron content and the requantizer levels.
# In CASA gaincal(vis='MG0414_d1_data.ms', caltable='delays.cal', field='0', selectdata=True, timerange='06:43:48~06:48:42', solint='inf', refant='ea04', gaintype='K', gaintable=['antpos.cal','tecim.cal','rq.cal'])
The calibration solutions can be plotted using:
# In CASA plotcal(caltable='delays.cal', xaxis='antenna', yaxis='delay', field='0')
Note that any delays deviating by more than 30 ns should be treated with caution, and if needed, flagged.
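To see why delays of this size matter, recall that a residual delay τ produces a phase slope across the band, φ(ν) = 2πντ; a short sketch of how many phase turns that produces over the selected bandwidth:

```python
# A residual delay tau introduces a frequency-dependent phase
# phi(nu) = 2*pi*nu*tau, i.e. a phase ramp across the band. The number
# of full phase turns across a bandwidth dnu is simply tau * dnu.
def phase_turns(delay_ns, bandwidth_mhz):
    """Full turns of phase across the band for a given residual delay."""
    return delay_ns * 1e-9 * bandwidth_mhz * 1e6

# A 30 ns residual delay across the 64 MHz band used here winds the
# phase through almost two full turns:
print(round(phase_turns(30.0, 64.0), 2))
```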
The next step is to calibrate the bandpass, which corrects for the frequency-dependent gain variations. Before determining the frequency-dependent gains, we first calibrate the phases in time, deriving a solution for each integration, to make sure that we can derive accurate bandpass solutions when integrating over a full scan on the bandpass calibrator 3C48:
# In CASA gaincal(vis='MG0414_d1_data.ms', caltable='bpphase.gcal', field='0', spw='0:250~300', refant='ea04', calmode='p', solint='int', minsnr=2.0, gaintype='G', gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal'])
We use these phase-solutions to perform the actual bandpass calibration. In this case, we have scans of our bandpass calibrator 3C48 taken at three different times during the observations. By interpolating the bandpass solutions in time, we try to minimize any possible time-varying bandpass effects and improve our spectral dynamic range.
# In CASA bandpass(vis='MG0414_d1_data.ms', caltable='bandpass.cal', field='0', spw='0', solint='inf', combine='', refant='ea04', solnorm=False, bandtype='B', gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bpphase.gcal'])
Note: To interpolate between bandpass scans, it is essential to specify combine=''. The reason is that the default, combine='scan', combined with the parameter solint='inf', would cross the scan boundaries to form one single solution for the bandpass. Only when combining solint='inf' with combine='' is a bandpass solution obtained for each individual scan on 3C48.
After the bandpass calibration, we perform a phase calibration of our data. Because we will apply the bandpass solutions on-the-fly, we can use the full bandwidth to determine the phase solutions. We will create two different calibration tables to correct for the time-varying phases. First, we calibrate the phases on each integration interval with the option solint='int'.
# In CASA gaincal(vis='MG0414_d1_data.ms', caltable='intphase.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='int', minsnr=2.0, gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bandpass.cal'])
The sole purpose of generating phase solutions on each 10s interval is to be able to accurately correct for the time-varying amplitudes, as we will see below. However, this method is not ideal for interpolating the phase solutions in time. For that, we do a second phase calibration, this time averaging the signal across the phase-calibrator scans with solint='inf':
# In CASA gaincal(vis='MG0414_d1_data.ms', caltable='scanphase.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='inf', combine='', minsnr=2.0, gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bandpass.cal'])
Note that an alternative approach to deriving phase solutions per scan is to use the task 'smoothcal' to average the phase corrections obtained every 10s with solint='int' over the full duration of a single scan.
We now perform a calibration of the amplitude variations in time. For this, we will apply the phase solution obtained in each 10s interval:
# In CASA gaincal(vis='MG0414_d1_data.ms', caltable='amp.gcal', field='0', spw='0', selectdata=False, solint='inf', combine='', refant='ea04', calmode='ap', minsnr=2.0, gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bandpass.cal','intphase.gcal'])
Normally, this procedure only calibrates the relative amplitudes, and the task fluxscale is required to place the data on an absolute flux scale. However, because our flux calibrator is also our bandpass and gain calibrator, the absolute flux scale will already be correct, so we skip this step.
We have now finished the calibration of the data, and will apply the calibration tables that we derived from 3C48 to both the calibrator 3C48 itself and our target MG0414+0534. The task applycal creates a corrected data column in which the calibrated data are stored.
# In CASA applycal(vis='MG0414_d1_data.ms', field='0', gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bandpass.cal','scanphase.gcal','amp.gcal'], gainfield=['','','','0','0','0','0'], parang=True, calwt=False, applymode='calflagstrict', flagbackup=True)
# In CASA applycal(vis='MG0414_d1_data.ms', field='1', gaintable=['antpos.cal','tecim.cal','rq.cal','delays.cal','bandpass.cal','scanphase.gcal','amp.gcal'], gainfield=['','','','0','0','0','0'], calwt=False, applymode='calflagstrict', flagbackup=True)
After calibrating the data, it is often easier to see offending antennas or baselines. In this case, the calibrated data of the primary calibrator 3C48 show that the YY amplitudes of ea17 are off during the second part of the run. Because calibration solutions are antenna based, to get the best calibration, flag any poor data and re-run the calibration process:
# In CASA flagdata(vis='MG0414_d1_data.ms', mode='manual', antenna='ea17', action='apply')
Then run the calibration process again.
Note: a current known issue is that gaincal will not find good solutions if any correlation (including cross-correlations) in the data is completely flagged. Thus, in this case there is no difference between flagging only the YY polarization of ea17 or the entire antenna.
Now we split the calibrated data of our target into a new measurement set, including only the XX and YY polarization products. Using ‘split’, the calibrated target data are placed into the ‘data’ column of the new measurement set:
# In CASA split(vis='MG0414_d1_data.ms', outputvis='MG0414_d1_calibrated.ms', datacolumn='corrected', field='1', correlation='xx,yy', keepflags=False)
We now have a calibrated data set that has all calibration applied, except for possible self-calibration. Self-calibration relies on the target itself to better calibrate the antenna-based gains (phases and amplitudes) as a function of time. It can be applied to targets that are strong in the continuum, or have a strong enough spectral line, to derive good phase (and, if desired, also amplitude) solutions on short (order of minutes) timescales. The procedure relies on an initial model of the target, which you then use iteratively to improve the gains, and hence the quality of the image itself. Here we show the various steps in self-calibrating our data based on the strong continuum emission of MG0414+0534. We refer to a dedicated CASA guide on self-calibration for further details.
Note: if strong RFI is present in the target data, it may be necessary to flag your data before proceeding with self-calibration. For MG0414+0534 this is not necessary.
First, we create an initial model of our target field using tclean.
Note: ‘tclean’ was introduced in CASA 4.7 to supersede the task ‘clean’. When running a previous version of CASA, you can use the task ‘clean’ instead, but keep in mind that the names of some input parameters were altered with the introduction of ‘tclean’.
# In CASA tclean(vis='MG0414_d1_calibrated.ms', datacolumn='data', imagename='MG0414_d1_cont_R03', spw='0', specmode='mfs', niter=100, threshold='1.5Jy', imsize=[256,256], weighting='briggs', robust=0.3, savemodel='modelcolumn')
Note: the information in the MODEL column consists of visibilities that represent a model of the target field. tclean produces a '.model' image of the model that you can visualize in the viewer. It is essential that only components that you know for sure represent the continuum in the target field are taken into account in self-calibration. If this is not the case, you probably cleaned too deep and included noise features or artifacts (e.g., from the beam pattern) in the model. Self-calibration would permanently lock these in as features in your data, so this should be avoided at all cost! On the other hand, it is equally crucial to ensure that the model is not empty, as this would result in your target being shifted to the phase center.
Next we derive phase corrections to this model. Because our target is strong enough, we should get enough S/N per baseline on a 60 sec interval:
# In CASA gaincal(vis='MG0414_d1_calibrated.ms', caltable='sc1.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='60s', combine='', minsnr=2.0)
This creates a new phase-calibration table, which we call sc1.gcal. We now apply this calibration table to the calibrated data in order to improve the phase-only solutions:
# In CASA applycal(vis='MG0414_d1_calibrated.ms', field='0', gaintable=['sc1.gcal'], interp='linear')
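The 60-second solution interval chosen above can be sanity-checked with the radiometer equation for a single baseline, σ = SEFD / √(2 Δν t). The SEFD and bandwidth below are assumed, illustrative numbers, not measured VLA P-band values:

```python
import math

# Per-baseline noise from the radiometer equation:
#   sigma = SEFD / sqrt(2 * dnu * t)   (2 polarizations combined)
# so the per-baseline S/N on a continuum source is S / sigma.
def baseline_snr(flux_jy, sefd_jy, bandwidth_hz, t_sec):
    """S/N on one baseline; SEFD here is an assumed placeholder value."""
    sigma = sefd_jy / math.sqrt(2.0 * bandwidth_hz * t_sec)
    return flux_jy / sigma

# 3.3 Jy continuum source, 60 s solution interval, ~14.6 MHz of clean
# band, and an assumed 3000 Jy per-antenna SEFD:
print(round(baseline_snr(3.3, 3000.0, 14.6e6, 60.0), 1))
```

Even with conservative assumptions the per-baseline S/N is comfortably above the minsnr=2.0 threshold used in the gaincal call, which is why a strong continuum source like MG0414+0534 self-calibrates well on minute timescales.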
Because self-calibration is an iterative process, we want to repeat this step again, in order to further improve the gain calibration. For that, we first split the calibrated data, with the self-calibration solution applied, to the DATA column of a new measurement set:
# In CASA split(vis='MG0414_d1_calibrated.ms', outputvis='MG0414_d1_sc1.ms', datacolumn='corrected', field='0', keepflags=False)
Now we can use this newly created measurement set to make a better continuum image and subsequently perform the next round of self-calibration:
# In CASA tclean(vis='MG0414_d1_sc1.ms', datacolumn='data', imagename='MG0414_d1_cont_sc1.R03', spw='0', specmode='mfs', niter=500, threshold='500mJy', imsize=[256,256], weighting='briggs', robust=0.3, savemodel='modelcolumn')
# In CASA gaincal(vis='MG0414_d1_sc1.ms', caltable='sc2.gcal', field='0', spw='0', refant='ea04', calmode='p', solint='30s', combine='', minsnr=2.0)
# In CASA applycal(vis='MG0414_d1_sc1.ms', field='0', gaintable=['sc2.gcal'], interp='linear')
# In CASA split(vis='MG0414_d1_sc1.ms', outputvis='MG0414_d1_sc2.ms', datacolumn='corrected', field='0', keepflags=False)
# In CASA tclean(vis='MG0414_d1_sc2.ms', datacolumn='data', imagename='MG0414_d1_cont_sc2_R03', spw='0', specmode='mfs', niter=1000, threshold='100mJy', imsize=[256,256], weighting='briggs', robust=0.3, savemodel='modelcolumn')
Now that we have a good model of the continuum in the field of MG0414+0534, we will attempt a final self-calibration using both the phases and amplitudes:
# In CASA gaincal(vis='MG0414_d1_sc2.ms', caltable='sc3ap.gcal', field='0', spw='0', refant='ea04', calmode='ap', solint='30s', combine='', minsnr=2.0)
# In CASA applycal(vis='MG0414_d1_sc2.ms', field='0', gaintable=['sc3ap.gcal'], interp='linear')
# In CASA split(vis='MG0414_d1_sc2.ms', outputvis='MG0414_d1_sc3ap.ms', datacolumn='corrected', field='0', keepflags=False)
# In CASA tclean(vis='MG0414_d1_sc3ap.ms', datacolumn='data', imagename='MG0414_d1_cont_sc3ap_R03', spw='0', specmode='mfs', niter=1000, threshold='100mJy', imsize=[256,256], weighting='briggs', robust=0.3, savemodel='modelcolumn')
The output continuum image from the last iteration of tclean is still an improvement over the previous imaging, but also shows that we are not likely to gain further improvement with additional rounds of self-calibration. This indicates that we now have a calibrated data set of MG0414+0534 that is almost ready for final imaging.
However, before imaging, we need to flag the target data to get rid of poor-quality data. First, we run an automated flagging routine:
# In CASA flagdata(vis='MG0414_d1_sc3ap.ms', mode='rflag', datacolumn='data')
After running this automated flagging, we inspect the data in the CASA viewer by displaying 'time' vs. 'channel' for the different baselines. Keep in mind that when the data are properly calibrated, it is easier to recognize low-level imperfections in the data, which may have to be flagged to improve the image quality. In this case, the viewer shows that antennas ea09 and ea14 have significantly higher noise than the other antennas, something which was not picked up by the automated flagging routine. In addition, for a number of baselines, the data quality looks relatively poor. We flag these antennas and baselines before imaging the data.
# In CASA
flagdata(vis='MG0414_d1_sc3ap.ms', mode='manual', antenna='ea09', datacolumn='data')
flagdata(vis='MG0414_d1_sc3ap.ms', mode='manual', antenna='ea14', datacolumn='data')
To flag offending individual baselines, use the following format:
# In CASA flagdata(vis='MG0414_d1_sc3ap.ms', mode='manual', antenna='…&…', datacolumn='data')
Note: flagging data can be an iterative process. Often, the better the data is calibrated, the easier it is to recognize low-level imperfections in the data. To optimize the data reduction, you can script the calibration process, and opt to flag the data also in the original data set (i.e., before calibration), after which you run the entire calibration again.
Finally, we image the data using the task tclean (or clean for earlier CASA versions):
# In CASA tclean(vis='MG0414_d1_sc3ap.ms', datacolumn='data', imagename='MG0414_d1_line_sc3ap_vel_R03', spw='0', specmode='cube', outframe='bary', veltype='optical', restfreq='390.600MHz', niter=100, threshold='1Jy', imsize=[2048,2048], pbcor=True, weighting='briggs', robust=0.3, savemodel='none')
We can inspect the resulting line-data cube in the CASA viewer. The data cube contains both continuum and line signal, which can optionally be separated using the CASA task uvcontsub. The HI absorption feature is clearly present (Fig. 9).
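The velocity axis of the cube follows the optical convention selected with veltype='optical', i.e. v = c (f_rest/f_obs − 1) relative to the restfreq of 390.600 MHz set in the tclean call; a minimal sketch of the conversion:

```python
# Optical velocity convention used for the cube's velocity axis:
#   v = c * (f_rest / f_obs - 1), relative to the chosen rest frequency.
C_KMS = 299792.458  # speed of light in km/s

def optical_velocity_kms(f_obs_mhz, f_rest_mhz=390.600):
    """Optical velocity of a channel at f_obs relative to f_rest."""
    return C_KMS * (f_rest_mhz / f_obs_mhz - 1.0)

# A channel 0.5 MHz below the line frequency (positive = redshifted):
print(round(optical_velocity_kms(390.100), 1))
```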
Last checked on CASA Version 5.0.0