Transient Reduction Pipeline

2010 June 20 - T. Joseph W. Lazio

There is a class of observations for which only relatively simple data reduction steps are needed. One such example is transient observations, which are typically conducted in continuum mode and for which one is merely trying to determine the flux density of an essentially unresolved or only partially resolved source. This guide describes the steps in such a pipeline.

In order to process these data in a semi-automatic fashion, certain assumptions are made:

  • Data are stored as an Archive Science Data Model (ASDM) file on disk.
  • Sources, listed in order of appearance, are structured as Option A or Option B (see the sketch after this list):
    • Option A: phase calibrator, target source, amplitude calibrator
    • Option B: amplitude calibrator, phase calibrator, target source
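
The numbered steps below refer to the calibrators and the target through a Python dictionary named sources (e.g., sources['acal'], sources['pcal'], sources['target']). The script constructs this dictionary itself from the field order; purely as an illustrative sketch (the index strings here are an assumption, not the script's actual code), it might look like

# Hypothetical sketch: field indices, as strings, keyed by role.
sources = {'pcal': '0', 'target': '1', 'acal': '2'}     # Option A ordering
# sources = {'acal': '0', 'pcal': '1', 'target': '2'}   # Option B ordering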

This guide should be set up so that a pipeline script can be extracted from it.

Warning: This guide was written while the EVLA was still in its commissioning phase. As the instrument matures, specific steps taken here may need to be adjusted. Caveat emptor. Possible issues include:

  • calwt=F is used because, at the time of writing, the EVLA is not reporting accurate weights (though I believe this has a stronger effect on extended sources than on compact sources);
  • the pipeline assumes the same number of channels in every spectral window (a quick check for this is sketched after this list);
  • the pipeline assumes that the first scan is a dummy scan, as one is required at the time of writing.
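
The second of these assumptions is easy to verify once the Measurement Set exists. A minimal check, not part of the original script, using the same table tool the script uses later:

# Hypothetical pre-flight check: confirm every spectral window has the
# same number of channels, as the pipeline assumes.
tb.open(msname + '/SPECTRAL_WINDOW')
num_chan = tb.getcol("NUM_CHAN")
tb.close()
if len(set(num_chan)) > 1:
    print "WARNING: spectral windows have differing numbers of channels: ", num_chan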

I've restructured the document to describe the processing steps (conceptually) first, then list the actual script itself.

User Input

I am attempting to migrate the script to become a "pipeline," in the sense of not requiring any user input. However, there are some pieces of information that are useful or required in order to process the data.

  • What is the name of the initial Archive Science Data Model (ASDM) file? It would be possible to assume that the script is being run in a directory that contains only a single file, which is in ASDM format, but that also seems a bit limiting.
  • What antenna should be used as a reference antenna? At the time of writing, it is not clear that a robust algorithm exists within CASA for choosing a reference antenna if the user has not specified one.
  • How much flagging should be done? The script does some simple clipping, designed to remove any egregious RFI or horribly performing antenna or baseline. This clipping is done in terms of the rms visibility amplitude in the science target data, flagging data above some large threshold (e.g., 50σ). This threshold is under user control, but the current flagging in this script is certainly not equal to a human lovingly massaging the visibility data.

These must be set before invoking the script/pipeline. For example:

CASA <1> ASDM_name='AL007_sb123456789_1.56000.1234567890'
CASA <2> refant='ea21'
CASA <3> clip_sigma=50.0
CASA <4> execfile('transient_pipeline.py')

A sensible, large value is adopted for clip_sigma if it is omitted.
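
The clip itself happens inside the script and is not reproduced among the numbered steps below. As a rough sketch of the idea only (the syntax follows more recent CASA releases, with flagdata mode='clip', and is not the script's actual code): compute the rms visibility amplitude and flag anything above clip_sigma times that rms.

# Hypothetical sketch of the clip step; the script's implementation may differ.
import numpy
tb.open(msname)
data = tb.getcol('DATA')          # simplification: all fields, not just the target
tb.close()
rms = numpy.sqrt(numpy.mean(numpy.abs(data)**2))
flagdata(vis=msname, mode='clip',
         field=sources['target'],
         clipminmax=[0.0, clip_sigma*rms], datacolumn='DATA')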

Processing Steps

1. Read the data from the ASDM file, converting them to a Measurement Set with importevla, and apply basic flagging operations (zeros, shadowing) at the same time.

# msname (the output Measurement Set name) is assumed to be set earlier in the script
importevla(asdm=ASDM_name, vis=msname,
           flagzero=True, flagpol=True, shadow=True)

2. Flag first (dummy) scan.

flagdata(vis=msname,
         mode='manualflag',selectdata=T, scan='1')

3. Flag first 10 seconds of each scan.

flagdata(vis=msname,
         mode='quack',quackinterval=10,quackmode='beg',
         selectdata=F)

4. Extract various useful quantities from the data.

hdvalue=vishead(vis=msname, mode='get',
                hdkey='field', hdindex=sources['acal'])
acal_name=hdvalue[0]

spw_table=msname + '/SPECTRAL_WINDOW'
tb.open(spw_table)
freq_list=tb.getcol("REF_FREQUENCY")
channel_width=tb.getcol("CHAN_WIDTH")
num_chan=tb.getcol("NUM_CHAN")
tb.close()

freq=freq_list[0]
if (1E9 < freq) and (freq < 2E9):
    band = 'L'
elif (2E9 <= freq) and (freq < 4E9):
    band = 'S'
elif (4E9 <= freq) and (freq < 8E9):
    band = 'C'
elif (8E9 <= freq) and (freq < 12E9):
    band = 'X'
elif (12E9 <= freq) and (freq < 40E9):
    band = 'K'
elif freq >= 40E9:
    band = 'Q'
else:
    band = 'unknown'
print "Observations are determined to be in the ", band, " band.\n"


# c (speed of light), D (antenna diameter), degrad, and arcsecond_deg
# (unit-conversion factors) are assumed to be defined earlier in the script.
wavelength=c/freq
primary_beam=(1.02*wavelength/D)*degrad   # HPBW, from Napier (1999)
FoV=(primary_beam/math.sqrt(2))*arcsecond_deg

tb.open(msname)
uvw=tb.getcol('UVW')
interval=tb.getcol('INTERVAL')
tb.close()

b=[]
for u, v, w in zip(uvw[0], uvw[1], uvw[2]):
    b.append(math.sqrt(u*u+v*v)/wavelength)
bmax=max(b)
                                          # HPBW, synthesized beam,
                                          # from Bridle & Schwab (1999)
synthesized_beam=(1.2/bmax)*degrad*arcsecond_deg


##############################
# How much time-average smearing can be tolerated?
# Assume no more than time_smearing_loss peak
# intensity loss over half of primary beam.
# Follow Section 2 of Bridle
# & Schwab (1999).
#
time_smearing_loss=0.01
tau=math.sqrt(time_smearing_loss*1E9)*(synthesized_beam/FoV)

arbitrary_maximum=30
if tau > arbitrary_maximum:
    tau=arbitrary_maximum

dt=min(interval)

# tau is allowed value, dt is actual (minimum)
# (could be an issue if baseline-dependent
#  correlator accumulation used)
# make sure that tau is an integer
# multiple of dt
# 
tau=dt*math.floor(tau/dt)
if tau < dt:
    tau=dt

print "Data will be averaged in time."
print "Original time sampling [s]: ", dt
print "Averaging time [s]:         ", tau
print " "

##############################
# How much bandwidth
# smearing can be tolerated?
# Assume no more than band_smearing_loss peak
# intensity loss over half of
# primary beam.
# Follow Section 1 of Bridle
# & Schwab (1999).
# Assume square bandpass, no
# taper, expand resulting
# sine integral to lowest order
#
band_smearing_loss=0.01
eta_band=3.79
delta_nu=freq*(2/eta_band)*(synthesized_beam/FoV)*math.sqrt(18*band_smearing_loss)

# delta_nu is allowed value,
# figure out even divisibles of the actual
# value, stored in num_chan that is smaller than
# delta_nu
# potential bug if uneven channel widths
# used in different spectral windows

dnu=channel_width[0][0]
nchan_log2=math.log(num_chan[0],2)
nchav=1                           # fallback: no channel averaging
for i in range(int(nchan_log2-1), 0, -1):
    if (dnu*math.pow(2,i)) < delta_nu:
        nchav=math.pow(2,i)
        break

print "Data will be averaged in frequency."
print "Original channel width [kHz]: ", min(channel_width[0])/1E3
print "Averaged channel width [kHz]: ", (nchav*min(channel_width[0]))/1E3
print "Number of channels: ", nchav

5. Compress data for faster processing.

outmsname=ASDM_name+'_split.ms'
tave='%.fs'%tau                   # format as, e.g., '10s' for split's timebin
split(vis=msname,outputvis=outmsname,
      datacolumn='data',
      timebin=tave,width=int(nchav))

6. Set flux density of amplitude calibrator.

  Some heuristics are needed here.
  There could also be an issue if the frequency setting is such that one should use a model image from a different band than the one being observed, e.g., observing near the top of C band, where an X band model might be more appropriate.
  (The acal_std_name lookup used below is sketched after this code.)
for tstname in acal_std_name.keys():
    if acal_name in acal_std_name[tstname]:
        modimage=tstname
modimage=CalModels + '/' + modimage + '_' + band + '.im'
setjy(vis=outmsname,field=sources['acal'],modimage=modimage)
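
The acal_std_name dictionary and the CalModels path used above are assumed to be defined earlier in the script. Purely as an illustrative sketch (the aliases and path shown here are not the script's actual values), the dictionary maps each standard flux calibrator model to the field names under which it may appear:

# Hypothetical sketch: map model names to possible field-name aliases.
acal_std_name = {'3C48' : ['3C48',  '0137+331', 'J0137+3309'],
                 '3C138': ['3C138', '0521+166', 'J0521+1638'],
                 '3C147': ['3C147', '0542+498', 'J0542+4951'],
                 '3C286': ['3C286', '1331+305', 'J1331+3030']}
CalModels = '/usr/local/casapy/data/nrao/VLA/CalModels'   # illustrative path only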


7. Quick-and-dirty bandpass calibration. A phase-only gaincal solution on the integration timescale is derived first and applied on the fly, so that phase drifts do not decorrelate the bandpass solution.

Bcaltable=ASDM_name + '.B1'
Gcaltable=ASDM_name + '.G0'
gaincal(vis=outmsname,caltable=Gcaltable,
        field=sources['acal'],
        solint='int',refant=refant,
        solnorm=T,calmode='p',minsnr=5)
bandpass(vis=outmsname,caltable=Bcaltable,
         field=sources['acal'],
         solint='inf',combine='scan',refant=refant,
         solnorm=T,
         gaintable=[Gcaltable])


8. Amplitude and phase calibration. Gain solutions are derived for the amplitude calibrator and then appended for the phase calibrator, applying the bandpass table on the fly.

Gcaltable=ASDM_name + '.G1'
gaincal(vis=outmsname,caltable=Gcaltable,
        field=sources['acal'],
        solint='inf',combine='scan',refant=refant,
        minsnr=5,
        gaintable=[Bcaltable])

gaincal(vis=outmsname,caltable=Gcaltable,
        field=sources['pcal'],
        solint='inf',refant=refant,
        minsnr=5,
        append=T,
        gaintable=[Bcaltable])


9. Bootstrap the flux density scale from the amplitude calibrator to the phase calibrator, then apply the calibration.

fluxtable=ASDM_name + '.flux1'
fluxscale(vis=outmsname,caltable=Gcaltable,fluxtable=fluxtable,
          reference=sources['acal'],
          transfer=sources['pcal'])

applycal(vis=outmsname,
         field=sources['pcal'],
         gaintable=[Bcaltable, Gcaltable],
         gainfield=[' ', sources['pcal']],
         calwt=F)

applycal(vis=outmsname,
         field=sources['target'],
         gaintable=[Bcaltable, Gcaltable],
         gainfield=[' ', sources['pcal']],
         calwt=F)


10. Form target source measurement set.

hdvalue=vishead(vis=msname, mode='get',
        hdkey='field', hdindex=sources['target'])
target_name=hdvalue[0]

targetms=target_name + '.ms'
split(vis=outmsname,outputvis=targetms,
      datacolumn='corrected',
      field=sources['target'])

11. Image and CLEAN. The cell size is set to one quarter of the synthesized beam.

imagename=target_name + '.im'
cells='%.3farcsec'%(synthesized_beam/4)

clean(vis=targetms,imagename=imagename,
      mode='mfs',
      niter=300,
      psfmode='clark',
      imagermode='csclean',
      imsize=[1024,1024],cell=cells,
      stokes='IV',
      weighting='briggs',robust=-1)
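
The stated goal is a flux density for an essentially unresolved or only partially resolved source. Measuring it is not part of the script itself; as a possible follow-up sketch only (not the original pipeline), one could read off image statistics with imstat or fit the source with imfit:

# Hypothetical follow-up: estimate the target flux density from the
# CLEANed Stokes I image produced above.
stats = imstat(imagename=imagename + '.image', stokes='I')
print "Peak intensity [Jy/beam]: ", stats['max'][0]
fit = imfit(imagename=imagename + '.image', stokes='I')
# fit['results']['component0'] holds the fitted component, including
# its integrated flux density.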