
# VLA CASA Calibration Pipeline

Contributors: Jürgen Ott, Drew Medlin

## General Description

The VLA calibration pipeline performs basic flagging and calibration using CASA. It is currently designed to work for Stokes I continuum data (except P-band), but may work in other circumstances as well. Each astronomical Scheduling Block (SB) observed by the VLA is automatically processed through the pipeline. The VLA calibration pipeline does the following:

• Loads the data from the archival format (Science Data Model-Binary Data Format [SDM-BDF]) into a CASA MeasurementSet (MS), applies Hanning smoothing, and obtains information about the observing set-up from the MS;
• Applies online flags and other deterministic flags (shadowed data, edge channels of sub-bands, etc.);
• Prepares models for primary flux density calibrators;
• Derives pre-determined calibrations (antenna position corrections, gain curves, atmospheric opacity corrections, requantizer gains, etc.);
• Iteratively determines initial delay and bandpass calibrations, including flagging of RFI (Radio Frequency Interference) and some automated identification of system problems;
• Derives initial gain calibration, and derives the spectral index of the bandpass calibrator;
• RFI flagging is done on data with the initial calibration applied;
• Derives final delay, bandpass, and gain/phase calibrations, and applies them to the data;
• Runs the RFI flagging algorithm on the target data;
• Creates diagnostic images of calibrators.

In addition to the CASA integrated pipeline (available with the CASA package), a scripted pipeline is available for some CASA versions that may be useful. Please see the VLA scripted calibration pipeline web page for details.

Known issues are covered below, but if you have any questions or encounter issues with the pipeline, please submit comments, questions, and suggestions to the Pipeline Department of the NRAO Helpdesk.

## Obtaining the Pipeline

The pipeline is part of every other CASA release, starting with CASA 4.3.1. The Obtaining CASA webpage has links to the CASA version with the most recent VLA pipeline for a number of supported operating systems and provides access to older versions.

The scripted pipeline versions may continue to get updates in addition to the integrated versions for those who require greater flexibility when modifying their reduction procedures. If you are interested in obtaining the scripted pipeline, please see the Scripted Pipeline page.

## Pipeline Requirements

1. The VLA calibration pipeline runs on each completed SB (typically a single SDM-BDF) separately; there is currently no provision for it running on collections of SBs.
2. The pipeline relies on the correct setting of scan intents. We therefore recommend that every observer ensures that the scan intents are correctly specified in the Observation Preparation Tool (OPT) during the preparation of the SB (see OPT manual for details). Typical intents are:
• CALIBRATE_FLUX (required): flux density scale calibration scans (toward one of the standard VLA calibrators 3C48, 3C138, 3C147, or 3C286); the pipeline will use the first field with this intent. If this intent is not present, the pipeline will fail.
• CALIBRATE_AMPLI and CALIBRATE_PHASE (required): temporal complex gain/phase calibration; if these intents are not present, the pipeline will fail
• CALIBRATE_BANDPASS (optional): scan that is used to obtain the bandpass calibration (only the first instance of CALIBRATE_BANDPASS is used regardless of the band; therefore multi-band scheduling blocks may encounter problems when different bandpass calibrators are used); if not present, the first field with scan intent CALIBRATE_FLUX will be used for bandpass calibration
• CALIBRATE_DELAY (optional): delay calibrator scan; if not present the first scan with a CALIBRATE_BANDPASS intent is used for delay calibration, and if that one is not available delays are calculated using the first CALIBRATE_FLUX scan.
3. The pipeline also currently requires a signal-to-noise of >~3 for each spectral window of a calibrator per integration (for each channel of the bandpass).
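The intent rules above can be sketched as a quick pre-flight check. This is a hypothetical helper, not part of the pipeline; the intent strings would normally be read from the listobs output:

```python
# Hypothetical pre-flight check of the scan-intent requirements above.
REQUIRED = {"CALIBRATE_FLUX", "CALIBRATE_AMPLI", "CALIBRATE_PHASE"}

def check_intents(intents):
    """Return (ok, missing_required) for a set of scan intents."""
    missing = sorted(REQUIRED - set(intents))
    return (len(missing) == 0, missing)

ok, missing = check_intents({"CALIBRATE_FLUX", "CALIBRATE_AMPLI",
                             "CALIBRATE_PHASE", "CALIBRATE_BANDPASS"})
```

If any required intent is missing, the pipeline will fail, so it is worth checking the SB intents before submission.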

## Automatic Processing for Science Observations

Starting with Semester 2013A D-configuration, a version of the calibration pipeline has been run automatically at the completion of all astronomical scheduling blocks, except for P-band observations, with the resulting calibration tables and flags archived for future use. At NRAO we execute the standard pipeline, which is optimized for Stokes I continuum processing independent of the actual science goal. A user may therefore decide to re-run the pipeline after making appropriate modifications.

Investigators are notified when the calibrated data are ready for download; detailed quality assurance checks can be performed by NRAO staff upon request. The calibrated visibility data are retained on disk for 15 days after the pipeline has completed to enable investigators to download and image at their home institution or remotely using NRAO computing resources. Calibrated data can also be restored after this nominal time period by re-generating the MS from the raw SDM-BDF data file downloaded from the NRAO archive and applying the saved calibration and flag tables. See the section "Restore calibration from archived products" for details on the restoration procedures. To request any of the automatic pipeline processed data, please submit a ticket to Pipeline Department of the NRAO Helpdesk.

## Running the Pipeline

The pipeline can take a few hours to a few days to complete depending on the specifics of the observation; ensure that your process can run for the full duration without interruption. Also, make sure that there is enough space in your directory as the data volume will increase by a factor of about four. There are several ways to run the pipeline and, in most cases, we recommend starting with the raw data (an SDM-BDF) that can be requested from the NRAO archive. Place your data (preferably the SDM-BDF, but an MS is also acceptable) in its own directory for processing. For example:

#In a Terminal
mkdir myVLAdata


Next, start CASA from the same directory where you placed your SDM-BDF. Note: do not try to run the pipeline from a different directory by giving it the full path to a dataset, as some of the CASA tasks require the MS to be co-located with its associated gain tables. And do not try to run the pipeline from inside the SDM-BDF or MS directories themselves. While SDM-BDFs and MSs are directories, they should always be treated as single entities that are accessed from the outside. It is also important that a fresh instance of CASA is started from the directory that will contain the SDM-BDF or MS, rather than using an existing instance of CASA and using "cd" to move to a new directory from within CASA, as the output plots will then end up in the wrong place and potentially overwrite your previous pipeline results.

### Starting CASA with Pipeline Tasks

To start CASA with the pipeline from your own installation type:

#In a Terminal
casa --pipeline


Note that starting CASA without the --pipeline option will start CASA without any of the pipeline-specific tasks. (Conversely, if you start 'casa --pipeline' for manual data reduction, plots that use matplotlib, such as plotcal, will not work; use plain 'casa' without --pipeline for manual data reduction.)

If you are at the New Mexico Array Science Center (NMASC) or using NRAO computing resources at the NMASC, we provide a shortcut to the latest CASA version that includes the pipeline. To start this version, type:

#In a Terminal
casa-pipe


To list other versions of CASA with the pipeline that are available at the NMASC, type:

#In a Terminal
casa-pipe -ls

To start a particular version, type:

#In a Terminal
casa-pipe -r <full text from version list>

Now that CASA is open, there are several ways to start the pipeline.

### Method 1: import hifv

The pipeline comes with several predefined sets of instructions, called recipes, to accommodate different needs: one for VLA data, one for ALMA data, one for VLASS, etc. Once started, the pipeline does not require any further input from the user and should run to completion automatically. The default VLA recipe for Stokes I continuum is called hifv, which stands for 'heuristics, interferometry, VLA'. To use this VLA data recipe, first import it into CASA with the following command from the CASA prompt:

# In CASA
import pipeline.recipes.hifv as hifv

and then start the pipeline via:

# In CASA
hifv.hifv(['mySDM'])

where mySDM is the name of the SDM-BDF from your observations.

### Method 2: casa_pipescript.py

If you have a 'casa_pipescript.py' file (see example below) from a previous pipeline run with the same CASA version, you can edit it to add the SDM-BDF (or MS) name and then execute it directly in CASA. For this to work, 'casa_pipescript.py' must be in the same directory where you start CASA with the pipeline. Once CASA is started (same steps as above), type:

#In CASA
execfile('casa_pipescript.py')

### Method 3: One Stage at a Time

You may notice that 'casa_pipescript.py' is simply a list of specific CASA pipeline tasks called in order to form the default pipeline. If desired, you can run each of these tasks one at a time in CASA, for example to inspect intermediate pipeline products.

If you need to exit CASA between stages, you can restart the pipeline where you left off. However, in order for this to work, none of the files can be moved to other directories. First, use the CASA pipeline task h_resume after starting CASA again. This will set up the environment again for the pipeline to work. Type:

# In CASA
h_resume()

### Execution on Multiple Processing Engines

For faster processing, it is also possible to run the CASA pipeline in a multi-processor, multi-core environment. Most of the calibration (and imaging) tasks have been rewritten to work in such an environment. Start CASA like this:

#In a Terminal
mpicasa -n X <path_to_casa>/casa --pipeline -c script.py


where 'X' is the number of processing cores. Note that one core is always used for the management of the processes; mpicasa -n 9 will therefore use 9 cores, 8 of which process the data. 'script.py' contains either the lines

#Content of script.py
import pipeline.recipes.hifv as hifv
hifv.hifv(['mySDM'])

or is the 'casa_pipescript.py' mentioned above. The pipeline will automatically detect that it is running in parallel mode and set all required, relevant task parameters (including partitioning of the data).
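The core accounting described above amounts to a one-line rule; the helper below is hypothetical (for planning a run, not an mpicasa feature):

```python
# Hypothetical helper: with 'mpicasa -n X', one core manages the
# MPI processes, so X - 1 cores actually process data.
def processing_cores(x):
    return max(x - 1, 0)
```

For example, processing_cores(9) gives the 8 worker cores quoted above.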

More information on CASA parallel processing, including running scripts like the pipeline on multiple nodes, can be found in the Parallel Processing chapter of the CASA Docs.

## What you get: Pipeline Products

VLA pipeline output includes data products such as calibrated visibilities, a weblog, and all calibration tables. Note that the automated execution at NRAO also runs an additional data packaging step (hifv_exportdata), which moves most of the files to an upper-level '../products' directory. This step is omitted in manual execution, and all products remain within the 'root' directory where the pipeline was executed.

The most important pipeline products include (mySDM is a placeholder for the SDM-BDF name):

• A MeasurementSet (MS) 'mySDM.ms' with applied flags and calibrated visibilities in the CORRECTED_DATA column that can be used for subsequent imaging (root directory).
• A weblog that is accessible via pipeline-YYYYMMDDTHHMMSSS/html/index.html, where the YYYYMMDDTHHMMSSS stands for the pipeline execution time stamp (multiple pipeline executions will result in multiple weblogs). The weblog contains information on the pipeline processing steps with diagnostic plots and statistics. An example is given in the VLA pipeline CASA guide.
• Calibrator images for all spws (files start with 'oussid*' in the root directory).
• All calibration tables and the 'mySDM.ms.flagversions' directory, which contains flag backups made at various stages of the pipeline run (see section Calibration Tables).
• The casapy-YYYYMMDD-HHMMSS.log CASA logger messages (in pipeline-YYYYMMDDTHHMMSSS/html/).
• 'casa_pipescript.py' (in pipeline-YYYYMMDDTHHMMSSS/html/), the script with the actually executed pipeline heuristic sequence and parameters. This file can be used to modify and re-execute the pipeline (see section The casa_pipescript.py file).
• 'casa_commands.log' (in pipeline-YYYYMMDDTHHMMSSS/html/), which contains the actual CASA commands that were generated by the pipeline heuristics (see section The casa_commands.log file).
• The output from CASA's task listobs is available at 'pipeline-YYYYMMDDTHHMMSSS/html/sessionSession_default/mySDM.ms/listobs.txt' and contains the characteristics of the observations (scans, source fields, spectral setup, antenna positions, and general information).
• As previously mentioned in Automatic Processing for Science Observations, calibrated MSs are only stored at NRAO for a period of 15 days. After that period, the pipeline products need to be re-applied to the raw data that have to be downloaded in the SDM-BDF format from the archive. To prepare for this restoring procedure, NRAO adds the hifv_exportdata pipeline task as a last step. This task packages calibration tables in 'unknown.session_1.caltables.tgz' and flag backups in 'mySDM.ms.flagversions.tgz'. An additional text file, 'mySDM.ms.calapply.txt', is also produced by hifv_exportdata that CASA task applycal uses when restoring the calibration. The restoring process itself is performed by a script called 'casa_piperestorescript.py'. See the section Restore calibration from archived products for details on the restoration procedures.

### The Pipeline Weblog

Information on the pipeline run can be inspected through a weblog that is launched by pointing a web browser to file:///<path to your working directory>/pipeline-YYYYMMDDTHHMMSSS/html/index.html. The weblog contains statistics and diagnostic plots for the SDM-BDF as a whole and for each stage of the pipeline. The weblog is the first place to check if a pipeline run was successful and to assess the quality of the calibration.

An example walkthrough of a pipeline weblog is provided in the VLA Pipeline CASA guide.

Note that we regularly test the weblog on Firefox. Other browsers may not display all items correctly.

### Calibration Tables

The final calibration tables of the pipeline are (where mySDM is a placeholder for the SDM-BDF name):

mySDM.ms.hifv_priorcals.s5_3.gc.tbl : Gaincurve
mySDM.ms.hifv_priorcals.s5_4.opac.tbl : Opacity
mySDM.ms.hifv_priorcals.s5_5.rq.tbl : Requantizer gains
mySDM.ms.hifv_priorcals.s5_6.ants.tbl : Antenna positions (if created)
mySDM.ms.finaldelay.k : Delay
mySDM.ms.finalBPcal.b : Bandpass
mySDM.ms.averagephasegain.g : Temporal Phase offsets
mySDM.ms.finalampgaincal.g : Flux calibrated Temporal Gains
mySDM.ms.finalphasegaincal.g : Temporal Phases
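For scripted post-processing, the final tables can be located by their suffixes. The sketch below is a hypothetical helper (not a pipeline task) that looks up the names listed above in the working directory:

```python
import glob

# Hypothetical helper: find the final calibration tables for a given
# SDM-BDF name, based on the table suffixes listed above.
SUFFIXES = ["finaldelay.k", "finalBPcal.b", "averagephasegain.g",
            "finalampgaincal.g", "finalphasegaincal.g"]

def final_caltables(sdm):
    tables = []
    for suf in SUFFIXES:
        tables += glob.glob(f"{sdm}.ms.{suf}")
    return tables
```

Only tables that actually exist on disk are returned, so a short result can flag an incomplete pipeline run.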


### The casa_pipescript.py File

The sequence of pipeline heuristic steps is listed in the 'casa_pipescript.py' script located in the pipeline-YYYYMMDDTHHMMSSS/html directory (where YYYYMMDDTHHMMSSS is the timestamp of the execution). A typical 'casa_pipescript.py' has the following structure (where mySDM is again a placeholder for the name of the SDM-BDF raw data file that was processed):

__rethrow_casa_exceptions = True
context = h_init()
context.set_state('ProjectSummary', 'observatory', 'Karl G. Jansky Very Large Array')
context.set_state('ProjectSummary', 'telescope', 'EVLA')
try:
    hifv_importdata(ocorr_mode='co', nocopy=False, vis=['mySDM'], \
        overwrite=False)
    hifv_hanning(pipelinemode="automatic")
    hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*, \
        *UNKNOWN*, *SYSTEM_CONFIGURATION*, *UNSPECIFIED#UNSPECIFIED*', \
        flagbackup=False, scan=True, baseband=True, clip=True, autocorr=True, \
        hm_tbuff='1.5int', template=True, online=True, tbuff=0.0)
    hifv_vlasetjy(fluxdensity=-1, scalebychan=True, reffreq='1GHz', spix=0)
    hifv_priorcals(tecmaps=False)
    hifv_testBPdcals(weakbp=False)
    hifv_checkflag(pipelinemode="automatic")
    hifv_semiFinalBPdcals(weakbp=False)
    hifv_checkflag(checkflagmode='semi')
    hifv_semiFinalBPdcals(weakbp=False)
    hifv_solint(pipelinemode="automatic")
    hifv_fluxboot(pipelinemode="automatic")
    hifv_finalcals(weakbp=False)
    hifv_applycals(flagdetailedsum=True, flagbackup=True, calwt=[True], \
        flagsum=True, gainmap=False)
    hifv_targetflag(intents='*CALIBRATE*,*TARGET*')
    hifv_statwt(pipelinemode="automatic")
    hifv_plotsummary(pipelinemode="automatic")
    hif_makeimlist(nchan=-1, calmaxpix=300, intent='PHASE,BANDPASS')
    hif_makeimages(tlimit=2.0, hm_negativethreshold=-999.0, \
        maxncleans=1, hm_growiterations=-999, cleancontranges=False, \
        noise='1.0Jy', hm_minbeamfrac=-999.0, target_list={}, robust=-999.0, \
        parallel='automatic', weighting='briggs', hm_noisethreshold=-999.0, \
        hm_lownoisethreshold=-999.0, npixels=0, hm_sidelobethreshold=-999.0)
finally:
    h_save()


(Note that executions at NRAO may show small differences, e.g., an additional final hifv_exportdata step that packages the products to be stored in the NRAO archive.)

The above is, in fact, a standard user 'casa_pipescript.py' file for the current CASA and pipeline version that can be used for general pipeline processing after inserting the correct mySDM filename in hifv_importdata.

The pipeline run can be modified by adapting this script to comment out individual steps, or by providing different parameters (see the CASA help for the parameters of each task). The script can then be (re-)executed via:

# In CASA
execfile('casa_pipescript.py')


We will use this method later for an example where we modify the script for spectral line processing.

General modifications to the script include setting '__rethrow_casa_exceptions = False' to suppress CASA error messages in the weblog and 'h_init(weblog=False)' for faster processing without any weblog or plotting.

### The casa_commands.log File

casa_commands.log is another useful file in pipeline-YYYYMMDDTHHMMSSS/html (where YYYYMMDDTHHMMSSS is the timestamp of the pipeline execution) that lists all the individual CASA commands that the pipeline heuristics (hifv) tasks produced. Note that 'casa_commands.log' is not executable itself, but contains all the CASA tasks and associated parameters to trace back the individual data reduction steps.

## Restore calibration from archived products

To apply the calibration and flag tables produced by the pipeline, we recommend using the same version of CASA used by the pipeline as well as the same version of the pipeline. The version of both the pipeline and CASA used may be confirmed via the main "Observation Overview Page", the home page of the weblog. We recommend starting with a fresh SDM-BDF, although a fresh MS should work, too. There may be small differences in the final result or statistics if online flags were applied when requesting the MS.

In order for the pipeline to work properly, please follow the steps below to prepare your directory and calibration files for application. In addition to the raw SDM-BDF you will need the following pipeline products: 'unknown.session_1.caltables.tgz', 'mySDM.ms.flagversions.tgz', 'mySDM.ms.calapply.txt', 'unknown.pipeline_manifest.xml', and 'casa_piperestorescript.py' (where mySDM is a placeholder for the SDM-BDF name). To ensure everything is correctly placed for the script to apply calibration, please follow these steps:

1. Create a directory where you will work, call it something like "restoration".
mkdir restoration
2. Go into your restoration directory and create three new directories named exactly as follows:
mkdir rawdata working products
3. Place the raw SDM-BDF into the "rawdata" directory.
mv /path/to/fresh/data/mySDM rawdata/
4. Place 'unknown.session_1.caltables.tgz', 'mySDM.ms.flagversions.tgz', 'unknown.pipeline_manifest.xml', and 'mySDM.ms.calapply.txt' into the "products" directory
mv *.tgz products/
mv *.xml products/
mv *.txt products/
5. Place the 'casa_piperestorescript.py' file into the "working" directory
mv casa_piperestorescript.py working/
6. The 'casa_piperestorescript.py' looks similar to the 'casa_pipescript.py', but runs a special hifv_restoredata pipeline task to apply flags and calibration tables, followed by hifv_statwt. Edit the hifv_restoredata call to include "../rawdata/" in front of the name of the SDM-BDF (mySDM), e.g.:
__rethrow_casa_exceptions = True
h_init()
try:
    hifv_restoredata(vis=['../rawdata/mySDM'], session=['session_1'],\
        ocorr_mode='co', gainmap=False)
    hifv_statwt()
finally:
    h_save()

7. From the "working" directory, start CASA:
casa --pipeline

or if you use computers at the NMASC:
casa-pipe
8. Start the restore script from the CASA prompt:
# In CASA
execfile('casa_piperestorescript.py')
9. Enjoy calibrated data once the process finishes.
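The directory preparation in steps 1-5 can be sketched in Python. This is a hypothetical helper; 'mySDM' and the product file names are placeholders for the actual data set:

```python
import os
import shutil

# Hypothetical sketch of the restoration layout from steps 1-5 above:
# rawdata/ holds the SDM-BDF, products/ the archived pipeline products,
# and working/ the restore script.
def prepare_restoration(root, sdm_path, product_files, restore_script):
    for sub in ("rawdata", "working", "products"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    shutil.move(sdm_path, os.path.join(root, "rawdata"))
    for f in product_files:
        shutil.move(f, os.path.join(root, "products"))
    shutil.move(restore_script, os.path.join(root, "working"))
```

After this, CASA is started from the "working" directory as in step 7.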

## Further flagging

### Using a flagging template

Although the pipeline attempts to remove most RFI, there are still many cases where additional flagging is required. The pipeline will then be re-started with the additional flags pre-applied.

The best way to do so is to inspect the data and to record all the required flags in a flagging template. See the CASA flagdata task help for the details on the format. Here is an example for a template:

mode='manual' scan='1~3'           #flags scans 1 to 3
mode='clip' clipminmax=[0,10]     #flags data outside an amplitude range
#here all amplitudes larger than 10 Jy
# this line will be ignored
mode='quack' scan='1~3,10~12' quackinterval=1.0  #removes the first second
#from scans 1-3 and 10-12


The most important modes are manual to flag given time ranges, antennas, spws, scans, fields, etc., and clip to flag data exceeding threshold amplitude levels.

Flagging templates can be saved in text files with any name, e.g., 'myflags.txt'. In 'casa_pipescript.py', modify the parameters of the hifv_flagdata task: ensure template=True and add filetemplate='myflags.txt'.

hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*,\
    *UNKNOWN*, *SYSTEM_CONFIGURATION*,\
    *UNSPECIFIED#UNSPECIFIED*', flagbackup=False, scan=True,\
    baseband=True, clip=True, autocorr=True,\
    hm_tbuff='1.5int', template=True,\
    filetemplate='myflags.txt', online=True, tbuff=0.0)

The default for filetemplate is 'mySDM.flagtemplate.txt'. Therefore, flag files with this name would not require filetemplate to be specified.
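A flagging template can also be written programmatically. The sketch below is a hypothetical helper that writes the two example commands above to 'myflags.txt':

```python
# Hypothetical sketch: write flagdata template commands to a file
# that hifv_flagdata can pick up via its filetemplate parameter.
def write_flag_template(path, commands):
    with open(path, "w") as f:
        for cmd in commands:
            f.write(cmd + "\n")

write_flag_template("myflags.txt", [
    "mode='manual' scan='1~3'            # flag scans 1 to 3",
    "mode='clip' clipminmax=[0,10]       # flag amplitudes outside 0-10 Jy",
])
```

Generating the file from a list makes it easy to keep the flags under version control alongside 'casa_pipescript.py'.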

### Interactive Flagging

For some data it is not straightforward to derive flagging commands that can be placed in a template. In that case, one may use the interactive plotms or viewer/msview CASA GUIs to flag data directly in the MS. Re-execution of the pipeline (via the casa_pipescript.py file) will be possible, but a few steps require attention:

• Since Hanning smoothing was likely performed in the initial pipeline run, one should turn off Hanning smoothing for all re-executions. Otherwise the frequency resolution degrades more and more and flags will be extended to neighboring channels by smoothing an already flagged MS. To do so, comment out hifv_hanning in the 'casa_pipescript.py' file (see the section on "Spectral Line" for a similar example).
• By default, the pipeline reverts all flags to the original state saved in the 'mySDM.ms.flagversions' directory, ignoring all modifications made afterwards. To avoid resetting the flags, manually flag the MS and place it in a new directory. Do NOT copy over the related 'mySDM.ms.flagversions' directory. Then run the pipeline with the flagged MS as input to hifv_importdata via the modified 'casa_pipescript.py' file (it is also possible to run hifv.hifv(['MeasurementSet']), but remember that this will repeat Hanning smoothing). With this procedure the pipeline cannot recover the original flags and will proceed with the manual, interactive flags that were applied directly to the MS.

### Flag the Final Gain Table

Sometimes the gain solutions of the flux calibrator are not good for all antennas. It is possible to flag the solutions directly in the 'fluxgaincal.g' calibration table. The CASA tasks plotcal or plotms can be used for the flagging. The pipeline can use the flagged table, say 'fluxgaincal_edited.g' for the flux calibration by modifying the hifv_fluxboot call in the 'casa_pipescript.py' as follows:

hifv_fluxboot(caltable='fluxgaincal_edited.g')

## Avoiding a Specific Reference Antenna

In some cases, the pipeline may choose a reference antenna that is not ideal for calibration purposes. The pipeline algorithm picks an antenna (actually a ranked list of antennas) that is not heavily flagged and that is close to the center of the array. Other problems, e.g., phase jumps or bad deformatters that are not caught in the hifv_flagbaddef stage, may still be present on the reference antenna and will then be reflected in all solutions. When this happens, it is advisable to tell the pipeline not to use a specific antenna as a reference antenna. This can be achieved with the refantignore parameter that is available in some hifv tasks. For example, to prevent the pipeline from using antenna 'ea28' as a reference antenna, casa_pipescript.py can be modified as follows (the changes are the refantignore arguments in hifv_testBPdcals, hifv_semiFinalBPdcals, hifv_solint, hifv_fluxboot, and hifv_finalcals):

__rethrow_casa_exceptions = True
context = h_init()
context.set_state('ProjectSummary', 'observatory', 'Karl G. Jansky Very Large Array')
context.set_state('ProjectSummary', 'telescope', 'EVLA')
try:
    hifv_importdata(ocorr_mode='co', nocopy=False, vis=['mySDM'], \
        overwrite=False)
    hifv_hanning(pipelinemode="automatic")
    hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*, \
        *UNKNOWN*, *SYSTEM_CONFIGURATION*, *UNSPECIFIED#UNSPECIFIED*', \
        flagbackup=False, scan=True, baseband=True, clip=True, autocorr=True, \
        hm_tbuff='1.5int', template=True, online=True, tbuff=0.0)
    hifv_vlasetjy(fluxdensity=-1, scalebychan=True, reffreq='1GHz', spix=0)
    hifv_priorcals(tecmaps=False)
    hifv_testBPdcals(weakbp=False, refantignore='ea28')
    hifv_checkflag(pipelinemode="automatic")
    hifv_semiFinalBPdcals(weakbp=False, refantignore='ea28')
    hifv_checkflag(checkflagmode='semi')
    hifv_semiFinalBPdcals(weakbp=False, refantignore='ea28')
    hifv_solint(pipelinemode="automatic", refantignore='ea28')
    hifv_fluxboot(pipelinemode="automatic", refantignore='ea28')
    hifv_finalcals(weakbp=False, refantignore='ea28')
    hifv_applycals(flagdetailedsum=True, flagbackup=True, calwt=[True], \
        flagsum=True, gainmap=False)
    hifv_targetflag(intents='*CALIBRATE*,*TARGET*')
    hifv_statwt(pipelinemode="automatic")
    hifv_plotsummary(pipelinemode="automatic")
    hif_makeimlist(nchan=-1, calmaxpix=300, intent='PHASE,BANDPASS')
    hif_makeimages(tlimit=2.0, hm_negativethreshold=-999.0, \
        maxncleans=1, hm_growiterations=-999, cleancontranges=False, \
        noise='1.0Jy', hm_minbeamfrac=-999.0, target_list={}, robust=-999.0, \
        parallel='automatic', weighting='briggs', hm_noisethreshold=-999.0, \
        hm_lownoisethreshold=-999.0, npixels=0, hm_sidelobethreshold=-999.0)
finally:
    h_save()

## Modifying the Pipeline for non-Stokes I Continuum Data

The pipeline is developed for the Stokes I continuum case. But it is possible to modify the 'casa_pipescript.py' and run the pipeline for other use cases:

### Spectral Line

The pipeline is not optimized for calibrating spectral line data. Some pipeline steps of the regular VLA pipeline may be detrimental for spectral line setups and need to be modified or turned off. Calibrators also require enough signal-to-noise to reliably derive bandpasses, gains, phases, etc. for the typically narrower spectral line spectral windows (spws) and channels. The pipeline will also flag edge channels for each spw. If the spectral line happens to be located on an spw edge, additional modifications to the script may be necessary.

#### The 'cont.dat' File

The easiest way to run the pipeline on data with spectral lines is to prepare a file, 'cont.dat', that specifies the frequency ranges containing only continuum (no spectral lines). This will protect some procedures of the pipeline from being applied to spectral lines (see below).

The 'cont.dat' file has the following format:

Field: FIELDNAME1

SpectralWindow: SPWID1
freqrange1 LSRK (in GHz, LSRK)
freqrange2 LSRK (in GHz, LSRK)

SpectralWindow: SPWID2
freqrange1 LSRK (in GHz, LSRK)
freqrange2 LSRK (in GHz, LSRK)
...

Field: FIELDNAME2

SpectralWindow: SPWID1
freqrange1 LSRK (in GHz, LSRK)
freqrange2 LSRK (in GHz, LSRK)
...



where FIELDNAMEx is the field name for each source. This provides the flexibility to define different continuum ranges for different targets. SPWIDn stands for the spw ID. Field names and spw IDs can be found in the listobs output. An example with fields M82 and NGC3077 may look like:

Field: M82

SpectralWindow: 19
37.104~38.29GHz LSRK
38.30~39.104GHz LSRK

SpectralWindow: 37
31.360~32.123GHz LSRK
32.130~33.360GHz LSRK

Field: NGC3077

SpectralWindow: 37
31.360~32.123GHz LSRK
32.130~33.360GHz LSRK


For the field M82 this file defines spw 19 frequency ranges 37.104 – 38.290 GHz and 38.300 – 39.104 GHz as containing only continuum. This would be a setup where the line is found in the 10 MHz between 38.290 – 38.300 GHz. It also treats frequencies below 37.104 GHz and above 39.104 GHz the same as spectral lines. This can be used, for example, to exclude edge channels from being part of the autoflagging and weight calculations. Analogously, a spectral line falling in the 7 MHz range between 32.123 – 32.130GHz (between the two continuum ranges in spw 37) will be protected by the specification for spw 37 for both the M82 and NGC3077 fields.

'cont.dat' should be placed in the root directory where the SDM-BDF resides and where the pipeline is executed. The pipeline will automatically pick up the file; there is no need to explicitly provide the file name in 'casa_pipescript.py'.
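The 'cont.dat' layout can also be generated programmatically. The sketch below is a hypothetical helper; the field names, spw IDs, and frequency ranges are placeholders taken from the example above:

```python
# Hypothetical sketch: write a cont.dat file from a mapping of
# field name -> {spw ID: [continuum frequency ranges in GHz, LSRK]}.
def write_cont_dat(path, fields):
    with open(path, "w") as f:
        for field, spws in fields.items():
            f.write(f"Field: {field}\n\n")
            for spw, ranges in spws.items():
                f.write(f"SpectralWindow: {spw}\n")
                for r in ranges:
                    f.write(f"{r} LSRK\n")
                f.write("\n")

write_cont_dat("cont.dat", {
    "M82": {19: ["37.104~38.29GHz", "38.30~39.104GHz"]},
})
```

Generating the file this way helps avoid formatting mistakes when many fields and spws are involved.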

#### Pipeline Modifications for Spectral Line Data

In addition to creating a 'cont.dat' file, we advise modifying the following tasks in 'casa_pipescript.py' for spectral line data:

• hifv_hanning: Hanning smoothing lessens the Gibbs ringing from strong spectral features, usually strong, narrow RFI, or very strong spectral lines such as masers. Hanning smoothing, however, reduces the spectral resolution. Therefore, depending on the data and the science case, one may or may not choose to apply Hanning smoothing. Disable the application of Hanning-smoothing in the pipeline if such smoothing is not needed or desired by commenting out hifv_hanning or simply removing the step from 'casa_pipescript.py'.
• hifv_flagdata: The pipeline, by default, flags 5% of the data on each spw edge as well as the first and last 10 channels of each baseband. In some cases, for example spectral surveys, lines may fall right on such frequencies. The edgespw, fracspw, and baseband parameters in hifv_flagdata can be adjusted to flag different percentages of the edges.
• hifv_targetflag: Flagging prior to this step was only applied to the calibrator scans, which should be line-free. But hifv_targetflag attempts to auto-flag all fields including target fields. The rflag mode in CASA's flagdata is designed to remove outliers that deviate from a mean level. Strong spectral lines can fulfill this criterion and be flagged. The 'cont.dat' file will ensure that rflag will only be applied to the continuum frequency ranges specified in it. Alternatively, hifv_targetflag can be turned off completely, or, by specifying intents='*CALIBRATE*' (and thereby omitting '*TARGET*'), one can restrict the flagging to the calibrator data only, leaving the target data (scans with the TARGET intent) untouched. In either case, we recommend manual flagging for the spectral line frequency ranges after the pipeline has finished processing.
• hifv_statwt: A similar argument applies to the hifv_statwt step, where the visibilities are weighted by the square of the inverse of their RMS noise. Strong spectral lines will increase the RMS and will therefore be down-weighted. The cont.dat file will restrict statwt to only use the continuum frequency ranges for the rms and weight calculations and thus prevent the inclusion of spectral features. Alternatively, hifv_statwt can be excluded from the pipeline altogether and the CASA task statwt can be executed manually after the pipeline has finished, where statwt's parameter fitspw should be set to continuum channels only.

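The effect of Hanning smoothing on a narrow spectral feature can be illustrated with a short sketch (plain Python, not pipeline code). CASA's Hanning smoothing convolves each spectrum with the (1/4, 1/2, 1/4) kernel, damping ringing at the cost of spectral resolution; the simplified edge handling below is an assumption of the sketch:

```python
# Illustrative sketch of per-channel Hanning smoothing: each output
# channel is a weighted average of itself and its two neighbours with
# weights (0.25, 0.5, 0.25). Edge channels are left unchanged here,
# which is a simplification of this sketch, not pipeline behavior.
def hanning_smooth(spectrum):
    """Smooth a list of channel values with the (1/4, 1/2, 1/4) kernel."""
    out = list(spectrum)
    for i in range(1, len(spectrum) - 1):
        out[i] = 0.25 * spectrum[i - 1] + 0.5 * spectrum[i] + 0.25 * spectrum[i + 1]
    return out

# A single-channel spike (e.g., narrow RFI) is spread over three channels:
print(hanning_smooth([0.0, 0.0, 1.0, 0.0, 0.0]))  # [0.0, 0.25, 0.5, 0.25, 0.0]
```

This also shows why one might skip the step for science targets: a genuine one-channel spectral line would be diluted in exactly the same way.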
Given the above, we recommend carefully preparing 'cont.dat' and modifying 'casa_pipescript.py' as follows. The SDM-BDF name (here: mySDM) and possibly other parameters will have to be adapted for each run:

__rethrow_casa_exceptions = True
context = h_init()
context.set_state('ProjectSummary', 'observatory', 'Karl G. Jansky Very Large Array')
context.set_state('ProjectSummary', 'telescope', 'EVLA')
try:
    hifv_importdata(ocorr_mode='co', nocopy=False, vis=['mySDM'], \
        overwrite=False)
    # Hanning smoothing is turned off in the following step.
    # In the case of extreme RFI, Hanning smoothing, however,
    # may still be required.
    # hifv_hanning(pipelinemode="automatic")
    hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*, \
        *UNKNOWN*, *SYSTEM_CONFIGURATION*, *UNSPECIFIED#UNSPECIFIED*', \
        flagbackup=False, scan=True, baseband=True, clip=True, autocorr=True, \
        hm_tbuff='1.5int', template=True, online=True, tbuff=0.0, fracspw=0.05, \
        edgespw=True)
    hifv_vlasetjy(fluxdensity=-1, scalebychan=True, reffreq='1GHz', spix=0)
    hifv_priorcals(tecmaps=False)
    hifv_testBPdcals(weakbp=False)
    hifv_checkflag(pipelinemode="automatic")
    hifv_semiFinalBPdcals(weakbp=False)
    hifv_checkflag(checkflagmode='semi')
    hifv_semiFinalBPdcals(weakbp=False)
    hifv_solint(pipelinemode="automatic")
    hifv_fluxboot(pipelinemode="automatic")
    hifv_finalcals(weakbp=False)
    hifv_applycals(flagdetailedsum=True, flagbackup=True, calwt=[True], \
        flagsum=True, gainmap=False)
    # Keep the following two steps in the script if cont.dat exists.
    # Otherwise we recommend commenting out the next two tasks,
    # or at least removing '*TARGET*' from the hifv_targetflag call.
    hifv_targetflag(intents='*CALIBRATE*,*TARGET*')
    hifv_statwt(pipelinemode="automatic")
    hifv_plotsummary(pipelinemode="automatic")
    hif_makeimlist(nchan=-1, calmaxpix=300, intent='PHASE,BANDPASS')
    hif_makeimages(tlimit=2.0, hm_negativethreshold=-999.0, \
        maxncleans=1, hm_growiterations=-999, cleancontranges=False, \
        noise='1.0Jy', hm_minbeamfrac=-999.0, target_list={}, robust=-999.0, \
        parallel='automatic', weighting='briggs', hm_noisethreshold=-999.0, \
        hm_lownoisethreshold=-999.0, npixels=0, hm_sidelobethreshold=-999.0)
finally:
    h_save()


If a spectral line happens to fall close to edge channels, one can turn off edge channel flagging by setting the parameter edgespw=False in hifv_flagdata (if the line falls on the edge of a baseband, one may also consider setting baseband=False to avoid flagging the 10 edge channels of each baseband):

hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*,\
              *UNKNOWN*, *SYSTEM_CONFIGURATION*,\
              *UNSPECIFIED#UNSPECIFIED*', flagbackup=False, scan=True,\
              baseband=True, clip=True, autocorr=True,\
              hm_tbuff='1.5int', template=True, online=True,\
              edgespw=False)


or one can choose to reduce the fraction of edge channels being flagged. In the example below, we reduce the fraction to 1% on each end of each spw:

hifv_flagdata(intents='*POINTING*,*FOCUS*,*ATMOSPHERE*,*SIDEBAND_RATIO*,\
              *UNKNOWN*, *SYSTEM_CONFIGURATION*,\
              *UNSPECIFIED#UNSPECIFIED*', flagbackup=False, scan=True,\
              baseband=True, clip=True, autocorr=True,\
              hm_tbuff='1.5int', template=True, online=True,\
              edgespw=True, fracspw=0.01)

Note, however, that including edge channels in the calibration may introduce uncertainties, given that spw edges have low signal-to-noise and may contain correlator artifacts. Inspect the data to ensure the spw edges are usable.
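To gauge how many channels a given fracspw value removes, consider the small sketch below (plain Python; whether the pipeline truncates or rounds the channel count is an assumption of this sketch, not verified pipeline behavior):

```python
# Illustrative arithmetic only: channels flagged at each spw edge
# for a given edge fraction. Truncation toward zero is an assumption.
def edge_channels(nchan, fracspw):
    """Number of channels flagged at each spw edge for fraction `fracspw`."""
    return int(nchan * fracspw)

# Default 5% vs. the reduced 1% on a 512-channel spw:
print(edge_channels(512, 0.05), edge_channels(512, 0.01))  # 25 5
```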

Once all modifications are made, run the pipeline as:

# In CASA
execfile('casa_pipescript.py')

After the calibration has been obtained, the run can be followed up with CASA's flagdata mode='rflag' and statwt commands if required (e.g., if 'cont.dat' was not used).
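Such a follow-up could look like the sketch below (run inside CASA); note that the field and spw/channel selections here are placeholders that must be adapted to the actual data:

```
# In CASA; field and spw/channel selections below are placeholders
flagdata(vis='mySDM.ms', mode='rflag', field='2', spw='0:10~50',
         datacolumn='corrected', flagbackup=True)
statwt(vis='mySDM.ms', fitspw='0:10~50', datacolumn='corrected')
```

Restricting spw to line-free channel ranges in both commands plays the same role that 'cont.dat' plays inside the pipeline.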

### Polarization Calibration

We are currently developing polarization calibration heuristics for the VLA pipeline. At this stage, however, the VLA pipeline does not derive or apply polarization calibration. Users may add polarization calibration steps after the pipeline has run, using the pipeline calibration tables for pre-calibration as required.

Polarization calibration steps are explained in the respective section of the 3C391 CASA guide (in particular, the D-term and cross-hand delay calibrations will be required). We also refer to the corresponding chapter in CASAdocs.

### Mixed setups

If data were obtained in mixed correlator modes, the different parts (e.g., spectral line and continuum, or different frequency bands) should be separated first; the individual parts can then be calibrated via the default pipeline or by executing modified 'casa_pipescript.py' scripts. To start with, we recommend importing the SDM into an MS while applying the online flags at the same time. The corresponding CASA importasdm command would look like:

# In CASA
importasdm(asdm='mySDM', vis='mySDM.ms', ocorr_mode='co',\
applyflags=True, savecmds=True, tbuff=1.5,\
outfile='mySDM.flagonline.txt')


Note that we apply the online flags via applyflags=True but still save the flag commands to an outfile in case one would like to inspect them. We set tbuff=1.5 to buffer the flags by 1.5 times the integration time (as the pipeline would do in hifv_flagdata).

After that step, use the CASA command split to separate the individual parts of the data that are to be processed separately. Modify 'casa_pipescript.py' and use each new, separate MS as input in hifv_importdata.
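As a hypothetical example, if the continuum spws were 0 through 7 and the spectral line spws 8 through 15 (placeholder values for illustration), the separation could look like:

```
# In CASA; spw selections are placeholders for this example
split(vis='mySDM.ms', outputvis='mySDM_cont.ms', spw='0~7', datacolumn='data')
split(vis='mySDM.ms', outputvis='mySDM_line.ms', spw='8~15', datacolumn='data')
```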

### Weak Calibrators

The VLA pipeline requires a minimum signal-to-noise of ~3 for each spw (each channel for the bandpass) and calibrator scan. If this criterion is not met, the pipeline will likely fail. We are currently implementing additional heuristics to deal with weak calibration sources. This code will be available in upcoming versions of the VLA pipeline.

### Incorrect scan intents

As mentioned in the "Pipeline Requirements", scan intents tell the pipeline which scans and fields are used for flux, delay, bandpass, gain, and phase calibration. Scan intents should be set up correctly in the OPT before the scheduling block is submitted for observation.

If incorrect scan intents are identified after the observations, one can still update the scan intents in the SDM-BDF, although some care is required.

The SDM-BDF metadata is structured as XML files that can be edited. We provide a small Scan Intent Editing Perl Script for this purpose. The script is self-explanatory and can add and delete scan intents for any scan.

Alternatively, the SDM can be edited manually. Great care, however, should be taken not to corrupt the structure of the SDM-BDF XML; we therefore advise using the Perl script rather than editing by hand.

Should you nevertheless need to edit the XML manually, cd into the SDM and edit the file 'Scan.xml'. We strongly recommend creating a backup copy of 'Scan.xml' first, in case the edits corrupt the metadata.

'Scan.xml' is divided into individual <row></row> blocks that identify each scan.

An example of a scan with a single scan intent (here: OBSERVE_TARGET) may look like:

<row>
    <scanNumber>1</scanNumber>
    <startTime>4870732142800000000</startTime>
    <endTime>4870732322300000256</endTime>
    <numIntent>1</numIntent>
    <numSubscan>1</numSubscan>
    <scanIntent>1 1 OBSERVE_TARGET</scanIntent>
    <calDataType>1 1 NONE</calDataType>
    <calibrationOnLine>1 1 false</calibrationOnLine>
    <sourceName>J1041+0610</sourceName>
    <flagRow>false</flagRow>
    <execBlockId>ExecBlock_0</execBlockId>
</row>


We can now change the scan intent, e.g., from OBSERVE_TARGET to CALIBRATE_AMPLI by simply updating the <scanIntent> tag:

<row>
    <scanNumber>1</scanNumber>
    <startTime>4870732142800000000</startTime>
    <endTime>4870732322300000256</endTime>
    <numIntent>1</numIntent>
    <numSubscan>1</numSubscan>
    <scanIntent>1 1 CALIBRATE_AMPLI</scanIntent>
    <calDataType>1 1 NONE</calDataType>
    <calibrationOnLine>1 1 false</calibrationOnLine>
    <sourceName>J1041+0610</sourceName>
    <flagRow>false</flagRow>
    <execBlockId>ExecBlock_0</execBlockId>
</row>



If we want to add a second intent, we will have to make additional changes. Let's add CALIBRATE_PHASE:

<row>
    <scanNumber>1</scanNumber>
    <startTime>4870732142800000000</startTime>
    <endTime>4870732322300000256</endTime>
    <numIntent>2</numIntent>
    <numSubscan>1</numSubscan>
    <scanIntent>1 2 CALIBRATE_AMPLI CALIBRATE_PHASE</scanIntent>
    <calDataType>1 2 NONE NONE</calDataType>
    <calibrationOnLine>1 2 false false</calibrationOnLine>
    <sourceName>J1041+0610</sourceName>
    <flagRow>false</flagRow>
    <execBlockId>ExecBlock_0</execBlockId>
</row>


Inside <scanIntent> we added the second intent, but also increased the second number from 1 to 2. In addition, we specified <numIntent> to be 2, and added a second entry to <calDataType> and <calibrationOnLine>. For the latter two, we also updated the second number from 1 to 2.
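The bookkeeping described above can also be scripted. The sketch below (plain Python using the standard library's xml.etree, not the NRAO Perl script) adds an intent to a single <row> while keeping the counts and the per-intent calDataType/calibrationOnLine entries consistent; always work on a backup copy of 'Scan.xml':

```python
# Illustrative sketch: add a scan intent to a Scan.xml <row>.
# Not the NRAO Perl script; back up Scan.xml before editing anything.
import xml.etree.ElementTree as ET

def add_intent(row, new_intent):
    """Append `new_intent` to a <row>, updating the bookkeeping fields."""
    parts = row.find('scanIntent').text.split()
    dims, count, names = parts[0], int(parts[1]), parts[2:]
    count += 1
    names.append(new_intent)
    row.find('scanIntent').text = '%s %d %s' % (dims, count, ' '.join(names))
    row.find('numIntent').text = str(count)
    # These two tags need one entry per intent as well:
    for tag, filler in (('calDataType', 'NONE'), ('calibrationOnLine', 'false')):
        p = row.find(tag).text.split()
        row.find(tag).text = '%s %d %s' % (p[0], count, ' '.join(p[2:] + [filler]))

row = ET.fromstring(
    '<row>'
    '<scanNumber>1</scanNumber>'
    '<numIntent>1</numIntent>'
    '<scanIntent>1 1 CALIBRATE_AMPLI</scanIntent>'
    '<calDataType>1 1 NONE</calDataType>'
    '<calibrationOnLine>1 1 false</calibrationOnLine>'
    '</row>')
add_intent(row, 'CALIBRATE_PHASE')
print(row.find('scanIntent').text)  # 1 2 CALIBRATE_AMPLI CALIBRATE_PHASE
```

In a real 'Scan.xml', the same function would be applied to the <row> of the scan in question after parsing the whole file with ET.parse, then written back with tree.write.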

Analogously, if we now add a third intent, CALIBRATE_BANDPASS, to the same scan, the <row> will look like:

<row>
    <scanNumber>1</scanNumber>
    <startTime>4870732142800000000</startTime>
    <endTime>4870732322300000256</endTime>
    <numIntent>3</numIntent>
    <numSubscan>1</numSubscan>
    <scanIntent>1 3 CALIBRATE_AMPLI CALIBRATE_PHASE CALIBRATE_BANDPASS</scanIntent>
    <calDataType>1 3 NONE NONE NONE</calDataType>
    <calibrationOnLine>1 3 false false false</calibrationOnLine>
    <sourceName>J1041+0610</sourceName>
    <flagRow>false</flagRow>
    <execBlockId>ExecBlock_0</execBlockId>
</row>


Check with CASA's listobs on the imported MS (after importing the data via importasdm or importevla) that the scan intents are now displayed as desired. Revert to the original 'Scan.xml' if the above was not successful and contact the NRAO helpdesk for advice.

### Allowed Intents

CALIBRATE_AMPLI : Amplitude calibration scan
CALIBRATE_PHASE : Phase calibration scan
CALIBRATE_BANDPASS : Bandpass calibration scan
CALIBRATE_DELAY : Delay calibration scan
CALIBRATE_FLUX : Flux measurement scan
CALIBRATE_POINTING : Pointing calibration scan
CALIBRATE_POLARIZATION : Polarization calibration scan
CALIBRATE_POL_LEAKAGE : Polarization leakage (D-term) calibration scan
CALIBRATE_POL_ANGLE : Polarization angle calibration scan
OBSERVE_TARGET : Target source scan
CALIBRATE_ATMOSPHERE : Atmosphere calibration scan
CALIBRATE_FOCUS : Focus calibration scan; Z coordinate to be derived
CALIBRATE_FOCUS_X : Focus calibration scan; X focus coordinate to be derived
CALIBRATE_FOCUS_Y : Focus calibration scan; Y focus coordinate to be derived
CALIBRATE_SIDEBAND_RATIO : Measure relative gains of sidebands
CALIBRATE_WVR : Data from the water vapor radiometers (and correlation data) are used to derive their calibration parameters
DO_SKYDIP : Skydip calibration scan
MAP_ANTENNA_SURFACE : Holography calibration scan
MAP_PRIMARY_BEAM : Data on a celestial calibration source are used to derive a map of the primary beam
TEST : Used for development
UNSPECIFIED : Unspecified scan intent
CALIBRATE_ANTENNA_POSITION : Requested by EVLA
CALIBRATE_ANTENNA_PHASE : Requested by EVLA
MEASURE_RFI : Requested by EVLA
CALIBRATE_ANTENNA_POINTING_MODEL : Requested by EVLA
SYSTEM_CONFIGURATION : Requested by EVLA
CALIBRATE_APPPHASE_ACTIVE : Calculate and apply phasing solutions; applicable at ALMA
CALIBRATE_APPPHASE_PASSIVE : Apply previously obtained phasing solutions; applicable at ALMA
OBSERVE_CHECK_SOURCE : Check source scan