Calibration

This section discusses general calibration and calibration strategies when preparing for VLA observations. Specific calibration during post-processing that does not depend on data taken during the observations—such as improved antenna positions, or opacity and ionospheric corrections—is not discussed here. Note, however, that there is specific guidance with respect to the 8/3-Bit Attenuation and Setup Scans that are required in observing scripts (scheduling blocks).

If you are looking for the listing of VLA calibrators, go to the VLA Calibrator List.  This list may also be searched in the Source Catalog Tool (SCT, aka Sources). The OPT manual has a section on how to search in the SCT.

 

Calibration Basics

What is calibration?

The main goal of calibration is to correct for effects of the instrument and of local, temporary conditions that may interfere with the scientific outcome of a measurement (an observation), so that the measurement can be compared to other measurements (at other times, with other instruments, at other frequencies, etc.) and to theoretical predictions.

Calibration starts in the device design stage and ends in the final data presentation. Calibration can refer not only to calibration of the data, but also to the instrument as a whole or its separate components. The amount of calibration typically depends on the observing program and the science goal. Some calibrations need to be done once as their solution is fairly constant over time, while other calibrations need to be repeated regularly to capture changing properties over time. For radio astronomy, and in particular interferometry, the following general calibration steps can be identified: calibration of the instrument by observatory staff, calibration of the current observing conditions by the observer, and calibration of scientific results by the data analyzer. Some common calibration examples:

  • Calibration of the antenna (receiver frequencies, receiver system temperatures, optics), antenna positions, timing, and correlator visibilities is done by observatory staff.
  • Calibration of instrumental delay, instrumental polarization, spectral bandpass response, absolute flux density scale, and other possible properties—assumed to be constant during the observation—should be taken by the observer, typically once or twice during the entire observation for each observed frequency and correlator configuration.
  • Calibration of antenna pointing, delay, attenuator and requantizer settings, and other possible properties assumed to be only slowly varying during the observation should be performed by the observer, typically once every hour or so during the observation, or, for example, when switching frequency bands.
  • Calibration of antenna gains, atmospheric phase fluctuations, and other possible properties expected to vary more rapidly with observing conditions and geometry during the observation should be performed more frequently than the time scale over which the property changes.
  • Calibration of the position of a source with respect to another source, calibration of a frequency to a line-of-sight velocity, calibration of a polarization angle to a reference angle, calibration of the flux density scale of a single source in one observation to another observation of the same source, etc.

Calibration data, evaluated either in the online system or in post-processing, may be taken before, during, and after the observation, and the resulting calibration may likewise be applied before, during, and after the observation. Note that most calibration applied before and during the observation, e.g., antenna pointing, takes effect during the observation; such calibration must be correct at acquisition, as it cannot be adjusted afterwards.

It should be understood that calibration is an important part of the observation and must be thoughtfully included in the observation preparation and time request in the proposal. The key point of planning and including calibration is that if the observations are not properly calibrated, the science goal will not be achieved.

When to calibrate?

Calibration should be performed, at the very least, more frequently than the time scale over which the property changes and before that change becomes too large to be compensated for. Nearly constant properties should be calibrated at the start of an observation, when changes from a previous observation by another observer are to be expected. If the constant property can be applied in post-processing, this calibration can also be taken at the end of the observation. Time or geometry dependent changes should be monitored at regular intervals so that the change can unambiguously be interpolated over the observation. Planned, abrupt changes in the observation, such as a frequency change or observing a different part of the sky, would trigger calibration. Last, but not least, a very specific science goal may need a calibration that is not among the standard calibrations described here. The calibration strategy is the responsibility of the observer, but NRAO staff is available for individual advice through the NRAO Helpdesk.

Typical calibration intervals are once per observation for flux density, bandpass/delay, and polarization angle calibration (per frequency setting and per correlator configuration). It is prudent, however, to break up the flux density calibration into at least two separate scans, as this is the only calibration for which there is no good alternative if it happens to be corrupted.

Typical intervals for complex gain (amplitude and phase) calibration depend on the weather conditions and the baseline length. On longer baselines (i.e., at longer uv-distances, and therefore of particular importance for high frequency observing), the phase on the interferometer changes more rapidly and requires more frequent calibration than at the lower observing frequencies. An exception at low frequencies is when the Sun affects the ionosphere: rapid changes can be expected near sunset and sunrise. Otherwise, the largest effect on phase change comes from the troposphere. Gain calibration relies on the assumption that the calibration toward the part of the sky in which the calibrator source is observed can be interpolated over time and viewing angle, and represents the same atmospheric conditions as toward the target source. The further the calibrator from the target, and the longer the intervals between calibration measurements, the less strictly this assumption will hold. It can happen that calibration consumes more than half of the allocated observing time in order to achieve the scientific goal; fortunately, this is typically only necessary at the highest frequencies, in the largest array configurations, and in bad weather conditions. With the introduction of dynamic scheduling at the VLA, this latter variable has been largely eliminated and gain calibration has become easier to plan (see below).

Attenuator settings are typically calibrated at the beginning of an observation. Requantizer gains need to be redetermined after a change of tuning, including when returning to the original observational setup.

Calibration intervals for antenna pointing are largely dependent on the geometry of the observation. As a rule of thumb, after tracking for about an hour, the pointing direction on the sky will have changed significantly—on the order of 20°—from where the last pointing solution was determined, warranting a new pointing solution. During the day, because temperature changes affect the antenna optics, pointing should be repeated roughly every 30 to 40 minutes. Also, when slewing a large distance from the target to a flux density or a bandpass calibrator (over 20° in AZ and/or EL), the antenna pointing will need an updated solution.

How to calibrate?

To ensure that the instrument is delivering the expected measure, the easiest method of calibration is to insert a known signal at the input and analyze the resulting signal at the output. The calibration measurement will yield, after some massaging, the corrections that need to be applied to the output signal to obtain the true representation of the input signal. The uncorrected output signal is also known as the instrumental response for the given input signal. As determining the response directly is not always possible, alternatives may be available. These alternatives require, however, certain trade-offs and choices to be made by the observer depending on the science goals. Observatory staff can advise; a first attempt at such guidance is given below. For very specific questions, please ask the NRAO Helpdesk.

A typical calibration signal for the instrumental response of an individual antenna signal path is a pulse (the firing of a noise diode in the receiver), and standard instrumental calibration procedures are available. A typical calibration of the total observational response of an interferometer is the observation of a point source: a simple, single, isolated object with known constant flux density, polarization as a function of frequency, and absolute sky position. As such objects are very rare for a high-angular-resolution, high-sensitivity interferometer like the VLA, near-point-like sources that dominate the response (though not necessarily constant in flux density) are the common trade-off for any radio interferometer. Many of these calibrator sources are given in the VLA Calibrator List.

Then, depending on the signal property to calibrate, one chooses an adequate calibrator source for that property and inserts it at the appropriate place in the observing schedule. Such a calibration is valid at a certain point in time under certain conditions, and needs to be redone when it starts to become invalid. Performing a calibration on the target field—the field where measuring the properties of interest is the goal of the observation—is generally not a good idea. Instead it is assumed, as a reasonable approximation, that a calibration performed close in time and close on the sky to the target field also holds for the target field, and thus can be interpolated over the target observation. NRAO tries to provide guidance on when these assumptions hold for different sets of observations, but the observer should always be aware that certain conditions for successful interpolation of calibration may not apply.

Specific suggestions for different VLA calibrations and how to schedule them are listed elsewhere in the Guide to Observing (High Frequency Strategy, Low Frequency Strategy, and Very Low Frequency Strategy), and the OPT Manual. In this section the focus is on general, non-frequency specific, calibration during the observation by the observer. When making a schedule, consider the hints given here and elsewhere in the documentation. Do not hesitate to contact the NRAO Helpdesk for assistance or further information.

 

VLA Calibration by the Observer

When seen from the observer's perspective, for any standard observation in each scheduling block, the observer is expected to include calibration of the absolute (flux density) scale, calibration of the signals at different frequencies relative to each other over the observing bandwidth (bandpass), and calibration of the time dependent effects (complex gain, i.e., phase and amplitude) of changing conditions in the atmosphere and instrument. More sophisticated or challenging experiments may include more specific calibrations as described below.

 

Flux Density Scale Calibration

The correlator, where signals at specific frequencies from the antennas are combined into visibilities, only processes what it is fed from the electronics system in terms of relative signal strength and relative phase. The correlator products therefore need to be rescaled to represent the flux densities actually measured on the sky. At the VLA this means observing a calibrator with a postulated (or assumed) known flux density at these frequencies along with the other observations in the scheduling block. In post-processing, the visibilities of this calibrator are then rescaled to its known flux density at this frequency. Other visibilities in the same observation, using the same setup, can simply be matched using the relative scale and the absolute flux density of the calibrator.
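The rescaling described here is simple arithmetic. As a hedged illustration (function name and all numbers are hypothetical), a minimal sketch of the bootstrapping step:

```python
# Illustrative sketch of flux density scale bootstrapping (all values hypothetical).
# The primary calibrator's known flux density fixes the scale from raw correlator
# units to Jy; the same factor is then applied to the secondary (gain) calibrator.
def bootstrap_flux(primary_known_jy, primary_raw_amp, secondary_raw_amp):
    scale = primary_known_jy / primary_raw_amp  # Jy per raw correlator unit
    return secondary_raw_amp * scale

# e.g., a primary calibrator known to be 7.5 Jy measures 2.5 raw units, and the
# gain calibrator measures 1.5 raw units -> bootstrapped flux density of 4.5 Jy.
print(bootstrap_flux(7.5, 2.5, 1.5))  # 4.5
```

In practice AIPS and CASA perform this step (with source models and per-antenna gains) in their SETJY and gain-transfer tasks; the sketch only shows the underlying scale transfer.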

The flux density scale adopted for the VLA between 1 and 50 GHz is based on the Perley and Butler (2017) standard. This is very close to the traditional Baars et al. scale (1977 A&A 61, 99) between 1 and 15 GHz and includes the Scaife and Heald scale (2012, MNRAS 423, L30) for frequencies below 1 GHz. Distributed versions of AIPS and CASA apply the most recent scale in their SETJY tasks.

Because of source variability, it is impossible to compile an accurate and up-to-date listing of flux densities for all the VLA calibrators. The values given in the VLA Calibrator List, therefore, are only approximate and valid for the epoch at which they were measured. We strongly recommend, and some science objectives require, bootstrapping the flux density of a calibrator by comparing the calibrator observations with one or several observations of 3C286, 3C48, 3C147, or 3C138**. Only in compact configurations and at low frequencies, i.e., typically at L- and P-band in C and D configuration, may 3C295 or 3C196 also be used.

Both AIPS and CASA use model images for the standard flux density calibration sources (see their respective SETJY documentation) to account for their structures, which are frequency and array configuration dependent. Alternatively, u,v restrictions, or limitations on the number of antennas, can be used. When using models, the bootstrap accuracy should be within a couple to a few percent. However, at frequencies of about 15 GHz and above, there are appreciable changes in the antenna gains and atmospheric opacity as a function of elevation. By calibrating the target source with a nearby calibrator, much of this variation can be removed. If the primary flux density calibrator (e.g., 3C286) is observed at an elevation different from that of the secondary gain calibrator, however, the flux density bootstrapping will carry a considerable systematic error. At the higher frequencies, a discrepancy between the actual and measured flux density scales of the order of 20-30% is not unlikely.

Accurate models are available in both AIPS and CASA for various frequency bands for the calibrators 3C286, 3C48, 3C147, and 3C138**. However, neither 3C295 nor 3C196 has such models, and the VLA CASA calibration pipeline will fail if either of these calibrators is used. Also, the bright source 3C84 (J0319+4130, not to be confused with 3C48) cannot be used as an absolute flux density calibrator as it is variable, but it serves well for bandpass and delay calibration.

See also the flux density scale discussion in the VLA Observational Status Summary (OSS).

** The flux density scale calibrator 3C138 is currently undergoing a flare. From VLA calibration pipeline results, we have noticed that 3C138 is deviating from the model. The amount of this deviation is still being investigated by NRAO staff, but does seem to affect frequencies of 10 GHz and higher. At K- and Ka-bands the magnitude of the flare is currently of order 40-50% compared to the Perley-Butler 2017 flux density scale. If you care about the flux density scale of your observations above 10 GHz, monitoring datasets are publicly available in the archive under project code TCAL0009, from which you may find an updated flux density ratio to use for your data.

 

Bandpass and Delay Calibration

Small impurities in the correlator model, such as an inaccurate antenna position, timing, etc., cause small deviations from the model that are noticeable as a time-constant linear phase slope as a function of frequency in the correlated data for a single baseline. This phase slope, known as a delay, is a property of the IF baseband and the same for all subbands (spectral windows) in a baseband. If the frequencies in an observation are averaged into a continuum image, an uncorrected delay causes decorrelation of the continuum signal, which is then not a correct representation of the sky. The delay calibration is determined over a short time interval on a strong source in order to achieve a high signal to noise for the solution without including the time dependent variations.
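The relation between a residual delay and the phase slope it produces—phase = 2π × delay × frequency—can be illustrated with a small sketch (pure Python, all numbers hypothetical; real data would be noisy and the phases would need unwrapping first):

```python
import math

# Hedged sketch: a residual delay tau shows up as a linear phase slope
# phi(nu) = 2*pi*tau*nu across the band; fitting that slope recovers tau.
def fit_delay(freqs_hz, phases_rad):
    # Least-squares slope of (unwrapped) phase vs frequency -> delay in seconds.
    n = len(freqs_hz)
    fm = sum(freqs_hz) / n
    pm = sum(phases_rad) / n
    num = sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, phases_rad))
    den = sum((f - fm) ** 2 for f in freqs_hz)
    return num / den / (2 * math.pi)

tau_true = 5e-9                                       # 5 ns residual delay (illustrative)
freqs = [1.0e9 + i * 2.0e6 for i in range(64)]        # 64 channels across a 128 MHz baseband
phases = [2 * math.pi * tau_true * f for f in freqs]  # noiseless, unwrapped phases
print(round(fit_delay(freqs, phases) * 1e9, 2))       # 5.0 (ns)
```

Because the slope is common to all subbands in a baseband, a single strong-source scan suffices to determine it, as described above.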

Small impurities in the amplitude and phase response as a function of frequency, independent of the delay, also occur and have to be corrected. These corrections are a property of the passband on top of the response of the baseband, and depend on the subband and the location of the subband in a baseband. Leaving the bandpass uncorrected causes incorrect relative amplitudes and phases and does not deliver the correct spectral representation of the sky. Averaging these uncorrected impurities over frequency into a continuum image limits the achievable signal to noise and dynamic range. As for the delay, bandpass calibration is usually determined over a short time interval on a strong source to achieve high signal to noise for the solution without including the time dependent variations. Delay and bandpass calibration, therefore, are usually performed using the same calibrator. Hereafter in the text, delay calibration is implicitly included in the bandpass calibrator or bandpass calibration scan.

To achieve good results in both spectral line and continuum observations, the baseband delay and subband bandpass shapes have to be calibrated for each frequency setup. If very high dynamic range is not the aim—as in many continuum observations—the standard flux density calibrators are relatively convenient bandpass and delay calibrators. Using the flux density calibrators has the big advantage that, for these sources, the known spectral index can be included in the bandpass amplitude solution. At high frequencies, however, where the flux density calibrators are weak, other calibrators might provide a better alternative.

The typical requirement for bandpass calibration is that it does not add to the phase or amplitude noise in the image cube, i.e., that the signal to noise of the bandpass calibration solution is comparable to, or better than, the signal to noise of the flux density of the target. The requirements, however, depend on your science goals and type of observation. For very bright spectral lines, on the order of tens of Jy in a single spectral channel, this might not be feasible; for other spectral line cases it is a good rule of thumb. As the noise is proportional to the inverse of the square root of bandwidth times observing time, and with the channel bandwidth constant between the scans, the time spent on the bandpass calibrator (tcal) should be larger than the time spent on the target (tobj) multiplied by the square of the ratio of the flux density of the target (Sobj) to that of the calibrator (Scal): tcal > tobj × (Sobj / Scal)². That is, the brighter the calibrator, the shorter the scan needs to be. Note that instead of a bright bandpass calibrator, it is also possible to sum all scans of a gain calibrator to obtain sufficient signal to noise for a bandpass determination. For plain continuum observations this requirement can be somewhat relaxed to a good solution for a representation of the bandpass, as the data are averaged over frequency, using a minimum signal to noise of at least 5 to approximately 10. If one is interested in high dynamic range or accurate polarization imaging, however, this solution typically is not enough and one would have to adopt the spectral line approach.

The Spectral Line guide describes more details. For further information or other questions contact the NRAO Helpdesk.

 

Complex Gain Calibration

Where the absolute flux density scale calibration is a static multiplier, and the bandpass/delay calibration a one-off determination of signal path properties assumed to be largely constant for the observation, the antenna gain calibration (also known as complex gain) is anticipated to track time variable properties due to changing conditions of the instrument. Complex gain calibration also tracks changes in the environment—with the exception of antenna pointing (see below)—which, if not corrected for, will be absorbed into the gain solutions. Examples of variable instrumental properties are: receiver power level settings, corruption of baseband samplers, technicians removing receivers, etc. The largest time variable contribution is from the environment and atmosphere: mostly the ionosphere at low frequencies, the troposphere at high frequencies, water content/opacity on cloudy days, elevation/opacity due to observing geometry, and other occasional phenomena like solar flares, broadcasting satellites, tourist cell phones, digital cameras, etc. Some of these can be calibrated and corrected for using proper gain calibration, while others, referred to as Radio Frequency Interference (RFI), cannot and need to be removed (flagged) from the data.

Gain calibration is normally considered antenna based and assumes that fluctuations in the antenna gain are due to slowly varying amplitudes and phases. Slowly here means that the interval of the gain calibration scans is short enough that the variations can be interpolated by a relatively smooth function representing the true variation at the antenna. Gain phase varies much faster than gain amplitude, as most time dependent effects affect phase more than amplitude. Additionally, phase variations scale with baseline length, with more rapid changes on longer projected baselines. The typical time for such changes to become large enough to require calibration is referred to as the coherence time, i.e., the time in which the visibility phase changes by a radian. This coherence time may be some tens of minutes on short baselines at low frequencies or, in some realistic cases, sub-minute on long baselines for high frequency observations in non-optimal weather. Because of this huge range and its frequency specific dependence, consult the Cycle Time section at the end of this document.
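Taking the coherence time as the time for the phase to drift by one radian, the huge range quoted above follows directly from the assumed phase rate. A hedged sketch (the phase rates are purely illustrative, not measured VLA values):

```python
import math

# Illustration: coherence time as the time for the visibility phase to
# drift by one radian, for an assumed (hypothetical) phase drift rate.
def coherence_time_s(phase_rate_deg_per_min):
    rad_per_s = math.radians(phase_rate_deg_per_min) / 60.0
    return 1.0 / rad_per_s

# Long baseline, high frequency, poor weather: fast drift -> sub-minute.
print(round(coherence_time_s(60.0)))  # 57 (seconds)
# Short baseline, low frequency: slow drift -> tens of minutes.
print(round(coherence_time_s(3.0)))   # 1146 (seconds, ~19 minutes)
```

The gain calibration cycle time must be comfortably shorter than this coherence time, which is why the suggested cycle times in the table below vary so strongly with band and configuration.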

Gain calibration is performed as regularly repeated scans on the complex gain calibrator to measure the change in visibility amplitude and phase on the gain calibrator. These changes are then interpolated over the target field scan. Removing the large fluctuations, in effect, increases the coherence time of the observation, ultimately to longer than the total duration of the target observation. All data on the target field scan can then be coherently combined into a single image, which presumably is the goal of the observation. Note that the calibration solutions sometimes show deviant points with respect to the solutions at other times for the same antenna. Recall that these are the solutions (or corrections) needed to recover the point-like input signal, and they are not necessarily wrong. They are a good indication that something is happening in the data and, only upon investigation, should one decide whether or not to use such a point in the interpolation to the target field. That is, will it improve or destroy the target data, or be irrelevant in the case of self-calibration (see below)? The discrepant solution can be deleted to prevent interpolation, and thus cause target data flagging, or be kept for subsequent calibration and imaging.

Calibration can be considered for three different cases:

  • To only capture the slowly varying amplitude fluctuations. The coherence time to track and interpolate phase over the target field is irrelevant because the target field contains sufficiently bright emission to recover the phase fluctuations on time scales shorter than the coherence time. Typical cases are a targeted, not necessarily point-like, strong continuum source; a bright maser line at any observing band; or, e.g., the background continuum field for the large field of view (primary beam) of the lower frequency bands. As the information to perform this phase calibration (next to the amplitude gain calibration) is contained in the target field, this additional target field calibration is known as self-calibration or selfcal. Care should be taken in the self-calibration model as, in principle, absolute phase—and therefore absolute position—is lost in the process.
  • To capture the rapid phase variations in addition to the amplitude variations. Here it is attempted to extend the effective coherence time as self-calibration is not an option for faint targets or detection experiments. Also, very extended or complicated source structures, when a selfcal model is not easily obtained, are typical cases for this traditional gain calibration strategy where the calibrator and target are observed in alternating scans. The target field phases are tied or referenced to those of the calibrator, which is known as phase referencing. The cycle times for alternating the scans are dependent on observing frequency, baseline length, field separation, weather, and elevation. Suggested times are given in table 4.3.1 (below), where cycle times are discussed. If, after such calibration, it appears that the target field matches the requirements for self-calibration, then self-calibration may be applied to further improve the calibration.
  • To obtain astrometry. For this one should avoid applying selfcal and utilize a much more careful phase calibration scheme. Some hints on phase-referencing for astrometry are given below.

 

Specialized Calibration

Antenna Reference Pointing Calibration

When directed to a position in the sky, the individual antennas are commanded to move in azimuth (AZ) and elevation (EL) until they report that they have arrived. The a priori accuracy of pointing at the actual position in the sky is typically within about 10 arcseconds, but individual antennas may differ by up to an arcminute in AZ and/or EL, and the deviation depends on many non-constant factors. At the higher frequencies (above ~15 GHz), this intrinsic pointing error may be a large fraction of the primary beam (which is one arcminute at 45 GHz, two arcminutes at 22 GHz, etc.), potentially considerably degrading the sensitivity of the antenna at the position of interest. It is therefore important to correct for this possible mechanical error before observing the calibrators (complex gain, bandpass, etc.) and the target field. Antenna pointing calibration is essential for high-frequency observing but, as the beam sizes at lower frequencies quickly become many arcminutes, pointing calibration is typically skipped for the lower frequencies. One exception is high-dynamic range imaging of large fields, where differing antenna beam patterns on the sky will hinder obtaining accurate source flux densities over the entire field of view, and pointing calibration may be imperative at low frequencies as well.

Simple procedures, referred to as reference pointing, are in place to determine the AZ and EL position offsets during the observing program using X-band. Once determined, these offsets are applied to the following scans and take effect immediately; this is an on-line calibration. Pointing calibration sources (see below about choosing one) can be your default calibrator sources (for flux density, bandpass and/or complex gain), as well as your target source if your science goal and target source allow it. A pointing calibration scan needs an on-source time of 2m30s in order to perform a five-point observation around the calibrator and derive solutions.

The antenna pointing offset will change after some time due to changes in the dish structure (optics) or because the target field is being tracked from rise in the East to set in the West. The reference pointing solution needs to be repeated before the pointing has degraded enough to reduce sensitivity. NRAO recommends that pointing be repeated when the pointing direction has changed by 20° on the sky in AZ or EL since the last pointing calibration, i.e., after about an hour when tracking a source at the celestial equator, assuming nighttime observing. During daytime observing, reference pointing may need to be repeated every 30-40 minutes due to thermal deformation of the dish structure. The recommendation noted earlier also implies that the pointing calibrator source should be closer than that range from the target field; NRAO recommends that the pointing calibrator be within 10° of the target if at all possible. Hence, large slews to a bandpass or a flux density calibrator, or to a different target patch on the sky, will require a new pointing scan regardless of how long it has been since the last one was done.
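The "about an hour" rule of thumb follows from the sidereal tracking rate. A small sketch of the arithmetic (function name hypothetical; this ignores the daytime thermal term, which imposes its own 30-40 minute limit):

```python
# How long until the tracked direction has drifted by the ~20 degree
# repointing threshold, for a source near the celestial equator.
SIDEREAL_RATE_DEG_PER_HR = 360.0 / 23.9345  # ~15.04 deg of sky rotation per hour

def hours_until_repointing(drift_limit_deg=20.0):
    return drift_limit_deg / SIDEREAL_RATE_DEG_PER_HR

print(round(hours_until_repointing(), 2))  # 1.33 (hours), i.e., "about an hour"
```

Sources away from the celestial equator drift more slowly in AZ/EL terms, so this is a conservative nighttime estimate.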

 

Polarization Calibration

The VLA observes with two orthogonal receivers. These are right and left circular polarization (RCP and LCP) for all bands except below 1 GHz, where they are X and Y linear polarization. For the total intensity Stokes parameter I, the correlator produces the parallel hand correlations (RR and LL, or XX and YY) from the signals of the two antennas of each interferometer. These are combined in the imaging to produce the total intensity I image of the sky; Stokes V may be created from the difference of RCP and LCP. Using the default continuum modes, the cross hand polarization products (RL and LR, or XY and YX) are also produced, which allow mapping of the other Stokes parameters Q and U and combining all data into an image of the polarized sky.

In order to obtain the correct polarization characteristics, two calibrations have to be performed. First, because the receivers are independent and orthogonal but not completely free of impurity, the leakage (D-term) and relative signal between the receivers as a function of frequency need to be determined in order to properly combine the signals (compare this with the frequency dependence of amplitude and phase over a bandpass). The polarization vectors should then be tied to an absolute reference with a polarization angle calibration (compare this with fixing the absolute flux density scale), using a source with known polarization characteristics, like 3C286.

Typical impurities of the receiver feeds are a few percent for the center of most VLA bands and degrade toward the band edges and away from the pointing center in the image plane. Without any polarization calibration, an unpolarized source will appear to be polarized at a few percent level. Without calibration of the R-L phase difference, the polarization angle is undetermined. It is not difficult to obtain a reasonably good polarization calibration under most circumstances and, with the leakage terms being fairly constant over weeks to months, they can be transferred to other observations with the same spectral setup. With a modest investment of time spent on calibrators, and a little effort, the instrumental polarization can be reduced to less than 0.1%.

More information on polarization, including the most common calibrators, can be found in the Polarimetry section.

 

Astrometry

Calibration for astrometry is a bit different from the above, as one does not calibrate the instrument per se but instead calibrates the sky. The usual complex gain calibration makes use of calibrator sources and, when interpolating the phase information for the calibrators over the target field scans, one ties the position of any source in the target field to the assumed (measured) position of the calibrator source. Apart from the peculiar errors introduced by the interpolation of the calibrator phase to the target, there is an additional systematic error due to the position measurement of the calibrator. Unfortunately, this additional systematic error is typically ignored in the literature, but it is certainly not negligible if one attempts to do astrometry: proper motions and/or comparisons with other data that may have used different calibrator sources.

To improve on astrometric accuracy, two types of calibration should be considered. First, one would try to improve on the systematic error of the calibrator position. An updated position from, e.g., a VLBI detection, can always be included in post-processing. The VLA Calibrator List contains a position accuracy indicator of A, B, C or T, where A is the most accurate (positional accuracy < 0.002 arcseconds, or 2 mas) and T should probably be avoided for most science goals (positional accuracy worse than 0.15 arcseconds). If there is a choice of nearby calibrator sources—suitable for the array configuration and frequency band of the observation—it would be prudent to select the source with the best known position and thus the smallest systematic error.

The second place of improvement is in the actual interpolation. Typical gain phase calibration uses repeated observations of a single nearby calibrator and transfers the interpolated phases to the target field. One can improve on this interpolation by using a calibrator closer to the target field or by sampling the interpolated phases more densely via shorter cycle times or faster switching. Strictly speaking, even if the interpolation is perfect, it is derived for the direction of the calibrator position and not for the direction of the target field. There may be a constant residual phase, or an unaccounted-for phase wedge, introducing an unknown positional offset that, at best, is a single vector, constant in time, perpendicular to the unknown direction of the wind aloft. A possible solution would be to include more repeated calibrator scans toward suitable calibrators in other directions with respect to the target field. In post-processing, one may be able to make a more detailed interpolation of the phase and phase behavior of the target field, provided that the observations are set up to anticipate such a procedure. It is also useful to include another calibrator source with a known position and exclude it from the calibration steps by treating it as a target (also known as the phase-check source); the position derived as for a normal target field would yield an indication of the magnitude of remaining errors when compared to the known position of that phase-check calibrator.

 

Observing the Sun

As the Sun is very bright, observing it requires special considerations; please consult the Observatory Status Summary, the OPT manual, and the NRAO Helpdesk if observing the Sun is part of your plans.

 

Tipping scans

The atmospheric opacity can be measured by performing so-called Tipping scans during the observation. Tipping scans can be set up for the VLA using the OPT, but at this moment there is no general way of processing the measurements and applying the opacity corrections to the data in CASA or AIPS.

 

Calibration Recommendations

The main objective of calibration scans in the observation is to correct for effects, instrumental and observational, that distort the object or field of interest. The method is to measure the response of the instrument to a known calibration signal to determine the corrections needed to retrieve or reconstruct the known signal from the data. The corrections are then applied with the assumption that the effects that distort the calibration signal also apply, whether or not interpolated, to the observations toward the object or field of interest. As previously mentioned, the different calibrators—flux density, bandpass, complex gain, and reference pointing—are selected to optimally perform the measurement requested by the observer. Each of these calibrations needs a specific property of the calibrator source that will enable a good calibration. A good pointing calibrator may not be a good bandpass calibrator and vice versa; a good complex gain calibrator may not be a good flux density calibrator, etc.

Good calibration often relies on finding the right trade-offs: one calibrator versus another (e.g., flux density or source structure versus angular distance), longer on-source time versus better sampling, etc. Which trade-off to choose is a function of the science goal and in general cannot be answered with a single straightforward solution by NRAO staff without knowing all the details of the observing program. Some tips given here, however, might be helpful in making an informed decision.

The basic requirement for good calibration is that it yield sufficient signal-to-noise to derive an unambiguous and valid calibration solution. The aim is a signal-to-noise of over 10–20 at the observing frequency during the solution interval (a scan, part of a scan, or the total observing period) for a single (longest) baseline, single polarization, and the relevant spectral coverage. Spectral coverage can range from the total observing bandwidth, to the individual spectral chunks known as basebands/IFs or subbands/spectral windows, down to the narrowest spectral channel (see below about scan length).

 

Calibrator Summary

To minimize the impact on the actual observing, consider the following general statements and descriptions of known (calibration) signals:

  • Absolute Flux Density:
    • Select one of the very few standard flux density calibrators: 3C286 if you can, or alternatively 3C48, 3C138**, or 3C147. Note that for the very low frequencies other sources may be used; it is therefore wise to consult the specific frequency guides listed below for any observation.
    • The best flux density calibrator is the one that fits best in the observing schedule and may not be 3C286; 3C48 and 3C147 in general are good alternatives but 3C138** should be avoided as it has shown variability (!). Note that 3C295 may be used only for low frequency (P and L band) observations in the more compact array configurations for flux density calibration in the northern part of the sky, i.e., where all other source Declinations are more than 34°. Similar restrictions hold for 3C196.
    • ** The flux density scale calibrator 3C138 is currently undergoing a flare. From VLA calibration pipeline results, we have noticed that 3C138 is deviating from the model. The amount of this deviation is still being investigated by NRAO staff, but it does seem to affect frequencies of 10 GHz and higher. At K- and Ka-bands the magnitude of the flare is currently of order 40–50% compared to the Perley-Butler 2017 flux density scale. If you care about the flux density scale of your observations above 10 GHz, monitoring datasets are publicly available in the archive under project code TCAL0009, from which you may find an updated flux density ratio to use for your data.
  • Delay and Bandpass:
    • In general, the delay and bandpass calibrator need the same properties; they just need to be bright at the observing frequency to obtain calibration solutions with a very high signal-to-noise ratio. The bandpass calibration scan typically can be used for the delay calibration (where the delay is calibrated and applied before determining the bandpass corrections).
    • One chooses a strong calibrator, not necessarily point-like or near the target fields, which can be your flux density or pointing calibrator (below). In that case make sure there is an additional bandpass intent in the flux density calibration scan, or an additional bandpass scan with appropriate intent directly after the pointing scan.
  • Complex Gain (amplitude and phase):
    • The best complex gain calibrator sources are bright (over a couple hundred mJy/beam), have a known structure (preferably point-like, with calibrator codes P or S), and are seen through the same atmosphere as the target (i.e., are nearby, within less than 10° for high frequency observations, or within about 15° for low frequency observations).
    • If the VLA Calibrator List does not provide a flux density for the frequency band, which is a common problem for observing in S, K or Ka-bands, you may be able to use an interpolation from the bordering frequency flux density information: if a calibrator has a flux density in Ku and Q-bands, the Ka and K-bands flux densities can be interpolated and judged for suitability, etc. Calibrators at Q-band are typically flat spectrum sources and can be assumed, if no other information is available at lower frequency bands, to have approximately a similar flux density at Ka and K-bands (assuming they are not too variable, which is not necessarily the case).
    • If there is no bright complex gain calibrator within a few degrees, use one that is weaker but closer over one that is brighter but further away if observing using the higher frequency bands. If observing at the low and very low frequency bands, where bright sources in the field of view may be confused, a different source that dominates its field may be a better choice, even if it is located further away.
  • Reference Pointing:
    • Choose a point-like (calibrator code P or S, never a W, X or "?") and bright (0.3 Jy/bm or brighter at X-band, the pointing scan observing band) calibrator near the field of interest as pointing corrections are needed for the azimuth and elevation near the target. Pointing sources should always be near the target field(s), so for the entire observing run more than one might be needed. Antenna pointing calibration is recommended at the four highest observing frequencies (Ku, K, Ka, and Q-bands) as well as for very high dynamic range imaging in the lower frequencies. A pointing calibration scan needs an on-source time of 2m30s, i.e., this should be the remaining length of the scan after slewing to the source from the previous scan.
  • Polarimetry:
    • For polarimetry, one either chooses a strongly polarized source from the list of known polarized calibrators, or an unpolarized calibrator to calibrate the D-terms. For polarization angle one observes a well known polarization calibrator such as 3C286; see the section on Polarimetry for more details.
  • Astrometry:
    • If the science goal includes astrometry for the target, the best positional information is obtained with complex gain calibrators that are near the target source carrying code P in the array configuration of the observation and, in addition, have a positional uncertainty code of A (positional accuracy better than 2 mas).
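The band-to-band interpolation suggested above for complex gain calibrator flux densities (e.g., estimating K- and Ka-band values from the Ku- and Q-band entries) amounts to a power-law interpolation in log-log space. A minimal sketch, assuming a single spectral index between the two measured bands (the function name and interface are illustrative, not part of any NRAO tool):

```python
import math

def interpolate_flux(f1_ghz, s1_jy, f2_ghz, s2_jy, f_ghz):
    """Power-law (log-log) interpolation of a calibrator flux density
    between two bordering bands.  Assumes a single spectral index alpha
    such that S(f) = S1 * (f/f1)**alpha; real calibrators may be
    variable or have curved spectra, so treat the result as a guide."""
    alpha = math.log(s2_jy / s1_jy) / math.log(f2_ghz / f1_ghz)
    return s1_jy * (f_ghz / f1_ghz) ** alpha
```

For a flat-spectrum source (equal flux densities in the bordering bands) this reduces to the constant-flux assumption mentioned above for Q-band calibrators.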

 

About Calibrator Scans

One will notice that in many cases, the ideal calibration sequence and ideal calibrator sources cannot be implemented in the schedule perfectly. Here are some logistical issues that may impact an observing session and the necessary calibrations:

  • If the target and calibrator are on opposite sides of the zenith (on either side of Declination 34°, the approximate latitude of the VLA), the antenna azimuth cable wrap can be a problem. For more information, see the Antenna Wraps section.
  • If a bandpass or flux density calibrator is not available at the start of an observation, choose another calibrator or place these scans at the end of the observation.
  • If the bandpass calibration is corrupted by RFI or missed by some event during the observation, use the accumulated data on the strongest complex gain calibrator or any other calibrator for which the accumulated data gives sufficient signal-to-noise. Alternatively, if the flux density calibrator is strong enough at the given frequency, it can be used as the bandpass calibrator.
  • If the complex gain calibrator turns out to be resolved or extended, and not point-like, self-calibrate and image the structure of the calibrator and use the image—not a point-source approximation—for complex gain calibration. Alternatively, use the suggested (u,v)-ranges in the calibration task (which can be found in the calibrator list).
  • If the complex gain calibrator source is weak, or the accumulated scan time was too short, combine the data over all frequencies to obtain sufficient signal-to-noise to derive a complex gain solution.

 

Calibrator Scan Length

The calibrator scans should be long enough to derive the desired correction with sufficient accuracy. To ensure a good solution, be conservative and account for possible loss of data due to flagging operations (scan start slewing, band edges, RFI).

The standard way to determine the approximate length of a complex gain calibrator scan, as well as of the bandpass and flux density scans, is first to find the approximate flux density of the calibrator, which can be extracted from the source properties in the Source Catalog Tool (SCT) in the OPT or from the list of VLA calibrators. As flux densities may vary over time (except for the standard calibrators), assume a conservative 10–15% lower flux density for each of the calibrators to be used (e.g., not 2.4 but about 2.1 Jy/beam), and perhaps ~40% lower at the higher frequencies. A conservative signal-to-noise goal of 10 will generally yield good calibration solutions. Divide this conservative flux density by 10 (or 5–10 for bandpass) for each of the calibrators to estimate the RMS noise to aim for in the calibration scan. Then bring up the VLA Exposure Calculator and enter the following items:

  • Baseband observing frequency center
  • Relevant bandwidth:
    • narrowest subband for gain,
    • narrowest single channel for bandpass,
    • total baseband width for flux density
  • Number of antennas: 2 (two!)
  • Number of Polarizations: Single!
  • Type of Weighting: Robust
  • Elevation and Average Weather as relevant to the scheduling block details
  • Calculation type: Time
  • RMS Noise, for each calibrator: the conservative flux density divided by the signal-to-noise goal (10, or 5–10 for bandpass) as derived above

The exposure calculator will reveal the conservative minimum on-source time to spend on a source to obtain good calibration. This time does not account for data loss due to flagging of unusable data, so it is wise to add another 10 seconds or so to make up for this loss. Also realize that scans shorter than about 20 seconds are not very effective after long slews, as it takes upwards of 10 seconds for the antennas to settle down from the motion. For complex gain calibration the scan should last at least 20 seconds after any necessary slew, while flux density or bandpass calibrator scans should be at least 40 seconds long, also after any necessary slew.
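The hand procedure above can be sketched in code by inverting the two-antenna, single-polarization radiometer equation. The SEFD and efficiency values below are illustrative placeholders only (not official VLA numbers); for real scan planning the VLA Exposure Calculator should be used, as it accounts for the actual per-band sensitivities, elevation, and weather:

```python
def min_scan_time(flux_jy, bandwidth_hz, snr_goal=10.0, derate=0.15,
                  sefd_jy=420.0, efficiency=0.9):
    """Rough minimum on-source calibrator scan time in seconds.

    Mirrors the recipe above: derate the catalog flux density by
    10-15%, set a target rms of flux/snr_goal, then invert the
    radiometer equation for N=2 antennas (one baseline) and a single
    polarization:  rms = SEFD / (eta * sqrt(2 * bandwidth * t)).
    """
    target_rms = flux_jy * (1.0 - derate) / snr_goal
    return (sefd_jy / (efficiency * target_rms)) ** 2 / (2.0 * bandwidth_hz)
```

Note that the narrower the relevant bandwidth (a single channel for bandpass versus a full baseband for flux density), the longer the required scan, which is why bandpass calibrators must be bright.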

Do not skimp on calibration scans since they are important for adequate calibration. Note that trading in some time on the target to benefit calibration only results in a small increase in RMS noise in the final image (see the calibration cycle example below).

Note: The use of 2 antennas in the exposure calculator should not be interpreted as determining baseline-based calibration solutions. Standard calibration tasks (e.g., CALIB and BPASS in AIPS, gaincal and bandpass in CASA) provide antenna based solutions, and the signal-to-noise of the solutions will improve with the square root of the number of antennas in the array. However, flux densities of calibrators can vary significantly over time (sometimes by large amounts). Also, some calibrators may have structure, or may be resolved, requiring uv-range limits during calibration, which in turn will impose constraints on the available antennas to contribute to the calibration solutions. Therefore, in the above calculation, the most conservative approach is demonstrated in order to secure a successful calibration for your observations.

 

Calibration Cycles

The interval between the observations of a complex gain calibrator in a given scheduling block depends on the configuration of the array and the science goal (i.e., a detection versus a self-cal experiment).  If the target source is not bright, or has a lot of structure in the field, self-calibration usually cannot be used. In this common case, one typically observes a target source that is bracketed between two complex gain calibrator scans. This is done in order to interpolate the calibration over the target observations.

Most of the time the scan length needed to achieve the required sensitivity on the target is much longer than the useful interval between the two calibration scans. The sampling needed to capture the temporal changes due to the atmosphere, for example, is shorter than the total observing time on the target. The target scan is therefore interspersed with calibrator scans and the observing pattern cycles through the sequence calibrator—target(s)—calibrator—target(s)—calibrator… This cycle is easily achieved by using a repetitive loop consisting of target scan(s) bracketed by complex gain calibrator scans (i.e., phase-referencing).

This sequence of calibrator scan—slew to target(s)—target(s) scan(s)—slew to calibrator is known as cycle time and should be less than the coherence time for the observations. The coherence time depends on observing frequency, array configuration, angular separation between calibrator and target and scheduling constraints (minimum elevation, LST start range, maximum wind speed, and maximum Atmospheric Phase Interferometer noise). Cycle time should not be confused with bracketing, which is where the target(s) scan(s) are encapsulated by calibrator scans. Bracketing of the target source(s) is required for high frequency observations, and very strongly recommended for low frequency observations.

The maximum cycle time should not exceed the coherence time of the observations. The minimum cycle time is the sum of the slew times between calibrator and target and back (two slews) plus the minimum calibrator scan length, plus epsilon, where epsilon is whatever duration separates this minimum from the coherence time. Epsilon can be used to accumulate observing time on the target.

Using the default weather constraints, typical recommended cycle times in minutes are given in Table 3.1 below. Note that while individual scans can be up to 30 minutes long, it is recommended to split them into scans of no more than 15 minutes; e.g., for a cycle time of 30 minutes, use three consecutive 9-minute scans on the target field between the calibrator scans and use the remaining 3 minutes for the slew, calibrator scan, and slew back. A short cycle time is most important at the high frequencies, where the coherence time is short and self-calibration on the field is generally not possible.

 

Table 3.1: Cycle times in minutes by configuration and frequency band

Band (Frequency Range)      A    B    C    D
4  (54-86 MHz)             30   30   30   30
P  (224-480 MHz)           30   30   30   30
L  (1-2 GHz)               15   15   15   15
S  (2-4 GHz)               15   15   15   15
C  (4-8 GHz)                8   10   10   10
X  (8-12 GHz)               8   10   10   10
Ku (12-18 GHz)              6    7    8    8
K  (18-26.5 GHz)            4    5    6    6
Ka (26.5-40 GHz)            3    4    5    6
Q  (40-50 GHz)              2    3    4    5

If the time needed to ensure a good calibration (calibrator scan length plus two slews) is a considerable fraction (over ~40%) of the cycle time, the overhead becomes prohibitively expensive. Either look for a closer bright source (shorter slew), or tighten the weather constraints (longer coherence time) in the scheduling block information tab.
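The ~40% rule of thumb above is easy to check when drafting a schedule. A small sketch (the function names are illustrative, not part of any NRAO tool):

```python
def overhead_fraction(cal_scan_s, slew_s, cycle_s):
    """Fraction of each cycle spent off target: the calibrator scan
    plus the two slews (to the calibrator and back)."""
    return (cal_scan_s + 2.0 * slew_s) / cycle_s

def cycle_is_efficient(cal_scan_s, slew_s, cycle_s, max_fraction=0.4):
    """Apply the ~40% overhead rule of thumb from the text: if the
    calibration overhead exceeds this fraction of the cycle, look for
    a closer calibrator or tighten the weather constraints."""
    return overhead_fraction(cal_scan_s, slew_s, cycle_s) <= max_fraction
```

For example, a 40-second calibrator scan with 30-second slews in a 10-minute cycle costs only about 17% of the cycle, well within the rule of thumb.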

In line with the above, we remind the reader that the time spent on the calibrator is governed by the needed signal-to-noise to obtain good calibration solutions (see the Calibrator Scan Length section above). Consequently, one should not choose a weak calibrator that will consume a significant portion of the cycle time on the calibrator itself.

Notes on High Frequency Cycle Times

Variations in the troposphere move across the array at about 10 m/s, and move 1.2 km in about 2 minutes. The D-configuration maximum baseline is only about 1 km in length, so the screen moves completely across the array in less than 2 minutes. Any changes in phase due to this drift will not be tracked, so a cycle time shorter than 2 minutes in D-configuration will not track the troposphere phase variations at all. Cycle times shorter than 2 minutes in the C-configuration, with a maximum baseline of 3.4 km, should provide some improvement although it may be marginal.

For the B- and A-configurations, faster cycle times should provide a means of tracking phase variations due to the troposphere, but it will only correct the phase to the stability one would obtain on ~1 km baselines. A cycle time of faster than 2 minutes in the D- and C-configurations probably wastes available observing time since the troposphere phase changes cannot be tracked. For more information, refer to VLA memo #169 and VLA memo #173.
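The 2-minute figure above follows directly from the quoted wind speed and baseline lengths; a one-line sketch of the arithmetic:

```python
def screen_crossing_minutes(baseline_km, wind_speed_ms=10.0):
    """Time in minutes for a tropospheric phase screen, drifting at
    wind_speed_ms (the ~10 m/s quoted in the text), to cross a
    baseline of the given length."""
    return baseline_km * 1000.0 / wind_speed_ms / 60.0
```

With the ~1 km maximum baseline of D-configuration the screen crosses the whole array in under 2 minutes, so shorter cycle times cannot track it; with the 3.4 km of C-configuration the crossing takes several minutes.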

For cycle times faster than 2 minutes, which might be used in the B- and A-configurations, one is usually limited by the relatively slow slew speeds of the VLA antennas (20 degrees/minute in elevation and 40 degrees/minute in azimuth) and how close a complex gain calibrator is to the target field. Some settling time is also required. It is important to make sure that the cycle time is not so short that it results in no time on source or on calibrator. Additionally, faster cycle times may be needed at low elevations, but cycle times for low elevations have not been investigated sufficiently to give recommendations.
For more information on rapid phase calibration and the Atmospheric Phase Interferometer (API), refer to the VLA OSS.

A note on Very Low Frequency cycle times

Considering the large field of view in 4- and P-band observations, it will always be possible to self-calibrate the target phases because there will be one or several strong sources within the field. It is recommended, however, to observe a complex gain calibrator (flux density > 10 Jy) about every 30 minutes, regardless of the array configuration, for initial calibration and system monitoring purposes.

Calibration Cycle Example

Data that cannot be calibrated is a waste of the entire observation. Therefore, care must be taken to make sure that the data can be calibrated. Also, the easier the calibration solutions can be determined, the easier the data reduction becomes. As an eye-opener to those who want to squeeze out the maximum on-source time on a target, and cut the calibrator scan time or lengthen the cycle time, consider the following example: observe a single target in a 2-hour loop, 512 MHz total bandwidth centered at 5 GHz, a 200 mJy/beam calibrator, and without further flagging (which is unrealistic).

  • calibrator scan length 40 sec, cycle time of 24 min (target 22 min per cycle): 5 cycles, 110 min on source
    • expected calibrator signal-to-noise ratio in a calibration scan: 12.3
    • expected RMS image noise on the target: 6.5 uJy/bm
  • calibrator scan length 100 sec, cycle time of 15 min (target 12 min per cycle): 8 cycles, 96 min on source
    • expected calibrator signal-to-noise ratio in a calibration scan: 21.3
    • expected RMS image noise on the target: 6.8 uJy/bm

In the second case, the RMS image noise is only about 5% higher than in the first case; but after flagging of band edges and otherwise bad data (or less careful but much quicker automated flagging of the calibrator data), the second case is more likely to yield good calibration solutions. The second case can still be calibrated if a fraction of the data needs to be flagged for whatever reason (as this additional flagging will not drive the signal-to-noise below 10), and its shorter cycle time still yields better interpolations under deteriorating weather conditions.
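The cycle arithmetic in this example is simple to reproduce (the function name is illustrative):

```python
def loop_on_source(block_min, cycle_min, target_min_per_cycle):
    """Number of complete calibration cycles that fit in a scheduling
    block and the total on-source target time they yield, in minutes."""
    cycles = block_min // cycle_min
    return cycles, cycles * target_min_per_cycle
```

For the 2-hour block above: 24-minute cycles give 5 cycles and 110 minutes on source, while 15-minute cycles give 8 cycles and 96 minutes on source.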
 

Further Information

 

Contact the NRAO Helpdesk for further information.
