Limitations on Imaging Performance


VLA capabilities February 2014 - September 2014

1. Image Fidelity

Image fidelity is a measure of the accuracy of the reconstructed sky brightness distribution. A related metric, dynamic range, measures the degree to which imaging artifacts around strong sources are suppressed; a higher dynamic range in turn implies a higher fidelity of the on-source reconstruction.

With conventional external calibration methods, even under the best observing conditions, the achieved dynamic range will rarely exceed a few hundred. The limiting factor is most often the atmospheric phase stability, although pointing errors and changes in atmospheric opacity can also limit performance. If the target source contains compact structures of sufficient strength (depending on the band, bandwidth, atmospheric coherence time, and source complexity), self-calibration can be counted on to improve the images. Dynamic ranges in the thousands to hundreds of thousands can be achieved using these techniques, depending on the underlying nature of the errors. With the new WIDAR correlator and its much greater bandwidths and much higher sensitivities, self-calibration methods can be extended to observations of sources with much lower flux densities than was possible with the old VLA.
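
As an illustration, a minimal phase-only self-calibration cycle in CASA might look like the following sketch. The measurement set name, image parameters, solution interval, and reference antenna are placeholders rather than recommendations, and the cycle would normally be repeated (with shorter solution intervals, or calmode='ap') for as long as the dynamic range keeps improving.

    # Sketch of one phase-only self-calibration cycle in CASA
    # (file names and parameter values are illustrative only).

    # 1. Image the target to build an initial source model.
    clean(vis='target.ms', imagename='target_iter0',
          imsize=1024, cell='1.0arcsec', niter=1000, usescratch=True)

    # 2. Solve for antenna-based phase corrections against that model.
    gaincal(vis='target.ms', caltable='target.p1',
            solint='60s', refant='ea05', calmode='p', gaintype='G')

    # 3. Apply the corrections and re-image the corrected data.
    applycal(vis='target.ms', gaintable=['target.p1'], calwt=False)
    clean(vis='target.ms', imagename='target_iter1',
          imsize=1024, cell='1.0arcsec', niter=5000)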

The choice of image reconstruction algorithm also affects the accuracy of the on-source brightness distribution. The CLEAN algorithm is most appropriate for fields dominated by point sources. Extended structure is better reconstructed with multi-resolution and multi-scale algorithms. For high dynamic ranges with wide bandwidths, algorithms that model the sky spectrum as well as the average intensity can yield more accurate reconstructions.
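
For example, in CASA's clean task the choice between a point-source model and a multi-scale model is controlled by the multiscale parameter; the sketch below uses placeholder file names and illustrative scale sizes (in pixels).

    # Standard CLEAN with point-source components only.
    clean(vis='target.ms', imagename='img_points',
          imsize=2048, cell='0.5arcsec', niter=5000)

    # Multi-scale CLEAN: scales (in pixels) ranging from point-like
    # up to the largest structure expected in the field.
    clean(vis='target.ms', imagename='img_multiscale',
          imsize=2048, cell='0.5arcsec', niter=5000,
          multiscale=[0, 6, 18, 54])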

2. Invisible Structures

An interferometric array acts as a spatial filter, so that for any given configuration, structures on a scale larger than the fringe spacing of the shortest baseline will be completely absent. Diagnostics of this effect include negative bowls around extended objects, and large-scale stripes in the image. Image reconstruction algorithms such as multi-resolution and multi-scale CLEAN can help to reduce or eliminate these negative bowls, but care must be taken in choosing appropriate scale sizes to work with.

Table 5 gives the largest scale visible to each configuration/band combination.
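
The numbers in that table follow roughly from the fringe spacing of the shortest baseline, lambda/B_min, as the short calculation below illustrates (the shortest-baseline length used here is only indicative; consult Table 5 for the authoritative values).

    import math

    def largest_angular_scale_arcsec(freq_ghz, shortest_baseline_m):
        """Approximate largest recoverable scale as the fringe spacing
        of the shortest baseline, lambda / B_min, converted to arcsec."""
        wavelength_m = 0.299792458 / freq_ghz   # c / nu, with nu in GHz
        return math.degrees(wavelength_m / shortest_baseline_m) * 3600.0

    # Example: 1.5 GHz (L band) with a ~35 m shortest baseline gives a
    # largest recoverable scale of order 1000 arcsec.
    print(round(largest_angular_scale_arcsec(1.5, 35.0)))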

3. Poorly Sampled Fourier Plane

Unmeasured Fourier components are assigned values by the deconvolution algorithm. While this often works well, sometimes it fails noticeably. The symptoms depend upon the actual deconvolution algorithm used. For the CLEAN algorithm, the tell-tale sign is a fine mottling on the scale of the synthesized beam, which sometimes even organizes itself into coherent stripes. Further details are to be found in Reference 1 in Documentation.

4. Sidelobes from Confusing Sources

At the lower frequencies, large numbers of detectable background sources are located throughout the primary antenna beam, and into its first sidelobe. Sidelobes from those sources which have not been deconvolved will lower the image quality of the target source. Although bandwidth smearing and time-averaging will tend to reduce the effects of these sources, the very best images will require careful imaging of all significant background sources. The deconvolution tasks in AIPS (IMAGR) and CASA (clean) are well suited to this. Sidelobe confusion is a strong function of observing band, affecting P-band and L-band observations most strongly; it is rarely a significant problem at frequencies above 4 GHz.
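
One common tactic is to deconvolve a strong confusing source in a small outlier field alongside the main image. CASA's clean task accepts lists of image names, sizes, and phase centers for this purpose, as in the sketch below (coordinates, sizes, and file names are placeholders).

    # Main target field plus a small outlier field centered on a strong
    # confusing source near the edge of the primary beam.
    clean(vis='target.ms',
          imagename=['target_main', 'confusing_src'],
          imsize=[[4096, 4096], [128, 128]],
          phasecenter=['', 'J2000 12h30m49.4 +12d23m28'],
          cell='1.0arcsec', niter=10000)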

5. Sidelobes from Strong Sources

An extension of the previous section is to very strong sources located anywhere in the sky, such as the Sun (especially when a flare is active), or when observing within a few tens of degrees of the very strong sources Cygnus A and Cassiopeia A. Image degradation is especially notable at lower frequencies, in the more compact configurations, and when using narrow-bandwidth observations (especially in spectral line work), where chromatic aberration cannot be exploited to reduce the disturbances. In general, the only relief is to include the disturbing sources in the imaging, or to observe when these objects are not in the viewable hemisphere.

6. Wide-band Imaging

The very wide bandpasses provided by the Jansky Very Large Array enable imaging over 2:1 bandwidth ratios -- at L, S, and C bands, the upper frequency is twice that of the lower frequency.   It is this wide bandwidth which enables sub-microJy sensitivity.

In many cases, where the observational goal is a simple detection and there are no strong sources near the region of interest, standard imaging methods that combine the data from all frequencies into one single image (multi-frequency synthesis) may suffice.  This is because the wide-band system produces a much better synthesized beam, especially for longer integrations, than the old single-frequency beam, thus considerably reducing the region of sky affected by incorrect imaging/deconvolution.  A rough rule of thumb is that, provided a strong source is not adjacent to the target zone, a simple wide-band map may suffice if the necessary dynamic range in the image is less than 1000:1 (i.e., the strongest source in the beam is less than 1000 times the noise).
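
As a worked example of that rule of thumb, with purely illustrative numbers:

    # Rough check of the 1000:1 rule of thumb (numbers are illustrative).
    peak_flux_jy = 5e-3        # strongest source in the beam: 5 mJy
    expected_rms_jy = 10e-6    # anticipated image noise: 10 microJy

    required_dynamic_range = peak_flux_jy / expected_rms_jy
    print(required_dynamic_range)          # 500.0
    print(required_dynamic_range < 1000)   # True: a simple wide-band map may suffice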

For higher dynamic ranges, complications arise from the fact that the brightness in the field of view changes dramatically as a function of frequency, both because the structures of the actual sources in the field of view differ with frequency and because of the frequency-dependent attenuation of the sources by the primary beam.  One symptom of such problems is the appearance of radial spokes around bright sources, visible above the noise floor, when the data are imaged as described above.

The simplest solution is to make a number of maps (say, one per subband), which can then be suitably combined after correction for the primary beam shape. With up to 64 subbands available with the VLA's new correlator, however, this is not always the optimal approach.  Further, images at all subbands must be smoothed to the angular resolution of the lowest frequency before any spectral information can be extracted, and with a 2:1 bandwidth the difference in angular resolution across the band is significant.
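
If the per-subband route is taken, each image must first be convolved to a common resolution set by the lowest-frequency subband. A hedged sketch using CASA's imsmooth task follows; the target beam, file-naming scheme, and number of subbands are placeholders.

    # Convolve each per-subband image to the (coarser) resolution of the
    # lowest-frequency subband before combining or extracting spectra.
    target_major, target_minor, target_pa = '4.0arcsec', '3.5arcsec', '10deg'

    for spw in range(16):   # e.g. 16 subbands imaged separately
        imsmooth(imagename='target_spw%02d.image' % spw,
                 kernel='gauss',
                 major=target_major, minor=target_minor, pa=target_pa,
                 targetres=True,
                 outfile='target_spw%02d.smoothed' % spw)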

A better approach is to process all subbands simultaneously, using software that allows for spatially varying spectral index and curvature and that accounts for the instrumentally imposed attenuation due to the primary beam. Such wideband imaging algorithms are now available within CASA as part of the clean task, and work is under way to integrate them fully with wide-field imaging techniques.
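
Within the clean task this corresponds to the multi-term multi-frequency-synthesis mode, selected with the nterms parameter; the sketch below uses placeholder file names and parameter values.

    # Multi-term MFS: model each pixel with a Taylor expansion in frequency
    # (nterms=2 fits intensity and spectral index; nterms=3 adds curvature).
    clean(vis='target.ms', imagename='target_mtmfs',
          mode='mfs', nterms=2, reffreq='3.0GHz',
          multiscale=[0, 6, 18],
          imsize=4096, cell='0.7arcsec', niter=20000, threshold='15uJy')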

7. Wide Field Imaging

Wide-field imaging issues arise primarily from the non-coplanar nature of the VLA baselines when observing in non-snapshot mode. At high angular resolutions and low frequencies, standard imaging methods will produce artifacts around sources away from the phase center.  Faceted imaging (AIPS, CASA) and w-projection (CASA) techniques can be used to solve this problem.
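
In CASA both corrections are available through the gridmode parameter of the clean task, as in the sketch below (file names and parameter values are placeholders).

    # W-projection: correct the non-coplanar baseline (w) term during gridding.
    clean(vis='target.ms', imagename='target_wproj',
          gridmode='widefield', wprojplanes=128,
          imsize=4096, cell='1.0arcsec', niter=10000)

    # Faceted imaging: tile the field into (here) 5x5 facets instead.
    clean(vis='target.ms', imagename='target_facets',
          gridmode='widefield', facets=5,
          imsize=4096, cell='1.0arcsec', niter=10000)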

Another aspect of wide-field observing is the accurate representation of primary beam patterns, and their use during imaging. This is relevant only for very high dynamic ranges (>10,000) or when there are very strong confusing sources at and beyond the half-power point of the primary beam.  The problem is worse with a wide-band instrument simply because the size of the primary beam (and the radius at which the half-power point occurs) varies with frequency, while there is also increased sensitivity out to a wider field of view. Work is under way to commission algorithms that deal with these effects by modeling and correcting for frequency-dependent and rotating primary beams per antenna during imaging. Note, however, that most advanced methods lead to a significant increase in processing time and may not always be required; in the interest of practicality, they should be used only if imaging without them shows evidence of artifacts.

Finally, all of the above effects come into play for mosaicing, another form of wide-field imaging in which data from multiple pointings are combined during or after imaging.