Google Group Discussion

bevinashley
Posts: 9
Joined: Tue Mar 04, 2014 1:51 pm

Google Group Discussion

Postby bevinashley » Mon Mar 31, 2014 11:35 am

Below I summarize the major points made via the transient working group discussion. If you would like to be included in the email thread and are not currently subscribed, please sign up at: https://docs.google.com/forms/d/1r7S7xWPucBIM92cSobh_YdnQput1F0TX93O_iiA-mt4/viewform

March 24th: Shami Chatterjee
> (3) Does anybody's science require angular resolution better than 10''?

I think we should push for much higher resolution, negotiating from a starting position of 1" or better. In crowded Galactic fields, 10" is not good enough for follow-up to identify counterparts, or even to firmly conclude that a source is compact, as opposed to an evolving and expanding jet or ejecta shell. Counterpart identification will be a fundamental problem with ASKAP surveys, for example, and if higher resolution is a physical possibility, we would be remiss to give it up.
(And correspondingly, I'd also favor the Galactic plane as a target rich environment compared to Stripe 82 etc.)

> (6) Should we forgo time-consuming X band observations for deeper S and C band observations in the Medium Tier survey.

Not surprisingly, my feeling is yes, higher sensitivity at lower frequencies is better. What will we gain from a non-contemporaneous 10GHz survey that we wouldn't get from (much cheaper) targeted follow-up of interesting transient events?

March 24th: Joe Lazio
- Point #1 discusses the value of "wide-field" images. The Extragalactic WG is discussing both an ALL-SKY and WIDE component. I think that we want to be clear that an "all VLA sky" survey is what is needed to serve as a reference image (unless we want to argue that the only interesting transients in the future are going to happen in the area that VLASS-WIDE covers).

- There's essentially no science in this document. There are some statements about determination of rates and characterization of the radio sky, but very little detail. There are already some nice plots showing limits on transient rate surface density, which seem to amount to a characterization of the radio transient sky. Are there any classes of transients that could *only* be detected by a VLASS-class survey? If not, why can't "we" just propose some "large" PI-driven project?

- I haven't tried to work the details, but is the "Tier 1" survey feasible? We'd want 4600 hr spread over 3 yr. That's about 1500 hr/year in a particular configuration. Let's see. The VLA survey speed around S band is something like 15 deg^2/hr. Thus, it takes about 700 hr to cover 10,000 deg^2 (ignoring any overheads). That 700 hr requires 30 days of observing, maybe 60 days with overhead. I suppose that's feasible. However, that brings me back to my question about science. People have gotten 700 hr of VLA time. Why not reduce the amount of sky covered (3000 deg^2?) and propose for some "large" project?
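Joe's feasibility arithmetic can be written out as a quick sketch. The survey speed and the factor-of-2 overhead are his rough assumptions, not official values; his email rounds the results to ~700 hr / 30 d / 60 d.

```python
# Quick check of the Tier 1 arithmetic above (rough assumptions from the email).
survey_speed = 15.0   # deg^2/hr at S band (approximate)
area = 10_000.0       # deg^2 target coverage

on_source_hr = area / survey_speed            # pure on-source time
days_no_overhead = on_source_hr / 24          # continuous observing
days_with_overhead = 2 * days_no_overhead     # rough factor-of-2 overhead

print(f"{on_source_hr:.0f} hr on source -> ~{days_no_overhead:.0f} d, "
      f"~{days_with_overhead:.0f} d with overhead")
```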

To my mind, the most important thing from the transient perspective is the "reference all-sky image" to serve as a baseline for transient hunting for the next decade or more.

March 24th: Cristina Romero-Canizales
I second Shami. I think we should aim for something better than what was
offered with FIRST (resolution~5"). Note that there is already a radio
survey at 1.4 GHz in A-configuration deeper than FIRST (Hodge+11) covering
Stripe 82, meant as a pilot study for future observations.

True that transient sources are unresolved for all configurations, but
still, we want to avoid confusion and be able to, e.g., disentangle a SN
from its host. Additionally, a better resolution will allow a better
matching with SDSS (as the resolution is about 1") and possibly will also
help find GAIA counterparts... 10" would be hopeless here.

March 24th: Gregg Hallinan in response to Shami's comments:
I completely agree with your comments regarding resolution. However, with a view to providing meaningful information when weighing up the various science cases, 10" is defined as the minimum resolution for which transient science can be achieved. This allows resolution to be traded off against other parameters, such as brightness temperature. However, every increase in resolution benefits transient science, and providing specific examples, such as the ones you cite, helps us push this case forward. Furthermore, I think high resolution should be favored by the VLASS, as that is where it can achieve science distinct from other surveys.

The Galactic plane is a target rich environment for Galactic transients, but not so much for extragalactic transients :-)

I also agree that more time at S and C band might win out, at the expense of X band.

March 24th: Geoff Bower
The cadence, sensitivity, and area of the survey need to be coupled to specific science objectives and object classes, which I don't see here. I don't think it's sufficient at this point to argue in favor of characterizing the generic rate of transients. I worry that the wide survey defined here will just give a list of potential events that we can't say anything about in detail --- we won't have a timescale for evolution or a good enough localization to know if extragalactic objects are nuclear or in the outskirts of the galaxy. It seems like we require resolution of ~1 arcsec to carry out the goals of separating the different kinds of extragalactic transients. I would argue for more epochs with a cadence of months rather than separations greater than 1 yr. One can always integrate over multiple epochs in order to obtain the longer-cadence information.

I do think that we should be pushing for S and/or C band. Evolution of synchrotron sources at L band is slow and the amplitudes tend to be weak. This does impose a significant cost in terms of survey area and/or sensitivity.

March 24th: Jim Cordes
some quick, initial reactions:

I strongly favor 1 arcsec-ish resolution and think it far better as a baseline number than 10 arcsec, which I think would be a mistake.

For the tier-2 survey, I think much more of the Galactic plane needs to be surveyed so 300/500 deg^2 is too small.

March 24th: Casey Law
Crazy idea: could the multi-frequency survey be done with subarrays?
That would increase the cadence at the cost of fixing the relative
sensitivity of the bands observed jointly. Ok, more importantly, it
would lose sensitivity due to missing cross correlations, but for
Galactic transients sensitivity is not as crucial. Simultaneity would
also make this interesting.

March 24th: Sara Turriziani
let me add my name to the team of supporters for high resolution: 10" is too coarse! I think a survey should make the best use of the JVLA's strengths, such as resolution and sensitivity. Somebody mentioned GAIA; I would like to add that also for Pan-STARRS, 10" would be too big to search for unique IDs.

I believe we should favor S over L, also for the survey speed.
For the cadence, I would suggest searching for a more tailored approach than the one currently in the draft, but this would depend on the kind of transients we expect to detect.

Regarding variability in general, I think we should push for N > 3 epochs in order to use variability to classify the sources; e.g., 4 epochs would be a good choice for me.

March 24th: Shami Chatterjee in response to Casey Law and Joe Lazio:
> Crazy idea: could the multi-frequency survey be done with subarrays? That would increase the cadence at the cost of fixing the relative sensitivity of the bands observed jointly. Ok, more importantly, it would lose sensitivity due to missing cross correlations, but for Galactic transients sensitivity is not as crucial. Simultaneity would also make this interesting.

Neat idea, but keep in mind that the only way the overall program gets sold is by its legacy value. I liked Joe's formulation a lot:

> The most important thing from the transient perspective is the "reference all-sky image" to serve as a baseline for transient hunting for the next decade or more.

With subarrays, the quadratic loss in sensitivity at each band would really hurt legacy value - in my opinion, much more so than the gain from simultaneity.
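The "quadratic loss" can be quantified under a simple assumption: point-source thermal noise scales as 1/sqrt(N_baselines), and the number of baselines grows as N(N-1)/2, i.e. roughly as N². A sketch for a hypothetical split of the VLA into three equal subarrays:

```python
import math

def image_noise_ratio(n_full: int, n_sub: int) -> float:
    """Point-source noise penalty of a subarray relative to the full array,
    for identical antennas and integration time: noise ~ 1/sqrt(N_baselines),
    with N_baselines = N(N-1)/2."""
    baselines = lambda n: n * (n - 1) / 2
    return math.sqrt(baselines(n_full) / baselines(n_sub))

# Hypothetical split of the 27-antenna VLA into three 9-antenna subarrays:
print(f"~{image_noise_ratio(27, 9):.1f}x noisier per subarray")  # ~3.1x
```

So each band would be observed ~3x shallower than with the full array, which is the trade-off Shami is weighing against simultaneity.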

March 24th: Assaf Horesh
I would like to raise a few points for discussion:

1. Frequency and cadence: While I agree that we probably need to focus on low frequencies in order to maximize the survey speed, as you mentioned, for a large variety of known transients (SNe, GRBs, TDEs) the emission peaks rather late at those frequencies. However, choosing a fixed cadence of 1 year can be problematic. First, a large number of "old" transients will be discovered. The emission from these transients at other, shorter wavelengths (e.g., optical, X-rays) may already have faded away. Thus, verifying the nature of these old transients and doing a multi-wavelength study will not be possible for most of them. Moreover, follow-up resources, especially in the X-ray, will probably be very limited, so it is crucial to have some idea of how old a transient is in order to prioritize targets for follow-up observations.

Also, while there are well-defined science questions that motivate a transient survey with a slow cadence, such as what the rate of obscured SNe is and what the beaming fraction of GRBs is, we should be prepared to also explore the unknowns. For the unknowns, we do not know the timescale of the radio emission or of the emission at other wavelengths. We should find a balance between the important science questions that motivate the survey and the possibility of exploring a new phase space.

We have to keep in mind that even for the known transients we would like to explore, such as SNe, there are cases in which the emission peaks at lower frequencies already a few days after explosion. This short-timescale phase space is not well explored. We should also keep in mind that most radio follow-up observations of transients in the past were performed at late stages, so we might have a biased view of what we think we know about radio emission from SNe and other "slow" transients. Maybe there is a large population of them with emission that peaks rather early.

There is certainly good motivation for both slow and fast cadences, and I'll be happy to see further discussion on this. I would argue that if there is a 3-epoch-only survey, the first two epochs should certainly be separated by a year. This will allow us to find the answers to the scientific questions you defined, and will also allow time to properly inspect and understand the data, build up a proper reference image, and be prepared for the transient search in the second epoch. At the same time, we should consider whether the 3rd epoch should be separated by a year or whether a 1-2 month separation is better, for the reasons I mentioned above.

2. Resolution: You raise an important point: we have to be coordinated with ongoing optical surveys. For that purpose, the optimal thing to do is to match the resolution of those optical surveys. I would say that ~1" resolution is what we should aim for. As you said, A config would be great for seeing whether some transients can be associated with their host centers, but I think the price we would have to pay for A config is too high. So for precise localization of some of these transients, we should instead request dedicated VLBA follow-up time as part of the survey.

3. Wide vs. Deep: Again, this is a question of prioritizing our goals and also accommodating other, non-transient science goals. I would like to raise one point regarding building a deep reference image for a future search for GW counterparts. I suspect that a reference image with an rms of 40 microJy will not be sufficient. In my mind, if future GW studies are an important enough goal, then a deeper survey is the way to go. Again, just something to think about.

March 24th: Joe Lazio
Perhaps even more important than the Transient survey definition is the Transient survey science result. What are the "legacy" outcomes that justify 1500+ hr of time being spent on a VLASS-Transient?

extragalactic incoherent sources (e.g., SNe, GRB afterglows, TDEs, …): a blind survey would remove issues of dust obscuration, but people have already proposed "large" projects for such objects, and the proposed VLASS-WIDE might accomplish much of this science naturally

extragalactic coherent sources (FRBs?): people are already conducting "large," or not even "large," projects for these objects, and it seems difficult to justify this topic being a focus of a legacy survey until it can be demonstrated that they really are xgal (viz. recent Kulkarni et al. tome)

Galactic incoherent sources (e.g., novae, ….): much the same comments as for xgal incoherent sources above, and the proposed VLASS-Galactic might accomplish much of this science naturally

Galactic coherent sources:
- X-ray binaries: maybe some return, but accomplished via the VLASS-Galactic?
- pulsars: will a VLASS produce a high enough yield of pulsars to warrant the survey?

March 24th: Geoff Bower
I'd like to add the possibility of going after shorter timescale transients to this program. The correlator can dump data as fast as every 5 msec with its current capabilities. This requires an additional level of resources to be able to process the data offline or in real-time, but the work that Casey and I and others have been doing with the VLA shows that this is not out of the question. The white paper we wrote on this topic is attached.

I may be misunderstanding the structure of how the surveys will proceed, but my understanding was that the transient science cases would proceed commensally with the static sky surveys. The requirements of area, frequency, and resolution overlap substantially. This drives us primarily to defining cadences that extract the maximum science in the time domain. We, of course, want to steer the static surveys in a direction that provides the most benefit to time domain science.

March 24th: Joe Lazio in response to Geoff's question about the survey:
We do have to write (or contribute to) a proposal. If people are happy with the surveys proposed from the other WGs (recognizing that we might not have had much chance to review them), then that's fine. Then the task of this group would be to perform an analysis of how the various other surveys could be optimized so as to provide the best return for various classes of transients.

March 24th: Steven Meyers:
I have posted the v1.0 draft of the transients document to the Transients & Variability WG wiki page:
https://safe.nrao.edu/wiki/bin/view/JVL ... rkingGroup


Google Group Discussion - Part II

Postby bevinashley » Mon Mar 31, 2014 11:41 am

March 25th: Tara Murphy
1) It is impossible to decide on the right survey parameters without very specific science goals. We went through this (extensive) process for VAST (ASKAP) and there are always tradeoffs. The only way of evaluating which tradeoff you should make is by aligning the outcomes with a particular science goal.

2) Will 3 epochs be enough to do science? In the first wave of blind transient surveys, having 2 or 3 epochs was sufficient to set limits on transient rates. However, with all the surveys that have been carried out in the last 5 years (ATA, VLA archival surveys, Molonglo archival, etc.), I'm not sure it is as compelling any more to have setting transient limits as a major goal.

3) Will there be near-real-time processing? This is related to my previous point, and is something we are working on with the MWA transient surveys. If the processing and data analysis doesn't happen very quickly after the data are taken, then you are essentially doing another 'archival' survey, which could be of limited scientific use, particularly as radio emission often arrives later than the emission at higher frequencies.

March 25th: Miguel Perez-Torres
Hi, everyone:

I won't be able to join today's telecon, so I'm sending you my comments on the drafted document.

As some of you have already expressed, without a detailed science case it is difficult to justify the time, frequency, and cadence for either Tier.

The minimum 10" angular resolution requested may suffice for Galactic studies, but a significantly better angular resolution (1"-3") is needed for extragalactic transient studies.

I think either Tier will be much more useful if the observations are taken within the same configuration, so that the observations have a homogeneous angular resolution.

Tier 1
======

- Is it enough to observe once per year to reach the goals in points a-d?

- If we stick to the same configuration, this will imply that we should decrease the area covered in Tier 1, or go beyond three years. With the existing literature, we should be able to get some preliminary numbers for the rates discussed in points a)-d), and from those numbers we should be able to come up with a minimum area to be covered by Tier 1.

- From a synchrotron-biased point of view, C band would seem an optimal compromise: about 1" angular resolution in B configuration; large survey speed (~7.5 sq. deg/hr); large bandwidth, which may allow for spectral-index studies in one go. The negative side is that it takes twice as much time as S band to cover the initial 10,000 sq. deg. Assuming 1500 hr/yr plus 1/3 overhead yields about 83 days in the same configuration. While that would be my preferred option, I don't see it being easily scheduled within the same session, so Tier 1 at C band may not be feasible as it stands. Rather than forgetting about C-band observations, we should consider the science case first and then decide to what extent covering most of the SDSS/FIRST footprint is of use.

- S band in B or A configuration should be of comparable use, except for the caveat about contaminating diffuse emission. Joe already ran the numbers: the 10,000 sq. deg can be covered at S band in 30 days, which with overheads should be about 40 days. So S-band observations sound more feasible. This is also the option considered by the EWG in their Tiers 1 and 2 (S band, A and/or B configurations, requesting ~1"-5"; Tier 2 also considers multiple epochs for variability studies).
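The S vs. C trade above can be checked in a couple of lines. The survey speeds and the flat 1/3 overhead are the values assumed in this thread, not official figures, and the rounding therefore differs slightly from the day counts quoted in the emails (~40 and ~83 days):

```python
# Rough Tier 1 time budget at S and C band, using the thread's survey speeds.
def total_days(area_deg2: float, speed_deg2_per_hr: float,
               overhead_frac: float = 1 / 3) -> float:
    """Calendar days of continuous observing, including a flat overhead."""
    on_source_hr = area_deg2 / speed_deg2_per_hr
    return on_source_hr * (1 + overhead_frac) / 24

for band, speed in [("S", 15.0), ("C", 7.5)]:
    print(f"{band} band: ~{total_days(10_000, speed):.0f} days incl. overhead")
```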

Tier 2
======

- We need a science case to justify the 800 sq. deg to be covered (Stripe 82 and a Galactic plane field), or any other field, especially in view of the large time request (assuming an overhead of 1/3, the total time request is about 180 days).
- S-band and C-band observations will cover almost one decade in frequency, so X-band observations are unlikely to add much. In addition, X-band observations are much more expensive in time. I don't think we need X-band observations.
- I'd suggest that the observations be taken with the same configuration. This is easily done if we request that the follow-up observations be taken not after exactly 1 year, but whenever the same configuration next comes around. This will ease any comparison.
- I suggest B-conf. (1.3" resolution at C-band, ~2.7" at S-band), but this will depend on the science goals.

March 25th: Sara Turriziani
I supposed that the idea behind a VLASS-Transients survey was to have some sort of real-time processing, so I did not ask about it before; I would like to thank Tara for addressing this point, which is crucial. If transients are the main driver for the survey, we should think about organizing a quick way to process the data after each observation, so as to 1) generate images of intensity and polarization every xxx seconds with no time gaps, and 2) monitor the images to detect something popping up! I mean something like LWA TV:
http://www.phys.unm.edu/~lwa/lwatv.html

Otherwise, I agree with Tara: it will not be a transient-devoted survey but another archival data set, to be used of course to study variability and to search for "old" transients, but nothing more than that. Of course, organizing such a processing pipeline is a point to discuss also with the technical WG, to address the data flow and the computing resources needed, especially if we want to push the timing to the shortest timescales available, as suggested by Geoffrey. Realizing a sort of "radio sky monitor" would also act as a point of reference for what will/can be done when the SKA becomes available (plus, the wider the region we can probe the better, especially if we push to the shortest timescales).

I would like to stress that the discussion on time intervals/number of epochs should be done with the class of transients we would like to probe with such a survey in mind. Of course, opening a new region of parameter space can give us the chance to detect something in the realm of the current unknown; however, I think we should focus first on known transient timescales. For everybody's reference, I attach to this email Table 1 from Ofek et al. 2011, ApJ, 740, 65, as a review of slow transient surveys done so far in the GHz band. They mainly set upper limits on transient rates; only 3 surveys detected significant numbers of transients. We should think about what we can probe better with respect to those and choose the survey setup accordingly. I use the term slow transients because those surveys all probed transients > 1 s (for me, fast transients are < 1 s).

Let's recall that slow transients are primarily explosive events or outflows (often synchrotron emission, occasionally thermal). Known classes of slow transients so far include:
-Novae
-Cataclysmic Variables (CVs)
-X-ray Binaries (XRBs)
-Magnetar outbursts
-Supernovae (SNe)
-Active Galactic Nuclei (AGN)
-Tidal disruption events (TDEs)
-Gamma-ray bursts (GRBs)

Plus, we should not forget scintillation (non-intrinsic variability) and Extreme Scattering Events (which can allow us to probe the ISM). What should we expect? We can estimate the rates of known physical phenomena, so anything with substantially different rates may be a new source class...

But I would like to stress that rapid, accurate localization and follow-up are critical, in order to sample the full spectral energy distribution of the transients and classify them.

March 25th: Gregg Hallinan in response to Sara
1) I agree. This point has been raised many times via e-mail. However, the documents sent around yesterday were intended to provide brief survey definitions, not to fully elaborate on the science again. This was done in the white papers, specifically those written by Law, Hallinan, Chatterjee, Kamble (et al.). These can be found here - https://science.nrao.edu/science/survey ... ite-papers. For example, we give detailed science goals with an assessment of trade-offs for surveys of different sizes in the white paper written by our group.

2) VLASS will not be setting transient limits. Unlike most previous blind surveys, it is sufficiently deep to easily detect the expected populations of Galactic and extragalactic radio transients, rather than relying on serendipity. The attached figure, modified from Frail et al. 2012 and similar to those included in the white papers, emphasizes this for the extragalactic radio transient population. Regarding the number of epochs, if you want to blindly sample the radio sky on multiple timescales, then multiple epochs spaced logarithmically are preferred. This is currently what is defined for the Medium tier survey, which actually has 6 epochs, with 5 science epochs spaced 1 hour, 1 day, 1 week, 1 month, and 1 year after an initial reference epoch. However, I think a smaller number of epochs spaced further apart, with resources available for follow-up, is much more efficient. The number of radio transients at maximum brightness is essentially constant; the question is how to most efficiently detect and characterize this population. Therefore, if you just want to maximize the number of blind detections per epoch on all timescales, then you space the epochs apart by a time greater than T, with T being the time to maximum brightness for the slowest known population of transients (likely SNe). However, you *must* have time available for multi-epoch, multi-frequency follow-up of each radio transient to properly characterize it in time and frequency after initial detection. This allows you to characterize the timescale, despite the large gap between the science and reference epochs.
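For reference, a minimal sketch of what "spaced logarithmically" means for the Medium tier cadence quoted above, taking the month and year as nominal 30 and 365 days (illustrative only):

```python
# Medium-tier science-epoch offsets after the reference epoch, in days:
# ~1 hour, 1 day, 1 week, 1 month, 1 year.
offsets_days = [1 / 24, 1, 7, 30, 365]

# Successive spacings grow by factors of ~4-24, i.e. the sampling is
# roughly uniform in log(time) rather than linear:
ratios = [b / a for a, b in zip(offsets_days, offsets_days[1:])]
print([round(r, 1) for r in ratios])  # [24.0, 7.0, 4.3, 12.2]
```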

3) The necessity for real-time processing is emphasized in a number of the white papers, as well as the current transient survey definition. I agree that it is essential.

March 25th: Tara Murphy
The rapid multi-wavelength follow-up is key, regardless of the number and spacing of the epochs.

March 26th: Kunal Mooley
It seems that the popular cases are:
Tier 1: All-sky; L/S band; A/B config; months and years cadence
Tier 2: ~1000 sq deg; Galactic plane+Stripe82?; S/C band(s); A/B config;

I would like to add a vote for S band instead of L band for the all-sky component of the survey, since the former will enable discovery of extragalactic afterglow transients relatively early on in their evolution. However, L band is also compelling, given the possibility of VLA-FIRST being treated as the reference epoch (but the detection threshold of the FIRST source catalog is ~0.9 mJy, I believe :( ).


1) Survey Area:
a) Arguments in favor of all-sky survey (>10,000 sq deg):
--- Future aLIGO counterpart searches (eg NS-NS mergers)
--- Usefulness to the wider astronomy / multiwavelength community
--- Increased probability of finding transient populations distributed as per Euclidean (+ maximizing search volume for transients in the local Universe)

b) Arguments in favor of medium survey (few hundred to ~1,000 sq deg):
--- Search for multiwavelength counterparts in Stripe82-like regions is easier
--- Galactic transients case is very interesting
--- Legacy value?


2) Survey Depth:
a) Arguments in favor of deep survey (coadd rms ~30uJy):
--- Good reference epoch for future aLIGO counterpart searches

b) Arguments in favor of shallow survey (coadd rms ~100uJy):
--- Easier for telescope scheduling / execution
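For context on the two depth options above, coadd depth under independent Gaussian noise scales as 1/sqrt(N epochs). The per-epoch rms values in the example are illustrative, not from the thread:

```python
import math

def coadd_rms(per_epoch_rms_uJy: float, n_epochs: int) -> float:
    """Thermal-noise rms of an N-epoch coadd, assuming independent
    Gaussian noise per epoch (ignores confusion and calibration errors)."""
    return per_epoch_rms_uJy / math.sqrt(n_epochs)

# e.g. three hypothetical ~50 uJy epochs coadd to ~29 uJy, near the
# 'deep' option above:
print(f"{coadd_rms(50, 3):.0f} uJy")  # 29 uJy
```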


3) Observing Band:
a) Arguments in favor of lower frequency bands (L/S):
--- Favorable for coherently emitting Galactic transients
--- Maximum survey speed

b) Arguments in favor of C band or higher:
--- Find extragalactic afterglows sooner in their evolution (+ opportunity for high frequency followup of transient?)
--- Rates of Swift J1644-like events and SNe increase with frequency for a given survey depth and area


4) Survey Cadence:
--- Logarithmic sampling in time is ideal
--- 1 hour, 1 day, 1 week, 1 year cadences favorable for Galactic
--- Cadence of months to years favorable for Extragalactic (with the drawback of finding only late-time emission from transients)


5) Array config
Need A/B array for good localization and avoiding source confusion. No doubt about that one.

6) Observing mode:
On-The-Fly observing would by far be the winner given the significantly decreased overheads. OTF scan speed and possible integration times need to be revisited.

7) Near-real-time processing
Near-real-time processing is key. 3-12 hours of processing time is required per observation of a few hours (based on our experience with the 300 sq deg Stripe 82 transient survey).

8) Going after known slow transient populations
Galactic: Novae, CVs, XRBs, YSOs, BDs, NSs (pulsars/magnetars/etc), flares from active stars
Extragal: Extreme AGNs?, SNe, TDEs, short and long GRB afterglows

Rates: Extragalactic (adapted from Frail et al. 2012; caution: timescale axis is collapsed)

March 26th: Casey Law in response to Kunal's summary of the Transient Working Group telecon
That summary is particularly valuable because our next draft proposal needs to include science-based "tripping points" in the definition. The idea is to have a bulleted list of science goals and required minimum values of survey parameters to achieve that science. This summary is most of the way there.
If others have ideas on minimal survey parameters (configuration, band, field size, etc.) for their science, send them in now.


Google Group Discussion - Part III

Postby bevinashley » Mon Mar 31, 2014 11:47 am

March 27th: Gerry Doyle
sorry that I could not join the telecon; in addition to the 1 hr, 1 day, 1 week, 1 year cadence, could we have time to follow up a transient at multiple epochs, in order to give us some idea of the character of the object? Would multiple frequencies be too difficult?

March 27th: Geoff Bower in response to Gerry
Follow-up seems like it could be the domain of independent ToO proposals. The number of transients will probably be fairly small.

March 27th: Geoff Bower in response to Kunal's summary (see Google Group Discussion Part II)
It's important to add fast transients to the science case with the caveat that they are to be done on a best efforts basis given the available technology and computing.

One minor comment on your plot: it looks like you are using 5-sigma detection thresholds. This probably isn't suitable for a survey of this scale. The number of independent trials will be ~10^12 for the all-sky survey, necessitating something like a 10-sigma threshold.
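Geoff's point can be made concrete with the Gaussian tail probability. The ~10^12 trial count is his rough estimate, and real image noise is not perfectly Gaussian, so the numbers are only indicative:

```python
import math

def expected_false_positives(n_trials: float, sigma: float) -> float:
    """Expected one-sided Gaussian-noise exceedances above a sigma threshold."""
    p_tail = 0.5 * math.erfc(sigma / math.sqrt(2))
    return n_trials * p_tail

# A 5-sigma cut on ~10^12 trials leaves a flood of spurious 'detections';
# ~10 sigma suppresses them entirely:
for s in (5, 7, 10):
    print(f"{s:>2} sigma: ~{expected_false_positives(1e12, s):.2g}")
```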

March 27th: Edo Berger
Dear Transienters,

I've been trying to follow the interesting email chain, and at the same time formulate my general thoughts on the VLASS idea. In particular, I wanted to step back from the very specific questions raised above (resolution, frequency, etc.) to a broader discussion of whether this is a scientifically productive use of VLA time. Below are some of my general thoughts and impressions - I generally try to avoid sending long emails, but in this case I wanted to clearly state my thoughts in a comprehensive way.

As someone who has worked on multi-wavelength studies (including radio) of a wide range of transients (long GRBs, short GRBs = NS-NS binaries, supernovae, tidal disruption events), I have to say quite bluntly that I am not in favor of spending thousands of VLA hours on a blind radio transients survey. First, this is clearly a zero-sum game that will negatively impact the on-going highly productive work being done at the VLA on radio follow-up of transients from other regimes (gamma-rays, X-rays, optical). Second, I think that the scientific motivation for the transients component of VLASS is weak (as I outline below). I reach the same conclusion in the context of my other line of work, on radio studies of low mass stars and brown dwarfs - observations of known M/L/T dwarfs are much more productive than blind searches for flares.

I reached these conclusions based on the following chain of facts:

[1] For the timescales being discussed here (days to years) the sources are going to be synchrotron emitters. Such sources obey the well-known brightness temperature limit ~10^12 K (unless they are relativistic like on-axis GRBs, but such events are far too rare to be found in a blind survey like this). The brightness temperature limit imposes a limit on the combination of source luminosity and timescale, such that extragalactic sources (>10-100 Mpc) at the flux limit of the survey (10-sigma ~ 1 mJy) will have timescales of >days-weeks. So, at the very least doing a survey with a faster cadence than ~month seems unproductive.
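Point [1] can be turned into numbers via the light-crossing argument: a source varying on timescale t can be no larger than ~c*t, which together with the brightness temperature limit fixes a minimum timescale for a given flux density and distance. The sketch below assumes a uniform-disk solid angle and T_B = 10^12 K; the specific values are illustrative, not from the email:

```python
import math

C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K
MPC = 3.086e22   # metres per Mpc

def min_timescale_days(flux_jy: float, freq_hz: float, dist_mpc: float,
                       tb_limit: float = 1e12) -> float:
    """Minimum variability timescale for an incoherent synchrotron source,
    from T_B = S lam^2 / (2 k Omega) with Omega = pi * (c t / d)^2."""
    s_si = flux_jy * 1e-26   # Jy -> W m^-2 Hz^-1
    lam = C / freq_hz        # wavelength, m
    d = dist_mpc * MPC       # distance, m
    t_sec = (lam * d / C) * math.sqrt(s_si / (2 * math.pi * K_B * tb_limit))
    return t_sec / 86400

# A 1 mJy source at 100 Mpc observed at 3 GHz: a timescale of days, as argued.
print(f"~{min_timescale_days(1e-3, 3e9, 100):.1f} days")
```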

[2] The known source populations (GRBs, SNe, TDEs) are much easier to discover at other wavelengths (GRBs in gamma-rays, SNe/TDEs in optical) and then follow-up in the radio. This is partly because deep wide-field imaging is available in gamma-rays and optical, while the VLA is by no means a wide-field instrument. Moreover, its relative sensitivity in the context of transients is much poorer than that of gamma-ray satellites and optical telescopes like Pan-STARRS and PTF. For example, this is apparent in the fact that radio emission has essentially never been detected from SNe at >100 Mpc, while optical surveys easily find them to z~0.5.

[3] Discovery in optical/gamma-rays with follow-up in radio is also a more logical and productive approach, since the emission evolves from high to low frequency. This means that for any sources discovered through their radio emission we will have no information in optical/X-rays/gamma-rays (since that emission will have faded away much earlier), and hence no way to classify what they are! These are all synchrotron sources, and so their light curves are broadly identical (and the subtle differences will not be apparent in sparsely sampled data). So we will not even be able to say if we're looking at a SN, off-axis GRB, TDE, etc. (or maybe even an AGN flare). Essentially, all one gets from radio synchrotron emission is an energy scale and a measure of the surrounding density (if the redshift can be measured from a coincident galaxy). This is much poorer information than in the follow-up case, where the combination of radio, optical, X-rays, etc. is quite powerful.

[4] Even in follow-up work the radio detection rate is low, which means that discovery through radio emission is an inefficient process. The detection of radio emission from SNe approaches 100% only within ~10-15 Mpc; by ~100 Mpc that rate is <10%. Similarly, the radio detection rate for GRBs (even though they are on-axis and hence particularly bright) is only ~20-30%. The radio detection rate for TDEs is so far ~10% (only one has been detected) and could potentially be much lower. These fractions hold for much deeper observations than those being envisioned here (<0.1 mJy instead of ~1 mJy). At ~1 mJy the follow-up detection rates for the various transients are generally <10%. This means two things: First, the blind radio detection rate will be lower than expected compared to the optimistic diagrams I've seen so far. Second, while radio non-detections of transients from other wavelengths are actually scientifically useful, radio non-detections in a radio survey are of course meaningless.

[5] The logN-logS plot shown in this email chain is misleading and much too optimistic. First, it assumes 100% detection rate for TDEs, which we already know not to be the case (it is <10%). Second, it assumes high energies and densities for NS-NS mergers, which observations of short GRBs show not to be the case. So, the theoretical lines should be shifted downward significantly. In addition, a realistic limit is ~1 mJy (10-sigma, not 3 or 5 sigma). Finally, the logN-logS plot gives the snapshot number of sources, but they do not come with a "transient" label, so this means that additional epochs are required to show variability, and hence the true discovery rate (let's say, at least 3-5 detections on a light curve) will be much lower. Thus, we need to think in the context of at most ~few detected events per 10^3 deg^2 - this is quite depressing to me considering the level of effort being proposed, and the minimal amount of science that can be extracted from radio data alone.

[6] In the context of studying flares from ultracool dwarfs, to date only sources within ~20 pc have been detected despite searches of large samples that extend to larger distances. Therefore, blind all-sky searches are a highly inefficient approach. Instead one should devote more VLA time to targeted studies of the known ultracool dwarf populations. In any case, these flares generally last <1 hour, so a survey with >1 day cadence is poorly matched to this scientific goal.

Bottom line: radio studies of transients in follow-up mode have been highly productive because the identity of the sources is actually known (GRB, SN, TDE, etc.), the non-radio data provide deep insight beyond just the classification, and the observations can go much deeper than in an all-sky blind survey. However, a blind search for radio transients as proposed in the VLASS is fraught with the disadvantages I listed above. As a result, I would argue that as a community we should convince NRAO to spend ~10^3 hours on time-domain radio follow-up rather than on a blind survey that will yield at most a few poorly-characterized transients.

I would appreciate people's responses to these thoughts. And please, do not come back with "but what about the unknown unknowns" - we are all thinking astrophysicists (unlike Rumsfeld) and anyway that is not proper scientific justification.

March 27th: Joe Lazio in response to Edo
Thx for the very nice summary.

Let me probe a bit at one of the questions that you pose. Reminder, notional ALL-SKY is 30,000 deg^2 and notional WIDE is 3600 deg^2.

I believe that Steve stated that it would be a lot easier to schedule ALL-SKY if it were broken into multiple (3?) epochs. I think that there are two options:
1. Cover 1/3 of sky to 0.1 mJy/beam rms in each epoch. Not very useful because there's no way to determine which objects might be potential transients. (Each pointing position has only a single epoch.)

2. Cover entire sky to 0.17 mJy/beam rms in each epoch and co-add to achieve final 0.1 mJy/beam rms. Effective detection threshold is probably no better than 1 mJy. Yield ~ 30 NS-NS merger candidates and ~ 30 orphan GRB afterglows. Seems like a reasonable strategy. ALL-SKY is conducted for static sources, with added value from time domain.


WIDE requires multiple epochs, but how many? I think that I've seen 5 epochs suggested. Suppose each epoch has a noise level of about 70 microJy/beam. Effective detection threshold might then be 0.5 mJy. Yield ~ 5 NS-NS merger candidates and ~ 5 orphan GRB afterglows. Nice, but perhaps not particularly exciting?
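The noise arithmetic behind both the ALL-SKY option 2 and the WIDE scenario is just quadrature co-addition of equal-noise epochs; a quick sanity check on the numbers above:

```python
import math

def coadded_rms(per_epoch_rms, n_epochs):
    """rms after co-adding n equal-noise epochs (assumes Gaussian noise)."""
    return per_epoch_rms / math.sqrt(n_epochs)

# ALL-SKY option 2: 3 epochs at 0.17 mJy/beam each
print(coadded_rms(0.17, 3))   # ~0.098 mJy/beam, i.e. the 0.1 mJy target
# WIDE: 5 epochs at 70 microJy/beam each
print(coadded_rms(70, 5))     # ~31 microJy/beam in the stacked image
```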

Ok, what do these numbers mean? The areal density-flux density plot must make assumptions about beaming angles and/or the density of the medium surrounding the transient (which I'm going to have trouble looking up 'cuz I'm on my iPhone). Can we turn these numbers of transients into interesting constraints? If we find 30 transients, do we improve the constraints on beaming angles by a useful factor (3x, 5x, 10x)?

More generally, we need to state that the survey strategy will provide a sufficient yield to make inferences or place constraints on interesting sources and do so in a way that does not invite the review panel to ask why half the number of transients is not acceptable.

Also, a potential weakness with all of the above is that it is not clear that any transient found can be identified, because any counterpart at other wavelengths might have faded by the time the radio transient is identified.

(Apologies if some of this is in white papers already, but it should be pulled out to make it obvious.)

March 28th: Peter Williams in response to Edo
I think Edo makes an important argument that this is a real, profound
difficulty. By looking at host galaxies, fading counterparts in other
bands, archival data, etc., you'll have some probabilistic handle on
what these things are, but the sample size isn't going to be big enough
to make up for the lack of detailed information about each event.

We'll learn some things -- but the opportunity costs have to be weighed.
VLASS will mean the rejection of a *lot* of our PI proposals. Assuming
that VLASS occurs in some kind of multi-tier, ~10000-hour incarnation, I
tend to agree with Edo that some kind of 10% (even 5%) "Transient Tier"
for well-thought-out, targeted studies will have a *lot* more payoff
than tweaking the design of the All-Sky Tier.

March 28th: Casey Law in response to Edo and Peter
A "transient tier" could be interesting. Just to be clear: are you
referring to radio follow-up of VLASS-detected transients or some
broader trigger definition (all SNe within 10 Mpc, etc.)?
The former idea is something that we've been edging towards in our
white papers from the beginning. Gregg and I have been independently
working toward real-time transient detection on the slow and fast time
scales. Our white papers have requested computational support for that
during the VLASS. However, the follow-up time was not explicitly
requested, so a transient tier is new in that regard.
The latter idea (following up some broader set of triggers) sounds
too much like the science of the general observer. That is politically
not likely to fly, since the VLASS is meant to be tackling problems
that aren't accessible through the normal time allocation process.
I'd like to think more about VLASS follow-up of VLASS transients as
a transient tier. What if, instead of designing a transient-specific
survey region, we made a priority to support the real-time detection
and follow-up of VLASS transients? I know the staff at the AOC are
interested in the real-time transient detection on fast and slow
timescales. Our job may simply be to advocate that the final VLASS
survey be designed to make the radio follow-up useful.

March 31st: Gregg Hallinan in response to Edo, Peter, Casey

You raise some important points and it would be good to get the group engaged in this discussion.

First and foremost, the question we are addressing here is not whether we should spend thousands of hours on a blind transient survey. The VLA Sky Survey (VLASS) is an initiative to explore the science and technical opportunities of a new centimeter-wavelength survey, with a broad range of scientific goals to benefit the wider community. The possibility of such a survey is motivated by the new capability of the upgraded VLA and the scientific return of previous surveys such as NVSS and FIRST. Within that framework, we have been asked to assess whether transient science will play a key role in defining that science case or, at the very least, establish whether transient science can be conducted commensally in a survey largely defined by other science goals. Your points regarding the impact of the VLASS on the ongoing science observing of the VLA are very valid. However, that is not the remit of our discussion. We have been asked to help define the science case for a possible VLASS which will eventually manifest as a proposal that will undergo a full peer review process to assess whether such a survey is timely and scientifically justified. Success is by no means guaranteed. If this proposal is weak, or does not clearly demonstrate the support of the wider community, it will certainly fail.

Regarding your other points:

1) Each epoch of the wide survey is defined to be > 1 year apart to ensure that we are sensitive to extragalactic and Galactic transients on all timescales. The cadence of the Galactic plane survey would be shorter, but the targeted population obviously does not fulfill the criteria you describe.

2) Your broad brush statement does not address the specific questions outlined for blind radio transient searches. For example, we propose to detect GRB *orphan afterglows* in order to constrain the inverse beaming fraction of these events. As you know better than I, it is impossible to detect such off-axis events at gamma-rays. The constraints via optical searches have been weak and ambiguous thus far (needle in a haystack). Radio observations, at the depth of the VLASS, with high spatial resolution and dedicated follow-up are a compelling means to tackle this problem. Finally, I should note that the known source population is not restricted to extragalactic transients.

3) Observations of radio transients in the VLASS will not suffer from the ambiguity that you suggest. As you pointed out, the VLA is only really sensitive to GRBs, SNe, etc. in the local universe. Classification does not come from detecting the associated optical transient, but rather from localization within a host galaxy that is relatively easy to detect. With an established distance, localization, calorimetry and radio SED, differentiating between a GRB and an SN afterglow, for example, becomes perfectly reasonable.

4) It would be helpful if you would be more specific on this point. Are you suggesting, for example, that only 20-30% of on-axis GRBs produce radio emission, or that only 20-30% are close enough to produce detectable radio emission? The latter would be factored into estimates of GRB radio afterglow rates.

5) I agree with you that the TDE numbers presented in Frail et al. 2012 are now known to be too optimistic. The case is actually worse than this at frequencies higher than L band, which is the frequency in question for that paper. The log N-log S plot shows the instantaneous number of afterglows for various classes of extragalactic transients. However, this number is representative of the rate times the duration, which is lower at higher frequencies. Simply put, the transients don't wait around as long at higher frequencies, so, while the rate remains approximately the same, an instantaneous snapshot at lower frequencies shows more detectable sources than at higher frequencies. When a survey has been better defined, these plots should be properly remade for the correct frequencies and rates for input into a proposal. The return will still be rich. Even assuming the factor of 10 that you suggest, the VLASS should still detect >100 TDEs! Finally, the detection limit used is 7.5-sigma, which is perfectly reasonable for the number of independent beams being measured.

6) I do not see how your analogy of ultracool dwarfs is generally applicable to all classes of radio transients. How does one do a targeted search for GRB orphan afterglows or Fast Radio Bursts?

March 31st: Joe Lazio in response to Edo, Peter and Gregg
I interpreted Edo's message very much in this sense, to wit: The yield of (slow) radio transients from the VLASS is unlikely to be high enough to justify radio transients driving the definition (or definitions) of VLASS, i.e., radio transients do not play a key role in defining the science case. However, radio transient follow-up would be greatly aided by an ALL-SKY survey at 100 microJy/beam rms because such a "reference epoch" would enable more efficient follow-up.

The alternate question is whether radio transients can be conducted commensally, if the VLASS is constructed properly (i.e., multi-epochs). Here, I think that our answer remains uncertain. Both Edo and I suggest that the yield of (slow) radio transients would be a few, to at most a few tens. What I've not seen described yet are the implications of that yield.

Consider orphan GRB afterglows. I don't have the current constraints on beaming angles at my fingertips, but suppose that the VLASS finds 10 candidate orphan GRB afterglows. What does that imply about beaming angles? Will we be able to improve the current constraints by a factor of 2? 3? 10?


Google Group Discussion - Part IV

Postby bevinashley » Mon Mar 31, 2014 12:19 pm

March 31st: Edo Berger in response to Gregg

Thanks for your responses. I am happy that the discussion has broadened beyond the survey details. I would like to reiterate a few points in the responses to your responses below.

First and foremost, the question we are addressing here is not whether we should spend thousands of hours on a blind transient survey. The VLA Sky Survey (VLASS) is an initiative to explore the science and technical opportunities of a new centimeter-wavelength survey, with a broad range of scientific goals to benefit the wider community. The possibility of such a survey is motivated by the new capability of the upgraded VLA and the scientific return of previous surveys such as NVSS and FIRST. Within that framework, we have been asked to assess whether transient science will play a key role in defining that science case or, at the very least, establish whether transient science can be conducted commensally in a survey largely defined by other science goals. Your points regarding the impact of the VLASS on the ongoing science observing of the VLA are very valid. However, that is not the remit of our discussion. We have been asked to help define the science case for a possible VLASS which will eventually manifest as a proposal that will undergo a full peer review process to assess whether such a survey is timely and scientifically justified. Success is by no means guaranteed. If this proposal is weak, or does not clearly demonstrate the support of the wider community, it will certainly fail.

I believe that as radio astronomers and users of NRAO facilities it is incumbent upon all of us to push for the best use of the VLA. Right now it is my sense that various science cases (not only the time-domain, but also the Galactic and extragalactic cases) are being shoehorned into a predefined notion of a survey. I would rather see the science drive the observations and not the other way around. It has also not been clearly demonstrated that the return on investment in NVSS/FIRST has been adequate relative to the loss of other science. So, while I understand that this group has been asked to explore time-domain science with VLASS, I would like to see us go back to NRAO with our best ideas for a large VLA science program. In my mind, a ~1000 hour investment in follow-up of transients from other wavelengths makes a lot more sense.

Currently, the case for the VLASS time-domain component is highly qualitative. This is unfortunate since we actually know quite a lot already (both in a positive and negative sense). So, one should put together a robust case that is based on existing knowledge rather than guess that things will be great.

Regarding your other points:

1) Each epoch of the wide survey is defined to be > 1 year apart to ensure that we are sensitive to extragalactic and Galactic transients on all timescales. The cadence of the Galactic plane survey would be shorter, but the targeted population obviously does not fulfill the criteria you describe.

2) Your broad brush statement does not address the specific questions outlined for blind radio transient searches. For example, we propose to detect GRB *orphan afterglows* in order to constrain the inverse beaming fraction of these events. As you know better than I, it is impossible to detect such off-axis events at gamma-rays. The constraints via optical searches have been weak and ambiguous thus far (needle in a haystack). Radio observations, at the depth of the VLASS, with high spatial resolution and dedicated follow-up are a compelling means to tackle this problem. Finally, I should note that the known source population is not restricted to extragalactic transients.


As also pointed out by Joe, the orphan afterglow question needs to be quantified. In addition, I am doubtful that one could really distinguish orphan afterglows from other types of transients (see below) so it is not clear that radio will be any less ambiguous than the existing optical results. In addition, if all we are trying to do is go after known events such as GRBs, SNe, TDEs (and I would argue that any other currently unknown transients are going to be much more rare) then I would still argue that there is a more efficient way to find these than in the radio.

3) Observations of radio transients in the VLASS will not suffer from the ambiguity that you suggest. As you pointed out, the VLA is only really sensitive to GRBs, SNe, etc. in the local universe. Classification does not come from detecting the associated optical transient, but rather from localization within a host galaxy that is relatively easy to detect. With an established distance, localization, calorimetry and radio SED, differentiating between a GRB and an SN afterglow, for example, becomes perfectly reasonable.

This is a very bold statement given that we haven't really seen any radio-discovered transients. Localization within the host is similar enough for most sources that even with high resolution it won't be useful on a case-by-case basis (after all, the differences in location are only borne out in large statistical samples, not in individual events). For example, TDEs are expected to be nuclear, but at the relevant redshifts the typical localization accuracy is ~1 kpc. The SN rate within 1 kpc of a galaxy nucleus is higher than the TDE rate, so how would one classify based on location? Similarly, both GRBs and core-collapse SNe occur in/near star-forming environments. Calorimetry and SEDs require a lot more data than what we're discussing here. Finally, the whole premise of this classification argument assumes that we already know the properties of the sources we're going after - if that's the case, then what's the point of the survey?

4) It would be helpful if you would be more specific on this point. Are you suggesting, for example, that only 20-30% of on-axis GRBs produce radio emission, or that only 20-30% are close enough to produce detectable radio emission? The latter would be factored into estimates of GRB radio afterglow rates.

I am saying that even in the ideal case of triggers from other wavelengths, when the object classification is known, the detection rates in the radio are low due to the limited sensitivity of even the VLA. The fact is that the dynamic range between the VLA sensitivity threshold (~30 microJy for direct follow-up) and the brightest detections is not large. If we factor in a VLASS limit of ~1 mJy, then this will reduce the detection rates even further. Since the event rates are fixed (VLASS cannot detect more GRBs or SNe than exist), this means a low detection fraction of a few percent. The fact is that the SN and GRB radio luminosity functions are reasonably well known (though not perfectly), so all of this can be estimated realistically. The logN-logS plots are much too optimistic.

5) I agree with you that the TDE numbers presented in Frail et al. 2012 are now known to be too optimistic. The case is actually worse than this at frequencies higher than L band, which is the frequency in question for that paper. The log N-log S plot shows the instantaneous number of afterglows for various classes of extragalactic transients. However, this number is representative of the rate times the duration, which is lower at higher frequencies. Simply put, the transients don't wait around as long at higher frequencies, so, while the rate remains approximately the same, an instantaneous snapshot at lower frequencies shows more detectable sources than at higher frequencies. When a survey has been better defined, these plots should be properly remade for the correct frequencies and rates for input into a proposal. The return will still be rich. Even assuming the factor of 10 that you suggest, the VLASS should still detect >100 TDEs! Finally, the detection limit used is 7.5-sigma, which is perfectly reasonable for the number of independent beams being measured.

I think the TDE detection rate could be much much lower than expected - all we have is a single event with a radio detection, and that event is quite unique. I would be hesitant to extrapolate from that event. Yes, the rate could be ~100, but the errorbar on that is roughly +/-100 events. As for 7.5-sigma, that's okay for Gaussian noise, but there is no way that the VLASS at L/S-band will achieve Gaussian noise across ~10^4 deg^2. Systematic effects due to bright sources, etc. will require at least 10-sigma. We already know this based on the previous claimed radio transients.
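For what it's worth, the purely Gaussian side of the 7.5-sigma vs. 10-sigma question is easy to quantify. The survey area and beam size below are illustrative assumptions, and the calculation deliberately ignores the systematics (bright-source sidelobes, etc.) that are the actual point of contention:

```python
import math

def expected_false_positives(sigma, area_deg2, beam_fwhm_arcsec):
    """Expected number of one-sided Gaussian noise excursions above
    `sigma`, treating each synthesized beam as an independent sample."""
    p_tail = 0.5 * math.erfc(sigma / math.sqrt(2.0))
    beam_area = (math.pi / 4.0) * beam_fwhm_arcsec**2   # arcsec^2, rough
    n_beams = area_deg2 * 3600.0**2 / beam_area
    return p_tail * n_beams

# ~30,000 deg^2 at an assumed 2.5" resolution: at 5 sigma the Gaussian
# expectation is tens of thousands of false positives, while at 7.5
# sigma it is already << 1 -- so the real issue is non-Gaussian noise.
for s in (5.0, 7.5, 10.0):
    print(s, expected_false_positives(s, 3e4, 2.5))
```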

6) I do not see how your analogy of ultracool dwarfs is generally applicable to all classes of radio transients. How does one do a targeted search for GRB orphan afterglows or Fast Radio Bursts?

Orphan afterglows are an attractive but, in my mind, weak science case. It will be very difficult to extract a better handle on the beaming corrections compared to what we already know from tens of on-axis GRBs. Fast radio bursts are not synchrotron sources, and so are much more difficult to predict from first principles. However, the VLASS will not have msec time resolution as far as I can tell. The case for a low-frequency, pulsar-like survey is very different from the VLASS (and to some extent much more likely to succeed in finding new types of transients).


Google Group Discussion - Part V

Postby bevinashley » Mon Apr 14, 2014 3:29 pm

April 7th: Gregg Hallinan

This week, Ashley and I will produce the next iteration of the VLASS survey definition document for the group to assess. However, we would first like to solicit some additional commentary from the community. First of all, some context...

The purpose of the survey definition document is to draw from the science case for radio transients in the VLASS, as defined in the VLASS white papers and revised via discussion within this group, and develop a VLASS survey definition that is best optimized for transient science. Meanwhile, other groups are completing similar processes for the extragalactic and Galactic science cases. The three survey definitions will be brought together to establish a single survey proposal that best reflects the priorities of the three groups.

Kunal sent around a nice summary of the telecon involving the transient group (see below). I will add a couple of additional points...

1) Survey Area: Discussion during the transient group telecon favored wide survey definitions, particularly all-sky, especially given the legacy value of the latter. Certainly transient science generally favors wide-field surveys. If we assume that our proposed transient populations are proportional to the volume surveyed (Euclidean universe), wide and shallow generally wins. Quantitatively, for a fixed survey time, the number of transients detected scales with survey area as A^1/4. Thus, for example, 1000 hours spent on a 10,000 sq. deg. survey will yield ~3 times more transients than 1000 hours spent on a 100 sq. deg. survey. It's not a very strong dependence, but it is notable nonetheless. The second factor pertains to the distance of the counterpart/progenitor to the transient. For the same population, observed again with a fixed survey time, the typical distance of a counterpart/progenitor decreases with survey area (as A^-1/4 in the Euclidean case), i.e., the closer a counterpart is, the easier it is to follow up. Folding in real populations of transients obviously complicates this discussion significantly - e.g., Galactic radio transients often have bright optical counterparts; host galaxies of radio SNe can be identified with survey data such as SDSS; GRB orphan afterglows will require deep follow-up with 8-10m class telescopes, even for a shallow sky survey.
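The A^1/4 yield scaling quoted above can be sanity-checked in a few lines (Euclidean counts, fixed total survey time; the derivation in the comments follows the reasoning in the text):

```python
def relative_yield(area_ratio):
    """Relative transient yield for a fixed total survey time in a
    Euclidean universe:
      time per pointing ~ 1/A  ->  rms noise ~ A^0.5  ->  S_lim ~ A^0.5
      counts N ~ A * S_lim^(-3/2)  ->  N ~ A^(1/4)."""
    return area_ratio ** 0.25

# 10,000 sq. deg. vs. 100 sq. deg. at the same total hours:
print(relative_yield(10_000 / 100))   # ~3.2x more transients
```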

It was also strongly emphasized that the Galactic survey should be larger in area than originally defined in our survey definition. A multi-epoch (8 epochs) survey of ~2,500 sq degrees at S and C band down to a depth of 50 microJy is one example that was discussed, and is possible with 2500 hours.

2) OTF-mapping: Our discussion led to the conclusion that each epoch should be as shallow as possible when using OTF-mapping, since splitting the survey into more epochs adds essentially no overhead to the program. For example, an all-sky survey to 100 microJy depth could consist of 4 epochs, each of 200 microJy depth, using well-tested OTF-mapping speeds (6 arcmin/sec).

3) Commensal VLITE Capability: The possibility of commensal searches for fast and slow transients at meter wavelengths, using the VLITE system, would be an immense addition to transient science with the VLASS.

4) Fast Dump Modes: The possibility of employing a fast-dump mode for the VLASS would open up the possibility of searching for radio transients on sub-second timescales. The capability has been commissioned (Law et al. white paper) and should be employed if technically feasible, although the magnitude of the associated data processing is acknowledged to be severe.

5) Follow-up: In our first definition document, we highlighted that dedicated follow-up (VLA and VLBA) is essential for a successful transient element of the VLASS. How does the community feel this should be handled?

I think the next iteration of our survey definition will likely employ a two-tier strategy, as before, but focusing on All-Sky (S band; A or B config; 70 microJy) and Galactic (S and C band; A config; 50 microJy) components. This gels reasonably well with the existing definitions of the other groups and could easily be drawn into a single proposal. This process may involve adapting our science to the 4-tier extragalactic survey definition, for example. However, in the process of being combined with the Galactic and extragalactic survey definitions, it would be good to have community input on the tripping points for transient science. To echo Casey Law in a previous e-mail, we need a bulleted list of transient science goals and the minimum survey parameter values required to achieve each one. I think this has been discussed at length for extragalactic transient science, but it could use much more input for the Galactic science case.

Shami: Your white paper addresses a lot of very interesting Galactic transient science. However, you define a 5-sigma point source sensitivity of 100 microJy in an all-sky survey as the "desirable" requirement to deliver this science. That would require ~10 years of continuous surveying with the VLA. Even assuming you restrict the survey to the Galactic plane, it would be expensive in time, depending on what you define as the area to be covered. Is this sensitivity necessary for the entire plane? What would be the tripping points in survey depth and area? Is a deeper single S band survey preferable to S and C band?


April 7th: Gregg in response to Edo's email (see Part IV)

I believe that as radio astronomers and users of NRAO facilities it is incumbent upon all of us to push for the best use of the VLA. Right now it is my sense that various science cases (not only the time-domain, but also the Galactic and extragalactic cases) are being shoehorned into a predefined notion of a survey. I would rather see the science drive the observations and not the other way around. It has also not been clearly demonstrated that the return on investment in NVSS/FIRST has been adequate relative to the loss of other science. So, while I understand that this group has been asked to explore time-domain science with VLASS, I would like to see us go back to NRAO with our best ideas for a large VLA science program. In my mind, a ~1000 hour investment in follow-up of transients from other wavelengths makes a lot more sense.

You suggest proposing a 1000-hour follow-up program of extragalactic transients detected at optical and higher energies. This can be done via the standard Principal Investigator proposal route. It would be a large proposal, but not unprecedented, and would obviously have to be very well justified. By contrast, no single science case can justify >5,000 hours of VLA observing time, but a number of key questions spanning extragalactic, Galactic and transient science can be addressed with a combined survey effort. This concept lies at the heart of most surveys, ranging from SDSS to LSST in the optical regime, for example. One can point to the successful 'shoehorning' of various science cases into these projects - in reality, the community comes together to assess how best to maximize their diverse science with a single combined effort. This process has been exercised in the radio previously with NVSS and FIRST; I would argue that the positive benefit of the latter two surveys is undeniable. It will remain uncertain whether the VLASS will be as successful until a science case is put together and peer reviewed, hence our engaging in this process. Of course the process should include careful assessment of the opportunity cost, i.e., the science lost by commencing a VLASS. I hope it will also involve a process for the entire community to deliver direct feedback on whether the VLASS is warranted. This should allow concerned users to express reservations, such as those you mention.

Currently, the case for the VLASS time-domain component is highly qualitative. This is unfortunate since we actually know quite a lot already (both in a positive and negative sense). So, one should put together a robust case that is based on existing knowledge rather than guess that things will be great.

The survey definition document is *not* a science case for the VLASS. The white papers represent the current diverse science cases and will eventually be brought together as a single coherent proposal. There are a number of white papers focused on transients with the VLASS, including one with which you are directly involved (Kamble et al.), the latter advocating for a few thousand hours at C or S band. This strongly influenced our initial survey definition. I thought your white paper (and others) was clearly quantitative, indicating ~16 radio SNe per month would be detected with the version of the VLASS defined in that document - which would certainly be a tremendous return. It also clearly states the following...

"not all SNe could be discovered through optical and some might actually be detected only via radio emission. A radio sky survey is the most efficient way to uncover this hidden population of SNe."

It is possible to submit a revised white paper if you think these numbers need to be modified to be more quantitative. It would certainly help an eventual VLASS proposal, which is yet to be written.

As Joe also pointed out, the orphan afterglow question needs to be quantified. In addition, I am doubtful that one could really distinguish orphan afterglows from other types of transients (see below), so it is not clear that radio will be any less ambiguous than the existing optical results. Moreover, if all we are trying to do is go after known classes of events such as GRBs, SNe and TDEs (and I would argue that any currently unknown transients are going to be much more rare), then I would still argue that there are more efficient ways to find these than in the radio.

Your assistance in quantifying expected GRB orphan afterglow (OA) rates for the VLASS would be very helpful. Clearly there is a lot of difficulty in detecting OAs via optical searches, by virtue of the fact that no definitive detection has yet been made, despite ongoing efforts by PTF, Pan-STARRS, etc. The LSST science case suggests that, at any one time, there is ~1 orphan afterglow in the entire sky at a depth of mag 23 in optical bands - http://lsst.org/lsst/science/scientist_too. Such an afterglow will fade by many magnitudes within a few days, so detecting one unambiguously is quite a challenge. Even the leading current surveys will only detect a very small number, e.g., the predicted rate for the entire PTF survey is ~3 OAs. Worse, at the required depth the contamination is enormous (OAs account for fewer than 1 in ~1 million optical transients!), so picking out the small number of OAs is extremely difficult.

A recent study was posted that you may be aware of, discussing the rate of radio OAs - http://arxiv.org/abs/1402.6338. Detailed modeling is presented and Figure 3 is very useful in assessing the potential impact of VLASS. Unless I'm misreading this figure, the VLASS should detect ten(s) of OAs (7.5-sigma relative to the noise of the reference and detection image added in quadrature: see below), if conducted as S-band with an All-sky and/or Wide component optimized for transient detection. Such an afterglow takes weeks to fade, unlike the optical counterpart, allowing radio follow-up at the very least. Furthermore, for a 1 mJy flux density orphan afterglow detected at S band, the total number of false positives is orders of magnitude lower than in the optical. Indeed, the total number of radio sources > 1 mJy on the entire radio sky at S band is probably less than the optical false positive rate mentioned above! Only a few percent of this quiescent population exhibit variability and the number of transient sources is << 1%. Orphan GRBs should make up ~10% of all non-nuclear extragalactic radio transients and can be distinguished from the dominant population, radio SNe, by luminosity.


This is a very bold statement given that we haven't really seen any radio-discovered transients. Localization within the host is similar enough for most source classes that, even with high resolution, it won't be useful on a case-by-case basis (after all, the differences in location are only borne out in large statistical samples, not in individual events). For example, TDEs are expected to be nuclear, but at the relevant redshifts the typical localization accuracy is ~1 kpc. The SN rate within 1 kpc of a galaxy nucleus is higher than the TDE rate, so how would one classify based on location? Similarly, both GRBs and core-collapse SNe occur in or near star-forming environments. Calorimetry and SED modeling require far more data than what we're discussing here. Finally, the whole premise of this classification argument assumes that we already know the properties of the sources we're going after - if that's the case, then what's the point of the survey?

There are radio-discovered transients. The radio SNe discovered in M82 are a good example (Brunthaler et al. 2009; Muxlow et al. 2010), as discussed in your white paper. Or look at Gal-Yam et al. 2006 for similar transients and, indeed, a persuasive argument for future blind radio transient surveys. Similar arguments follow from Bower et al. 2007. Or look at the serendipitous detection of a Galactic center radio transient by Hyman et al., reported in Nature in 2005, for an example of a new Galactic radio transient population that has yet to be understood. The VLASS will be orders of magnitude deeper than any of these previous efforts.

Was localization via VLBI not used to confirm the nuclear nature of Swift J164449.3+573451? You refer to localization better than 1 kpc being difficult at the relevant redshifts for TDEs, leading to contamination via radio SNe. Is it not the case that radio SNe are far too faint, by orders of magnitude, to be a contaminating source at those same redshifts?


I am saying that even in the ideal case of triggers from other wavelengths, when the object classification is known, the detection rates in the radio are low due to the limited sensitivity of even the VLA. The dynamic range between the VLA sensitivity threshold (~30 microJy for direct follow-up) and the brightest detections is not large. If we factor in a VLASS limit of ~1 mJy, detection rates fall further still. Since the event rates are fixed (the VLASS cannot detect more GRBs or SNe than exist), this means a low detection fraction of a few percent. The SN and GRB radio luminosity functions are reasonably well known (though not perfectly), so all of this can be estimated realistically. The logN-logS plots are much too optimistic.

This has already been addressed above referencing your group's white paper and recent work on GRB afterglows.


I think the TDE detection rate could be much, much lower than expected - all we have is a single event with a radio detection, and that event is quite unique. I would be hesitant to extrapolate from it. Yes, the rate could be ~100, but the error bar on that is roughly +/-100 events. As for 7.5-sigma, that's fine for Gaussian noise, but there is no way that the VLASS at L/S band will achieve Gaussian noise across ~10^4 deg^2. Systematic effects due to bright sources, etc. will require at least 10-sigma. We already know this from previously claimed radio transients.

If the expected rate is truly so unconstrained as to be 100 +/-100, the VLASS would be very constraining indeed, even in the event of a non-detection.

Regarding the expected statistics for VLASS, your comment is actually incorrect. This is not the VLA of a decade ago - this is the upgraded VLA with the associated increase in bandwidth, number of channels and instantaneous uv-coverage. We have just completed an S-band 300 square degree survey (~50 microJy sensitivity) and our rms noise is invariably within 20% of the predicted thermal noise and the statistics are very much Gaussian. This survey is about 2% of the VLASS in size. For the few fields with bright sources, a simple self-calibration suffices. In any case, when a 7.5-sigma transient is referred to, it is in reference to the quadrature noise of a detection and reference image. This will actually correspond to >10-sigma detection in most detection images.
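To make this threshold conversion concrete, here is a minimal sketch (my own, hypothetical helper function; it assumes equal rms in the detection and reference images, a simplification) of how a 7.5-sigma threshold against the quadrature noise maps onto single-image significance:

```python
import math

def single_image_sigma(quad_sigma, rms_det=1.0, rms_ref=1.0):
    """Significance in units of the detection-image rms for a threshold
    defined against the quadrature noise of detection + reference images.
    Quadrature noise = sqrt(rms_det^2 + rms_ref^2)."""
    quad_noise = math.hypot(rms_det, rms_ref)
    return quad_sigma * quad_noise / rms_det

# 7.5-sigma against quadrature noise, for equal-rms images:
print(round(single_image_sigma(7.5), 1))  # 10.6, i.e. >10-sigma in the detection image
```

For equal-noise images the quadrature factor is sqrt(2), which is why a 7.5-sigma difference-image threshold corresponds to a >10-sigma detection in most detection images.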


Orphan afterglows are an attractive, but in my mind weak, science case. It will be very difficult to extract a better handle on the beaming corrections than what we already have from tens of on-axis GRBs. Fast radio bursts are not synchrotron sources, so they are much more difficult to predict from first principles. In any case, the VLASS will not have millisecond time resolution as far as I can tell. The case for a low frequency, pulsar-like survey is very different from the VLASS (and to some extent much more likely to succeed in finding new types of transients).

A measurement of the orphan afterglow rate is an independent and direct measure of the inverse beaming fraction for GRBs, in contrast to the constraints returned from jet-break analyses of on-axis afterglows.

We will actually request that the ability to do fast transient science with the VLASS be considered. Members of the extended transient group (Law, Bower et al.) have demonstrated that the data processing is feasible; whether it goes ahead will depend on what resources are made available for the survey. In addition, a low frequency 10-antenna system (V-LITE) will likely be observing commensally during the VLASS and would be available for transient science, as described in the Wilson et al. white paper.


April 7th: Miguel Perez-Torres in response to Gregg's reply to Edo (above):
I understand Edo's point of view that, for some science goals, spending, say, 1000 hr on a follow-up program could be very useful, but as Gregg points out, the VLASS is very unlikely to become a follow-up program on specific targets. Wide-field coverage is needed for the survey to be of use to a large community.

It seems undeniable that NVSS was a great success. Many astronomers use NVSS images daily, for publications or to judge whether submitting a VLA proposal makes sense. It is another story that people may not acknowledge this explicitly in their publications, but NVSS *static* images are in continuous heavy use.

> I thought your white paper (and others) was clearly quantitative, indicating ~16 radio SNe per month would be detected with the version of the VLASS defined in that document - which would certainly be a tremendous return. It also clearly states the following...
>
> "not all SNe could be discovered through optical and some might actually be detected only via radio emission. A radio sky survey is the most efficient way to uncover this hidden population of SNe."
>
> It is possible to submit a revised white paper if you think these numbers need to be modified to be more quantitative. It would certainly help an eventual VLASS proposal, which is yet to be written.


The numbers in the Kamble+14 white paper are probably optimistic. Their Fig. 1 (left) only plots Type Ib/c SNe, but those account for only about 23% of all SNe. The much more numerous Type IIP SNe peak at a few times 1e25 to a few times 1e26 erg/s/Hz, so the detection rate, for the nominal 100 microJy/beam sensitivity, will be significantly smaller. If Edo, or Atish, could come up with more realistic numbers, that would be useful for the proposal.

> There are radio-discovered transients. The radio SNe discovered in M82 are a good example (Brunthaler et al. 2009; Muxlow et al. 2010), as discussed in your white paper. Or look at Gal-Yam et al. 2006 for similar transients and, indeed, a persuasive argument for future blind radio transient surveys. Similar arguments follow from Bower et al. 2007. Or look at the serendipitous detection of a Galactic center radio transient by Hyman et al., reported in Nature in 2005, for an example of a new Galactic radio transient population that has yet to be understood. The VLASS will be orders of magnitude deeper than any of these previous efforts.

Indeed, there are radio discovered events, and more beyond just M82. See, e.g., SN 2000ft in NGC 7469 (Colina et al. 2001), or the many radio supernovae discovered in the nuclei of Arp 299A (e.g. Perez-Torres et al. 2009; Bondi et al. 2012) and in Arp 220 (Batejat et al. 2011).

> Was localization via VLBI not used to confirm the nuclear nature of Swift J164449.3+573451? You refer to localization better than 1 kpc being difficult at the relevant redshifts for TDEs, leading to contamination via radio SNe. Is it not the case that radio SNe are far too faint, by orders of magnitude, to be a contaminating source at those same redshifts?

Indeed, at the distance of Swift J164449.3+573451, a radio SN as bright as 1e28 erg/s/Hz at its peak would show a peak flux density of just 3-6 microJy/beam. There would be no contamination from radio SNe.

April 7th: Steve Croft commenting on currently presented Transient Working Group Survey Definition:

1) Survey Area: Discussion during the transient group telecon favored wide survey definitions, particularly all-sky, especially given the legacy value of the latter. Transient science generally favors wide-field surveys. If we assume that our proposed transient populations scale with the volume surveyed (Euclidean universe), wide and shallow generally wins. Quantitatively, for a fixed survey time, the number of transients detected scales with survey area as A^1/4. Thus, for example, 1000 hours spent on a 10,000 sq. deg. survey will yield ~3 times more transients than 1000 hours spent on a 100 sq. deg. survey. It's not a very strong dependence, but it is notable nonetheless. The second factor pertains to the distance of the counterpart/progenitor of the transient. For the same population, observed again with a fixed survey time, the typical distance of a counterpart/progenitor decreases with survey area, as A^-1/4; i.e., a wider survey finds closer counterparts, which are easier to follow up. Folding in real populations of transients obviously complicates this discussion significantly - e.g., Galactic radio transients often have bright optical counterparts; host galaxies of radio SNe can be identified with survey data such as SDSS; GRB orphan afterglows will require deep follow-up with 8-10m class telescopes, even for a shallow sky survey.
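These scalings can be sketched numerically. This is a toy calculation under the stated Euclidean, fixed-survey-time assumptions (time per pointing ~ 1/A, rms ~ A^0.5, horizon distance ~ A^-0.25); the function names are illustrative, not from any survey tool:

```python
def yield_ratio(area_wide, area_deep):
    """Detected transients N ~ A * d^3 ~ A * (A**-0.25)**3 = A**0.25,
    so the yield ratio between two areas at fixed total time is (A1/A2)**0.25."""
    return (area_wide / area_deep) ** 0.25

def distance_ratio(area_wide, area_deep):
    """Typical counterpart distance ~ A**-0.25 under the same assumptions."""
    return (area_wide / area_deep) ** -0.25

# 10,000 sq. deg. vs. 100 sq. deg. at fixed survey time:
print(round(yield_ratio(10_000, 100), 2))     # 3.16, i.e. ~3x more transients
print(round(distance_ratio(10_000, 100), 2))  # 0.32, i.e. counterparts ~3x closer
```

The weak A^1/4 dependence is why the factor-of-100 area difference in the example only buys a factor ~3 in transient yield.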


May I be a voice of dissent against wide surveys, as I was on the call? In your example, Gregg, you'll have a factor 100 fewer epochs on each source, and correspondingly poorer lightcurves. For vanilla luminosity and redshift distributions, you'll also be dominated by transients close to your detection limit in each epoch. Many of these will actually be variable sources that briefly pop up above your flux limit. Some may be imaging defects, sidelobes, statistical fluctuations, etc. which have bedevilled searches for single-epoch radio transients in the past. Robust transient detection and classification will require at least two and preferably many more detections corresponding to the characteristic timescales (or Nyquist sampled lightcurves) of progenitors of interest. I'd argue you should have at least tens of epochs, ideally even 100. In the latter case you get a deep image with 10x the sensitivity of your single epoch images, enabling you to rule out (or in!) quiescent counterparts. You can also bin your epochs to detect longer timescale variability with better sensitivity, as well as detecting intrinsically faint transients that might be missed entirely by a shallower survey.
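The stacked deep-image gain mentioned above follows from thermal-noise statistics. A quick sketch, assuming independent, thermal-noise-limited epochs of equal rms (an idealization; real mosaics carry systematics):

```python
import math

def stacked_rms(epoch_rms, n_epochs):
    """rms of the mean of n independent, equal-noise epochs:
    noise integrates down as 1/sqrt(n)."""
    return epoch_rms / math.sqrt(n_epochs)

# 100 epochs at 100 microJy/beam each -> 10 microJy/beam in the stack,
# i.e. the 10x deep-image sensitivity gain.
print(stacked_rms(100.0, 100))  # 10.0
```

The same 1/sqrt(n) scaling applies to binning subsets of epochs, which is what enables the intermediate-timescale variability searches described above.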

I'd argue that the cadence should be the first thing to be decided, then number of epochs, then how many transients we hope to detect of each progenitor class, and finally balance off the observing time request with the survey area.

Smaller fields, if targeted to correspond to regions with deep multiwavelength data, can still enable identification and classification of hosts / progenitors too, although I agree that targeted followup may be harder (under the simple assumption that the radio luminosity scales with luminosity at wavelengths used for followup, which as you note is not always the case).

In summary I'd rather have three times fewer transients and have them robustly detected and classified. Obviously since other working groups are thinking about their own science too we can't just pick our survey parameters, but I'd argue for something matched to the SDSS footprint rather than all-sky, for example, or even smaller area if we can get away with that.

April 7th: Tara Murphy
I wasn't on the call, but Steve's concern is one I raised earlier. I understand the preference for wide field, but I am concerned about how useful it will be to have light curves with only a few points. Experience from the ATA, the MWA, and the Bannister et al. results shows that such sparse sampling isn't terribly useful (although I acknowledge that real-time processing will make a big difference).

April 7th: Shami Chatterjee in reply to Gregg and addressing concerns raised by Steve and Tara:
> Shami: Your white paper addresses a lot of very interesting Galactic transient science. However, you define a 5-sigma point source sensitivity of 100 microJy in an all-sky survey as the "desirable" requirement to deliver this science. That would require 10 years continuous surveying with the VLA. Even assuming you restrict the survey to the Galactic plane, it would be expensive in time, depending on what you define as the area to be covered. Is this sensitivity necessary for the entire plane? What would be the tripping points in survey depth and area? Is a deeper single S band survey preferable to S and C band?

Hi Greg, there are two major classes of science drivers in our neutron stars white paper:

First, the possibility that we can find exotic systems that current single dish surveys are not sensitive enough to reliably detect. This is a raw sensitivity game, and going deeper is always better. 100 microJy at 5-sigma is a "blue sky" proposal, but for example something like 250 microJy at 5-sigma would still be useful, while 1 mJy at 5-sigma is not competitive.

(Why not infinite sensitivity? There's not much advantage to pushing for objects that we can't time at single dish telescopes, so for example a 1 microJy pulsar is not useful to NANOGrav-type science. (Yet. Just you wait.))

The other driver is intermittent objects, and objects missed due to RFI or other confounding factors in current single dish surveys. Here, any observations that improve on NVSS/FIRST in sensitivity and resolution are net wins.

The major caveat is the spectral index of typical pulsars (S_nu \propto \nu^\alpha, with \alpha ~ -1 to -2), so going up in frequency hurts sensitivity to the pulsar population. (However, more exotic objects like magnetars and some MSPs do have odd spectra.)
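To illustrate the frequency penalty for a steep-spectrum pulsar, a short sketch (the 1 mJy reference flux and alpha = -1.6 are illustrative values I have chosen, not numbers from the white paper):

```python
def scale_flux(s_ref, nu_ref, nu, alpha):
    """Power-law spectrum: S_nu = S_ref * (nu / nu_ref)**alpha."""
    return s_ref * (nu / nu_ref) ** alpha

# A hypothetical 1.0 mJy pulsar at 1.4 GHz (NVSS/FIRST band) with
# alpha = -1.6, observed at the S-band center frequency of 3 GHz:
s_3ghz = scale_flux(1.0, 1.4, 3.0, -1.6)
print(round(s_3ghz, 2))  # 0.3 mJy, i.e. roughly 3x fainter at S band
```

This factor of a few is why a deeper S-band survey is preferred over splitting time with C band for pulsar science.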

So our preference would be S band only over S+C, as deep as possible while balancing other constraints, and covering at least the Galactic plane (|b| <= 2.5 degrees, say) but preferably as much of the sky as possible. In a 2-tier survey, I'd advocate going deeper on the Galactic plane, since there are more pulsars there, but covering the full sky, since MSPs have large scale heights and sensitivity to the GW background improves with a distribution of "good" MSPs covering the entire sky.

Cheers,
Shami

PS To Steve and Tara and several others who have raised the issue - if the VLASS gets done, I think [personal opinion warning] it will happen because of the overall community benefit, not because of the benefit to transient science (alone). So our best payoff probably comes from creating a transient epoch 0 resource and follow-up catalog, along the lines of what Joe Lazio said. I don't see a transient follow-up program as having great chances in a VLASS proposal, because that is rightfully classic PI-driven science. I do see a proposal that says "whatever you do, do it in N passes" as having much better prospects.

April 12th: Gregg Hallinan in reply to issues raised by Steve and Tara (see above):

I actually also favored an SDSS/FIRST footprint, but my impression from our transient group telecon was that a clear majority were in favor of closer to all-sky coverage for a VLASS, particularly emphasizing the legacy value to the wider community. The latter is pretty compelling when weighed with the other considerations. However, I do think that the depth vs. cadence of a possible survey should be examined for each population separately, for just the reasons you outline. This is quite a bit of work, considering the current wide range of free parameters for a survey. For this reason, I organized a telecon yesterday involving the transient, extragalactic and Galactic working group co-chairs to establish whether we can converge on a single survey definition that we can bring back to our respective groups. This will prevent a large number of future iterations of this process, while still allowing the survey to be modified based on feedback from the wider groups. I will send an e-mail shortly describing the definition that we converged on.

I agree with the necessity of characterizing time behavior. One can do some characterization via follow-up, but that does not sample the critical period prior to maximum brightness. However, I think that doing a Deep Field with 100 epochs on a 10 sq. deg. field, for example, is probably too far into the regime where one over-samples the light curve at the expense of 1) detecting more transients and 2) having a nearby source population. The VLASS will likely take place over ~5 years, but epochs will be confined to periods when the array is in the correct configuration, i.e., 4 x 3-month periods spaced over those 5 years (although NRAO will likely change the configuration schedule to enable the VLASS). For 100 epochs, this equates to an epoch every few days, which I think is far from optimal sampling of extragalactic fields. Such a cadence may be better suited to Galactic fields, however. I will circulate the new definition and you can assess whether it addresses some of the concerns you mention, or whether we need to modify the cadence further. Let me know your thoughts.

