Next-Generation Radio Interferometric Imaging for the SKA 2015

A Royal Society South Africa-UK Scientific Seminar

Last week saw the first workshop on Next-Generation Radio Interferometric Imaging for the SKA 2015. The aim of the workshop was to promote scientific collaboration between South Africa and the UK, focusing on next-generation radio interferometric imaging techniques for the Square Kilometre Array (SKA) and its pathfinder telescopes.

The SKA promises exquisite radio observations of unprecedented resolution and sensitivity, which will lead to many scientific advances. However, the imaging pipelines of current radio interferometric telescopes have been identified as a critical bottleneck in the “big-data” regime of the SKA. A lot of progress has been made recently in developing new radio interferometric imaging techniques, for example those based on the revolutionary new theory of compressive sensing.

VLA dish

The workshop brought together experts in radio interferometry with experts in image processing and compressive sensing, to bring emerging radio imaging techniques to bear on real interferometric data. A significant portion of the meeting was devoted to hack sessions, where we worked together on codes and data.

We started with a brainstorming session, collectively editing a Google Doc, which soon took on a life of its own! The plan was to come back together after a coffee break to finalise the projects and the people who would focus on them – but that wasn’t necessary. By then everyone had self-organised and started working together on many exciting projects!

It was great fun to get our hands dirty with code and data, while experts from a broad range of different areas were on hand to provide support. Lots of progress was made during the week, and a number of ongoing projects were initiated during the workshop. I’m very much looking forward to seeing how these progress. We’ll keep you updated!

Sundowners during the workshop (Courtesy of Rahim Lakhoo)

Many thanks once again to our sponsors:

[Logos: Royal Society and NRF South Africa]

Biomedical and Astronomical Signal Processing (BASP) Frontiers 2015

Last week saw the third instalment of the BASP conference, which brings together the communities of astronomy, biomedical sciences, and signal processing. Although the astronomical and biomedical sciences share common roots in the signal processing problems they face, the corresponding communities are almost completely disconnected. The goal of the BASP workshop series is to foster collaboration between the astronomical and biomedical physics communities around common signal processing challenges.


Discussions during one of the deluxe poster sessions at BASP

As an astrophysicist, I was amazed to see the progress made in High-Intensity Focused Ultrasound. Test patients suffering from Parkinson’s disease showed remarkable improvement immediately following treatment, demonstrating the huge impact this technology can have on people’s lives. Many biomedical scientists I spoke to were similarly amazed by the progress being made in cosmology, where we have recovered a remarkably complete picture of the history and evolution of our Universe. I also had some very interesting discussions on how some of the techniques we have been developing for astronomical imaging might be useful for studying the development of glaucoma, which we’ll certainly be investigating further.


The slopes to ourselves at BASP

The meeting was held in a delightful setting in the Swiss Alps and many interesting scientific discussions (and debates!) were had on the ski slopes.

Proceedings are available on the website. For further discussion surrounding the meeting, check out the Twitter hashtag #BASP2015.

Looking forward to BASP 2017 already!

Pioneering work helps to join the dots across the known universe… and the human brain

Compressive sensing is a recent breakthrough in information theory that has the potential to revolutionise the acquisition and analysis of data in many fields. We recently secured grants from the UK research councils to develop compressive sensing techniques to address the challenge of extracting meaningful information from big-data.


Artist’s impression of the Square Kilometre Array at night (Credit: SKA Organisation)

 

Reconstructed neuronal connections in the brain (Credit: Thomas Schultz)

The techniques developed will find application in a broad range of academic fields and industries, from astronomy to medicine. They will allow high-fidelity astronomical images to be recovered from the overwhelming volumes of raw data that will be acquired by next-generation radio telescopes like the Square Kilometre Array (SKA). The new techniques will also be of direct use in neuro-imaging to accelerate the acquisition time of diffusion magnetic resonance imaging (MRI), potentially rendering its clinical use possible.

For more details see: http://www.ucl.ac.uk/mathematical-physical-sciences/maps-news-publication/maps1431

How rich is a galaxy cluster?

Post by Tom Kitching


In a recent paper (http://arxiv.org/abs/1409.3571), led by Jes Ford (http://www.phas.ubc.ca/~jesford/Welcome.html), the mass-“richness” relation of galaxy clusters was investigated using data from the CFHTLenS survey.

A galaxy cluster is, as the name suggests, a cluster of galaxies… Galaxies are swarms of stars held together in a common gravitational potential; in an analogous way, galaxy clusters are swarms of galaxies held together in a larger gravitational potential structure.

“Richness” is a bit of astronomical jargon that refers to the number of bright galaxies in a cluster. A cluster is “rich” if it has many massive galaxies, and not rich if it has none. In fact, in a way that sounds quite PC, a galaxy cluster is never referred to as “poor”; instead, some clusters have “very low richness”. The term was first defined in the 1960s; in this paper,

[the richness of a cluster] is defined to be the number of member galaxies brighter than absolute magnitude Mi ≥ −19.35, which is chosen to match the limiting magnitude at the furthest cluster redshift that we probe
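
To make the counting explicit, here is a minimal sketch in Python (the catalogue values are invented, purely for illustration); note that brighter galaxies have more negative magnitudes:

    # Minimal sketch: count the richness of a cluster from the absolute
    # i-band magnitudes of its member galaxies (toy values, for illustration).
    # Brighter galaxies have more negative magnitudes, so "brighter than
    # M_i = -19.35" means M_i <= -19.35.
    import numpy as np

    member_magnitudes = np.array([-21.2, -20.1, -19.5, -18.9, -17.3])
    MAG_LIMIT = -19.35

    richness = int(np.sum(member_magnitudes <= MAG_LIMIT))
    print(f"Richness: {richness} member galaxies brighter than {MAG_LIMIT}")
    # -> Richness: 3 member galaxies brighter than -19.35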

The clusters were detected using a 3D matched-filter method, which allowed a very large number of clusters to be found: 18,056 cluster candidates in total, enough that the statistics of this population of clusters could be measured.

The total significance of the shear measurement behind the clusters amounts to 54σ, which corresponds to a (frequentist) probability of 1 − 4.645×10^−636, or a

99.99999999999999999999999999999999999999…% (carry on for over 600 more nines!)

chance that we have detected a weak lensing signal behind these clusters and groups!
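
For the curious, this number can be reproduced with a few lines of Python. Standard double precision underflows near 10^−308, so arbitrary-precision arithmetic (here via the mpmath library) is needed; a quick sketch:

    # Sketch: convert a 54-sigma detection into a one-sided Gaussian tail
    # probability. The result is far below the double-precision underflow
    # limit, so we use mpmath's arbitrary-precision arithmetic.
    from mpmath import mp, erfc, sqrt, nstr

    mp.dps = 700                         # work with 700 significant digits
    sigma = 54
    p_false = erfc(sigma / sqrt(2)) / 2  # probability of a chance fluctuation
    print(nstr(p_false, 5))              # ~4.645e-636, as quoted above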

The main result of the paper is the measurement that the mass of clusters increases with richness according to the relation M200 = M0 (N200/20)^β. This is perhaps to be expected, since clusters that are more massive have more bright galaxies; after all, a cluster is defined as a collection of galaxies. We found a normalization M0 ∼ (2.7+0.5) × 10^13 solar masses, and a logarithmic slope of β ∼ 1.4 ± 0.1.
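
As a quick worked example of this scaling relation, here is a sketch in Python using the central best-fit values quoted above (the variable names are mine):

    # Sketch: evaluate the best-fit mass-richness relation
    # M200 = M0 * (N200 / 20)**beta using the values quoted above.
    M0 = 2.7e13   # normalization, in solar masses
    beta = 1.4    # logarithmic slope

    def cluster_mass(N200):
        """Predicted cluster mass (in solar masses) for a richness N200."""
        return M0 * (N200 / 20.0) ** beta

    for richness in (10, 20, 50, 100):
        print(f"N200 = {richness:3d} -> M200 ~ {cluster_mass(richness):.2e} M_sun")
    # Doubling the richness increases the mass by a factor 2**1.4 ~ 2.6.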

Curiously, no redshift dependence of the normalization was found. This suggests that there is a mechanism regulating the number of bright galaxies in clusters that is not affected by the evolution of cluster properties over time. We do not know why this relationship should not change over time, or why it takes the values it does, but we hope to find out soon.


Combining information over 13 billion years of time

Post by Tom Kitching, MSSL


On this blog we have already talked about 3D cosmic shear and the Cosmic Microwave Background; this post is about how to combine them.

Cosmic shear is the effect whereby galaxy images that are (relatively!) nearby – a mere few billion light years away – are distorted by gravitational lensing caused by the local matter in the Universe. We can measure this and use the data to learn about how the distribution of matter evolved over that time.

The Cosmic Microwave Background (CMB) is the ubiquitous glow of microwaves that comes from every part of the sky and was emitted nearly 14 billion years ago. Analysis of the CMB allows us to learn about the early Universe, but also about the nearby Universe, because local matter also gravitationally lenses the microwave photons.

In a recent paper we have shown how to combine cosmic shear and CMB information together in a single all-encompassing statistic. Because we see the Universe in three dimensions (two on the sky and one in distance, or look-back time), this new statistic needed to work in three dimensions too.

Gravitational lensing of the Cosmic Microwave Background (Copyright: ESA and the Planck Collaboration)
http://sci.esa.int/science-e-media/img/96/Planck_gravitational_lensing_CMB_625.jpg

What we found was that when the galaxy and microwave data are combined properly, the resulting statistic is more powerful than the sum of the two previous statistics, because of the extra information that comes from the “cross-correlation” between them. In particular, we found that the extra information helps in measuring systematic effects in the cosmic shear data.

What is a “cross correlation”?

A correlation is a relationship between two things. The definition (that the Apple dictionary on my computer gives) is:

noun

a mutual relationship or connection between two or more things

In the recent cosmological literature we use this term somewhat colloquially to refer to relationships between data points in a single data set. For example, one could correlate the positions of galaxies at a particular separation – to determine whether they are clustered together – or one could correlate the temperature of the microwave emission from different parts of the sky (both of these have been done with much success).

The word “cross” in “cross correlation” refers to taking correlations of quantities observed in different data sets. The addition of the word “cross” is in fact somewhat superfluous: if we have experiments A and B, one can correlate the data points from A, or correlate data points from B, or correlate data points between A and B.

In the new paper we instead used a more descriptive nomenclature that refers to the inter- and intra-datum aspects of the analysis. Intra-datum means calculating statistics within a single data set, and inter-datum means calculating statistics between data sets; compare, for example, plotting a histogram of the points within one data set with plotting the points from two data sets against each other on one graph. A toy example of the distinction is sketched below.
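
Here is a minimal Python sketch of that distinction (the data and names are invented): an intra-datum statistic correlates a data set with itself, while an inter-datum statistic correlates two different data sets.

    # Toy sketch of intra- vs inter-datum statistics, using the correlation
    # coefficient within one data set and between two data sets.
    import numpy as np

    rng = np.random.default_rng(0)
    shared_signal = rng.normal(size=1000)                  # common underlying signal
    data_A = shared_signal + 0.5 * rng.normal(size=1000)   # "experiment A"
    data_B = shared_signal + 0.5 * rng.normal(size=1000)   # "experiment B"

    # Intra-datum: correlate A with a shifted copy of itself (auto-correlation).
    auto_A = np.corrcoef(data_A[:-1], data_A[1:])[0, 1]

    # Inter-datum: correlate A against B (a "cross-correlation").
    cross_AB = np.corrcoef(data_A, data_B)[0, 1]

    print(f"auto (A with itself, lag 1): {auto_A:+.2f}")   # ~0: no lag-1 structure
    print(f"cross (A with B):            {cross_AB:+.2f}") # ~0.8: shared signal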

When should one attempt to find inter-datum correlations between data points? In this regard there seem to be two modes of investigation one could take; following a Popper-inspired categorisation, one can define the following modes:

  • Deductive mode. In this approach one has a clearly defined scientific hypothesis, for example the measurement of some parameter (or model) predicted to have a given value(s). Then one can find a statistic that maximises the signal-to-noise (or expected evidence ratio) for that parameter or model. That statistic may or may not include inter-datum aspects.
  • Inductive mode. Alternatively, one may simply wish to correlate everything with everything, with no regard to a hypothesis or model. In this approach the motivation is simply to explore the space of possibilities, trying to find something new. If a positive correlation is found then this may, or may not, indicate an underlying physical process or causal relation between the quantities.

The danger of the inductive approach, of course, is that one can find correlations for which the underlying physical process is much more complicated than the relationship taken at face value. To illustrate this point, one can look on Google Correlate and find some intriguing correlations, for example:

According to Google Correlate, searches for “Astronomy” are highly correlated with searches for “Bass Guitar”.
http://www.google.com/trends/correlate/search?e=astronomy&e=bass+guitars&t=weekly&p=us&shift=3&filter=astronomy#scatter,60

Which brings us to an old warning from @LegoAcademics.

Two Points to Three Points

Post by Tom Kitching, MSSL


In a recent paper, led by collaborator Dr Liping Fu, the CFHTLenS survey was used to measure the “3-point correlation function” of weak lensing data. This is one of the first times this statistic has been measured, and certainly one of the clearest detections.

A “2-point” statistic, in cosmology jargon, is one that uses two data points in some way; usually an average over many pairs of objects (galaxies or stars) is used to extract information. In this case what is being measured is called the “two-point [weak lensing] correlation function”, and it measures the excess probability that any pair of galaxies (separated by a particular angular distance) are aligned. This is slightly different to a similar statistic used in galaxy cluster analysis. The two-point correlation function is related to the Fourier transform of the matter power spectrum and can be used to measure cosmological parameters, which is why we are interested in it. In a sense the two-point correlation function is like a scale-dependent measure of the variance of the gravitational lensing in the data: the mean orientation of galaxies is assumed to be zero (when averaged over a large enough number) because there is no preferred direction in the Universe, but the variance is non-zero.

[Schematic: measuring the 2-point statistic with “sticks” of various angular lengths]

The measurement of the 2-point statistic is represented above: “sticks” of various (angular) lengths are virtually laid down on the data, and for each stick length the ellipticity (or “ovalness”) of the galaxies along the direction of the stick is measured. If the two galaxies are aligned then the product of their ellipticities (e * e) will be positive; if not, it will sometimes be positive and sometimes negative.

Ellipticity can be expressed as two numbers e1 and e2 that lie on a Cartesian graph where positive and negative values represent different alignments. When multiplied together, aligned ellipticities therefore produce a positive number and anti-aligned ellipticities a negative number. For a purely random field with no preferred alignment (equal positive and negative), the multiplication averages to zero over all galaxies. If there is alignment, the average is positive.

Galaxies will align if there is some common material causing the gravitational lensing to be coherent. So, when averaged over many galaxies, the product of the ellipticities <e*e> (the angular brackets represent taking an average) for a particular stick length tells us whether there is lensing material on a scale equal to the stick’s length: a positive result means there is alignment on average, a zero result means there is no alignment on average, and a negative result would mean there is anti-alignment on average.
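
A minimal Python sketch of this estimator follows, with toy positions and ellipticities; a real analysis would use the tangential and cross components of the shear, and a dedicated code such as TreeCorr.

    # Sketch: bin the products of galaxy ellipticities e*e by pair
    # separation ("stick length") and average within each bin.
    import numpy as np

    rng = np.random.default_rng(1)
    n_gal = 500
    x, y = rng.uniform(0, 1, n_gal), rng.uniform(0, 1, n_gal)  # positions (deg)
    e = rng.normal(0.0, 0.3, n_gal)                            # toy ellipticities

    bins = np.linspace(0.0, 0.5, 11)   # separation bins, i.e. stick lengths (deg)
    sums = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)

    for i in range(n_gal):
        dx, dy = x[i] - x[i + 1:], y[i] - y[i + 1:]
        sep = np.hypot(dx, dy)            # separation of each pair
        prod = e[i] * e[i + 1:]           # product of ellipticities, e*e
        idx = np.digitize(sep, bins) - 1  # which stick-length bin each pair is in
        valid = (idx >= 0) & (idx < len(sums))
        np.add.at(sums, idx[valid], prod[valid])
        np.add.at(counts, idx[valid], 1)

    xi = sums / np.maximum(counts, 1)     # <e*e> per separation bin
    print(xi)  # consistent with zero here: the toy ellipticities are random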

Figure from Kitching et al. (Annals of Applied Statistics 2011, Vol. 5, No. 3, 2231-2263). Gravitational lensing by the large-scale structure in the Universe makes preferred alignment occur around clusters of dark matter or around voids, which we call “E-mode”. Anti-alignment is not normally caused by gravitational lensing, and is what we call “B-mode”.

In this new paper we measured not only the two-point correlation function but also the 3-point correlation function! This is an extension of the same idea, now measuring the excess probability that any three galaxies have a preferred alignment. Instead of a single angle and pairs of galaxies, the measurement uses triangle configurations of galaxies, and results in a measurement that depends on two angles.

 

[Schematic: measuring the 3-point statistic with triangle configurations of galaxies]

This is a much more demanding computational task, because there are many more ways that a triangle can be drawn than a stick (for every given stick length, the other two sides of the triangle can take many different lengths); a quick count is sketched below. The amplitude of the 3-point correlation function tells us whether there is any coherent structure on multiple scales, and in particular allows us to test whether the simple description of large-scale structure using only the 2-point correlation function – and the matter power spectrum – is sufficient.
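
A toy illustration of how fast the cost grows (real estimators are cleverer than brute-force enumeration, but the scaling is the point):

    # Why the 3-point function is expensive: the number of galaxy triplets
    # grows much faster than the number of pairs.
    from math import comb

    for n_gal in (1_000, 10_000, 100_000):
        pairs = comb(n_gal, 2)      # configurations for the 2-point function
        triplets = comb(n_gal, 3)   # configurations for the 3-point function
        print(f"{n_gal:7d} galaxies: {pairs:.2e} pairs, {triplets:.2e} triplets")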

This is one of the first measurements of this statistic and paves the way for extracting much more information from lensing data sets than could be done using 2-point statistics alone.

  • The full article can be found at this link: http://arxiv.org/abs/1404.5469
  • The CFHTLenS data can be accessed here: http://www.cfhtlens.org


Science on the Sphere

A Royal Society International Scientific Seminar 

14th and 15th July 2014, Chicheley Hall

This webpage is dedicated to the organisation of the Science on the Sphere meeting, to be held at the Royal Society Chicheley Hall on 14th and 15th July 2014.

Synopsis of the Meeting:

Scientific observations are made on spherical geometries in a diverse range of fields, where it is critical to accurately account for the underlying geometry on which the data live. In cosmology, for example, observations are inherently made on the celestial sphere. If distance information is also available, as in galaxy surveys, then the sphere is augmented with the radial line, giving three-dimensional data defined on the ball. Future galaxy surveys will provide data of unprecedented detail; to fully exploit such data, three-dimensional analyses that faithfully capture the underlying geometry will be essential to determine the nature of dark energy and dark matter. On stellar scales, new experiments are allowing the internal structure of distant stars, and of our own Sun, to be analysed for the first time. At home, on Earth, our planet is being imaged and mapped in its entirety, an increasingly important endeavour as we face global challenges. On an individual level, medical imaging, and also the computer gaming and special effects industries, require spherical analysis techniques in order to develop efficient algorithms. All of these areas share common problems; this seminar series will bring together experts from across these fields to share common solutions and to create new ideas in the collaborative environment of the Royal Society.

Outcomes of the Meeting:

A diverse range of fields, from cosmology and astronomy, to stellar physics and geophysics, to medical imaging and computer graphics, share common data analysis challenges. In all of these fields, data are observed on spherical geometries; the subsequent analysis of such data must accurately account for their underlying geometry in order to draw meaningful scientific conclusions. Indeed, principled data analysis on spherical geometries is a field in itself. However, all of these fields are largely disjoint at present. The goal of this multi-disciplinary seminar series is to bring together researchers from these fields in order to address their common data analysis challenges. Seminars will be organised to introduce the assembled experts to new fields, and to the topical spherical data analysis challenges faced in those fields, where it is envisaged that insights from one field will have wide-reaching implications in others. By fostering contact between these diverse communities and promoting interdisciplinary collaborations, a coherent and principled approach to the analysis of data observed on spherical geometries will gain widespread traction, potentially leading to new and robust scientific findings in a wide range of fields.

Invited Participants:

Alan Heavens Imperial
Andrew Jaffe Imperial
Ben Wandelt IAP
Bill Chaplin Birmingham
Boris Leistedt UCL
Chris Doran Geomerics
Domenico Marinucci Rome
Farhan Feroz Cambridge
Francois Lanusse CEA Saclay
Frederik Simons Princeton
Hiranya Peiris UCL
Jason McEwen UCL
Mike Hobson Cambridge
Pierre Vandergheynst EPFL
Richard Shaw CITA
Rod Kennedy ANU
Tom Kitching UCL
Yves Wiaux Heriot Watt
Yvonne Elsworth Birmingham

Local and Travel Information

The Royal Society provides local information here (https://royalsociety.org/visit-us/chicheley/)

More information on the venue is available here (http://en.wikipedia.org/wiki/Chicheley_Hall)

Programme

Day 1

  • 0730 – 0830 : Breakfast

Day 2

  • 0730 – 0830 : Breakfast
  • 1600 – 1700 : Discussions and the Way Forward – Dr Jason McEwen (UCL), Dr Thomas Kitching (UCL)

This meeting is funded by the Royal Society International Scientific Seminar Scheme  

This meeting is organised by: Dr Thomas Kitching and Dr Jason McEwen