Chapter 6¶
This is the sixth in a series of notebooks related to astronomy data.
As a continuing example, we will replicate part of the analysis in a recent paper, “Off the beaten path: Gaia reveals GD-1 stars outside of the main stream” by Adrian M. Price-Whelan and Ana Bonaca.
In the previous lesson we downloaded photometry data from Pan-STARRS, which is available from the same server we’ve been using to get Gaia data.
The next step in the analysis is to select candidate stars based on the photometry data. The following figure from the paper is a color-magnitude diagram for the stars selected based on proper motion:
In red is a theoretical isochrone, showing where we expect the stars in GD-1 to fall based on the metallicity and age of their original globular cluster.
By selecting stars in the shaded area, we can further distinguish the main sequence of GD-1 from younger background stars.
Outline¶
Here are the steps in this notebook:
We’ll reload the data from the previous notebook and make a color-magnitude diagram.
Then we’ll specify a polygon in the diagram that contains stars with the photometry we expect.
Then we’ll merge the photometry data with the list of candidate stars, storing the result in a Pandas DataFrame.
After completing this lesson, you should be able to

- Use Matplotlib to specify a Polygon and determine which points fall inside it.
- Use Pandas to merge data from multiple DataFrames, much like a database JOIN operation.
Installing libraries¶
If you are running this notebook on Colab, you can run the following cell to install Astroquery and the other libraries we’ll use.
If you are running this notebook on your own computer, you might have to install these libraries yourself.
If you are using this notebook as part of a Carpentries workshop, you should have received setup instructions.
TODO: Add a link to the instructions.
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install astroquery astro-gala pyia python-wget
Reload the data¶
The following cell downloads the photometry data we created in the previous notebook.
import os
from wget import download
filename = 'gd1_photo.fits'
filepath = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
    print(download(filepath+filename))
Now we can read the data back into an Astropy Table.
from astropy.table import Table
photo_table = Table.read(filename)
Plotting photometry data¶
Now that we have photometry data from Pan-STARRS, we can replicate the color-magnitude diagram from the original paper:
- The y-axis shows the apparent magnitude of each source with the g filter.
- The x-axis shows the difference in apparent magnitude between the g and i filters, which indicates color.
Stars with lower values of (g-i) are brighter in g-band than in i-band, compared to other stars, which means they are bluer.
Stars in the lower-left quadrant of this diagram are fainter and have lower metallicity than the others, which means they are likely to be older.
Since we expect the stars in GD-1 to be older than the background stars, the stars in the lower-left are more likely to be in GD-1.
import matplotlib.pyplot as plt
def plot_cmd(table):
    """Plot a color magnitude diagram.

    table: Table or DataFrame with photometry data
    """
    y = table['g_mean_psf_mag']
    x = table['g_mean_psf_mag'] - table['i_mean_psf_mag']

    plt.plot(x, y, 'ko', markersize=0.3, alpha=0.3)

    plt.xlim([0, 1.5])
    plt.ylim([14, 22])
    plt.gca().invert_yaxis()

    plt.ylabel('$g_0$')
    plt.xlabel('$(g-i)_0$')
plot_cmd uses a new function, invert_yaxis, to invert the y axis, which is conventional when plotting magnitudes, since lower magnitude indicates higher brightness.
invert_yaxis is a little different from the other functions we’ve used. You can’t call it like this:
plt.invert_yaxis() # doesn't work
You have to call it like this:
plt.gca().invert_yaxis() # works
gca stands for “get current axes”. It returns an object that represents the axes of the current figure, and that object provides invert_yaxis.
In case anyone asks: The most likely reason for this inconsistency in the interface is that invert_yaxis is a lesser-used function, so it’s not made available at the top level of the interface.
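If you find the gca idiom awkward, you can also hold on to the axes object explicitly. Here is a minimal sketch (ours, not used elsewhere in this notebook) of that object-oriented style:

fig, ax = plt.subplots()     # create a figure and axes explicitly
ax.invert_yaxis()            # same effect as plt.gca().invert_yaxis()
ax.set_xlabel('$(g-i)_0$')   # Axes objects use set_* methods
ax.set_ylabel('$g_0$')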
Here’s what the results look like.
plot_cmd(photo_table)
Our figure does not look exactly like the one in the paper because we are working with a smaller region of the sky, so we don’t have as many stars. But we can see an overdense region in the lower left that contains stars with the photometry we expect for GD-1.
The authors of the original paper derive a detailed polygon that defines a boundary between stars that are likely to be in GD-1 or not.
As a simplification, we’ll choose a boundary by eye that seems to contain the overdense region.
Drawing a polygon¶
Matplotlib provides a function called ginput that lets us click on the figure and make a list of coordinates.
It’s a little tricky to use ginput in a Jupyter notebook.
Before calling plt.ginput we have to tell Matplotlib to use the TkAgg backend, which draws the figure in a new window.
When you run the following cell, a figure should appear in a new window. Click on it 10 times to draw a polygon around the overdense area. A red cross should appear where you click.
import matplotlib as mpl
if IN_COLAB:
    coords = None
else:
    mpl.use('TkAgg')
    plot_cmd(photo_table)
    coords = plt.ginput(10)
    mpl.use('agg')
The argument to ginput is the number of times the user has to click on the figure.
The result from ginput is a list of coordinate pairs.
coords
[(0.2150537634408602, 17.548197203826344),
(0.3897849462365591, 18.94628403237675),
(0.5376344086021505, 19.902869757174393),
(0.7034050179211468, 20.601913171449596),
(0.8288530465949819, 21.300956585724798),
(0.6630824372759856, 21.52170713760118),
(0.4301075268817204, 20.785871964679913),
(0.27329749103942647, 19.71891096394408),
(0.17473118279569888, 18.688741721854306),
(0.17473118279569888, 17.95290654893304)]
If ginput doesn’t work for you, you could use the following coordinates.
if coords is None:
    coords = [(0.2, 17.5),
              (0.2, 19.5),
              (0.65, 22),
              (0.75, 21),
              (0.4, 19),
              (0.4, 17.5)]
The next step is to convert the coordinates to a format we can use to plot them, which is a sequence of x coordinates and a sequence of y coordinates. The NumPy function transpose does what we want.
import numpy as np
xs, ys = np.transpose(coords)
xs, ys
(array([0.21505376, 0.38978495, 0.53763441, 0.70340502, 0.82885305,
0.66308244, 0.43010753, 0.27329749, 0.17473118, 0.17473118]),
array([17.5481972 , 18.94628403, 19.90286976, 20.60191317, 21.30095659,
21.52170714, 20.78587196, 19.71891096, 18.68874172, 17.95290655]))
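If you prefer plain Python, zip does the same unpacking; this equivalent sketch produces tuples rather than NumPy arrays:

# zip(*coords) transposes the list of pairs into two tuples
xs_alt, ys_alt = zip(*coords)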
To display the polygon, we’ll draw the figure again and use plt.plot to draw the polygon.
plot_cmd(photo_table)
plt.plot(xs, ys);
If it looks like your polygon does a good job surrounding the overdense area, go on to the next section. Otherwise you can try again.
If you want a polygon with more points (or fewer), you can change the argument to ginput.
The polygon does not have to be “closed”. When we use this polygon in the next section, the last and first points will be connected by a straight line.
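If you would like the drawn outline to look closed, one option (a cosmetic sketch, assuming the xs and ys from above) is to append the first vertex at the end before plotting:

plot_cmd(photo_table)
# Repeat the first vertex at the end so the outline closes visually
plt.plot(np.append(xs, xs[0]), np.append(ys, ys[0]));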
Which points are in the polygon?¶
Matplotlib provides a Path object that we can use to check which points fall in the polygon we selected.
Here’s how we make a Path using a list of coordinates.
from matplotlib.path import Path
path = Path(coords)
path
Path(array([[ 0.21505376, 17.5481972 ],
[ 0.38978495, 18.94628403],
[ 0.53763441, 19.90286976],
[ 0.70340502, 20.60191317],
[ 0.82885305, 21.30095659],
[ 0.66308244, 21.52170714],
[ 0.43010753, 20.78587196],
[ 0.27329749, 19.71891096],
[ 0.17473118, 18.68874172],
[ 0.17473118, 17.95290655]]), None)
Path provides contains_points, which figures out which points are inside the polygon.
To test it, we’ll create a list with two points, one inside the polygon and one outside.
points = [(0.4, 20),
          (0.4, 30)]
Now we can make sure contains_points does what we expect.
inside = path.contains_points(points)
inside
array([ True, False])
The result is an array of Boolean values.
We are almost ready to select stars whose photometry data falls in this polygon. But first we need to do some data cleaning.
Reloading the data¶
Now we need to combine the photometry data with the list of candidate stars we identified in a previous notebook. The following cell downloads it:
import os
from wget import download
filename = 'gd1_candidates.hdf5'
filepath = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
    print(download(filepath+filename))
import pandas as pd
candidate_df = pd.read_hdf(filename, 'candidate_df')
candidate_df is the Pandas DataFrame that contains the results from Notebook XX, which selects stars likely to be in GD-1 based on proper motion. It also includes position and proper motion transformed to the ICRS frame.
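If you want to confirm what you loaded, a quick inspection (optional, ours) shows the size and the column names:

print(candidate_df.shape)           # (number of rows, number of columns)
print(list(candidate_df.columns))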
Merging photometry data¶
Before we select stars based on photometry data, we have to solve two problems:

- We only have Pan-STARRS data for some stars in candidate_df.
- Even for the stars where we have Pan-STARRS data in photo_table, some photometry data is missing.

We will solve these problems in two steps:

- We’ll merge the data from candidate_df and photo_table into a single Pandas DataFrame.
- We’ll use Pandas functions to deal with missing data.
candidate_df is already a DataFrame, but photo_table is an Astropy Table. Let’s convert it to Pandas:
photo_df = photo_table.to_pandas()
for colname in photo_df.columns:
    print(colname)
source_id
g_mean_psf_mag
i_mean_psf_mag
Now we want to combine candidate_df and photo_df into a single table, using source_id to match up the rows.
You might recognize this task; it’s the same as the JOIN operation in ADQL/SQL.
Pandas provides a function called merge that does what we want. Here’s how we use it.
merged = pd.merge(candidate_df,
                  photo_df,
                  on='source_id',
                  how='left')
merged.head()
| | source_id | ra | dec | pmra | pmdec | parallax | parallax_error | radial_velocity | phi1 | phi2 | pm_phi1 | pm_phi2 | g_mean_psf_mag | i_mean_psf_mag |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 635559124339440000 | 137.586717 | 19.196544 | -3.770522 | -12.490482 | 0.791393 | 0.271754 | NaN | -59.630489 | -1.216485 | -7.361363 | -0.592633 | NaN | NaN |
| 1 | 635860218726658176 | 138.518707 | 19.092339 | -5.941679 | -11.346409 | 0.307456 | 0.199466 | NaN | -59.247330 | -2.016078 | -7.527126 | 1.748779 | 17.8978 | 17.517401 |
| 2 | 635674126383965568 | 138.842874 | 19.031798 | -3.897001 | -12.702780 | 0.779463 | 0.223692 | NaN | -59.133391 | -2.306901 | -7.560608 | -0.741800 | 19.2873 | 17.678101 |
| 3 | 635535454774983040 | 137.837752 | 18.864007 | -4.335041 | -14.492309 | 0.314514 | 0.102775 | NaN | -59.785300 | -1.594569 | -9.357536 | -1.218492 | 16.9238 | 16.478100 |
| 4 | 635497276810313600 | 138.044516 | 19.009471 | -7.172931 | -12.291499 | 0.425404 | 0.337689 | NaN | -59.557744 | -1.682147 | -9.000831 | 2.334407 | 19.9242 | 18.334000 |
The first argument is the “left” table, the second argument is the “right” table, and the keyword argument on='source_id' specifies a column to use to match up the rows.
The argument how='left' means that the result should have all rows from the left table, even if some of them don’t match up with a row in the right table.
If you are interested in the other options for how, you can read the documentation of merge.
You can also do different types of join in ADQL/SQL; you can read about that here.
The result is a DataFrame that contains the same number of rows as candidate_df.
len(candidate_df), len(photo_df), len(merged)
(7346, 3724, 7346)
And all columns from both tables.
for colname in merged.columns:
    print(colname)
source_id
ra
dec
pmra
pmdec
parallax
parallax_error
radial_velocity
phi1
phi2
pm_phi1
pm_phi2
g_mean_psf_mag
i_mean_psf_mag
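To see what how='left' buys us, we can compare with an inner join, which keeps only the rows that match in both tables; this check is ours, not part of the original analysis:

inner = pd.merge(candidate_df,
                 photo_df,
                 on='source_id',
                 how='inner')
len(inner)   # candidates without Pan-STARRS data are dropped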
Detail You might notice that Pandas also provides a function called join; it does almost the same thing, but the interface is slightly different. We think merge is a little easier to use, so that’s what we chose. It’s also more consistent with JOIN in SQL, so if you learn how to use pd.merge, you are also learning how to use SQL JOIN.
Also, someone might ask why we use Pandas to do this join rather than ADQL. The answer is that we could have done it in ADQL, but since we already have the data we need, it makes sense to do the computation locally rather than make another round trip to the Gaia server.
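For comparison, here is roughly what the same operation looks like with join, which matches on the index rather than a column; this is a sketch for illustration, not used below:

# join matches on the index, so we move source_id into the index first
merged_alt = (candidate_df.set_index('source_id')
                          .join(photo_df.set_index('source_id'), how='left')
                          .reset_index())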
Missing data¶
Let’s add columns to the merged table for magnitude and color.
merged['mag'] = merged['g_mean_psf_mag']
merged['color'] = merged['g_mean_psf_mag'] - merged['i_mean_psf_mag']
These columns contain the special value NaN where we are missing data.
We can use notnull to see which rows contain valid data, that is, values that are not null.
merged['color'].notnull()
0 False
1 True
2 True
3 True
4 True
...
7341 True
7342 False
7343 False
7344 True
7345 False
Name: color, Length: 7346, dtype: bool
And sum to count the number of valid values.
merged['color'].notnull().sum()
3724
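As a sanity check (ours), isnull counts the missing values, and the two counts should add up to the length of merged:

n_valid = merged['color'].notnull().sum()
n_missing = merged['color'].isnull().sum()
n_valid + n_missing == len(merged)   # should be True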
For scientific purposes, it’s not obvious what we should do with candidate stars if we don’t have photometry data. Should we give them the benefit of the doubt or leave them out?
In part the answer depends on the goal: are we trying to identify more stars that might be in GD-1, or a smaller set of stars that have higher probability?
In the next section, we’ll leave them out, but you can experiment with the alternative.
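If you want to experiment with leaving them out explicitly, a minimal sketch (ours, not used below) drops the rows with missing photometry:

# Drop rows where either magnitude is missing
cleaned = merged.dropna(subset=['g_mean_psf_mag', 'i_mean_psf_mag'])
len(cleaned)   # should match the count of valid values above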
Selecting based on photometry¶
Now let’s see how many of these points are inside the polygon we chose.
We can use a list of column names to select color and mag.
points = merged[['color', 'mag']]
points.head()
| | color | mag |
|---|---|---|
| 0 | NaN | NaN |
| 1 | 0.3804 | 17.8978 |
| 2 | 1.6092 | 19.2873 |
| 3 | 0.4457 | 16.9238 |
| 4 | 1.5902 | 19.9242 |
The result is a DataFrame that can be treated as a sequence of coordinates, so we can pass it to contains_points:
inside = path.contains_points(points)
inside
array([False, False, False, ..., False, False, False])
The result is a Boolean array. We can use sum to see how many stars fall in the polygon.
inside.sum()
496
Now we can use inside as a mask to select stars that fall inside the polygon.
selected = merged[inside]
Let’s make a color-magnitude plot one more time, highlighting the selected stars with green x marks.
plot_cmd(photo_table)
plt.plot(xs, ys)
plt.plot(selected['color'], selected['mag'], 'gx');
It looks like the selected stars are, in fact, inside the polygon, which means they have photometry data consistent with GD-1.
Finally, we can plot the coordinates of the selected stars:
plt.figure(figsize=(10,2.5))
x = selected['phi1']
y = selected['phi2']
plt.plot(x, y, 'ko', markersize=0.7, alpha=0.9)
plt.xlabel('phi1 (degree GD1 frame)')
plt.ylabel('phi2 (degree GD1 frame)')
plt.axis('equal');
This example includes two new Matplotlib commands:
- figure creates the figure. In previous examples, we didn’t have to use this function; the figure was created automatically. But when we call it explicitly, we can provide arguments like figsize, which sets the size of the figure.
- axis with the parameter equal sets up the axes so a unit is the same size along the x and y axes.
In an example like this, where x and y represent coordinates in space, equal axes ensures that the distance between points is represented accurately.
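If you are working with the axes object directly, Axes.set_aspect achieves roughly the same thing; a one-line near-equivalent:

plt.gca().set_aspect('equal', adjustable='datalim')   # comparable to plt.axis('equal')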
Write the data¶
Let’s write the merged DataFrame to a file.
filename = 'gd1_merged.hdf5'
merged.to_hdf(filename, 'merged')
selected.to_hdf(filename, 'selected')
!ls -lh gd1_merged.hdf5
-rw-rw-r-- 1 downey downey 2.0M Oct 19 17:21 gd1_merged.hdf5
If you are using Windows, ls might not work; in that case, try:
!dir gd1_merged.hdf5
Save the polygon¶
Reproducible research is “the idea that … the full computational environment used to produce the results in the paper such as the code, data, etc. can be used to reproduce the results and create new work based on the research.”

This Jupyter notebook is an example of reproducible research because it contains all of the code needed to reproduce the results, including the database queries that download the data and the analysis itself.
However, when we used ginput to define a polygon by hand, we introduced a non-reproducible element to the analysis. If someone running this notebook chooses a different polygon, they will get different results. So it is important to record the polygon we chose as part of the data analysis pipeline.
Since coords is a list of coordinate pairs, not a Pandas object, we can’t call to_hdf on it directly. But we can convert it to a Pandas DataFrame and save that.
As an alternative, we could use PyTables, which is the library Pandas uses to read and write files. It is a powerful library, but not easy to use directly. So let’s take advantage of Pandas.
coords_df = pd.DataFrame(coords)
filename = 'gd1_polygon.hdf5'
coords_df.to_hdf(filename, 'coords_df')
We can read it back like this.
coords2_df = pd.read_hdf(filename, 'coords_df')
coords2 = coords2_df.to_numpy()
And verify that the data we read back is the same.
np.all(coords2 == coords)
True
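If you prefer a plain-text record of the polygon, NumPy can write the vertices to a file directly; this alternative sketch uses savetxt and loadtxt (we stick with HDF5 above):

np.savetxt('gd1_polygon.csv', coords, delimiter=',')
coords3 = np.loadtxt('gd1_polygon.csv', delimiter=',')
np.allclose(coords3, coords)   # text round-trip may lose the last bit of precision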
Summary¶
In this notebook, we worked with two datasets: the list of candidate stars from Gaia and the photometry data from Pan-STARRS.
We drew a color-magnitude diagram and used it to identify stars we think are likely to be in GD-1.
Then we used a Pandas merge operation to combine the data into a single DataFrame.
Best practices¶
- If you want to perform something like a database JOIN operation with data that is in a Pandas DataFrame, you can use the join or merge function. In many cases, merge is easier to use because the arguments are more like SQL.
- Use Matplotlib options to control the size and aspect ratio of figures to make them easier to interpret. In this example, we scaled the axes so the size of a degree is equal along both axes.
- Matplotlib also provides operations for working with points, polygons, and other geometric entities, so it’s not just for making figures.
- Be sure to record every element of the data analysis pipeline that would be needed to replicate the results.