DOI: 10.14714/CP91.1510

© by the author(s). This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0.

Visualizing Bird Migration with Animated Maps

Brian Jacobs, National Geographic Magazine | briantjacobs@gmail.com

The following is an updated edition of an article that originally appeared in Source:
source.opennews.org/articles/how-we-made-billions-birds-migrate.

“Billions of Birds Migrate” is a data-driven feature published in February 2018 on nationalgeographic.com, showing the journey of several species of migratory birds across the western hemisphere, through text, audio, photography, and animated maps. The project began as part of National Geographic magazine’s involvement with the Year of the Bird campaign, an effort to bring more attention to the role of birds in their habitats and why birds matter. Alongside the web feature, the magazine also covered the same topic in a map poster created by Lauren E. James and Fernando Baptista, circulated as a supplement in the March 2018 issue. The poster includes maps of migration routes in both the eastern and western hemispheres. It’s a concept that was previously explored in posters made in 1979 and 2004.

Left: Bird Migration in the Americas, supplement to the August 1979 issue of National Geographic. Right: How Birds Migrate, supplement to the March 2018 issue. Not shown: the eastern hemisphere flyways on the reverse side of the 2018 poster.

This new map updates bird routes to reflect the knowledge we’ve gained over the past decades about where birds go, and features original illustrations that highlight important species. The map was breathtaking in scale and detail, but how would it translate to the web? It is possible to take an existing print graphic like this one—the result of many hundreds of hours of research and artistry—and adapt it for screens. Some projects lend themselves well to such a conversion, but others are so complex and heavily annotated that it becomes a burden to funnel them into a vastly different medium. For this poster, there was a danger of doing a disservice to the vision of the graphic by shoehorning the double-sided layout into a pocket-sized screen.

That risk, combined with the more practical fact that I would need to work on the digital piece while the poster was still being completed, led us, the editors and directors in the maps and graphics department, to agree not to fully adapt the poster for the web. Ultimately, what I used from the poster was a single, re-colored (and reprojected) image of the western hemisphere “flyways” (the interstate system of seasonal migration) for five different groups of birds.

Instead of converting the poster, we chose to use an approach that would take better advantage of an interactive, screen-based medium, highlighting different aspects of bird migration through fewer species. The creation of this web-based spin on the topic of bird migration is what I will describe below.

Left: The flyway image used in the digital version differed slightly from the print poster. I reprojected the map because bipolar oblique was incompatible with my automated video creation workflow, which used the GDAL raster processing library. I also recolored the map to work better on the dark background of NASA’s night lights (right).

DATA AND DESIGN

The framing and design of this project started with evaluating available bird migration data to narrow the selection of bird species to highlight. Through a collaboration with the Cornell Lab of Ornithology, I consulted with eBird project lead Marshall Iliff. eBird is a vast, crowdsourced database of bird sightings around the world. I knew that eBird had created what they call STEM data: Spatio-Temporal eBird Modeling. Reports of bird sightings tell us where birds are, but they’re also a reflection of where birdwatchers go. The STEM models account for this and attempt to paint a broader picture of bird abundance, combining sightings with bird habitat information and satellite data to model the movement of entire species of birds. eBird had previously published animated maps of these models, and this was our starting point.

Screen capture from an animated abundance map of the Wood Thrush, by eBird.

Iliff advised which species would yield the highest quality data from existing STEM models. We tried to strike a balance between which species would show the robust patterns that make for a good visualization, and which would tell a story about the diversity of migration strategies employed by different bird types and within different geographies. STEM modeling does not exist for the eastern hemisphere, so we focused on the west. These data and research led to initial designs.

In my initial mockups, I not only showed individual species through photography, maps, and birdsong, but also placed these birds into a larger context of continental and intercontinental migration through introductory graphics. The concept was to give readers a general understanding of what drives bird migration. As they scrolled down the page, casual readers would see seven featured species with corresponding map animations (set to play and loop automatically) that gave a sense of the pulse of migration. More dedicated readers could go deeper, stepping incrementally through time and tracing notable features of each species’ migration through additional text descriptions. Static mockups were crucial in presenting this concept to department managers before starting production.

Mockups showing some screens of a mobile layout.

CREATING ANIMATED MAPS

While this project uses data from disparate sources, its various animated maps have a lot in common. They’re the result of data processing scripts that allowed me to regenerate the animations many times in order to experiment with different visualization styles. Each map animation started with folders of images that were then processed and compiled into video files.

Animated STEM imagery. Each video is compiled from a set of images, the result of Python data processing scripts and command-line tools.

The raw data for the animated bird abundance maps came in the form of GeoTIFFs from the eBird project. GeoTIFF is a common raster format that includes geospatial metadata, such as the geographic location, map projection, and units of the data’s coordinate system. Each file represented the relative abundance of a species, through a single, high-precision channel of data indicating the expected number of birds within each 1 km² area at 7 am. Migration happens over time, and the data reflect this. One year of a bird’s distribution and abundance is represented by a collection of 52 weekly files.

A preview of 52 weeks of raw data showing the Western Tanager, a yellow/black/orange songbird found in the American West.

To scrutinize the data before processing, I used the gdalinfo command found within the open source GDAL raster processing library. The gdalinfo command describes the contents and metadata of a GeoTIFF. Knowing how my data was structured was crucial for responsibly transforming it. The metadata revealed the high-precision nature of the data. I needed to account for this explicitly in order to properly process it into a less-precise, browser-friendly image.
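The same inspection can be scripted. Here is a minimal sketch in Python with Rasterio (which appears later in the pipeline anyway) that surfaces the fields I cared about; the filename is hypothetical:

```python
# Inspect a weekly abundance GeoTIFF, mirroring what gdalinfo reports.
import rasterio

with rasterio.open("western_tanager_week_01.tif") as src:
    print(src.crs)      # coordinate reference system of the data
    print(src.bounds)   # geographic extent
    print(src.dtypes)   # e.g. ('float64',) -- the single high-precision band
    print(src.nodata)   # fill value to mask out before rescaling
```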

To get a consistent animation, and make a species’ migration pattern more apparent, I followed the same set of steps for each set of data associated with the seven species of interest: reproject the migration data, apply a color ramp, composite it with a base map, create a video from the composite, and add labels.

INITIAL CARTOGRAPHY

After previewing each dataset to understand it (as seen in the grid of images), I then created basemaps in Adobe Illustrator, working through one species at a time. I used the MAPublisher plugin, which allowed me to import data directly into Illustrator and reproject it, without having to resort to a GIS program like QGIS or ArcGIS, though either would have worked for this purpose. I used Natural Earth for landmasses and other reference data, and chose a Lambert azimuthal equal-area projection, centered on the western hemisphere and cropped to an extent a little larger than the species range.

Left: The basemap for the Western Tanager. Right: A composite of all Western Tanager files in one image, helpful for previewing the needed extent.

I exported the map as a GeoTIFF file, which was used for a few purposes. It served first as a basemap that gave the geographic context for the migration pattern. I also used the metadata stored in the map file as a template, establishing the parameters for the bulk processing script that I ran on each of my 52 files. If I later modified the file’s crop or projection parameters, the script, which pointed at the file, would adjust its processing accordingly, and all data would continue to align.

GEOGRAPHIC PROCESSING

For automating the processing of each set of 52 files, I used Mapbox’s Rasterio, a Python library that uses GDAL for geographic transformations and integrates well with other Python libraries (like NumPy for data manipulation and Spectra for color ramp generation). I used Rasterio to reproject and crop each of the 52 files so they aligned to the geographic metadata within the exported basemap.
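A sketch of that step, assuming Rasterio’s warp module and hypothetical filenames: the exported basemap supplies the target projection, transform, and dimensions, and each weekly file is warped onto that grid.

```python
# Warp one weekly file onto the grid defined by the exported basemap,
# so all 52 frames share its projection, extent, and pixel size.
import numpy as np
import rasterio
from rasterio.enums import Resampling
from rasterio.warp import reproject

with rasterio.open("basemap.tif") as base:
    profile = base.profile.copy()  # projection/transform/size template

with rasterio.open("western_tanager_week_01.tif") as src:
    dst = np.zeros((profile["height"], profile["width"]), dtype=np.float32)
    reproject(
        source=rasterio.band(src, 1),
        destination=dst,
        dst_transform=profile["transform"],
        dst_crs=profile["crs"],
        resampling=Resampling.bilinear,
    )

profile.update(count=1, dtype="float32")
with rasterio.open("week_01_reprojected.tif", "w", **profile) as out:
    out.write(dst, 1)
```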

52 reprojected, colored images. Compare the shapes in each frame to the previous grid. Reprojection warps the data such that the west coast of North America begins to assume a familiar shape.

Before creating images from the data, I first had to rescale the pixel values. Each file represented a distribution of bird abundance, and from week to week the minimum and maximum values could vary wildly. So I analyzed all 52 weeks of files to tally the very lowest and highest values, giving me a total range with which to consistently visualize the data across the whole year. I then rescaled the values to a new, visually oriented range: converting the original abundance numbers to a range of 0–255 produced a monitor- and web-friendly 8-bit image. Finally, I created a color ramp of the same length (256 colors) with the Spectra library and mapped the intensity of each value to a color. I ran these two steps repeatedly as I experimented with colors and boundaries.
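In outline, those two steps look something like this; the year-wide range and the ramp’s hex stops are illustrative, not the values I used.

```python
# Rescale abundance to 8-bit, then map each value through a
# 256-step Spectra color ramp.
import numpy as np
import spectra

YEAR_MIN, YEAR_MAX = 0.0, 8.7  # lowest/highest values across all 52 weeks

def to_8bit(data):
    scaled = (data - YEAR_MIN) / (YEAR_MAX - YEAR_MIN)  # normalize to 0-1
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

ramp = spectra.scale(["#0b1026", "#f8d568"]).range(256)  # dark-to-bright ramp
lookup = (np.array([c.rgb for c in ramp]) * 255).astype(np.uint8)

def colorize(data):
    return lookup[to_8bit(data)]  # (height, width, 3) RGB image
```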

COMPOSITION AND EFFECTS

Because web browsers don't universally support video with transparency, I needed to composite each colored frame onto a common base. I used the basemap I had previously generated in Illustrator, with additional shaded relief to show migration patterns in the context of elevation changes. There are good reasons not to embed state and country boundaries into a video: you avoid excessively thin or thick line weights when the video scales, as well as compression artifacts. But I embedded them here so that I didn’t have to bring additional SVG layers into the browser.

To take the 52 processed abundance images and merge each of them with a common basemap, I used ImageMagick and its convert command-line tool. ImageMagick is the command-line, open-source equivalent of Photoshop. Its documentation and cookbook are quirky, but after decades of development it’s robust and reliable. I used convert for two purposes: to composite each image onto the basemap, and to add a feathered vignette that fades out the edges of the image.
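The two convert passes looked roughly like this, called from the processing script; filenames and the vignette geometry are illustrative.

```python
# Composite a colored frame over the basemap, then feather the edges
# with ImageMagick's -vignette operator.
import subprocess

subprocess.run(
    ["convert", "basemap.png", "week_01_colored.png",
     "-gravity", "center", "-composite", "frames/week_01.png"],
    check=True,
)
subprocess.run(
    ["convert", "frames/week_01.png", "-background", "black",
     "-vignette", "0x40", "frames/week_01.png"],
    check=True,
)
```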

52 weeks of data, composited onto a common base, with a vignette effect on the edges of each.

From a folder of processed images, I then used the command-line ffmpeg tool to encode an mp4 video at my desired speed, with a few different sizes for desktop and mobile. Care must be taken to create an mp4 with the right encoding to properly load on mobile and desktop devices.
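A representative ffmpeg invocation, with illustrative names, frame rate, and output width: the yuv420p pixel format and the +faststart flag are the usual ingredients for video that plays back reliably on mobile browsers.

```python
# Encode numbered frames into a mobile-friendly H.264 mp4.
import subprocess

subprocess.run(
    ["ffmpeg", "-framerate", "8", "-i", "frames/week_%02d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p",  # broad device support
     "-movflags", "+faststart",                 # metadata first, for streaming
     "-vf", "scale=720:-2",                     # width 720, even height
     "tanager_mobile.mp4"],
    check=True,
)
```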

This script was time-consuming to develop and debug initially, but it saved an immeasurable amount of time because I could tweak settings and output new videos with new color ramps, frame rates, basemaps, etc. Alternatively, I could have cobbled together steps within QGIS (using the TimeManager plugin), Photoshop, and After Effects, but fully automating this process within a single environment was an investment for future projects that would use video maps.

LABELING

While I made some concessions by putting boundary lines within the videos themselves, I didn’t compromise on labeling. Text legibility is crucial, as is adhering to National Geographic cartographic style and policy. So I rendered labels in the browser as HTML text. I used Illustrator to register label placement and style atop the same basemap I had used previously, and added vector leader lines to highlight important features. I used the ai2html script to export the labels from Illustrator to HTML/CSS positions. I also exported the leader lines as SVG so I could control their visibility step-by-step in the browser and render them with the desired stroke width independent of map size. In the browser, each layer was aligned with CSS and together, all layers appeared as a unified map: HTML labels (top), SVG leaders (middle), and the animated map (bottom).

Screengrab of all the labels and leader lines used for all the steps of the Western Tanager map. The map looks messy because it’s a composite of every step of my animation.

When rendered in the browser, all labels and leader lines are initially hidden, then strategically turned on after each button click via JavaScript/CSS. Custom labeling for seven species and 5–7 steps, across desktop and mobile, was a manual and monumental task. When I had all the pieces together, the maps were sent to Scott Zillmer, a National Geographic map editor, for review.

ANIMATED BIRD, ANIMATED EARTH

In addition to the bird abundance animations, the piece starts with an animation of the Earth. It shows the time period of October 8–12, 2017, along with an individual Broad-winged Hawk’s migration. With this visualization, I could both start with some drama, and show the breadth of a single bird’s journey as it travels through a dynamic Earth.

Having previously experimented with data from the GOES-16 satellite and seen some impressive animations, I knew that depictions of the entire western hemisphere were possible at a continental scale and as fast as an image every 15 minutes. I also suspected that some of our species of interest had been GPS-tagged for study. After some research and experimentation I was able to combine these datasets to show a bird’s route synced over time with an animated Earth.

DATA ACQUISITION

After looking in Movebank for GPS tracks of our birds of interest, I contacted the Hawk Mountain Broad-winged Hawk Project for a track of a Broad-winged Hawk from after about March 2017, the earliest point for which GOES-16 data exist. GOES-16 was the satellite source of choice due to its spatiotemporal resolution: it captures frequent imagery of the same portions of the Earth, which works well for animation. The organization agreed to let us use a track from Patty, one of a number of birds that they have tagged with GPS devices. Patty spends much of her time in her summer range in the northeastern United States and in her winter range in northern Peru; between those times, she’s migrating across continents.

Patty, the Broad-winged Hawk, is shown in red. Her entire route from North to South America is shown along with that of another hawk, Rosalie. Source: Hawk Mountain Bird Tracker.

Knowing Patty’s migration period narrowed the search for satellite imagery. Within the restricted time range, I downloaded GOES-16 data from the AWS Public Archive using the Boto3 Python library. I used the 15-minute increment entire-planet (“full disk”) dataset in order to capture North and South America in the same view. Each file was about 400MB; I used an external drive to avoid filling up my primary drive and avoid crashes.
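A sketch of that download step with Boto3; the bucket name and key layout reflect the public NOAA archive as I understand it, and the paths are illustrative.

```python
# List and fetch an hour of full-disk GOES-16 files from the public
# S3 bucket, saving to an external drive.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Keys are organized by product / year / day-of-year / hour.
prefix = "ABI-L1b-RadF/2017/284/15/"  # October 11, 2017, 15:00 UTC

resp = s3.list_objects_v2(Bucket="noaa-goes16", Prefix=prefix)
for obj in resp.get("Contents", []):
    filename = obj["Key"].split("/")[-1]
    s3.download_file("noaa-goes16", obj["Key"], "/Volumes/goes/" + filename)
```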

Data limitations constrained experimentation. Patty’s tracker did not record locations at even increments: sometimes multiple locations were recorded per day, but at other times there were multi-day gaps during which she might migrate a significant distance. I tried different spatial extents, depicting various portions of the route that expressed movement while still conveying a sense of distance.

I expected users to view the animation for about 20–30 seconds, given the multiple overlay steps. While I could have animated the movement of Patty’s entire route, animating weeks of earth imagery in 20 seconds was visually problematic because—aside from the hyperkinetic dance of clouds—GOES-16 captures the Earth in day and night, and thus flickers from light to dark at a disorienting rate. I opted for a much shorter clip to reduce this flicker, at the expense of route distance.

Lastly, GPS tracks are typically pretty messy, as animals don’t move in straight lines. To tidy the visualization, I generalized the lines to at most one location per day, which worked well at the display scale.
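A minimal sketch of that thinning step, assuming a CSV export with hypothetical field names; keeping the first fix per calendar date caps the track at one point per day.

```python
# Thin the GPS track to at most one location per day.
import csv
from datetime import datetime

daily = {}
with open("patty_track.csv") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        # setdefault keeps only the first fix seen for each date
        daily.setdefault(ts.date(), (float(row["lon"]), float(row["lat"])))

coords = [daily[d] for d in sorted(daily)]  # ordered LineString coordinates
```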

Patty’s filtered GPS recordings generalized into a line overlaid on GOES-16 imagery.

PROCESSING GOES-16 DATA

The key with this animation was to ensure that the bird’s GPS track and the satellite basemap appeared in the right place at the right time. Alignment in space proved to be the bigger challenge: while the bird’s GPS data provided latitudes, longitudes, and timestamps, getting geospatial control over the GOES-16 imagery took considerable work.

Left: A depiction of a single band of GOES-16 data. Right: Multiple bands of GOES-16 data combined together to represent a more familiar depiction of the Earth. Note that on the right, where the Earth is in nighttime, the clouds are shining bright, as compared to total darkness on the left. This image is using an infrared channel to bring out clouds at night. Source: RAMMB.

The raw GOES-16 data forms a circular image, a “full disk” of planet Earth. The GOES-16 satellite captures the western hemisphere with a particular perspective view, a result of its geosynchronous orbit. Its data comes as a NetCDF file, and it took a few extra steps to extract the geographic information from this format, as GDAL’s tools don’t automatically understand how to parse it. I read each file with a Python script and accessed the metadata that stores the satellite’s distance from the Earth and the center coordinates of its focus. From these parameters I was able to calculate the geostationary projection, which I wrote to a new GeoTIFF file using Rasterio. I then tested the file in ArcGIS by overlaying country boundaries to see how well the borders aligned with coastlines. It took a lot of trial and error to get things to reproject correctly, with help from blog posts, code samples, Python notebooks, and other sources of inspiration; NOAA’s GOES-16 documentation is lacking.
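The projection math reduces to a few lines once the metadata is in hand. This sketch follows the GOES-R ABI conventions (the perspective_point_height and longitude_of_projection_origin attributes, and scan angles that become projection coordinates when multiplied by the satellite height); treat the details as assumptions rather than my exact script.

```python
# Build the geostationary CRS from a GOES-16 NetCDF file and write
# one band out as a GeoTIFF.
import netCDF4
import numpy as np
import rasterio
from rasterio.transform import from_bounds

nc = netCDF4.Dataset("goes16_fulldisk.nc")  # hypothetical filename
proj = nc.variables["goes_imager_projection"]
h = proj.perspective_point_height           # satellite height (m)
lon0 = proj.longitude_of_projection_origin  # center of the satellite's view

crs = rasterio.crs.CRS.from_proj4(
    f"+proj=geos +h={h} +lon_0={lon0} +sweep=x +ellps=GRS80 +units=m")

x = nc.variables["x"][:] * h  # scan angle (radians) -> meters
y = nc.variables["y"][:] * h
data = nc.variables["Rad"][:].filled(0).astype(np.float32)

transform = from_bounds(x.min(), y.min(), x.max(), y.max(),
                        data.shape[1], data.shape[0])
with rasterio.open("goes_frame.tif", "w", driver="GTiff",
                   height=data.shape[0], width=data.shape[1], count=1,
                   dtype="float32", crs=crs, transform=transform) as out:
    out.write(data, 1)
```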

Three frames of processed GOES-16 imagery.

Working with satellite imagery isn’t like working with a normal photo. Data don’t typically come neatly as the RGB channels of a single image; they come instead as a collection of individual bands corresponding to different wavelengths of light. You then make band combinations according to what you’re trying to express. In the case of GOES-16, these bands represent visible light, near infrared, and infrared. For what I was trying to show, I wanted a true-color view of the Earth. To attain this, I took the red, green(ish), and blue bands from within the NetCDF file and assigned them to the three RGB channels of an image. To get the clouds to be visible during the nighttime view of the Earth, I embedded the infrared band across all three channels.
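The assignment itself is simple to express. This sketch assumes each band has already been rescaled to 0–1, and the day/night blend rule is illustrative.

```python
# Stack visible bands as RGB; substitute the infrared band on the
# night side so clouds remain visible in darkness.
import numpy as np

def compose_frame(red, greenish, blue, infrared):
    rgb = np.stack([red, greenish, blue], axis=-1)   # true-color day view
    ir = np.stack([infrared] * 3, axis=-1)           # gray IR clouds
    night = rgb.max(axis=-1, keepdims=True) < 0.05   # near-black = night side
    return np.where(night, ir, rgb)
```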

I worked in Illustrator to preview a cropped and projected version of the GOES-16 data. I also prepared an additional map layer of state and country boundaries for use as an overlay on the GOES imagery. I used ImageMagick to composite each GOES frame with the boundary layer. All frames were then composited into a video with ffmpeg.

MERGING VIDEO WITH PATTY

I wanted fine-grained control over the appearance of Patty’s line atop the satellite imagery across desktop and mobile devices. To do that, I didn’t include Patty in the video itself, much like how I didn’t include leader lines and text within the migration animations. Instead, I looped the video, and animated a line using SVG in lockstep with the video's progress. This is a similar concept to one used in another bird project, where my colleague Kennedy Elliott layered animated explanatory SVGs atop slow-motion hummingbird videos.

I drew Patty’s route in the browser directly from GPS data, using a web-friendly GeoJSON file that contained latitudes, longitudes, and associated timestamps. Knowing the parameters that determine the map projection of my GOES-16 video, I made an identical map frame with D3. With that frame, I could reproject the GPS data to match my video. Browsers scale SVGs and videos in different ways, so a good amount of trial and error went into solving alignment issues.

Animating Patty was aided by a tutorial and example visualizing a typhoon’s path. This also took some trial and error, because Broad-winged Hawks don’t migrate at night. Since Patty’s data was of limited granularity, I wrote custom overrides to ensure that she only moved during the daylight parts of the video, while respecting the timestamps in the data. There are some perceptual problems in combining two datasets with different time-steps, but one thing was certain: I would not remove frames from the GOES-16 video to stutteringly match Patty’s timestamps. A smoothly animated, glittering blue dot from space was too spectacular to pass up.

CONCLUSION

I knew from the designs that this project would have a lot of maps, but it turned into a larger affair than expected, given the combination of techniques and technologies needed to bring the animated data together. Coordinating all the different layers took experimentation and problem-solving to merge the data and graphics in a reasonably graceful way. With enough research and a supportive graphics department, I was able to make something I was proud of, and I learned plenty in the process.