All posts by Curtis_Seaman

Rivers of Ice

Oh, Yakutsk! It has been a long time – 2012, to be exact – since we last spoke about you (on our sister blog). It was a different time back then, with me still referring to the EUMETSAT Natural Color RGB as “pseudo-true color”. (Now, most National Weather Service forecasters know it as the “Day Land Cloud RGB”). VIIRS was only a baby, with less than one year on the job. Back then, the area surrounding the “Coldest City on Earth” was on fire. This time, we return to talk about ice.

You see, rivers near the Coldest City on Earth freeze during the winter, as do most rivers at high latitudes. Places like the Northwest Territories, the Yukon, Alaska and Siberia use this to their advantage. Rivers that are frozen solid can make good roads, a fact that has often been overly dramatized for TV. Transporting heavy equipment may be better done on solid ice in the winter than on squishy, swampy tundra in the summer. But, that comes with a cost: ice roads only work during the winter.

In remote places like these, with few roads, rivers are the lifeblood of transportation – acting as roads during the winter and waterways for boats during the summer. But, what about the transition period that happens each spring and fall? Every year there is a period of time where it is too icy for boats and not icy enough for trucks. Monitoring for the autumn ice-up is an important task. And, perhaps it is more important to monitor for the spring break-up of the ice, since the break up period is often associated with ice jams and flooding.

We’ve covered the autumn ice-up before on this blog, but VIIRS recently captured a great view of the spring break-up near Yakutsk, and that will be our focus today.

We will start with the astonishing video captured by VIIRS’ geostationary cousin, the Advanced Himawari Imager (AHI) on Himawari-8 from 18 May 2018:

The big river flowing south to north in the center of the frame is the Lena River. (Yakutsk is on that river just south of the easternmost bend.) The second big river along the right side of the frame is the Aldan River, which turns to the west and flows into the Lena in the center of the frame.

Now that you are oriented, take a look at that video again in full screen mode. If you look closely, you will see a snake-like section of ice flowing from the Aldan into the Lena. This is exactly the kind of thing river forecasters are supposed to be watching for during the spring!

Of course, this is a geostationary satellite, which provides good temporal resolution but relatively coarse spatial resolution. The video is made from 1-km resolution imagery, but we are looking at high latitudes on an oblique angle, so the effective resolution is more like 3-4 km here. (Note: the scene in the video above is at approximately the same latitude as the Yukon River delta, so this acts as a good preview of what GOES-17 and its Advanced Baseline Imager [ABI] will offer.) So, how does this look from the vantage point of VIIRS, which provides similar imagery, but at 375 m resolution? See for yourself:

(You will have to click on the image to get the animation to play.)

Animation of VIIRS Natural Color RGB composite of channels I-1, I-2 and I-3 (18 May 2018)

This animation includes both Suomi NPP and NOAA-20 VIIRS. That gives us ~50 min. temporal resolution to go with the sub-kilometer spatial resolution. Eagle-eyed viewers can see how the resolution changes over the course of the animation, as the rivers start out near the left edge of the VIIRS swath (~750 m resolution), then on subsequent orbits, the rivers are near nadir (~375 m resolution) and then on the right edge of the swath (~750 m resolution again). In any case, this is better spatial resolution than AHI can provide (or ABI will provide) at this latitude.

One thing you can do with this animation is calculate how fast the ice was moving. I estimated the leading edge of the big “ice snake” moved about 59 pixels (22.3 km at 375 m resolution) during the 3 hour, 21 minute duration of the animation. That works out to an average speed of 6.7 km/hr (3.6 knots), which doesn’t seem unreasonable. Counting up pixels also indicates our big “ice snake” is at least 65 km long, and the Aldan River is nearly 3 km wide in its lower reaches when it meets the Lena River. That is in the neighborhood of 200 km2 of ice!
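
If you want to repeat that back-of-the-envelope exercise on an animation of your own, here is a quick sketch of the arithmetic (the pixel counts are the ones I used above; substitute your own):

```python
# Back-of-the-envelope ice motion estimate from VIIRS I-band pixel counts.
pixel_size_km = 0.375          # VIIRS I-band resolution near nadir
leading_edge_pixels = 59       # displacement of the leading edge of the "ice snake"
duration_hr = 3 + 21 / 60      # 3 hours, 21 minutes between the first and last image

distance_km = leading_edge_pixels * pixel_size_km
speed_kmh = distance_km / duration_hr
speed_kt = speed_kmh / 1.852   # 1 knot = 1.852 km/hr

length_km = 65                 # estimated length of the ice snake
width_km = 3                   # approximate width of the lower Aldan River
area_km2 = length_km * width_km

print(f"{distance_km:.1f} km in {duration_hr:.2f} hr")   # ~22 km in 3.35 hr
print(f"{speed_kmh:.1f} km/hr ({speed_kt:.1f} kt)")      # ~6.6 km/hr (~3.6 kt)
print(f"~{area_km2} km^2 of ice")                        # ~195 km^2, call it 200
```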

That much ice moving at over 3 knots can do a lot of damage. Just look at what the ice on this much smaller river did to this bridge:

(Make sure you watch it all the way to the end!)

Watch for Falling Rock

Q: When a tree falls in the forest and nobody is around to hear it, does it make a sound?

A: Yes.

That’s an easy question to answer. It’s not a 3000-year-old philosophical conundrum with no answer. Sound is simply a pressure wave moving through some medium (e.g. air, or the ground). A tree falling in the forest will create a pressure wave whether or not there is someone there to listen to it. It pushes against the air, for one. And it smacks into the ground (or other trees), for two. These will happen no matter who is around. As long as that tree doesn’t fall over in the vacuum of space (where there is nothing to transmit the sound waves and nothing to crash into), that tree will make “a sound”. (There are also sounds that humans cannot hear. Think of a dog whistle. Does that sound not exist because a human can’t hear it?)

What if it’s not a tree? What if it’s 120 million metric tons of rock falling onto a glacier? Does that make a sound? To quote a former governor, “You betcha!” It even causes a 2.9 magnitude earthquake!

That’s right! On 28 June 2016, a massive landslide occurred in southeast Alaska. It was picked up on seismometers all over Alaska. And, a pilot who regularly flies over Glacier Bay National Park saw the aftermath:

If you didn’t read the articles from the previous links, here’s one with more (and updated) information. And, according to this last article, rocks were still falling and still making sounds (“like fast flowing streams but ‘crunchier'”) four days later. That pile of fallen rocks is roughly 6.5 miles long and 1 mile wide. And, some of the rock was pushed at least 300 ft (~100 m) uphill on some of the neighboring mountain slopes.

Of course, who needs pilots with video cameras? All we need is a satellite instrument known as VIIRS to see it. (That, and a couple of cloud-free days.) First, let’s take a look at an ultra-high-resolution Landsat image (that I stole from the National Park Service website and annotated):

Glacier Bay National Park as viewed by Landsat (courtesy US National Park Service)

Of course, you’ll want to click on that image to see it at full resolution. The names I’ve added to the image are the names of the major (and a few minor) glaciers in the park. The one to take note of is Lamplugh. Study its location, then see if you can find it in this VIIRS True Color image from 9 June 2016:

VIIRS True Color RGB composite image of channels M-3, M-4 and M-5 (20:31 UTC 9 June 2016), zoomed in at 200%.

Anything? No? Well, how about in this image from 7 July 2016:

VIIRS True Color RGB composite of channels M-3, M-4 and M-5 (21:42 UTC 7 July 2016), zoomed in at 200%

I see it! If you don’t, take a look at this animated GIF made from those two images:

Animation of VIIRS True Color images highlighting the Lamplugh Glacier landslide

The arrow is pointing out the location of the landslide. Of course, with True Color images, it can be hard to tell what is cloud and what is snow (or glacier) and with VIIRS you’re limited to 750 m resolution. We can take care of those issues with the high-resolution (375 m) Natural Color images:

Animation of VIIRS Natural Color images of the Lamplugh Glacier landslide

Make sure you click on it to see the full resolution. If you want to really zoom in, here is the high-resolution visible channel (I-1) imagery of the event:

Animation of VIIRS high-resolution visible images of the Lamplugh Glacier landslide

You don’t even need an arrow to point it out. Plus, if you look closely, I think you can even see some of the dust coming from the slide.

That’s what 120 million metric tons of rock falling off the side of a mountain looks like, according to VIIRS!

Optical Ghosts

It’s not every day that one comes across something that is truly surprising. But, here’s something I recently came across that surprised me: a website on ghosts, angels and demons with useful scientific information. Of relevance here is the section on lens flare and ghosting. Although, maybe it shouldn’t be surprising. If you’re looking for “real” ghosts, you have to be able to spot the “fake” ones.

Simply put, lens ghosting (or optical ghosting) is a consequence of the fact that no camera lens in existence perfectly transmits 100% of the light incident upon it. Some of the light is reflected from the back of the lens to the front, and then back again, as in the first diagram on this website. When the source of this light is bright enough, the component that bounces around due to internal reflections within the lens may be as bright as (or brighter than) the rest of the incoming light, and will show up on the film (for you old fogies) or be recorded by the array of detector elements that convert light into an electric signal (pretty much any camera purchased after 2004). That leads to the phenomena known as “flaring” and “ghosting”.

We’ve all seen pictures or movies that contain these artifacts. Here’s an example of flaring. Here’s an example of ghosting. And here’s both in the same image:

Photo credit: Nasim Mansurov (photographylife.com)

Professional photographers use flaring and ghosting to their advantage. Amateurs wonder why it ruined their picture.

In the particular case of “ghosts”, the light you see often takes on the shape of the aperture, which gives you polygonal or circular shapes like these:

Examples of lens ghosts. Pictures courtesy Angels&Ghosts.com.

I hate to be a stickler, but those are pentagons, not hexagons. (Keep on your toes!) Flaring and ghosting are so prevalent in cameras of all kinds that animated movies replicate them in order to look “more real.” And, they are just two examples of the many artifacts produced by cameras. (Take a look at the differences between CCD and CMOS detectors, as an example of others.)

Why bring this up on a blog about a weather satellite? Because the VIIRS Day/Night Band is, in a manner of speaking, just a really high-powered CCD camera. It, too, is subject to ghosts. (More so than other VIIRS bands because of its high sensitivity to low levels of light.)

Before we get to that, see if you notice anything unusual about this Day/Night Band image:

VIIRS Day/Night Band image (00:42 UTC 9 February 2015).

Those with photographic memories will recognize this image from an earlier post about the N-ICE field campaign in 2015 (which I hid in one of the animations). See that row of 6 bright lights north of Svalbard? Those aren’t boats and they’re not optical ghosts – they are 6 images of the same satellite (using the more liberal definition of satellite: 2a).

Don’t believe me? Here’s the explanation: VIIRS is on a satellite that orbits the Earth at about 835 km. That means two things: 1) there are plenty of satellites (or bits of space junk) that orbit at lower altitudes; and 2) every time a satellite crosses over to the nighttime side of the terminator, there is a period of time when the object is still illuminated by the sun before it passes into the Earth’s shadow. And, there’s a third thing to consider: lower orbiting objects travel faster than higher orbiting objects. If one of these lower orbiting satellites should pass through the field-of-view of VIIRS while it is still illuminated by the sun, it can reflect light back to VIIRS, where the Day/Night Band can detect it. It’s a form of glint, like sunglint or moonglint. If it moves only slightly faster than VIIRS, it will be in the field-of-view for multiple scans, like in the image above.
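
To put a number on “lower orbiting objects travel faster”: for a circular orbit, the speed depends only on the orbital radius, so a quick sketch (the altitudes other than 835 km are just illustrative) shows how small the speed difference is for an object slightly below VIIRS – which is exactly why it can linger in the field-of-view for several consecutive scans.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter (m^3 s^-2)
R_EARTH_M = 6.371e6         # mean Earth radius (m)

def circular_orbit_speed_kms(altitude_km: float) -> float:
    """Speed of a circular orbit at the given altitude, in km/s: v = sqrt(mu / r)."""
    r = R_EARTH_M + altitude_km * 1000.0
    return math.sqrt(MU_EARTH / r) / 1000.0

# Suomi NPP is at ~835 km; the other altitudes are illustrative lower orbits.
for alt_km in (835, 800, 600, 400):
    print(f"{alt_km:4d} km: {circular_orbit_speed_kms(alt_km):.2f} km/s")
```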

It happened again in the same area 4 days later, only with 5 bright spots this time:

VIIRS Day/Night Band image (06:10 UTC 13 February 2015).

With all the striping that is present in the above image, you can clearly see the outline of each VIIRS scan. Note the relative position of the bright light in each scan in which it is imaged. See how it moves in the along-track dimension from one edge of the scan to the other? (The along-track dimension is basically perpendicular to the scan lines.)

Here are the two previous images zoomed in at 400%:

VIIRS Day/Night Band image (00:42 UTC 9 February 2015) zoomed in at 400%.

VIIRS Day/Night Band image (06:10 UTC 13 February 2015) zoomed in at 400%.

If this “satellite” reflects a high amount of light back to VIIRS, it can cause optical ghosts like in this image:

VIIRS Day/Night Band image (11:50 UTC 1 March 2014).

The ghosting is obvious. The “satellite” is less obvious, but you should be able to see the six smaller dots indicating its location. Eagle-eyed observers may click on it to see the full resolution image and note the two partial dots at either end of the row, indicating where this “satellite” was only partially within the VIIRS field-of-view. Even when the “satellite” was not in the field-of-view of VIIRS, it still caused ghosts – just like how the sun doesn’t have to be in a camera’s field-of-view to cause flares and ghosts.

The yellow line demarcates where the solar zenith angle is 108° on the Earth’s surface and the green line demarcates the lunar zenith angle of 108°. The yellow line is the limit of astronomical twilight. (Astronomical twilight exists to the right of that line.) Even though the surface is dark where this ghosting occurs (astronomical night), satellites are still illuminated by the sun (and moon) in this region. In fact, my back-of-the-envelope calculation indicates that VIIRS (at ~835 km) doesn’t pass into the Earth’s shadow until the sub-satellite point reaches a solar zenith angle of ~118°. (As an aside, the International Space Station is much lower [~400 km], so it is illuminated only to a solar zenith angle of ~110°.)
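
Here is a minimal sketch of that back-of-the-envelope geometry: approximating the Earth’s shadow as a simple cylinder, a satellite at altitude h stays sunlit until the solar zenith angle at the sub-satellite point reaches roughly 90° + arccos(R / (R + h)).

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def shadow_entry_sza_deg(altitude_km: float) -> float:
    """Approximate solar zenith angle (at the sub-satellite point) at which a
    satellite at the given altitude enters the Earth's shadow, assuming a
    simple cylindrical shadow and a spherical Earth (no refraction)."""
    r = R_EARTH_KM + altitude_km
    return 90.0 + math.degrees(math.acos(R_EARTH_KM / r))

print(f"VIIRS (~835 km): {shadow_entry_sza_deg(835):.0f} deg")  # ~118 deg
print(f"ISS   (~400 km): {shadow_entry_sza_deg(400):.0f} deg")  # ~110 deg
```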

Here is the above image zoomed in at 200%:

VIIRS Day/Night Band image (11:50 UTC 1 March 2014) zoomed in at 200%.

Now that you’ve passed the crash course, see if you can earn your PhD. How many ghosts can you find in this image from last month? Make sure you click on it to see it in full resolution:

VIIRS Day/Night Band image (11:50 UTC 4 May 2016).

Where is the “satellite” in this case? What is the “real” image? And what are the “ghosts”? Are they even ghosts? As shown on the Angels & Ghosts website, objects that are out of focus are not necessarily ghosts – either “real” ghosts or “fake” ones. VIIRS is focused on the Earth’s surface (835 km away), so another satellite orbiting the Earth just a few kilometers lower in altitude would definitely appear out of focus. It would also have a very similar speed to VIIRS, so it could cause ghosts in the Day/Night Band for a long time, as you see here.

Here are all the ghosts that I found:

Close ups of the ghosts from 11:50 UTC 4 May 2016 (kept at native resolution).

But, is that what we’re seeing? Are we seeing one satellite? Or is it a clutter of space junk? Did VIIRS just come close to a collision with something (because we’re seeing nearby out-of-focus objects)? Or are they optical ghosts from an object well below VIIRS, so we don’t have to worry about it? Maybe it’s a UFO! What about that!?

For once, I don’t have all the answers. But, the truth is out there! (Cue music…)

UPDATE (6/24/2016): Thanks to Dan L. for pointing out an instance of the high-resolution Landsat-8 Operational Land Imager quite clearly spotting the lower-orbiting International Space Station. With a different instrument scan strategy, it produces a different kind of artifact: tracking the ISS motion from one band to the next!

UHF/VHF

Take a second to think about what would happen if Florida was hit by four hurricanes in one month.

Would the news media get talking heads from both sides to argue whether or not global warming is real by yelling at each other until they have to cut to a commercial? Would Jim Cantore lose his mind and say “I don’t need to keep standing out here in this stuff- I quit!”? Would we all lose our minds? Would our economy collapse? (1: yes. 2: every man has his breaking point. 3: maybe not “all”. 4: everybody panic! AHHH!)

It doesn’t have to just be Florida. It could be four tropical cyclones making landfall anywhere in the CONUS (and, maybe, Hawaii) in a 1-month period. The impact would be massive. But, what about Alaska?

Of course, Alaska doesn’t get “tropical cyclones” – it’s too far from the tropics. But, Alaska does get monster storms that are just as strong, which may be the remnants of tropical cyclones that undergo “extratropical transition“. Or, they may be mid-latitude cyclones or “Polar lows” that undergo rapid cyclogenesis. When they are as strong as a hurricane, forecasters call them “hurricane force” (HF) lows. And guess what? Alaska has been hit by four HF lows in a 1-month period (12 December 2015 – 6 January 2016).

With very-many HF lows, some of which were ultra-strong, we might call them VHF or UHF lows. (Although, we must be careful not to confuse them with the old VHF and UHF TV channels, or the Weird Al movie.) In that case, let’s just refer to them as HF, shall we?

The first of these HF storms was a doozy – tying the record for lowest pressure ever in the North Pacific, along with the remnants of Typhoon Nuri. Peak winds with this system reached 122 mph (106 kt; 196 km hr-1; 54 m s-1) in Adak, which is equivalent to a Category 2 hurricane!

Since Alaska is far enough north, polar orbiting satellites like Suomi-NPP provide more than 2 overpasses per day. Here’s an animation from the VIIRS Day/Night Band on Suomi-NPP:

Animation of VIIRS Day/Night Band images of the Aleutian Islands (12-14 December 2015).

It’s almost like a geostationary satellite! (Not quite, as I’ll show later.) This is the view you get with just 4 images per day. (The further north you go, the more passes you get. The Interior of Alaska gets 6-8 passes, while the North Pole itself gets all 15.) Seeing the system wrap up into a symmetric circulation would be a thing of beauty, if it weren’t so destructive. Keep in mind that places like Adak are remote enough as it is. When a storm like this comes along, they are completely isolated from the rest of Alaska!

Here’s the same animation for the high-resolution longwave infrared (IR) band (I-5, 11.5 µm):

Animation of VIIRS I-5 images of the Aleutian Islands (12-14 December 2015).

You may have heard of Himawari and its primary instrument, the Advanced Himawari Imager (AHI). AHI can be thought of as a geostationary version of VIIRS, and it’s nearly identical to what GOES-R will provide. Well, Himawari’s field of view includes the Aleutian Islands, and it takes images of the full disk every 10 minutes. Would you like to see how this storm evolved with 10 minute temporal resolution? Of course you would.

Here is CIRA’s Himawari Geocolor product for this storm:

Here is a loop of the full disk RGB Airmass product applied to Himawari. Look for the storm moving northeast from Japan and then rapidly wrapping up near the edge of the Earth. This is an example of something you can’t do with VIIRS, because VIIRS does not have any detectors sensitive to the 6-7 µm water vapor absorption band, which is one of the components of the RGB Airmass product. The RGB Airmass and Geocolor products are very popular with forecasters, but they’re too complicated to go into here. You can read up on the RGB Airmass product here, or visit my colleague D. Bikos’ blog to find out more about this storm and these products.

You might be asking how we know what the central pressure was in this storm. After all, there aren’t many weather observation sites in this part of the world. The truth is that it was estimated (in the same way the remnants of Typhoon Nuri were estimated) using the methodology outlined in this paper. I’d recommend reading that paper, since it’s how places like the Ocean Prediction Center at the National Weather Service estimate mid-latitude storm intensity when there are no surface observations. I’ll be using their terminology for the rest of this discussion.

Less than 1 week after the first HF storm hit the Aleutians, a second one hit. Unfortunately, this storm underwent rapid intensification in the ~12 hour period where there were no VIIRS passes. Here’s what Storm #2 looked like in the longwave IR according to Himawari. And here’s what it looked like at full maturity according to VIIRS:

VIIRS DNB image (23:17 UTC 18 December 2015).

VIIRS I-5 image (23:17 UTC 18 December 2015).

Notice that this storm is much more elongated than the first one. Winds with this one were only in the 60-80 mph range, making it a weak Category 1 HF low.

Storm #3 hit southwest Alaska just before New Year’s, right at the same time the Midwest was flooding. This one brought 90 mph winds, making it a strong Category 1 HF low. This one is a bit difficult to identify in the Day/Night Band. I mean, how many different swirls can you see in this image?

VIIRS DNB image (13:00 UTC 30 December 2015).

(NOTE: This was the only storm of the 4 to happen when there was moonlight available to the DNB, which is why the clouds appear so bright. The rest of the storms were illuminated by the sun during the short days and by airglow during the long nights.) Of the three big swirls, the one to focus on is the one closest to the center of the image (just above and right of center). It shows up a little better in the IR:

VIIRS I-5 image (13:00 UTC 30 December 2015).

The colder (brighter/colored) cloud tops are the clue that this is the strongest storm, since all three swirls have similar brightness (reflectivity) in the Day/Night Band. If you look closely, you’ll also notice that this storm was peaking in intensity (reaching mature stage) right as it was making landfall along the southwest coast of Alaska.

Storm #4 hit the Aleutians on 6-7 January 2016 (one week later), and was another symmetric/circular circulation. This storm brought winds of 94 mph (2 mph short of Category 2!) The Ocean Prediction Center made this animation of its development as seen by the Himawari RGB Airmass product. Or, if you prefer the Geocolor view, here’s Storm #4 reaching mature stage. But, this is a VIIRS blog. So, what did VIIRS see? The same storm at higher spatial resolution and lower temporal resolution:

Animation of VIIRS DNB images of the Aleutian Islands (6-7 January 2016).

Animation of VIIRS I-5 images of the Aleutian Islands (6-7 January 2016).

This storm elongated as it filled in and then retrograded to the west over Siberia. There aren’t many hurricanes that do that after heading northeast!

So, there you have it: 4 HF lows hitting Alaska in less than 1 month, with no reports of fatalities (that I could find) and only some structural damage. Think that would happen in Florida?

PS: I know this is a VIIRS blog, but if you want to look at CIRA’s Himawari data products, we have both full disk and North Pacific (including the Aleutians) sectors available in near real-time on this website.

A Graduate Level Course on DNB and NCC

Is there any post on this blog that doesn’t have to do with scaling the DNB or NCC?

I was going to title this post “Revisiting ‘Revisiting “Revisiting Scaling on the Solstice”‘”, but that would just be ridiculous. Besides the fact that we just passed an equinox (and are months away from a solstice), this post is more of a follow-up to our very first post.

If that was an introduction, this is a graduate level course. Well, maybe not. It won’t take a whole semester to read through this, unless you are a really slow reader. But, since Near Constant Contrast (NCC) imagery is coming to AWIPS in the very near future, now is a good time to prepare for what’s coming.

We start off with some good news: NCC imagery is coming to AWIPS! NCC imagery provides an alternative to ERF-Dynamic Scaling, CSPP Adaptive Scaling and whatever this is:

Example VIIRS DNB image displayed in AWIPS using the Histogram Equalization method. Courtesy Eric Stevens (UAF/GINA).

(I know that the above image uses the “Histogram Equalization” algorithm that was developed for CSPP originally. I was just being dramatic.) NCC imagery is an operational product, not some fly-by-night operation from CIRA or CIMSS (who both do great work, by the way).

Now the bad news: NCC imagery as viewed in AWIPS might not be the best thing since sliced bread. It may not solve all of our problems. To understand why, you have to know the inner workings of AWIPS (which I don’t) and the inner workings of the NCC EDR product (which I do).

Here’s a first look at the NCC product as displayed in AWIPS:

Example NCC image (12:39 UTC 19 August 2015) displayed in AWIPS II. Image courtesy John Paquette (NOAA).

Notice the bright area of clouds northwest of Alaska that suddenly becomes black. Also notice that the background surface just looks black, except for the Greenland Ice Sheet. These are examples of two different (but related) issues. The first is that AWIPS is blacking out areas where the image is saturated (the maximum value of the scaling bounds is too low). The second is that, at the low end of the scale, the image detail is lost (either the minimum value of the scaling bounds is too high, or AWIPS uses too few shades of gray in the display, which means you lose sensitivity to small changes in value).

If you read the first post on this blog (that I linked to), or you read my previous posts about ERF-Dynamic Scaling, you know that the primary problem is this: the VIIRS Day/Night Band is sensitive to radiance values that span 8 orders of magnitude, from full sunlight down to the levels of light observed during a new moon at night. Most image display software, of which AWIPS is an example, is capable of displaying only 256 colors (or 96 colors if you use N-AWIPS). How do you present an 8-order-of-magnitude range of values in 256 colors without losing information?

Near Constant Contrast imagery does this by modeling the sun and moon to convert the Day/Night Band radiance values into a “pseudo-albedo”. Albedo (aka reflectance) is simply the fraction of incoming solar (and lunar) radiation that is reflected back to the satellite, so you end up with a decimal number between 0 and 1. That’s easy enough to display, but we’re not done. What happens when there is no moonlight at night because the moon is below the horizon (or it’s a new moon)? What happens when there is a vivid aurora, or bright city lights, or gas flares or fires? These light sources can all be several orders of magnitude brighter than the moon, especially when there is no moonlight. There are lots of light sources at night that the DNB detects that aren’t reflecting light – they’re emitting it. That’s why the NCC doesn’t provide a true albedo value.
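
To make that concrete, here is a loose conceptual sketch (not the operational NCC EDR algorithm, and the numbers are purely illustrative): divide the observed radiance by the modeled solar-plus-lunar illumination, and anything the scene emits on its own shows up as a “reflectance” far greater than 1.

```python
def pseudo_albedo(observed_radiance, modeled_solar, modeled_lunar):
    """Loose conceptual sketch of a pseudo-albedo: observed DNB radiance divided by
    the modeled incoming solar + lunar illumination. This is NOT the operational
    NCC algorithm -- it just shows why emissive sources (city lights, aurora, fires)
    blow well past 1.0 when the modeled illumination is tiny."""
    return observed_radiance / (modeled_solar + modeled_lunar)

# Reflective scene under moonlight (illustrative, unitless numbers): value stays in [0, 1].
print(pseudo_albedo(observed_radiance=0.3, modeled_solar=0.0, modeled_lunar=1.0))    # 0.3

# The same amount of light emitted near new moon: the "albedo" balloons to hundreds.
print(pseudo_albedo(observed_radiance=0.3, modeled_solar=0.0, modeled_lunar=0.001))  # 300.0
```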

When VIIRS first launched into space, the NCC algorithm assumed that pseudo-albedo values over the range from 0 to 5 would be sufficient to cover all these light sources, but that turned out to be incorrect.  If you weren’t within a few days of a full moon, images contained fill values (no valid data) because these myriad light sources fell outside the allowed range of 0 to 5. It took a lot of work by the VIIRS Imagery Team to fix this and get the NCC algorithm to where it stands now. And where it stands now is that pseudo-albedo values are allowed to vary over the range from -10 to 1000. (The “-10” accounts for the occasional negative radiance in the DNB data and the “1000” allows for light sources up to three orders of magnitude brighter than the moon.) Now, the images don’t saturate or get blanked out by fill values at night away from a full moon. But, the range from -10 to 1000 still presents a challenge for those who want to display images properly.

To show this, here is the same VIIRS NCC image linearly scaled three ways: over the full range of values (-10 to 1000), the original range of values (0 to 5), and an ideal range of values (which was subjectively determined for this scene):

Example VIIRS NCC image (08:55 UTC 5 August 2015) linearly scaled between -10 and 1000.

Can you see the one pixel that shows up in the above scaling? (There is one pixel with a value over 900.)

Example VIIRS NCC image (08:55 UTC 5 August 2015) scaled between 0 and 5.

Now you can start to see some cloud features and the city lights, but this image still looks too dark.

Example VIIRS NCC image (08:55 UTC 5 August 2015) scaled between 0 and 1.5.

Now we’re talking!
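
If you want to reproduce this kind of manual stretch outside of AWIPS, here is a minimal sketch of the linear min/max scaling used in the images above (assuming NumPy, and assuming you already have the NCC pseudo-albedo values in an array; the array below is just a placeholder):

```python
import numpy as np

def scale_linear_to_8bit(values: np.ndarray, vmin: float, vmax: float) -> np.ndarray:
    """Linearly map values in [vmin, vmax] onto the 256 gray levels of an 8-bit
    display; anything outside the bounds is clipped (i.e. saturated)."""
    clipped = np.clip(values, vmin, vmax)
    return np.round(255.0 * (clipped - vmin) / (vmax - vmin)).astype(np.uint8)

# Placeholder stand-in for an array of NCC pseudo-albedo values.
ncc = np.random.uniform(0.0, 2.0, size=(768, 4064))

full_range = scale_linear_to_8bit(ncc, -10.0, 1000.0)  # almost everything lands on a few gray shades
original   = scale_linear_to_8bit(ncc, 0.0, 5.0)       # better, but still too dark
ideal      = scale_linear_to_8bit(ncc, 0.0, 1.5)       # the "now we're talking" stretch
```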

The above images were taken when there was moonlight available. What happens when there is no moonlight?

Example VIIRS NCC image (12:57 UTC 26 July 2015) scaled from -10 to 1000.

Scaling over the full range of values means you only see the city lights of Honolulu and the islands drawn on the map.

Example VIIRS NCC image (12:57 UTC 26 July 2015) scaled from 0 to 1.

Scaling from 0 to 1 is better, but I would argue that it’s still too dark. Let’s stretch it further.

Example VIIRS NCC image (12:57 UTC 26 July 2015) scaled between 0 and 0.5.

This is about as good as you can do without the image becoming too noisy.

And, of course, the presence of the aurora gives yet another result:

Example VIIRS NCC image (11:32 UTC 22 January 2015) scaled from -10 to 1000.

Can you see the aurora over northern Alaska? Maybe just barely. Once again, scaling over the full range of values doesn’t work (just like it wouldn’t for the DNB radiance values). What about using the scale of 0 to 1.5? It worked before…

Example VIIRS NCC image (11:32 UTC 22 January 2015) scaled from 0 to 1.5.

GAHH! I’m blinded! Although, you can see the clouds over the Gulf of Alaska pretty easily as well as ice leads in the Arctic Ocean. But, the aurora is too bright and you can’t see any details over most of Alaska.

It turns out, in order to prevent the aurora from saturating this scene, the image needs to be scaled over a range of 0 to 21:

Example VIIRS NCC image (11:32 UTC 22 January 2015) scaled from 0 to 21.

But, notice that you lose the detail of the cloud field over the Gulf of Alaska and the ice over the Arctic Ocean. This is a difficult case to scale correctly. More on that later.

So, we’ve seen that the optimum scaling bounds vary from scene to scene. The 0 to 1.5 scale seems to work for daytime and full moon scenes. New moon scenes require a scale more like 0 to 0.5 (or thereabouts) to be able to detect clouds, snow and ice. And the occasional scene requires a totally different scale altogether. Wouldn’t it be great if there were some way to automate this, so we wouldn’t have to keep fussing with the scaling on every image?

I’m here to say, “there might be.” And, it’s called “Auto Contrast.” The idea is to do what Photoshop and other image editing software do when they “automatically” improve the contrast in an image: take the NCC image data scaled over a range of, say, 0 to 2, bump up the maximum scaling bound with the same kind of adjustment that ERF-Dynamic Scaling uses to prevent saturation in auroras, and then apply something similar to Photoshop’s Auto Contrast algorithm to create the ideal scene contrast. Here’s what Auto Contrast does for the three cases above:

Example VIIRS NCC image (08:55 UTC 5 August 2015) scaled with Auto Contrast.

Example VIIRS NCC image (12:57 UTC 26 July 2015) scaled with Auto Contrast.

Example VIIRS NCC image (11:32 UTC 22 January 2015) scaled with Auto Contrast.

For the first two cases, Auto Contrast is very similar to the subjectively determined “ideal scaling”. For the aurora case, we can see that Auto Contrast is a compromise between “not allowing the aurora to saturate” and “allowing the aurora to saturate half of the image.” The aurora does saturate a portion of the scene, but you can still see ice on the Arctic Ocean and clouds over the Gulf of Alaska when you look closely.
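
If you want to experiment with the general idea yourself, here is a rough sketch in the spirit of Photoshop’s Auto Contrast: clip a small fraction of the darkest and brightest pixels, then stretch what’s left across the display range. The clip percentages and the placeholder array are illustrative – this is not the exact algorithm we are testing.

```python
import numpy as np

def auto_contrast_8bit(values: np.ndarray,
                       low_pct: float = 0.5,
                       high_pct: float = 99.5) -> np.ndarray:
    """Rough sketch of an Auto Contrast-style stretch: ignore a small fraction of the
    darkest and brightest pixels, then stretch the remaining range over 256 gray levels.
    (The clip percentages are illustrative; the version we are testing also bumps the
    upper bound, ERF-Dynamic-Scaling-style, so auroras don't saturate the whole scene.)"""
    vmin, vmax = np.percentile(values, [low_pct, high_pct])
    clipped = np.clip(values, vmin, vmax)
    return np.round(255.0 * (clipped - vmin) / (vmax - vmin)).astype(np.uint8)

# Placeholder stand-in for an array of NCC pseudo-albedo values.
ncc = np.random.uniform(0.0, 2.0, size=(768, 4064))
display = auto_contrast_8bit(ncc)
```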

Of course, there are a few caveats:

1) Auto Contrast has not been fully tested. These results are promising enough that I wanted to share them right away, but Auto Contrast might not produce ideal results in all cases. We are continuing to investigate this.

2) Sometimes, the image has poor contrast that Auto Contrast can’t fix. For example, consider a new moon case over land where there are lots of city lights or a vivid aurora. Non-city areas will be more like the Hawaii case, where clouds have pseudo-albedo values between 0 and 0.5, while the city lights or aurora will have pseudo-albedo values well over 100. If you stretch the scaling enough to see the clouds, you’ll be blinded by the city lights. If you scale it to the city lights, you won’t see the clouds or snow or ice.

3) Individual users may not care that the aurora saturated half the image in the third example, because they can see the clouds and ice just fine. Auto Contrast makes the clouds and ice darker and harder to see. This is an example of how “ideal contrast” not only varies from scene to scene, but also from one user application to another. Pretty pictures are not always the same thing as usable images.

4) Demonstrating the utility of “Auto Contrast” is not the same thing as getting the algorithm up and running within AWIPS. (Or, sending files to AWIPS that have optimized contrast.) The JPSS Imagery Team is working with the developers of the AWIPS NCC product to improve how it is displayed, but it will likely take some time.

While it’s not clear how the NCC images are currently scaled in AWIPS, they almost certainly use a fixed scale. However, the examples shown here make it clear that the scaling needs to adjust from scene to scene – even if Auto Contrast is not the ultimate solution. So while we work to figure this out, if the NCC imagery looks sub-optimal in your AWIPS system, you know why.

One final thought: the Auto Contrast algorithm is designed to work with any image, not just NCC images. It’s possible that DNB images created with ERF-Dynamic Scaling may be improved with Auto Contrast as well. But, that’s a topic for another blog post about image scaling for the future. I may yet title a post “Revisiting ‘Revisiting “Revisiting Scaling on the Solstice”‘”.