You’ll hear astrophotographers talk a LOT about their “workflow.” I don’t know for sure if there’s an official definition for this, but for me it includes everything after carrying out the polar alignment all the way through to the finished image. I’ve already covered polar alignment and guiding in my walkthrough, so I’ll assume you’ve already read that and want to have a look at what else goes on. If not, then here is the link to that walkthrough. Recently I’ve started getting into narrowband imaging, which I’m finding quite a lot of fun, although the processing is different from the regular broadband imaging, which I cover in my other workflow walkthrough.
So you’ve done your polar alignment, you’ve selected your target (let’s say NGC 281, the Pacman Nebula, in this case), and slewed and platesolved to target. There’s any number of software packages that allow you to image, and in this case I’m going to assume you’re using APT (AstroPhotography Tool). A number of factors will determine the length of each frame, not least the accuracy of your guiding/tracking. In my case I usually go for 2 to 5 minute frames, or “subs.”
One thing to bear in mind with the sub length is the potential loss of data. For example, let’s say you shoot for an hour. Using 5 minute subs, that’s 12 frames. If two of those, for whatever reason, go wrong, then that’s 10 minutes of imaging data that you’ve lost. If you shoot 10 minute subs and two go wrong, then that’s 20 minutes you’ve lost. If you’re shooting 1 minute subs and two go wrong, that’s only 2 minutes lost. At the moment I’m pushing 10 minute subs, but not with a reasonable enough degree of consistency, so I restrict myself on the serious sessions to 5 minutes. Potentially, the longer the sub length, the more data you can acquire in a single frame, although there does come a point where you hit diminishing returns. I’m not going to go into that in this article, other than to say that 5 minutes for me is reasonable under the Class 4 skies I have.
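If you want to see that arithmetic for yourself, here’s a quick sketch. The session length and frame counts are the same examples as above; nothing here is specific to any particular capture software.

```python
# Quick sanity check on how much imaging time a given number of
# failed frames costs at different sub lengths. An hour-long session
# with two bad frames matches the examples in the text.

def lost_minutes(session_minutes, sub_minutes, bad_frames):
    """Minutes of data thrown away when `bad_frames` subs fail."""
    total_frames = session_minutes // sub_minutes
    assert bad_frames <= total_frames, "can't lose more frames than you shot"
    return bad_frames * sub_minutes

for sub in (1, 5, 10):
    print(f"{sub}-minute subs: {lost_minutes(60, sub, 2)} minutes lost")
# 1-minute subs:  2 minutes lost
# 5-minute subs:  10 minutes lost
# 10-minute subs: 20 minutes lost
```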
You might ask, then, why not just shoot 1 minute subs and minimise that data loss? Because an hour of 60 second subs grabs less data than an hour of 5 minute subs. The other thing to consider is that you need to try and obtain the best signal to noise ratio (SNR) you can without over-saturating the pixels. Again, I won’t explain it here, but Dylan O’Donnell gives a great example on his YouTube channel “Star Stuff.” The link for the video is here, and I can highly recommend subscribing to his channel if you’re not already.
For narrowband imaging I’m using the ZWO Duo Narrowband Filter. This is a (relatively) cheaper way to introduce yourself to the joys of narrowband imaging. One of the upshots is that because you’re capturing your data only in specific wavelengths, in this case hydrogen alpha (Ha) and oxygen (Oiii), you’re not as affected by light pollution or lunar washout. Obviously if you’re going to be daft enough to image right next to a full moon, then you’re going to get hammered by that. But, by and large, if you exercise a modicum of common sense, you can get away with much more. Indeed, since starting to image in narrowband, I haven’t had to even look at the “remove light pollution” tool in Astro Pixel Processor at all.
A lot of astrophotographers use a mono camera and separate filters for each of the main wavelengths (hydrogen, oxygen and sulphur). You’re talking serious amounts of money there, so if you’re like me and money is an issue, then staying with a one shot colour (OSC) camera and using a duo narrowband filter can greatly lessen the financial impact. It also means that you’re able to collect the hydrogen and oxygen data simultaneously and not have to spread the imaging sessions across too many nights. Nights that are few and far between these days, especially in the UK.
As an aside, Amanda often points out that I must have the patience of a saint to do astrophotography because there’s sooooo many things that can and do go wrong, plus the challenge of living in one of the most changeable climates in the northern hemisphere, all of which conspire to inhibit doing any meaningful astronomy and astrophotography. Then I get a good night, or a few good nights, and she knows WHY I persist. It’s not just about the images for me. It’s about sitting out there, under perfect skies, seeing the depths of space with my own eyes, and being able to free my soul from the rigours of life to fly free among the stars, and imagine the warmth of a hundred million stars unfettered by these earthly ties. For me, personally, it’s also about guarding my mental health. We often do the things we need in order to protect our physical health, and ignore the health of our own minds. This hobby, this PASSION, gives me the space (pun intended) to try and keep my own mind free and allows me the time to check in with myself. It’s never been just about the images.
The above image is a single frame of NGC 281 (Pacman Nebula) captured using APT. Looking at the different sections, you can see all the information you need, including what would be the guide graph on the right, which saves switching between APT and PHD2. There is also a numerical display over on the left giving the “APT State”; the lower that number, the better. The graph will help you keep track of the trend in guiding accuracy. That being said, try not to get too hung up on chasing the numbers in PHD2. So long as the stars look round, then it’s going well.
In APT you can define a plan, whereby you set the sub length, the number of images and the camera gain, in this case 270, which is pretty much something called “lowest read noise” for my particular camera. The plan I would usually use here is 4 hours of 5 minute subs at unity gain, which is my usual “go to” for DSO imaging these days, depending on the brightness of the target.
So you have your images from the session, 4 hours’ worth of subs, plus calibration frames (darks, flats and dark flats). What I usually do at this point is load the light frames (the subs) into a stacking program, such as Deep Sky Stacker (DSS). For DSOs I usually use Astro Pixel Processor (APP) to stack and do the initial cleaning up (crop and light pollution removal, plus star colour calibration). However, I retain DSS simply as a tool to run through the captured frames and ditch any bad ones, because DSS is so lightweight and easy to use. Once I’ve loaded them all into DSS, I work through each frame and erase the bad ones. You can hover the cursor over the frame itself and inspect the roundness of the stars; you want them nice and tight and round without any “trailing.” Some would consider this the long way round of sorting your data, and in fairness there are plenty of other ways. But I would always suggest you find a way of doing it that works for you, because there is no “right” or “wrong” way.
Loading Into Astro Pixel Processor (APP)
When you first load up APP, if you’ve just come from using DSS, it can look insanely complicated. Thanks to Stacey over at AstroStace and her YouTube tutorials, I’ve learned not to be so afraid of it and have now fully incorporated it into my workflow. It’s a lot more powerful than DSS, but not as much as PixInsight (PI). It makes for a very good intermediate tool, though, without the complexity of PI, and I would highly recommend taking the time to learn its capabilities.
Once you’ve confirmed your working directory (I usually use the same root directory my frames are in), the first thing you’ll notice are the numbered tabs over on the left. As a side note, when I’m working through the initial data, I leave it in the APT directory, but once I’ve done that, I transfer the remaining good frames across to their own directory and then break it down into date order, for example C:/Astrophotography/NGC281/Date/Lights, changing the “Lights” part of that structure to whatever the frame type is, i.e. darks, flats etc. Ultimately it’s whatever works for you, but I find a good directory structure helps with the organisation of my data, especially as it’s often shot across different sessions.
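If you like scripting that sort of housekeeping, here’s a minimal sketch of the layout in Python. The root path, target name and date are just placeholder examples for illustration, not anything APT or APP requires.

```python
# Sketch of the directory layout described above: one folder per
# frame type, under a per-date folder for each target. The paths
# here are only built (not created) so you can adapt them freely.
from pathlib import PurePath

root = PurePath("C:/Astrophotography")   # example root; use your own
target = "NGC281"
session_date = "2024-01-15"              # hypothetical session date

frame_types = ("Lights", "Darks", "Flats", "DarkFlats")
paths = [root / target / session_date / ft for ft in frame_types]
for p in paths:
    print(p.as_posix())
# C:/Astrophotography/NGC281/2024-01-15/Lights
# C:/Astrophotography/NGC281/2024-01-15/Darks
# ...
```

Swapping `PurePath` for `Path` and calling `mkdir(parents=True, exist_ok=True)` on each entry would actually create the folders.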
Here’s where things get interesting. Whereas with broadband images you would pretty much just go straight for the stacking algorithm that gives a straight up final RGB image, with this you need to take a slightly different approach. You’re wanting the specific wavelengths of data, the Ha and the Oiii. If this was a mono camera with a dedicated filter then it wouldn’t be problematic. If you’re using an OSC and multibandpass filter, such as in this instance, then you need to isolate the specific wavelengths individually.
So what you would do is ensure that you’re using the right Bayer matrix which, let’s be honest, is usually RGGB, and then click “Force Bayer/CFA.” Usually I’ll run the stack normally to pull out the RGB as usual. Once I have that stacked image, making no changes to it, I save it as a 16bit TIFF file, identifying it in the filename as such. Then, because you’re using an OSC and the program needs to know that, change the algorithm to “Colour – Extract Ha” and proceed to integrate as you usually would. Once I have the stacked Ha data, I make no changes to it at all and save it as a 16bit TIFF file, ensuring I can easily identify it as the Ha image. I then go back and run “Colour – Extract Oiii” and repeat. Just for kicks, I’ll also pull out a full mono image that can be used as either a luminance layer or for the Sii channel later.
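To give a feel for what those extraction algorithms are doing under the bonnet, here’s a simplified sketch. I’m assuming an RGGB mosaic where the Ha line (around 656nm) falls on the red photosites and Oiii (around 501nm) on the green and blue ones; APP’s real routines are certainly more sophisticated than this, so treat it as an illustration only.

```python
import numpy as np

def extract_channels(raw):
    """Split an RGGB mosaic (2D array, even dimensions) into
    approximate Ha and Oiii images, each at half resolution."""
    r  = raw[0::2, 0::2]          # R photosites -> Ha-sensitive
    g1 = raw[0::2, 1::2]          # first G photosite
    g2 = raw[1::2, 0::2]          # second G photosite
    b  = raw[1::2, 1::2]          # B photosites
    ha   = r                      # Ha lands on red
    oiii = (g1 + g2 + b) / 3.0    # average the Oiii-sensitive sites
    return ha, oiii

# Tiny synthetic 4x4 mosaic just to show the shapes involved.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
ha, oiii = extract_channels(mosaic)
print(ha.shape, oiii.shape)       # each half-resolution: (2, 2)
```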
Combining and Stretching The Data
Previously I used to import the TIFF files into Photoshop and combine them that way. But after seeing Glenn Clouder’s video on combining OSC narrowband data I gave that a go. And the results are a lot more impressive.
I won’t spoil the video (it really IS worth a watch) but I’ll give a quick run down of the process as I understand it.
For starters you don’t need the TIFF files, so you can skip that step entirely unless you prefer to work with TIFFs; some do, which is why I’ve kept it in. You do still need the stacked Ha and Oiii FITS files, because that’s what you’re going to be working with. If, like me, your laptop is more sluggish than your own brain before its first coffee, you’ll find it easier to start this next part with a restart of APP. Assuming you have your Ha and Oiii FITS files and have restarted APP, let’s proceed.
The first thing you’ll need to do is load in the two files you need. These are the previous Ha and Oiii files you extracted. Going to the Tools tab, scroll down to Combine RGB. Select Add Channels after telling APP which palette you want to use, in this case SHO (Hubble) 1 or 2. The first channel is the Ha. When you double click on the Ha file it’ll come up with a pop-up. Change the filter selection to Hydrogen Alpha. Add channel again. This time it’s the Oiii file you want. Change the filter to Oxygen III. For the final channel you’re using the Oiii file again but this time you assign it to Sulphur II.
Generally speaking, the Oiii signal is often quite weak, so you need to boost this artificially. On the left side, scroll down to the Oiii channel and set the multiplier. Sometimes just x2 is enough. Recently I’ve done the Jellyfish Nebula and had to use a x7 multiplier because the Oiii was so weak. There’s no need to set anything else. Press (re)-calculate. The image it gives should be predominantly purple/green. If so, you’re on the right track. Hit Save and once it’s saved, press Cancel.
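For the curious, the channel mapping above amounts to something like the following sketch. The arrays and the multiplier are made-up examples, and APP obviously does far more than this internally; the point is just to show why the duo-band result starts out purple/green.

```python
import numpy as np

def combine_sho(ha, oiii, oiii_multiplier=2.0):
    """Map duo-band stacks onto an RGB image, SHO (Hubble) style:
    Sii -> red, Ha -> green, Oiii -> blue. With only two real
    channels, the boosted Oiii stack stands in for the missing Sii."""
    oiii_boosted = np.clip(oiii * oiii_multiplier, 0.0, 1.0)
    return np.dstack([oiii_boosted, ha, oiii_boosted])

ha   = np.full((2, 2), 0.6)          # reasonably strong Ha signal
oiii = np.full((2, 2), 0.1)          # weak Oiii, as is typical
rgb  = combine_sho(ha, oiii, oiii_multiplier=2.0)
print(rgb.shape)                     # (2, 2, 3)
```

Because red and blue carry the same (Oiii) data while green carries Ha, strong-Ha regions come out green and strong-Oiii regions come out magenta/purple, which matches the check described above.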
Double click the required file (there should only be one file listed) in the file manager below the main screen. Once it’s loaded, crop accordingly and save. It’s up to you if you need to rotate or flip the image at this point. I generally do exactly that as it allows me to see how the final image will be orientated as I work. I also don’t do any other processing such as light pollution removal, preferring to leave that for the final part of the processing in APP.
With the image loaded, look to the left panel again and scroll down to HSL Selective Colour. You’re now going to be altering three channels, in the following order: green, yellow and cyan. You can do these as many times as you like, but this is the order to do them in. This part is entirely subjective and the look will depend on how YOU like your images, but this will provide a starting point.
Green – alter the slider R<->CY by sliding it all the way to the left. Drop the G<->MG slider by about 30 then hit Calculate Current Adjustments. You should find that a lot of the green has now turned a pale orange. If you’re happy with the result then select Keep Current Adjustments.
Yellow – alter R<->CY again by sliding it all the way left. This time increase G<->MG by about 30, then hit Calculate. You’ll find that the orange has now become deeper. Again, if you’re happy with this, hit Keep Adjustments.
Cyan – zero the B<->YE slider. Calculate and Keep as above. The second part of this is the G<->MG slider. Drop this by about 30 and Calculate. Once you’re happy, hit Keep followed by Save. Once it’s saved Cancel out of the HSL tool and load the final file from the file manager at the bottom of the screen.
You’re now free to continue processing this as you usually would with an RGB image, i.e. light pollution removal, background calibration, star reduction etc. As a point to note, I’ve found that I often don’t need to do a star colour calibration if I’m processing narrowband data in this way.
Thank you for reading, and if you have any comments, please feel free. Clear skies all.