You’ll hear astrophotographers talk a LOT about their “workflow.” I don’t know for sure if there’s an official definition for this, but for me it includes everything after carrying out the polar alignment all the way through to the finished image. I’ve already covered polar alignment and guiding in my walkthrough, so I’ll assume you’ve already read that and want to have a look at what else goes on. If not, then here is the link to that walkthrough. Recently I’ve started getting into narrowband imaging, which I’m finding quite a lot of fun, although the processing is different from the regular broadband imaging, which I cover in my other workflow walkthrough.
So you’ve done your polar alignment, you’ve selected your target (let’s say NGC 281, the Pacman Nebula, in this case), and slewed and plate-solved to the target. There’s any number of software packages that will let you image, and in this case I’m going to assume you’re using APT (AstroPhotography Tool.) A number of factors will determine the length of each frame, not least the accuracy of your guiding / tracking. In my case I usually go for frames of between two and five minutes, or “subs.”
One thing to bear in mind with the sub length is the potential loss of data. For example, let’s say you shoot for an hour. Using 5 minute subs, that’s 12 frames. If two of those, for whatever reason, go wrong, then that’s 10 minutes of imaging data you’ve lost. If you shoot 10 minute subs and two go wrong, that’s 20 minutes lost. If you’re shooting 1 minute subs and two go wrong, that’s only 2 minutes lost. At the moment I’m pushing 10 minute subs, but not with a reasonable enough degree of consistency, so I restrict myself on the serious sessions to 5 minutes. Potentially, the longer the sub length, the more data you can acquire in a single frame, although there does come a point where you hit the rule of diminishing returns. I’m not going to go into that in this article, other than to say that 5 minutes for me is reasonable under the Class 4 skies I have. Although at the time of writing my main rig is down (yes, STILL), and my exposure times are restricted to a minute using the Star Adventurer mount. Nevertheless, I’m still obtaining data, so it’s all good for the time being.
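The trade-off above is simple arithmetic, and it can help to play with the numbers for your own session lengths. Here’s a minimal sketch (the session length and failure count are just illustrative values):

```python
# Sketch of the sub-length trade-off: for a fixed session, longer subs
# mean fewer frames, so each failed frame costs more integration time.

def lost_minutes(session_minutes, sub_minutes, bad_subs):
    """Return (total subs in the session, minutes lost to bad subs)."""
    total_subs = session_minutes // sub_minutes
    return total_subs, bad_subs * sub_minutes

# One-hour session, two frames ruined (e.g. by a guiding hiccup):
for sub_len in (1, 5, 10):
    subs, lost = lost_minutes(60, sub_len, bad_subs=2)
    print(f"{sub_len}-minute subs: {subs} frames, {lost} minutes lost if 2 fail")
```

Running it reproduces the figures in the paragraph above: 12 frames and 10 minutes lost at 5 minute subs, 20 minutes lost at 10 minutes, and only 2 minutes lost at 1 minute subs.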
You might ask, then, why not just shoot 1 minute subs and minimise that data loss? Because an hour of 60 second subs gathers less data than an hour of 5 minute subs. The other thing to consider is that you need to try and obtain the best signal to noise ratio (SNR) you can without over-saturating the pixels. Again, I won’t explain it here, but Dylan O’Donnell gives a great example on his YouTube channel “Star Stuff.” The link for the video is here, and I can highly recommend subscribing to his channel if you’re not already.
For narrowband imaging I’m using the ZWO Duo Narrowband Filter. This is a (relatively) cheaper way to introduce yourself to the joys of narrowband imaging. One of the upshots is that because you’re capturing your data only in specific wavelengths, in this case hydrogen-alpha (Ha) and oxygen (Oiii), you’re not as affected by light pollution or lunar washout. Obviously, if you’re going to be daft enough to image right next to a full moon, then you’re going to get hammered by that. But by and large, if you exercise a modicum of common sense, you can get away with much more. Indeed, since starting to image in narrowband, I haven’t had to even look at the “remove light pollution” tool in Astro Pixel Processor at all.
A lot of astrophotographers use a mono camera and separate filters for each of the main wavelengths (hydrogen, oxygen and sulphur.) You’re talking serious amounts of money there, so if you’re like me and money is an issue, then staying with a one-shot colour (OSC) camera and using a duo narrowband filter can greatly lessen the financial impact. It also means that you’re able to collect the hydrogen and oxygen data simultaneously and not have to spread the imaging sessions across too many nights. Nights that are few and far between these days, especially in the UK.
As an aside, Amanda often points out that I must have the patience of a saint to do astrophotography because there’s sooooo many things that can and do go wrong, plus the challenge of living in one of the most changeable climates in the northern hemisphere, all of which conspire to inhibit doing any meaningful astronomy and astrophotography. Then I get a good night, or a few good nights, and she knows WHY I persist. It’s not just about the images for me. It’s about sitting out there, under perfect skies, seeing the depths of space with my own eyes, and being able to free my soul from the rigours of life to fly free among the stars, and imagine the warmth of a hundred million stars unfettered by these earthly ties. For me, personally, it’s also about guarding my mental health. We often do the things we need in order to protect our physical health, and ignore the health of our own minds. This hobby, this PASSION, gives me the space (pun intended) to try and keep my own mind free and allows me the time to check in with myself. It’s never been just about the images.
The above image is a single frame of NGC 281 (Pacman Nebula) using APT. Looking at the different sections, you can see all the information you need, including what would be the guide graph on the right. This saves switching between APT and PHD2. There is also a numerical display over on the left giving the “APT State”. The lower that number, the better. The graph will help keep track of the trend in guiding accuracy. That being said, try not to get too hung up on chasing the numbers in PHD2. So long as the stars look round, then it’s going well.
In APT you can define a plan, whereby you set the sub length, the number of images and the camera gain, in this case 270, which is pretty much the “lowest read noise” setting for my particular camera. The plan I would usually use here is 4 hours of 5 minute subs at 270 gain, which is my usual “go to” for DSO imaging these days, depending on the brightness of the target.
So you have your images from the session, 4 hours’ worth of subs, plus calibration frames (darks, flats and dark flats.) What I usually do at this point is load the light frames (the subs) into a stacking program, such as Deep Sky Stacker (DSS). For DSOs I usually use Astro Pixel Processor (APP) to stack and do the initial cleaning up (crop and light pollution removal, plus star colour calibration.) However, I retain DSS simply as a tool to run through the captured frames and ditch any bad ones, and that’s simply because DSS is so lightweight and easy to use. Once I’ve loaded them all into DSS, I work through each frame and erase the bad ones. You can hover the cursor over the frame itself and inspect the roundness of the stars. You want them nice and tight and round, without any “trailing.” Some would consider this the long way round of sorting your data, and in fairness there are plenty of other ways. But I would always suggest you find a way of doing it that works for you, because there is no “right” or “wrong” way.
Loading Into Astro Pixel Processor (APP)
When you first load up APP, if you’ve just come from using DSS, it can look insanely complicated. Thanks to Stacey over at AstroStace and her YouTube tutorials, I’ve learned not to be so afraid of it and have now fully incorporated it into my workflow. It’s a lot more powerful than DSS, but not as much as PixInsight (PI.) It makes for a very good intermediate tool, though, without the complexity of PI, and I would highly recommend taking the time to learn its capabilities.
Once you’ve confirmed your working directory (I usually use the same root directory my frames are in), the first thing you’ll notice are the numbered tabs over on the left. As a side note, when I’m working through the initial data, I leave it in the APT directory, but once I’ve done that, I transfer the remaining good frames across to their own directory and then break it down into date order, for example C:/Astrophotography/NGC281/Date/Lights, and just change the “Lights” part of that structure to whatever the frame type is, i.e. darks, flats, etc. Ultimately it’s whatever works for you, but I find a good directory structure helps with the organisation of my data, especially as it’s often shot across different sessions.
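If you want to automate building that kind of folder layout, a few lines of Python will do it. This is just a sketch of the Target/Date/FrameType structure described above; the root path, target name and date are placeholders, not a fixed convention:

```python
# Sketch: create Target/Date/<FrameType> folders for an imaging session.
# Root, target and date below are illustrative examples only.
from pathlib import Path

def make_session_dirs(root, target, date,
                      frame_types=("Lights", "Darks", "Flats", "DarkFlats")):
    """Create one folder per frame type and return the created paths."""
    dirs = []
    for frame_type in frame_types:
        d = Path(root) / target / date / frame_type
        d.mkdir(parents=True, exist_ok=True)  # no error if it already exists
        dirs.append(d)
    return dirs

# e.g. make_session_dirs("C:/Astrophotography", "NGC281", "2023-10-14")
```

Swap the frame type names for whatever calibration frames you shoot.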
Here’s where things get interesting. Whereas with broadband images you would pretty much just go straight for the stacking algorithm that gives a straight-up final RGB image, with this you need to take a slightly different approach. You want the specific wavelengths of data, the Ha and the Oiii. If this were a mono camera with a dedicated filter, then it wouldn’t be a problem. If you’re using an OSC and a multi-bandpass filter, as in this instance, then you need to isolate the specific wavelengths individually.
So what you would do is ensure that you’re using the right Bayer matrix which, let’s be honest, is usually RGGB, and then click “Force Bayer/CFA.” Usually I’ll run the stack normally to pull out the RGB as usual. Once I have that stacked image, making no changes to it, I save it as a 16-bit TIFF file, identifying it in the filename as such. Then, because you’re using an OSC and the program needs to know that, change the algorithm to “Colour – Extract Ha” and proceed to integrate as you usually would. Once I have the stacked Ha data, I make no changes to it at all and save it as a 16-bit TIFF file, ensuring I can easily identify it as the Ha image. I then go back, run “Colour – Extract Oiii” and repeat. Just for kicks, I’ll also pull out a full mono image that can be used as either a luminance layer or for the Sii channel later.
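To give a feel for what those extraction steps are doing under the bonnet: on an RGGB sensor the Ha signal lands in the red-filtered pixels, while the Oiii signal lands in the green- and blue-filtered pixels. The sketch below pulls those pixel groups out of a raw mosaic; it is emphatically NOT APP’s actual algorithm, just an illustration of the idea:

```python
import numpy as np

# Illustration only: split an RGGB Bayer mosaic into its colour-filtered
# pixel groups. Ha ~ red pixels; Oiii ~ green and blue pixels.

def extract_rggb_channels(mosaic):
    """Split a raw RGGB mosaic into R, G (averaged) and B subframes."""
    r  = mosaic[0::2, 0::2]   # top-left of each 2x2 block (red)
    g1 = mosaic[0::2, 1::2]   # top-right (green)
    g2 = mosaic[1::2, 0::2]   # bottom-left (green)
    b  = mosaic[1::2, 1::2]   # bottom-right (blue)
    g  = (g1 + g2) / 2.0      # average the two green samples
    return r, g, b

mosaic = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 raw frame
r, g, b = extract_rggb_channels(mosaic)
ha = r                          # Ha falls in the red channel
oiii = (g + b) / 2.0            # Oiii falls in green and blue
```

Each extracted subframe is half the resolution of the mosaic in each axis, which is part of why a mono camera with dedicated filters still wins on detail.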
Combining and Stretching The Data
Going into Photoshop, I’ll import all the images and then, using the RGB image as a template, I’ll put the Ha image into the red channel and the Oiii image into BOTH the green and blue channels. This produces a palette known as HOO, or hydrogen/oxygen/oxygen.
You can of course decide quite arbitrarily to alter that. For example, you could pull an extra full mono image from the original data and decrease the signal on it enough to place it into the green channel as a pseudo-Sii channel.
Either way, at this point I would proceed to process the image as I normally would, coming at each channel separately with levels and curves adjustments, as well as star and noise reduction. Once I have the individual channels how I like them (and you can check the overall RGB image as you go), it’s at this point I’ll look at cropping in order to tidy up the edges and clean up the stacking artifacts.
Traditionally the Ha signal can be quite overpowering so I’ll try and boost the Oiii and, if I’ve done it, the pseudo-Sii channels.
Aside from any global adjustments afterwards, such as contrast etc., I’ll then save and share. And that’s pretty much it. The image below is just over 4 hours on the Pacman Nebula (NGC 281) using the method described above, minus the fake Sii channel, so it’s just the Ha and Oiii.
Thank you for reading, and if you have any comments, please feel free. Clear skies all.