Category Archives: Blog

ASI1600MM-COOL test

M27 luminance with ASI1600MM-COOL, RGB with Canon EOS600Da

2016-07-28 23:30-01:00
Clear, no moon, no wind, 12 degrees C

2016-08-02 00:00-02:00
Low clouds, no moon, no wind, 10 degrees C

2016-10-03 21:00-03:30
Some initial high clouds, some wind, 85% humidity, 5 degrees C

Took some more test luminance frames on 2016-07-28, 2016-08-02 and 2016-10-03 to decide upon gain, offset and exposure time. Added RGB from a previous DSLR image of the same object, see

66x30s + 17x60s + 9x90s + 19x120s + 8x180s + 26x240s + 3x300s exposures with the ASI1600MM-COOL and an IDAS LPS-D1 filter were used for luminance (244.5 minutes total exposure), taken at the gain 77, offset 12 and gain 100, offset 17 settings with the camera cooled to -20C. 16x300s + 8x240s + 9x180s exposures with the Canon EOS600Da were used for RGB (139 minutes total exposure for RGB).
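As a quick sanity check, the listed luminance sub-frames can be totalled in a few lines of Python (the counts are just those listed above):

```python
# Sum the luminance sub-frames: (count, exposure in seconds)
subs = [(66, 30), (17, 60), (9, 90), (19, 120), (8, 180), (26, 240), (3, 300)]
total_s = sum(n * t for n, t in subs)
total_min = total_s / 60
print(total_min)  # 244.5 minutes of luminance
```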

With the gain 77 setting, the 120s exposures seem optimal (with some caveat, since the sky was not fully dark). With gain 100, offset 17 I could not see any difference in the amount and detail of nebulosity and stars when stacking 23 30s exposures compared to 3 240s exposures, apart from the 30s stack having less noise and fewer saturated stars. This suggests that 30s exposures could be used.

ASI1600MM-COOL first light

ASI1600MM-COOL first light – M27 (Dumbbell Nebula)

2016-07-19 00:00-01:00
Clear, full moon, no wind, 10 degrees C

Taken at full moon and dusk (astronomical night still a couple of weeks away) using just 5 60s unguided LIGHT exposures at -20C, unity gain, with my new ASI1600MM-COOL camera.
100 DARK
200 BIAS

Filter: IDAS LPS-D1
Lens: Sky-Watcher Esprit 80ED + Sky-Watcher field flattener
Mount: HEQ5 Pro
Skywatcher SynGuider auto guider and Celestron 80mm guide scope
Software: Sequence Generator Pro, PixInsight, Photoshop

ASI1600MM-COOL arrived


On 2016-07-15 I received the new ASI1600MM-COOL camera to supplement (and in the future replace) my Canon 600Da modded DSLR. I intend to use it for acquiring luminance (and later Ha) data to be combined with RGB data acquired by the DSLR. So far, I have tested it with my iMac running Parallels Desktop and Windows 10 and just finished a sequence of 100 300s darks at -20C using Sequence Generator Pro without any hiccups. I am unable to use a USB 2.0 connection with the camera, but USB 3.0 works flawlessly even with a 5m cable connected through an additional 5m active extension cable to the iMac. I'm using the ASCOM 6.2 platform with the ASI Camera ASCOM V1.0.2.9 driver together with SGP. Since the cooler works at around 55-75% power to get from an ambient 23C down to -20C, I will probably be able to run the camera at -25C or even -30C during actual night-time photography. There is a little amp glow present in the darks at -15C, which is almost gone at -20C. Both darks and bias frames also compare very favorably to the corresponding frames from my DSLR. I will return with more information as soon as the astrophotography season begins after Aug 1st.

Heart and Soul Nebulae

Heart and Soul Nebulae

Testing the PixInsight GradientMergeMosaic tool.

2014-04-03 22:00-05:00 (2 hours lost due to a combination of hardware and software problems)
Clear, moon at the horizon, no wind, 0 degrees C
8x480s LIGHT
28x480s DARK
200 BIAS

2015-12-13 17:30-21:00 (2 hours lost due to Canon driver issue with Windows 10 and USB 3.0 setting in VMWare Fusion 8.1.0)
Some low clouds followed by fog, no wind, -3 degrees C
9x480s LIGHT
60x480s DARK
200 BIAS

Camera: Canon EOS-600Da, IDAS LPS-P2, ISO800
Lens: Sky-Watcher Esprit 80ED + Sky-Watcher field flattener
Mount: HEQ5 Pro
Skywatcher SynGuider auto guider and Celestron 80mm guide scope
Software: BackyardEOS, PixInsight, Photoshop

Supermoon Eclipse 2015

Supermoon Eclipse 2015

The supermoon lunar eclipse photographed from my backyard.

6 s exposure at ISO200.

Camera: Canon EOS-600Da, IDAS LPS-P2, ISO200
Lens: Sky-Watcher Esprit 80ED + Sky-Watcher field flattener
Mount: HEQ5 Pro
Software: BackyardEOS, Photoshop


I’ve also made a short timelapse movie of the event from my images taken that night:

Guide to DSLR-image processing in PixInsight posted

I’ve posted a guide (in Swedish) on how to process DSLR-images with PixInsight at:

Guide: Behandling av DSLR-bilder i PixInsight

A rough translation in English:

Here is an example of a workflow for processing DSLR images in PixInsight. Note that the workflow is meant as a basis for general processing in PixInsight of astrophotos taken with a DSLR camera; other values and steps are always required to achieve optimal results for each individual image. I would appreciate additional tips and steps, as this workflow has evolved through trial and error to some extent.

====== LINEAR IMAGE ======

1. Stack LIGHTs, DARKs and BIAS frames (a master BIAS of a few hundred images, preferably produced with the help of the SuperBias tool, is recommended) and possibly FLATs (they may introduce additional noise) with BPP. Select the Linear Fit Clipping rejection algorithm for the LIGHTs (best for many images; it will deal with most images even if they include satellite tracks or not completely round stars) or Winsorized Sigma Clipping (also quite good). Tick CFA images and Optimize dark frames, and use the RGGB Bayer/mosaic pattern and the VNG DeBayer method. Select one of the best images as the Registration Reference Image.
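The idea behind these rejection algorithms can be sketched in Python. This is a toy kappa-sigma clip, not PixInsight's actual Linear Fit Clipping or Winsorized implementation, but it shows how per-pixel outliers (e.g. a satellite trail) are discarded before averaging:

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5, iterations=2):
    """Average a stack of frames, iteratively rejecting per-pixel
    outliers that deviate more than kappa standard deviations."""
    stack = np.asarray(frames, dtype=np.float64)   # shape (N, H, W)
    keep = np.ones_like(stack, dtype=bool)
    for _ in range(iterations):
        data = np.where(keep, stack, np.nan)
        mu = np.nanmean(data, axis=0)
        sigma = np.nanstd(data, axis=0)
        keep &= np.abs(stack - mu) <= kappa * np.maximum(sigma, 1e-12)
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)

# One frame contains a bright satellite trail; it gets rejected:
frames = [np.full((4, 4), 0.1) for _ in range(10)]
frames[0][2, :] = 5.0
result = sigma_clip_stack(frames)   # ~0.1 everywhere, trail removed
```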

2. Open the generated master light file and turn on STF without linking the RGB channels, to show the image adapted for displays (nonlinear), since the stacking produces a linear image.

3. Use DC to crop any ugly edges.

4. Select a Preview of an area of the image (one twentieth or so of the full image) with few stars and no nebulosity. Run BN with the area as reference.

5. Link the RGB channels in STF and apply it again.

6. Select another preview of an area of the image with many stars of different colors (one twentieth up to half of the picture is about right) or, if the image consists largely of a galaxy, the whole galaxy. Run CC with the preview as white reference (use Structure Detection when stars are selected instead of a galaxy). Reuse the preview from the previous step as background reference. If stars are selected as white reference, you should also adjust the Lower Limit of the white reference so that only the stars, and no nebulosity or background, are included (check with the R:, G: and B: values in the status bar at the bottom of the main window; a nominal value should be about 0.1).
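The gist of BN in steps 4-6 can be illustrated in Python. This is a simplification I wrote for illustration, not PixInsight's exact algorithm: measure the per-channel background level in the reference preview and rescale the channels so those levels match.

```python
import numpy as np

def neutralize_background(img, ref_box):
    """Rescale R, G and B so the background reference region ends up
    with equal per-channel medians (simplified BackgroundNeutralization)."""
    y0, y1, x0, x1 = ref_box
    med = np.median(img[y0:y1, x0:x1], axis=(0, 1))  # background per channel
    return img * (med.mean() / med)                  # equalize the channels

# A flat frame with a color cast becomes neutral:
img = np.ones((8, 8, 3)) * np.array([0.2, 0.3, 0.4])
balanced = neutralize_background(img, (0, 8, 0, 8))
```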

7. Apply STF with RGB channels linked.

8. Use ABE (or DBE if the object occupies most of the image; clicking 20-100 points spread regularly over the background should be enough for DBE) with Function Degree 4 and Subtraction, followed by Division. Then Function Degree 9 and Subtraction, followed by Division. Make sure Normalize, Discard background model and Replace target image are checked when ABE or DBE is applied.
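ABE and DBE both fit a smooth model of the sky background and then subtract or divide it out. A toy analogue in Python (ABE itself samples the background far more carefully and uses the Function Degree set above):

```python
import numpy as np

def fit_background(img, order=2):
    """Least-squares fit of a low-order 2-D polynomial to an image,
    returning the fitted background model."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    # Design matrix of monomials x^i * y^j with i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

# A pure linear gradient is modelled essentially exactly;
# subtracting removes additive gradients, dividing flattens vignetting.
h, w = 20, 30
yy, xx = np.mgrid[0:h, 0:w]
gradient = 0.2 + 0.5 * (xx / w) + 0.3 * (yy / h)
model = fit_background(gradient)
flattened = gradient - model   # close to zero everywhere
```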

9. Use the CBR script (at least for Canon cameras; I do not know how it works for other brands) and be sure Protect from highlights is checked. Rotate the image 90 degrees and use the CBR script again, then rotate the image back. The script works best if all individual subs are taken with a similar rotation.
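Horizontal banding can be modelled as a per-row offset, which is the essence of what CBR corrects. A simplified sketch in Python (the `protect` threshold is my stand-in for the script's Protect from highlights option):

```python
import numpy as np

def reduce_banding(img, protect=0.8):
    """Shift every row so the median of its faint pixels matches the
    global median, removing horizontal banding. For vertical banding,
    rotate the image 90 degrees, run again, and rotate back."""
    out = img.astype(np.float64).copy()
    global_med = np.median(out)
    for r in range(out.shape[0]):
        faint = out[r] < protect                 # ignore stars/highlights
        if faint.any():
            out[r] -= np.median(out[r][faint]) - global_med
    return out

# A frame with one banded row is flattened back to the sky level:
img = np.full((10, 10), 0.1)
img[3] += 0.05
flat = reduce_banding(img)
```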

10. Clone the image and drag the New Instance button from STF to the button bar at the bottom of HT. Drag the New Instance button from HT to the cloned image to convert from linear to nonlinear image.

11. Inverse mask the linear original image with the cloned nonlinear image (menu item Mask->Select Mask and check the Invert Mask and select Mask->Enable Mask).

12. Reduce noise with MMT, 5 layers is adequate with the following values (Layer: [t, s, a]):
1: [0.0500, 0.06, 1.0000]
2: [0.0300, 0.06, 1.0000]
3: [0.0200, 0.06, 1.0000]
4: [0.0100, 0.06, 1.0000]
5: [0.0050, 0.06, 1.0000]

If the image is very noisy, you can use 6 layers and:
1: [0.7000, 0.25, 1.5000]
2: [0.5000, 0.25, 1.5000]
3: [0.3000, 0.25, 1.5000]
4: [0.2000, 0.25, 1.5000]
5: [0.2000, 0.25, 1.5000]
6: [0.2000, 0.25, 1.5000]

13. Turn off the mask (by Mask->Enable Mask again)

14. Pull the New Instance button from STF to the button bar at the bottom of HT. Drag the New Instance button from HT to the picture to convert from linear to nonlinear image.

====== NONLINEAR IMAGE =======

15. Make sure the picture is selected in HT so that the RGB graphs are visible. Drag the left slider below the graphs so that the percentage of Shadows does not go over 0.0000%. Drag the middle slider to the right so that the darkest parts of the picture have R:, G: and B: values around 0.1 (check by investigating the values on the Real-Time Preview). Apply to the picture.
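Behind the middle slider is PixInsight's midtones transfer function, which maps the chosen midtones balance m to 0.5 while pinning 0 and 1. A small Python version:

```python
def mtf(m, x):
    """Midtones transfer function: monotone curve with mtf(m, 0) = 0,
    mtf(m, m) = 0.5 and mtf(m, 1) = 1."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A small midtones balance lifts a dark background pixel strongly:
lifted = mtf(0.01, 0.002)   # a 0.002 background pixel is pushed up
```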

16. Run SCNR (Green, Average Neutral, 1.00, Preserve Lightness). (It is probably possible to run this anytime after the image has become nonlinear; the most important thing is that there are no green areas when the image is finished.)

17. Run MS 1000 iterations.

18. Apply HT according to step 15.

19. If necessary, reduce the noise with MLT (preferably with a high-contrast copy of the image as an inverse mask) or ACDNR (it is difficult to give any general values for these, since it depends on how noisy the picture is). Select a preview and test against this before applying noise reduction to the entire image. The important thing is not to reduce the noise so much that the faintest stars or nebulosity details disappear!

20. Run HDRMT (1 iteration, no Overdrive, B3 Spline (5), To Lightness, Lightness Mask). Select the number of layers between 3 and 6 (the number of layers which makes the stars smallest tends to be OK).

21. Run ET SMI with Order 0.3-1.0 (the less noise, the higher you can probably go), Smoothing 0 and Lightness Mask. If the colors become too strong, you can run this on luminance only (Channel Extraction with CIE L*a*b*, apply ET on the L image, then run Channel Combination with CIE L*a*b* on the original image with the three images created by Channel Extraction selected as Source Images).

22. Apply HT according to step 15.

23. Run ET PIP with Order 0.3-1.0 (the less noise, the higher you can probably go), Smoothing 0 and Lightness Mask. If the colors become too strong, you can run this on luminance only (Channel Extraction with CIE L*a*b*, apply ET on the L image, then run Channel Combination with CIE L*a*b* on the original image with the three images created by Channel Extraction selected as Source Images).

24. Apply HT according to step 15.

25. Improve the colors with CS according to taste.

26. You can save the image in .png format for further color processing, noise reduction and signal improvement in other imaging software or continue with PixInsight until you are satisfied.

====== Process NEBULOSITY separately if present in the image =======

27. Clone the picture, call it o.

28. Clone o, call it s.

29. Apply SM on s to create a star mask: Noise Threshold 0.1000, Scale 7, Large-Scale 3, Small-Scale 1, Compensation 2 and Smoothness 8 are good starting values.

30. Apply MT on s (with the star mask from the previous step activated): Interlacing 1, Iterations 3, Amount 1.00, circular Structuring Element, Size 7 (49 elements), 1 way. Start by running Erosion. Alternate with Closing, Morphological Median and Midpoint until the smaller stars disappear.
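The Erosion operator that drives this step replaces each pixel with the local minimum, which is what makes small stars shrink and vanish. A minimal Python sketch with a square element (MT uses a circular one):

```python
import numpy as np

def erode(img, size=3):
    """Grayscale erosion: each pixel becomes the minimum of its
    size x size neighbourhood, shrinking small bright features."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + size, x:x + size].min()
    return out

# A lone faint 'star' disappears after one pass:
star_field = np.zeros((7, 7))
star_field[3, 3] = 1.0
eroded = erode(star_field)
```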

31. Apply RS to s to mask the big stars: pull the Lower Limit so that as many remaining stars as possible are selected and as little nebulosity as possible is included (marked in white on the otherwise black background). Use Smoothness between 10 and 30.
You can run this step iteratively on the created mask if you want more control over the star mask.

32. Use the mask from the previous step, inverted, on s so that only the nebula is affected.

33. Run HDRMT and/or LHE (LHE very gently) on the nebula to increase the contrast in the details.

34. Process the nebulosity according to taste, e.g. use ET according to step 21-24 to highlight nebulosity even more.

35. Use PixelMath with the following formula on o:
F = 0.4;
(1 - (1 - $T)*(1 - s))*F + $T*~F
(F = 0.2-0.6 is usually good)
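Reading the PixelMath as a screen blend mixed back with the original (my interpretation of the formula; in PixelMath `~F` means 1 - F), step 35 in Python would be the following, applied per pixel (it works on NumPy arrays as-is):

```python
def blend_nebulosity(o, s, f=0.4):
    """Screen-blend the processed nebulosity s into the original o,
    then mix with the untouched original at strength f."""
    screen = 1.0 - (1.0 - o) * (1.0 - s)     # screen blend only brightens
    return screen * f + o * (1.0 - f)        # ~F in PixelMath is 1 - F

# Where s is dark (no nebulosity) the original passes through unchanged:
unchanged = blend_nebulosity(0.5, 0.0)   # stays ~0.5
boosted = blend_nebulosity(0.2, 0.5)     # nebulosity lifts the pixel
```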

36. Apply HT according to step 15.

37. Repeat steps 35-36 if possible without producing an unnatural picture.

38. Done! Save in .png for further processing in another image program and for saving in .jpg format (avoid storing .jpg with PixInsight since the color profiles are not always OK).


BPP: BatchPreprocessing
STF: ScreenTransferFunction
DC: DynamicCrop
ABE: AutomaticBackgroundExtractor
DBE: DynamicBackgroundExtraction
CBR: CanonBandingReduction
BN: BackgroundNeutralization
CC: ColorCalibration
HT: HistogramTransformation
CS: ColorSaturation
MMT: MultiscaleMedianTransform
MS: MaskedStretch
MLT: MultiscaleLinearTransform
SM: StarMask
MT: MorphologicalTransformation
RS: RangeSelection
HDRMT: HDRMultiscaleTransform
ET: ExponentialTransformation
LHE: LocalHistogramEqualization



Testing PixInsight Drizzle Integration

M57, SN2013ev

The first picture is a test using PixInsight's drizzle integration on the same image data as the picture below it. Although the images have been processed quite differently, there is definitely an improvement in the amount of detail recoverable when using 2X drizzle (artificially doubling the resolution of the original data).
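The core drizzle idea can be caricatured in a few lines of Python. This is a bare-bones "point-kernel" version of my own devising for illustration; the real algorithm (and PixInsight's DrizzleIntegration) uses shrunken square drops with flux-conserving overlap weights:

```python
import numpy as np

def drizzle2x(frames, offsets):
    """Drop each input pixel onto a 2x finer grid at its dithered
    sub-pixel position and average overlapping drops."""
    h, w = frames[0].shape
    acc = np.zeros((2 * h, 2 * w))
    cnt = np.zeros((2 * h, 2 * w))
    for img, (dy, dx) in zip(frames, offsets):
        ys = np.clip(np.round((np.arange(h) + dy) * 2).astype(int), 0, 2 * h - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * 2).astype(int), 0, 2 * w - 1)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Two frames dithered by half a pixel fill complementary fine-grid cells:
frame = np.ones((2, 2))
out = drizzle2x([frame, frame], [(0.0, 0.0), (0.5, 0.5)])
```

Note that the resolution gain only materializes when the subs are dithered by sub-pixel amounts, which is why drizzle needs many dithered exposures.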



After a lot of fiddling with lenses and stray light I have finally made the adjustments necessary to reduce the diffraction spikes that have plagued the photos taken with my SkyWatcher Esprit 80ED refractor. Removing the dew shield reveals a soft cloth glued around the lens chamber. Under the cloth there are six screws around the lens chamber at one end and another six around the other. By unscrewing each of these an equal amount (about one full turn) using a hex wrench, the diffraction spikes seem to have vanished! However, when the temperature goes far below 0 degrees C they seem to become more visible again, so my plan for next winter is to unscrew the screws a bit more (if I dare 🙂 ).


M42 region

The weather this winter has been awful. However, I managed a first test with my new Sky-Watcher Esprit 80ED (80mm f/5) triplet refractor telescope on Monday the 17th of February. Very satisfied with the ease of use and sharp picture quality, at least compared to my vintage Meade 2080 Schmidt-Cassegrain telescope. Not so satisfied with the quality of the stars: the six diffraction spikes from each star suggested pinched optics. After this first test I have loosened the tight lens construction of the telescope, and a brief test suggests much improvement in the photographic quality of the stars. Waiting for the next clear night to show the results. In the meantime, this first light image has to suffice (quite OK, particularly considering the city arena lights being switched on fully below the object and a rising moon…)


M57, SN2013ev

Took an image of M57 – The Ring Nebula before the fog clouds rolled in this Friday. To my surprise I also caught the supernova SN2013ev in the faraway spiral galaxy IC1296 (225 million light-years), which lies in the line of sight of the much closer M57 (2,300 light-years). Currently this supernova is shining at magnitude 17, close to the magnitude 15 of the whole galaxy IC1296; to me it seems to have brightened somewhat in this picture.


SkyWatcher SynGuider

I have now made some tests with the SkyWatcher SynGuider autoguider I have recently acquired along with a Celestron 80mm guide scope attached to my good old Meade 2080 8-inch Schmidt-Cassegrain telescope. My experiences with it so far are pretty good. It can take a long time to find a good guide star to lock on to (up to 1 hour so far) but since the ratio of usable light subs is increased from 10-30% without autoguider up to 100% with the autoguider, it definitely pays off using it. However the added weight of the guide scope and autoguider required me to also purchase a third counter weight for my mount and I’m afraid I’m now at the limit of the mount’s specified carrying capacity.



Must say I'm pleased with the behaviour of my new mount. Here is the first astrophoto with it: M27, the Dumbbell Nebula, a planetary nebula in the constellation Vulpecula. The photo was taken 2012-10-09 (the first clear night since I bought the mount at the beginning of September). 15x60s exposures at ISO 1600 (14x60s dark, 10 bias and 8 flat exposures were added), processed with PixInsight.