Back in October 2021, I set out on a journey to Uspallata, Mendoza. The idea was to reach a site dark enough to take a shot at the Andromeda galaxy. This would be my first attempt at getting this picture, and it took a lot of planning: from southern latitudes, the Andromeda galaxy isn’t above the horizon for most of the year, so I had selected a site accordingly.
The site was good for astronomical observations: I made sure every nearby town was either south or east of the site’s location, and chose a sort of valley for framing. I was accompanied by my girlfriend, my brother, and his girlfriend. The view from there was astounding; the stars above us seemed so big and close, the Milky Way was brighter than we had ever seen it, and both Magellanic Clouds completed the spectacle.
I took a lot of pictures, mainly of Andromeda (the planned one plus two other shots), but I couldn’t resist trying the Magellanic Clouds. With my mind split between the moment and post-processing, I took 30 frames per shot to average out the noise. I also took dark frames and bias frames, as DeepSkyStacker requires them to work properly, so I had a lot of material to work with back at home.
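For the curious, here is a toy sketch (in Python with NumPy, using made-up numbers, not DeepSkyStacker’s actual code) of what those dark frames are for: a master dark taken at the same exposure carries both the thermal signal and the readout offset, so subtracting it from the lights leaves roughly the true sky value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: 30 light frames plus 30 matching dark frames.
# All values are invented for illustration.
true_sky = 100.0
dark_current, bias_offset = 20.0, 50.0
lights = true_sky + dark_current + bias_offset + rng.normal(0, 5, (30, 32, 32))
darks = dark_current + bias_offset + rng.normal(0, 5, (30, 32, 32))

# A master dark taken at the same exposure already contains the bias,
# so subtracting it removes thermal signal and readout offset at once.
master_dark = np.median(darks, axis=0)
calibrated = lights - master_dark

print(round(calibrated.mean(), 1))  # close to the true sky value
```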
What was I thinking? I assumed that taking lots of shots would do the magic, but stacking doesn’t improve exposure (a lesson learned the hard way). It does wonders for noise and detail, but it simply won’t give you a better exposure than that of a single frame, so you have to make sure the histogram is what you’re looking for before taking the 30 or more frames. Well, I was disappointed: all of the planning, the journey, everything crumbled to almost nothing because I had overlooked one of the most important things.
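The lesson can be shown in a few lines of NumPy (a simulation with invented numbers, not my real frames): averaging a stack leaves the mean brightness exactly where it was, and only shrinks the noise, by roughly the square root of the frame count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 30 underexposed frames: a dim constant signal plus noise.
signal = 10.0  # the "exposure" -- dim, and stacking won't change it
frames = signal + rng.normal(0.0, 5.0, size=(30, 100, 100))

stacked = frames.mean(axis=0)

# Brightness stays where it was...
print(round(frames[0].mean(), 1), round(stacked.mean(), 1))
# ...while the noise drops by roughly sqrt(30).
print(round(frames[0].std(), 2), round(stacked.std(), 2))
```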
I wasn’t going to give up, and decided it was, nevertheless, a perfect opportunity to learn a couple of things. So I stacked the frames using every algorithm available, fine-tuning each enough to get the most out of it, hoping one of them might hold the answer. To my surprise, none of them improved the image. Something caught my attention, though: one of the versions showed a green line of dots in the lower right of the finished frame. What was this? It didn’t show up in the others, and it certainly didn’t show up in the individual pictures.
The stacking algorithm in question is “maximum”: as I understand it, it keeps the highest value for each pixel across every frame in the stack. Now, I took the individual frames at ISO 6400, 5 seconds each, with a Nikon D90. For something to appear as a dot it has to be moving very slowly, and in order to appear in the picture at all it has to shine above the background noise. I’ve had other shots ruined by satellites, but here I was almost sure I wouldn’t have that problem: it was midnight (00:32), and sunset had been at 19:47, more than four hours earlier.
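Here is a toy sketch (Python/NumPy, simulated frames, not DeepSkyStacker’s internals) of why a maximum stack reveals a slow-moving dot that an average stack erases: the average divides the dot’s brightness by the frame count, while the maximum keeps it at full strength in every position it visited, leaving a line of dots.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, h, w = 30, 64, 64

# Background noise in every frame, plus one dim dot that drifts one
# pixel per frame -- a stand-in for a slow object crossing the stack.
frames = rng.normal(0.0, 1.0, size=(n_frames, h, w))
for i in range(n_frames):
    frames[i, 30, 10 + i] += 3.0  # barely above the noise floor

mean_stack = frames.mean(axis=0)  # the dot is averaged away
max_stack = frames.max(axis=0)    # the dot survives as a line of dots

# The track row stands out against a comparison row only in the max stack.
print(round(max_stack[30, 10:40].mean() - max_stack[10, 10:40].mean(), 2))
print(round(mean_stack[30, 10:40].mean(), 2))
```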
Upon careful inspection of the individual frames I could distinguish the faint dot from the background noise and stars; the object was moving from south to north, and it looked just like noise. If it hadn’t been moving, and if I hadn’t blown out the exposure, I wouldn’t have found it. Had the object been standing still, it would have been impossible to tell it was there.
See for yourself.



The short video above was obtained by rendering all the pictures in the stack as frames. Bear in mind I needed to do some processing, since the object sits so close to the noise floor that resize algorithms would otherwise filter it out. I chose to use B-splines; should I have used something else?
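For reference, here is roughly what that upscaling step looks like with SciPy’s `ndimage.zoom` (my actual tool may have differed; this is just an illustration). Its `order` parameter selects the spline degree: `order=0` is blocky nearest-neighbour, while `order=3` is a cubic B-spline that spreads a faint dot smoothly instead of snapping it to hard pixel edges.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# A small simulated crop: background noise with one faint dot.
crop = rng.normal(0.0, 0.2, size=(16, 16))
crop[8, 8] = 1.0

# Upscale 4x two ways: nearest-neighbour vs. cubic B-spline.
nearest = ndimage.zoom(crop, 4, order=0)
bspline = ndimage.zoom(crop, 4, order=3)

# Both keep the dot as the brightest feature; the spline version
# renders it as a smooth bump rather than a 4x4 block.
print(bspline.shape, np.unravel_index(bspline.argmax(), bspline.shape))
```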