Pisukeman Site

>I'm David Regordosa. I love AI and Space, and that's why I created this page, to share the things I do.

Nerd

Image created using Midjourney AI


>My Stuff:



[My astrophotography images]

>These images are taken from my hometown, Igualada, with an ED APO 80mm f/6 refractor or a 20cm Newton telescope, using an ASI 533 MC PRO color camera. Stacked with Siril and post-processed with Photoshop.

My Images

>Go take a look at my Instagram account and subscribe! :)



[Meteor tracking using multi layer Grad-CAM analysis]

>Previously we were working on a model to automatically detect meteors in images/videos. This model allows us to detect whether a meteor is in an image/video frame with high precision.

>But, as an extension to this work, we wanted not only to detect meteors but also to track their position in the picture. This tracking is done using Class Activation Maps (CAM), which were first introduced by MIT in Learning Deep Features for Discriminative Localization.

>With this methodology and algorithm we are able to detect where the meteor is in the image, without a previous process of labeling each meteor in the images. We can do this using CAM, and specifically using multi-layer Grad-CAM analysis.

>A CAM is a weighted activation map, generated for each image, that helps to identify the region of the image the CNN is looking at when classifying the image as a meteor.

>Our proposed method is to use Grad-CAM (a type of CAM based on combining the activations and the gradients of each layer) on the last layer of the CNN to generate a Region of Interest (ROI). Once we have the ROI, we analyze earlier layers of the CNN, which have higher resolution but are less accurate in the prediction. We cross the ROI with the activations of the first CNN layer to get a more precise prediction of the meteor position in the image.


>All this work was published in a paper in Planetary and Space Science, available here, and also published on arXiv.

>Here you can see an example of a meteor captured by the SPMN (it's a gif, wait for the meteor....):

SPMN meteor

>And here you can see the resulting process of meteor tracking using Grad-CAM from different layers (below, left image). The blue box shows the region of interest of the outer layer (with less resolution but more accuracy). This region of interest comes from a CNN layer with low resolution and, because of this, it represents a large area of the image. We can use the Grad-CAM analysis of this layer to be sure we are pointing at the right portion of the image, but we need more accuracy. Once the algorithm finds this blue box, it starts looking for the region of interest of the inner layer of the CNN (using the activations of that layer), taking into consideration only the ones that are inside the blue box. As explained, the inner layers have higher resolution but less accuracy. We then select the weighted region of interest as the red box, which corresponds to the meteor.

>Also, as the algorithm works frame by frame, it's easy to calculate the trajectory by tracking the red boxes (below, right image).
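>For instance, here is a minimal sketch of that trajectory step, assuming the red box of each frame has already been found (the names and the straight-line fit are just an illustration, not the paper's code):

import numpy as np

#red_boxes: one (row_min, row_max, col_min, col_max) tuple per frame where a meteor was detected
def trajectory(red_boxes):
    #Use the center of each red box as the meteor position in that frame
    centers = np.array([((r0 + r1) / 2, (c0 + c1) / 2) for r0, r1, c0, c1 in red_boxes])
    #Fit a straight line through the centers to get the apparent trajectory on the image
    rows, cols = centers[:, 0], centers[:, 1]
    slope, intercept = np.polyfit(cols, rows, 1)
    return centers, slope, intercept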


Meteor Detection Meteor Detection Trajectory


>Here you can see a schema of how the tracking is done by analyzing the ResNet layers:


Meteor Tracking


>The code to analyze the Grad-CAM and get the weighted array:


#Hook and HookBwd are the forward/backward hook helpers from the fast.ai CAM notebook linked below,
#and x1 is the preprocessed input batch containing the frame to analyze
#cls=0 is the meteor class in the classification process, and level=-1 focuses on the last convolutional layer of the ResNet34
cls = 0
level = -1

with HookBwd(learn.model[0][level]) as hookg:
  with Hook(learn.model[0][level]) as hook:
    output = learn.model(x1)
    act = hook.stored
  output[0, cls].backward()
  grad = hookg.stored
w = grad[0].mean(dim=[1, 2], keepdim=True)
cam_map = (w * act[0]).sum(0)
#Normalize the weights
avg_acts = torch.div(cam_map, torch.max(cam_map))
#At this point avg_acts is an array of weights indicating the regions of interest in the image
#The dimensions of the array depend on the layer you are looking at
#Now, repeat the same with level=-5, which is the first convolutional layer of the ResNet34
level = -5
#...
#At the end you'll have two arrays of different sizes, both holding weights from 0 to 1
#(0 means no interest, 1 means the highest interest)

>This code snippet is adapted from this wonderful notebook from the fast.ai repo.
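>As a rough sketch of how the two weight arrays can then be crossed to get the blue and red boxes described above (here cam_last is the avg_acts obtained with level=-1 and cam_first the one obtained with level=-5; the function names and the 0.5 threshold are illustrative, not the exact algorithm from the paper):

import torch
import torch.nn.functional as F

def box_from_mask(mask):
    #Return (row_min, row_max, col_min, col_max) of the True cells of a boolean mask
    rows = torch.any(mask, dim=1).nonzero()
    cols = torch.any(mask, dim=0).nonzero()
    return rows.min().item(), rows.max().item(), cols.min().item(), cols.max().item()

def track_meteor(cam_last, cam_first, img_size, roi_thr=0.5):
    #cam_last: weights from level=-1 (low resolution, high confidence)
    #cam_first: weights from level=-5 (higher resolution, less reliable)
    #img_size: resolution of the original frame, e.g. (480, 640)
    #Upsample both maps to the frame resolution so they can be compared cell by cell
    up = lambda m: F.interpolate(m[None, None], size=img_size, mode='bilinear', align_corners=False)[0, 0]
    coarse, fine = up(cam_last), up(cam_first)
    #Blue box: region of interest given by the Grad-CAM of the last layer
    roi_mask = coarse >= roi_thr
    blue_box = box_from_mask(roi_mask)
    #Red box: strongest activations of the inner layer, keeping only the ones inside the blue box
    fine_in_roi = torch.where(roi_mask, fine, torch.zeros_like(fine))
    red_box = box_from_mask(fine_in_roi >= roi_thr * fine_in_roi.max())
    return blue_box, red_box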


>Here is an example of how to find the ROI using the Grad-CAM (image on the left), and then the selection of the meteor using the activations inside the ROI:

Meteor Detection

>As said before, the key point is that, with this approach, an astronomical observatory can easily train its own model (using transfer learning) and get a tool that detects and tracks meteors without even having to label their positions beforehand.

>Thanks for reading!



[Image of Galaxy M101 with supernova SN2023ixf]

>20cm Newton telescope, with an ASI 533 MC PRO color camera. 5 exposures of 180 seconds each. Stacked with Siril and post-processed with Photoshop.

>The supernova SN2023ixf is highlighted in the image.

M101

Galaxy M101 with supernova SN2023ixf



[BingMeImages: a small tool to create image datasets using Bing API]

>I was working on some models to automatically colorize black and white deep space images.

>So, the first step was to get some images of galaxies, nebulae, etc... I was searching for existing datasets, but I decided to build my own custom tool.

>That's why I built BingMeImages, a custom Python library that does some of the tasks needed to build an image dataset. Basically, it sends a list of queries to the Bing Image Search API, downloads a given number of images per query, removes duplicates and resizes everything to the size you choose.

>The result is that you can, for example, query for all the Messier and NGC objects and get a consolidated dataset of about 3k images, without duplicates, in less than an hour.

>For example, you can run the library with:


#Example of how to generate the queries
my_objects = []
for num in range(1, 111):
    my_objects.append('MESSIER ' + str(num))
for num in range(1, 7840):
    my_objects.append('NGC ' + str(num) + ' space')

#And then finally execute the whole process with something like:
#  my_objects: the queries we want to send to the Bing API
#  200: the number of images we want to download for each query
#  ./ufo: the folder to store all the images
#  160: we want the images to be resized to 160x160 pixels
createDataset(my_objects, 200, "./ufo", 160)
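>For reference, the steps inside createDataset boil down to querying the Bing Image Search API, then downloading, de-duplicating and resizing the images. Here is a simplified sketch of those steps (the endpoint, the key handling and the MD5 hashing are my own illustration, not necessarily how BingMeImages implements them):

import hashlib, os, requests
from PIL import Image

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"  #Bing Image Search v7 REST endpoint
BING_KEY = os.environ["BING_SEARCH_KEY"]  #your own API subscription key

def download_query(query, count, folder, size, seen_hashes):
    os.makedirs(folder, exist_ok=True)
    headers = {"Ocp-Apim-Subscription-Key": BING_KEY}
    results = requests.get(BING_ENDPOINT, headers=headers, params={"q": query, "count": count}).json()
    for i, item in enumerate(results.get("value", [])):
        try:
            data = requests.get(item["contentUrl"], timeout=10).content
        except requests.RequestException:
            continue
        #Skip exact duplicates by hashing the raw bytes
        digest = hashlib.md5(data).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        path = os.path.join(folder, f"{query}_{i}.jpg")
        with open(path, "wb") as f:
            f.write(data)
        #Resize to size x size pixels, as createDataset does with its last argument
        Image.open(path).convert("RGB").resize((size, size)).save(path)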
  

>The resulting folder:

Dataset

>You can download the Python library on my GitHub: BingMeImages

>Thanks for reading!



[Meteor detection using transfer learning on a ResNet34]

>I was working on a model to automatically detect meteors in images/videos.

>The model was created using 57k images from the Alphasky camera at the Observatori de Pujalt, using a transfer learning process on a pretrained ResNet34 neural network.

>The dataset was not balanced, due to the high number of no-meteor images compared to the ones with a meteor in them. So, I used data augmentation to help the model generalize correctly and avoid overfitting.

>Everything was done using the fast.ai libraries.
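>As a rough idea of what that setup looks like with fast.ai (the folder layout, image size and number of epochs here are placeholders, not the exact training configuration I used):

from fastai.vision.all import *

#Folder with one subfolder per class, e.g. ./frames/meteor and ./frames/no_meteor
dls = ImageDataLoaders.from_folder(
    "./frames", valid_pct=0.2, item_tfms=Resize(224),
    batch_tfms=aug_transforms())  #data augmentation to compensate for the unbalanced dataset

#Transfer learning: start from a ResNet34 pretrained on ImageNet and fine-tune it
learn = vision_learner(dls, resnet34, metrics=[error_rate, Recall()])
learn.fine_tune(5)
learn.export("guAIta_latest_version.pkl")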


Meteors

>The resulting model has a 0.98 recall on meteor detection. The model is available to download and use here (guAIta_latest_version.pkl)


Result

Model evaluation on test dataset


>To use the model, first install the Anaconda environment using this yaml file.

>Once you have set up the Anaconda environment, inference can be done with this code snippet:


from fastai.vision.all import load_learner

scoring = 0.9  #example confidence threshold; tune it to your needs
learn = load_learner("guAIta_latest_version.pkl")
predict = learn.predict(image)  #image: path or PIL image of the frame to classify
if predict[0] == "meteor" and predict[2][0] > scoring:
    pass  #Captured!
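>And since the model is also meant to work on video, here is a minimal sketch of running it frame by frame with OpenCV (the video path is just an example; this is not the observatory's actual capture pipeline):

import cv2
from fastai.vision.all import load_learner, PILImage

learn = load_learner("guAIta_latest_version.pkl")
scoring = 0.9  #example confidence threshold

cap = cv2.VideoCapture("night_sky.mp4")  #hypothetical input video
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    #OpenCV returns BGR arrays, so convert to RGB before building the image
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    predict = learn.predict(PILImage.create(rgb))
    if predict[0] == "meteor" and predict[2][0] > scoring:
        print(f"Meteor candidate at frame {frame_idx}")
    frame_idx += 1
cap.release()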
  

>You can find more information about the model training in my Master's degree dissertation (in Spanish) and in my GitHub repository GuAIta.

>As said before, the key point is that with this approach an astronomical observatory can easily train its own model using transfer learning. All this work is explained in my Master's degree dissertation.

>Thanks for reading!



[Using Autoencoders to add exposure to Galaxy Images]

>To understand this post, it is important to understand what autoencoders are. I've been working with autoencoders for a long time, and I'm absolutely in love with the properties they have.

>An autoencoder is just a neural network with a specific topology, used to learn data encodings in an unsupervised manner. An autoencoder can learn a set of data while producing a dimensionality reduction, and the network can be trained to ignore noise.

>In other words, reducing the autoencoder to the minimum possible configuration, it has an input layer, a hidden layer called the "bottleneck" (because it is smaller than the input one), and an output layer with the same size as the input one (check the image below; credits: https://www.jeremyjordan.me/autoencoders/).

Autoencoder

https://www.jeremyjordan.me/autoencoders/
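>To make that concrete, here is a minimal PyTorch sketch of such a topology (the layer sizes, the 64-dimensional bottleneck and the single-channel images are arbitrary choices, just to show the input / bottleneck / output structure):

import torch
from torch import nn

class GalaxyAutoencoder(nn.Module):
    def __init__(self, img_size=106, latent_dim=64):
        super().__init__()
        n = img_size * img_size  #flattened single-channel image
        #Encoder: compress the input down to the bottleneck (latent space)
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(n, 512), nn.ReLU(), nn.Linear(512, latent_dim))
        #Decoder: reconstruct an image of the same size from the bottleneck
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, n),
            nn.Sigmoid(), nn.Unflatten(1, (1, img_size, img_size)))

    def forward(self, x):
        return self.decoder(self.encoder(x))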

>Now let's imagine that we train this autoencoder to be able to reproduce the input. We can feed the neural network with galaxy images and train it to produce an output that is a reconstruction of the input. Note: the dataset used comes from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/

Autoencoder2

Own creation, using images from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/
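>Training it to reproduce its input is then a standard reconstruction loop. Here is a sketch with MSE loss (galaxy_dataset, the batch size and the number of epochs are placeholders):

import torch
from torch import nn
from torch.utils.data import DataLoader

model = GalaxyAutoencoder()  #the sketch defined above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

#galaxy_dataset: any Dataset returning image tensors of shape (1, 106, 106) with values in [0, 1]
loader = DataLoader(galaxy_dataset, batch_size=128, shuffle=True)

for epoch in range(20):
    for imgs in loader:
        recon = model(imgs)
        loss = loss_fn(recon, imgs)  #the target is the input itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()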

>At this point we'll have a neural network that is able to reconstruct a galaxy image from an input galaxy image... mmm... maybe not so spectacular, but it has some interesting features.

>If we split our autoencoder in two parts, the encoder and the decoder, we get the following features. With the encoder we'll be able to generate a dimensionality reduction of each galaxy. Note that the bottleneck layer is also known as the latent space.

Autoencoder3

Own creation, using images from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/

>And the other part of the autoencoder, the decoder, can be used to pick a point in our latent space and reconstruct the corresponding galaxy.

Autoencoder4

Own creation, using images from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/
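>Continuing the sketch above, both halves can then be used on their own:

import torch

#model and imgs: the trained autoencoder and a batch of galaxies from the reconstruction sketch above
#Encoder on its own: compress a batch of galaxies into the latent space
latent = model.encoder(imgs)   #shape (batch, latent_dim)

#Decoder on its own: pick any point of the latent space and reconstruct a galaxy from it
z = torch.randn(1, 64)         #a made-up point in the 64-dimensional latent space
generated = model.decoder(z)   #shape (1, 1, 106, 106)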

>OK, now that we understand what an autoencoder can do for us, let's try something different. Imagine that we create a dataset of galaxy images with some noise added and train the network to generate the same galaxy without the noise. So we train the autoencoder with the noisy data as the inputs and the clean data as the outputs. Some examples I did:

Example1 Example2

Own creation, using images from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/
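>Compared to the reconstruction loop sketched earlier, the only change is the pair of tensors fed to the loss; something along these lines (the noise level is arbitrary):

#model, loader, optimizer and loss_fn: the same objects as in the reconstruction sketch above
for epoch in range(20):
    for imgs in loader:
        #Inputs: the same galaxies with Gaussian noise added; targets: the clean originals
        noisy = (imgs + 0.1 * torch.randn_like(imgs)).clamp(0, 1)
        loss = loss_fn(model(noisy), imgs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()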

>Note that these are test images, kept apart from the training set in order to test the autoencoder once training was finished. So the autoencoder had never seen those images, and it was still able to reproduce a version without noise. Nice.

>Now, let's think about another use of the autoencoder. Some amateur astronomers (I'm one of them) have small telescopes which give very faint galaxy images. To get good galaxy images with an amateur telescope you need a good CCD, long exposure times and a very good telescope calibration (polar alignment). In some cases it is very difficult to take long-exposure images. So, why not train our autoencoder with galaxy images manipulated to have low exposure as inputs, and the original galaxy images as outputs?

>We are going to train the autoencoder with more than 61k galaxy images (106x106 pixels each), and the autoencoder will learn how to generate a "normal" galaxy image from a low-exposure one.
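>A simple way to simulate those low-exposure inputs is to scale the pixel intensities down before feeding the autoencoder (the 0.2 factor is an arbitrary choice, not necessarily how the dataset used in this post was built):

#Inside the same training loop as before: under-exposed galaxies in, original galaxies out
low_exposure = 0.2 * imgs
loss = loss_fn(model(low_exposure), imgs)  #the target is still the original, well-exposed image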

Galaxies

https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/

>The result is not perfect, but looks nice.