>I'm David Regordosa. I love AI and Space, and that's why I created this page, to share the things I do.
Image created using Midjourney AI
>My Stuff:
>These images were taken from my hometown, Igualada, with an ED APO 80mm f/6 refractor telescope or a 20cm Newtonian telescope, using an ASI 533 MC Pro color camera. Stacked with Siril and post-processed with Photoshop.
>Go take a look at my Instagram account and subscribe! :)
>Previously we were working on a model to automatically detect meteors in images/videos. This model allows us to detect, with high precision, whether a meteor appears in an image or video frame.
>But, as an extension to this work, we wanted not only to detect meteors but also to track their position in the picture. This tracking is done using Class Activation Maps (CAM), which were first introduced by MIT in Learning Deep Features for Discriminative Localization.
>With this methodology and algorithm we are able to detect where the meteor is in the image, without needing to previously label the position of each meteor in the images. We can do this using CAM, and specifically using a multi-layer Grad-CAM analysis.
>A CAM is a weighted activation map, generated for each image, that helps identify the region of the image the CNN is looking at when it classifies the image as a meteor.
>Our proposed method is to use Grad-CAM (a type of CAM based on combining the activations and the gradients of a layer) on the last layer of the CNN to generate a Region of Interest (ROI). Once we have the ROI, we analyze earlier layers of the CNN, which have higher spatial resolution but are less accurate for the prediction. We cross the ROI with the activations of the first CNN layer to get a more precise estimate of the meteor's position in the image.
>All this work was published in a paper in Planetary and Space Science, available here, and also on arXiv.
>Here you can see an example of a meteor captured by the SPMN (it's a gif, wait for the meteor....):
>And here you can see the resulting meteor tracking process using Grad-CAM from different layers (below, left image). The blue box shows the region of interest of the outer layer (lower resolution but higher accuracy). Because this ROI comes from a low-resolution layer of the CNN, it covers a large area of the image. We can use the Grad-CAM analysis of this layer to be sure we are pointing at the right portion of the image, but we need more precision. Once the algorithm finds this blue box, it then looks for the region of interest of an inner layer of the CNN (using that layer's activations), but only considers the activations that fall inside the blue box. As explained, the inner layers have higher resolution but lower accuracy. We then select the weighted region of interest as the red box, which corresponds to the meteor.
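>To make the crossing step more concrete, here is a minimal sketch in NumPy of how the two maps could be combined. It assumes the two normalized weight arrays produced by the snippet further down have been converted to NumPy arrays (a coarse one from the last layer, a fine one from the first convolutional layer); the function name, the 0.9 threshold and the assumption that the fine map's size is a multiple of the coarse map's size are illustrative, not the exact implementation from the paper.
import numpy as np

def crossed_meteor_position(coarse_map, fine_map, threshold=0.9):
    # coarse_map: normalized Grad-CAM weights from the last layer (low resolution, high confidence)
    # fine_map:   normalized activations from the first conv layer (high resolution, less discriminative)
    # threshold:  illustrative value; coarse cells above it form the "blue box" ROI
    coarse_mask = coarse_map >= threshold * coarse_map.max()

    # Upscale the coarse ROI to the fine map's resolution
    # (assumes the fine map's size is a multiple of the coarse map's size)
    scale_y = fine_map.shape[0] // coarse_map.shape[0]
    scale_x = fine_map.shape[1] // coarse_map.shape[1]
    mask_up = np.repeat(np.repeat(coarse_mask, scale_y, axis=0), scale_x, axis=1)

    # Keep only the fine activations inside the ROI and pick the strongest one (the "red box")
    restricted = np.where(mask_up, fine_map, 0.0)
    y, x = np.unravel_index(np.argmax(restricted), restricted.shape)
    return x, y   # meteor position (x, y) in fine-map coordinates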
>Also, since the algorithm works frame by frame, it's easy to calculate the trajectory by tracking the red boxes (below, right image).
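>As a rough illustration (not the exact code from the paper), the trajectory can be estimated by collecting the red-box positions of consecutive frames and fitting a straight line through them:
import numpy as np

# box_centers: hypothetical list of (x, y) meteor positions, one per frame,
# obtained from the red boxes of the previous step
centers = np.array(box_centers)

# Fit a straight line y = a*x + b through the detected positions
a, b = np.polyfit(centers[:, 0], centers[:, 1], deg=1)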
>Here you can see a schema of how the tracking is done by analyzing the ResNet layers:
>The code to analyze the Grad-CAM and get the weighted array:
cls = 0
level = -1
with HookBwd(learn.model[0][level]) as hookg:
    with Hook(learn.model[0][level]) as hook:
        output = learn.model(x1)
        act = hook.stored
    output[0, cls].backward()
    grad = hookg.stored

w = grad[0].mean(dim=[1, 2], keepdim=True)
cam_map = (w * act[0]).sum(0)

# Normalize the weights
avg_acts = torch.div(cam_map, torch.max(cam_map))

# At this point avg_acts holds an array of weights, indicating the regions of interest in the image.
# The dimensions of the array depend on the layer you are looking at.
# Now, repeat the same with level -5, which is the first convolutional layer of the ResNet34.
level = -5
# ...
# At the end you'll have two arrays of different sizes, both holding weights from 0 to 1
# (0 means no interest, 1 means the highest interest).
>This code snippet is taken from this wonderful notebook in the fast.ai repo.
>Here is an example of how to find the ROI using Grad-CAM (image on the left), and then the selection of the meteor using the activations inside the ROI:
>As said before, the key points here are:
>With this approach, an astronomical observatory can easily train its own model (using transfer learning) and get a tool that detects and tracks meteors without even having to label their positions beforehand.
>Thanks for reading!
>20cm Newtonian telescope, with an ASI 533 MC Pro color camera. 5 images of 180 seconds each. Stacked with Siril and post-processed with Photoshop.
>Highlighted in the image is the supernova SN2023ixf.
>I was working on some models to automatically colorize black and white deep space images.
>So, the first step was to get some images of galaxies, nebulas, etc. I searched for existing datasets, but decided to build my own custom tool.
>That's why I built BingMeImages, a custom Python library that does the main tasks needed to build an image dataset. Basically, it sends the queries to the Bing image API, downloads the images, removes duplicates, and resizes them.
>The result is that you can, for example, query for all the Messier and NGC objects and then get a consolidated dataset of about 3k images, without duplicates, in less than an hour.
>For example, you can run the library with:
# Example of how to generate queries
my_objects = []
for num in range(1, 111):
    my_objects.append('MESSIER ' + str(num))
for num in range(1, 7840):
    my_objects.append('NGC ' + str(num) + ' space')

# And then finally execute the whole process with something like:
# my_objects: the queries we want to send to the Bing API
# 200: the number of images we want to download for each query
# ./ufo: the folder to store all the images
# 160: we want the images to be resized to 160x160 pixels
createDataset(my_objects, 200, "./ufo", 160)
>The resulting folder:
>You can download the Python library from my GitHub: BingMeImages
>Thanks for reading!
>I was working on a model to automatically detect meteors in images/videos.
>The model was created using 57k images from the Alphasky camera at the Observatori de Pujalt, through a transfer learning process on a pretrained ResNet34 neural network.
>The dataset was not balanced, due to the high number of no-meteor images compared to those containing a meteor. So I used data augmentation to help the model generalize correctly and avoid overfitting.
>Everything was done using Fast.ai libraries
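>For reference, here is a minimal sketch of what this training setup looks like with fast.ai. The folder layout, class names, transform values and number of epochs are illustrative, not the exact configuration used for guAIta:
from fastai.vision.all import *

# Dataset folder with one subfolder per class, e.g. "meteor" and "no_meteor" (hypothetical layout)
dls = ImageDataLoaders.from_folder(
    "dataset",
    valid_pct=0.2,
    item_tfms=Resize(256),
    batch_tfms=aug_transforms(max_rotate=10.0, max_zoom=1.1, max_lighting=0.3),  # data augmentation
)

# Transfer learning on a pretrained ResNet34
learn = vision_learner(dls, resnet34, metrics=Recall())
learn.fine_tune(5)
learn.export("guAIta_latest_version.pkl")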
>The resulting model has a 0.98 recall on meteor detection. The model is available to download and use here (guAIta_latest_version.pkl)
>You can use the model by first installing the Anaconda environment using this yaml file
>Once you set up the Anaconda environment, the inference can be done using this code snippet:
from fastai.vision.all import load_learner

# image: the frame to classify; scoring: the confidence threshold you choose
learn = load_learner("guAIta_latest_version.pkl")
predict = learn.predict(image)
if predict[0] == "meteor" and predict[2][0] > scoring:
    # Captured!
    print("Meteor detected")
>You can find more information about the model training in my Master's degree dissertation (in Spanish) and in my GitHub repository GuAIta
>As said before, the key points here are:
>With this approach, an astronomical observatory can easily train its own model (using transfer learning).
>All this work is explained in my Master's degree dissertation.
>Thanks for reading!
>To understand this post, it's important to understand what autoencoders are. I've been working with autoencoders for a long time, and I'm absolutely in love with the properties they have.
>An autoencoder is just a neural network with a specific topology, used to learn data encodings in an unsupervised manner. An autoencoder can learn a dataset, producing a dimensionality reduction, while being trained to ignore noise.
>In other words, reducing the autoencoder to the minimum possible configuration, it has an input layer, a hidden layer called the "bottleneck" (because it's smaller than the input one), and an output layer with the same size as the input one (check the image below. Credits: https://www.jeremyjordan.me/autoencoders/).
>Now let's imagine that we train this autoencoder to reproduce the input. We can feed the neural network with galaxy images and train it to produce, as output, a reconstruction of the input. Note: the dataset used comes from https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/
>At this point we'll have a neural network that is able to reconstruct a galaxy image from an input galaxy image...mmm...maybe not so spectacular, but it has some interesting features.
>If we split our autoencoder in two parts, the encoder and the decoder, we get the following capabilities. With the encoder we'll be able to generate a dimensionality reduction of each galaxy. Note that the bottleneck layer is also known as the latent space.
>And the other part of the autoencoder, the decoder, can be used to choose a point in our latent space and reconstruct the corresponding galaxy.
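>As a small sketch of this split (assuming the encoder/decoder pair defined further down in this post), getting the latent vector of a galaxy and reconstructing it back is just:
# x_test: array of galaxy images, e.g. shape (n, 106, 106, channels)
latent = encoder.predict(x_test)          # dimensionality reduction into the latent space
reconstructed = decoder.predict(latent)   # back from the latent space to a galaxy image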
>OK, now that we understand what an autoencoder can do for us, let's try something different. Imagine that we create a dataset of galaxy images with some noise added and train the network to generate the same galaxy without noise. So we train the autoencoder with the noisy data as the inputs and the clean data as the outputs. Some examples I did:
>Note that these are test images, kept separate from the training set in order to test the autoencoder once training was finished. So the autoencoder had never seen these images, and it was still able to reproduce a version without noise. Nice.
>Now, let's think about another use of the autoencoder. Some amateur astronomers (I'm one of them) have small telescopes which give very faint galaxy images. To produce good galaxy images with an amateur telescope you need a good CCD, long exposure times, and a very good telescope calibration (polar alignment). In some cases it's very difficult to get long exposure images. So, why not train our autoencoder with galaxy images manipulated to simulate low exposure as inputs, and the original galaxy images as outputs?
>We are going to train the autoencoder with more than 61k galaxy images (106x106 pixels), and the autoencoder will learn how to generate a "normal" galaxy image from a low exposure one.
>The result is not perfect, but looks nice.
>Note that the point here is to get a reconstructed image as similar as possible to the original one. Also keep in mind that these are galaxy images never seen by our autoencoder, so the trick is that when the autoencoder receives a low exposure image of a galaxy it has never seen, it is able to reproduce the galaxy image without the low exposure.
>And last, with Keras, the definition of this autoencoder is pretty easy. First, we read the dataset and split it into a training set and a test set (10% of the dataset for testing).
import numpy as np
from sklearn.model_selection import train_test_split

def simulate_low_exposure(x, min_val=0.0, perc=0.3):
    # Subtract a fixed amount from every pixel, clamping at min_val, to roughly
    # simulate a shorter exposure (the default values here are just illustrative).
    return np.where(x - perc > min_val, x - perc, min_val)

x_train, x_test = train_test_split(x_train, test_size=0.1, random_state=42)
x_train_noise = simulate_low_exposure(x_train)
x_test_noise = simulate_low_exposure(x_test)
>The simulate_low_exposure function is just a noise function, not exactly a low exposure (to be honest), but it does the trick. And finally we define the autoencoder, a very simple one:
import numpy as np
from keras.models import Sequential
from keras.layers import InputLayer, Flatten, Dense, Reshape

def build_one_layer_autoencoder(img_shape, code_size):
    # The encoder
    encoder = Sequential()
    encoder.add(InputLayer(img_shape))
    encoder.add(Flatten())
    encoder.add(Dense(code_size))

    # The decoder
    decoder = Sequential()
    decoder.add(InputLayer((code_size,)))
    decoder.add(Dense(np.prod(img_shape)))
    decoder.add(Reshape(img_shape))

    return encoder, decoder
>The function parameters are the image shape and the size of the bottleneck layer (number of nodes), and it returns the encoder and the decoder separately, in order to allow us to play :)
from keras.models import Model
from keras.layers import Input

# The size of the first layer will be width x height of IMG_SHAPE,
# and the bottleneck layer will have, for example, 1000 nodes.
encoder, decoder = build_one_layer_autoencoder(IMG_SHAPE, 1000)

inp = Input(IMG_SHAPE)
code = encoder(inp)
reconstruction = decoder(code)

autoencoder = Model(inp, reconstruction)
autoencoder.compile(optimizer='adamax', loss='mse')

# And now we are ready to train the autoencoder with the noisy galaxies (x_train_noise)
# as input and the original galaxies (x_train) as output.
# x_test_noise and x_test are the test dataset.
history = autoencoder.fit(x=x_train_noise, y=x_train,
                          epochs=10,
                          validation_data=[x_test_noise, x_test])
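>Once trained, reconstructing the (previously unseen) noisy test galaxies is a one-liner, and a quick way to check the result is to plot the low exposure input, the reconstruction and the original side by side:
import matplotlib.pyplot as plt

# Reconstruct the low-exposure test galaxies with the trained autoencoder
reconstructed = autoencoder.predict(x_test_noise)

# Quick visual check on the first test galaxy
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, img, title in zip(axes,
                          [x_test_noise[0], reconstructed[0], x_test[0]],
                          ["low exposure", "reconstruction", "original"]):
    ax.imshow(img.squeeze(), cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.show()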
>That's all. I've only posted a little bit of the code, the most interesting part. There are a lot of posts about noise reduction using autoencoders; I got the idea and some code snippets reading some of them.
>I like the approach of treating low exposure as noise.
This post was originally posted at: https://dev.to/pisukeman/autoencoders-to-add-exposure-to-galaxy-images-4jlh