Posts by Tags

Optimization

Second order optimization methods and Neural Networks

1 minute read

Published:

Optimization of deep neural networks is often done using gradient-based methods such as mini-batch gradient descent and its extensions: Momentum, RMSprop, and Adam. Second-order optimization methods such as Newton’s method, BFGS, etc. are widely used in many areas of statistics and machine learning. Why are these methods not popular in deep learning?
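As a minimal 1-D sketch (illustrative only, not from the post) of why second-order methods are attractive in the first place: Newton’s update uses curvature, x ← x − f′(x)/f″(x), and lands on a quadratic’s minimum in a single step. For a network with n parameters the same update needs an n × n Hessian and its inverse, which is the usual scaling objection.

```python
# One Newton step on a toy quadratic f(x) = (x - 3)^2 + 1, minimized at x = 3.
# A gradient-descent step would only move part of the way; Newton's step,
# scaled by the inverse curvature, jumps straight to the minimum.

def newton_step(x, grad, hess):
    """One scalar Newton update: x - g / H."""
    return x - grad(x) / hess(x)

grad = lambda x: 2.0 * (x - 3.0)   # f'(x)
hess = lambda x: 2.0               # f''(x), constant for a quadratic

x1 = newton_step(10.0, grad, hess)
print(x1)  # 3.0
```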

bayesian inference

Information Maximizing Neural Networks

3 minute read

Published:

We would like to learn the probability of some model conditioned on the data, p(model|data). In a typical Bayesian setup one would estimate this with a likelihood function p(data|model) and a prior probability p(model). This requires making assumptions about the functional forms of these two distributions. In cosmological experiments the likelihood function is assumed to be a multivariate Gaussian whose mean and covariance are given by the model and the data covariance matrix, respectively. There are a few issues here:
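The standard pipeline described above can be sketched in a few lines. This is a hedged toy version (the names and the one-parameter “model” are stand-ins, not from the post), with a univariate Gaussian so the data covariance collapses to a scalar variance:

```python
import math

def log_gaussian_likelihood(data, mu, var):
    """log p(data | model=mu) under the *assumed* Gaussian functional form."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (d - mu) ** 2 / (2 * var)
               for d in data)

def log_prior(mu):
    """A flat (improper) prior: log p(model) = const."""
    return 0.0

def unnormalized_log_posterior(data, mu, var):
    # log p(model | data) = log p(data | model) + log p(model) + const
    return log_gaussian_likelihood(data, mu, var) + log_prior(mu)

data = [0.9, 1.1, 1.0]
# With a flat prior, the posterior peaks at the sample mean (here ~1.0).
print(unnormalized_log_posterior(data, 1.0, 0.1) >
      unnormalized_log_posterior(data, 2.0, 0.1))  # True
```

Everything downstream of this sketch inherits the Gaussian assumption baked into `log_gaussian_likelihood`, which is exactly the kind of assumption the post questions.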

computer vision

Organizing a directory workflow for a Semantic Segmentation task

2 minute read

Published:

Semantic segmentation is the task of assigning a label to each pixel in an image, so you can think of it as per-pixel binary or multi-label classification. That means the targets have the same size as their corresponding images. How do we organize the directory for such a task, and how can we make use of the ImageDataGenerator? Let’s start with a simple example.
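One layout that works with Keras (a sketch of a common convention, not necessarily the one the post settles on) mirrors the masks directory against the images directory, with one extra subfolder level because `ImageDataGenerator.flow_from_directory` expects class subdirectories even when `class_mode=None`:

```python
from pathlib import Path

# Masks mirror images; the trailing "img" folder is the dummy class
# subdirectory that flow_from_directory requires.
layout = [
    "data/train/images/img",
    "data/train/masks/img",
    "data/val/images/img",
    "data/val/masks/img",
]

root = Path("segmentation_demo")
for sub in layout:
    (root / sub).mkdir(parents=True, exist_ok=True)

created = sorted(p.relative_to(root).as_posix() for p in root.rglob("img"))
print(created)
```

With this layout, one generator can flow from `images/` and a second from `masks/` using `class_mode=None` and the same `seed`, so image and target batches stay aligned.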

data compression

Information Maximizing Neural Networks

3 minute read

Published:

We would like to learn the probability of some model conditioned on the data, p(model|data). In a typical Bayesian setup one would estimate this with a likelihood function p(data|model) and a prior probability p(model). This requires making assumptions about the functional forms of these two distributions. In cosmological experiments the likelihood function is assumed to be a multivariate Gaussian whose mean and covariance are given by the model and the data covariance matrix, respectively. There are a few issues here:

deep learning

Organizing a directory workflow for a Semantic Segmentation task

2 minute read

Published:

Semantic segmentation is the task of assigning a label to each pixel in an image, so you can think of it as per-pixel binary or multi-label classification. That means the targets have the same size as their corresponding images. How do we organize the directory for such a task, and how can we make use of the ImageDataGenerator? Let’s start with a simple example.

Second order optimization methods and Neural Networks

1 minute read

Published:

Optimization of deep neural networks is often done using gradient-based methods such as mini-batch gradient descent and its extensions: Momentum, RMSprop, and Adam. Second-order optimization methods such as Newton’s method, BFGS, etc. are widely used in many areas of statistics and machine learning. Why are these methods not popular in deep learning?

Mixture density networks for hard to evaluate conditional probabilities in cosmology

1 minute read

Published:

One of the challenges of cosmological parameter estimation is marginalizing out nuisance parameters, such as the parameters that model the connection between galaxies and dark matter. For such marginalizations we are required to evaluate the conditional probability p(galaxy properties | dark matter halo properties). The list of galaxy properties consists of stellar mass, star formation rate, etc., and the list of dark matter halo properties consists of mass, maximum circular velocity, etc.
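The core idea of a mixture density network can be sketched without any training machinery: the network maps the conditioning variables (halo properties) to the weights, means, and scales of a Gaussian mixture over the target (galaxy properties). Below, the “network head” is a hand-written stand-in, everything else is illustrative only:

```python
import math

def mdn_params(x):
    """Stand-in for network outputs: mixture weights, means, stddevs given x."""
    return [0.7, 0.3], [x, 2.0 * x], [0.5, 1.0]

def conditional_density(y, x):
    """p(y | x) = sum_k w_k * N(y; mu_k(x), sigma_k(x)^2)."""
    weights, means, sigmas = mdn_params(x)
    total = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        total += w * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return total

# A valid density: positive everywhere, integrates to one over y.
print(conditional_density(1.0, 1.0) > 0.0)  # True
```

Training such a model amounts to maximizing `conditional_density` (its log, in practice) over the observed (x, y) pairs, with a real network replacing `mdn_params`.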

Information Maximizing Neural Networks

3 minute read

Published:

We would like to learn the probability of some model conditioned on the data, p(model|data). In a typical Bayesian setup one would estimate this with a likelihood function p(data|model) and a prior probability p(model). This requires making assumptions about the functional forms of these two distributions. In cosmological experiments the likelihood function is assumed to be a multivariate Gaussian whose mean and covariance are given by the model and the data covariance matrix, respectively. There are a few issues here:

keras

Organizing a directory workflow for a Semantic Segmentation task

2 minute read

Published:

Semantic segmentation is the task of assigning a label to each pixel in an image, so you can think of it as per-pixel binary or multi-label classification. That means the targets have the same size as their corresponding images. How do we organize the directory for such a task, and how can we make use of the ImageDataGenerator? Let’s start with a simple example.

likelihood free inference

Information Maximizing Neural Networks

3 minute read

Published:

We would like to learn the probability of some model conditioned on the data, p(model|data). In a typical Bayesian setup one would estimate this with a likelihood function p(data|model) and a prior probability p(model). This requires making assumptions about the functional forms of these two distributions. In cosmological experiments the likelihood function is assumed to be a multivariate Gaussian whose mean and covariance are given by the model and the data covariance matrix, respectively. There are a few issues here:

machine learning, feature engineering, feature importance

Many flavors of feature importance

5 minute read

Published:

I recently read this excellent book on financial machine learning, which has a whole chapter dedicated to feature importance and its importance! It offers nice guidelines and some of the best practices for investigating feature importance in problems where we would like to know the extent to which different features contribute to the outcome of a machine learning model. Let’s imagine you are trying to predict whether you need to sell or buy stocks of a certain commodity in the market. Given the historical time evolution of the price, you can hand-engineer a large number of features. This includes:
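One of the flavors such a chapter covers is permutation importance, which can be sketched in a few lines: shuffle one feature column and measure how much the model’s error grows. The model and data below are toy stand-ins, not anything from the book:

```python
import random

random.seed(0)
# Two features, but only the first one actually drives the target.
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * x1 + 0.0 * x2 for x1, x2 in X]

def model(row):
    """A 'fitted' model that has learned the true relationship."""
    return 3.0 * row[0]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col):
    """Error increase after shuffling column `col` (breaking its link to y)."""
    base = mse(X, y)
    shuffled = [r[col] for r in X]
    random.shuffle(shuffled)
    Xp = [r[:col] + [v] + r[col + 1:] for r, v in zip(X, shuffled)]
    return mse(Xp, y) - base

# The informative feature should score strictly higher than the noise feature.
print(permutation_importance(X, y, 0) > permutation_importance(X, y, 1))  # True
```

The appeal of this flavor is that it is model-agnostic: it only needs predictions, not access to coefficients or tree splits.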

natural language processing

An experiment with named entity recognition

1 minute read

Published:

Named entity recognition (NER) is the practice of recognizing names in a given body of text and identifying the entity category associated with them: persons, locations, etc.
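To make the task concrete, here is a deliberately tiny rule-based stand-in (real NER systems use trained sequence models, and sentence-initial capitalization alone would fool this one): tag capitalized spans and guess the category from a small hypothetical gazetteer.

```python
import re

# Toy gazetteer; anything capitalized but unlisted defaults to PER.
LOCATIONS = {"Paris", "Tokyo"}

def toy_ner(text):
    """Return (span, label) pairs for capitalized word runs in `text`."""
    entities = []
    for match in re.finditer(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b", text):
        span = match.group()
        label = "LOC" if span in LOCATIONS else "PER"
        entities.append((span, label))
    return entities

print(toy_ner("Ada Lovelace visited Paris."))
# [('Ada Lovelace', 'PER'), ('Paris', 'LOC')]
```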

probability theory

Mixture density networks for hard to evaluate conditional probabilities in cosmology

1 minute read

Published:

One of the challenges of cosmological parameter estimation is marginalizing out nuisance parameters, such as the parameters that model the connection between galaxies and dark matter. For such marginalizations we are required to evaluate the conditional probability p(galaxy properties | dark matter halo properties). The list of galaxy properties consists of stellar mass, star formation rate, etc., and the list of dark matter halo properties consists of mass, maximum circular velocity, etc.

Information Maximizing Neural Networks

3 minute read

Published:

We would like to learn the probability of some model conditioned on the data, p(model|data). In a typical Bayesian setup one would estimate this with a likelihood function p(data|model) and a prior probability p(model). This requires making assumptions about the functional forms of these two distributions. In cosmological experiments the likelihood function is assumed to be a multivariate Gaussian whose mean and covariance are given by the model and the data covariance matrix, respectively. There are a few issues here:

tensorflow

Organizing a directory workflow for a Semantic Segmentation task

2 minute read

Published:

Semantic segmentation is the task of assigning a label to each pixel in an image, so you can think of it as per-pixel binary or multi-label classification. That means the targets have the same size as their corresponding images. How do we organize the directory for such a task, and how can we make use of the ImageDataGenerator? Let’s start with a simple example.

web scraping

An experiment with named entity recognition

1 minute read

Published:

Named entity recognition (NER) is the practice of recognizing names in a given body of text and identifying the entity category associated with them: persons, locations, etc.