Gunjan Aggarwal


Innovation in creative AI: Image Tango

About Me

I am a Computer Science enthusiast and a Member of Technical Staff-2 at Adobe, where I work on Adobe's Chat Application and dive into Natural Language Processing for chatbot automation. I am passionate about art and thrilled about generating new art using deep learning. I have worked on various projects and internships that involved extracting meaning from text and images. My interests span a broad range of sub-fields in Computer Science, including Deep Learning, Machine Learning, Computer Vision, and Natural Language Processing. In my free time, I like to participate in adventure sports and appreciate the unexplored beauty of nature through trekking.

Publications

cFineGAN: Unsupervised multi-conditional fine-grained image generation

Paper link

  • Developed a multi-conditional image generation pipeline in an unsupervised manner using a hierarchical GAN framework
  • Given a texture and a shape image, the pipeline generates an output that preserves the shape of the first and the texture of the second input image
  • Used standard and shape-biased ImageNet pre-trained ResNet-50 models to identify the shape and texture characteristics of the inputs, respectively (a descriptor-extraction sketch follows this list)
  • The paper was accepted as a workshop paper at NeurIPS 2019
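
As a rough illustration of the descriptor-extraction step above, the sketch below pulls pooled features from two ResNet-50 backbones in PyTorch. The shape-biased checkpoint path is a hypothetical placeholder (e.g. a Stylized-ImageNet-trained model); this is not the paper's actual pipeline.

```python
# Sketch: extract shape/texture descriptors with two ResNet-50 backbones.
# Assumes torchvision; the shape-biased checkpoint path is hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_encoder(state_dict=None):
    """ResNet-50 truncated before the classifier, used as a feature extractor."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if state_dict is not None:
        net.load_state_dict(state_dict)
    net.fc = torch.nn.Identity()   # keep the 2048-d pooled features
    return net.eval()

texture_encoder = build_encoder()  # standard ImageNet weights
# Hypothetical shape-biased weights (e.g. trained on Stylized-ImageNet):
# shape_encoder = build_encoder(torch.load("resnet50_shape_biased.pth"))
shape_encoder = build_encoder()    # placeholder so the sketch runs end to end

@torch.no_grad()
def describe(img_path):
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    return shape_encoder(x), texture_encoder(x)   # shape and texture descriptors
```
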
On the Benefits of Models with Perceptually-Aligned Gradients

Paper link

  • Explored the benefits of adversarial training for neural networks
  • Adversarial training with a small epsilon improves the model's performance on downstream tasks (a minimal training sketch follows this list)
  • Showed improvement in performance on domain adaptation tasks, such as SVHN -> MNIST transfer, and on the weakly supervised object localization task
  • The paper was accepted as a workshop paper at ICLR 2020
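
A minimal sketch of adversarial training with a small epsilon, using a one-step FGSM perturbation in PyTorch; the model, data loader, and epsilon value are placeholders, not the exact setup from the paper.

```python
# Sketch: adversarial training with a small epsilon (one-step FGSM inner step).
# Assumes inputs are scaled to [0, 1]; model/loader/epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step L-infinity adversarial example around x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def train_adversarially(model, loader, optimizer, epsilon=2/255, epochs=1):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_perturb(model, x, y, epsilon)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)  # train on perturbed inputs
            loss.backward()
            optimizer.step()
    return model
```
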
Experience

Adobe, Noida, India

Member of Technical Staff-2

  • Implemented an NLU engine with multilingual capability for intent classification; trained an XGBoost classifier over embeddings obtained from a multilingual universal sentence encoder model (a minimal sketch follows this list)
  • The classifier obtained accuracies of 74% and 61% over the English and Japanese datasets, respectively, and is running live in production
  • Deployed the model on Amazon SageMaker and integrated Redis cache and PostgreSQL database support into the back-end chat infrastructure
  • Worked on an image generation framework that was selected among the top 11 ideas out of a few hundred Adobe-wide submissions to be presented at the Adobe MAX conference
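
A minimal sketch of the intent-classification setup described above, assuming TensorFlow Hub's multilingual universal sentence encoder and the xgboost package; the example utterances and intent labels are placeholders, not production data.

```python
# Sketch: XGBoost intent classifier over multilingual sentence embeddings.
# Requires tensorflow_hub, tensorflow_text, and xgboost; data is a placeholder.
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops needed by the multilingual encoder)
from xgboost import XGBClassifier

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

texts = ["reset my password", "talk to an agent", "パスワードを再設定したい"]
labels = [0, 1, 0]  # 0 = password_reset, 1 = escalate (placeholder intents)

X = encoder(texts).numpy()            # 512-d, language-agnostic embeddings
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X, labels)

print(clf.predict(encoder(["I forgot my password"]).numpy()))
```
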
VMware, Bengaluru, India

R&D Intern

  • Implemented functionality to perform code coverage on modified code, before it is committed to Perforce, inside the vCHS platform
Adobe, Noida, India

Research Intern

  • Improved the K-SVD algorithm used for compressing the raster images inside PDFs
  • Implemented quantization of the K-SVD algorithm as a learning process and incorporated patch-based similarity matching to reduce the number of patches to be sparsely coded (a dictionary-learning sketch follows this list)
  • The changes achieved a 21% better image compression ratio with a 0.088 lower RMSE than JPEG compression over the RAISE dataset
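
A rough sketch of patch-based sparse coding, using scikit-learn's dictionary learning as a stand-in for K-SVD; the quantization and patch-similarity steps from the internship are not reproduced, and the input image is a random placeholder.

```python
# Sketch: patch-based sparse coding of a raster image; dictionary learning
# stands in for K-SVD. The 64x64 random image is a placeholder.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # placeholder grayscale raster

patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)    # remove the DC component per patch

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)      # sparse codes (<= 5 atoms per patch)
reconstruction = codes @ dico.components_         # approximate patches from the codes
print("mean abs error:", np.abs(reconstruction - patches).mean())
```
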
Yrals Digital, Mumbai, India

NLP Intern

  • Worked on detecting quotes and their speakers in an article by analyzing the positions of quotation identifiers, verbs, and name words (a simplified sketch follows this list)
  • Integration of the feature into production reduced the time for textual summarization of news articles by 30%
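
A simplified sketch of quote and speaker detection using a regex over quotation marks and a short list of reporting verbs; the production heuristics around verb and name positions were richer than this.

```python
# Sketch: naive quote/speaker detection with quotation marks and reporting verbs.
import re

REPORTING_VERBS = r"(?:said|says|told|added|stated|noted)"
NAME = r"[A-Z][a-z]+(?: [A-Z][a-z]+)*"
# quote followed by "<Speaker> said" or "said <Speaker>"
PATTERN = re.compile(
    r'"([^"]+)"\s*,?\s*'
    r'(?:(' + NAME + r')\s+' + REPORTING_VERBS +
    r'|' + REPORTING_VERBS + r'\s+(' + NAME + r'))'
)

def extract_quotes(article: str):
    results = []
    for quote, before, after in PATTERN.findall(article):
        results.append({"quote": quote, "speaker": (before or after).strip()})
    return results

text = '"We expect growth to continue," said Jane Doe. "Markets are volatile," John Roe added.'
print(extract_quotes(text))
```
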
Education

Birla Institute of Technology and Science Pilani, India

August 2014 - July 2018

Bachelor of Engineering (Hons.), Computer Science | CGPA: 8.35/10.0

Mount Carmel School, India

Projects

Sentiment Analysis using Deep Learning

Advisor: Prof. Yashvardhan Sharma | Jan 2017 - May 2017

  • Applied deep learning to perform sentiment analysis over different Indian languages
  • Experimented with different optimizers, such as Adam and the momentum optimizer, and with different network architectures, such as CNN and RNN (a minimal Keras sketch follows this list)
  • Achieved 85% and 83% mean validation accuracy with CNN and RNN respectively over different languages
  • Extended the project to contrast the impacts of product-centric and social-cause marketing advertisements on users by analyzing their comments
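
A minimal Keras sketch of the CNN and RNN sentiment classifiers trained with the Adam optimizer; the vocabulary size, sequence length, number of classes, and data are placeholders.

```python
# Sketch: CNN and RNN text classifiers in Keras; sizes are placeholders.
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, CLASSES = 20000, 100, 2

def cnn_classifier():
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN,)),
        layers.Embedding(VOCAB, 128),
        layers.Conv1D(128, 5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(CLASSES, activation="softmax"),
    ])

def rnn_classifier():
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN,)),
        layers.Embedding(VOCAB, 128),
        layers.LSTM(128),
        layers.Dense(CLASSES, activation="softmax"),
    ])

for build in (cnn_classifier, rnn_classifier):
    model = build()
    # Adam vs. momentum was one of the optimizer experiments described above.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
```
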
Textual Search Engine

Course Project: Information Retrieval | Jan 2017 - Feb 2017

  • Implemented sentence tokenization, normalization, inverted-index construction, and wildcard-query processing for document retrieval on the Reuters corpus (a minimal sketch follows below)
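
A tiny sketch of an inverted index with normalization and a trailing-wildcard query; the course project handled more general wildcards, which are not reproduced here.

```python
# Sketch: inverted index with normalization and a trailing '*' wildcard query.
import re
from collections import defaultdict

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())   # tokenize + normalize

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def wildcard_query(index, pattern):
    """Supports a single trailing '*', e.g. 'oil*'."""
    prefix = pattern.rstrip("*")
    hits = set()
    for term, postings in index.items():
        if term.startswith(prefix):
            hits |= postings
    return hits

docs = {1: "Oil prices rise", 2: "Oily residue found", 3: "Gold steady"}
index = build_index(docs)
print(sorted(wildcard_query(index, "oil*")))   # -> [1, 2]
```
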
Machine Translation

Course Project: Information Retrieval | March 2017 - April 2017

  • Used IBM models and the Expectation-Maximization algorithm for statistical machine translation (a toy IBM Model 1 sketch follows below)
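
A toy sketch of IBM Model 1 trained with a few EM iterations on a placeholder parallel corpus, to illustrate the expectation-maximization step.

```python
# Sketch: IBM Model 1 word-translation probabilities estimated with EM.
from collections import defaultdict
from itertools import product

corpus = [("das haus", "the house"),
          ("das buch", "the book"),
          ("ein buch", "a book")]          # toy parallel corpus (placeholder)
pairs = [(f.split(), e.split()) for f, e in corpus]

f_vocab = {w for f, _ in pairs for w in f}
e_vocab = {w for _, e in pairs for w in e}
t = {(f, e): 1.0 / len(e_vocab) for f, e in product(f_vocab, e_vocab)}  # uniform init

for _ in range(10):                        # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for f_sent, e_sent in pairs:
        for f in f_sent:                   # E-step: expected alignment counts
            norm = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                c = t[(f, e)] / norm
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():        # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

print(round(t[("haus", "house")], 3))      # approaches 1.0
```
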
Object Recognition

Aug 2016 - Nov 2016

  • Used SIFT as the feature descriptor and K-means clustering for codeword generation to represent images using the Bag of Visual Words model (see the sketch after this list)
  • Trained multi-class SVM over the histogram of features to classify images
  • The method achieved an accuracy of 78% over a subset of 5 classes of the Caltech-101 image dataset
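
A rough sketch of the Bag of Visual Words pipeline with SIFT descriptors, a K-means codebook, and a multi-class SVM; the image paths, labels, and codebook size are placeholders, and real images with enough keypoints are needed for it to run end to end.

```python
# Sketch: SIFT -> K-means codebook -> histogram -> SVM image classifier.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()
K = 100  # codebook size (placeholder)

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return np.empty((0, 128), np.float32)
    _, des = sift.detectAndCompute(img, None)
    return des if des is not None else np.empty((0, 128), np.float32)

def bovw_histogram(des, kmeans):
    hist = np.zeros(K)
    if len(des):
        for word in kmeans.predict(des):
            hist[word] += 1
    return hist / max(hist.sum(), 1)       # L1-normalized codeword histogram

train_paths, train_labels = ["img1.jpg", "img2.jpg"], [0, 1]   # placeholders

all_des = np.vstack([sift_descriptors(p) for p in train_paths])
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_des)

X = np.array([bovw_histogram(sift_descriptors(p), kmeans) for p in train_paths])
clf = SVC(kernel="linear").fit(X, train_labels)   # multi-class SVM over histograms
```
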
Knowledge Extraction

Course Project: Data Mining | Aug 2016 - Nov 2016

  • Analyzed UCI's Student Performance dataset and classified students' grades using several models, such as KNN, decision trees, and SVM (a comparison sketch follows this list)
  • Applied different pre-processing techniques over the Census Income dataset, classified the income using logistic regression, and computed the correlation between different features
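
A small sketch comparing KNN, decision-tree, and SVM classifiers with scikit-learn; a synthetic dataset stands in for UCI's Student Performance data, and the hyperparameters are placeholders.

```python
# Sketch: comparing KNN, decision tree and SVM with cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the Student Performance data (three grade bands).
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

classifiers = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7)),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

for name, model in classifiers.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```
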
Skills