Resume

📄 Download a PDF version of my Resume.

Rishikesh Ajay Ksheersagar

Houston, TX | +1 734 489 2596 | rishiksh@umich.edu | LinkedIn: https://www.linkedin.com/in/rishikeshksheersagar/ | Website: https://rishiksh20.github.io/


Profile

Skills: Machine Learning (ML), Large Language Models (LLMs), Agentic AI, LangChain, AutoGen, Retrieval-Augmented Generation (RAG), Deep Learning, Natural Language Processing (NLP), Anomaly Detection, Regression Analysis, Statistical Inference, Reinforcement Learning (RL), Information Retrieval, Bayesian Inference, Agent-Based Models, CI/CD

Languages: Python (Pandas, Dask, NumPy, scikit-learn, TensorFlow, PyTorch, Keras, NLTK, spaCy, Streamlit), SQL, R, PySpark, SAS, C++

Tools / Platforms: Azure, Snowflake, Hadoop, GCP, AWS, Jenkins, Tableau, Power BI


Education

University of Michigan — Ann Arbor

Master's in Data Science August 2023 – May 2025 | GPA: 4.0/4.0

Relevant Coursework:

Savitribai Phule Pune University

Bachelor of Engineering – Computer Engineering June 2015 – June 2019 | GPA: 3.7/4.0


Professional Experience

LatentView Analytics

Data Scientist Houston, TX, USA | November 2025 – Present

Ecological Servants Project

Data Analysis and Research Intern Ann Arbor, MI, USA | August 2025 – November 2025

University of Michigan

Research Assistant Ann Arbor, MI, USA | May 2024 – May 2025

Graduate Student Instructor

Mu Sigma Inc.

Apprentice Leader

Bangalore, KA, India | July 2019 – June 2023

Decision Scientist

BMC Software

Project Intern Pune, India | August 2018 – April 2019


Academic Projects

PapeRet (Sept – Dec 2024) Designed a research paper retrieval system processing 98,000+ academic papers using recursive metadata extraction, web scraping, PDF download, and text extraction. Leveraged LLaMA for Retrieval-Augmented Generation (RAG) to generate summaries. Achieved MAP@10 of 0.539 and NDCG@10 of 0.81.

Register-Augmented LLM Fine-Tuning (Oct – Dec 2024) Developed a register-augmented fine-tuning approach for LLMs, enhancing global context management and interpretability. Implemented RegBERT for QA tasks, improving F1 and Exact Match on the TyDiQA GoldP dataset, with attention analysis using LRP and Integrated Gradients.

Few-Shot Preference-Based RLHF (Jan – May 2024) Implemented few-shot preference-based reinforcement learning algorithms, including MAML, iterated MAML, and Reptile, to optimize human-feedback efficiency on MetaWorld datasets, achieving a ~90% reduction in training time.

Is it Easy to be Multilingual? (Nov – Dec 2023) Studied transfer mechanisms in mBERT, highlighting syntactic, morphological, and phonological similarities as predictors of cross-lingual transfer, and proposed a framework achieving 62.5% accuracy in selecting the optimal source language.


Honors and Awards