Hello! My name is Armin.

I am a UC Davis alumnus currently pursuing my Master’s degree in Europe with a focus on ML and AI. My experience spans Python machine learning projects, agentic AI with LangChain, CI/CD pipelines, and full-stack development.

I am passionate about blending creativity and logic in software development to create solutions that improve people’s lives. My drive to learn in diverse and global environments has shaped my growth as both a developer and a collaborator.

I am eager to continue taking on new challenges, contributing my skills, and collaborating with talented teams to make a positive impact.

Batch Image Editor

Cross-platform desktop app that automates image processing tasks.

  • Reduced editing time by 85% with automated batch processing
  • Supports 10+ formats across Windows, macOS, and Linux
Electron • Python • JavaScript
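The core batch-conversion loop can be sketched in Python with Pillow (the app itself pairs an Electron UI with a Python backend; the function name and format list here are illustrative, not the app's actual API):

```python
from pathlib import Path

from PIL import Image  # Pillow handles the many supported raster formats


def batch_convert(src_dir: str, dst_dir: str, fmt: str = "png") -> list[Path]:
    """Convert every image in src_dir to `fmt`, writing results to dst_dir."""
    out_dir = Path(dst_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".bmp", ".gif", ".tiff", ".webp"}:
            continue  # skip non-image files
        with Image.open(path) as img:
            # JPEG has no alpha channel, so flatten RGBA images first
            if fmt.lower() in {"jpg", "jpeg"} and img.mode == "RGBA":
                img = img.convert("RGB")
            target = out_dir / f"{path.stem}.{fmt}"
            img.save(target)
            written.append(target)
    return written
```

Running one loop over a whole folder instead of opening each file by hand is where the editing-time savings come from.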

Attendance Tracker

Web portal for ESL service providers to track client attendance and analytics.

  • Built full-stack app with authentication and client management
  • Dashboard tracks attendance percentages and monthly hours
TypeScript • Vercel • Supabase
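The portal itself is TypeScript, but the dashboard arithmetic is easy to sketch in Python (the session record shape here is an assumption for illustration, not the portal's actual schema):

```python
from collections import defaultdict
from datetime import date


def dashboard_stats(sessions: list[dict]) -> dict:
    """Aggregate attendance percentage and per-month attended hours.

    Each session: {"date": date, "scheduled_hours": float, "attended": bool}.
    """
    total = len(sessions)
    attended = sum(1 for s in sessions if s["attended"])
    monthly_hours: dict[str, float] = defaultdict(float)
    for s in sessions:
        if s["attended"]:
            monthly_hours[s["date"].strftime("%Y-%m")] += s["scheduled_hours"]
    pct = round(100 * attended / total, 1) if total else 0.0
    return {"attendance_pct": pct, "monthly_hours": dict(monthly_hours)}
```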
Adat: Habit Tracker App

Mobile app for building positive habits and breaking negative ones.

  • Dual tracking system with progress metrics and milestones
  • Consistency counters and time-since-relapse tracking
React Native • Expo • JavaScript

MindtheAge

Mental health app that provides personalized resources using ML classification.

  • Built logistic regression model to classify mental health text inputs
  • Designed UI with authentication and personalized recommendations
Python • Scikit-learn • Streamlit
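A logistic regression text classifier of this kind is a few lines in scikit-learn. The toy corpus and labels below are invented for illustration; the real model was trained on a labeled mental-health dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set, not the app's actual data
texts = [
    "I feel anxious before every exam",
    "worried and nervous all the time",
    "I feel down and have no energy",
    "everything feels hopeless lately",
]
labels = ["anxiety", "anxiety", "depression", "depression"]

# TF-IDF features feeding a logistic regression classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
prediction = clf.predict(["I am so nervous about tomorrow"])[0]
```

The predicted label is what drives the personalized resource recommendations.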

NBA Points Predictor

Web app predicting player points per game using regression models.

  • Analyzed historical performance data with regression modeling
  • Provides insights for training and player evaluation
Python • Flask • Scikit-learn
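The regression step can be sketched with scikit-learn. The feature columns and numbers below are illustrative stand-ins, not the project's actual dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative features: minutes per game, field-goal attempts, usage rate
X = np.array([
    [36.0, 22.1, 31.0],
    [30.5, 15.3, 24.0],
    [18.2, 7.8, 15.5],
    [33.1, 19.4, 28.2],
    [12.0, 4.1, 10.0],
])
y = np.array([30.1, 18.4, 8.2, 25.7, 4.3])  # points per game

model = LinearRegression().fit(X, y)
predicted_ppg = model.predict(np.array([[28.0, 14.0, 22.0]]))[0]
```

In the web app, the Flask backend serves predictions like `predicted_ppg` to the frontend.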

Multilingual QA Natural Language Processing Project

Built and evaluated multilingual QA models for Arabic, Korean, and Telugu using state-of-the-art transformers.

  • Fine-tuned mBERT and mDEBERTa for span-based QA, improving F1 by 9.5% over baselines
  • Implemented rule-based, BiLSTM, and transformer models across 5 languages
  • Reduced perplexity 10x using LSTMs over n-gram models with subword tokenization
PyTorch • Hugging Face • Multilingual BERT • mDeBERTa
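The perplexity comparison in the last bullet uses the standard definition: the exponential of the mean negative log-likelihood over predicted tokens. A minimal sketch (the probability values below are made up to show the direction of the effect, not measured results):

```python
import numpy as np


def perplexity(token_probs: list[float]) -> float:
    """exp of the mean negative log-likelihood over the predicted tokens."""
    nll = -np.log(np.array(token_probs))
    return float(np.exp(nll.mean()))


# A model assigning higher probability to each next token scores lower perplexity
weak = perplexity([0.02, 0.05, 0.01, 0.03])   # n-gram-like probabilities
strong = perplexity([0.3, 0.4, 0.2, 0.35])    # LSTM-like probabilities
```

Subword tokenization helps here because it eliminates out-of-vocabulary tokens, which would otherwise receive near-zero probability and blow up the average NLL.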

Drug-LLM: Medication Information Model

Fine-tuned LLM for accurate medication information retrieval.

  • Reduced trainable parameters by 93% using PEFT/LoRA
  • Achieved 87% accuracy vs 62% with generic models
PyTorch • Transformers • GPT-2
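The parameter reduction comes from LoRA's rank decomposition: freeze each d×k weight matrix and train only a rank-r update (B: d×r, A: r×k), so the trainable count per layer drops from d·k to r(d+k). A sketch of the arithmetic, using GPT-2-small-like dimensions and an illustrative rank (the overall 93% figure depends on which modules are adapted and the rank actually chosen):

```python
def lora_param_fraction(d: int, k: int, r: int) -> float:
    """Trainable fraction when a d×k weight is frozen and a rank-r update
    (B: d×r, A: r×k) is trained instead: r(d+k) / (d*k)."""
    return r * (d + k) / (d * k)


# GPT-2-small projection matrices are 768×768; rank 8 is a common LoRA choice
frac = lora_param_fraction(768, 768, 8)
reduction_pct = round((1 - frac) * 100, 1)  # per-layer reduction for this example
```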