About Me
I'm a mathematician turned developer with a deep passion for clean, efficient code. My background in mathematics fundamentally shapes how I approach programming—I believe in rigorous problem-solving, testing assumptions, and finding elegant solutions within constraints.
Core Skills
- Languages: C++, Python
- Specializations: Backend systems, data science & ML, systems programming, performance-critical applications
- Philosophy: Hard work, analytical thinking, attention to detail, clever solutions that respect constraints
How I Think
My mathematical background taught me to think differently about problems. Constraints aren't limitations—they're tools for exploration, much like constructing counterexamples in a proof. This mindset naturally translates to coding:
- Breaking down complex problems into fundamental principles
- Optimizing within constraints (performance budgets, memory limits, algorithmic complexity)
- Building robust solutions that handle edge cases systematically
- Focusing on correctness first, optimization second
What I Build
I'm pursuing deep specialization in systems and computational programming. My projects demonstrate how mathematical thinking and careful engineering solve real problems:
- High-performance applications where every cycle matters
- Backend systems designed for efficiency and reliability
- Tools that combine elegant algorithms with practical usability
Portfolio & Learning
This portfolio showcases my work and journey as a recent graduate. You'll find projects that range from algorithmic demonstrations (like the Fibonacci calculator with WebAssembly) to practical tools (like the MTG Proxy Printer) that solve real problems. WebAssembly appears here to demonstrate capability in performance optimization, but my core focus is C++ and Python for backend and systems-level work.
Projects
Explore my projects in the sidebar. Each one highlights a different aspect of my work, from performance-focused demos to practical tools and machine learning experiments.
Click on individual projects to learn more and see them in action.
Fibonacci Calculator with WebAssembly
This project demonstrates the power of WebAssembly by calculating large Fibonacci numbers efficiently.
Features
- WebAssembly module for fast computation
- Real-time Fibonacci sequence generation
- Performance comparison with JavaScript
Interactive Demo
Technical Details
The WebAssembly module is written in C and compiled to WASM. It provides significant performance improvements for compute-intensive operations compared to pure JavaScript.
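The computation itself is straightforward; the demo is really about where it runs. As a rough illustration of the same idea in Python (the actual module is written in C, and the exact algorithm it uses is not shown here):

```python
def fibonacci(n: int) -> int:
    """Iterative Fibonacci. Python ints grow arbitrarily large,
    whereas the C/WASM module has to manage big-number storage itself."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(100))  # 354224848179261915075
```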
MTG Proxy Printer
A full-featured Python application that automates the creation of professional, print-ready Magic: The Gathering card proxy PDFs. It combines a powerful GUI for interactive deck building with a fast CLI for batch automation, allowing users to generate high-quality card proxies from deck files in minutes.
Problem Solved
Magic: The Gathering players often need physical proxy cards for testing deck strategies without investing in expensive cards. This tool eliminates manual work by:
- Automating card image retrieval from Scryfall
- Intelligently arranging cards for optimal printing (6 cards per A4 page)
- Providing true-to-size card dimensions (63.5 x 88.9 mm) at 300 DPI for professional printing (see the sizing sketch after this list)
- Supporting multiple deck list formats and complex card types (transforming cards, split cards, adventure cards, etc.)
- Offering customizable layouts: card separation, top and side margins, and card sizes
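To make the print sizing above concrete, here is a minimal sketch of the pixel math at 300 DPI; the helper and the grid check are mine, only the dimensions come from the list above:

```python
MM_PER_INCH = 25.4
DPI = 300

def mm_to_px(mm: float, dpi: int = DPI) -> int:
    """Convert a physical dimension in millimetres to pixels at the given DPI."""
    return round(mm / MM_PER_INCH * dpi)

# True-to-size MTG card: 63.5 x 88.9 mm -> 750 x 1050 px at 300 DPI
card_w, card_h = mm_to_px(63.5), mm_to_px(88.9)

# A4 page: 210 x 297 mm -> 2480 x 3508 px, which comfortably fits a 2 x 3 grid (6 cards)
page_w, page_h = mm_to_px(210), mm_to_px(297)

print(card_w, card_h, page_w, page_h)  # 750 1050 2480 3508
```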
Key Features
Dual-Mode Interface
- GUI Mode - Interactive deck builder with live card preview, fuzzy search autocomplete, and PDF preview before saving
- CLI Mode - Fast, automatable PDF generation for scripting and batch processing
Intelligent Card Search
- Fuzzy matching powered by rapidfuzz (handles typos, symbol variations, and long card names; see the sketch after this list)
- Real-time autocomplete suggestions as users type
- Handles multi-face cards (transforming DFCs, split cards, adventure cards)
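A minimal sketch of fuzzy card lookup with rapidfuzz; the card pool and scorer choice here are illustrative, not the app's actual configuration:

```python
from rapidfuzz import process, fuzz

# Illustrative card pool; the real app searches the full Scryfall name list.
CARD_NAMES = [
    "Lightning Bolt",
    "Delver of Secrets // Insectile Aberration",
    "Fire // Ice",
    "Bonecrusher Giant // Stomp",
]

def suggest(query: str, limit: int = 3):
    """Return the closest card names for a possibly misspelled query."""
    return process.extract(query, CARD_NAMES, scorer=fuzz.WRatio, limit=limit)

print(suggest("lighning bolt"))  # top match: ("Lightning Bolt", <score>, 0)
```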
Smart Image Management
- Asynchronous background downloads while building decks
- Persistent disk cache (~/.cache/mtgproxy) for instant repeat runs
- Respectful Scryfall API usage with custom User-Agent, keep-alive sessions, and rate limiting
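A minimal sketch of what respectful Scryfall usage can look like with requests; the User-Agent string and delay are placeholders, not the app's exact values:

```python
import time
import requests

SCRYFALL_API = "https://api.scryfall.com"

# Scryfall asks clients to identify themselves and keep request rates modest.
session = requests.Session()  # keep-alive: reuses the connection across calls
session.headers.update({"User-Agent": "mtg-proxy-printer-example/0.1"})

def get_card_by_name(name: str, delay: float = 0.1) -> dict:
    """Fetch one card by (fuzzy) name, pausing briefly between calls."""
    time.sleep(delay)  # crude rate limiting between requests
    resp = session.get(f"{SCRYFALL_API}/cards/named", params={"fuzzy": name}, timeout=10)
    resp.raise_for_status()
    return resp.json()

card = get_card_by_name("Lightning Bolt")
print(card["name"])
```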
Technical Stack
Language & Framework:
- Python 3.9+
- PyQt6 / PyQt6-WebEngine (cross-platform GUI)
Core Libraries:
- requests - API communication with Scryfall
- Pillow - Image processing and manipulation
- PyYAML - Deck file parsing
- pydantic - Configuration management and validation
- rapidfuzz - Fuzzy string matching for card search
- tqdm - Progress indicators
- pyinstaller - Standalone executable creation
Architecture & Design
The application follows clean, modular design patterns with clear separation of concerns:
- CLI and GUI Separation - Shared core functionality with dedicated interface layers
- Configuration Management - Centralized config with sensible defaults and environment-based paths (see the sketch after this list)
- Async Operations - Background card downloading in GUI mode for responsive UX
- Caching Strategy - Disk-based image cache to minimize API calls and improve performance
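As an illustration of the centralized-config idea, a pydantic model might look like the sketch below; the field names and defaults are hypothetical, with only the cache path and card dimensions taken from the sections above:

```python
from pathlib import Path
from pydantic import BaseModel, Field

class ProxyPrinterConfig(BaseModel):
    """Hypothetical central configuration with sensible defaults."""
    cache_dir: Path = Field(default=Path.home() / ".cache" / "mtgproxy")
    dpi: int = 300
    card_width_mm: float = 63.5
    card_height_mm: float = 88.9
    cards_per_page: int = 6

    def ensure_cache_dir(self) -> Path:
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        return self.cache_dir

config = ProxyPrinterConfig()            # defaults
custom = ProxyPrinterConfig(dpi=600)     # validated override
print(config.cache_dir, custom.dpi)
```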
Results & Impact
- Efficiency - Generates complete proxy decks in seconds vs. hours of manual work
- Flexibility - Supports both quick CLI automation and interactive GUI building
- Reliability - Robust error handling, caching, and logging
- Accessibility - Standalone executables eliminate the need for Python knowledge to run the tool
- Maintainability - Clean, modular code with clear separation of concerns
Future Enhancements
- Direct printing from the GUI (bypassing PDF export)
- Batch deck processing
- Export to additional formats (image collections, online services)
- Importing/Exporting raw deck data using the GUI
- Adding GitHub Actions to automatically build the executable for Windows and Linux
Machine Learning Projects
Three ML projects exploring different scales and approaches.
Neural Network from Scratch
Built a neural network in C++ using Eigen, implemented backpropagation by hand, and trained it on MNIST. Got decent results.
Value of doing this: implementing backprop forces you to understand gradient flow. Deriving the chain rule for each layer and implementing it in raw C++ teaches things tutorials don't. You end up inventing notation that keeps the computation clean, and you essentially rediscover tensors, because that's how the math naturally expresses itself.
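The project itself is C++ with Eigen; as a readability-first sketch of the per-layer chain rule described above, here is the backward pass of a single dense layer in NumPy (the conventions and names are mine):

```python
import numpy as np

def dense_backward(x, W, dL_dy):
    """Backward pass of y = x @ W for one dense layer.

    x:      (batch, in)   layer input saved from the forward pass
    W:      (in, out)     weight matrix
    dL_dy:  (batch, out)  gradient of the loss w.r.t. the layer output

    Chain rule: dL/dW = x^T @ dL/dy, dL/dx = dL/dy @ W^T.
    """
    dL_dW = x.T @ dL_dy   # gradient for the weight update
    dL_dx = dL_dy @ W.T   # gradient passed back to the previous layer
    return dL_dW, dL_dx

x = np.random.randn(32, 784)       # e.g. a batch of flattened MNIST images
W = np.random.randn(784, 128) * 0.01
dL_dy = np.random.randn(32, 128)   # upstream gradient from the next layer
dW, dx = dense_backward(x, W, dL_dy)
print(dW.shape, dx.shape)  # (784, 128) (32, 128)
```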
LLM Architecture with Memory Mechanism
The idea: Transformers have fixed context windows. Add internal memory vectors the model can write to as persistent scratch space, separate from token output.
The implementation: Modified attention mechanism with trainable memory vectors. Two-phase training approach: optimize input parameters to find useful memory patterns, then train weights to produce both correct outputs and strategically useful future memory states.
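A rough PyTorch sketch of the core idea as described above; the single-head formulation, the names, and the way memory is written back are my simplification, not the actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    """Single-head attention where trainable memory vectors act as extra
    key/value slots, and a new memory state is written after each pass."""

    def __init__(self, d_model: int, n_memory: int):
        super().__init__()
        self.init_memory = nn.Parameter(torch.randn(n_memory, d_model) * 0.02)
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.write = nn.Linear(d_model, d_model)  # produces the memory update

    def forward(self, x, memory=None):
        # x: (batch, seq, d_model); memory: (batch, n_memory, d_model) or None
        if memory is None:
            memory = self.init_memory.unsqueeze(0).expand(x.size(0), -1, -1)
        kv = torch.cat([memory, x], dim=1)  # tokens can attend to the memory slots
        out = F.scaled_dot_product_attention(self.q(x), self.k(kv), self.v(kv))
        # Write a new memory state as persistent scratch space, separate from token output.
        new_memory = memory + self.write(out.mean(dim=1, keepdim=True))
        return out, new_memory

layer = MemoryAttention(d_model=64, n_memory=8)
out, mem = layer(torch.randn(2, 16, 64))
out2, mem2 = layer(torch.randn(2, 16, 64), memory=mem)  # carry memory forward
```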
What I built: Full training pipeline in PyTorch with custom optimization loops. Analyzed Llama's attention patterns (found: heavily relies on boundary tokens). Experimented with knowledge distillation, pruning, and quantization approaches.
The architecture was sound and the training pipeline worked. Scaling it meaningfully would have required more data, GPU memory, and time than I had available.
Discord Chatbot
The approach: fine-tune Phi-2 on real data, specifically my friend group's Discord chat history.
What it does:
- Fine-tuned model to mimic specific Discord members
- Bot responds when tagged, with 5% random message injection for an organic feel (see the sketch after this list)
- Captured writing patterns and group dynamics well
- Intercepted model-generated dead links and replaced them with calls to a random cat image API
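A minimal discord.py sketch of the tag-or-5%-chance response logic; generate_reply stands in for the fine-tuned Phi-2 inference, which is not shown:

```python
import random
import discord

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

def generate_reply(prompt: str) -> str:
    """Placeholder for the fine-tuned Phi-2 inference call."""
    return "..."

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return  # never reply to ourselves
    tagged = client.user in message.mentions
    # Respond when tagged, plus a 5% chance of chiming in unprompted.
    if tagged or random.random() < 0.05:
        await message.channel.send(generate_reply(message.content))

client.run("DISCORD_BOT_TOKEN")  # placeholder token
```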
Result: Deployed and running.
Technical Stack
- Languages: C++, Python
- Libraries: Eigen, PyTorch, Hugging Face Transformers, Discord.py
- Models: Phi-2
- Infrastructure: CUDA, quantization, inference optimization
Get In Touch
I'm always interested in hearing about new projects and opportunities.