Georgia Tech · Electrical Engineering

Hi, I'm Hanxiang Hao

ECE Student @ Georgia Tech · Class of 2029

I'm a first-year EE student at Georgia Tech who gets unreasonably excited about how things work, from the circuits hiding inside everyday devices to the algorithms that make machines think. When I'm not debugging code at 2am, you'll find me exploring Atlanta, attempting to cook, or convincing myself that one more side project is a great idea.

A bit about who I am

Hanxiang Hao

"Stay curious.
Stay uncomfortable.
Keep building."

Skills

SystemVerilog · UVM · C/C++ · Python · CUDA · MPI · RISC-V · Cadence Xcelium · Verilator · Git · MATLAB · Java

Hey, I'm Hanxiang (Dennis works too). I grew up in Beijing, which means I was raised on a diet of dense subway maps, street food at midnight, and the constant hum of a city that never really slows down. Somewhere along the way, I got obsessed with how things work under the hood: not just the what, but the why and the how-on-earth-does-this-not-break.

That curiosity landed me at Georgia Tech studying Electrical Engineering, which is either a great idea or a fantastic way to question all my life choices at 2am before a deadline. Probably both. I'm currently doing research in hardware verification and high-performance computing, and I genuinely enjoy the kind of problems that make your brain hurt a little.

I don't have my entire future figured out, and honestly, I think that's fine. Right now I'm focused on doing interesting work, learning as much as I can, and finding a summer internship where I can contribute something real. The rest? I'll figure it out by exploring, breaking things, and staying curious.

Career Goals & Life Philosophy

I like challenges that don't come with a clean answer key. I like asking "why does this work?" more than just accepting that it does. And I think the best way to figure out what you want to do with your life is to just try stuff, stay open, and pay attention to what makes you lose track of time.

For now: land a great internship, keep building cool things, and see where the curiosity leads. 🚀

What I study & explore

ASIC Design Verification

Passionate about UVM testbench architecture, SystemVerilog assertions, and RTL debugging. Currently working on RISC-V pipeline verification and formal verification projects.

🤖

Machine Learning Systems

Exploring ML infrastructure and inference optimization, including sparse attention mechanisms and KV cache efficiency in large language models.

📡

Signal & Information Processing

Fascinated by how SIP connects math, statistics, and computing to power speech recognition, medical imaging, and autonomous systems. One of my threads at GT, and a natural bridge to AI and ML.

💻

High Performance Computing

Experienced with MPI and CUDA parallel programming on Georgia Tech's PACE supercomputing cluster, focusing on scalability and performance benchmarking.
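The scalability side of that benchmarking work has a classic closed form: Amdahl's law bounds the speedup of any program whose serial fraction can't be parallelized. A minimal plain-Python sketch (the 5% serial fraction is an illustrative number, not a measurement from PACE):

```python
# Amdahl's law: a program with serial fraction s run on p workers
# can speed up by at most 1 / (s + (1 - s) / p).

def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Ideal speedup for a given serial fraction and worker count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

def parallel_efficiency(serial_fraction: float, workers: int) -> float:
    """Speedup divided by worker count: how close to linear scaling."""
    return amdahl_speedup(serial_fraction, workers) / workers

# Even a 5% serial fraction caps speedup at 20x, no matter how many
# workers you add:
print(round(amdahl_speedup(0.05, 8), 2))    # 5.93
print(round(amdahl_speedup(0.05, 512), 2))  # 19.28
```

In practice the measured curves sit below this ideal because of communication overhead, which is exactly what scaling benchmarks on a cluster are there to quantify.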

Things I love outside of class

When I'm not in the lab or staring at waveforms, you'll find me bouldering at the gym, shooting hoops, cycling around Atlanta, or watching a movie way too late at night. I'm a firm believer that the best engineers have lives outside engineering.

🧗 Bouldering
🏀 Basketball
🏋️ Gym
🚴 Cycling
🎵 Music
🎬 Movies
🌙 Night Owl

Fun Facts

🍦

Certified dining hall dessert connoisseur — tasted every ice cream flavor and could name them all on command

🍩

Worked through every Dunkin' donut on the menu. Ask me for a recommendation; I have opinions.

🐉

Collecting Monster Energy cans in every flavor I can find — it's not a problem, it's a hobby

💻

Clough is basically my second home — most of my day happens in front of those screens

My Résumé

Things I've built & researched

Diffusion LLM Inference Python

Fast-dLLM KV Cache Sparsity

Implemented training-free sparse attention with dynamic KV cache eviction for Fast-dLLM (1.5B & 7B models). Applied SparsedLLM to reduce peak memory usage and improve inference throughput. Directly applicable to production ML inference pipelines where memory efficiency is critical.

Georgia Tech EIC Lab · January 2026 – Present

View on GitHub →
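The eviction idea can be sketched in a few lines. This is a generic score-based policy, not SparsedLLM's actual algorithm: it assumes each cached position carries an accumulated attention score, and drops the lowest-scoring entries once the cache exceeds a fixed budget.

```python
# Toy, training-free KV cache eviction: when the cache exceeds its
# budget, evict the positions with the lowest accumulated attention
# scores. (The real scoring in Fast-dLLM / SparsedLLM differs; this
# shows only the general shape of the idea.)

from heapq import nsmallest

def evict(cache: dict[int, float], budget: int) -> dict[int, float]:
    """Keep at most `budget` slots, dropping lowest-scoring positions."""
    if len(cache) <= budget:
        return dict(cache)
    n_drop = len(cache) - budget
    drop = {pos for pos, _ in
            nsmallest(n_drop, cache.items(), key=lambda kv: kv[1])}
    return {pos: s for pos, s in cache.items() if pos not in drop}

# Positions 0..4 with made-up attention mass; a budget of 3 keeps
# the three highest-scoring positions (0, 2, and 4):
kept = evict({0: 0.40, 1: 0.05, 2: 0.30, 3: 0.02, 4: 0.23}, budget=3)
```

The memory win comes from capping the cache at the budget rather than letting it grow with sequence length.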
Machine Learning CNN AI Training

Machine Learning Parallelism

Trained a custom CNN (4 convolutional layers with batch normalization & max pooling, 3 fully connected layers with dropout) as a CIFAR-10 image classifier, then implemented and benchmarked four distributed training paradigms (single-GPU baseline, data parallelism with DDP, pipeline parallelism, and tensor parallelism) to study how each strategy scales for deep learning workloads.

Research Project · Georgia Tech

View on GitHub →
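Of the four paradigms, data parallelism is the easiest to show in miniature. The sketch below is a plain-Python stand-in for the gradient-averaging step DDP performs: a linear model instead of the CNN, hand-computed gradients instead of autograd.

```python
# Data parallelism (the idea behind DDP) boiled down to plain Python:
# each worker computes a gradient on its own shard of the batch, then
# an all-reduce averages those gradients so every replica takes the
# same optimizer step. Real DDP averages CNN gradients over NCCL on
# GPUs; the tiny linear model and MSE loss here are stand-ins.

def local_gradient(weights, shard):
    """Gradient of mean squared error for y_hat = w . x on one shard."""
    grad = [0.0] * len(weights)
    for x, y in shard:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2.0 * err * xi / len(shard)
    return grad

def all_reduce_mean(grads):
    """Elementwise average of per-worker gradients (the all-reduce)."""
    return [sum(col) / len(grads) for col in zip(*grads)]

# Two "workers", each holding half of a tiny batch:
shards = [[([1.0, 0.0], 2.0)], [([0.0, 1.0], 3.0)]]
step = all_reduce_mean([local_gradient([0.0, 0.0], s) for s in shards])
# step == [-2.0, -3.0]: every replica now applies the same update.
```

Pipeline and tensor parallelism split the model itself instead of the batch, which is why they trade this simple averaging step for inter-stage communication.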

Snapshots of my world

Bouldering
Monster collection
Raspberry Pi
Desserts
Bouldering wall

Let's connect

I'm always open to conversations about internships, research opportunities, cool projects, or just a friendly chat. Feel free to reach out!

Currently seeking Summer 2026 internship in ASIC design verification or ML systems engineering.