Capstone Research Project · 2026

Vision Transformer
on FPGA

Designing and implementing an efficient hardware accelerator for Vision Transformer inference on reconfigurable FPGA fabric.

Overall Progress: 20%
Team Members: 4 active researchers
Tasks Done: 6/30 (8 in progress)
Brainstorm Ideas: 6 (2 pinned)
Resources: 14 (6 papers)

Team Progress

Jerry (Chenjia) · Hardware Architect · 25%
Stephanie (Yixin) · ML Model Engineer · 25%
Tiffany (Yiling) · HLS/RTL Developer · 0%
Winnie · Systems & Integration · 25%

Upcoming Meetings

Mar 10 · Integration Sprint #3 · Biweekly · 4 attendees
Mar 13 · Weekly internal sync-up · Biweekly · 4 attendees
Brainstorm Ideas

▲8 · Use INT4 mixed-precision quantization · Optimization
Apply INT4 for weights and INT8 for activations to roughly halve weight storage relative to an all-INT8 scheme, while keeping accuracy within 2% of the FP32 baseline.
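As a rough numerical sketch of the mixed-precision idea above (symmetric per-tensor scales are assumed; the function names and sample values are illustrative, not from our codebase):

```python
# Minimal symmetric post-training quantization sketch.
# Assumption: per-tensor scales, round-to-nearest; real flows often use
# per-channel scales and calibration data.

def quantize(values, num_bits):
    """Quantize floats to signed num_bits integers with one shared scale."""
    qmax = 2 ** (num_bits - 1) - 1          # 7 for INT4, 127 for INT8
    scale = max(abs(v) for v in values) / qmax
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

weights = [0.51, -0.23, 0.07, -0.64]
activations = [1.2, 0.4, -0.9, 0.05]

w_q, w_scale = quantize(weights, num_bits=4)      # INT4 weights
a_q, a_scale = quantize(activations, num_bits=8)  # INT8 activations

# Integer multiply-accumulate, rescaled once at the end -- the pattern
# an FPGA datapath would use (narrow multipliers, wide accumulator).
acc = sum(w * a for w, a in zip(w_q, a_q))
result = acc * w_scale * a_scale
exact = sum(w * a for w, a in zip(weights, activations))
```

The point of the sketch: all per-element arithmetic stays in narrow integers, and `result` lands close to the FP32 dot product `exact` despite 4-bit weights.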
▲6 · Tile-based attention computation · Architecture
Partition the attention matrix into tiles that fit in on-chip BRAM, avoiding expensive DRAM accesses during the softmax computation.
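One way to make tiling compatible with softmax is the online-softmax trick, which never materializes a full score row. A Python sketch of one query row (the real kernel would be HLS C++; `tile` and the streaming bookkeeping are assumptions, not our finalized design):

```python
import math

def attention_row(q, K, V, tile=2):
    """One query row of softmax(q @ K^T) @ V, streaming K/V in
    BRAM-sized tiles with a running max and denominator."""
    m = float("-inf")        # running max of scores seen so far
    denom = 0.0              # running softmax denominator
    out = [0.0] * len(V[0])  # running weighted sum of value rows
    for t in range(0, len(K), tile):              # one burst per tile
        for k_row, v_row in zip(K[t:t + tile], V[t:t + tile]):
            s = sum(qi * ki for qi, ki in zip(q, k_row))
            m_new = max(m, s)
            corr = math.exp(m - m_new)            # 0.0 on the first score
            w = math.exp(s - m_new)
            denom = denom * corr + w
            out = [o * corr + w * v for o, v in zip(out, v_row)]
            m = m_new
    return [o / denom for o in out]

K = [[1, 0], [0, 1], [1, 1], [0, 0]]
V = [[1, 0], [0, 1], [1, 1], [2, 0]]
row = attention_row([1.0, 0.0], K, V, tile=2)
```

Because the running max and denominator are corrected as each tile arrives, the result matches full-row softmax attention exactly, so tile size becomes purely a BRAM-capacity choice.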
▲5 · Explore Swin Transformer for local attention · Research Direction
Swin's window-based attention has O(n) complexity versus O(n²) for standard ViT global attention, which could significantly reduce hardware resource requirements.
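To make the complexity claim concrete, a back-of-envelope count of attention-score computations (the token and window sizes below are typical published values, assumed here rather than measured on our target):

```python
# Assumed sizes: 196 patch tokens (14x14 grid) and 49-token (7x7)
# Swin windows; neither number is from our hardware evaluation.

def full_attention_scores(n_tokens):
    """Score-matrix entries for global self-attention: O(n^2)."""
    return n_tokens * n_tokens

def window_attention_scores(n_tokens, window):
    """Each token attends only within its window: O(n * window)."""
    return n_tokens * window

n = 14 * 14   # patch tokens
w = 7 * 7     # tokens per window
ratio = full_attention_scores(n) / window_attention_scores(n, w)
```

With these sizes the windowed scheme needs n / w = 4x fewer score computations, and the gap widens as input resolution (and hence n) grows while the window stays fixed.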

In Progress Tasks
Write HLS kernel for multi-head attention
Integrate HLS IP cores into top-level design
Implement INT8 post-training quantization