Capstone 2026 · 4th Year
ViT-FPGA
Capstone Research Project · 2026

Vision Transformer
on FPGA

Designing and implementing an efficient hardware accelerator for Vision Transformer inference on reconfigurable FPGA fabric.

Overall Progress
0%
4
Team Members
4 active researchers
0/12
Tasks Done
6 in progress
7
Brainstorm Ideas
3 pinned
18
Resources
7 papers

Team Progress

JC
Jerry (Chenjia)
Hardware Architect
0%
SY
Stephanie (Yixin)
ML Model Engineer
0%
TY
Tiffany (Yiling)
HLS/RTL Developer
0%
WN
Winnie (Weini)
Systems & Integration
0%

Upcoming Meetings

No upcoming meetings
▲ 9 votes
Use INT4 mixed-precision quantization
Apply INT4 to weights and INT8 to activations to cut weight storage 8x versus FP32 (2x versus a uniform INT8 design) while keeping accuracy within 2% of the FP32 baseline.
Optimization
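A minimal sketch of the symmetric per-tensor quantization this idea assumes: weights map to the signed INT4 range [-8, 7], activations would use [-128, 127] with the same helper. The function names and the per-tensor (rather than per-channel) scaling are illustrative assumptions, not the team's finalized scheme.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Clamp-and-round quantization of one value given a scale and a signed range.
int8_t quantize(float x, float scale, int qmin, int qmax) {
    int q = static_cast<int>(std::lround(x / scale));
    return static_cast<int8_t>(std::clamp(q, qmin, qmax));
}

// Symmetric per-tensor INT4 quantization: the scale maps max |w| to 7,
// so every weight lands in [-8, 7] (4 bits, two's complement).
std::vector<int8_t> quantize_int4(const std::vector<float>& w, float& scale) {
    float amax = 0.f;
    for (float v : w) amax = std::max(amax, std::fabs(v));
    scale = amax / 7.f;
    std::vector<int8_t> out;
    for (float v : w) out.push_back(quantize(v, scale, -8, 7));
    return out;
}
```

On hardware, two INT4 weights pack into each byte of BRAM, which is where the 2x saving over an INT8 design comes from.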
▲7
Pipelined HLS design for FFN layers
Use the HLS PIPELINE pragma with an initiation interval (II) of 1 to fully pipeline the feed-forward network layers, maximizing throughput.
Optimization
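A sketch of where the pragma would sit in an FFN layer kernel. The dimensions, ReLU choice, and function name are placeholders; in Vitis HLS the PIPELINE pragma with II=1 tells the scheduler to start a new multiply-accumulate every clock cycle, while off-tool the unknown pragma is simply ignored and the code runs as plain C++.

```cpp
#include <cstddef>

constexpr std::size_t D_IN = 4, D_OUT = 2;  // illustrative layer sizes

// One FFN linear layer with ReLU. The inner dot-product loop is the
// pipelining target: with II=1 it issues one MAC per cycle.
void ffn_layer(const float in[D_IN], const float w[D_OUT][D_IN],
               const float bias[D_OUT], float out[D_OUT]) {
    for (std::size_t o = 0; o < D_OUT; ++o) {
        float acc = bias[o];
        for (std::size_t i = 0; i < D_IN; ++i) {
#pragma HLS PIPELINE II=1
            acc += w[o][i] * in[i];
        }
        out[o] = acc > 0.f ? acc : 0.f;  // ReLU
    }
}
```

Achieving II=1 in practice usually also requires partitioning the weight array across BRAM banks so the loop body gets one read port per cycle.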
▲6
Tile-based attention computation
Partition the attention matrix into tiles that fit in on-chip BRAM to avoid expensive DRAM accesses during the softmax computation.
Architecture
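One way to realize this idea is online-softmax (FlashAttention-style) tiling: keys and values stream in tile by tile, each tile sized to fit in BRAM, and running softmax statistics are updated so the full N x N score matrix is never materialized in DRAM. This single-query-row sketch uses illustrative names and leaves out the 1/sqrt(d) scaling; it is one possible scheme, not the team's committed design.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Attention output for one query row, with K/V processed in tiles.
// m and l are the running max and softmax denominator; out holds the
// running (unnormalized) weighted sum of V rows.
std::vector<float> attend_row(const std::vector<float>& q,              // [d]
                              const std::vector<std::vector<float>>& K, // [n][d]
                              const std::vector<std::vector<float>>& V, // [n][d]
                              std::size_t tile) {
    const std::size_t n = K.size(), d = q.size();
    float m = -INFINITY, l = 0.f;
    std::vector<float> out(d, 0.f);
    for (std::size_t t0 = 0; t0 < n; t0 += tile) {      // one BRAM-sized tile
        std::size_t t1 = std::min(n, t0 + tile);
        for (std::size_t j = t0; j < t1; ++j) {
            float s = 0.f;                               // score q . k_j
            for (std::size_t c = 0; c < d; ++c) s += q[c] * K[j][c];
            float m_new = std::max(m, s);
            float corr = std::exp(m - m_new);            // rescale old stats
            float p = std::exp(s - m_new);
            l = l * corr + p;
            for (std::size_t c = 0; c < d; ++c)
                out[c] = out[c] * corr + p * V[j][c];
            m = m_new;
        }
    }
    for (float& v : out) v /= l;                         // final normalization
    return out;
}
```

The only off-chip traffic per query row is streaming each K/V tile once; all softmax state stays in registers or BRAM.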

Quick Links

In Progress Tasks
Write HLS kernel for multi-head attention
Survey ViT model variants (DeiT, Swin, CvT)
Review papers on ViT-on-FPGA accelerator architectures