- 👋 Hi, I’m @JerryGJX
- Massachusetts Institute of Technology, Cambridge, MA
Pinned
- mit-han-lab/Block-Sparse-Attention: a sparse attention kernel supporting mixed sparse patterns
- mit-han-lab/fouroversix: code for the papers “Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling” and “Adaptive Block-Scaled Data Types”