AI just got portable: Modular 25.4 is now live!
Modular 25.4 is here with official AMD support, expanded model coverage, and the industry’s first truly portable AI runtime.
Run the same code on AMD and NVIDIA with zero changes!
Modular Platform 25.4 is here, bringing true cross-platform AI acceleration. Through our official partnership with AMD, you can now deploy the same container on AMD Instinct™ MI300X and MI325X accelerators and on NVIDIA GPUs with no extra configuration.
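To make the portability concrete, here's a minimal sketch of what that looks like in practice: the same serving container launched on either vendor's hardware, with only the standard Docker device flags changing. The image name and model ID below are illustrative placeholders rather than official 25.4 tags, so check the release notes for the exact invocation.

# On an NVIDIA host (requires the NVIDIA Container Toolkit):
docker run --gpus all -p 8000:8000 \
  modular/max-serve:latest --model-path meta-llama/Llama-3.1-8B-Instruct

# On an AMD Instinct MI300X/MI325X host (expose the ROCm devices instead):
docker run --device /dev/kfd --device /dev/dri -p 8000:8000 \
  modular/max-serve:latest --model-path meta-llama/Llama-3.1-8B-Instruct

Either way, the container itself is identical; the flags above are standard Docker GPU plumbing, not Modular-specific configuration.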
What’s new in 25.4:
• Up to 53% better throughput on prefill-heavy BF16 workloads across Llama 3.1, Gemma 3, Mistral, and other state-of-the-art language models
• Hardware support for AMD Instinct MI300X/MI325X, NVIDIA Blackwell and RTX 20xx–50xx GPUs, and AMD RDNA3/4
• Expanded model coverage including Qwen3, OLMo 2, Gemma 3, and InternVL
• 450k+ lines of open-source Mojo kernel code
We’re also kicking off Modular Hack Weekend on June 27th with a GPU Programming Workshop and a stacked GPU prize pool! Join virtually or in person.
The countdown is on! Modular Hack Weekend launches on June 27th: a three-day coding sprint where you’ll dive into Mojo and MAX, build custom kernels, experiment with MAX Graph architectures, and push PyTorch custom ops to the limit.
We’re joining forces with top-tier partners to power up your hackathon:
• NVIDIA is fueling the GPU prize pool with cutting-edge gear: a 5090 for first place, a 5080 for second, and a 5070 for third.
• Lambda is our compute sponsor, giving you access to their blazing-fast AI Developer Cloud to power your hacking.
• GPU MODE, the go-to community for GPU devs, is bringing the energy, ideas, and vibes.
It all kicks off with our GPU Programming Workshop on June 27th, live at our Los Altos office and streaming online. Reserve your spot now 👇