Oracle and AMD Expand Partnership to Help Customers Achieve Next-Generation AI Scale
Beginning in calendar Q3 2026,
This announcement builds upon the joint work of Oracle and
Demand for large-scale AI capacity is accelerating as next-generation AI models outgrow the limits of current AI clusters. To train and run these workloads, customers need flexible, open compute solutions engineered for extreme scale and efficiency. OCI's planned new AI superclusters will be powered by the AMD Instinct MI450 Series GPUs.
"Our customers are building some of the world's most ambitious AI applications, and that requires robust, scalable, and high-performance infrastructure," said
"
- Breakthrough compute and memory: Helps customers achieve faster results, tackle more complex workloads, and reduce the need for model partitioning by increasing memory bandwidth for AI training models. Each AMD Instinct MI450 Series GPU will provide up to 432 GB of HBM4 and 20 TB/s of memory bandwidth, enabling customers to train and infer models that are 50 percent larger than previous generations entirely in-memory.
- AMD optimized "Helios" rack design: Enables customers to operate at scale while optimizing performance density, cost, and energy efficiency via dense, liquid-cooled, 72-GPU racks. The AMD "Helios" rack design integrates UALoE scale-up connectivity and Ethernet-based Ultra Ethernet Consortium (UEC)-aligned scale-out networking to minimize latency and maximize throughput across pods and racks.
- Powerful head node: Helps customers maximize cluster utilization and streamline large-scale workflows by accelerating job orchestration and data processing on an architecture consisting of next-generation AMD EPYC CPUs, code-named "Venice." In addition, these EPYC CPUs will offer confidential computing capabilities and built-in security features to help safeguard sensitive AI workloads end to end.
- DPU-accelerated converged networking: Powers line-rate data ingestion to improve performance and enhance security posture for large-scale AI and cloud infrastructure. Built on fully programmable AMD Pensando DPU technology, DPU-accelerated converged networking offers the security and performance required for data centers to run the next era of AI training, inferencing, and cloud workloads.
- Scale-out networking for AI: Enables customers to leverage ultra-fast distributed training and optimized collective communication with a future-ready open networking fabric. Each GPU can be equipped with up to three 800 Gbps AMD Pensando "Vulcano" AI-NICs, providing customers with lossless, high-speed, and programmable connectivity that supports advanced RoCE and UEC standards.
- Innovative UALink and UALoE fabric: Helps customers efficiently expand workloads, reduce memory bottlenecks, and orchestrate large multi-trillion-parameter models. The scalable architecture minimizes hops and latency without routing through CPUs, and enables direct, hardware-coherent networking and memory sharing among GPUs within a rack via the UALink protocol transported over a UALoE fabric. UALink is an open, high-speed interconnect standard purpose-built for AI accelerators and supported by a broad industry ecosystem. As a result, customers gain the flexibility, scalability, and reliability needed to run their most demanding AI workloads on open, standards-based infrastructure.
- Open-source AMD ROCm™ software stack: Enables rapid innovation, offers freedom of vendor choice, and simplifies the migration of existing AI and HPC workloads by providing customers with an open, flexible programming environment, including popular frameworks, libraries, compilers, and runtimes.
- Advanced partitioning and virtualization: Enables customers to safely share clusters and allocate GPUs based on workload needs by facilitating the secure and efficient use of resources via fine-grained GPU and pod partitioning, SR-IOV virtualization, and robust multi-tenancy.
To give customers who build, train, and run inference on AI at scale more choice, OCI also announced the general availability of OCI Compute with
Additional Resources
- Learn more about Oracle Cloud Infrastructure
- Learn more about OCI Compute
About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.
About
For more than 55 years
Future Product Disclaimer
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of
Forward-Looking Statements Disclaimer
Statements in this press release relating to Oracle's and
Trademarks
Oracle, Java, MySQL and
View original content to download multimedia:https://www.prnewswire.com/news-releases/oracle-and-amd-expand-partnership-to-help-customers-achieve-next-generation-ai-scale-302582957.html
SOURCE Oracle