Oracle has announced a multi-year partnership with Microsoft to support the rapid growth of Bing conversational search. Microsoft is using Oracle Cloud Infrastructure (OCI) AI infrastructure, together with Microsoft Azure AI infrastructure, to run inference on AI models that are optimized to power Microsoft Bing's conversational searches every day.
Using Oracle Interconnect for Microsoft Azure, Microsoft can rely on managed services such as Azure Kubernetes Service (AKS) to orchestrate OCI Compute and scale it to meet the surging demand for Bing conversational search. The combined capabilities of Oracle Cloud Infrastructure and Microsoft Azure allow Microsoft to serve a growing user base while keeping Bing conversational search running smoothly.
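The announcement does not describe how that scaling is wired up. As a rough sketch of the general pattern, an AKS-managed inference workload can be scaled out programmatically with the official Kubernetes Python client; the deployment name, namespace, and replica count below are hypothetical placeholders, not details from Microsoft or Oracle.

```python
# Minimal sketch (not from the announcement): scaling an inference
# Deployment on a Kubernetes cluster such as AKS. The deployment name,
# namespace, and replica count are hypothetical placeholders.
from kubernetes import client, config


def scale_inference_deployment(replicas: int,
                               name: str = "bing-inference",   # hypothetical name
                               namespace: str = "search") -> None:
    """Patch the replica count of an inference Deployment."""
    config.load_kube_config()        # uses the local kubeconfig for the cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    # Scale out to absorb a spike in conversational-search traffic.
    scale_inference_deployment(replicas=64)
```

In practice a change like this would be driven by an autoscaler or a traffic-based trigger rather than called by hand; the point is only that once additional compute capacity is reachable from the cluster, ordinary Kubernetes scaling primitives apply.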
Bing conversational search requires powerful clusters of computing infrastructure that support the evaluation and analysis of search results performed by Bing's inference model.
“Generative AI is a monumental technological leap and Oracle is enabling Microsoft and thousands of other businesses to build and run new products with our OCI AI capabilities,” said Karan Batta, senior vice president, Oracle Cloud Infrastructure. “By furthering our collaboration with Microsoft, we can help bring new experiences to more people around the world.”
“Microsoft Bing is leveraging the latest advancements in AI to provide a dramatically better search experience for people across the world,” said Divya Kumar, global head of marketing for Search & AI at Microsoft. “Our collaboration with Oracle and use of Oracle Cloud Infrastructure, along with our Microsoft Azure AI infrastructure, will expand access to customers and improve the speed of many of our search results.”
Inference models rely on thousands of compute and storage instances, as well as tens of thousands of GPUs, operating concurrently as a unified supercomputer across a multi-terabit network.
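The announcement does not describe Bing's serving stack, but as a loose illustration of the pattern it alludes to, the sketch below shows many GPU worker processes cooperating over a network using PyTorch's distributed runtime, with results combined by a collective operation. The model, batch, and launch parameters are placeholders, not anything specific to Bing.

```python
# Illustrative sketch only: GPU workers cooperating on inference via
# PyTorch's distributed runtime. The model and data here are placeholders.
import torch
import torch.distributed as dist


def run_worker() -> None:
    # One process per GPU; rank and world size come from the launcher.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")

    # Placeholder model standing in for a conversational-search inference model.
    model = torch.nn.Linear(1024, 1024).to(device).eval()

    with torch.no_grad():
        batch = torch.randn(32, 1024, device=device)   # this worker's shard of requests
        scores = model(batch)

    # Aggregate a summary statistic across all workers over the cluster network.
    total = scores.sum()
    dist.all_reduce(total, op=dist.ReduceOp.SUM)
    if rank == 0:
        print(f"aggregated sum across {dist.get_world_size()} workers: {total.item():.2f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    run_worker()
```

Launched with `torchrun --nproc_per_node=<gpus> script.py` (or an equivalent multi-node launcher), every process joins the same process group; this is the basic mechanism by which large fleets of GPU instances can behave like a single machine.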