AI Business News Roundup
Replit & Google Cloud Partner, Microsoft Launches Security Copilot, Cisco Unveils AI-Powered Innovations for Webex, and Hugging Face Optimizes Language Models on Habana Gaudi2 and Intel CPUs.
Replit and Google Cloud Partner to Advance Generative AI for Software Development
Replit, a cloud software development platform with 20 million users, has announced a new strategic partnership with Google Cloud that will give its developers access to Google’s infrastructure, services, and foundation models for generative AI. The collaboration aims to accelerate the creation of generative AI applications and underscores Google Cloud's commitment to nurturing the most open ecosystem for generative AI. For Replit, the partnership is the next step toward its goal of empowering a billion software creators. Source
Microsoft Introduces Security Copilot, Combining AI and Microsoft Security Technologies
Microsoft has launched Security Copilot, a security product that combines advanced large language model (LLM) technology with security-specific models to let defenders move at the speed and scale of AI. By pairing Microsoft's security technologies with the latest advances in AI, Security Copilot can surface prioritized threats in real time, deliver step-by-step guidance and context through a natural-language investigation experience that accelerates incident investigation and response, and continually learn from user interactions to adapt to enterprise preferences. Microsoft says it is committed to delivering Security Copilot in a safe, secure, and responsible way: users retain control of their data, and the product is protected by enterprise compliance and security controls. Source
Cisco Unveils AI-Powered Innovations for Enhanced Hybrid Work Experiences on Webex
Cisco has announced new AI capabilities for its Webex platform to enhance the hybrid work experience.
The innovations span three categories: reimagining workspaces, optimizing collaboration, and maximizing customer experience. Video intelligence in Cisco collaboration devices will be expanded to provide a cinematic meeting experience, with cameras that follow individuals using voice and facial recognition, and a feature for setting virtual boundaries around any collaboration space in the office. Webex Calling has connected over 10 million users, nearly doubling year over year. Cisco also introduced new AI capabilities for its customer experience solutions that surface the key reasons customers call into the contact center and make agents more effective. These innovations are expected to roll out over 2023. Source
Hugging Face Shows How to Optimize Language Models on Habana Gaudi2 and Intel CPUs
AI startup Hugging Face has released two articles on optimizing model inference performance. The first article discusses running large language model (LLM) inference on Habana Gaudi2, a second-generation AI hardware accelerator designed by Habana Labs. Using Hugging Face's Optimum Habana library, the article presents benchmark results showing that Gaudi2 performs BLOOMZ inference faster than an Nvidia A100 80GB, making it a strong candidate for LLM training and inference. Source
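To make the Gaudi2 workflow concrete, here is a minimal sketch of running text generation on an HPU device with Transformers and the optimum-habana integration. The model checkpoint, prompt, and generation settings are illustrative assumptions, not the exact benchmark setup from the article; consult the optimum-habana documentation for the supported workflow.

```python
# Sketch: BLOOMZ-style text generation on a Habana Gaudi2 HPU.
# Assumptions: optimum-habana and the Habana PyTorch bridge are installed;
# the checkpoint and prompt below are placeholders for illustration.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch
from transformers import AutoTokenizer, AutoModelForCausalLM
from optimum.habana.transformers.modeling_utils import adapt_transformers_to_gaudi

# Patch Transformers with Gaudi-optimized code paths.
adapt_transformers_to_gaudi()

model_name = "bigscience/bloomz-7b1"  # smaller BLOOMZ checkpoint, used here for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
).to("hpu")

inputs = tokenizer("Translate to French: I love open-source AI.", return_tensors="pt").to("hpu")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```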
The second article details how to accelerate Stable Diffusion models on Intel CPUs, including the latest Sapphire Rapids generation. It covers techniques such as using the Diffusers library, installing a high-performance memory allocation library, and leveraging the Intel Extension for PyTorch (IPEX) and AMX instructions. The article also shows how OpenVINO can be used to optimize models in bfloat16 format, yielding a significant reduction in inference latency. By combining these techniques, Hugging Face achieved almost 6.5x faster inference than its initial Sapphire Rapids baseline. Source
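Below is a minimal sketch of the IPEX-plus-bfloat16 approach described above: load a Stable Diffusion pipeline with Diffusers, optimize its compute-heavy submodules with IPEX, and run inference under CPU bfloat16 autocast so AMX can be used on Sapphire Rapids. The model ID, prompt, and step count are assumptions for illustration, and the article's full recipe (memory allocator, OpenVINO export) is not reproduced here.

```python
# Sketch: accelerating Stable Diffusion inference on an Intel CPU with
# Diffusers + Intel Extension for PyTorch (IPEX) and bfloat16 autocast.
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

# Illustrative model ID; any Diffusers-compatible Stable Diffusion checkpoint works.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Optimize the heaviest submodules for bfloat16 execution on the CPU.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)
pipe.text_encoder = ipex.optimize(pipe.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)

prompt = "a photo of an astronaut riding a horse on mars"
with torch.inference_mode(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    image = pipe(prompt, num_inference_steps=30).images[0]
image.save("astronaut.png")
```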