AI Insight

Your weekly newsletter

Welcome to the latest edition of our weekly AI newsletter, where we round up the most significant developments in the field.

  • OpenAI Gears Up for New Open-Source Launch Amidst Intense Competition

In response to increasingly sophisticated open-source alternatives, OpenAI is preparing to launch a new open-source language model. However, don't expect a direct competitor to its flagship GPT models: under OpenAI's business model, its most advanced commercial AI remains proprietary.

  • Parallel Domain Unveils Reactor for Advanced Synthetic Data Generation

Reactor, Parallel Domain's new synthetic data generation engine, promises to enhance AI performance by providing fully annotated training data to machine learning developers. Notably, Reactor has shown potential for improving the safety of autonomous vehicles and the efficiency of advanced driver-assistance systems (ADAS).

  • Stability AI Embraces Open-Source Movement Despite Controversies

Stability AI has launched StableStudio, an open-source version of its DreamStudio interface, demonstrating its commitment to open-source AI development. Despite ongoing financial troubles and intellectual-property controversies, the company maintains its blend of open-source and commercially licensed AI models, and it is reaching out to U.S. policymakers to emphasize transparency and competition in AI development.

  • ServiceNow and NVIDIA Collaborate to Boost Business Automation

ServiceNow and NVIDIA have teamed up to develop generative AI tools that streamline business processes and workflow automation. The tools will be trained on ServiceNow's platform data, with the aim of dramatically increasing productivity across enterprise functions such as IT and customer service.

  • Google Raises the Bar with PaLM 2, Trained on 3.6 Trillion Tokens

Google has announced PaLM 2, its new large language model, trained on nearly five times as much data as its predecessor. Despite being smaller than the original PaLM, PaLM 2 handles more advanced tasks, signaling improved efficiency. However, Google's secrecy about the specifics of its training data has prompted renewed calls for transparency from the research community.
