AI Insights
Your weekly newsletter

In this week's AI newsletter, we cover Y Combinator's Winter 2023 batch, which includes 59 generative-AI startups targeting the enterprise sector; a comprehensive survey of Large Language Models (LLMs); Meta AI's unveiling of Segment Anything; recent funding rounds for Quantexa, Covariant, and Perplexity AI; and BloombergGPT, Bloomberg's new language model, trained on a mix of financial and general-purpose data and poised to transform the financial sector.
Y Combinator's Winter 2023 Batch Brings Generative AI to the Enterprise Sector
Y Combinator's latest batch includes 59 generative-AI startups, 22% of the 272 companies in the cohort and up from just 11 in the previous cohort. These companies are primarily focused on bringing generative AI to the enterprise, targeting software engineering, sales, data and analytics, and marketing; among the most promising use cases are AI chatbots for sales and customer-support teams. Key players include Lightski, Kyber, and Stack AI, and the rapid growth of generative-AI startups within Y Combinator's cohort suggests the technology is on the verge of going mainstream. Source
AI Research - A Comprehensive Survey of Large Language Models
A new survey provides a comprehensive review of recent progress in Large Language Models (LLMs), covering their key concepts, findings, and techniques. It is organized around four important aspects of LLMs: pre-training, adaptation tuning, utilization, and evaluation, and for each it highlights the techniques and findings central to LLMs' success. The authors call for more formal theories and principles to understand and explain the behavior of LLMs, as well as more efficient Transformer variants for building them. The survey also lays out challenges and future directions across theory and principle, model architecture, model training, model utilization, safety and alignment, and application and ecosystem, emphasizing that AI safety must be built into LLM development so that AI benefits humanity rather than harms it. Source
Meta AI unveils Segment Anything
Meta AI has introduced the Segment Anything (SA) project, a foundation model for image segmentation that transfers zero-shot to new image distributions and tasks. SA comprises a new promptable segmentation task, a model architecture called the Segment Anything Model (SAM), and a large-scale dataset, SA-1B, containing over 1 billion masks across 11 million licensed and privacy-preserving images. The dataset was built with a model-in-the-loop data engine, and the promptable task design gives SAM a general mechanism for zero-shot transfer to downstream segmentation tasks via prompting. The authors believe SA and SAM will pave the way for the future of image segmentation, and have released the SA-1B dataset and SAM under a permissive open license to aid future development of foundation models for computer vision. Source
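The core interface SAM introduces is "point in, mask out": the user supplies a prompt (such as a clicked pixel) and the model returns a binary mask for the object at that point. SAM itself is a large pretrained Transformer that needs downloaded weights; purely to illustrate that prompt-to-mask interface, here is a toy stand-in that "segments" from a point prompt with simple flood fill. This is not SAM's method, just the same input/output shape.

```python
# Toy illustration of SAM-style promptable segmentation: image + point
# prompt in, binary mask out. The "model" here is flood fill from the
# clicked pixel, used only to show the interface, not SAM's approach.
from collections import deque

def segment_from_point(image, point, tol=10):
    """image: 2D list of grayscale ints; point: (row, col) prompt."""
    rows, cols = len(image), len(image[0])
    r0, c0 = point
    seed = image[r0][c0]
    mask = [[0] * cols for _ in range(rows)]
    queue = deque([(r0, c0)])
    mask[r0][c0] = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(image[nr][nc] - seed) <= tol):
                mask[nr][nc] = 1
                queue.append((nr, nc))
    return mask

# A 4x4 image with a bright 2x2 square in the top-left corner:
img = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [10,  10,  10, 10],
    [10,  10,  10, 10],
]
mask = segment_from_point(img, (0, 0))
# mask covers exactly the 2x2 bright region the prompt landed on
```

With the real SAM, the same click would instead be encoded by a prompt encoder and decoded against learned image embeddings, which is what makes the zero-shot transfer possible.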
Recent Funding Rounds: Quantexa, Covariant, and Perplexity AI
London-based Quantexa has raised $129 million in a Series E funding round led by Singapore's sovereign wealth fund GIC, valuing the company at $1.8 billion. Quantexa provides AI tools to financial services, governments, and major organizations to tackle online crime and fraud. Meanwhile, AI firm Covariant, which develops robotic picking technology, has raised $75 million in a Series C extension. Lastly, AI-powered search engine startup Perplexity AI closed a $26 million Series A funding round to offer a conversational search engine as a rival to Google. The funds will be used to expand the platform and optimize its knowledge database. Source
AI Finance: BloombergGPT
Bloomberg has introduced BloombergGPT, a 50-billion-parameter language model trained on a mixed dataset of financial and general-purpose data. The goal is to achieve best-in-class results on financial benchmarks while maintaining competitive performance on general NLP benchmarks. Evaluated on both finance-specific and general-purpose tasks, the model consistently outperforms comparable models on financial tasks such as sentiment analysis. BloombergGPT can generate Bloomberg Query Language (BQL) queries from natural-language questions, suggest news headlines, and answer financial questions such as identifying a company's CEO. The paper also discusses the history and development of language models, the ethical considerations and limitations of large language models, and the challenges of constructing high-quality training corpora. Source
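Natural-language-to-BQL generation of the kind the paper demonstrates is typically driven by few-shot prompting: the model is shown a handful of question/query pairs and asked to complete the next one. The sketch below assembles such a prompt; the example BQL pairs are illustrative placeholders, not taken from the paper, and sending the prompt to a model is left out.

```python
# Hypothetical sketch of few-shot natural-language-to-query prompting,
# the general technique behind NL-to-BQL demos. The example query pairs
# below are illustrative placeholders, not real output from BloombergGPT.
FEW_SHOT_EXAMPLES = [
    ("Get the last price of Apple",
     "get(px_last) for(['AAPL US Equity'])"),
    ("Get the market cap of Microsoft",
     "get(cur_mkt_cap) for(['MSFT US Equity'])"),
]

def build_bql_prompt(question):
    """Assemble a few-shot prompt for an LLM to complete with a query."""
    lines = ["Translate the question into a BQL query.", ""]
    for nl, bql in FEW_SHOT_EXAMPLES:
        lines.append(f"Question: {nl}")
        lines.append(f"Query: {bql}")
        lines.append("")
    lines.append(f"Question: {question}")
    lines.append("Query:")  # the model completes this line
    return "\n".join(lines)

prompt = build_bql_prompt("Get the last price of Tesla")
# `prompt` ends with an open "Query:" line for the model to fill in.
```

The prompt's trailing "Query:" is the completion point: a model like BloombergGPT, having seen the in-context examples, continues with a query in the same syntax.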