Singular Intel #11

Google's Bard is a Newly Developed Chatbot Competing with OpenAI's ChatGPT.

Google has entered the AI chatbot arena with Bard, an experimental conversational AI service powered by Google's LaMDA language model. Bard draws on information from across the web to provide up-to-date, high-quality answers to a wide range of queries, giving it an advantage over OpenAI's ChatGPT on questions about more recent events. The service is currently available only to trusted testers, with a wider public release planned in the coming weeks. With Bard's ability to explain complex topics in terms simple enough for a child to understand, Google is showing that it has what it takes to compete with ChatGPT. The competition between these two tech giants should drive innovation and benefit users.

Prequel's Artique is a new AI-enhanced photo editing application designed for businesses and creators.

Prequel has launched Artique, an all-in-one mobile platform for advertising and marketing graphics that uses artificial intelligence to help businesses and creators with limited resources bootstrap photo-based graphic design. The app lets users customize ready-made templates with control over fonts, layout, colors, supporting images and other details. Artique is free to download, but the Pro tier becomes paid after an initial 7-day free trial.

OpenAI Rival Stable Diffusion Maker Seeks to Raise Funds at $4 Billion Valuation

Stability AI Ltd., the company behind Stable Diffusion, an AI tool for generating digital images, is reportedly seeking to raise new funding at a valuation of about $4 billion. The amount of capital it is seeking is unclear, and a final decision has not been made. Stable Diffusion competes with OpenAI's DALL-E 2, another digital-image tool. AI has become the hottest topic in Silicon Valley, with tech giants like Microsoft, Alphabet, Amazon.com and Meta Platforms investing billions in the technology. Stability AI's open-source approach sets it apart from competitors, and its software has practical uses in designing video games and advertisements.

Scientists are using artificial intelligence to reconstruct images from brain activity.

Researchers from Osaka University have developed an AI model that can recreate images seen by a human test subject by processing their brain activity. The system is built on a diffusion model, which adds random noise to its training data and then learns to remove it, producing a realistic image in the process. The Stable Diffusion image generator used in the study was recently downsized to run on a smartphone. Although fMRI scans are currently required to gather the necessary brain data, companies like Neuralink are working on brain-computer interface implants that could record it using tiny electrodes. The technology could have both practical and potentially dystopian applications.
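For readers curious how that add-noise-then-remove-it training loop works, here is a minimal Python sketch of the core diffusion idea. The data, noise schedule, and dummy noise predictor are all illustrative stand-ins, not the Osaka team's actual model:

```python
# Minimal sketch of diffusion: add noise to data, then estimate and
# remove it. The "model" here is a placeholder that returns zeros.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "image" data: 16 flattened 8x8 grayscale patches.
x0 = rng.random((16, 64))

# Linear noise schedule over T steps (a common, simple choice).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_cum = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward process: jump straight to noise level t in closed form."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_cum[t]) * x0 + np.sqrt(1.0 - alphas_cum[t]) * noise
    return xt, noise

def eps_model(xt, t):
    """Stand-in for a trained network that predicts the added noise.
    Training would regress its output onto the true `noise` above."""
    return np.zeros_like(xt)

# One denoising step: estimate the noise and invert the forward process.
t = 50
xt, true_noise = add_noise(x0, t)
eps_hat = eps_model(xt, t)
x0_est = (xt - np.sqrt(1.0 - alphas_cum[t]) * eps_hat) / np.sqrt(alphas_cum[t])
print("reconstruction error:", np.mean((x0_est - x0) ** 2))
```

In the Osaka study, the conditioning signal steering this denoising process came from fMRI readings rather than a text prompt.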

Alibaba’s New Text-to-Image Model That Provides More Control Over the Outputs

Alibaba researchers have developed a new paradigm called Composer, which allows for more flexible control over image output while maintaining high synthesis quality and model creativity. The technique involves breaking down an image into representative factors and training a diffusion model with all these factors as conditions for recomposing the input. Composer supports various levels of conditions and serves as a general framework, facilitating a wide range of classical generative tasks without retraining. The concept of compositionality is applied to image generation and manipulation tasks, enabling the creation of novel images from previously unseen combinations of representations. Overall, Composer represents a significant step forward in controllable image generation and has the potential to revolutionize the way designers approach real-world design projects.
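To make the decompose-then-recompose idea concrete, here is a hedged Python sketch. The factor names (caption, palette, sketch, depth) and the stand-in extraction and synthesis functions are illustrative assumptions, not Alibaba's implementation:

```python
# Sketch of compositional conditioning: break an image into factors,
# then recombine any subset of factors as conditions for generation.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Factors:
    caption: Optional[str] = None         # global semantics
    palette: Optional[np.ndarray] = None  # dominant colors, shape (k, 3)
    sketch: Optional[np.ndarray] = None   # edge map
    depth: Optional[np.ndarray] = None    # depth map

def decompose(image: np.ndarray, caption: str) -> Factors:
    """Extract crude stand-in factors from an RGB image (H, W, 3)."""
    palette = image.reshape(-1, 3)[:4]                 # placeholder palette
    sketch = np.abs(np.diff(image.mean(-1), axis=1))   # crude edge proxy
    return Factors(caption=caption, palette=palette, sketch=sketch)

def recompose(conditions: Factors) -> np.ndarray:
    """Stand-in for the conditional diffusion model: any factor may be
    None, mirroring Composer's flexible, multi-level conditioning."""
    h, w = (conditions.sketch.shape if conditions.sketch is not None
            else (64, 64))
    return np.zeros((h, w, 3))  # a real model would synthesize here

# Mix factors from two images to form a novel combination, e.g. the
# layout of one image with the colors and caption of another.
img_a = np.random.rand(64, 64, 3)
img_b = np.random.rand(64, 64, 3)
fa, fb = decompose(img_a, "a cat"), decompose(img_b, "a dog")
novel = recompose(Factors(caption=fb.caption, palette=fb.palette,
                          sketch=fa.sketch))
```

The property mirrored here is that any subset of factors can be swapped or omitted at generation time, which is what lets a Composer-style model recombine representations from different images into novel outputs without retraining.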

Nvidia just launched a new vision-language model known as "Prismer" that can be trained with far less data than previous models.

Nvidia has introduced a new vision-language model called Prismer, which uses an ensemble of diverse, pre-trained domain experts to achieve strong fine-tuned and few-shot vision-language reasoning performance. Prismer requires up to two orders of magnitude less training data than current state-of-the-art models, scaling down multi-modal learning while still performing well. Prismer is a transformer with both an encoder and a decoder; it leverages frozen pre-trained experts while keeping the number of trainable parameters to a minimum. It achieved image captioning performance comparable to other models with significantly less data, and VQAv2 accuracy comparable to GIT despite being trained on far less data. For full transparency, the authors also discuss Prismer's limitations.
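As a rough illustration of how frozen experts plus a small trainable budget can work, here is a minimal PyTorch sketch. The module names, sizes, and fusion scheme are invented for illustration and do not reflect Prismer's actual architecture:

```python
# Sketch: freeze pre-trained "experts" and train only small adapters
# plus a fusion head, keeping trainable parameters to a minimum.
import torch
import torch.nn as nn

class FrozenExpert(nn.Module):
    """Stand-in for a pre-trained domain expert (e.g. depth, segmentation)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Linear(dim, dim)
        for p in self.parameters():
            p.requires_grad = False  # expert weights stay frozen

    def forward(self, x):
        return self.net(x)

class Adapter(nn.Module):
    """Small trainable bottleneck; the only part that learns."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

dim, n_experts = 64, 3
experts = nn.ModuleList([FrozenExpert(dim) for _ in range(n_experts)])
adapters = nn.ModuleList([Adapter(dim) for _ in range(n_experts)])
fuse = nn.Linear(n_experts * dim, dim)  # trainable fusion head

x = torch.randn(8, dim)  # batch of image features
fused = fuse(torch.cat([a(e(x)) for e, a in zip(experts, adapters)], dim=-1))

trainable = sum(p.numel() for p in list(adapters.parameters())
                + list(fuse.parameters()) if p.requires_grad)
print(f"trainable parameters: {trainable}")  # tiny next to the frozen experts
```

Because gradient updates touch only the adapters and fusion head, far less labeled data is needed to reach good performance, which is the intuition behind Prismer's data efficiency.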
