Here are this week's curated articles, guides, and news about natural language processing (NLP) and artificial intelligence (AI) for you!
Network News
Millions of new materials have been discovered through deep Xi. Deepmind's GNOME, powered by Graph Neural Networks (GNNs), has succeeded in stabilizing thousands of new materials, including 736 structures created by external researchers. Inspired by known crystal structures and chemical formulas, GNOME's input connections are particularly useful for exploring new crystalline materials.
Investors urge CEO to resign, Stability AI considers**. UK AI startup Stability AI is considering** in response to investor pressure on its financial stability and performance. Investor Cotue advises the CEO to make the necessary changes to improve the company's economic position.
ChatGPT's training data can be exposed through "divergence attacks". A recent study of language models, including ChatGPT, revealed their unexpected ability to recall and reflect on specific training data. Researchers have found that ChatGPT has potential privacy concerns because it can leak sensitive information such as email addresses and ** numbers.
OpenAI's GPT store has been postponed until next year. OpenAI's GPT store launch has been postponed until next year. The GPT store aims to be a marketplace for users to sell and share their GPT creations, and OpenAI offers paid services based on usage.
Pika debuts, and the AI generator targets the tech giants. Pika Labs has released Pika 10, which is an impressive AI** generation tool. It has advanced features like text-to-and image-to-image conversion. The company also raised $55 million in funding.
Introducing SDXL Turbo: A real-time text-to-image generation model. Stability AI has introduced SDXL Turbo, a new text-to-image model that uses anti-diffusion distillation (ADD) to quickly generate high-quality images in one step. It can quickly and accurately create 512 x 512 images in just over 200 milliseconds.
The $10 million Artificial Intelligence Mathematics Olympiad was established. A $10 million prize pool has been announced to incentivize the development of AI models that can win gold medals at the International Mathematical Olympiad (IMO). The $5 million grand prize will be awarded to the first publicly shared AI model to achieve the gold standard in an approved competition.
Web Guide
LLM visualization. This content showcases a visual and interactive representation of the well-known Transformer architectures, including Nano GPT, GPT2, and GPT3. It provides clear visuals and illustrates the connections between all the blocks.
How Jensen Huang's NVIDIA is driving the AI revolution. NVIDIA CEO Jensen Huang has led the company's AI growth, achieving a staggering $200 billion in value growth. With a strong focus on artificial intelligence and its applications in various industries, Nvidia has surpassed large companies such as Walmart to become the sixth most valuable company. Huang's stake in NVIDIA is currently worth more than $40 billion.
In the age of artificial intelligence, Google is trying to make bold changes to search. Google is responding to pressure from generative AI tools and legal action by transforming the search experience. They are testing a "Notes" feature for the public to comment on search results and introducing a "Follow" option that allows users to subscribe to specific search topics and receive updates, similar to social networks.
Why are ai wrappers badly rated?AI wrappers are utility tools that utilize AI APIs to generate output and have been shown to be financially rewarding for creators. Examples such as Formula Bot and PhotoAI earn between $200,000 and $900,000 per year.
From pixels to possibilities: AI vision. A guide to possible innovations by utilizing GPT-4V, such as screenshots to** and helping the visually impaired.
Interesting ** and repositories
vaibh**s10/insanely-fast-whisper。'insanely-fast-whisper'CLI is a versatile tool for transcribing audio files. It is powered by Whisper Large v3 and can transcribe 98 minutes of audio in 150 seconds.
Extract training data from chatgpt. The researchers found a flaw in ChatGPT's alignment training that allowed the extraction of its training data, posing a significant security risk. By using meaningless prompts, the model could inadvertently expose its training data, extracting more than 10,000 unique examples for just $200.
Can the generic base model outperform the dedicated tuning?Medical case studies. GPT-4 surpasses MED-PALM 2 in answering medical questions using a new method called MedPrompt. By utilizing three advanced cue strategies, GPT-4 achieves 90Amazing accuracy of 2%.
Merlin: Giving the multimodal LLM a visionary. The researchers propose to add future modelling to the Multimodal Master of Laws (MLLM) to improve their understanding of the fundamentals and the intent of the discipline. Inspired by existing Xi paradigms, anticipatory pre-training (FPT) and anticipatory instruction adjustment (FIT) techniques are used for this purpose. 'merlin'is a new MLLM supported by FPT and FIT that demonstrates enhanced visual understanding, future reasoning, and multi-image input analysis.
Starling-7B: Improving the helpfulness and harmlessness of LLMs through RLAIF. Berkeley has launched Starling-7B, a powerful language model that leverages artificial intelligence feedback reinforcement Xi (RLAIF).
Dolphins: A multimodal language model for driving. Dolphins are a visual language model designed to act as conversational driving assistants. It is trained using data, text commands, and historical control signals to provide a comprehensive understanding of difficult driving scenarios for autonomous vehicles.