Pika Labs, an AI video generation tool founded by a post-95 female entrepreneur, went viral last week. With a team of only four people, the startup raised $55 million within half a year of its founding, reaching a valuation of $200 million. It even staged a "father rises with daughter" drama in China's A-share market: after the tool took off, her father's listed company logged three consecutive daily limit-ups. Meanwhile, the first set of ready-to-wear Hanfu in China designed entirely with AIGC was unveiled at the 17th Hangzhou Cultural Expo, and AIGC is finding its place in film and television: with tools such as Unbounded AI, directors can intuitively generate what they envision and hand it to the departments they need to communicate with, greatly reducing communication costs.
Generative AI startup Together AI has raised more than $100 million in Series A funding
According to the AIGC Open Community, on November 30 the open-source generative AI platform Together AI announced on its official website that it had closed a $102.5 million (roughly 730 million yuan) Series A round. The round was led by Kleiner Perkins, with participation from Nvidia, Emergence Capital, NEA, Prosperity 7, Greycroft and others.
Generative AI startup Pika Labs has closed a $55 million funding round and launched its first video generator, Pika 1.0
Generative AI startup Pika Labs raised $55 million across a pre-seed and seed round led by Nat Friedman and Daniel Gross and a Series A round led by Lightspeed Venture Partners, according to The Decoder on Nov. 29. Other investors include Adam D'Angelo (founder and CEO of Quora), Andrej Karpathy, Clem Delangue (co-founder and CEO of Hugging Face and partner at Factorial Capital), and Craig Kallman (CEO of Atlantic Records).
In addition, Pika Labs announced the launch of the video generator Pika 1.0. Pika 1.0 reportedly uses a new AI model that can generate and edit video in different styles, including 3D animation, anime, and cinematic footage.
AI video generation tool HeyGen closes a $5.6 million funding round
On November 29, AI video generation tool HeyGen announced on the social platform X that it had received $5.6 million in new venture capital led by Sarah Guo's Conviction Partners, valuing the company at $75 million in this round. HeyGen also said its ARR (annual recurring revenue) has grown from $1 million to $18 million in a year, and that it has launched Instant Avatar 2.0.
Biotech and AI startup Cradle has raised $24 million in Series A funding
Biotech and artificial intelligence startup Cradle announced on Nov. 28 that it has raised $24 million in Series A funding led by Index Ventures, with participation from Kindred Capital (also a seed investor) and individual investors such as Chris Gibson and Tom Glocer. The round follows a $5.5 million seed round closed last year; the new funds will go toward growing the team and sales.
"Haina AI" completed tens of millions of yuan in Series A financing, with exclusive investment from Lenovo Venture Capital
According to 36Kr on December 1, "Haina AI" recently completed a Series A financing of tens of millions of yuan, exclusively invested by Lenovo Venture Capital; the funds will be used for talent recruitment, AI model R&D, and building out the marketing system. "Haina AI" is an AI product for the talent-recruitment vertical from Beijing Qunxing Shining Technology. First launched in 2019, it specializes in AI interview services that help enterprises complete recruitment interviews with the help of AI technology.
Harbin Institute of Technology (Shenzhen) releases the multimodal large model Jiutian-Lion ("Nine Days")
According to Webmaster's Home on December 4, Harbin Institute of Technology (Shenzhen) recently released a multimodal large language model called Jiutian-Lion. By integrating fine-grained spatial perception with high-level semantic visual knowledge, it achieved state-of-the-art performance on 13 vision-language tasks, with especially strong results on visual spatial reasoning.
KLCII officially open-sources the 70-billion-parameter model Aquila2-70B-Expr
According to 36Kr on November 30, Lin Yonghua, vice president and chief engineer of the Beijing Academy of Artificial Intelligence (Zhiyuan), announced at the 2023 Artificial Intelligence Computing Conference that the 70-billion-parameter large model Aquila2-70B-Expr (heterogeneous pioneer edition) has been officially open-sourced. It is the first large model trained on a mix of Nvidia resources and Tianshu Zhixin (Iluvatar CoreX) resources.
Reportedly, the training of Aquila2-70B-Expr was completed based on FlagScale v0.2 across Nvidia mixed resources (an A100 cluster plus an A800 cluster) and Tianshu Zhixin mixed resources (a BI-V100 cluster plus a BI-V150 cluster).
Inspur Information releases the 100-billion-parameter open-source large model "Source 2.0"
According to Titanium Media on November 27, Inspur Information officially released the 100-billion-parameter open-source large model "Source 2.0". Source 2.0 uses a Localized Filtering-based Attention (LFA) mechanism that effectively captures local and short-sequence information, allowing the model to grasp strong semantic associations between contexts more accurately and to learn the habitual paradigms of human language as well as programming ability.
The Fudan insurance team releases "Insurance Zhiku", a large model specialized for the insurance field
According to STAR Market Daily on December 2, the Fudan Insurance Masters Day and the "Insurance Zhiku" large-model launch event were held, at which the insurance vertical model "Insurance Zhiku", developed by the Fudan insurance team, was released. "Insurance Zhiku" is reportedly a large language model specialized for the insurance field, providing professional, intelligent, and comprehensive digital services for all kinds of users in insurance scenarios.
Alibaba Cloud open-sources the 72-billion-parameter Tongyi Qianwen model
According to Jinshi Data on December 1, Alibaba Cloud open-sourced the 72-billion-parameter Tongyi Qianwen model Qwen-72B, the 1.8-billion-parameter model Qwen-1.8B, and the audio model Qwen-Audio. In addition to the pre-trained models, Alibaba Cloud has also released corresponding chat models; for the 72B and 1.8B chat models it provides 4-bit and 8-bit quantized versions, making inference and fine-tuning more convenient for developers.
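For readers unfamiliar with what a "4-bit/8-bit quantized version" means in practice, here is a minimal sketch of symmetric 8-bit weight quantization in plain Python. It illustrates only the general round-trip (scale, round, dequantize); Qwen's actual quantization scheme is more sophisticated, and the function names here are our own, not Alibaba's API.

```python
def quantize_int8(weights):
    """Map float weights to integers in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the stored integers."""
    return [x * scale for x in q]

# Round-trip a tiny weight vector: the largest magnitude maps to +/-127.
w = [0.5, -1.27, 0.0, 0.9]
q, s = quantize_int8(w)
print(q)  # [50, -127, 0, 90]
```

Real schemes quantize per channel or per group (and 4-bit variants pack two values per byte), but the scale-and-round principle that shrinks checkpoint size and speeds up inference is the same.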
Tencent, Nanyang Technological University and others open-source the chart-focused model ChartLlama
According to Machine Heart, Tencent, Nanyang Technological University, and Southeast University recently proposed ChartLlama. The research team built a high-quality chart dataset and trained a multimodal large language model focused on chart understanding and generation tasks. ChartLlama combines language processing, chart generation, and other capabilities, giving researchers and professionals a powerful research tool.
High-Flyer Quant's DeepSeek releases a 67B large model
On November 29, the well-known quantitative private-equity giant High-Flyer Quant officially announced that "DeepSeek", its new organization exploring AGI (artificial general intelligence), had released the general-purpose large language model DeepSeek LLM 67B, following the release of its Coder model in early November. The model has been fully open-sourced, and the service is fully open for internal testing.
According to DeepSeek, compared with LLaMA2 70B, an open-source model of the same class, DeepSeek LLM 67B performed better on nearly 20 public benchmarks in Chinese and English, especially in reasoning, mathematics, and programming.
Stability AI introduces the Stable Diffusion XL Turbo model
According to IT Home on November 30, Stability AI recently launched Stable Diffusion XL Turbo (SDXL Turbo), an improved version of the earlier SDXL model. It uses "Adversarial Diffusion Distillation" to reduce the number of generation iterations from the original 50 steps to 1, with the claim that "just one iterative step can produce high-quality images."
The biggest feature of SDXL Turbo is this single-step generation: it claims to deliver "instant text-to-image output" while preserving quality. Experimental results show that SDXL Turbo greatly reduces computational requirements while maintaining good image quality: a single iteration of SDXL Turbo outperforms four iterations of LCM-XL, and four iterations of SDXL Turbo beat the previous 50-step SDXL configuration. On an A100 GPU, computing a 512×512 image takes only 207 milliseconds.
Google releases the Translatotron 3 model, which bypasses the text conversion step
According to IT Home, Google has officially introduced a new AI model called Translatotron 3, which performs direct speech-to-speech translation without any parallel speech data.
Google launched the Translatotron S2ST (speech-to-speech translation) system in 2019 and its second version in July 2021, and in a post published on May 27, 2023, it announced a new method for training Translatotron 3. According to the researchers, Translatotron 2 already delivered superior translation quality, robustness, and naturalness of speech, while Translatotron 3 achieves "the first fully unsupervised end-to-end model for direct speech-to-speech translation."
Amazon has launched several AI tools, including the Titan series of AI models
According to VentureBeat on November 30, following the launch of the new chatbot Amazon Q, the upgraded AI training chip Trainium2, and an expanded partnership with Nvidia, Swami Sivasubramanian, vice president of data and artificial intelligence at AWS, announced a series of new AI tools at re:Invent yesterday. These include three generative AI models from the "Titan" family: Titan Image Generator, Titan Text Express, and Titan Text Lite. In addition, Amazon Bedrock has been upgraded to give enterprise customers access to most models on the market, including AI21's Jurassic, Anthropic's Claude 2.1, Meta's Llama 2, and Stable Diffusion.
The first set of ready-to-wear Hanfu in China designed entirely with AIGC is unveiled at the 17th Hangzhou Cultural Expo
From November 23 to 27, the 17th Hangzhou Cultural and Creative Industry Expo was held in Hangzhou. During the event, the first set of ready-to-wear Hanfu in China designed and created entirely with AIGC was unveiled. The design of this Hanfu comes from "Decoration of Xizi", an outstanding entry in the 2023 "Mengxi Cup" Song Culture Innovation Competition, created by contestant Li Chao using Unbounded AI. The work draws inspiration from four traditional Chinese colors (Juyi, Yangfei, Qinglian, and Cuiwei) and was created with Unbounded AI's Song-style Hanfu model together with LoRA models such as gongbi figure painting and stippled watercolor.
ByteDance launches the large-model product "ChitChop" overseas
According to Tech Planet on November 29, ByteDance has launched a large-model product called "ChitChop" overseas. Its developer and operator is Poligon, the same company that operates ByteDance's overseas social product Helo. ChitChop is now available as a standalone app and a web version.
ChitChop is reportedly an AI assistant tool offering users more than 200 intelligent bot services, serving users' work and life by providing creative inspiration and improving productivity. Notably, the product is similar to Douyin Group's AI product "Little Wukong", a collection of AI tools built on the Skylark large language model.
Meizu releases the Aicy AI large model, supporting Q&A, painting, and other AI functions
According to Kuaitech on November 30, Meizu officially announced the release of the Flyme 10.5 system and its own Aicy AI large model. Aicy AI is described as an encyclopedia of instant answers: with its massive knowledge, Aicy can answer questions spanning natural science, everyday knowledge, health, and emotional Q&A. Aicy also supports creative inspiration for generating paintings in styles such as realistic, anime, and ink wash. In addition, an AI photo feature has been added to the gallery: after users upload their photos, the AI can generate portrait shots of them.
Google DeepMind uses the AI tool GNoME to discover 2.2 million new crystalline materials
On November 30, Google DeepMind presented the AI tool GNoME in the journal Nature, introducing applications of AI in materials science. DeepMind reportedly discovered 2.2 million new crystals using GNoME, of which 380,000 are stable materials that could be synthesized in the laboratory and are promising for use in batteries or superconductors. DeepMind claims it would have taken 800 years to work out these materials by human effort alone.
Alibaba International releases three AI design ecosystem tools
According to STAR Market Daily on December 1, at the 6th China International Industrial Design Expo, Alibaba International released three AI design ecosystem tools: Duiyou, Pic Copilot, and Luban AI. The three products reportedly offer functions such as AI painting, AI model creation, and AI image and video processing, and have already served hundreds of thousands of merchants and reached 500,000 designers.
Freepik launches Pikaso, a real-time AI drawing tool that uses LCM technology to turn simple line sketches into images
According to Webmaster's Home on December 1, the well-known stock-image platform Freepik recently released an innovative product, the Pikaso real-time drawing tool, which combines LCM (Latent Consistency Model) technology with a library of millions of licensed images to bring users a new creative experience and make real-time drawing possible.
Korean media: Samsung's Galaxy Book 4 series notebooks will be released on December 15 and will support running the Samsung Gauss AI model locally
According to IT Home, citing Yonhap News Agency, industry insiders revealed that Samsung Electronics will launch the Galaxy Book 4 series laptops, equipped with Intel's next-generation Core Ultra processors, on the 15th of this month, billing them as the world's first AI notebooks.
According to the report, the Galaxy Book 4 will be unveiled about a month and a half earlier than the previous generation; Samsung chose the earlier release to underline the symbolism of the new product as the "first AI notebook". The Galaxy Book 4 is expected to run Samsung's own AI model "Samsung Gauss" locally, eliminating the need for the device to transmit collected information to a remote server.
ASUS will release the first Intel Core Ultra processor AI notebook
ASUS announced on Weibo today that the 2024 ASUS Core Ultra AI PC thin-and-light laptop launch event will be held at 15:00 on December 15, where the new ASUS Zenbook series notebooks will also make an appearance. ASUS claims these will be the first laptops powered by the latest Core Ultra processors.
Bill Gates: Generative AI has reached its limits, and the next breakthrough is explainable AI
According to STAR Market Daily on November 27, Bill Gates said that many people within OpenAI, including Sam Altman, believe GPT-5 will be significantly better than GPT-4, but he argues there are many reasons to believe generative AI has reached its limits. Gates believes the next breakthrough will be explainable AI, though he does not expect it to arrive until the next decade (2030-2039).
Xu Zongben, academician of the Chinese Academy of Sciences: At present, large-scale model research is far from scientific
According to Jiemian News, Xu Zongben, an academician of the Chinese Academy of Sciences, said at the CCF China Software Conference that, as the trend of the new wave of AI development, the revolutionary impact of large models on scientific research paradigms, production methods, and industrial models cannot be underestimated, and that investing in large-model research has become an inevitable choice. However, he also said that large-model research today is still engineering, and far from being a science.
He believes software will be the first field in which artificial intelligence achieves a breakthrough: "Software has language, language has grammar, and grammar has strict standards. As long as artificial intelligence can be standardized and has logical boundaries, it can do well in the field of software."
Hugging Face co-founder releases 2024 predictions: open-source LLMs will reach the level of the best closed-source LLMs
According to AI New Intelligence, on November 27, Clement Delangue, co-founder and CEO of the AI open-source community Hugging Face, posted six predictions for the industry in 2024: a popular AI company will go out of business or be acquired at a very low price; open-source LLMs will reach the level of the best closed-source LLMs; AI will make major breakthroughs in video, time series, biology, and chemistry; the public will pay more attention to the economic and environmental costs of AI; most of the content of one popular media outlet will be AI-generated; and the 10 million AI builders on Hugging Face will not lead to an increase in unemployment.
Lin Yonghua from Beijing Academy of Artificial Intelligence: There is a three-year gap between the large model training performance of domestic AI chips and foreign countries
According to STAR Market Daily on November 29, Lin Yonghua, vice president and chief engineer of the Beijing Academy of Artificial Intelligence (Zhiyuan), said that the large-model cluster training performance of Chinese AI chips currently only approaches that of the Nvidia A100/A800, with most delivering less than 50%. There is also a huge ecosystem gap: China has more than 40 AI chip companies, yet domestic AI chips hold no more than 10% of the overall market, and because each vendor's AI chip software stack differs, the ecosystem is highly fragmented.
Jack Ma: The era of AI e-commerce has just begun, and it is both an opportunity and a challenge for everyone
According to STAR Market Daily on November 29, a number of Alibaba insiders said that last night Jack Ma made a rare appearance in a discussion on Alibaba's intranet, responding to employees' discussion of Pinduoduo's earnings report and e-commerce. Ma asked everyone to offer more constructive opinions and suggestions, especially innovative ideas. He said that everyone is watching and listening to Alibaba people today, and that he firmly believes Alibaba will change and reform. All great companies are born in winter. The era of AI e-commerce has just begun; it is both an opportunity and a challenge for everyone. Ma also congratulated Pinduoduo on its decision-making, execution, and efforts over the past few years: "Everyone has been great at some point, but an organization that can reform itself for the sake of tomorrow and the day after, and that is willing to pay any price and make any sacrifice, deserves respect. Back to our mission and vision — Alibaba people, keep going!"
Meta chief scientist Yann LeCun rebuts Jensen Huang: superintelligence isn't coming anytime soon
According to IT Home on December 4, Nvidia CEO Jensen Huang recently predicted that artificial intelligence will catch up with humans within five years. Yann LeCun, chief scientist of Facebook parent company Meta and a pioneer of deep learning, holds the diametrically opposite view: superintelligence, he believes, will not arrive anytime soon.
LeCun said it will take decades for current AI systems to reach anything like human-level perception. At that point, AI systems with common sense will be far more capable, no longer limited to summarizing mountains of text in creative ways. Responding to Huang's remarks, LeCun commented: "I know Jensen, the CEO of Nvidia, has a lot to gain from the AI hype. There is an AI war, and he's supplying the weapons."
Study: GPT-4 beats the professionally tuned Med-PaLM 2 model on medical problems
According to Webmaster's Home on December 4, Microsoft researchers demonstrated GPT-4's superior performance on medical knowledge tests: when combined with advanced prompt-engineering techniques, it outperformed the professionally tuned Med-PaLM 2.
The results suggest that applying more effective prompt engineering to mainstream general-purpose models may be a better route to accurate results than time-consuming and labor-intensive fine-tuning and model training. The MedPrompt method employs a variety of prompt-engineering techniques, including GPT-4-generated chain-of-thought reasoning and sampling multiple individually scored responses, with the highest-scoring answer returned to the user. Although this approach increases inference cost because more tokens are generated, the results indicate that combining a leading general-purpose model such as GPT-4 with advanced prompt engineering may be a path to state-of-the-art performance worth considering.
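The "multiple individually scored responses" step described above amounts to sampling several answers and keeping the one most samples agree on. The sketch below is our own illustration of that voting step, not Microsoft's MedPrompt code; `ask_model` is a stand-in for a real model call, with a canned reply sequence playing that role here.

```python
from collections import Counter

def vote_over_samples(ask_model, question, n_samples=5):
    """Sample several chain-of-thought answers and return the majority answer.

    `ask_model` should return the model's final answer for `question`;
    in a MedPrompt-style setup each call would use a nonzero temperature
    so the sampled reasoning paths (and answers) can differ.
    """
    answers = [ask_model(question) for _ in range(n_samples)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stand-in "model": five canned answers, three of which agree on "B".
replies = iter(["B", "A", "B", "B", "C"])
print(vote_over_samples(lambda q: next(replies), "Which option is correct?"))  # B
```

Each extra sample costs another full generation, which is why the paper notes higher inference cost in exchange for accuracy.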
According to the study, the energy consumed to generate one AI image is equivalent to fully charging a mobile phone
According to a new study by researchers at AI startup Hugging Face and Carnegie Mellon University, every time AI is used to generate an image, compose an email, or ask a chatbot a question, it places a certain burden on the planet.
Generating an image with a powerful AI model consumes about as much energy as fully charging a mobile phone; the study is the first to calculate the carbon emissions of using AI models for different tasks. The researchers found, however, that generating text with AI models consumes significantly less energy: generating 1,000 texts uses only about 16% of the energy of a full phone charge.
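Taking the two quoted figures at face value, a quick back-of-envelope comparison shows how lopsided the image-versus-text cost is. The arithmetic below is illustrative only; the variable names are ours, and note that the phone's actual battery capacity cancels out of the ratio.

```python
# One image ~= 1 full phone charge; 1,000 texts ~= 16% of a charge.
image_cost_in_charges = 1.0
text_cost_in_charges = 0.16 / 1000  # cost per single generated text

# How many generated texts match the energy of one generated image?
texts_per_image = image_cost_in_charges / text_cost_in_charges
print(round(texts_per_image))  # 6250
```

By these figures, one generated image costs roughly as much energy as 6,250 generated texts.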