I used to think AI was still far from everyday life, until ChatGPT's stunning debut a year ago officially opened a new AI era. Over the past year, AIGC has become the hottest topic on the internet, and AI now plays a major role in fields such as intelligent driving, office work, game development, film and television post-production, industrial design, and illustration.
As a leader in the PC field, NVIDIA has worked tirelessly to advance AI and popularize generative AI applications. Recently, NVIDIA officially released a tech demo called "Chat with RTX".
Its purpose is to make it easy for ordinary users to deploy their own private, local AI chatbot on a Windows PC, so that more people can experience the power of generative AI, and to help users handle office work and information lookup more efficiently through Chat with RTX.
Let's walk through a few questions to see what kind of tool "Chat with RTX" really is.
What is Chat with RTX?
First of all, Chat with RTX is a completely free AI app!
To put it simply, Chat with RTX is a chatbot built on RTX AI acceleration technology that can be deployed locally on a Windows PC and run completely offline.
Once Chat with RTX is deployed, the local data on the user's PC (documents, notes, and other files) can be connected to a large language model, so the user can quickly and accurately retrieve the information they want by conversing with their custom AI chatbot.
Since it is a locally deployed AI application, the advantage of Chat with RTX is that it needs no internet connection, responds quickly, and keeps information more private and secure.
Of course, the disadvantage is also obvious: because the data on a PC's local disk is limited, Chat with RTX cannot pull in the broader, non-local information available online.
What can Chat with RTX do?
Chat with RTX currently supports file formats including txt, pdf, doc/docx, and xml. Simply point the app at the folder containing your files and it will load them into its library in seconds. In addition, you can provide YouTube video links, and the AI can quickly give feedback by analyzing the video content.
For now, after successfully deploying Chat with RTX on Windows, users can use it for basic chatbot conversation, local data retrieval, and rapid distillation, analysis, and summarization of information, as well as analysis of online YouTube videos.
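To make the "load a folder, then ask questions" idea concrete, here is a minimal, purely illustrative Python sketch of a local document library with keyword-based retrieval. It is not NVIDIA's implementation (Chat with RTX uses a retrieval-augmented generation pipeline accelerated by TensorRT-LLM); the folder name, chunk size, and scoring below are assumptions made only for illustration.

# Hypothetical sketch of the idea behind Chat with RTX's local library:
# scan a folder, split text files into chunks, and retrieve the chunks most
# relevant to a question by simple keyword overlap. Not NVIDIA's code.
from pathlib import Path

def load_chunks(folder: str, chunk_chars: int = 800) -> list[tuple[str, str]]:
    """Read .txt files under `folder` and split them into fixed-size chunks."""
    chunks = []
    for path in Path(folder).rglob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for i in range(0, len(text), chunk_chars):
            chunks.append((path.name, text[i:i + chunk_chars]))
    return chunks

def retrieve(chunks, question: str, top_k: int = 3):
    """Score each chunk by how many of the question's words it contains."""
    words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: sum(w in c[1].lower() for w in words),
        reverse=True,
    )
    return scored[:top_k]

if __name__ == "__main__":
    library = load_chunks("my_documents")  # folder name is an assumption
    for name, chunk in retrieve(library, "What were the main conclusions?"):
        print(f"[{name}] {chunk[:120]}...")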
For a simple example: you have a document on your computer that runs to tens of thousands of words and would take a long time to read yourself. With Chat with RTX you can point the app at that local file and ask your questions in conversational form, for instance asking the AI to summarize the main content of the document, and it will immediately return analysis and insights based on the document's content.
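Continuing the sketch above, the "summarize this document" step can be pictured as stitching the retrieved excerpts and the user's question into a single prompt for a locally running model. The generate function below is a pure placeholder (Chat with RTX ships its own TensorRT-LLM backend with models such as Mistral or Llama 2), so treat this only as the shape of the idea.

# Hypothetical sketch: combine retrieved document excerpts with the user's
# question into one prompt for a local language model. `generate` is a
# placeholder, not Chat with RTX's actual interface.
def build_prompt(question: str, excerpts: list[str]) -> str:
    context = "\n\n".join(
        f"[Excerpt {i + 1}]\n{text}" for i, text in enumerate(excerpts)
    )
    return (
        "Answer the question using only the excerpts below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def generate(prompt: str) -> str:
    # Placeholder for the local model call; swap in whatever local LLM you run.
    return "(model output would appear here)"

if __name__ == "__main__":
    excerpts = ["...tens of thousands of words of report text...",
                "...key findings and conclusions section..."]
    print(generate(build_prompt("Summarize the main points of this report.", excerpts)))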
It works much like talking to ChatGPT directly: you state your needs and questions and get answers back. The difference is that ChatGPT draws on a far larger database, while Chat with RTX draws on the data stored locally on your PC.
So, to put it simply, Chat with RTX is more like a localized, private AI assistant chatbot.
How does Chat with RTX work?
To deploy Chat with RTX on a Windows computer, first make sure your hardware can support it: you need an RTX 30 or RTX 40 series graphics card, 16 GB or more of RAM, the Windows 11 operating system, and graphics driver version 535.11 or newer.
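As a rough, optional pre-check of the GPU-side requirements above, a short script like the following can query the installed card and driver through the standard nvidia-smi tool. The thresholds are simply copied from the requirements listed in this article; RAM size and the Windows version still need to be checked separately.

# Hedged helper: read GPU name, driver version, and VRAM from nvidia-smi and
# compare against the requirements quoted above. The string checks are a crude
# heuristic, not an official compatibility test.
import subprocess

def gpu_info() -> tuple[str, str, int]:
    """Return (GPU name, driver version, VRAM in MiB) for the first GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    ).strip().splitlines()[0].split(", ")
    return out[0], out[1], int(out[2])

if __name__ == "__main__":
    name, driver, vram = gpu_info()
    print(f"GPU: {name}, driver {driver}, {vram} MiB VRAM")
    if not ("RTX 30" in name or "RTX 40" in name):
        print("Warning: an RTX 30 or 40 series GPU is required.")
    if tuple(int(x) for x in driver.split(".")) < (535, 11):
        print("Warning: driver 535.11 or newer is required.")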
Once you have confirmed your computer can run Chat with RTX, simply go to NVIDIA's official website, download the local installation package, unzip it, and follow the installation steps.
Note that during installation you should keep the installer's default installation path; changing it to another location can cause the program to fail. In addition, the installer needs to download some required runtime components over the internet during the process.
What does the future hold? Chat with RTX is currently just a demo, and many functions and optimizations are still imperfect; after trying it, many users may find it not especially smart, or even of limited practical use.
However, because Chat with RTX is a free, low-barrier local AI program aimed at ordinary consumers, it is not hard to see NVIDIA's ambitions in personal PC AI. Beyond continuing to refine Chat with RTX, NVIDIA will surely keep pushing on other AI applications to accelerate the arrival of the AI-for-everyone era!
If you're playing with an AI PC, how could you pass up a ZOTAC RTX 4070 Super Apocalypse OC, a high-performance graphics card optimized and accelerated for AI?
The newly upgraded ZOTAC RTX 4070 Super Apocalypse OC has been strengthened again: it features the new Ada Lovelace architecture, third-generation ray tracing cores, fourth-generation Tensor Cores, and an RTX-accelerated AI experience that makes AI creation more powerful, plus DLSS 3 frame-generation technology and Reflex for smooth gaming. It is a solid choice for both AI creation and gaming entertainment.
As ZOTAC's sub-flagship series, the Apocalypse line is full of hardcore mecha design elements, giving it a distinctive look. It is equipped with three bionic shield-scale fans, an "ice mirror" plate that fully covers the cooling base, and upgraded "ice vein" composite heat pipes with dense nickel-plated fins for quiet, efficient, excellent cooling, along with an SEP power supply system built from carefully selected components, fully reflecting the build quality of ZOTAC's sub-flagship graphics cards.