Artificial Intelligence Training Platform: An AI Science and Education Platform

Mondo Technology Updated on 2024-03-03

An AI lab is a facility dedicated to AI research and education, typically equipped with high-performance computers, specialized software, datasets, and related equipment. These training rooms serve the following main purposes:

Teaching: The training room can be used to teach courses related to artificial intelligence, such as machine learning, deep learning, computer vision, and natural language processing. Students can practice at these facilities to understand and master the theory through hands-on experience.

Research: The training room can also be used as a place for conducting artificial intelligence research. It's a place where researchers can develop new algorithms, models, and technologies that will advance the field of artificial intelligence.

Project practice: The training room can also serve as a venue for student projects, giving students the opportunity to apply theoretical knowledge to practical problems and to develop their practical skills and innovative thinking.

Industry cooperation: The training room can also cooperate with enterprises and industry partners to jointly promote the application and development of AI technology. This collaboration helps students understand industry needs and trends, laying the foundation for their future careers.

In general, the AI training room is of great significance for advancing the field of artificial intelligence, cultivating professional talent, and promoting industry-university-research cooperation.

DB-SD23 AI Experiment Box

AI covers a wide range of knowledge and requires a solid foundation in mathematics and programming, as well as embedded development skills. Existing books and products are either so theoretical that beginners give up soon after starting, or so complicated to develop with that people without a foundation stop at the threshold. That is why we developed the new DB-SD23 AI experiment box.

Built as a multi-dimensional learning and practice platform designed for beginners, it starts with basic, self-contained GPIO extensions, moves on to sensor experiment projects, and then progresses to OpenCV, PyTorch, the ROS robot system, machine kinematics, AI vision, AI hearing, and more, providing a complete path into AI development.

One. AI core.

GPU: 128-core NVIDIA Maxwell GPU

CPU: quad-core ARM Cortex-A57 processor

RAM: 4 GB LPDDR4, 25.6 GB/s

Compute: 472 GFLOPS

Built on NVIDIA's AI computing platform, the system core is a small but powerful computer that can run multiple applications in parallel, such as neural networks for object detection, segmentation, and speech processing. Equipped with a quad-core Cortex-A57 processor, a 128-core Maxwell GPU, and 4 GB of LPDDR4 memory, it delivers 472 GFLOPS of AI compute and supports a range of popular AI frameworks and algorithms, such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet.

Two. System framework and AI framework.

The system comes pre-installed with the Ubuntu 18.04 operating system, with all required environment and library files already installed, ready to use on boot.

Ubuntu 18.04 LTS is highly efficient for cloud computing, especially for storage-intensive and compute-intensive tasks such as machine learning. Ubuntu Long-Term Support releases come with up to five years of official technical support from Canonical. Ubuntu 18.04 LTS also ships with Linux kernel 4.15, which contains fixes for the Spectre and Meltdown vulnerabilities.

Detailed open-source Python sample programs are provided.

According to TIOBE's latest rankings, Python is now among the four most popular languages in the world, alongside Java, C, and C++. In China, its search index has already surpassed Java and C++, and it may soon become the most popular development language there.

Python is widely used in back-end development, game development, scientific computing, big data analysis, cloud computing, graphics development, and other fields. It leads in software quality control, development efficiency, portability, component integration, and rich library support. Python is simple, easy to learn, free, open source, portable, extensible, embeddable, and object-oriented; its object orientation is even more thorough than that of Java or C#.

JupyterLab programming.

JupyterLab is an interactive web-based development environment for notebooks, code, and data. It can be flexibly configured and arranged to support a wide range of workflows in data science, scientific computing, and machine learning. JupyterLab is extensible and modular: plugins can add new components and integrate with existing ones.

Multiple AI frameworks.

OpenCV computer vision library, TensorFlow AI framework, PyTorch AI framework, etc.

Three. Basic GPIO & Sensor Experiment Module.

Two-color LED experiment.

RGB LED experiment.

Relay experiment.

Laser sensor experiment.

Push-button switch experiment.

Tilt sensor experiment.

Vibration sensor experiment.

Buzzer experiment.

Reed switch sensor experiment.

U-shaped photoelectric sensor experiment.

PCF8591 analog-to-digital conversion experiment.

Raindrop detection sensor experiment.

PS2 joystick experiment.

Potentiometer experiment.

Analog Hall sensor experiment.

Analog temperature sensor experiment.

Sound sensor experiment.

Photosensor experiment.

Flame alarm experiment.

Smoke sensor experiment.

Touch switch experiment.

Ultrasonic distance measurement experiment.

Rotary encoder experiment.

Infrared obstacle avoidance sensor experiment.

I2C LCD1602 liquid crystal display experiment.

BMP180 barometric pressure sensor experiment.

MPU6050 gyroscope/accelerometer experiment.

DS1302 real-time clock experiment.

Line-tracking sensor experiment.

DC motor fan module experiment.

Stepper motor drive module experiment.

PIR human pyroelectric sensing module experiment.
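To give a flavor of the experiments above, consider the ultrasonic distance measurement: the sensor emits a pulse, the echo's round-trip time is measured, and the distance is half the round trip multiplied by the speed of sound. A minimal sketch of that calculation (the function name and the 343 m/s figure are our illustrative assumptions, not taken from the kit's materials):

```python
def echo_to_distance_cm(echo_time_s, speed_of_sound=343.0):
    """Convert an ultrasonic echo round-trip time (seconds) to distance (cm).

    The pulse travels to the obstacle and back, so the round trip is halved.
    343 m/s is the speed of sound in air at roughly 20 degrees Celsius.
    """
    distance_m = echo_time_s * speed_of_sound / 2.0
    return distance_m * 100.0

# A 2 ms round trip corresponds to about 34.3 cm.
print(round(echo_to_distance_cm(0.002), 1))
```

On real hardware the echo time would come from timing the sensor's echo pin via GPIO; here the timing is passed in so the arithmetic stands on its own.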

Four. AI Vision.

Sphere tracking.

Face recognition tracking.

** recognition.

Vehicle and pedestrian detection.

Body tracking.

Face recognition based on dlib.

License plate recognition.

Custom object recognition.

Gesture recognition based on PyTorch.

AI facial feature recognition.

Color recognition tracking.

Color grabbing.

Color interaction.

Model training - manipulator garbage sorting.

Human feature recognition interaction - robotic hand gesture interaction (recognizing multiple gestures and performing corresponding actions).

Human feature recognition interaction - robotic gesture grasping (recognizing number gestures, stacking layers, and knocking them down on a fist gesture).

Human feature recognition interaction - manipulator face recognition tracking (detecting faces and tracking their movement after recognition).
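Color recognition of the kind listed above typically works by converting pixels to HSV space and thresholding on hue. A stdlib-only sketch of that idea (the hue ranges and function name are illustrative assumptions; the kit itself presumably uses OpenCV for this):

```python
import colorsys

def classify_color(r, g, b):
    """Classify an RGB pixel (0-255 per channel) into a coarse color by hue.

    Hue is in [0, 1): red wraps around 0, green sits near 1/3, blue near 2/3.
    The thresholds are illustrative, not tuned values from the kit.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.2 or v < 0.2:
        return "none"          # too washed out or too dark to call a color
    if h < 1/12 or h >= 11/12:
        return "red"
    if 1/4 <= h < 5/12:
        return "green"
    if 7/12 <= h < 3/4:
        return "blue"
    return "other"

print(classify_color(220, 30, 40))   # a strongly red pixel -> red
```

In the real experiments the same thresholding would be applied per frame to a camera image, and the largest matching blob's position would drive the tracking.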

Five. AI hearing.

Speech synthesis experiment (converting text to *** format audio).

Speech dictation streaming experiment (converting speech into text output).

Turing bot experiment (after entering conversation text, the bot replies to it).

AIUI experiment (a full-link human-computer voice interaction solution from iFLYTEK, centered on natural language understanding).

VAD experiment (voice activity detection, also known as voice endpoint detection or speech boundary detection).

Xiaowei robot voice dialogue experiment (the program enters a dialogue state).

Snowboy voice wake-up experiment (Snowboy is a hotword detection toolkit developed by Kitt.AI. With it, developers can add hotword detection to hardware devices, letting users "wake up" or "command" a device by speaking to it, turning the device into a voice-controlled intelligent robot.)
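The VAD experiment above rests on a simple idea: a classic baseline marks an audio frame as speech when its short-time energy exceeds a threshold. A stdlib-only sketch (frame size, threshold, and function names are illustrative assumptions, not the kit's actual algorithm):

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def simple_vad(samples, frame_len=160, threshold=0.01):
    """Return a True/False speech flag per frame using an energy gate.

    Production VADs (e.g. the one in WebRTC) add noise-floor tracking and
    hangover smoothing; this is only the energy-threshold baseline.
    """
    flags = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        flags.append(frame_energy(frame) > threshold)
    return flags

# Silence followed by a loud burst: expect [False, True].
signal = [0.0] * 160 + [0.5, -0.5] * 80
print(simple_vad(signal))
```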

Six. Machine kinematics with ROS robots.

Mobile app control of the robotic arm (iOS/Android).

FPV first-person control.

6 degrees of freedom robotic arm.

Intelligent serial bus servos.

PC host computer control.

In addition to the FPV camera view, the host computer software also displays a 3D model of the robotic arm.

The 3D model rotates in sync with the physical arm, linking the theory of arm control with hands-on practice.

Robotic arm custom learning action sets.

After entering learning mode, the arm reads and records the angle of each rotation, so the action set can be learned and replayed.
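Record-and-replay teaching of this kind boils down to storing joint-angle snapshots and feeding them back to the servos. A minimal hardware-free sketch (class and method names are our own; the real kit would read from and write to serial-bus servos):

```python
class ActionSetRecorder:
    """Record joint-angle snapshots of a robotic arm and replay them.

    In learning mode each record() call would capture the current servo
    angles; replay() would send each frame back to the servos. Here the
    "servo bus" is just a callback so the logic stays hardware-free.
    """

    def __init__(self, num_joints=6):
        self.num_joints = num_joints
        self.frames = []

    def record(self, angles):
        if len(angles) != self.num_joints:
            raise ValueError("expected one angle per joint")
        self.frames.append(list(angles))

    def replay(self, send_to_servos):
        for frame in self.frames:
            send_to_servos(frame)

# Teach two postures, then replay them through a stand-in "servo bus".
rec = ActionSetRecorder(num_joints=3)
rec.record([90, 45, 10])
rec.record([80, 50, 20])
played = []
rec.replay(played.append)
print(played)
```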

Fun fixed action sets.

The app includes 8 fixed action groups; tap a group's serial number to preview it, then tap Run to execute it.

Synchronous teaching of robotic arms (2 sets required).

The joint angles of the master arm are read and transmitted to the slave arm in real time, so the slave rotates in sync with the master's posture.

6 degrees of freedom inverse kinematic control.

The servo motion of the 6-DOF manipulator is decomposed: given target coordinates, the theoretical angle of each servo is calculated, and the servos are then driven simultaneously through the servo control protocol.
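In the planar two-link case, this kind of inverse kinematics reduces to the law of cosines. A sketch for a two-link arm (link lengths and function names are illustrative; the full 6-DOF solution used by the kit is more involved):

```python
import math

def two_link_ik(x, y, l1=10.0, l2=10.0):
    """Joint angles (radians) for a planar 2-link arm to reach (x, y).

    The elbow angle comes from the law of cosines; the shoulder angle is
    the target direction minus the elbow's contribution. Raises ValueError
    if the target is out of reach.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)                      # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1=10.0, l2=10.0):
    """Forward kinematics, used to verify the IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q1, q2 = two_link_ik(12.0, 5.0)
print([round(v, 6) for v in forward(q1, q2)])  # recovers [12.0, 5.0]
```

Plugging the IK result back through the forward kinematics, as the last line does, is the standard sanity check before sending angles to real servos.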

ROS operating system.

The ROS robot operating system is a collection of tools, libraries, and conventions designed to simplify the task of building complex and robust robot behavior across robotics platforms.

Seven. Cloud and the Internet of Things.

IoT experiment based on the MQTT protocol.

IoT experiment based on Alibaba Cloud.

IoT experiment based on Bafa Cloud.

The structure of the WeChat Mini Program.

Mobile IoT experiment based on the WeChat Mini Program.

IoT smart lamp experiment.

IoT smart fan experiment.
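At the heart of the MQTT-based experiments is publishing small JSON payloads on agreed topics. A stdlib-only sketch of building such a message for the smart lamp (the topic layout and field names are invented for illustration; a real setup would publish the result with an MQTT client such as paho-mqtt):

```python
import json

def build_lamp_message(device_id, on, brightness):
    """Build an MQTT topic and JSON payload for a smart-lamp command.

    The topic layout and field names are illustrative, not a standard.
    """
    topic = f"home/{device_id}/lamp/set"
    payload = json.dumps({"on": bool(on), "brightness": int(brightness)},
                         sort_keys=True)
    return topic, payload

topic, payload = build_lamp_message("lamp01", True, 80)
print(topic)    # home/lamp01/lamp/set
print(payload)  # {"brightness": 80, "on": true}
```

Keeping the message-building logic separate from the network client makes the same payloads reusable across the MQTT, Alibaba Cloud, and Bafa Cloud variants of the experiment.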
