Agora Native RTC SDK version 4.3.0 has been officially released on the Agora website. The 4.3.0 SDK delivers significant improvements in basic RTC quality and experience, such as faster audio and video first-frame rendering, shorter API call times, an improved HD experience, and bandwidth optimizations. This version also adds features such as receiving-end custom composite-image layout and multi-view local preview, which can be applied to scenarios such as multiplayer team battles, conferences, and virtual social networking. Details follow.
Significant improvements to basic RTC quality
To further improve integration and the user experience for developers, Agora has optimized the task-processing and scheduling mechanism of the 4.3.0 SDK, which is reflected in the following aspects:
SDK Stability:
The stability of the SDK has been further enhanced. This not only reduces the crash rate in specific scenarios, such as screen sharing in multi-person remote meetings and joining channels on particular device models in live shows, but also optimizes the SDK's DNS resolution policy, improving the stability of domain-name resolution when calling setLocalAccessPoint in complex network environments.
Audio and video first-frame performance optimization:
First video frame rendering and first audio frame playback are 10% to 20% faster, for both local and remote streams.
API call latency optimization:
API call and response times can be reduced by up to 50%.
Continuous optimization of the HD experience
Agora has long been committed to promoting the adoption of 720p and 1080p HD in both domestic and overseas markets. In the 4.3.0 SDK, we have further optimized the HD experience in multiple scenarios.
HD optimization on low-end devices
In RTC scenarios, it has always been difficult for low-end devices to achieve 720p HD, a common pain point in overseas markets. Agora has therefore been continuously optimizing low-end device performance in live-streaming and audio-call scenarios. Version 4.3.0 further enhances the SDK's parallel processing capability, so that higher quality (720p, 24 fps) can be experienced on lower-end devices and image processing is more stable in high-resolution, high-frame-rate scenarios.
HD capabilities are enabled by default
To effectively promote the adoption of 720p HD across the network, the 4.3.0 SDK enables PVC and AI image quality by default. PVC is enabled by default for resolutions between 180p and 720p and is automatically downgraded or disabled when device performance is insufficient. AI image quality is also enabled by default, enhancing picture quality along multiple dimensions, and is likewise automatically downgraded or disabled when performance is insufficient.
Support for device score query and adaptive resolution
The queryDeviceScore method has been added to query a device's score level, ensuring that user-set parameters such as the publishing resolution do not exceed the device's capabilities. For example, in an HD or ultra-HD scenario, the host can call this method to query the device score before going live; if the returned score is low (for example, below 60 on a 100-point scale), the resolution should be lowered appropriately so the experience on that device is not affected. Different business scenarios require different minimum device scores, which customers can choose for themselves.
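To make the device-score check concrete, here is a minimal Android sketch of the idea described above. It assumes queryDeviceScore() is exposed on RtcEngine and returns a score on a 100-point scale, and it reuses the 60-point threshold from the example; check the 4.3.0 API reference for the exact signature on your platform.

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.VideoEncoderConfiguration;

// Minimal sketch: choose a publishing resolution based on the device score.
// queryDeviceScore() returning a 0-100 score is an assumption taken from the
// feature description above; verify it against the 4.3.0 API reference.
public class DeviceScoreCheck {
    public static void applyAdaptiveResolution(RtcEngine engine) {
        int score = engine.queryDeviceScore();

        VideoEncoderConfiguration config = new VideoEncoderConfiguration();
        if (score < 60) {
            // Low-end device: fall back to 640x360 at 15 fps to protect fluency.
            config.dimensions = new VideoEncoderConfiguration.VideoDimensions(640, 360);
            config.frameRate = VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_15.getValue();
        } else {
            // Capable device: publish 720p at 24 fps.
            config.dimensions = VideoEncoderConfiguration.VD_1280x720;
            config.frameRate = VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_24.getValue();
        }
        engine.setVideoEncoderConfiguration(config);
    }
}
```

The 60-point cutoff is only the example threshold from the text; as noted above, each business scenario can set its own minimum device score.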
Bandwidth optimization
As picture quality moves toward ever-higher definition, the bandwidth required for transmission also increases. The Agora 4.3.0 SDK reduces the bandwidth required for real-time transmission by optimizing module algorithms, providing users with a better experience under the same network conditions.
A number of new features have been added
Support for richer scenario gameplay
Customize the layout of the composite image on the receiving end
In scenarios where multiple hosts (four or more) publish streams, the audience side faces challenges such as high demands on device performance and downlink bandwidth, flexible screen layout, and switching or enlarging viewer windows. Agora has therefore launched an experience-optimization solution for multi-host scenarios that lets the receiving end customize the layout of the composite image, creating a smooth, personalized, audience-centered experience. The solution can be widely used in scenarios such as multiplayer team battles, multi-person conferences, and large online classes. The technical principle is shown in the figures below.
Figure 1: Receiving-end custom composite layout used in conjunction with the cloud transcoding service
Figure 2: After receiving the transcoded composite stream, the viewer implements a custom composite layout locally
Multi-view local preview
This version of the SDK supports previewing multiple views locally at the same time, with each view showing the video at a different observation position in the video link. For example, the video captured by the device camera can be rendered as two pictures in the local preview simultaneously: the original picture without pre-processing, and the picture after pre-processing (such as beautification, virtual background, or watermark).
This feature can be used in scenarios such as virtual social networking and meetings. In a virtual social scene, the host can preview the real camera picture and the virtual avatar at the same time in the app interface; in a meeting scenario, switching effects such as virtual background or beautification in the local preview does not affect the picture being broadcast online. A rendering sketch follows the figure below.
Figure: Illustration of a virtual social scene
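As a rough sketch of how two local previews might be wired up, the example below binds two views to the local video at different observation positions: the raw captured picture and the pre-processed picture. The position field and the VideoModulePosition constant names are assumptions based on the feature description, not confirmed API names; consult the 4.3.0 API reference for your platform.

```java
import android.view.SurfaceView;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.VideoCanvas;

// Hedged sketch: render the local camera twice, once as the raw captured picture and once
// after pre-processing (beautification, virtual background, watermark). The position field
// and the VideoModulePosition constants below are assumptions drawn from the feature
// description; verify the exact names in the 4.3.0 API reference for your platform.
public class MultiViewPreview {
    public static void setupDualPreview(RtcEngine engine, SurfaceView rawView, SurfaceView processedView) {
        // View 1: the original picture straight from the capturer (no pre-processing).
        VideoCanvas rawCanvas = new VideoCanvas(rawView, Constants.RENDER_MODE_HIDDEN, 0);
        rawCanvas.position = Constants.VideoModulePosition.VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN;
        engine.setupLocalVideo(rawCanvas);

        // View 2: the picture after local pre-processing, i.e. what will be encoded and sent.
        VideoCanvas processedCanvas = new VideoCanvas(processedView, Constants.RENDER_MODE_HIDDEN, 0);
        processedCanvas.position = Constants.VideoModulePosition.VIDEO_MODULE_POSITION_POST_CAPTURER;
        engine.setupLocalVideo(processedCanvas);

        engine.startPreview();
    }
}
```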
Audio-scenario start-up assistance
This SDK version also adds the selectMultiAudioTrack method, which supports choosing the audio track played locally and the audio track sent to the remote end separately, and can be used for start-up assistance in audio scenarios such as karaoke. For example, in a K-song scenario, the host can choose to play one audio track A locally as needed (such as the original song) while sending a different audio track B to the remote end (such as accompaniment only). The audience then hears only the accompaniment plus the host's singing, which improves both the host's singing experience and the audience's listening experience.
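As a rough illustration of the K-song flow described above, the sketch below plays one track of a multi-track music file locally while publishing another to the audience. The track indices and the selectMultiAudioTrack(playoutTrackIndex, publishTrackIndex) signature are assumptions based on the feature description; verify them against the 4.3.0 media player API reference for your platform.

```java
import io.agora.mediaplayer.IMediaPlayer;
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;

// Minimal K-song sketch: the host hears one audio track locally while another track
// is published to the remote end. Assumes a multi-track music file where track 0 is
// the original song and track 1 is the accompaniment; these indices and the
// selectMultiAudioTrack(playout, publish) signature are assumptions to verify.
public class KtvAudioTracks {
    public static void startAccompaniment(RtcEngine engine, IMediaPlayer player, String songUrl) {
        player.open(songUrl, 0); // open the multi-track music file
        // In production, wait for the player's "open completed" state callback before continuing.

        // Track 0 (original song) is heard locally by the host;
        // track 1 (accompaniment only) is published to the audience.
        player.selectMultiAudioTrack(0, 1);
        player.play();

        // Publish the media player's audio alongside the host's microphone.
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.publishMicrophoneTrack = true;
        options.publishMediaPlayerAudioTrack = true;
        options.publishMediaPlayerId = player.getMediaPlayerId();
        engine.updateChannelMediaOptions(options);
    }
}
```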