To improve audio quality, this release added the following enums in setLocalVoiceChanger and setLocalVoiceReverbPreset:
See Set the Voice Changer and Reverberation Effects for more information.
The new voice enhancements in this release improve overall audio quality, enhancing the user experience however you implement them.
This release enabled local face detection. After you call enableFaceDetection to enable this function, the SDK triggers the onFacePositionChanged callback in real time to report the detection results, including the distance between the human face and the device screen.
This callback has many practical uses in your applications. For instance, you can remind users to keep an appropriate distance from the screen so that their entire faces can be captured.
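The logic behind such a reminder is simple. The sketch below is a minimal, SDK-free illustration in Python: it assumes each detection result carries a distance field (the estimated distance, in centimeters, between the face and the screen, as reported by onFacePositionChanged), and the threshold value is hypothetical.

```python
# Minimal sketch of reacting to face-detection results.
# Assumes each result is a dict with a "distance" field (cm),
# mirroring the information reported by onFacePositionChanged.

MIN_DISTANCE_CM = 30  # hypothetical threshold for this example


def should_remind_user(face_results):
    """Return True if any detected face is too close to the screen."""
    return any(face["distance"] < MIN_DISTANCE_CM for face in face_results)
```

You would call this from your callback handler and show a prompt when it returns True.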
This release added the setAudioMixingPitch method in the SDK, which allows you to set the pitch of the local music file during audio mixing through the pitch parameter. This method only changes the pitch of the music file and does not affect the pitch of a human voice.
Now you can add a feature in your app that allows users to change the background music’s pitch if it doesn’t fit their voices. For example, if a streamer wants to sing a song, but the background music’s pitch is too high for them, they can use this feature to change the music’s pitch without affecting their own voices.
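Under the hood, shifting a pitch by p semitones corresponds to a frequency ratio of 2^(p/12). The sketch below is a language-neutral illustration of that relationship; the [-12, 12] clamp (one octave down to one octave up) is an assumption for this example, not the documented bounds of setAudioMixingPitch.

```python
def pitch_ratio(semitones: int) -> float:
    """Frequency ratio for a pitch shift of the given number of semitones.

    Assumes the parameter is measured in semitones and, for this sketch,
    clamps it to [-12, 12] (one octave down to one octave up).
    """
    semitones = max(-12, min(12, semitones))
    return 2.0 ** (semitones / 12.0)
```

For example, a shift of +12 semitones doubles the frequency, while -12 halves it.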
To improve the user experience of watching videos, this release added a video display mode RENDER_MODE_FILL(4). This mode zooms and stretches the video to fill the display window. You can select this mode when calling the following methods:
This new mode gives you one more way to display video based on the display window, making application development easier.
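To see how fill mode differs from an aspect-ratio-preserving mode, the sketch below computes the rendered video size under both: a "fit" mode scales uniformly so the whole frame stays visible (letterboxed), while fill mode stretches each axis independently to cover the window, possibly distorting the image. This is an illustration of the geometry, not SDK code.

```python
def fit_size(video_w, video_h, win_w, win_h):
    """Uniformly scale the video so it fits inside the window (letterboxed)."""
    scale = min(win_w / video_w, win_h / video_h)
    return round(video_w * scale), round(video_h * scale)


def fill_size(video_w, video_h, win_w, win_h):
    """Stretch the video on each axis so it exactly fills the window."""
    return win_w, win_h  # aspect ratio may change
```

A 1280x720 video in a 640x640 window fits as 640x360 (with bars), but fills as 640x640 (stretched vertically).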
Android: This release added setRemoteVideoRenderer in the RtcChannel class to enable users who join the channel using the RtcChannel object to customize the remote video renderer.
iOS/macOS: This release added setRemoteVideoRenderer and remoteVideoRendererOfUserId in the AgoraRtcChannel class to enable users who join the channel using the AgoraRtcChannel object to customize the remote video renderer.
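The pattern behind these additions is a per-channel mapping from remote user to renderer. The sketch below is a hypothetical, SDK-free model of that pattern: the channel object keeps one renderer per uid and hands each incoming remote frame to the matching renderer. Class and method names here are illustrative, not the SDK's.

```python
class Channel:
    """Hypothetical stand-in for an RtcChannel-like object."""

    def __init__(self):
        self._renderers = {}  # uid -> callable taking a frame

    def set_remote_video_renderer(self, uid, renderer):
        """Register a custom renderer for one remote user in this channel."""
        self._renderers[uid] = renderer

    def on_remote_frame(self, uid, frame):
        """Dispatch an incoming frame to that user's renderer, if any."""
        renderer = self._renderers.get(uid)
        if renderer is not None:
            renderer(frame)
```

Because the mapping lives on the channel object, each channel you join can render the same remote user differently.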
This release added support for post-processing remote audio and video data in a multi-channel scenario by adding the following C++ methods:
After successfully registering the audio or video observer, you can get the corresponding audio or video data from onPlaybackAudioFrameBeforeMixingEx or onRenderVideoFrameEx by setting the return value of isMultipleChannelFrameWanted to true. In a multi-channel scenario, Agora recommends setting the return value as true.
With these new methods, you can retrieve audio and video data from multiple channels.
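Conceptually, the SDK asks the observer whether it wants frames from all channels and routes the callback accordingly. The sketch below is a simplified, SDK-free model of that dispatch in Python: when is_multiple_channel_frame_wanted returns True, frames are delivered to the Ex-style callback with the channel ID attached. The dispatcher itself is an assumption made for illustration; in the real SDK this routing happens internally.

```python
class AudioObserver:
    """Simplified model of a multi-channel audio frame observer."""

    def __init__(self, multi_channel=True):
        self.multi_channel = multi_channel
        self.received = []

    def is_multiple_channel_frame_wanted(self):
        return self.multi_channel

    def on_playback_audio_frame_before_mixing_ex(self, channel_id, uid, frame):
        self.received.append((channel_id, uid, frame))


def dispatch(observer, channel_id, uid, frame):
    """Route a remote frame the way the SDK conceptually would."""
    if observer.is_multiple_channel_frame_wanted():
        observer.on_playback_audio_frame_before_mixing_ex(channel_id, uid, frame)
```

With the flag set to True, one observer instance receives frames from every channel, tagged by channel ID.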
After successfully registering the video observer, you can observe and retrieve the video frame at each node of the video processing pipeline. To reduce power consumption, this release lets you customize the frame position for the video observer. Set the return value of the getObservedFramePosition callback to specify which of the following positions to observe:
This release replaced the static library with a dynamic library for the following reasons:
To upgrade the RTC Native SDK, you must re-integrate the dynamic library, AgoraRtcKit.framework. This process should take no more than five minutes.
Apple supports the dynamic library on iOS 13.4 and later.
As of this release, to get the video frame from the onPreEncodeVideoFrame callback, you must set POSITION_PRE_ENCODER(1<<2) in getObservedFramePosition as the frame position to observe, in addition to implementing the onPreEncodeVideoFrame callback.
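The observed positions are bit flags, so getObservedFramePosition can return a bitwise OR of several of them. The sketch below models that check: POSITION_PRE_ENCODER(1<<2) is named in the release notes, while the other two flag values are assumptions made for this illustration.

```python
# Frame-position bit flags. POSITION_PRE_ENCODER (1 << 2) is named in the
# release notes; the other two values are assumptions for this sketch.
POSITION_POST_CAPTURER = 1 << 0
POSITION_PRE_RENDERER = 1 << 1
POSITION_PRE_ENCODER = 1 << 2


def get_observed_frame_position():
    """Observe captured and pre-encoder frames, but skip the render node."""
    return POSITION_POST_CAPTURER | POSITION_PRE_ENCODER


def wants(position):
    """Check whether the returned mask includes a given frame position."""
    return bool(position & get_observed_frame_position())
```

With this mask, onPreEncodeVideoFrame-style callbacks would fire, while render-node observation is skipped to save power.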
This post gives you only a taste of the additions and improvements in this new version. For more details on v3.0.1, check out the OS-specific release notes:
Have questions or suggestions? Join our Developer channel on Slack or ask a question on Stack Overflow.