
Agora SDK version 3.0.1: Voice enhancement, face detection, and more in this release!

Voice Enhancement

To improve audio quality, this release added the following enumerations to setLocalVoiceChanger and setLocalVoiceReverbPreset:

  • VOICE_CHANGER_PRESET adds several types prefixed by VOICE_BEAUTY and GENERAL_BEAUTY_VOICE. The VOICE_BEAUTY types enhance the local voice, and the GENERAL_BEAUTY_VOICE types apply several different gender-based enhancement effects.
  • AUDIO_REVERB_PRESET adds the AUDIO_VIRTUAL_STEREO type and several enum types prefixed by AUDIO_REVERB_FX. The AUDIO_VIRTUAL_STEREO type implements reverberation in the virtual stereo, and the AUDIO_REVERB_FX enumerations implement additional enhanced reverberation effects.

See Set the Voice Changer and Reverberation Effects for more information.
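As a quick illustration, here's what applying the new presets could look like on Android (a minimal Java sketch; the constant names follow the Constants class in the Android API reference, and an already-initialized RtcEngine is assumed):

```java
import io.agora.rtc.Constants;
import io.agora.rtc.RtcEngine;

public class VoiceEffectsExample {
    // Assumes `engine` was already created with RtcEngine.create(...).

    // Apply one of the new gender-based GENERAL_BEAUTY_VOICE presets
    // to enhance the local voice.
    public static void beautifyVoice(RtcEngine engine) {
        engine.setLocalVoiceChanger(Constants.GENERAL_BEAUTY_VOICE_MALE_MAGNETIC);
    }

    // Apply the new virtual stereo reverberation preset.
    public static void enableVirtualStereo(RtcEngine engine) {
        engine.setLocalVoiceReverbPreset(Constants.AUDIO_VIRTUAL_STEREO);
    }
}
```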

Why this will improve your experience:

The new voice enhancement options in this release improve overall audio quality, enhancing the experience however you implement them.


Face Detection

This release enabled local face detection. After you call enableFaceDetection to enable this function, the SDK triggers the onFacePositionChanged callback in real time to report the detection results, including the distance between the human face and the device screen.
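Here's a minimal Android (Java) sketch of how this could be wired up; the callback signature and the AgoraFacePositionInfo type follow the Android API reference for this release, and an existing RtcEngine setup is assumed:

```java
import android.util.Log;

import io.agora.rtc.IRtcEngineEventHandler;
import io.agora.rtc.RtcEngine;

public class FaceDetectionExample {

    // Event handler that receives the real-time face detection results.
    // Per the API reference, each AgoraFacePositionInfo includes the
    // distance between the detected face and the device screen.
    private final IRtcEngineEventHandler handler = new IRtcEngineEventHandler() {
        @Override
        public void onFacePositionChanged(int imageWidth, int imageHeight,
                                          AgoraFacePositionInfo[] faces) {
            if (faces == null || faces.length == 0) {
                return; // No face detected in this frame.
            }
            Log.d("FaceDetection", "Distance to screen: " + faces[0].distance);
        }
    };

    // Call after creating the engine with the handler above registered.
    public void startFaceDetection(RtcEngine engine) {
        engine.enableFaceDetection(true);
    }
}
```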

Why this will improve your experience:

This method offers many good uses in your applications. For instance, you can remind users to keep a certain distance from the screen so that their entire faces can be captured.


Change Audio Mixing Pitch

This release added the setAudioMixingPitch method in the SDK, which allows you to set the pitch of the local music file during audio mixing through the pitch parameter. This method only changes the pitch of the music file and does not affect the pitch of a human voice.
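For example, on Android the call could look like this (a minimal Java sketch; an initialized RtcEngine, a joined channel, and a placeholder music file path are assumed):

```java
import io.agora.rtc.RtcEngine;

public class AudioMixingPitchExample {
    // Start background music and shift its pitch without affecting voices.
    public static void playMusicWithLowerPitch(RtcEngine engine) {
        // Parameters: filePath, loopback, replace, cycle (-1 = loop indefinitely).
        engine.startAudioMixing("/assets/background.mp3", false, false, -1);

        // Positive values raise the music pitch, negative values lower it;
        // the human voice in the stream is not affected.
        engine.setAudioMixingPitch(-3);
    }
}
```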

Why this will improve your experience:

Now you can add a feature in your app that allows users to change the background music’s pitch if it doesn’t fit their voices. For example, if a streamer wants to sing a song, but the background music’s pitch is too high for them, they can use this feature to change the music’s pitch without affecting their own voices.


Video Fill Mode

To improve the user experience of watching videos, this release added a video display mode, RENDER_MODE_FILL(4). This mode zooms and stretches the video to fill the display window. You can select this mode when calling the methods that set the local or remote video display mode, such as setLocalRenderMode and setRemoteRenderMode.
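For example, a remote user's view could be switched to the new mode like this (an Android/Java sketch; the three-argument setRemoteRenderMode overload and the constant names follow the Android API reference, and remoteUid is a placeholder):

```java
import io.agora.rtc.Constants;
import io.agora.rtc.RtcEngine;

public class FillModeExample {
    // Switch a remote user's view to the new RENDER_MODE_FILL(4) mode,
    // which zooms and stretches the video to fill the display window.
    public static void useFillMode(RtcEngine engine, int remoteUid) {
        engine.setRemoteRenderMode(remoteUid,
                Constants.RENDER_MODE_FILL,
                Constants.VIDEO_MIRROR_MODE_AUTO);
    }
}
```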

Why this will improve your experience:

This new mode provides one more way to display your video based on the display window, which makes application development easier.


Remote Video Renderer in Multiple Channels (Android/iOS/macOS)

Android: This release added setRemoteVideoRenderer in the RtcChannel class to enable users who join the channel using the RtcChannel object to customize the remote video renderer.

iOS/macOS: This release added setRemoteVideoRenderer and remoteVideoRendererOfUserId in the AgoraRtcChannel class to enable users who join the channel using the AgoraRtcChannel object to customize the remote video renderer.
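On Android, the call could look like the sketch below; it assumes you already have a custom renderer that implements the IVideoSink interface (for example, one that draws to your own GL surface) and an RtcChannel that the user has joined:

```java
import io.agora.rtc.RtcChannel;
import io.agora.rtc.mediaio.IVideoSink;

public class ChannelRendererExample {
    // Attach a custom renderer to a remote user who joined through an
    // RtcChannel object. `customSink` is your own IVideoSink implementation.
    public static void attachRenderer(RtcChannel channel, int remoteUid,
                                      IVideoSink customSink) {
        channel.setRemoteVideoRenderer(remoteUid, customSink);
    }
}
```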


Data Post-Processing in Multiple Channels

This release added support for post-processing remote audio and video data in a multi-channel scenario by adding the following C++ methods: isMultipleChannelFrameWanted, onPlaybackAudioFrameBeforeMixingEx, and onRenderVideoFrameEx.

After successfully registering the audio or video observer, you can get the corresponding audio or video data from onPlaybackAudioFrameBeforeMixingEx or onRenderVideoFrameEx by setting the return value of isMultipleChannelFrameWanted to true. In a multi-channel scenario, Agora recommends setting the return value to true.

Why this will improve your experience:

With these new methods, you can now retrieve the audio or video data from multiple channels.


Improvements

Frame Position

After successfully registering the video observer, you can observe and get the video frame at each node of video processing. To reduce power consumption, this release enabled customizing the frame position for the video observer. Set the return value of the getObservedFramePosition callback to choose which of the following positions to observe:

  • The position after capturing the video frame.
  • The position before receiving the remote video frame.
  • The position before encoding the frame.

Others

  • Android: Lowered in-ear monitoring latency on Huawei phones with EMUI v10 and above.
  • Android/iOS: Improved in-call audio quality. When multiple users speak at the same time, the SDK does not decrease the volume of any speaker.
  • Android/iOS: Reduced the overall CPU usage of the device.

Compatibility Changes

1. Dynamic library (iOS/macOS)

This release replaced the static library with a dynamic library for the following reasons:

  • Improving overall security.
  • Avoiding incompatibility issues with other third-party libraries.
  • Making it easier to upload the app to the App Store.

To upgrade the RTC Native SDK, you must re-integrate the dynamic library, AgoraRtcKit.framework. This process should take no more than five minutes.

Apple supports the dynamic library on iOS 13.4 and later.

2. Frame position for the video observer

As of this release, to get the video frame from the onPreEncodeVideoFrame callback, you must set POSITION_PRE_ENCODER(1<<2) as the frame position to observe in getObservedFramePosition, in addition to implementing the onPreEncodeVideoFrame callback.

This post only gives you a taste of the additions and improvements in this new version. To see more great features in v3.0.1, check out the OS-specific release notes.

Have questions or suggestions? Join our Developer channel on Slack or ask a question on Stack Overflow.
