Agora recently released the latest version of their industry-leading video SDK. In this post, we’ll break down the upgrades, improvements, and best practices you can start using right now.
Whether you’re updating an existing project or just getting started, we’ll walk through how these new capabilities can help you deliver smoother, more engaging real-time experiences.
This release includes several key compatibility changes to how the SDK interacts with operating system frameworks and shared libraries when using multiple Agora SDKs.
Both the Video SDK (v4.5.0) and the Signaling SDK (v2.2.0 and above) now include the libaosl.so library, which powers essential cross-platform runtime features.
If you've integrated the Video SDK manually and also use the Signaling SDK, make sure to remove any older libaosl.so versions to prevent conflicts. The version shipped in v4.5.0 is 1.2.13, and it's critical that you keep only that latest one.
To improve compatibility across different frameworks, a few parameters now expect int64_t instead of their previous types.
If your app uses Agora's screen capture APIs, adjust your code accordingly when updating so everything runs smoothly:
- displayId has changed from uint32_t to int64_t
- windowId has changed from view_t to int64_t
- displayId has changed from uint32_t to int64_t, and windowId from view_t to int64_t
- sourceDisplayId has changed from uint32_t to int64_t, and sourceId from view_t to int64_t
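Because uint32_t to int64_t is a widening conversion, the migration is mechanical. A minimal sketch, using illustrative stand-in structs rather than the SDK's actual definitions:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for the before/after field types; only the
// integer widths matter here, not the SDK's actual struct layouts.
struct ScreenCaptureParamsOld { uint32_t displayId; }; // pre-4.5.0
struct ScreenCaptureParamsNew { int64_t  displayId; }; // 4.5.0+

// uint32_t -> int64_t is always value-preserving, so migrating an
// existing ID is a plain widening cast.
int64_t migrateDisplayId(uint32_t oldId) {
    return static_cast<int64_t>(oldId);
}
```

Since every uint32_t value fits in int64_t, no range checks are needed in this direction; only code that stored IDs back into narrower types needs review.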
On macOS, to streamline audio capture routing, the first time your app calls enableLoopbackRecording on a client device, the SDK automatically installs the AgoraALD virtual sound card, with no extra manual steps required.
Once installed, the system routes audio data through this virtual card as needed, with the goal of improving your app’s audio experience.
Before v4.5.0, when the active camera device was unplugged and re-plugged, the camera wouldn't always trigger a reliable device state change or resume capturing automatically. Now the SDK automatically handles camera reconnections on both macOS and Windows.
This means less time spent dealing with device states and more time focusing on your app's features.
Note: On Windows, after re-plugging, the device state is now correctly reported as MEDIA_DEVICE_STATE_IDLE rather than MEDIA_DEVICE_STATE_ACTIVE.
Starting with Android 15, the operating system supports larger memory pages. This release improves Agora’s Android support to smoothly run on devices with both 4 KB and 16 KB memory pages, preventing unexpected crashes and ensuring a more stable experience across a wider range of devices.
Agora has fine-tuned how noise suppression and video encoding preferences work, giving you more flexible tools to produce high-quality video under various conditions.
Instead of using VIDEO_DENOISER_LEVEL_STRENGTH, enable noise suppression with setVideoDenoiserOptions and then apply skin smoothing with setBeautyEffectOptions. This two-step process delivers better overall suppression.
Note: For low-light situations, we recommend first enabling noise suppression and then configuring low-light enhancement using setLowlightEnhanceOptions.
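The recommended ordering can be sketched as follows. This stub engine only records call order; the real setVideoDenoiserOptions, setBeautyEffectOptions, and setLowlightEnhanceOptions take option structs rather than no arguments:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stub engine that only records call order; a stand-in for the real
// engine object, whose methods take option structs.
struct FakeEngine {
    std::vector<std::string> calls;
    void setVideoDenoiserOptions()   { calls.push_back("denoise"); }
    void setBeautyEffectOptions()    { calls.push_back("beauty"); }
    void setLowlightEnhanceOptions() { calls.push_back("lowlight"); }
};

// Recommended order: noise suppression first, then skin smoothing,
// then (only for dim scenes) low-light enhancement.
std::string recommendedCallOrder(bool lowLightScene) {
    FakeEngine eng;
    eng.setVideoDenoiserOptions();
    eng.setBeautyEffectOptions();
    if (lowLightScene) eng.setLowlightEnhanceOptions();

    std::string out;
    for (std::size_t i = 0; i < eng.calls.size(); ++i) {
        if (i) out += ",";
        out += eng.calls[i];
    }
    return out;
}
```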
Agora's latest release makes it easier to get the best possible video quality without a lot of manual tweaking. Previously, you needed to pick a default preference for encoding quality or latency, and similarly for how the SDK handled changes in network conditions. Now the SDK automatically makes those calls for you:

- PREFER_COMPRESSION_AUTO: Instead of always going for top quality or strictly low latency, the SDK dynamically picks the right approach (low latency or high quality) based on your current video scene. This ensures viewers see the best possible experience, no matter what's happening on the network side.
- MAINTAIN_AUTO: This setting replaces the old quality-first choice. The SDK automatically chooses whether to maintain frame rate, balance the load, or hold resolution steady, depending on your video scenario. The result is that you spend less time fine-tuning and more time delivering great streaming experiences.

Beyond refinements, v4.5.0 brings new capabilities that can help you polish your app's video experience and streamline common workflows.
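The two automatic preferences described earlier can be modeled roughly like this. The enum constants match the names in the post, but the config struct is a simplified sketch, not the SDK's VideoEncoderConfiguration:

```cpp
#include <cassert>

// Enum constants follow the names in the post; the struct below is a
// simplified sketch, not the SDK's VideoEncoderConfiguration.
enum CompressionPreference { PREFER_LOW_LATENCY, PREFER_QUALITY, PREFER_COMPRESSION_AUTO };
enum DegradationPreference { MAINTAIN_QUALITY, MAINTAIN_FRAMERATE, MAINTAIN_BALANCED, MAINTAIN_AUTO };

struct EncoderConfigSketch {
    CompressionPreference compression = PREFER_COMPRESSION_AUTO; // new automatic default
    DegradationPreference degradation = MAINTAIN_AUTO;           // replaces quality-first default
};

// With both preferences on AUTO, no per-scenario tuning code is needed.
bool usesAutomaticTuning(const EncoderConfigSketch& c) {
    return c.compression == PREFER_COMPRESSION_AUTO && c.degradation == MAINTAIN_AUTO;
}
```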
This version adds the APPLICATION_SCENARIO_LIVESHOW enumeration, allowing you to tailor your video environment specifically for live show performances, like concerts or virtual events. The result is smoother playback and better bandwidth efficiency.
When you call setVideoScenario and choose APPLICATION_SCENARIO_LIVESHOW mode, the SDK automatically optimizes rendering and audio/video synchronization. This means you get faster frame rendering right out of the box, with no need to manually enable instant media rendering, so your audience sees smoother, higher-quality video with less waiting before the first frame appears.
Everything is fine-tuned for a vivid, engaging live show experience that saves bandwidth and delivers top-notch quality.
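In code, opting in is a single call. This sketch uses a stub in place of the real engine object; only the scenario constant's name comes from the post:

```cpp
#include <cassert>

// Illustrative scenario enum; the LIVESHOW constant matches the post,
// while EngineStub stands in for the real engine object.
enum VideoApplicationScenario { APPLICATION_SCENARIO_GENERAL, APPLICATION_SCENARIO_LIVESHOW };

struct EngineStub {
    VideoApplicationScenario scenario = APPLICATION_SCENARIO_GENERAL;
    int setVideoScenario(VideoApplicationScenario s) { scenario = s; return 0; } // 0 == success
};

// One call opts the engine into live-show tuning; no separate
// "instant media rendering" switch is needed.
bool liveShowConfigured() {
    EngineStub eng;
    return eng.setVideoScenario(APPLICATION_SCENARIO_LIVESHOW) == 0 &&
           eng.scenario == APPLICATION_SCENARIO_LIVESHOW;
}
```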
The new setLocalRenderTargetFps and setRemoteRenderTargetFps methods allow developers to cap the maximum rendering frame rate on both the local and remote clients.
This is particularly useful for scenarios like screen sharing or online education, where you don’t need a high frame rate and want to reduce CPU usage or support lower-end devices effectively. The SDK tries to match the specified frame rate as closely as possible, letting you balance quality and performance.
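Conceptually, a render-rate cap means frames arriving faster than the target simply don't all render. A toy model of that pacing decision (the helper name is ours, not an SDK API):

```cpp
#include <cassert>
#include <algorithm>

// Returns how many frames per second actually render when `sourceFps`
// frames arrive under a `targetFps` cap. The SDK matches the target
// "as closely as possible"; this models the ideal case.
int renderedFramesPerSecond(int sourceFps, int targetFps) {
    return std::min(sourceFps, targetFps);
}
```

For a 30 fps screen-share source capped at 15 fps, roughly half the frames are rendered, which is where the CPU savings come from.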
To simplify the audience/viewer experience, this release introduces the openWithUrl method to reduce the number of API calls required to receive the video stream. This method allows viewers to access a live stream directly via a URL, bypassing the traditional join-channel flow and the need to subscribe to individual streams.
This improves support for use cases that embed video previews or need to expose a URL for one-click watch links.
Now you can apply custom .cube filter files via setFilterEffectOptions. Whether you're aiming for a brighter feed or a more stylized aesthetic, these new options give you the creative control you need.
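Since a .cube file is a plain-text 3D LUT, validating the path before handing it to setFilterEffectOptions catches an easy mistake. A small helper of our own, not part of the SDK:

```cpp
#include <cassert>
#include <string>

// A .cube file is a plain-text 3D LUT; check the extension before
// passing the path on. (This helper is ours, not an SDK API.)
bool isCubeLut(const std::string& path) {
    const std::string ext = ".cube";
    return path.size() > ext.size() &&
           path.compare(path.size() - ext.size(), ext.size(), ext) == 0;
}
```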
Audio mixing just got simpler with the startLocalAudioMixer and stopLocalAudioMixer methods. These APIs allow developers to combine multiple audio inputs—such as a microphone, media player, or remote streams—into a single unified feed, creating a seamless and consolidated audio experience.
The updateLocalAudioMixerConfiguration method lets developers dynamically adjust mixer settings in real time, adding flexibility for use cases like live streaming, online education, or any situation where local control over composite audio is essential. With these features, managing and merging audio streams has never been easier.
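Under the hood, mixing tracks means summing the samples from each input and clamping to avoid overflow. A toy illustration on 16-bit PCM (all names here are ours, not the SDK's):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy two-track mixer over 16-bit PCM: sum corresponding samples and
// clamp to the int16_t range so loud inputs don't wrap around.
// (Illustrative only; the SDK's mixer is configured, not hand-rolled.)
std::vector<int16_t> mixTracks(const std::vector<int16_t>& a,
                               const std::vector<int16_t>& b) {
    std::vector<int16_t> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        int32_t s = int32_t(a[i]) + int32_t(b[i]);
        if (s > 32767)  s = 32767;   // clamp positive overflow
        if (s < -32768) s = -32768;  // clamp negative overflow
        out[i] = int16_t(s);
    }
    return out;
}
```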
To offer more flexibility in screen capture workflows, setExternalMediaProjection now lets developers provide their own MediaProjection object. This gives developers greater control over how their app captures and processes screen data, making it easier to implement customized capture flows or advanced processing pipelines.
In GPU-based video rendering, an EGL context is like the "workspace" that connects your code to the graphics hardware, ensuring textures and frames render smoothly. In v4.5.0, Agora adds setExternalRemoteEglContext so developers can supply their own EGL context for rendering remote video streams.
This means all video rendering — both remote content and custom processing — can share a single, unified graphics environment, improving performance and reducing complexity behind the scenes.
This update introduces getColorSpace and setColorSpace to VideoFrame, giving developers precise control over the video frame's color-space properties. By default, the SDK uses Full Range and BT.709 standards, but these settings can be customized to fit specific capture or rendering requirements.
With these new methods, developers gain more flexibility to fine-tune video processing workflows, enhancing the quality and adaptability of their applications.
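The defaults described above can be pictured with a simplified sketch; the struct shapes and string values are illustrative, not the SDK's actual ColorSpace type:

```cpp
#include <cassert>
#include <string>

// Simplified stand-ins for the SDK's VideoFrame color-space accessors.
// Defaults mirror the post: Full Range and BT.709.
struct ColorSpaceSketch {
    std::string range  = "full";   // vs. "limited"
    std::string matrix = "bt709";  // vs. "bt601", etc.
};

struct VideoFrameSketch {
    ColorSpaceSketch cs;
    ColorSpaceSketch getColorSpace() const { return cs; }
    void setColorSpace(const ColorSpaceSketch& v) { cs = v; }
};

// Probe used below: the matrix a freshly constructed frame reports.
std::string defaultMatrix() { return VideoFrameSketch{}.getColorSpace().matrix; }
```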
This release isn’t just about adding new features; it also refines the existing ones.
The latest updates improve accuracy in the segmentation between the foreground and background. Body outlines are more accurate, details like fingers are clearer, and the background remains stable with reduced flicker around the edges.
The result is a more polished, professional appearance when using virtual backgrounds.
This release introduces the takeSnapshot and takeSnapshotEx methods to give developers precise control over when to capture frames. By passing in a config parameter, you can choose to capture frames at specific stages, such as before encoding, after encoding, or during other critical points in the video pipeline.
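The stage-selection idea can be sketched as follows; the enum and struct here are simplified stand-ins for the SDK's snapshot config, not its real definition:

```cpp
#include <cassert>

// Simplified stand-ins for the snapshot config: the config selects
// which pipeline stage the frame is captured at.
enum SnapshotStage { BEFORE_ENCODE, AFTER_ENCODE };

struct SnapshotConfigSketch {
    SnapshotStage stage = BEFORE_ENCODE;
};

// A pre-encode snapshot captures the raw frame; a post-encode snapshot
// reflects what viewers actually receive after compression.
bool capturesRawFrame(const SnapshotConfigSketch& c) {
    return c.stage == BEFORE_ENCODE;
}
```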
The new enableAudioProcessing parameter (added to AudioTrackConfig) lets developers toggle 3A audio processing (acoustic echo cancellation, noise suppression, and automatic gain control) for custom audio tracks of type AUDIO_TRACK_DIRECT.
By default, this feature is disabled (the parameter is false). Developers can enable it as needed, or leave it off to implement their own tailored audio processing.
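The default can be modeled with a one-field sketch, simplified from the real AudioTrackConfig:

```cpp
#include <cassert>

// One-field sketch of the new option (simplified from AudioTrackConfig):
// 3A processing stays off for AUDIO_TRACK_DIRECT tracks unless enabled.
struct AudioTrackConfigSketch {
    bool enableAudioProcessing = false; // default: disabled
};

bool processingOnByDefault() {
    return AudioTrackConfigSketch{}.enableAudioProcessing;
}
```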
queryDeviceScore has been updated to provide more accurate device ratings, helping developers optimize performance across various hardware setups.

Agora Video SDK v4.5.0 also resolves several issues to enhance usability and reliability:

- muteRemoteVideoStream
- pauseAudioMixing now works immediately after startAudioMixing.

Agora's v4.5.0 release of their Video SDK brings meaningful advancements in real-time video technology. The new APIs, engine improvements, and bug fixes lay the foundation for developers to build rich real-time video experiences with smooth performance and industry-leading quality.