Update 20-March-22: The blog has been updated to work with v4.0.0 of the Agora React Native UIKit.
The React Native UIKit makes it easy to build your own video calling app in minutes. You can find out more about it here. In this blog post, we’ll take a look at how we can extend the UIKit and add custom features to it using the example of AI denoising.
You can get the code for the example on GitHub, or you can create your own React Native project. Open a terminal and execute:
npx react-native init demo --template react-native-template-typescript
cd demo
Install the Agora React Native SDKs and UIKit:
npm i react-native-agora agora-react-native-rtm agora-rn-uikit
At the time of writing this post, the latest agora-rn-uikit release is v4.0.0, the latest react-native-agora release is v3.7.0, and the latest agora-react-native-rtm release is v1.5.0.
If you’re building for iOS, you’ll need to run cd ios && pod install to install the native dependencies with CocoaPods. You’ll also need to configure app signing and permissions, which you can do by opening the /ios/.xcworkspace file in Xcode.
That’s the setup. You can now execute npm run android or npm run ios to start the server and see the bare-bones React Native app.
The UIKit gives you access to a high-level component called <AgoraUIKit> that can be used to render a full video call. The UIKit blog has an in-depth discussion of how you can customize the UI and features without writing much code. The <AgoraUIKit> component is built from smaller components that can also be used to build a fully custom experience without worrying about the video call logic.
We’ll clear out the App.tsx file and start fresh. We’ll create a state variable called inCall: when it’s true we’ll render our video call, and when it’s false we’ll render an empty <View> for now:
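Here’s a minimal sketch of what App.tsx could look like at this point. The first <View> is a placeholder that we’ll replace with our video call in the next step:

```tsx
// App.tsx
import React, {useState} from 'react';
import {View} from 'react-native';

const App = () => {
  // inCall controls whether the video call UI is rendered
  const [inCall, setInCall] = useState(true);

  // Placeholder: the first <View> becomes our video call in the next step
  return inCall ? <View /> : <View />;
};

export default App;
```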
To build our video call, we’ll import the PropsContext, RtcConfigure, and GridVideo components from the UIKit. The RtcConfigure component handles the logic of the video call. We’ll wrap it with PropsContext to pass the user props in to the UIKit.
We’ll then render our <GridVideo> component, which displays all the user videos in a grid. (You can use the <PinnedVideo> component instead.) Because we want a button to enable and disable AI denoising, we’ll create a custom component called <Controls>, which we’ll render below our grid:
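Here’s a sketch of the updated App.tsx. It assumes PropsContext, RtcConfigure, and GridVideo are exported from the package root (adjust the import path to match your UIKit version) and that rtcProps and callbacks have the same shape that the <AgoraUIKit> component accepts. The Controls component is built in the next step:

```tsx
// App.tsx
import React, {useState} from 'react';
import {View} from 'react-native';
// Assumption: these components are exported from the package root;
// adjust the import path to match your version of the UIKit
import {PropsContext, RtcConfigure, GridVideo} from 'agora-rn-uikit';
import Controls from './Controls'; // built in the next step

const App = () => {
  const [inCall, setInCall] = useState(true);

  const props = {
    rtcProps: {
      appId: '<your-agora-app-id>', // replace with your Agora App ID
      channel: 'test',
    },
    callbacks: {
      // Leave the call when the end-call button is pressed
      EndCall: () => setInCall(false),
    },
  };

  return inCall ? (
    <PropsContext.Provider value={props}>
      <RtcConfigure>
        <GridVideo />
        <Controls />
      </RtcConfigure>
    </PropsContext.Provider>
  ) : (
    <View />
  );
};

export default App;
```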
We can use the LocalAudioMute, LocalVideoMute, SwitchCamera, and Endcall buttons from the UIKit and render them inside a <View>.
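A sketch of the Controls component, kept in its own file here for readability. The button names follow the ones above; adjust the imports if your UIKit version exports them differently. CustomButton is built in the next step:

```tsx
// Controls.tsx
import React from 'react';
import {StyleSheet, View} from 'react-native';
// Assumption: the control buttons are exported from the package root
import {LocalAudioMute, LocalVideoMute, SwitchCamera, Endcall} from 'agora-rn-uikit';
import CustomButton from './CustomButton'; // built in the next step

const Controls = () => (
  <View style={styles.controls}>
    <LocalAudioMute />
    <LocalVideoMute />
    <SwitchCamera />
    <CustomButton />
    <Endcall />
  </View>
);

const styles = StyleSheet.create({
  controls: {
    flexDirection: 'row',
    justifyContent: 'space-around',
    paddingVertical: 8,
  },
});

export default Controls;
```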
We’ll create a new component called CustomButton, which will contain the code to enable and disable our denoising feature. We can access the RtcEngine instance through the RtcContext, which gives us the engine instance exposed by the Agora SDK that the UIKit uses. We’ll define a state variable, enabled, to toggle the denoising effect. We’ll create a button using <TouchableOpacity> that calls the enableDeepLearningDenoise method on our engine instance based on the state, and we’ll add an image icon to show the status:
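A sketch of CustomButton, assuming RtcContext exposes the engine instance as RtcEngine (check this against your UIKit version) and using a placeholder icon asset; enableDeepLearningDenoise is the react-native-agora v3 method:

```tsx
// CustomButton.tsx
import React, {useContext, useState} from 'react';
import {Image, TouchableOpacity} from 'react-native';
import {RtcContext} from 'agora-rn-uikit'; // adjust to your UIKit version

const CustomButton = () => {
  // The engine instance the UIKit uses under the hood
  const {RtcEngine} = useContext(RtcContext);
  const [enabled, setEnabled] = useState(false);

  const toggleDenoise = async () => {
    // Toggle AI denoising on the engine (react-native-agora v3 API)
    await RtcEngine.enableDeepLearningDenoise(!enabled);
    setEnabled(!enabled);
  };

  return (
    <TouchableOpacity onPress={toggleDenoise}>
      {/* Placeholder asset: swap in your own enabled/disabled icons */}
      <Image
        source={require('./assets/denoise.png')}
        style={{width: 40, height: 40, opacity: enabled ? 1 : 0.4}}
      />
    </TouchableOpacity>
  );
};

export default CustomButton;
```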
That’s all we need to do to add a custom feature. You can even add event listeners in the same fashion to access engine events and perform custom operations.
If there are features you think would be good to add to Agora UIKit for React Native that many users would benefit from, feel free to fork the repository and submit a pull request, or open an issue on the repository with the feature request. All contributions are appreciated!
For more information about building applications using Agora SDKs, take a look at the Agora Video Call Quickstart Guide and Agora API Reference. You can also take a look at the UIKit GitHub Repo, API Reference, and Wiki.
And I invite you to join the Agora Developer Slack community. Feel free to ask any questions about the UIKit in the #react-native-help-me channel.