I am trying to build a remote-assistance app on Android with Google ARCore and OpenVidu (for video calling), so that users can communicate with experts through OpenVidu video calls.
Now I am trying to replace the frames from the device camera (which are normally used in an AR session) with frames from a camera streamed via WebRTC. To render the WebRTC stream I am using org.webrtc.SurfaceViewRenderer, and to render the AR session I am using android.opengl.GLSurfaceView in activity_main.xml:
<org.webrtc.SurfaceViewRenderer
    android:id="@+id/local_gl_surface_view"
    android:layout_width="match_parent"
    android:layout_height="248dp"
    android:layout_gravity="bottom|end" />

<android.opengl.GLSurfaceView
    android:id="@+id/opengl_surfaceview"
    android:layout_width="match_parent"
    android:layout_height="195dp"
    android:layout_gravity="top" />
These two views work as they are supposed to separately, but now I want to combine them. The problem is that I don't know how to extract the frames that the AR session renders so that I can feed them into the WebRTC stream.
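To show what I mean, my rough (untested) idea is to read back the rendered pixels with glReadPixels at the end of the GLSurfaceView renderer's onDrawFrame; the buffer handling here is just my assumption:

```java
// Sketch (my assumption): read back the ARCore frame that was just
// rendered, at the end of GLSurfaceView.Renderer#onDrawFrame().
@Override
public void onDrawFrame(GL10 gl) {
    // ... ARCore rendering happens here ...

    ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4);
    buffer.order(ByteOrder.nativeOrder());
    // Read the RGBA pixels of the frame just drawn to the surface.
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);

    // TODO: convert RGBA -> I420 and hand the frame over to WebRTC.
}
```

But I don't know if this is the right approach, or how to get such a buffer into OpenVidu's pipeline.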
In short, the goal is to use ARCore and WebRTC at the same time, sharing what you see in the ARCore session over WebRTC so that the remote user sees it too.
In OpenVidu, when I create an instance of LocalParticipant, I always have to supply an org.webrtc.SurfaceViewRenderer, but I think I would need to supply the android.opengl.GLSurfaceView instead:
public LocalParticipant(String participantName, Session session, Context context, SurfaceViewRenderer localVideoView)
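The only other idea I have (again just a sketch, and the name ArCoreCapturer is mine) is to implement org.webrtc.VideoCapturer myself and push ARCore frames into its CapturerObserver instead of using the device camera:

```java
// Sketch (my assumption): a custom capturer that lets me push
// ARCore-rendered frames into WebRTC instead of camera frames.
public class ArCoreCapturer implements VideoCapturer {
    private CapturerObserver observer;

    @Override
    public void initialize(SurfaceTextureHelper helper, Context context,
                           CapturerObserver observer) {
        this.observer = observer;
    }

    // I would call this from my GL thread for each rendered ARCore frame.
    public void pushFrame(VideoFrame.I420Buffer i420, long timestampNs) {
        VideoFrame frame = new VideoFrame(i420, 0 /* rotation */, timestampNs);
        observer.onFrameCaptured(frame);
        frame.release();
    }

    @Override public void startCapture(int width, int height, int fps) {}
    @Override public void stopCapture() throws InterruptedException {}
    @Override public void changeCaptureFormat(int width, int height, int fps) {}
    @Override public void dispose() {}
    @Override public boolean isScreencast() { return false; }
}
```

But I don't see how to plug such a capturer into OpenVidu, since LocalParticipant only accepts a SurfaceViewRenderer.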
So how can I share the ARCore session over WebRTC through OpenVidu? Please help me.
I apologize for my bad English.
Thanks in advance.