Saturday, February 3, 2018

GStreamer has grown a WebRTC implementation

In other news, GStreamer is now almost buzzword-compliant! The next blog post on our list: blockchains and smart contracts in GStreamer.

Late last year, we at Centricular announced a new implementation of WebRTC in GStreamer.  Today we're happy to announce that after community review, that work has been merged into GStreamer itself! The plugin is called webrtcbin, and the library is, naturally, called gstwebrtc.

The implementation has all the basic features, is transparently compatible with other WebRTC stacks (particularly in browsers), and has been well-tested with both Firefox and Chrome.

Some of the more advanced features such as FEC are already a work in progress, and others will be too—if you want them to be! Hop onto IRC on #gstreamer @ Freenode.net or join the mailing list.

How do I use it?


Currently, the easiest way to use webrtcbin is to build GStreamer using either gst-uninstalled (Linux and macOS) or Cerbero (Windows, iOS, Android). If you're a patient person, you can follow @gstreamer and wait for GStreamer 1.14 to be released, which will include Windows, macOS, iOS, and Android binaries.

The API currently lacks documentation, so the best way to learn it is to dive into the source-tree examples. Help on this will be most appreciated! To see how to use GStreamer to do WebRTC with a browser, check out the bidirectional audio-video demos.

Show me the code!


Here's a quick highlight of the important bits that should get you started if you already know how GStreamer works. This example is in C, but GStreamer also has bindings for Rust, Python, Java, C#, Vala, and so on.

Let's say you want to capture video from V4L2, stream it to a webrtc peer, and receive video back from it. The first step is the streaming pipeline, which will look something like this:

v4l2src ! queue ! vp8enc ! rtpvp8pay !
    application/x-rtp,media=video,encoding-name=VP8,payload=96 ! 
    webrtcbin name=sendrecv

As a short-cut, let's parse the string description to create the pipeline.

GstElement *pipe;

pipe = gst_parse_launch ("v4l2src ! queue ! vp8enc ! rtpvp8pay ! "
    "application/x-rtp,media=video,encoding-name=VP8,payload=96 !"
    " webrtcbin name=sendrecv", NULL);

Next, we get a reference to the webrtcbin element and attach some callbacks to it.

GstElement *webrtc;

webrtc = gst_bin_get_by_name (GST_BIN (pipe), "sendrecv");
g_assert (webrtc != NULL);

/* This is the gstwebrtc entry point where we create the offer.
 * It will be called when the pipeline goes to PLAYING. */
g_signal_connect (webrtc, "on-negotiation-needed",
    G_CALLBACK (on_negotiation_needed), NULL);
/* We will transmit this ICE candidate to the remote using some
 * signalling. Incoming ICE candidates from the remote need to be
 * added by us too. */
g_signal_connect (webrtc, "on-ice-candidate",
    G_CALLBACK (send_ice_candidate_message), NULL);
/* Incoming streams will be exposed via this signal */
g_signal_connect (webrtc, "pad-added",
    G_CALLBACK (on_incoming_stream), pipe);
/* Lifetime is the same as the pipeline itself */
gst_object_unref (webrtc);

When the pipeline goes to PLAYING, the on_negotiation_needed() callback will be called, and we will ask webrtcbin to create an offer which will match the pipeline above.

static void
on_negotiation_needed (GstElement * webrtc, gpointer user_data)
{
  GstPromise *promise;

  promise = gst_promise_new_with_change_func (on_offer_created,
      user_data, NULL);
  g_signal_emit_by_name (webrtc, "create-offer", NULL,
      promise);
}

When webrtcbin has created the offer, it will call on_offer_created()

static void
on_offer_created (GstPromise * promise, GstElement * webrtc)
{
  GstWebRTCSessionDescription *offer = NULL;
  const GstStructure *reply;

  reply = gst_promise_get_reply (promise);
  gst_structure_get (reply, "offer",
      GST_TYPE_WEBRTC_SESSION_DESCRIPTION, 
      &offer, NULL);
  gst_promise_unref (promise);

  /* We can edit this offer before setting and sending */
  g_signal_emit_by_name (webrtc,
      "set-local-description", offer, NULL);

  /* Implement this and send offer to peer using signalling */
  send_sdp_offer (offer);
  gst_webrtc_session_description_free (offer);
}

Similarly, when we have the SDP answer from the remote, we must call "set-remote-description" on webrtcbin.

answer = gst_webrtc_session_description_new (
    GST_WEBRTC_SDP_TYPE_ANSWER, sdp);
g_assert (answer);

/* Set remote description on our pipeline */
g_signal_emit_by_name (webrtc, "set-remote-description",
    answer, NULL);

ICE handling is very similar; when the "on-ice-candidate" signal is emitted, we get a local ICE candidate which we must send to the remote. When we have an ICE candidate from the remote, we must call "add-ice-candidate" on webrtcbin.
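To make this concrete, here is a sketch of the candidate plumbing. The signal names are webrtcbin's, but format_ice_message() and the exact JSON shape are assumptions modelled on the gstwebrtc-demos:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: wrap a local ICE candidate in a small JSON
 * payload for the signalling channel. The caller frees the result. */
static char *
format_ice_message (unsigned int mlineindex, const char *candidate)
{
  size_t len = strlen (candidate) + 64;
  char *msg = malloc (len);

  snprintf (msg, len,
      "{\"ice\": {\"candidate\": \"%s\", \"sdpMLineIndex\": %u}}",
      candidate, mlineindex);
  return msg;
}
```

The "on-ice-candidate" handler would call something like this and push the result over the signalling channel; a candidate arriving from the remote goes the other way with g_signal_emit_by_name (webrtc, "add-ice-candidate", mlineindex, candidate).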

There's just one piece left now; handling incoming streams that are sent by the remote. For that, we have on_incoming_stream() attached to the "pad-added" signal on webrtcbin.

static void
on_incoming_stream (GstElement * webrtc, GstPad * pad,
    GstElement * pipe)
{
  GstElement *play;

  play = gst_parse_bin_from_description (
      "queue ! vp8dec ! videoconvert ! autovideosink",
      TRUE, NULL);
  gst_bin_add (GST_BIN (pipe), play);

  /* Start displaying video */
  gst_element_sync_state_with_parent (play);
  gst_element_link (webrtc, play);
}

That's it! This is what a basic WebRTC workflow looks like. Those of you who have used the PeerConnection API before will be happy to see that this maps to it quite closely.

The aforementioned demos also include a Websocket signalling server and JS browser components, and I will be doing an in-depth application newbie developer's guide at a later time, so you can follow me @nirbheek to hear when it comes out!

Tell me more!


The code is already being used in production in a number of places, such as EasyMile's autonomous vehicles, and we're excited to see where else the community can take it.

If you're wondering why we decided a new implementation was needed, read on! For a more detailed discussion of that, you should watch Matthew Waters' talk from the GStreamer conference last year. It's a great companion for this article!

But before we can dig into details, we need to lay some foundations first.

What is GStreamer, and what is WebRTC?


GStreamer is a cross-platform open-source multimedia framework that is, in my opinion, the easiest and most flexible way to implement any application that needs to play, record, or transform media-like data across an extremely versatile scale of devices and products: embedded (IoT, IVI, phones, TVs, …), desktop (video/music players, video recording, non-linear editing, videoconferencing and VoIP clients, browsers, …), servers (encode/transcode farms, video/voice conferencing servers, …), and more.

But what I like the most about GStreamer is the pipeline-based model, which solves one of the hardest problems in API design: catering to applications of varying complexity, from the simplest one-liners and quick solutions to those that need several hundred thousand lines of code to implement their full featureset.

If you want to learn more about GStreamer, Jan Schmidt's tutorial from Linux.conf.au is a good start.

WebRTC is a set of draft specifications that build upon existing RTP, RTCP, SDP, DTLS, ICE (and many other) real-time communication specifications and define an API for making RTC accessible using browser JS APIs.

People have been doing real-time communication over IP for decades with the previously-listed protocols that WebRTC builds upon. The real innovation of WebRTC was creating a bridge between native applications and webapps by defining a standard, yet flexible, API that browsers can expose to untrusted JavaScript code.

These specifications are constantly being improved upon, which combined with the ubiquitous nature of browsers means WebRTC is fast becoming the standard choice for videoconferencing on all platforms and for most applications.

Everything is great, let's build amazing apps!


Not so fast, there's more to the story! For WebApps, the PeerConnection API is everywhere. There are some browser-specific quirks as usual, and the API itself keeps changing, but the WebRTC JS adapter handles most of that. Overall the WebApp experience is mostly 👍.

Sadly, for native code or applications that need more flexibility than a sandboxed JS app can achieve, there haven't been a lot of great options.

libwebrtc (Chrome's implementation), Janus, Kurento, and OpenWebRTC have traditionally been the main contenders, but after having worked with all of these, we found that each implementation has its own inflexibilities, shortcomings, and constraints.

libwebrtc is still the most mature implementation, but it is also the most difficult to work with. Since it's embedded inside Chrome, it's a moving target, the API can be hard to work with, and the project is quite difficult to build and integrate, all of which are obstacles in the way of native or server app developers trying to quickly prototype and try out things.

It was also not built for multimedia use-cases, so while the webrtc bits are great, the lower layers get in the way of non-browser use-cases and applications. It is quite painful to do anything other than the default "set raw media, transmit" and "receive from remote, get raw media". This means that if you want to use your own filters, or hardware-specific codecs or sinks/sources, you end up having to fork libwebrtc.

In contrast, as shown above, our implementation gives you full control over this as with any other GStreamer pipeline.
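For example, using a hardware codec is just a pipeline change. In this hypothetical variant of the earlier pipeline, omxh264enc stands in for whatever hardware H.264 encoder your platform ships; the element name varies by platform:

v4l2src ! queue ! omxh264enc ! h264parse ! rtph264pay !
    application/x-rtp,media=video,encoding-name=H264,payload=96 !
    webrtcbin name=sendrecv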

OpenWebRTC by Ericsson was the first attempt to rectify this situation, and it was built on top of GStreamer. The target audience was app developers, and it fit the bill quite well as a proof-of-concept—even though it used a custom API and some of the architectural decisions made it quite inflexible for most other use-cases.

However, after an initial flurry of activity around the project, momentum petered out, the project failed to gather a community around itself, and it is now effectively dead.

Full disclosure: we worked with Ericsson to polish some of the rough edges around the project immediately prior to its public release.

WebRTC in GStreamer — webrtcbin and gstwebrtc


Remember how I said the WebRTC standards build upon existing standards and protocols? As it so happens, GStreamer has supported almost all of them for a while now because they were being used for real-time communication, live streaming, and in many other IP-based applications. Indeed, that's partly why Ericsson chose it as the base for OWRTC.

This, combined with the SRTP and DTLS plugins that were written during OWRTC's development, means that our implementation is built upon a solid and well-tested base, and that implementing WebRTC features is not as difficult as one might presume. However, WebRTC is a large collection of standards, and reaching feature-parity with libwebrtc is an ongoing task.

Lucky for us, Matthew made some excellent decisions while architecting the internals of webrtcbin, and we follow the PeerConnection specification quite closely, so almost all the missing features involve writing code that would plug into clearly-defined sockets.

We believe what we've been building here is the most flexible, versatile, and easy to use WebRTC implementation out there, and it can only get better as time goes by. Bringing the power of pipeline-based multimedia manipulation to WebRTC opens new doors for interesting, unique, and highly efficient applications.

To demonstrate this, in the near future we will be publishing articles that dive into how to use the PeerConnection-inspired API exposed by webrtcbin to build various kinds of applications—starting with a CPU-efficient multi-party bidirectional conferencing solution with a mesh topology that can work with any webrtc stack.

Until next time!

47 comments:

Unknown said...

I think there is a typo, "g_assert (webrtc1 != NULL);" should be "g_assert (webrtc != NULL);" in attaching callback example.

BTW, I've tried webrtcbin (with Janus Gateway as signaling server) before it got merged upstream, it works for me, very appreciated. ☺️

IƱaki said...

While these are good news, I don't understand the design of gstwebrtc.

I'm not an expert in gstreamer, but AFAIK gstreamer is already able to send and receive RTP, right? And I'm pretty sure that those RTP handlers do not implement any "createOffer" nor "setRemoteDescription", etc, am I right? If so, why does gstwebrtc implement them?

Honestly, I expected something MUCH more low level, something without SDP involved at all. Instead, pass some RTP parameters (such as in ORTC spec) to the gstwebrtc plugin to indicate it what to send and what to receive.

Good luck with this, but be ready for people complaining about SDP renegotiation, multi-stream (using Chrome's Plan-B versus IETF/W3 Unified-Plan), etc etc etc.

Nirbheek said...

@Chiu
> I think there is a typo, "g_assert (webrtc1 != NULL);" should be "g_assert (webrtc != NULL);" in attaching callback example.

Fixed, thanks!

> BTW, I've tried webrtcbin (with Janus Gateway as signaling server) before it got merged upstream, it works for me, very appreciated. ☺️

Cheers :)

@IƱaki
> Honestly, I expected something MUCH more low level, something without SDP involved at all. Instead, pass some RTP parameters (such as in ORTC spec) to the gstwebrtc plugin to indicate it what to send and what to receive.

You can already do that with GStreamer. As I noted in the blog post, the low-level APIs all already exist. gstwebrtc and webrtcbin are about implementing an API that app developers can use without having to know the internals of how rtp/rtcp/ice/etc work.
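For example (a sketch with placeholder address, port, and payload number), plain RTP with no SDP or ICE involved at all can be sent with something like:

v4l2src ! queue ! vp8enc ! rtpvp8pay !
    application/x-rtp,media=video,encoding-name=VP8,payload=96 !
    udpsink host=127.0.0.1 port=5000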

IƱaki said...

Thanks Nirbheek, clear now.

Unknown said...

Sounds pretty awesome and might be what I need.
Currently I am writing my Bachelor Thesis at a company and researching possibilities to implement a webrtc media server, which can run on windows. It should be able to establish a connection between two clients and record audio/video as well as play audio or video from the server to the clients.
Initially they wanted it to be implemented in c#, but this might be difficult to do, because unfortunately all media servers I found are either implemented in c++ and only support Ubuntu/Mac, or work via NodeJS.
My current POC is using google's webrtc native code, which I compiled into a static library and made a wrapper via c++/cli. But using their API is a real pain.
Do you think gstreamer would be a good solution? And will it also support webrtc datachannels or only audio and video via rtp?

Thank you for sharing it with us.

Nirbheek said...

@Sebastian
> implement a webrtc media server, which can run on windows. It should be able to establish a connection between two clients and record audio/video as well as play audio or video from the server to the clients.

This use-case is one of the primary reasons why we wrote this. The webrtc code hasn't been tested on Windows, but it uses components that are well-tested on Windows, so it should all work fine.

> Initially they wanted it to be implemented in c#

GStreamer does have a C# API, so once it's been updated to add support for the new gstwebrtc and gstpromise APIs you should be able to use this from C#.

If you want to get started quickly, I would recommend prototyping in C to get the hang of gstreamer and the webrtc API, and then building it in C# later. Poke thiblahute or slomo on #gstreamer on FreeNode about the bindings. :)

Datachannel support is not implemented yet, but can be. However for your use-case communication over websockets via a signalling server might work just as well. The gstwebrtc-demos repository has an example for that.

Unknown said...

Sounds awesome. I will definitely try it out.
Thank you very much for your help and quick response.

Anonymous said...

Post the noob guide please

Nirbheek said...

@anonymous
That will probably come out at the same time as the GStreamer 1.14 release so people don't have to build gstreamer to try it out :)

Unknown said...

Hi! Thanks for the great work! However, I have installed gstreamer with the help of gst-uninstalled and now get
gst-inspect-1.0 webrtcbin
No such element or plugin 'webrtcbin'

Nirbheek said...

@Victor
The plugin should be built inside gst-plugins-bad/ext/webrtc/.libs, if it isn't you need to look at config.log to see if all the dependencies were found. Particularly, ensure that libnice and the gstreamer libnice packages are installed.

If you still have problems, find us at #gstreamer at FreeNode. Almost anyone there should be able to help you with build issues. Cheers!

Anonymous said...

Hello,

Can you explain how to install your web server ?
Serving the js/ directory at the root of my website and copying the js and index files isn't enough to run the demo.

I have this error in my console,
webrtc.js:149 WebSocket connection to 'wss://192.168.4.127:8443/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
Unable to connect to server, did you add an exception for the certificate?

Should we start this server ?

gstwebrtc-demos/signalling$ ./simple-server.py


thanks in advance

https://github.com/centricular/gstwebrtc-demos

Nirbheek said...

@anonymous

Yes, you need to start ./simple-server.py and point it to your HTTP server's certificate database, which must be signed by a CA.

If you don't have one of those, you can generate a self-signed certificate with ./generate_cert.sh and then tell your browser to accept it when prompted. On the gst side, you need to find the line "SOUP_SESSION_SSL_STRICT, TRUE," and change it to "SOUP_SESSION_SSL_STRICT, FALSE,".

Anonymous said...

Do you use ./simple-server.py behind https://webrtc.nirbheek.in/ with the index.html and js files?

Where can we download the server to instantiate this page https://webrtc.nirbheek.in/

thanks in advance

Nirbheek said...

@anonymous, yes, webrtc.nirbheek.in is literally just running from a git checkout of gstwebrtc-demos.

Anonymous said...

Really :( it does not work at home

When I check out https://github.com/shanet/WebRTC-Example.git

that example runs fine from git.

I have built your application correctly but it does not work with webrtc.nirbheek.in

./webrtc-sendrecv --peer-id 5981 --gst-debug=*:3

too bad for me :(

Anonymous said...

the signalling server seems to me to be missing from the git repo.

Nirbheek said...

I have no idea if I am talking to the same person or multiple different people. Please use a name ;)

The signalling server is https://github.com/centricular/gstwebrtc-demos/blob/master/signalling/simple-server.py

You do not need it to try out the code, https://webrtc.nirbheek.in has been working fine for everyone, and I re-verified just now.

It's difficult to diagnose on blog comments, please visit #gstreamer on freenode or email me. Cheers!

draden said...

hi Nirbheek,
Thank you.
I want to use the code in the js/ directory. However, locally it does not work. The peer id is not displayed in the view and the websocket cannot connect.

How do I configure the web server to get the same behavior locally as your site https://webrtc.nirbheek.in?

best regards

Draden
draden said...

Hi,

Good news. I have disabled the ssl protocol in the gst app.

session = soup_session_new_with_options (SOUP_SESSION_SSL_STRICT, FALSE,
SOUP_SESSION_SSL_USE_SYSTEM_CA_FILE, FALSE,
//SOUP_SESSION_SSL_CA_FILE, "/etc/ssl/certs/ca-bundle.crt",
SOUP_SESSION_HTTPS_ALIASES, https_aliases, NULL);

By using a mac with chrome browser, it works with the VP8 and H264 codecs on your site. However, I do not hear sounds.

Using my workstation under ubuntu 16.04, I have this error

Status: InvalidStateError: setRemoteDescription needs to called before addIceCandidate

draden said...

hello,

The video freezes frequently. Can you reproduce? I observe this on your site https://webrtc.nirbheek.in/

cheers

draden

Nirbheek said...

@draden, please understand that I cannot treat this comments section as a support system. Video freezing could be due to any number of issues, please either post on the gstreamer mailing list or ask on the IRC channel so other people can also try to help.

Alternatively, if you are able to get to the bottom of any problems and have isolated the cause, you should file a bug so it can be looked at.

Cheers!

fred said...

Hello,

Great job!

Have you tested with Firefox?
Have you tested with a client on another network?

cheers

fred

Nirbheek said...

@fred, yes the code has been tested with both Firefox and Chrome, and across NATs. The example currently uses a STUN server, and can also use a TURN server if needed (f.ex, because of a proxy).

Anonymous said...

Hi Nirbheek,

This looks very interesting! I'm excited to see WebRTC come to gstreamer.

Running webrtc-sendrecv against your server, I consistently receive "ERROR: received SDP without 'type'" (on both Firefox and Chrome). If you have any pointers about how to debug this, it would be much appreciated.

Thanks - Jon

Nirbheek said...

@Jon, sorry that was my fault. I made some changes to the JS bits yesterday and forgot to update the repository on the website. Try again, should work now!

Unknown said...

"The next blog post on our list: blockchains and smart contracts in GStreamer."

Where is it?

Nirbheek said...

@Junjie, that was a joke, sorry!

Unknown said...

Hi, I'm a newbie in gstreamer, can you help me to stream rtsp via webrtc...
My pipe:
/* "rtspsrc location=rtsp://10.8.24.238/Streaming/Channels/101 ! queue ! x264enc ! rtph264pay ! " */

or
"rtspsrc location=rtsp://10.8.24.238/Streaming/Channels/101 ! queue ! vp8enc deadline=1 ! rtpvp8pay ! "

but web (https://webrtc.nirbheek.in/) freezes on status:
Status: Sending SDP answer

thx

Nirbheek said...

@Ivan,

What format is your rtsp server streaming video in? You might need to decode it before you encode it again. Try this:

uridecodebin uri=rtsp://10.8.24.238/Streaming/Channels/101 ! queue ! vp8enc deadline=1 ! rtpvp8pay

Also, this is not the best place to get support, like I said in the blog post, please try the gstreamer-devel mailing list or #gstreamer on freenode. :)

Unknown said...

thanks a lot! your pipe works well!

one more question, and then I'll go to freenode =)

Can I stream via webrtc without decode/encode or only encode without decode?

My rtsp server streams in the following format: H264

all caps: caps = application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264, profile-level-id=(string)420029, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z00AKpWoHgCJ+WEAAAcIAAFfkAQ\=\,aO48gA\=\=", a-recvonly=(string)"", x-dimensions=(string)"1920\,1080", a-Media_header=(string)"MEDIAINFO\=494D4B48010200000400000100000000000000000000000000000000000000000000000000000000\;", a-appversion=(string)1.0, ssrc=(uint)520355123, clock-base=(uint)2270644404, seqnum-base=(uint)1844, npt-start=(guint64)0, play-speed=(double)1, play-scale=(double)1

Nirbheek said...

@Ivan, yes you should be able to directly payload encoded data received from elsewhere.

If it's from a file, you will still need to demux it first. If it's from an RTSP stream you should be able to payload it without decoding and then re-encoding. H264 should work fine with webrtcbin.

Unknown said...

thanks, it works well, we will use it

Phil said...

Dear Nirbheek

I could test sendrecv successfully. Works like a charm :-)

I can see a todo for the JS part of multiparty-sendrecv
I am looking forward to it!

Cheers
Phil

Nirbheek said...

@Phil, glad to hear you got it working! Sorry I missed your earlier question about problems with getting the self-signed certificate accepted by your browser.

Phil said...

Hey Nirbheek,

Can you by chance generate the JS part of the multiparty-sendrecv demo?
That would help me soooo much :-)

Cheers
Philippe

Nirbheek said...

@phil if it were trivial I would've published it already. Just haven't had the time ;)

Unknown said...

"If it's from a file, you will still need to demux it first. If it's from an RTSP stream you should be able to payload it without decoding and then re-encoding. H264 should work fine with webrtcbin."

In my pipeline, the source is from an RTSP server with an H264 stream, and I have to decode it first and then encode it again so that I can view the stream. If I just payload it without decoding and re-encoding, I see nothing.

If the stream is originally encoded with VP8, I can view it without decoding and re-encoding.

Nirbheek said...

@Junjie: Check the h.264 stream properties (profile, bit depth, etc). Your browser may not support it.

See also: https://bugzilla.gnome.org/show_bug.cgi?id=795404#c4

Unknown said...

Nirbheek hi!
Please check your demo with h264 format, pipeline like :
v4l2src ! queue ! vp246enc ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264,payload=96 ! webrtcbin name=sendrecv, it stops working in all browsers for me
thanks

Unknown said...

Thanks for your comments above, but I have another problem with the audio (g711a). In your sendrecv example, I changed opus to pcma and there's no voice in the web (opus is OK).
But in the gst-plugins-bad example, where two local webrtcbins exchange the audio and video, there's voice with both g711a and opus.
But in the gst-plugin-bad example, two local webrtcbin change the audio and video, there's voice both g711a and opus.

the pipeline is :
"audiotestsrc wave=red-noise ! audioconvert ! audioresample ! queue ! alawenc ! rtppcmapay ! "
"queue ! " RTP_CAPS_OPUS "97 ! sendrecv. ",

#define RTP_CAPS_OPUS "application/x-rtp,media=audio,clock-rate=8000,encoding-name=PCMA,payload="

Unknown said...

Works fine (only chrome v66.*) after adding: gst_sdp_media_add_attribute((GstSDPMedia *)&g_array_index(offer->sdp->medias, GstSDPMedia, 0), "fmtp", "96 profile-level-id=42e01f;level-asymmetry-allowed=1;packetization-mode=1");

xiaoyong said...

Hi, Nirhbeek

The sendrecv works fine for me as is. I ran into some issues when I changed the signaling, JS, and gst code so that the browser starts the SDP offer and gst creates the answer. The gstwebrtcbin side can receive the streams and create/display the video/audio sink fine, and I see and hear the output from the browser's streams. However, the browser's ontrack/onaddstream is never called. I can see from chrome://webrtc-internals that there are two active connections and the sent/received byte counts are growing constantly. Do you have some suggestions on how to investigate what the problem might be?

thanks

xiaoyong said...

Hi, Nirbheek

I solved part of the problem by looking at the answer SDP and then at the gst tracing/debug messages and code. The SDP answer I got has recvonly instead of sendrecv, so rtcpeerconnection never creates a stream. The reason for recvonly is that the gst part has video first in the media index. I think there is a bug in createanswer of gstwebrtcbin.c where the format is only matched based on mline; matching using mid would be better in my opinion. I switched the order in the gst pipeline and now the onaddstream callback is fired. The problem is not totally solved yet because the stream I got is of unknown type. I used chrome://webrtc-internals to save a dump of the rtp and tried to see whether I get valid rtp packets back; however the log file is in some kind of binary format. Does anyone know how to parse the dump/log? The google group of webrtc has some similar requests but the suggested tool doesn't work anymore.

thanks

xiaoyong said...

Sorry for spamming the thread here. I got it working now. The change I made is a simple api change in onRemoteStreamAdded: instead of using event.stream.getVideoTracks() and getAudioTracks(), I use event.streams now. It is a MediaStream type and is a collection.
As for the chrome://webrtc-internals dump file, I see there are some tools under google's source code of webrtc under src/rtc_tools/event_log_visualizer. However I haven't had time to build and try them out yet.


thanks

green7 said...

Hello,

first of all - thank you for a working webrtc in something else than a browser.

I have a gstreamer noob question - I need unidirectional recv-only (audio+video) transmission from the browser's webcam to a gst binary. How can I achieve this? And additionally, how do I specify a preferred receiving media type like in sendrecv mode?

thanks.

xiaoyong said...

@green7, based on my understanding, if you don't specify a src for webrtcbin then it will be recvonly. As for the preferred receiving media, I think that depends on the caps/filter you queue after the webrtcbin sink.