UDP-like Networking in the Browser

Overview

Browser-based real-time games that use WebSockets are forced to use ordered, reliable messages. This means that if a message has to be retransmitted, the recipient can't process newer messages that have already arrived until the delayed message is received and processed. With UDP-like connections, messages can be skipped when they show up late or don't show up at all. For games, this means the user sees the most up-to-date game state that has been received, regardless of whether some older messages had delivery problems.

To use UDP-like connections in a browser, we can use WebRTC DataChannels with ordered: false and maxRetransmits: 0. This will allow us to skip messages that arrive after newer messages, and keep messages from being needlessly resent when they will likely arrive too late anyway.

WebRTC is intended to be used Peer-to-Peer, connecting users' browsers directly to each other. In order to use it in a Client-Server fashion, we’ll still have to implement the signalling that WebRTC requires for negotiating connections between peers.

We’ll be using Node.js for the server, with the node-webrtc / wrtc package. However, these basic ideas should work with any server side WebRTC implementation.

Browser Support

This approach has been tested in recent versions of Firefox, Chrome, and Safari. Edge does not currently support WebRTC DataChannels, but it seems like "Edge (Chromium)" will.

Signalling

One small downside to this approach is that getting the client connected to the server is not as simple as specifying a URL like it is with WebSockets. We still need a Signalling Server to negotiate the connection, just like any other WebRTC use case, though in the Client-Server case the signalling can be done through HTTP calls or WebSockets to the same server that will hold the WebRTC connection. The signalling is only needed to initiate the connection, and won't be used after that. We'll just use two HTTP endpoints, "Get Offer" and "Send Answer", described below. Normally, requests like these would be proxied between peers, but because our server handles the signalling itself, we'll respond to them directly.

We’re just using the core Node.js http server for signalling, but you could change this to use Express, Hapi, etc.
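
For reference, here's a minimal sketch of how the two routes might be wired up with the core http module. The port, paths, and CORS header are assumptions for this demo layout; getOffer() and sendAnswerGetCandidate() are the handlers implemented in the sections below.

server.js:
const http = require('http')

http.createServer(function(request, response) {
  // Assumed for development setups where the page and the API are served
  // from different ports; adjust or remove as needed.
  response.setHeader('Access-Control-Allow-Origin', '*')

  if (request.method === 'GET' && request.url === '/get-offer') {
    return getOffer(request, response)
  }
  if (request.method === 'POST' && request.url === '/send-answer-get-candidate') {
    return sendAnswerGetCandidate(request, response)
  }

  response.statusCode = 404
  response.end('not found')
}).listen(8080)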

Get Offer

To start connection negotiation, the client only needs to call this endpoint with no body and no prior setup. The server will create a PeerConnection and an associated Offer. Because we want to allow multiple clients, each PeerConnection will have a unique ID. We’ll return both the Offer and ID to the client.

Here is our Client code that calls the endpoint, and receives an Offer with an ID. You can adjust the apiPort and baseUrl if needed.

client.js:
const apiPort = 8080
const baseUrl = `${window.location.protocol}//${window.location.hostname}:${apiPort}/`

function connect() {
  fetch(`${baseUrl}get-offer`)
    .then(function(response) {
      return response.json()
    })
    .then(sendAnswer)
}
connect()
  • We’ll implement sendAnswer() later under the Send Answer section.
  • You can call connect() when responding to a button click event or however else you’d like.
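
If you'd rather not connect on page load, here's a minimal sketch of the button approach, assuming a <button id="connect"> element that isn't part of the demo markup (you'd also remove the connect() call above):

document.getElementById('connect').onclick = connect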

Here is the server handler that will process the request, with comments marking the 4 main steps.

server.js:
const wrtc = require("wrtc")

const clients = {}
let nextClientId = 0

function getOffer(request, response) {
  // 1. Create a PeerConnection specific to this client
  const clientId = nextClientId++
  const peerConnection = new wrtc.RTCPeerConnection()

  const client = { peerConnection, dataChannel: createDataChannel(peerConnection) }
  clients[clientId] = client

  // 2. Use the PeerConnection to create an Offer
  console.log(clientId, 'creating offer')
  peerConnection.createOffer(
    function(offer) {
      console.log(clientId, 'setting offer')
      // 3. Set the Offer on the PeerConnection
      peerConnection.setLocalDescription(
        offer,
        function() {
          console.log(clientId, 'sending offer')
          // 4. Return the ID and Offer to the Client
          response.setHeader('content-type', 'application/json')
          response.end(JSON.stringify({ clientId, sdp: offer.sdp }))
        },
        getErrorHandler(response, 'setting offer')
      )
    },
    getErrorHandler(response, 'creating offer')
  )

  peerConnection.onicecandidate = getIceCandidateHandler(clientId, client)
}

function getErrorHandler(response, failedAction) {
  return function(error) {
    console.error(`error ${failedAction}: `, error)
    response.statusCode = 500
    response.end(`error ${failedAction}`)
  }
}
  • The sdp property of the Offer used in step 4 is all we actually need to return to the Client.
  • Because we create a new PeerConnection and Client ID for every Offer, we can have multiple Clients connected at the same time to a single Server.
  • We'll implement createDataChannel() and getIceCandidateHandler() in later sections.
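
For reference, the JSON body returned to the Client looks something like this (illustrative only; a real sdp value is many lines long):

{ "clientId": 0, "sdp": "v=0\r\n..." }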

Send Answer

Now that the Client has an Offer from the server, we create a Client PeerConnection, set the Offer, create an Answer, and send the Answer to the Server.

client.js:
let peerConnection

function sendAnswer(offer) {
  // 1. Create the client side PeerConnection
  peerConnection = new RTCPeerConnection()
  const clientId = offer.clientId

  // 2. Set the offer on the PeerConnection
  peerConnection.setRemoteDescription(
    { type: 'offer', sdp: offer.sdp }
  ).then(function() {
    // 3. Create an answer to send to the Server
    peerConnection.createAnswer().then(function(answer) {
      // 4. Set the answer on the PeerConnection
      peerConnection.setLocalDescription(answer).then(function() {
        // 5. Send the answer to the server
        fetch(`${baseUrl}send-answer-get-candidate`, {
          method: 'POST',
          body: JSON.stringify({clientId, sdp: answer.sdp})
        })
          .then(function(response) {
            return response.json()
          })
          .then(addIceCandidate)
      })
    })
  })

  setupDataChannel()
}
  • We return the Client ID with our Answer so the Server knows which PeerConnection to use.
  • At this point, the client is generating ICE candidates, but we’ll talk about that and implement addIceCandidate() in the next section and setupDataChannel() after that.

Here is the server handler for processing the client’s answer.

server.js:
function sendAnswerGetCandidate(request, response) {
  // This starts with boilerplate to read the body from the request
  let body = ''
  request.on('readable', function() {
    const next = request.read()
    if (next) return body += next

    const answer = JSON.parse(body)

    // 1. Get the PeerConnection we started with
    const client = clients[answer.clientId]
    const peerConnection = client.peerConnection
    // 2. Set the Answer on the PeerConnection
    console.log(answer.clientId, 'setting answer')
    peerConnection.setRemoteDescription(
      { type: 'answer', sdp: answer.sdp },
      function() {

        // 3. If there is already an ICE Candidate ready, send it
        if (client.iceCandidate) {
          response.end(JSON.stringify(client.iceCandidate))
          delete client.iceCandidate
          return
        }

        // 4. Otherwise, save the response for sending the ICE Candidate later
        console.log(answer.clientId, 'saving response')
        client.iceCandidateResponse = response
      },
      getErrorHandler(response, 'setting answer')
    )
  })
}
  • Now the server is also generating ICE Candidates, and we send one to the client right away if it's ready. If one is not ready yet, we'll store the response and send the ICE Candidate from the handler in the next section.
  • The Offer/Answer part of the connection is complete, but we still need to pass at least one ICE Candidate before the Client and Server are directly connected.

ICE Candidates

Normally with Peer-to-Peer WebRTC, both peers will send multiple ICE Candidates to the other peer through the signalling server. These are used to attempt connections between the peers until one of them works.

For Client-Server, we’re assuming the Server has a public address that the Client can directly connect to. With that in mind, we will be ignoring the Client’s ICE Candidates and only sending one of the Server’s to the Client.

First, we’ll implement addIceCandidate() in the client to pass the ICE Candidate to our Client PeerConnection. We are already passing this as a callback in sendAnswer() in the Client code.

client.js:
function addIceCandidate(candidate) {
  // 1. This checks for the server indicating it could not provide any
  //    ICE Candidates.
  if (candidate.candidate === '') {
    return console.error('the server had no ICE Candidates')
  }

  // 2. Pass the ICE Candidate to the Client PeerConnection
  peerConnection.addIceCandidate(candidate)
}

Next, we'll implement getIceCandidateHandler() which is already being called at the end of getOffer() on the Server. It will provide a callback that is called every time an ICE Candidate is ready.

server.js:
function getIceCandidateHandler(clientId, client) {
  return function(event) {
    const candidate = event.candidate

    // 1. Do nothing if a candidate is already saved, or this event has no candidate
    if (client.iceCandidate || !candidate) {
      return
    }

    // 2. Skip candidates with certain addresses.  If your server is public, you
    //    would want to skip private addresses, so you could add 192.168., etc.
    if (candidate.address.startsWith('10.')) {
      return
    }

    // 3. Skip candidates that aren't udp.  We only want unreliable, 
    //    unordered connections.
    if (candidate.protocol !== 'udp') {
      return
    }

    // 4. If the client is waiting for a response, send the ICE Candidate now
    if (client.iceCandidateResponse) {
      console.log(clientId, 'sending ICE candidate')
      client.iceCandidateResponse.end(JSON.stringify(candidate))
      delete client.iceCandidateResponse
      return
    }

    // 5. Otherwise, save it for when they are ready for a response
    console.log(clientId, 'saving ICE candidate')
    client.iceCandidate = candidate
  }
}
  • Normally you would want to send all of the generated ICE Candidates to the other peer, but because we're set up in a Client-Server way, just one should be enough. In a production application it may still be best to send all of them anyway.
  • For step 2, you could instead filter the candidates by a specific public address you know your server has (see the sketch below).
  • I haven’t tested whether the ordered and maxRetransmits settings will work with a tcp connection, but it seems like udp would be required, and that’s what step 3 is for.
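
For example, here's what that alternative check for step 2 might look like, using 203.0.113.5 as a placeholder for your server's public address:

    // 2 (alternative). Only accept candidates for the public address we know
    //    the server has. 203.0.113.5 is just a placeholder.
    if (candidate.address !== '203.0.113.5') {
      return
    }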

At this point, our Client and Server should be directly connected to each other and will no longer need the signalling server!
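
If you want to confirm this on the Client, you can watch the connection state. Here's a minimal sketch that could be added to sendAnswer() after creating the Client PeerConnection:

peerConnection.oniceconnectionstatechange = function() {
  // Should eventually log 'connected' (or 'completed') once the direct
  // connection to the Server is established.
  console.log('ICE connection state:', peerConnection.iceConnectionState)
}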

DataChannel

Setting up the DataChannel for sending messages between the Client and Server is the same as any other WebRTC setup. Here the server will create the DataChannel and send 50 messages per second to the Client.

We’ll start with the Client reacting to the creation of the DataChannel and to messages received through it. This is the setupDataChannel() function we already call at the end of sendAnswer(), where we created our Client PeerConnection.

Most of the code in the ondatachannel callback is specific to our demo app, but you can see how to detect messages as lost or late.

client.js:
function setupDataChannel() {
  let messagesOk = 0
  let messagesLost = 0
  let messagesLate = 0
  let messagesOkElement = document.getElementById('ok')
  let messagesLostElement = document.getElementById('lost')
  let messagesLateElement = document.getElementById('late')
  peerConnection.ondatachannel = function (event) {
    const dataChannel = event.channel

    let lastMessageId = 0
    let firstMessage = true
    dataChannel.onmessage = function(event) {
      // Ideally this wouldn't be a string, but that's out of scope here.
      const messageId = parseInt(event.data.split("\n")[0], 10)

      if (messageId <= lastMessageId) {
        // This message is old. We can either skip it, or handle it
        // differently knowing it is old.
        if (messageId < lastMessageId) {
          messagesLost--
          messagesLate++
        }
      } else {
        messagesOk++
      }

      if (messageId > lastMessageId + 1) {
        if (firstMessage) {
          firstMessage = false
        } else {
          // Some messages before this one were lost or may show up late. 
          // If this happens a lot we may want to alert the user that the
          // connection seems unstable.
          messagesLost += messageId - lastMessageId - 1
        }
      }
      lastMessageId = messageId

      messagesOkElement.innerText = messagesOk
      messagesLostElement.innerText = messagesLost
      messagesLateElement.innerText = messagesLate
    }
  }
}
  • Here you can see the benefit we’d never get with WebSockets: we can go ahead and process every message as soon as it's received, even if previous messages are delayed or lost.

Finally, we’ll have the server create the DataChannel and send the messages. Part of this is implementing createDataChannel() called from getOffer() in the first section.

server.js:
function createDataChannel(peerConnection) {
  return peerConnection.createDataChannel('hello', {
    ordered: false,
    maxRetransmits: 0
  })
}

// Build a random message with a set size. You can adjust the size
// to simulate different applications.
let message = "\n"
while (message.length < 1000) {
  message += String.fromCharCode(Math.round(Math.random()*256))
}

// DataChannel Loop
let messageId = 1
setInterval(function() {
  for (const client of Object.values(clients)) {
    if (client.dataChannel.readyState === 'open') {
      client.dataChannel.send(`${messageId}${message}`)
    }
  }

  messageId++
}, 20)
  • Here we see the configuration mentioned in the overview that prevents us from having to wait on delayed or lost messages.
  • Our loop that attempts to send the messages starts as soon as the Server starts, but it won't actually try to send anything until the first client connects. If a client disconnects, its readyState is no longer open, and we'll stop sending it messages.

All Sorted Out

Now we have a server that can connect to multiple browsers and send a high rate of messages without one or more messages delaying the others. The primary use case for this is multiplayer game updates, but it should work for any high-rate data that doesn't rely on message ordering or guaranteed delivery.

If you'd like to see and/or clone a complete implementation of this demo, you can find it here on GitLab:
https://gitlab.com/MarkSort/udp-like-browser-networking

This is just the groundwork, and you'd want to look into improvements beyond it. Here are a few ideas for what might be next:

  • On the Server, clean up disconnected clients (see the sketch after this list)
  • On the Client, detect disconnects and then reconnect
  • Allow multiple ICE Candidates and/or use the Client's ICE Candidates
  • Binary message contents
  • Different language for the server
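
As a starting point for the first idea, here's a minimal sketch of Server-side cleanup based on watching the ICE connection state. This is an assumption on my part, not part of the demo; it would go inside getOffer(), where clientId and peerConnection are in scope:

  peerConnection.oniceconnectionstatechange = function() {
    const state = peerConnection.iceConnectionState
    // 'disconnected' can be temporary, so a real implementation might want
    // to wait a bit before giving up on the client.
    if (state === 'failed' || state === 'closed' || state === 'disconnected') {
      console.log(clientId, 'cleaning up client, ICE connection state:', state)
      delete clients[clientId]
    }
  }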