As we presented our new product streamdata.io at DevoxxFr, we were often asked why we chose SSE (Server-Sent Events) over WebSockets as our push protocol. This post may help you understand our choice and test what best suits your needs. We’ll start with a short description of the two protocols.
SSE: Server-Sent Events
1 – The problem
At the beginning of the development of the Proxy, we chose SSE to communicate with the client, for several reasons:
– At first glance, SSE is more suitable: once the connection is established, the client does not need to send data to the server, and a bi-directional link is not useful in our case.
– It is easier to implement because, unlike WebSockets, SSE does not require defining a message exchange protocol (onOpen, onMessage, …).
– Because SSE is based on HTTP, it is more compatible with the various elements of existing IT infrastructure (load balancers, firewalls, …).
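For reference, an SSE stream is just text sent over a kept-open HTTP response: each event is a block of `data:` lines terminated by a blank line. A minimal sketch in Python (the helper name is ours, purely illustrative):

```python
def format_sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Event in the text/event-stream wire format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    # A multi-line payload becomes one "data:" line per line of payload.
    for chunk in data.splitlines():
        lines.append(f"data: {chunk}")
    # A blank line terminates the event.
    return "\n".join(lines) + "\n\n"

print(format_sse_event('{"price": 42}', event="patch", event_id="1"))
```

The server simply writes such blocks on the open response; the browser-side `EventSource` API parses them and fires events.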
However, during the development, we encountered several problems related to the use of SSE:
– Unable to transmit additional request headers
In order to route the requests to the Information System, the proxy must be able to forward the headers of the requests (port number, OAuth authentication, …). In addition, to secure the transactions going through the proxy, we will set up an authentication mechanism based on security tokens that will also be routed to the Proxy. However, the JavaScript EventSource API used to open an SSE connection does not allow setting custom headers.
– Unable to detect the disconnection of a client before trying to send data
When a client is connected to the server via SSE, the server is not notified if the client disconnects. The disconnection is detected by the server only when it tries to send data to the client and gets an error reporting that the connection was lost.
This is a problem in our case because the server polls the information system at regular intervals, and we want to stop this polling as early as possible to avoid inducing unnecessary load on the IS when no client is connected. If the data source is not very dynamic, it may take several polls before the data changes and the server tries to send a diff to the client.
We therefore considered several solutions, which are presented next.
2 – Considered options
SSE with Polyfill
One solution is to override the native EventSource implementation with a Polyfill.
Today, there are several open-source Polyfills that provide fallback mechanisms to support SSE on all browsers, even those that do not provide a native implementation. One of the most widely used of these Polyfills redefines the SSE implementation even for browsers that support it natively, and replaces it with long polling.
We do not want such an implementation because with a source whose data changes very frequently, long polling is less efficient than the standard polling strategy.
If we go for this solution, we will therefore write our own Polyfill that uses the native implementation when it is available, and falls back to other mechanisms when it is not.
– Rather simple to implement: we can rely on existing Polyfills to implement the logic that suits us.
– A Polyfill can be hard to maintain across all browsers and browser versions.
SSE with query parameters
Another option is to pass the parameters that cannot be sent as headers directly in the SSE URL, as query parameters.
– Very simple to implement
– Adding query parameters increases the size of URLs, which cannot grow indefinitely. There is no universal maximum size, but 2048 characters is a commonly accepted practical limit.
– Today we consider that this limit is not blocking. However, it will be noted in our documentation as a known limitation.
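In practice, the client encodes what would otherwise have been headers into the SSE URL. A sketch of that idea, with a guard for the 2048-character limit mentioned above (function and parameter names are hypothetical):

```python
from urllib.parse import urlencode

MAX_URL_LENGTH = 2048  # pragmatic limit discussed above, not a standard

def build_sse_url(base_url, params):
    """Append former header values to the SSE URL as query parameters."""
    url = f"{base_url}?{urlencode(params)}"
    if len(url) > MAX_URL_LENGTH:
        raise ValueError(f"URL too long ({len(url)} chars)")
    return url

url = build_sse_url(
    "https://proxy.example.com/stream",
    {"X-Sd-Token": "my-app-token", "Authorization": "Bearer abc123"},
)
print(url)
```

The client then opens its EventSource on the resulting URL, and the proxy reads the parameters back from the query string.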
SSE with OPTIONS request
To avoid passing parameters with each request, another option is for the client to send an OPTIONS request when first opening the connection, containing all the settings for subsequent requests. The proxy would then record these parameters and automatically add them to all requests during the session.
– Very simple to implement.
– Removes the constraint on URL length, as parameters are passed only once.
– It introduces stateful sessions server-side, which can cause problems when we need to replicate the Proxy installation on many servers to guarantee continuity of service and ensure scalability.
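The stateful-session drawback can be made concrete: the proxy would have to keep a per-instance session store like the sketch below (all names hypothetical), and that in-memory state is precisely what breaks when subsequent requests land on a different replica:

```python
import uuid

# Per-instance session store: this is exactly the state that would have to be
# shared or replicated across proxy instances.
SESSIONS = {}

def handle_options(headers):
    """Record the parameters sent once in the initial OPTIONS request."""
    session_id = str(uuid.uuid4())
    SESSIONS[session_id] = dict(headers)
    return session_id

def handle_sse_request(session_id):
    """Re-attach the recorded parameters to a subsequent request."""
    if session_id not in SESSIONS:
        # On another proxy instance this lookup fails: the session lives
        # only in the memory of the instance that served the OPTIONS call.
        raise KeyError("unknown session - state was not replicated")
    return SESSIONS[session_id]

sid = handle_options({"Authorization": "Bearer abc123"})
print(handle_sse_request(sid))
```

Avoiding this requires either sticky sessions or an external shared store, both of which add operational complexity.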
Add a WebSocket proxy between streamdata.io Proxy and the client
The last option we considered was adding a WebSocket proxy in front of the streamdata.io Proxy. The client communicates with this proxy over WebSocket, and the WebSocket proxy communicates with the streamdata.io Proxy over SSE. Since the WebSocket proxy uses the Java SSE library, it can add headers to the requests.
– Once implemented, this proxy will be very simple to maintain.
– WebSockets are more widely supported by web browsers than SSE.
– The fallback mechanisms for browsers that do not support WebSockets are very easy to set up with SockJS.
– The WebSocket proxy can detect client disconnections.
– Complicated to implement.
– Induces considerable complexity at the IT infrastructure level.
– Requires defining a protocol for WebSocket communication between the client and the WebSocket proxy.
– A WebSocket connection does not allow setting up a sticky-URL load-balancing strategy between the client and the WebSocket proxy.
– WebSockets are not always supported by firewalls, proxies, or load balancers, which induces constraints on the IT infrastructure.
How to address the disconnection detection issue?
To address this issue, we can set up a heartbeat mechanism: the server sends a small message at regular intervals. With SSE, sending a neutral message (such as a comment line) induces neither network overload nor additional processing on the client side, and lets the server detect a disconnection without waiting for actual data to be sent.
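A sketch of such a heartbeat loop, relying on the SSE convention that a line starting with ':' is a comment ignored by clients (the raw socket write below stands in for the real SSE connection, and all names are ours):

```python
import socket
import time

HEARTBEAT = b":\n\n"  # SSE comment line: ignored by the client

def stream_with_heartbeat(conn, get_next_diff, interval=5.0):
    """Send diffs when available, otherwise a heartbeat, until a write fails."""
    try:
        while True:
            diff = get_next_diff()
            payload = f"data: {diff}\n\n".encode() if diff else HEARTBEAT
            conn.sendall(payload)  # a dead client surfaces as an error here
            time.sleep(interval)
    except (BrokenPipeError, ConnectionResetError):
        return "client disconnected - stop polling the API"

# Demo: the "client" closes its end, and the next heartbeat reveals it.
server_side, client_side = socket.socketpair()
client_side.close()
print(stream_with_heartbeat(server_side, lambda: None, interval=0))
```

As soon as a heartbeat write fails, the server can stop polling the information system for that client.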
3 – Adopted solution
First, we implemented the solution using query parameters. Its limitations are non-blocking for our beta version and will simply be explained clearly in the documentation.
The disconnection detection issue is addressed with the heartbeat option.
Are you ready to navigate the new streaming API landscape?