Introduction
Real time is more about the business than about the technology. In day-to-day life, to make real-time decisions such as buying or investing, businesses need the latest information (e.g., gold rates or stock prices). Unlike in the past, they no longer need to wait weeks or months for market trend information.
Real Time (RT) Application
A real-time application delivers information instantly and continuously. Its effectiveness depends largely on how tolerant it is of missed timing requirements. In a real-time application, the required response times are achieved with a combination of custom hardware and software, possibly with no operating-system layer or a very thin one. Because such applications run continuously, their performance metrics need to be analyzed closely.
Some real-time applications are listed below:
- Videoconference applications
- VoIP (Voice over Internet Protocol)
- Online gaming
- Community storage solutions
- Some e-commerce transactions
- Chatting
- IM (instant messaging)
- Currently active visitors / currently logged-in users
- Notifications when a friend comes online or goes offline
- Notifications when someone likes or comments on a status
- Event notifications sent to subscribed users
- Counts of how many people are actively playing a particular game, watching an online webinar/event, etc.
- Running totals of transactions, purchases, impressions, messages, clicks, and so on
Soft vs Hard
| Features | Soft RT | Hard RT |
| --- | --- | --- |
| On failure/delay | The application can continue, and the user, though unhappy, can still use it. | Must strictly meet real-world timing requirements, because a delayed result could be catastrophic. |
| Performance | Unpredictable performance is tolerated by the user. | Unpredictable performance affects the response-time requirements and impacts the application greatly. |
| Examples | An application taking more than 0.1 seconds to respond to a mouse click. | An application controlling the rudder of an airplane. |
| Deliverable % | At least 99.9% of responses should be delivered within the specified real-time limit. | 99.999% of responses should be delivered within the specified real-time limit. |
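As a rough illustration of the soft real-time case in the table above, the sketch below measures whether a UI handler met a 100 ms soft deadline and merely logs the miss instead of failing. The handler name and the 100 ms budget are illustrative assumptions, not taken from this article.

```typescript
// Minimal sketch of a soft real-time check: a missed deadline is logged,
// not treated as a fatal error. Names and the 100 ms budget are illustrative.
const SOFT_DEADLINE_MS = 100;

function handleClick(doWork: () => void): void {
    const start = Date.now();
    doWork();                                  // the actual UI work
    const elapsed = Date.now() - start;

    if (elapsed > SOFT_DEADLINE_MS) {
        // Soft RT: the application continues; the user just notices the lag.
        console.warn(`Soft deadline missed: ${elapsed} ms (budget ${SOFT_DEADLINE_MS} ms)`);
    }
}

handleClick(() => {
    // Simulate some work.
    for (let i = 0; i < 1e6; i++) { /* busy loop */ }
});
```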
Real-time web application development with SignalR
What is SignalR?
Signals are used for communication (e.g., analog and digital), and the R stands for real-time. The real-time web refers to the ability of server code to push content to connected clients instantly. SignalR, in short, provides real-time application communication.
What does it do?
- Pushes content from the server to the client over an HTTP connection, using remote procedure calls (RPC).
- It is a connection abstraction: it gives the impression of working over a permanently open, persistent connection, but it is NOT reliable messaging.
- It sends each message through the message bus.
- It supports CORS (Cross-Origin Resource Sharing) and JSONP (JSON with Padding), a communication technique used by JavaScript programs running in web browsers to request data from a server in a different domain.
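To make the push model concrete, here is a minimal client-side sketch using the ASP.NET Core SignalR TypeScript client (@microsoft/signalr). The article discusses SignalR in general, so treat the package choice, the /notificationsHub URL, and the method names as illustrative assumptions.

```typescript
import * as signalR from "@microsoft/signalr";

// Build a connection to a hub endpoint (URL is an assumption for illustration).
const connection = new signalR.HubConnectionBuilder()
    .withUrl("/notificationsHub")
    .build();

// The server pushes content by invoking this client method over the open connection (RPC).
connection.on("receiveNotification", (message: string) => {
    console.log(`Pushed from server: ${message}`);
});

async function start(): Promise<void> {
    await connection.start();                         // negotiate and open the connection
    await connection.invoke("Subscribe", "goldRate"); // hypothetical hub method on the server
}

start().catch(err => console.error("Connection failed:", err));
```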
How it is measured and monitored
- Performance counters give the number of events since the last application pool or server restart.
- Available categories include connection metrics, message metrics, message bus metrics, error metrics, and scaleout metrics.
SignalR Overview
Pattern Used:
SignalR is designed in a hub-and-spoke pattern: SignalR sits at the center, with all the connected browsers as spokes that remain changeable, flexible, and contextual to it.
Structure:
It is categorized into four major areas, as presented below.
- Hub
- Persistent Connection
- Transport types
- Browser
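The difference between the Hub and the Persistent Connection layers shows up most clearly in the client API. Below is a sketch of the lower-level persistent-connection usage with the classic SignalR 2.x jQuery client, written as TypeScript; the /echo endpoint is an assumption.

```typescript
// Classic ASP.NET SignalR 2.x jQuery client, sketched in TypeScript.
// A PersistentConnection exposes raw send/receive; a Hub adds named RPC methods on top.
declare const $: any; // jQuery with jquery.signalR loaded on the page

const conn = $.connection("/echo");        // endpoint URL is an illustrative assumption

conn.received((data: string) => {
    console.log("Received:", data);        // raw messages pushed by the server
});

conn.start().done(() => {
    conn.send("hello");                    // raw message to the server
});
```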
The SignalR packages are listed here:
- SignalR - a meta package that brings in SignalR.Server and SignalR.Js
- SignalR.Server - server-side components needed to build SignalR endpoints
Transport Flow of SignalR
| Features | WebSockets | SSE (Server-Sent Events) | Forever Frame | Long Polling |
| --- | --- | --- | --- | --- |
| Server support | Both the server and the browser must support WebSockets. | The server starts transmitting once an initial client connection has been established. | Uses a hidden iframe that receives blocks of content. | The connection is kept open until a response is sent. |
| Real-time connection type | Two-way | One-way: the client receives events from the server. | One-way (server to client; the server continually sends script to the client); client-to-server calls use a separate connection. | Polls the server with a request that stays open until the server responds. |
| Persistent connection | True | False | False | False (uses buffer storage) |
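The client can constrain which of these transports is negotiated. The sketch below uses the @microsoft/signalr TypeScript client, which exposes WebSockets, Server-Sent Events, and Long Polling (Forever Frame belonged to the classic jQuery client and old Internet Explorer); the hub URL is an assumption.

```typescript
import * as signalR from "@microsoft/signalr";

// Restrict the transports the client may use: prefer WebSockets,
// fall back to Server-Sent Events, then Long Polling.
const connection = new signalR.HubConnectionBuilder()
    .withUrl("/notificationsHub", {
        transport:
            signalR.HttpTransportType.WebSockets |
            signalR.HttpTransportType.ServerSentEvents |
            signalR.HttpTransportType.LongPolling
    })
    .build();

connection.start().then(() => console.log("Connected"));
```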
Design Parameters to meet real-time requirements
Problem Statement:
Performance problems can creep into a real-time application as it scales up; in addition, the thread and memory limits of ASP.NET and SignalR have to cope with the mostly short-lived nature of web requests.
A few tunable parameters:
Scaling
The three main factors of scaling are specialization, optimization, and distribution.
- Specialization – Breaking the application into smaller components in order to isolate problems. MVC supports this by splitting the Model, View, and Controller; image servers and streaming servers can also be split out.
- Optimization – Prepares components for distribution by reducing the amount of work needed for a given operation. This translates directly into fewer servers needed to scale to the same number of users.
- Distribution (clustering) – Each server in the cluster sends out information to let the other servers know it is alive. The challenge of effective clustering lies in eliminating affinity/stickiness.
SignalR can be scaled out with the backplane approach, using Redis or SQL Server.
State
The state property of the hub proxy and the Caller property in the hub are used for maintaining state. State can be maintained across multiple connections and method calls, or persisted in a database. Sticky sessions are not supported, as they limit the ability to distribute load.
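As a sketch of the proxy state described above, here is the classic SignalR 2.x jQuery client written as TypeScript; the hub name, the state key, and the server method are assumptions for illustration.

```typescript
// Classic ASP.NET SignalR 2.x: the generated hub proxy carries a `state` bag
// that round-trips with the hub's Clients.Caller on each call. Names are illustrative.
declare const $: any; // jQuery with jquery.signalR and the generated /signalr/hubs proxies loaded

const stock = $.connection.stockHub;       // generated proxy for a hypothetical StockHub
stock.state.portfolioId = "p-42";          // readable on the server as Clients.Caller.portfolioId

$.connection.hub.start().done(() => {
    stock.server.subscribe("GOLD");        // state travels along with this call
});
```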
Cache
With persistent memory objects, such as in-process session and cache objects, memory usage becomes much more problematic while scaling. Each web server gets messages from Redis and stores them in a local cache, and it is from this local cache that SignalR clients (browsers) are served.
- Scaling cache storage:
- Specialization – Partitioning the database into logical pieces; those partitions could be data-centric.
- Clustering – Having multiple databases, each containing a portion of the whole database.
Performance
Response Time = Payload / Bandwidth + Round Trip Time (RTT) × AppTurns / Concurrent Requests + Server Compute Time (Cs) + Client Compute Time (Cc)
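As a worked example with purely illustrative numbers (none taken from this article), the snippet below evaluates the formula for a 100 KB payload on a ~1 MB/s link with a 50 ms round trip, four app turns over two concurrent requests, 40 ms of server compute, and 10 ms of client compute.

```typescript
// Illustrative numbers only; none of these values come from the article.
const payloadKB = 100;
const bandwidthKBps = 1000;      // ~1 MB/s
const rttMs = 50;
const appTurns = 4;
const concurrentRequests = 2;
const serverComputeMs = 40;      // Cs
const clientComputeMs = 10;      // Cc

const transferMs = (payloadKB / bandwidthKBps) * 1000;      // Payload / Bandwidth = 100 ms
const turnsMs = (appTurns / concurrentRequests) * rttMs;    // RTT x AppTurns / concurrency = 100 ms

const responseTimeMs = transferMs + turnsMs + serverComputeMs + clientComputeMs;
console.log(`Estimated response time: ${responseTimeMs} ms`); // 250 ms
```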
Scaling approaches for better performance (Load Balancer vs. Backplane)

| Load Balancer | Backplane |
| --- | --- |
| Client requests can get routed to different servers. | Each application instance sends messages to the backplane. |
| A client that is connected to one server will not receive messages sent from another server. | The backplane forwards messages to the other application instances. |
Backplane Server - Overview
- A Backplane Server is an independent orchestrator of the message interchange between Backplane Clients and may serve multiple independent Buses.
- Each server instance connects to the backplane through the bus. When a message is sent, it goes to the backplane, and the backplane sends it to every server. When a server gets a message from the backplane, it puts the message in its local cache. The server then delivers messages to clients from its local cache.
- The backplane introduces delays in message delivery, which does not work well for low-latency scenarios; where many servers handle clients and latency must remain minimal, this approach might not be suitable.
- The trade-off here is between latency and complexity.
- SignalR supports three backplanes: Azure Service Bus, SQL Server, and Redis.
- Amazon Web Services (AWS):
- Does not support Azure Service Bus
- SQL Server on EC2 needs manual handling, which is counter-intuitive to running on cloud infrastructure
- Supports Redis
- The default message bus of SignalR gets replaced with a bus designed for that backplane.
Conclusion
This article should help you gain an overview of SignalR.