Up until the advent of Software as a Service (SaaS), almost every business-critical application ran inside an enterprise’s own data center. The company had complete control over the performance of the application and could use technologies such as MPLS and techniques like WAN optimization to ensure that users across the enterprise always had a good experience with the application.
That’s no longer the case now that SaaS has become the de facto delivery model for core business applications. In effect, the cloud is the new data center, and the internet is the new LAN. The most business-critical network between the end user and the application is no longer the corporate LAN but the public internet, which is itself a vast collection of networks. When the internet sits between the end user and the SaaS application, the company depending on that application can no longer count on good performance, reliability, or control.
The fact is, the internet was not designed for high performance. One factor that directly affects performance is the use of Border Gateway Protocol (BGP) as the internet’s core routing protocol. BGP doesn’t respond particularly well to outages, and it essentially doesn’t respond at all to congestion. As internet backbone providers hand traffic off to one another, the routes they choose have no provision to self-heal when particular paths become congested. For example, internet provider A will just keep pouring traffic down a congested route to internet provider B. It’s a fairly common problem.
Another factor is that traffic is routed over the internet based not on what will get it to its destination fastest, but on what will cost the least. Internet provider A will push traffic to whichever peer offers it the lowest cost. Neither the SaaS provider nor the application’s end users have any control over this; both are at the mercy of the contracts the network providers make with each other. Sometimes the routing results in good performance, sometimes adequate performance, and sometimes downright miserable performance. But hey, the internet doesn’t care.
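The two factors above can be sketched in a few lines of Python. This is a deliberately simplified model of BGP-style best-path selection, not a real BGP implementation: the route attributes and values are invented, but they show how a decision driven by local preference (often set to favor the cheapest peer) and AS-path length never consults measured congestion at all.

```python
def best_path(routes):
    """Pick a route the way BGP roughly does: highest local preference
    first (commonly tuned to prefer the cheapest peer), then shortest
    AS path. Note that the 'congestion' field never enters the decision."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# Illustrative routes: provider-B is cheap (high local_pref) but congested.
routes = [
    {"peer": "provider-B", "local_pref": 200,
     "as_path": [64500, 64510], "congestion": 0.9},
    {"peer": "provider-C", "local_pref": 100,
     "as_path": [64501], "congestion": 0.1},
]

# The congested-but-cheap path through provider-B still wins.
print(best_path(routes)["peer"])  # -> provider-B
```

The point of the sketch is the shape of the decision, not the numbers: as long as the cheap peer carries the higher preference, traffic keeps flowing to it regardless of how congested it is.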
Building control into SaaS delivery
Teridion approaches this problem in a manner that gives SaaS providers more control over how their specific traffic is delivered to and from their customers. Teridion has formed a series of partnerships with more than 20 public cloud providers around the world, including Google Cloud, Amazon Web Services, Alibaba Cloud, CenturyLink, Digital Ocean, and others. Teridion deploys thousands of sensors, or measurement agents, into these providers’ network fabrics. These agents collect data in real time about the performance of the various routes that the providers have available to them.
All that data feeds into an orchestrator called Teridion Management System (TMS). The orchestrator makes decisions about how to route traffic most efficiently between a particular SaaS provider and its customers. TMS then spins up Teridion Cloud Routers, which are essentially virtualized routing engines deployed across the fabric of those public cloud providers. Teridion leverages that routing infrastructure to establish the fastest path, at a given time, between a user and a SaaS provider. For example, Box is a Teridion customer with virtual cloud routers all across the globe that it uses to enhance data upload performance.
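Conceptually, the route-selection step reduces to choosing the candidate path with the best recent measurements. The following is a hypothetical illustration of that idea, not Teridion’s actual logic; the path names and latency samples are invented.

```python
from statistics import mean

def fastest_path(measurements):
    """measurements maps each candidate path to its recent latency
    samples in ms (as reported by measurement agents); return the
    path with the lowest average latency right now."""
    return min(measurements, key=lambda path: mean(measurements[path]))

# Invented candidate paths between a user region and a SaaS origin.
samples = {
    "gcp-frankfurt -> aws-virginia": [82, 85, 90],
    "alibaba-singapore -> aws-virginia": [210, 190, 205],
    "gcp-frankfurt -> digitalocean-nyc -> aws-virginia": [70, 74, 69],
}

print(fastest_path(samples))
```

Because the agents keep feeding fresh samples, re-running the selection periodically is what lets the "fastest path" change over time, including hopping through an intermediate cloud provider when that beats the direct route.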
Machine learning capabilities in TMS enable route adjustments in real time. If Teridion determines that a better route exists in a particular area, it can shift that traffic, even if doing so requires automatically spinning up another cloud router, perhaps in a different cloud provider, to get the best throughput, the best latency, and the tightest control over packet loss between user and provider. This also enables close to infinite scalability: Teridion is constrained only by the number of routes and virtual PoPs that the public cloud providers themselves can support.
Not only can Teridion automatically spin up additional routers to reroute traffic when necessary, but it can also elastically scale the bandwidth available for a particular customer on the fly. So, when traffic goes up, Teridion instantiates more cloud routers; when traffic goes down, it discards those router instances until they are needed again. It’s an elegant way of handling capacity demands.
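The elastic-scaling behavior described above amounts to keeping the router count proportional to offered traffic. Here is a minimal autoscaling sketch under assumed numbers; the per-router capacity figure and the policy are invented for illustration, not Teridion’s published mechanics.

```python
import math

# Assumed throughput that one virtual cloud router can handle (invented).
ROUTER_CAPACITY_MBPS = 1000

def desired_routers(traffic_mbps, minimum=1):
    """Scale the cloud-router count with offered traffic, never
    dropping below a small floor so the path stays available."""
    return max(minimum, math.ceil(traffic_mbps / ROUTER_CAPACITY_MBPS))

print(desired_routers(250))   # quiet period: the floor of 1 router
print(desired_routers(4500))  # traffic spike: scales up to 5 routers
```

A real controller would add hysteresis so it doesn’t thrash on brief spikes, but the core idea is the same: capacity follows demand in both directions.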
Deployment is easy for Teridion’s SaaS customers
Teridion’s customers are SaaS providers, and some big names are among them, including Salesforce, Box, Egnyte, Merrill Corporation, Trax, Thru, and Check Point Software. Teridion claims it can enhance performance by 8x to 20x for these types of SaaS providers.
In trying to solve this performance problem themselves, some of Teridion’s customers had deployed their own regional PoPs, which was cumbersome. By adopting the Teridion solution, they were able to retire the regional PoP networks they already had in place in favor of Teridion’s network fabric. Egnyte, for example, had built out its own PoPs because of the vast amount of bi-directional traffic it supported on behalf of its customers. Teridion allowed Egnyte to dismantle that PoP network, saving money and gaining better performance overall.
No hardware or software is required for a SaaS provider to deploy the Teridion solution, and no application code changes are needed. Deployment primarily involves making a DNS CNAME change to redirect traffic to the nearest Teridion edge node, and Teridion takes it from there. Everything is completely transparent to end users of the SaaS services.
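The CNAME change described above can be pictured as a one-line edit in the SaaS provider’s DNS zone. The hostnames below are invented for illustration; they are not real Teridion or customer endpoints.

```text
; Illustrative zone-file fragment -- hostnames are invented.
; Before: the app hostname resolves directly to the SaaS origin.
;   app.example-saas.com.  300  IN  A      203.0.113.10
; After: a CNAME hands resolution off to the acceleration network's
; nearest edge node, which forwards traffic over the optimized fabric.
app.example-saas.com.    300  IN  CNAME  edge.example-accelerator.net.
```

End users keep typing the same hostname; the redirection happens entirely at the DNS layer, which is why no application code changes are required.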
There is an inherent measure of security in this solution. The traffic entering Teridion’s network is already encrypted, so there is no worry about privacy or authentication issues related to data security. For every customer, Teridion spins up a unique set of virtual routers; it is a single-tenant solution by design, not by lack of architectural forethought. Each customer gets its own network, which provides inherent protection from distributed denial-of-service attacks. And because Teridion knows what application the SaaS provider is delivering, it can apply very stringent filtering on what is allowed to pass through the network.
Some SaaS providers are offering the Teridion performance enhancement as a premium capability, so they can monetize it themselves. Others simply include it as a standard part of their solution to win over users with speedy performance and to reduce customer churn. It seems to me that organizations looking to subscribe to a SaaS service should make this type of performance enhancement a basic requirement of the service.