In the Cloud or private data centres? What considerations do you need to make
when deploying FIX Engine SDKs across different infrastructures?
There are many pros and cons to weigh when deploying FIX Engine SDKs across different infrastructures. Typical deployments today fall into one of two options: a private data centre, or the Cloud - such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure.
Which deployment you choose will largely depend on your context and business requirements. But at a top level, Cloud deployment offers the major advantages of scale-up/scale-down flexibility and pay-for-what-you-use pricing, without the support obligations required to maintain the underlying technology.
Of course, this does mean relinquishing a degree of control over the underlying technology, whereas a private data centre keeps that control in-house. But here, the pros of Cloud deployment may outweigh any cons.
There is nothing technical that tightly binds or limits any FIX Engine SDK to either deployment context. In all cases, a FIX Engine does what the FIX Protocol Standards were designed to do – provide integration between internal transaction processing systems/trading frameworks and counterparties/exchanges using an industry standard reliable messaging protocol.
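To make that plumbing concrete, here is a minimal sketch of the FIX tag=value wire format that such counterparty sessions exchange: it builds a FIX 4.4 Logon message and fills in the BodyLength (9) and CheckSum (10) fields. The CompIDs and timestamp are placeholders, and this is generic illustration code rather than any particular SDK's API.

```cpp
// Minimal illustration of the FIX tag=value wire format (not OnixS API code).
// Builds a FIX 4.4 Logon message, filling in BodyLength (9) and CheckSum (10).
#include <cstdio>
#include <numeric>
#include <string>

int main() {
    const char SOH = '\x01';  // FIX field delimiter

    // Body: every field after BodyLength, up to (not including) CheckSum.
    std::string body;
    body += "35=A";  body += SOH;                 // MsgType = Logon
    body += "49=SENDER"; body += SOH;             // SenderCompID (placeholder)
    body += "56=TARGET"; body += SOH;             // TargetCompID (placeholder)
    body += "34=1";  body += SOH;                 // MsgSeqNum
    body += "52=20240101-12:00:00"; body += SOH;  // SendingTime (placeholder)
    body += "98=0";  body += SOH;                 // EncryptMethod = none
    body += "108=30"; body += SOH;                // HeartBtInt = 30 seconds

    // Header: BeginString, then BodyLength = byte count of the body.
    std::string msg = "8=FIX.4.4";
    msg += SOH;
    msg += "9=" + std::to_string(body.size());
    msg += SOH;
    msg += body;

    // Trailer: CheckSum = sum of all preceding bytes modulo 256, as 3 digits.
    unsigned sum = std::accumulate(
        msg.begin(), msg.end(), 0u,
        [](unsigned s, char c) { return s + static_cast<unsigned char>(c); });
    char checksum[8];
    std::snprintf(checksum, sizeof(checksum), "10=%03u", sum % 256);
    msg += checksum;
    msg += SOH;

    // Print with '|' in place of SOH for readability.
    for (char c : msg) std::putchar(c == SOH ? '|' : c);
    std::putchar('\n');
    return 0;
}
```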
The FIX Engine itself is not the main event; it provides mission-critical plumbing. There are, however, practical considerations for the optimal deployment context depending on the business use-cases that need to be supported. That deployment context is influenced by the type and performance profile of the counterparty/exchange transaction flow, which in turn shapes how the FIX Engine SDK is used.
Therefore, the best target deployment context depends on the requirements of the business transaction processing application itself.
Key areas of consideration are:
Automated trading strategies can be very latency-critical, requiring highly deterministic processing speeds and low network access latencies. FIX Standards and FIX Engines are often used to support order entry, executions, and trade monitoring. In low-latency contexts, these are best deployed on bare-metal servers in data centres rather than on Cloud platforms – usually colocation data centres in physical proximity to the target exchange/venue matching engine.
For these latency-critical contexts, Cloud platforms may not be optimal, as you have more limited scope to tune hardware-level acceleration for low latency and high throughput. Access to techniques such as the Solarflare Onload Extensions API, TCPDirect ultra-low-latency network stacks, and hardware overclocking can be critical for trading operations.
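As an illustration of the kind of host-level tuning that a bare-metal deployment puts fully in your hands, the sketch below sets two standard Linux socket options commonly applied to a latency-sensitive order-entry connection: TCP_NODELAY and SO_BUSY_POLL. The values are placeholders rather than recommendations, and the snippet is generic POSIX code, not part of any SDK; kernel-bypass stacks such as Onload are typically layered on top of standard sockets like this one.

```cpp
// Illustrative only: standard POSIX socket tuning often applied to a
// latency-sensitive FIX order-entry connection on a bare-metal Linux host.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    // Disable Nagle's algorithm so small FIX messages are sent immediately
    // rather than being coalesced into larger TCP segments.
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) != 0)
        std::perror("TCP_NODELAY");

#ifdef SO_BUSY_POLL
    // Ask the kernel to busy-poll the device queue for up to 50 microseconds
    // before sleeping, trading CPU for lower receive latency (Linux-specific).
    int busy_poll_usec = 50;  // placeholder value, not a recommendation
    if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                   &busy_poll_usec, sizeof(busy_poll_usec)) != 0)
        std::perror("SO_BUSY_POLL");
#endif

    // ... connect() to the venue gateway and run the FIX session here ...
    close(fd);
    return 0;
}
```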
Post-trade use-cases are generally not latency-critical, so they are well suited to Cloud deployment. FIX Standards based counterparty connections using point-to-point TCP, as used by FIX Engine SDKs, work nicely in Cloud deployments. If a post-trade application needs to consume real-time, full-depth market data, then specific consideration is required for UDP multicast feeds on Cloud platforms. Supporting multicast market data feeds is often not possible with current Cloud provider network services, and the workarounds require complex data replication techniques into the Cloud platform. In a data centre context, where you have full control of the network topology and the hardware hosting the consumer application(s), supporting UDP multicast feeds is not an issue.
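To show why multicast needs that network-level support, here is a minimal sketch of the standard POSIX calls a market data consumer uses to join a multicast group; the group address and port are placeholders. The IP_ADD_MEMBERSHIP request only results in data arriving when the underlying network (physical switches and routers, or the Cloud provider's virtual network) actually routes multicast, which is exactly where most Cloud deployments need workarounds.

```cpp
// Illustrative only: joining a UDP multicast market data group with standard
// POSIX sockets. The group and port are placeholders; delivery depends on the
// underlying network actually supporting multicast routing.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { std::perror("socket"); return 1; }

    // Bind to the feed's port on all local interfaces.
    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(30001);                       // placeholder feed port
    if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local)) != 0) {
        std::perror("bind");
        return 1;
    }

    // Ask the kernel (and, via IGMP, the network) to deliver the group's traffic.
    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");  // placeholder group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) != 0) {
        // On networks without multicast support this may fail outright, or
        // succeed locally while no traffic is ever delivered.
        std::perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);            // one datagram, for brevity
    std::printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}
```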
Unless you pay to rent a private cloud within the Cloud provider's facilities and network, you will be sharing hardware resources with other Cloud users. Firms who have invested in proprietary intellectual property worry about this proximity to other users. This is not only a concern about the Cloud provider's own security and access controls; it is also a user risk. Could a relatively simple access control mistake expose trading algo secrets on a shared-resource platform? Would we know if it happened? Where exactly is our code deployed? Some firms prefer to avoid this risk completely by isolating critical trading frameworks and applications within physically controlled private data centre facilities, so that they control access security themselves.
The number of OnixS independent software vendor (ISV) customers who deploy applications on AWS, GCP, and Azure Cloud platforms has grown over the last few years. Overall, in terms of deployed business applications that include OnixS SDKs, private data centres still account for the majority, but we can see this trend towards the Cloud continuing for non-latency-critical deployment contexts.
These Cloud platforms lend themselves to smaller start-up operations offering reliable application services globally, reaching geographically dispersed end users on a platform whose capacity and costs can scale up and down quickly as requirements change. Indeed, a number of OnixS ISV customer partners started small, and we have grown our businesses together.
We have also seen OnixS ISV customer firms start with Cloud platforms because of the speed-to-market advantages in offering new business applications. At a certain scale, the cost equation changed and they brought part of the platform back in-house, resulting in a hybrid deployment context.
Overall, OnixS SDKs can be deployed in either context, or both. The OnixS FIX Engine, and all OnixS Direct Market Access SDKs, are agnostic to deployment on Cloud platforms or within private data centres.
You can access a free 30-day evaluation SDK download distribution for market access solutions, specific to your target venue and codebase, here.