When it comes to flexibility in cost, size, and speed of scaling, the major Cloud platforms (such as AWS and GCP) are highly effective across many use cases. But when we consider their ability to integrate high-volume multicast market data feeds into the Cloud, we face a major constraint: their network topologies do not support the high-bandwidth multicast market data feeds used by major global financial exchanges.
What can be done?
This guide outlines a proven approach to overcome this constraint.
The approach outlined here uses message queue replication from exchange colocation to the Cloud platform of choice. It works well in contexts such as trade surveillance, where the ICE iMpact multicast market data feed is required in addition to the ICE trade and order flows that are available on the ICE FIX Trade Capture and ICE FIX Private Order Feed services via TCP network connections.
It also works well for contexts that require a full order book reconstruction for depth analysis, where the data access requirements are not those of ultra-low-latency algorithmic trading. Where a trading strategy itself has an ultra-low-latency profile, deployment in colocation remains the optimal approach.
There are a number of steps required to integrate the live production ICE iMpact Multicast Price Feed from your colocation server to your own server instance(s) in the Cloud (or anywhere else).
The solution for live data transfer includes three components: a collector service in colocation, a reliable message queue, and a replay service on the destination side.
The data flow is: Collect -> Transfer -> Replay
Services to both collect ICE iMpact data and to replay it can be implemented on top of the OnixS directConnect: ICE iMpact Multicast Price Feed Handler SDK Java implementation.
The message queuing service can be Chronicle Queue (discussed below) or any similar reliable queue replication solution of your choice.
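To make the shape of the solution concrete, below is a minimal skeleton of the three components. Every type and method name in it is invented for this sketch; none of them come from the OnixS SDK or from any particular messaging product.

```java
// Illustrative skeleton only: all names are hypothetical stand-ins and are
// not part of the OnixS SDK or of any messaging product.
public final class FeedRelaySketch {

    /** Collect: runs in colocation, fed by the Handler's packet call-backs. */
    interface Collector {
        void onPacketCaptured(byte[] packetData, long receiveTimestampNanos);
    }

    /** Transfer: the reliable queue replication layer between the two sites. */
    interface Transfer {
        void publish(byte[] record);
        byte[] take() throws InterruptedException;
    }

    /** Replay: runs in the Cloud and feeds records back into a Handler instance. */
    interface Replayer {
        void replay(byte[] record);
    }
}
```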
To implement a collector service for the colocation environment, you need to handle the Handler's packet-level events. There are three call-backs available for PacketProcessingListener, and from each packet event there are three properties which you need to transfer to the destination side. For more details on how to get this data from PacketProcessingEventArgs, please see our custom-log-replay reference implementation source code sample, which is included in the OnixS directConnect: ICE iMpact Multicast Price Feed Handler SDK Java implementation distribution package.
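Purely as an illustration of the collector's job, here is a hedged sketch. The listener interface and its call-back are hypothetical stand-ins, not the real OnixS PacketProcessingListener API, and the three encoded fields are an assumption for the sketch; the actual call-backs and properties are documented in the SDK and demonstrated in the custom-log-replay sample.

```java
import java.nio.ByteBuffer;

// Hypothetical collector sketch: the interface and call-back below are
// illustrative stand-ins, NOT the real OnixS PacketProcessingListener API.
final class CollectorSketch {

    interface IllustrativePacketListener {
        void onPacket(byte[] packetData, long receiveTimestampNanos, int channelId);
    }

    /** Encodes the captured properties into one record for the transfer queue. */
    static byte[] encodeRecord(byte[] packetData, long receiveTimestampNanos, int channelId) {
        // The three fields here are assumptions; the real properties come
        // from PacketProcessingEventArgs (see the custom-log-replay sample).
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + Integer.BYTES + packetData.length);
        buf.putLong(receiveTimestampNanos);
        buf.putInt(channelId);
        buf.put(packetData);
        return buf.array();
    }
}
```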
Depending on your choice, you need to use the corresponding API of the messaging service you selected. In our custom-log-replay sample we use just a LinkedBlockingQueue, but in a real-world scenario the API you need to use can be slightly more involved.
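For reference, this is roughly what the in-memory transfer looks like with a LinkedBlockingQueue; the byte[] record type is an assumption made for this sketch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal in-memory transfer in the spirit of the custom-log-replay sample.
public final class InMemoryTransferDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

        // Collector side: the packet call-back puts encoded records on the queue.
        Thread producer = new Thread(() -> queue.add(new byte[] {0x01, 0x02, 0x03}));
        producer.start();
        producer.join();

        // Replay side: take() blocks until the next record arrives.
        byte[] record = queue.take();
        System.out.println("Transferred " + record.length + " bytes");
    }
}
```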
For example, in the case of Chronicle Queue, you can use self-describing messages or define your own message type to send ICE iMpact data over the wire.
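As a rough sketch of that option, the snippet below appends and reads one self-describing message with the open-source Chronicle Queue API; verify the method names against the version you use, and note that replicating a queue between machines typically requires Chronicle's enterprise replication or a transport of your own.

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

// Sketch only: check these calls against your Chronicle Queue version.
public final class ChronicleTransferSketch {
    public static void main(String[] args) {
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("ice-impact-queue").build()) {
            // Colocation side: append one self-describing message.
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeDocument(w -> w.write("packet").bytes(new byte[] {1, 2, 3}));

            // Cloud side: tail the queue and read the message back.
            ExcerptTailer tailer = queue.createTailer();
            tailer.readDocument(r -> {
                byte[] packet = r.read("packet").bytes();
                System.out.println("Read " + packet.length + " bytes");
            });
        }
    }
}
```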
On the destination server, depending on the messaging system of your choice, you will have a call-back or an event handler which is triggered each time a new message is received. To replay these messages and receive the Handler's call-backs, you feed the received data back into the Handler, as sketched below.
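Here is a hedged sketch of that receive-and-replay loop; handlerReplay is a hypothetical placeholder for the Handler's actual replay entry point, which the custom-log-replay sample demonstrates.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Destination-side sketch: handlerReplay below is a hypothetical stand-in
// for the Handler's real replay entry point (see the custom-log-replay sample).
final class ReplayLoop implements Runnable {
    private final BlockingQueue<byte[]> received = new LinkedBlockingQueue<>();

    /** Called by the messaging system's receive call-back or event handler. */
    void onMessage(byte[] record) {
        received.add(record);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] record = received.take();
                handlerReplay(record); // feed the data back to the Handler
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void handlerReplay(byte[] record) {
        // Placeholder: decode the record and pass it to the Handler so that
        // it triggers the same call-backs as the live multicast feed would.
        System.out.println("Replaying " + record.length + " bytes");
    }
}
```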
All the implementation details can be found in the custom-log-replay sample, which contains only two classes. CustomLogReader shows how to collect data and add it to the queue, and how to feed this data back to the Handler.
If you are implementing this approach, you will almost certainly have some questions.
We would first suggest that you register to download the fully functional evaluation distribution package of the OnixS directConnect: ICE iMpact Multicast Price Feed Handler SDK Java implementation, which includes the referenced samples. The OnixS directConnect: ICE iMpact Multicast Price Feed Handler SDK Programming Guide is also a useful reference.
Have a closer look at those samples, and please do not hesitate to contact OnixS Support at support@onixs.biz if you get stuck or need assistance. You can also contact us at sales@onixs.biz for a walk-through discussion to help you get what you need.