MiFID II – Managed Services Strike Back

So, we have arrived at the third act. Sticking with my cinematic obsession, this is where our story is supposed to be satisfyingly resolved. Where the bad guys are vanquished, the monster is slain and we blow up the Death Star.

 

However, first we need a plan; we need to know the whereabouts of that exhaust port or how to disable the shield.   In my previous blogs here and here, I mentioned some of the challenges in setting up a time service suitable for the requirements of MiFID II.  Now I’d like to suggest ways in which they can be overcome.  Some of this could be filed under “easier said than done” but trust me, it can and has been done.

 

To provide a robust time service, you need multiple independent and verified time sources. A mixture of GPS, combined with existing and trusted time sources, is a good start. You need multiple sources so you can compare them against each other for verification. Think of them as wise old Jedi whom you ask the same question, to make sure the answer is consistent and reliable. This allows you to decide which source is best, as well as helping you to spot if any individual Jedi (time source) is behaving badly. That could be because he's simply confused, or maybe he's gone over to the dark side…

 

Back in the real world, this means you will need some kind of central system that has a holistic understanding of each time source. This system needs an appreciation of your network topology, of each source's latency and drift characteristics, and of how your GPS behaves. These factors can and will impact the time reported by your time services, and the higher the frequency you operate at, the greater the impact will be.
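As a rough illustration of what such a central system might track, here is a minimal Python sketch. The source names, offsets and metadata are hypothetical, and the outlier test is a deliberately simple median-based check rather than a production clock-selection algorithm.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class TimeSource:
    name: str                 # e.g. "gps-roof-1" (hypothetical identifier)
    offset_us: float          # last measured offset from UTC, in microseconds
    path_latency_us: float    # one-way network latency to this source
    drift_us_per_hour: float  # observed drift characteristic of the source

def find_suspect_sources(sources, tolerance_us=50.0):
    """Flag any source whose offset strays too far from the group consensus."""
    consensus = median(s.offset_us for s in sources)
    return [s for s in sources if abs(s.offset_us - consensus) > tolerance_us]

# Hypothetical readings: the third "Jedi" has wandered off to the dark side.
sources = [
    TimeSource("gps-roof-1", 4.2, 80.0, 1.5),
    TimeSource("ntp-stratum1-a", 6.8, 120.0, 2.0),
    TimeSource("ntp-stratum1-b", 310.0, 95.0, 1.8),
]

consensus = median(s.offset_us for s in sources)
for bad in find_suspect_sources(sources):
    print(f"{bad.name} disagrees with the consensus by "
          f"{abs(bad.offset_us - consensus):.1f} µs")
```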

 

However, with all of this in place, you can generate a time service that is reliable and accurate enough to satisfy the regulators and give you the framework to drive your accuracy up into higher frequencies.

 

Of course, after you've created and deployed your time service, you need to be able to provide evidence of your clock synchronisation across all your in-scope devices. But you don't know when, or even if, you'll be audited or required to demonstrate that you are compliant. Therefore, you need to store records and/or reports. Regulators are notoriously edgy about "taking your word for it".
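What that evidence looks like will vary, but conceptually it can be as simple as an append-only, timestamped record of each measured offset for each in-scope device. A minimal sketch, assuming the offset has already been measured elsewhere and using a local CSV file as the store (in reality this would be a database or tamper-evident archive):

```python
import csv
from datetime import datetime, timezone

AUDIT_FILE = "clock_sync_evidence.csv"  # hypothetical store, purely for illustration

def record_sync_evidence(device: str, offset_us: float, source: str) -> None:
    """Append one timestamped synchronisation measurement for later audit."""
    with open(AUDIT_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when the measurement was taken
            device,                                  # in-scope device identifier
            source,                                  # which time source it was checked against
            f"{offset_us:.1f}",                      # measured offset from UTC, in microseconds
        ])

# Hypothetical usage
record_sync_evidence("trading-gw-01", 37.4, "gps-roof-1")
```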

 

You also need to know how your time synchronisation is performing. There are several scenarios in which it can fail, and you need to know when this happens so that you can get back to full compliance as soon as possible.
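In practice, that means continuously comparing each device's measured offset against its regulatory tolerance and flagging the moment it is breached. A simplified sketch, assuming a stream of (timestamp, offset) samples in microseconds and the 100-microsecond HFT tolerance:

```python
TOLERANCE_US = 100.0  # HFT requirement: within 100 µs of UTC

def compliance_breaches(samples, tolerance_us=TOLERANCE_US):
    """Yield (start, end) pairs for every period spent outside tolerance.

    `samples` is an iterable of (timestamp, offset_us) tuples, oldest first.
    """
    breach_start = None
    for ts, offset_us in samples:
        if abs(offset_us) > tolerance_us and breach_start is None:
            breach_start = ts                  # just drifted out of tolerance
        elif abs(offset_us) <= tolerance_us and breach_start is not None:
            yield breach_start, ts             # back in compliance
            breach_start = None
    if breach_start is not None:
        yield breach_start, None               # still out of tolerance at end of data

# Hypothetical samples: (seconds since midnight, offset in µs)
samples = [(0, 12.0), (60, 85.0), (120, 140.0), (180, 160.0), (240, 60.0)]
for start, end in compliance_breaches(samples):
    print(f"Out of tolerance from t={start}s to t={end}s")
```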

 

Easy, right?

 

OK, good. Then let's synchronise watches!

 

Peter Williams (FIA)

Senior Technical Director

Tel: +44 (0)203 328 7544

peter.williams@cjcit.com

 

Hi again. So, in modern Hollywood tradition, here is the post-credits sting…

 

There is, of course, an easier way.  You could have this done for you as a managed service.  Let’s face it, you want compliance, not a new system to build, maintain, monitor and manage.

 

Peter

The High Frequency and The Furious – Clock Drift

In part one, I talked about the time synchronisation requirements coming into effect with regulations like MiFID II. Today I'd like to delve into some of the challenges that await us in tackling these requirements. Like any good sequel, this is where we take the concept from the original and expand on it! Be warned, however: as the middle act, this is where we come up against seemingly insurmountable obstacles and where all might seem lost. But don't worry, we'll talk solutions in part three.

 

First, let's be clear that linking servers to time sync services is nothing new. There was FINRA's OATS rule in 1998, and I can personally remember implementing NTP on a market data infrastructure in the early 2000s. The difference here is the level of accuracy required. But synchronising your clock is one thing; keeping it in sync is quite another.
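To make that concrete, here is a hedged sketch of the kind of check that has been around since those NTP days: ask an NTP server how far the local clock is from its idea of UTC. It uses the third-party ntplib package and a public pool server purely as an illustration; a MiFID II deployment would measure against its own disciplined, traceable sources.

```python
# pip install ntplib  (third-party library, used here purely for illustration)
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# response.offset is the estimated local-clock error in seconds.
# NTP over the public internet typically gets you to within milliseconds,
# a long way from the 100 µs MiFID II expects of HFT participants.
print(f"Local clock offset: {response.offset * 1e6:.0f} µs")
```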

 

Stick with me here, it’s analogy time.

 

Think of the system clock on one of your servers as a boat; let's call it the Orca.

 

The Orca is floating on a wide expanse of ocean (this ocean is 'time'). Now, you need the boat to remain on, or very close to, a specific patch of ocean, because that is where you expect to encounter a 3-ton great white shark (within 100 microseconds of UTC). Even on the calmest day, your boat will drift, because there is nothing holding it in place. No matter how often you correct it and get it back to where it needs to be, over time it will drift again and require further correction.

 

But what of the ‘time’ that you need to be in synchronisation with?  UTC is itself a derivation of International Atomic Time (TAI).  It too requires constant attention, with things like leap seconds being added to compensate for the earth’s rotation.   This is because the earth’s rotation is slowing down due to tidal deceleration, so anyone wishing they had more hours in the day will get their wish – in a few hundred million years.
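For the record, the relationship is simply that UTC lags TAI by the accumulated leap seconds. A one-liner makes the point; the 37-second figure is correct as of the leap second added at the end of 2016 and will change whenever another one is announced:

```python
from datetime import datetime, timedelta

TAI_MINUS_UTC_SECONDS = 37  # accumulated leap seconds as of the 31 Dec 2016 leap second

def tai_to_utc(tai: datetime) -> datetime:
    """UTC is simply TAI minus the accumulated leap seconds."""
    return tai - timedelta(seconds=TAI_MINUS_UTC_SECONDS)

print(tai_to_utc(datetime(2017, 6, 1, 12, 0, 37)))  # -> 2017-06-01 12:00:00
```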

 

Back to the issue at hand, and this is where a time service comes in. You need to be able to generate a time source accurate enough to keep you within the agreed tolerances of UTC. However, that alone is not enough. Once you have your accurate time source, you need to maintain it day to day and make sure that it's secure enough to prevent tampering, or at least sufficiently robust to withstand an attack while you deal with it.

 

One of the problems you can come up against is spoofing, where some cheeky scamp spoofs your GPS signal and fools your clock into drifting out of sync. This raises a key question: how will you know if you've been spoofed? For that matter, how will you know if your clocks have drifted outside of tolerance for any reason, despite your best time sync efforts?
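There is no single answer, but one simplistic heuristic is to watch for behaviour your clock should not exhibit naturally: a sudden step in offset, or a GPS-derived time that starts disagreeing with your other, independent references. A toy sketch of the step check, assuming a list of recent offset readings in microseconds and an arbitrary jump threshold:

```python
def looks_like_a_step(offsets_us, max_plausible_jump_us=25.0):
    """Return True if consecutive readings jump by more than a clock could
    plausibly drift between samples - a crude hint of spoofing or a faulty source."""
    return any(
        abs(curr - prev) > max_plausible_jump_us
        for prev, curr in zip(offsets_us, offsets_us[1:])
    )

print(looks_like_a_step([3.1, 3.4, 2.9, 58.7, 60.2]))  # True: something yanked the clock
```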

 

So, you don’t just need a time source, you need a highly accurate, verified, reliable, robust and monitored time source and you need it soon.

 

“You’re going to need a bigger boat.”

 

TO BE CONTINUED …

 

Peter Williams (FIA)

Senior Technical Director

Tel: +44 (0)203 328 7544

peter.williams@cjcit.com

MiFID II – Revenge of The Regulator

I'm a movie lover. I've seen more than the average person and can re-watch the same film hundreds of times (I'm looking at you, Star Wars, Jaws, Ghostbusters…). When you watch a lot of movies you start to spot the tropes, those little repeating themes, devices or motifs. One of my favourites comes just before the big third-act set piece, in the moments before the heist, the daring rescue or the desperate last stand, where our protagonists are about to initiate 'the plan'. It's when the tension is high and the anticipation is higher that someone says the line: "synchronise watches". Then you get either an overhead shot of fists in a circle, all sporting watches set to identical times, or a series of cuts showing lots of different timepieces starting their countdowns with the push of a button that almost always beeps – even on analogue watches! Only now are they ready, synchronised with their teammates, or with the explosive charges that will blow the dam, the bridge or the Nakatomi building, to win the day, safe in the knowledge that everything will occur as it should. And, most importantly, when it should.

 

So, it’s with a nerdy sense of glee that I find myself immersed in one of the biggest watch synchronisations our industry has ever seen. As trading organisations prepare to come under siege from the various regulators, one of the things they must have is properly synchronised clocks on their trading systems. Here is what’s required:

 

* Human-powered trading requires a clock that is accurate to within 1 second of UTC

* Algorithmic, but not High Frequency Trading (HFT), participants must be within 1 millisecond of UTC

* HFT market participants must be within 100 microseconds of UTC
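A trivial way to read that list programmatically: map each trading style to its maximum permitted divergence from UTC and check a measured offset against it. The category names below are shorthand of my own, not regulatory terms.

```python
# Maximum permitted divergence from UTC, in microseconds (per the list above).
MAX_DIVERGENCE_US = {
    "voice":       1_000_000,  # human-powered trading: 1 second
    "algorithmic":     1_000,  # algorithmic, non-HFT: 1 millisecond
    "hft":               100,  # high-frequency trading: 100 microseconds
}

def is_compliant(trading_style: str, measured_offset_us: float) -> bool:
    return abs(measured_offset_us) <= MAX_DIVERGENCE_US[trading_style]

print(is_compliant("hft", 140.0))          # False: 140 µs is outside the 100 µs band
print(is_compliant("algorithmic", 140.0))  # True: well inside 1 ms
```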

 

The difference between these clock resolutions is bigger than you might think. As I write this, the time is now 08:24 and 17 seconds, but does that tell the whole story? Let's say I'm reading the time from an extremely accurate, high-frequency clock. If that were the case, what time would I tell you I was writing this? Let's have a look – I've highlighted how these numbers relate to the regulations to show how accurate firms will need to be.

 

The first one is easy:

 

08:24:17 – one tick per second (we could still use our watches here!).

 

At millisecond precision things become a little more complicated, as we’re talking about thousandths of a second – which could look like this:

 

08:24:17.753 – so there are now 1,000 ticks inside a single second.

 

Things get even more fun at microsecond precision, as we've entered millionths-of-a-second territory – which could look like this:

 

08:24:17.795123 – so now there are 1,000,000 ticks inside a single second.  1,000,000 “nows” if you like.
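If you want to see those three resolutions side by side, Python's datetime will happily print the same instant at second, millisecond and microsecond granularity (the instant below is made up to match the microsecond example above):

```python
from datetime import datetime

now = datetime(2017, 3, 8, 8, 24, 17, 795123)  # a made-up instant for illustration

print(now.strftime("%H:%M:%S"))          # 08:24:17        - 1 tick per second
print(now.strftime("%H:%M:%S.%f")[:-3])  # 08:24:17.795    - 1,000 ticks per second
print(now.strftime("%H:%M:%S.%f"))       # 08:24:17.795123 - 1,000,000 ticks per second
```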

 

These degrees of accuracy are designed to allow regulators and 'appropriate authorities' to reconstruct trades and trading behaviours, so they can sort the good guys from the bad guys. It's therefore important that everyone adheres to these standards. Firms just need to "synchronise watches" and then get on with what they really want to do: trade. Easy, right?

 

Wrong.

 

Time is a subtle beast. When trading firms take the red pill and descend into a world of digital control (taking the blue pill is not an option here), they will find that the machines are as disobedient and hard to pin down as anything the Matrix had to offer.

 

But let’s leave that discussion to the sequel.

 

TO BE CONTINUED…

 

Peter Williams (FIA)

Senior Technical Director

Tel: +44 (0)203 328 7544

peter.williams@cjcit.com

Ready for the knock on the door?

In May 2012, the Basel Committee on Banking Supervision (BCBS) introduced the Fundamental Review of the Trading Book (FRTB) as a consultative paper. This paper set out a revised market risk framework and more stringent trading book capital requirements in the wake of the credit crisis.

The measure will create a more definitive distinction between firms’ trading books and banking books, with each inviting different levels of capital coverage. The final paper, released on 15th January 2016, set a tentative compliance date of 2019. However, since banks need to show a year’s worth of observable prices ahead of this, preparations should be in place by January 2018.

 

So it’s on its way. But is the banking community ready?

 

Outbound, or contributed, data is affected by this regulation, and yet many organisations don't have a clear picture of which prices they contribute, to whom, and who can see them downstream. And it's not just FRTB. MiFID II, MiFIR and BCBS 239 all impact pre-trade Over the Counter (OTC) data.

 

Currently, though most banks have robust market data management processes and protocols in place, teams are generally only responsible for inbound consumption. Is there any clarity around who takes control of outbound data in the organisation? Is there any oversight of global contributed data?

 

In turn, this raises a raft of additional questions that need to be answered:

* Which desks contribute rates to third parties?

* Which asset classes are affected?

* Which countries contribute prices?

* Which external clients can see proprietary data – can even competitors see it?

* Are banks paying to receive their own contributed data back?

 

At a more general level, has anyone questioned why most contributions to vendors are free? Or, since we've seen growing levels of fines from global regulators around benchmark data, could contributed data be next?

 

Organisations that struggle with these questions may not be ready for FRTB.

 

Help is available. CJC recently announced a new strategic division, a key focus of which is helping banks and brokers to take control of the prices they publish to the market through data vendors or submissions to benchmark rates. We're already in extensive discussions with a number of tier one banks, outlining how we can provide a strategic review of all the operational processes around pre-trade data publishing.

 

In the coming weeks, I’ll be elaborating on these processes and the value they bring in preparing organisations for the inevitable knock on the door from the regulator.

 

Sheena Clark

Tel: +44 (0)7734 995 687 

sheena.clark@cjcit.com

 

Reducing existing costs and moving to Big Data / Private Cloud – impossible? No.

Steve Moreton & Peter Williams

What a month here at CJC! This is the month we move from being a service company to a product and service company. We have just launched mosaicOA – a big data visualisation solution, with our initial use case focusing on ITOA. This is something we have been working on for a few years now. Most exciting of all has been the announcement of a production client, the result of a process that took 18 months.

Firstly – what is ITOA? IT Operations Analytics is an approach designed to retrieve, analyse and report data for IT operations. ITOA applies big data analytics to large datasets where IT operations can extract unique business insights. Gartner has rated the business impact of ITOA as being ‘high’, meaning that its use will see businesses enjoy significantly increased revenue or cost saving opportunities.

CJC have provided our clients with IT Operations Analytics (ITOA) since 2012 as part of our standard managed services. However, feedback from our engineering team and clients showed we needed to go deeper and embrace the full 'truth' of client infrastructures.

So what do we mean by 'truth'? Let's put it this way. All of the sensors on a real-time infrastructure are collecting data on standard components like the CPU, RAM and OS, along with industry-specific, application-level data from systems such as TREP, Solace, Wombat and BPIPE. Client infrastructures can easily generate hundreds of thousands to millions of metrics – all updating at high frequency, some at sub-second intervals. All of this data has to be tracked, managed, stored and analysed – not just for a day or a week, but for months and potentially years.

This immediately led us to several challenges. Traditional database solutions like Oracle and SQL are just not cut out for the job – database licences and storage cost more at some institutions than market data does! Not only that, but once you have this information stored, how do you visualise it?
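A quick back-of-the-envelope calculation shows why. The numbers below are illustrative assumptions rather than measurements from any particular client, but they make the scale obvious:

```python
# Illustrative assumptions only - tune to your own estate.
metrics          = 1_000_000  # individual metrics being collected
updates_per_sec  = 1          # many are sub-second; one per second is conservative
bytes_per_sample = 16         # timestamp + value, before indexes and overhead
retention_days   = 365

samples_per_day = metrics * updates_per_sec * 86_400
raw_bytes       = samples_per_day * bytes_per_sample * retention_days

print(f"{samples_per_day:,} samples/day")               # 86,400,000,000 samples/day
print(f"~{raw_bytes / 1e12:.0f} TB of raw samples over a year")  # ~505 TB
```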
Rest assured all challenges were successfully met and we’ll save those details for an upcoming case study and white paper.

Our client wanted a POC. Check. The client required a pre-production pilot. Check. We then started the process of moving to production. At this point, we discovered we had been in a cook-off with another vendor.

We won.

As our client stated – “Best front end, no infrastructure needed upgrading, paid for by cost savings on database sizes”.
The best thing for us is that this is a major win for our private cloud platform.

CJC can go head-to-head on price with the major public cloud providers, provide a service from a secure private location, and offer specialist add-on services beyond server orchestration.

Read more here:

Top 10 Tier 1 chooses CJC for ITOA analytics.

The Art of Performance Tuning in 2016

Paul Tomblin

As CJC CTO since 1999, I've seen the company grow from a three-man London team to a 100+ strong global managed services provider and consultancy supporting over 500 organisations. Much has changed, but the core service principles of supporting real-time infrastructure remain the same: keep things running, enhance the current platform, look at future developments and functionality, and tune the systems for peak performance. Optimisation is possibly more important today than it was when CJC started.

Back in the mid-80s, computer hardware was very expensive, so system tuning was about eking out as much value from the investment as possible. In 2016, it is about ensuring high throughput, low latency, maximum efficiency and resiliency. For our expert engineering team at CJC, tuning is a continual process.

Sam Grayston joined CJC in 2010 and, I'm proud to say, is now a senior third-line engineer who came through our academy. He is passionate about system tuning and has created a detailed guide covering all the systems CJC supports. I've asked him to provide a quick insight into our tuning approach and the notable improvements CJC has delivered in the last 12 months.

Over to Sam!

Consistent, reliable software performance is predicated on dependable hardware operation. At the same time, careful management of hardware and operating system configuration is essential to derive the most value from production assets. In production environments, therefore, optimisation of the entire system, from the lowest level up, is critical for maximising availability and reducing overheads.

Such enhancements should be identified in advance of any deployment, to avoid lengthy and costly investigations that come at the direct expense of business operations. This has the additional benefit of placing responsibility for the configuration with the design and implementation team, reducing the need to maintain a costly, enhanced skill base across an entire operations support team.

At CJC, we have defined processes that enable this analysis to be conducted for every new hardware platform our clients introduce, and we have developed tools which build on standard Linux and Windows utilities to allow easy application of the identified tuning configuration and simple iteration on this work for future hardware platforms. These tools have allowed CJC to guarantee a significant performance increase compared to the standard configuration supplied by the vendor for a given system.

As an example, our BIOS tuning can be applied as a near-instantaneous process on most platforms and typically enhances throughput for client market data systems by around 20%. When coupled with CJC's application-thread and network-interrupt binding optimisation processes, an end-to-end improvement in throughput or latency can be attained, balanced in accordance with client needs. At the same time, this work moves the performance limiters of the system away from low-level functions and settings, obviating the need for high-impact configuration changes to meet changing requirements and allowing the focus to stay on application configuration.
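Thread and interrupt binding itself is usually done with standard Linux tooling (taskset, the /proc/irq affinity settings and so on), but the idea is easy to show. Here is a minimal, hypothetical, Linux-only sketch that pins the current process to a chosen core so the scheduler does not bounce it around:

```python
import os

# Hypothetical example: keep this process on core 2, away from the cores
# handling network interrupts, so cache-hot market data code stays put.
PINNED_CORES = {2}

os.sched_setaffinity(0, PINNED_CORES)  # 0 = the calling process (Linux only)
print(f"Now restricted to cores: {sorted(os.sched_getaffinity(0))}")
```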

As CJC moves forward, we are constantly revisiting and exploring emerging performance edges in system architecture, and relaying the benefits of this work to our client base.