
Remind me, why are we collecting Flight Simulation Training Device (FSTD) metrics?



Introduction

Go to any training centre that operates a qualified FSTD and they will probably be able to show you a number of impressive graphs proving how wonderfully their FSTDs are operating. This week we look at what data is being collected, why, and, more importantly, whether the data is useful.

What are the issues that need to be considered?


Why produce any metrics anyway?

Well, the first reason is that you have to! Under EASA Part ORA, European operators, and those who choose to have EASA qualify their devices, are obliged to have a Compliance Monitoring System (CMS). The Acceptable Means of Compliance AMC2 ORA.FSTD.100 indicates that Arinc 433-1 (see below) is acceptable for this. Add to this that EASA has recently published a proposed amendment that reinforces, and defines more precisely, the metrics to be collected (see further below). In the United States, FAR Part 60 mandates a Quality Management System (QMS) which includes a Discrepancy Reporting & Tracking System (DRTS). Most National Aviation Authorities (NAAs) base their local rules on these two regulations.


But, we would contend, that is not the most important reason to collect the data and produce the metrics. FSTDs are expensive investments, and whether an FSTD is intended primarily for training your own flight crews or as a commercial venture, it is essential to monitor its performance. Properly implemented, the data can give you advance warning of failures and allow operators to allocate their scarce improvement funding wisely.


OK, so what is Arinc 433?

No discussion of FSTD metrics would be complete without reference to Arinc Report 433, “Standard Measurements for Flight Simulation Quality”. It was written by people drawn from across the industry under the auspices of the Flight Simulation Engineering and Maintenance Conference (FSEMC). First published in 2001 and revised in 2007 and 2013, it remains the one industry document that gives recommendations on the data to be collected and provides examples of how to analyse and present it. It is, however, in reality a tool kit rather than a definition of what you have to do, and there are excellent examples of best practice contained within.


Data from the TDM

You might think your Training Device Manufacturer (TDM) will be able to provide you with accurate MTBF (Mean Time Between Failures), MTTR (Mean Time To Repair) and expected component life figures; they can't. Between them, the principals of Sim Ops have worked for four major TDMs in the past, and none of them was able to muster this data because, except maybe for a few machines, they don't themselves operate the simulators they build. And, as we discussed in a previous blog, The View from the other side of the Pitch, the TDMs really struggle to obtain from operators the raw data required to calculate it.


Metrics in common use

In our experience the majority of centres collect and measure four core metrics:

  • Device usage, utilisation

  • Availability

  • Interruption rate

  • Subjective Instructor quality ratings

- Device usage, utilisation – this is the easiest to measure and shows the number of training hours used. You will no doubt already be collecting this data for commercial reasons; it appears again in the calculation sketch further below.

- Availability – most centres use the Arinc 433 definition for this, or a variation on it; essentially, this metric is a measure of the time the device was capable of conducting training during a defined period, expressed as a percentage. When you buy a new FSTD the TDMs will quote an availability of 99% plus; depending upon your tenacity they might even underwrite this with liquidated damages. In actuality, though, its use as a measure of the quality of the device is limited; what it does give is an indication of the likelihood of your training being curtailed on a particular device. What is often not apparent is the effort the maintenance team has put in to keep the device up and running. A device that has 99% availability but needs a technician present to do resets whenever it is in training is not going to please anyone.


At some centres two measurements of availability are made: one considering just the device, and the other the device and the facility as well, which is particularly useful in areas where the power supply is not stable. When it refers just to the device, this measure is also known as the device Reliability.
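
To make the arithmetic concrete, here is a minimal sketch in Python of how utilisation, availability and reliability might be derived from a simple hours log. All the figures, and the helper function, are invented for illustration; Arinc 433 remains the authoritative definition.

    def availability(period_hours, downtime_hours):
        """Percentage of a defined period in which the FSTD could conduct training."""
        return 100.0 * (period_hours - downtime_hours) / period_hours

    # A hypothetical month of 24 h/day operation -- invented figures
    period = 24.0 * 30                  # 720 h in the defined period
    training_hours = 610.0              # hours actually used for training
    device_downtime = 6.0               # hours lost to the device itself
    facility_downtime = 4.0             # hours lost to power/building problems

    utilisation = 100.0 * training_hours / period                         # ~84.7 %
    reliability = availability(period, device_downtime)                   # device only: ~99.2 %
    overall = availability(period, device_downtime + facility_downtime)   # device + facility: ~98.6 %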


- Interruption rate – this, in essence, is a measure of Mean Time Between Failures (MTBF). At the end of each training session the number of times the training had to be halted for technical reasons is recorded. This can then be presented as the number of interruptions over a given period, or per unit of FSTD operation; a sketch of both presentations follows.
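
Either presentation is simple arithmetic. A minimal sketch, again with invented figures:

    # 9 technical interruptions recorded over 450 h of training operation -- invented
    operating_hours = 450.0
    interruptions = 9

    rate_per_100_h = 100.0 * interruptions / operating_hours   # 2.0 interruptions per 100 h
    mtbf = operating_hours / interruptions                     # MTBF of 50 h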


This metric should not be taken at face value. If you start talking to the instructors who regularly train on a device you might well start to hear things like “yes, that fails regularly, we just reset it; we don't bother to report it now as nobody knows how to fix it”, or “oh, there is no point activating that malfunction before a reposition, it just causes the FM to crash”. Ignore this at your peril, however: even a minor interruption to the training can cause a major diminution in the quality of training. The whole idea of simulation is to create an immersive environment, and that can be shattered by an “it's only a quick reset”. And if re-loading flight plans (or, even worse, manually re-entering them) is required, it can be even more disruptive. Our experience is that most instructors and trainees would prefer a session to start late than be plagued by interruptions.


- Subjective Instructor quality ratings – at the end of every session it is normal to ask the instructor to give the device a quality rating, typically from 1 to 5, indicating his or her subjective opinion of the device. As suggested by Arinc 433, most training centres have some guidance on this; the scale from Arinc 433 is shown below.

  1. Unsatisfactory: No training completed

  2. Poor: Some training completed

  3. Acceptable: All training completed, many workarounds or many interrupts

  4. Good: All training completed, few workarounds or few interrupts

  5. Excellent: All training completed, no workarounds and no interrupts

We have seen some centres go further than this to try to ensure a good data set. At one centre we visited, the instructor had to record the training outcome at a terminal on exiting the FSTD and was not able to select anything but 5, Excellent, unless faults had been reported. Another centre we know of had one instructor who, on principle, would never award a mark higher than 4. A sketch of the first approach, combined with a simple aggregation, is shown below.
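
Purely as an illustration, the sketch below pairs the Arinc 433 scale with the validation rule from that first centre: a rating below 5 must be backed by at least one fault report. The function name and session data are invented.

    from statistics import mean

    SCALE = {1: "Unsatisfactory", 2: "Poor", 3: "Acceptable", 4: "Good", 5: "Excellent"}

    def record_rating(rating, faults_reported):
        # A rating below 5 must be backed by at least one fault report
        if rating < 5 and faults_reported == 0:
            raise ValueError("Report the fault(s) before down-rating the session")
        return rating

    # (rating, faults reported) for a week of sessions -- invented data
    sessions = [(5, 0), (4, 1), (3, 2), (5, 0), (4, 1)]
    ratings = [record_rating(r, f) for r, f in sessions]
    average_quality = mean(ratings)     # 4.2
    worst_session = SCALE[min(ratings)] # "Acceptable"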


EASA Notice of Proposed Amendment (NPA) 2020-15

As mentioned earlier, EASA is proposing to standardise the metrics recorded for devices under its supervision. The NPA defines ten measurements, namely:

  • Scheduled training time,

  • Support time,

  • FSTD utilisation,

  • Average FSTD quality rating,

  • FSTD failure time during scheduled training time,

  • FSTD downtime during scheduled training time,

  • Number of interrupts during scheduled training time,

  • Number of discrepancies raised by FSTD users,

  • FSTD availability, and

  • FSTD reliability.

It also defines the method of calculation for each. As you can see, these are very similar to the metrics we have discussed and, once (if) adopted, will provide standardisation across Europe. Purely by way of illustration, a sketch of the kind of per-session record such metrics could be derived from is shown below.
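
The NPA itself gives the precise formulae, which we do not reproduce here; the field names in this sketch are ours, not EASA's, and the aggregation is only one example.

    from dataclasses import dataclass

    @dataclass
    class SessionRecord:
        scheduled_hours: float    # scheduled training time
        failure_hours: float      # FSTD failure time during scheduled training time
        downtime_hours: float     # FSTD downtime during scheduled training time
        interrupts: int           # interrupts during scheduled training time
        discrepancies: int        # discrepancies raised by FSTD users
        quality_rating: int       # instructor rating, 1 to 5

    def average_quality_rating(records):
        """Average FSTD quality rating over a reporting period."""
        return sum(r.quality_rating for r in records) / len(records)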


So are you saying the current metrics are useless then?

No. Without collecting this information a training centre would be unable to spot upcoming problems and would be “flying blind”. There are also some training centres that collect and act upon the data very well. What we are saying is that there is a need to be clear about why you are collecting the data, what data to collect and what you are going to do with it. If you are just going through the motions in order to satisfy your authority at the next re-qualification, you might as well get Tom Clancy to produce the figures.


There are, however, some measurements a training centre should be making that we rarely see. For us, two are particularly relevant:


- Cost per operating hour – a measurement that divides all direct costs (spares, consumables, maintenance hours, power consumption, etc.) by the number of training hours achieved over a given period.


- Maintenance hours per operating hour – the total of all scheduled and unscheduled maintenance hours divided by the number of training hours achieved over the same period. A short sketch of both calculations follows.
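
Both are straightforward ratios. A minimal sketch, with invented annual figures:

    # Direct costs over a year -- invented figures
    direct_costs = {"spares": 42_000, "consumables": 3_500,
                    "maintenance_labour": 55_000, "power": 18_000}
    training_hours = 5_200.0
    scheduled_maintenance_h = 520.0
    unscheduled_maintenance_h = 180.0

    cost_per_operating_hour = sum(direct_costs.values()) / training_hours   # ~22.8 per training hour
    maintenance_per_operating_hour = (scheduled_maintenance_h
                                      + unscheduled_maintenance_h) / training_hours   # ~0.13 h per training hour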


How can Sim Ops Help?

At Sim Ops our principals have been involved in many such projects and can advise you all the way from the business case through to training commencing and beyond. We can lead you through the whole process or just the specific elements you are less at ease with. Contact us to find out more.

