
QTGs: a force for good or evil?


Every simulator technician, engineer and operator sooner or later comes into contact with the infamous Qualification Test Guide (QTG). For some people we know, the QTG is the most important document in the synthetic aviation training world; for others it is just a time-consuming waste of effort. So we thought, in true Oxford Union style, we would look at the debate: are QTGs a force for good or evil?


The case for a force for evil


Sadly we have come across some engineers, at Training Device Manufacturers (TDMs) and at operators, who seem to have forgotten what they are trying to achieve: they treat the production of the Master Qualification Test Guide (MQTG) as an exercise in producing “a nice set of results” that will please the authority, and QTG re-runs as a tick-box exercise. That is to say, the simulation is assumed to be good; after all, there have been no complaints from the instructors, so all must be well with the model, right?


A question: how many readers have come across the situation where a QTG test is out of tolerance and is, eventually, addressed by the TDM, but how? Going back to basics, this means you have approved flight test data from the Original Equipment Manufacturer (OEM) and model results that did not match. So you might expect the TDM’s engineer to look at the model; but no, what often happens is that they first go and look at their scripts or the initial conditions to “make the results match”. And, increasingly, the model will have come from the aircraft OEM and only been implemented by the TDM; the answer “it matches on our test station so it must be your implementation” has been seen by us more than once.
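For readers less familiar with what a QTG test actually does, the core idea is simple: fly a defined manoeuvre from fixed initial conditions and check that each recorded parameter of the model’s response stays within the published tolerance of the OEM flight test reference. A minimal sketch in Python follows; the parameter, tolerance value and traces are illustrative, not taken from any real QTG:

```python
# Minimal sketch of a QTG-style objective comparison: simulator response
# vs. approved flight test reference, checked sample-by-sample against a
# tolerance band. All values and names here are illustrative only.

def qtg_check(reference, simulated, tolerance):
    """Return (passed, worst_deviation) for one recorded parameter.

    reference, simulated: samples on a common time base.
    tolerance: allowed absolute deviation (e.g. degrees of pitch attitude).
    """
    deviations = [abs(s - r) for r, s in zip(reference, simulated)]
    worst = max(deviations)
    return worst <= tolerance, worst

# Illustrative pitch attitude traces (deg) for a longitudinal test
flight_test  = [2.0, 3.1, 4.4, 5.2, 5.6, 5.5]
model_output = [2.1, 3.3, 4.2, 5.5, 5.9, 5.4]

passed, worst = qtg_check(flight_test, model_output, tolerance=1.5)
print(f"{'PASS' if passed else 'FAIL'}: worst deviation {worst:.2f} deg")
```

The point of the anecdote above is that when a check like this fails against approved data, the honest fix is in the model (or the data), not in the test script or the initial conditions.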


Then we come to sound QTGs. We shudder to think of the hours that, as an industry, have been wasted on sound QTGs. We are sure many readers will have had the experience of running these late at night, when the building is quiet, trying to get results that match the masters. This often involves repeated attempts, adjusting the positions of air vents and seats between runs, then, once an in-tolerance result is achieved, moving on to the next case. A complete waste of time. But the farce often starts when the initial sound tuning is done, the conundrum being that you can tune the sounds to meet the OEM data, or tune them so the acceptance crew are happy, but not both. Of course the ultimate joke is that, apart from when the authority is on board, the Flight Simulation Training Device (FSTD) will spend its life with the sounds turned down to 40%, rendering QTG testing irrelevant, or with the crew wearing noise-cancelling headsets; which poses the question: if sound is required for Level D, is the device still Level D with the sound turned down?
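For context, an objective sound test typically compares measured sound pressure levels, band by band, against the master results within a fixed per-band tolerance, which is exactly why stray noise from vents and seats can push a run out of tolerance. A sketch of the comparison; the band levels and the 5 dB tolerance here are illustrative, so check your own regulator’s tables:

```python
# Sketch of a sound QTG comparison: per-band sound pressure levels (dB)
# measured on the device vs. the master QTG results, each band checked
# against a fixed tolerance. Levels and tolerance are illustrative.

MASTER = {"50 Hz": 62.0, "100 Hz": 65.5, "200 Hz": 70.1, "400 Hz": 68.3}
TOLERANCE_DB = 5.0  # illustrative per-band tolerance

def sound_qtg_check(measured, master=MASTER, tol=TOLERANCE_DB):
    """Return the bands that fall outside tolerance (empty dict = pass)."""
    return {band: (measured[band], master[band])
            for band in master
            if abs(measured[band] - master[band]) > tol}

measured_run = {"50 Hz": 63.2, "100 Hz": 66.0, "200 Hz": 76.4, "400 Hz": 67.9}
for band, (got, want) in sound_qtg_check(measured_run).items():
    print(f"{band}: measured {got} dB vs master {want} dB -> OUT OF TOLERANCE")
```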


A similar case can be made for transport delay tests. How many hours are spent perfecting control input techniques to match these cases when they are not fully automated?
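For the unfamiliar, a transport delay test measures the time between a control input and the first response of the instrument, visual and motion systems; automating the measurement removes the “perfect input technique” problem entirely. A sketch, assuming time-stamped logs of the input and the response are available; the sample rate, thresholds and data are illustrative:

```python
# Sketch of an automated transport delay measurement: find the time at
# which each logged signal first crosses a threshold and report the
# difference. Sample rate, thresholds and data are illustrative.

def first_crossing(samples, threshold, dt):
    """Time (s) at which the signal first exceeds the threshold."""
    for i, value in enumerate(samples):
        if abs(value) > threshold:
            return i * dt
    raise ValueError("signal never crossed the threshold")

DT = 0.001  # 1 kHz logging rate, illustrative

pilot_input = [0.0] * 50 + [1.0] * 200   # step input applied at t = 50 ms
visual_resp = [0.0] * 135 + [1.0] * 115  # visual responds at t = 135 ms

delay = first_crossing(visual_resp, 0.5, DT) - first_crossing(pilot_input, 0.5, DT)
print(f"transport delay: {delay * 1000:.0f} ms")  # 85 ms in this sketch
```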


Lastly we have to mention the whole practice of running tests multiple times, particularly those with a person involved, to get one result that matches. We have even heard this practice openly admitted at conferences, and it has to bring the whole exercise into question. The question being: if you ran the test five times and only one set of results was in tolerance, is this really verifying that the model matches the data?
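The statistical point is easy to make concrete: a harness that records every attempt, rather than just the one kept for the binder, exposes the practice immediately. A trivial sketch, with illustrative results:

```python
# Sketch: record every run of a test and report the pass rate. One pass
# in five is evidence of a problem, not of compliance. Results below
# are illustrative.

results = [False, False, True, False, False]  # five attempts at one test

pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.0%} over {len(results)} runs")
if pass_rate < 1.0:
    print("not repeatably in tolerance -> investigate, don't cherry-pick")
```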


The case for evil can be summed up by saying that too much time is wasted on a process that harbours some dubious practices. Oh, and most military FSTDs don’t have QTGs, yet have operating envelopes far exceeding those of their civil counterparts; just saying!



The case for a force for good


There is an old joke amongst simulator engineers, based around subjective tuning with an acceptance crew. An engineer goes on board the Full Flight Simulator (FFS) and is told the tuning “isn’t quite right”; it could do with this, that or the other. The engineer returns to the computer room, has a cup of coffee and a cigarette (as we said, it’s an old joke); after ten minutes the engineer goes back on board and tells the crew to try again now that an adjustment has been made; they do, and say that it is much better now. A joke, but it has an element of truth, the moral of the story being that subjective tuning just does not work.


The whole reason the industry started producing QTGs (voluntarily; the mandates came along later) was to establish an objective and repeatable approach to determining the flight performance of an FSTD, as subjective assessments did not work. The method chosen was to define a number of tests that can be used to verify that the simulation model accurately matches the simulated aircraft. This was later expanded to cover other systems, including the controls, sound, visual and motion systems.


Looking beyond a single FSTD, there is a more important consideration: with the same flight test data (or alternative data according to an approved Validation Data Roadmap (VDR)) being used by all TDMs, it is possible to ensure consistency of training across FSTDs, no matter who has produced the models. Hence the quality of training is maintained.


We only have to look at the validation of Upset Prevention and Recovery Training (UPRT) simulations to see the problems that arise where we have little or no validation data. Good Subject Matter Experts (SMEs) are hard to find!


QTGs are also your friend for checking for regression after model changes: if any part of the simulation has been changed, or hardware in any system has been replaced, running the associated QTGs is an objective way of checking for unexpected consequences. Incidentally, so are the Acceptance Test Manuals (ATMs), documents that have a tendency to collect dust as soon as on-site acceptance is complete.
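One practical way to make this routine is to keep an explicit mapping from each subsystem to the QTG tests that exercise it, so that “which tests do we re-run?” is never a judgment call made late at night. A sketch, with hypothetical subsystem names and test identifiers:

```python
# Sketch: a change-to-QTG mapping so that any modification to the device
# automatically yields the list of tests to re-run. Subsystem names and
# test identifiers below are hypothetical.

QTG_MAP = {
    "autopilot_model":       ["2.d.1", "2.d.2", "2.d.3"],  # autopilot-engaged cases
    "control_loading_pitch": ["2.a.1", "2.a.4"],           # pitch channel tests
    "sound_system":          ["5.a.1", "5.b.2"],
    "visual_system":         ["4.a.1", "4.b.1"],
}

def tests_to_rerun(changed_subsystems):
    """Union of the QTG tests associated with the changed subsystems."""
    return sorted({test for sub in changed_subsystems for test in QTG_MAP[sub]})

print(tests_to_rerun(["autopilot_model", "control_loading_pitch"]))
# -> ['2.a.1', '2.a.4', '2.d.1', '2.d.2', '2.d.3']
```

The same mapping covers the bullet-point advice further down: change the autopilot model and the autopilot-engaged tests fall out automatically.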


In summary, the case for good: it is a proven methodology that works and, by the way, there is no credible alternative. Having an objective qualification methodology ensures consistency, and makes acceptance and continued qualification a science, not an art or a matter of opinion.



So, good or evil?


Well, of course we should probably pin our colours to the mast at this point; but as with most debates, the truth does not lie at either extreme. QTGs are definitely a cause of irritation among the people we meet, not least because of the need to re-run them all every year. The FSEMC established the Simulator Continuing Qualification (SCQ) Working Group in 2017 to look at better ways of handling continued qualification, and it issued a draft report, ARINC 449, in 2020. However, the consistency that the QTG approach brings to the industry has no credible alternative at this time.


So we would say: good, with a tinge of evil! That said, we do advocate the following:


  • Never, never, never let your TDM’s engineers run QTGs and/or produce an MQTG from their own laptops. Make sure that the MQTG is run on the device itself, using the tools you are actually getting, and preferably have your own team do it.

  • Make sure you have the ability to re-master QTGs yourself, without recourse to the TDM; we have seen some woeful tools delivered by TDMs in the past.

  • Drill it into your team that if any changes are made to the device, the associated QTGs need to be re-run before the TDM closes the issue. For example, if the autopilot model is changed in any way, run all the autopilot-engaged QTGs; if a control loading actuator is changed, re-run the QTGs for that channel (this may sound obvious, but we can assure you it doesn’t always happen).

  • Make sure your team has a copy of the Royal Aeronautical Society (RAeS) qualification handbook, and reads it.



How can Sim Ops help?


Well, we can’t resolve the debate! However, among our partners we have a wealth of experience to help you through the labyrinth. The critical time of acceptance of a new device is an area where we have assisted operators before, and we can offer on-site help. We can also arrange training and/or coaching for your team in the QTG process.

