FS #T07: Hierarchical Task Analysis

 

Background

The term Task Analysis denotes a variety of techniques used for collecting, classifying and interpreting data on the performance of systems that include at least one person as a system component. It is an explicit representation of what people are required to do, in terms of actions or thinking, to achieve a goal.

The peculiarity of Hierarchical Task Analysis (HTA) is that it presents tasks in a hierarchical structure, with the primary goal(s) represented at the top and tasks/subtasks represented below. As shown in the figure below, even very simple activities, such as preparing coffee with a traditional moka pot, may involve many steps when analysed in detail. Indeed, the basic idea of HTA is that what is normally taken for granted, and sometimes considered part of a normal repertoire of automatic actions, should be made explicit to support a design or redesign process with a user-centred view.
 

 

 

Key Concepts

  • Goal: what needs to be achieved;
  • Task: an individual part of the overall activity necessary to achieve a specified goal;
  • Objects: the agents, or the things used by the agents;
  • Subtask: the lowest level of the activity, not requiring further decomposition; what agents do, alone or with their objects and other agents (see the sketch below).
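
To make these concepts concrete, here is a minimal sketch in Python (the language and the moka pot decomposition are illustrative assumptions, not part of any HTA standard) showing how a goal/task/subtask hierarchy can be captured and printed with hierarchical numbering:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Task:
        """A node in the HTA tree: the goal at the root, tasks below it,
        and subtasks at the leaves."""
        name: str
        subtasks: List["Task"] = field(default_factory=list)

    def print_hta(task: Task, prefix: str = "") -> None:
        """Print the tree with hierarchical numbering (0, 1, 1.1, ...)."""
        print(f"{prefix or '0.'} {task.name}")
        for i, sub in enumerate(task.subtasks, start=1):
            print_hta(sub, f"{prefix}{i}.")

    # Hypothetical decomposition of the moka pot activity mentioned above.
    hta = Task("Prepare coffee with a moka pot", [
        Task("Fill the pot", [
            Task("Unscrew the pot"),
            Task("Fill the boiler with water"),
            Task("Put ground coffee in the funnel"),
            Task("Screw the pot back together"),
        ]),
        Task("Brew", [
            Task("Place the pot on the stove"),
            Task("Wait until the coffee rises"),
            Task("Remove the pot from the heat"),
        ]),
        Task("Pour the coffee"),
    ])
    print_hta(hta)

The depth of the tree is a choice of the analyst, which is exactly the granularity issue discussed below.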

 

Thanks to the HTA it is possible to compare different approaches to supporting a human activity in interaction with a tool, a procedure or another human operator. The HTA is an abstract representation of the main atomic elements of the activity: it may of course vary depending on the analyst who prepares it. For example, one analyst may choose a lower level of granularity than another, and different criteria might be chosen to separate one task from another. However, having an explicit representation of the activity is in itself a very important support for the design team. Different members of the team can use it as a reference when deciding how to develop a given design solution, whether started from scratch or redesigned. When used in a discussion within a design team, the HTA can of course be modified or refined as new design issues or implementation options are identified.

 

 

Benefits

The HTA can be used for different purposes.

  • As a way to anticipate the tasks in which users will experience more difficulty in achieving their goals, or may end up making errors that completely prevent them from reaching those goals.
  • To help spot the difference between the so-called “work as imagined” (what is prescribed by the procedure or manual) and “work as done” (what is observed in the work setting, during real operations).
  • As a basis for subsequent analyses, such as those that can be conducted with other, more specialized types of task analysis (see factsheet T08 for Tabular Task Analysis, T10 for the Operational Sequence Diagram, or T11 for the Critical Incident Technique) or the analysis of errors that can occur at an individual level (e.g. the TRACEr technique explained in factsheet T02).
  • To support function allocation, i.e. to guide the decisions about the tasks to be performed by different types of operators or by different types of hardware/software systems or by a combination of the two.
  • To determine at which level of automation each task could be supported (see as an example the figure below).

As for limitations, it should be considered that the HTA does not capture the contextual elements affecting the performance of an operator, and generally does not show in detail the equipment, tool or HMI the operator is interacting with. This is why the HTA should normally be complemented with other types of task analyses as well as with different HF techniques.

 

 

How It Works

Developing an HTA normally entails the following steps.

  1. Collecting information either from the users of a tool/procedure (descriptive approach) or from its designers (normative approach), to establish the tasks composing the activity concerned and how they can be grouped and organized in a hierarchy. The strategies for collecting such information normally include: (i) reading manuals and procedures describing the activity, if available; (ii) conducting individual interviews with the users, the designers or a combination of them; (iii) observing some of the tasks when executed by the users of the tool or procedure concerned.
     
  2. Preparing the diagram with the representation of the overall goals, tasks and sub-tasks.
     
  3. Revising the diagram with representatives of the people who participated in the interviews, or who are experts on or developers of the manuals/procedures.

 

 

Illustrative Example

Illustrative example: aircraft emergency
In the following example the HTA is used to identify the likely sources of actual or potential failure an air traffic controller (ATCO) may experience when managing an engine failure emergency occurring to an aircraft.

 

 

In order to conduct the analysis, the following steps are proposed:

Step 1: Purpose of the analysis – identify Human Performance issues, such as ATCO’s errors while dealing with a PAN-PAN situation. 

Step 2: Definitions of Goals – decide what the outcomes of the analysis should be, e.g.

  • design of new, automated tools to support the activity;
  • writing new procedures;
  • facilitating controller’s tasks and eliminating ambiguities.

Step 3: Data acquisition – done by consulting manuals and procedures, direct observation of the work, interviews with experts, performance records, or even simulations monitoring the ATCO’s activity via system logs or flight recorders.

Step 4: Decomposition diagram (Decomposition = the process of breaking goals down into constituent tasks. It can be continued until the required level of detail is reached).

 

 

Step 5: Recheck validity – thinking about the task, the personnel may realize that:

  • events do not always occur in a standard way; 
  • something which has never been known to happen could actually happen. 

An iterative process is recommended wherever possible by cross-checking between sources and professional operators, e.g. by comparing training manuals with experienced ATCOs’ judgement to obtain the most accurate and complete description of the task.

Step 6: Identify Significant Operations and Redesign the diagram if necessary – decide which of the ATCO’s tasks need further detail and whether there is any unnecessary information. If the ATCO’s tasks are correctly described, there is no need for further decomposition or reduction of the diagram.

Step 7: Generate and Test Hypotheses concerning Task Performance – the analysis should emphasize likely sources of actual or potential failure to meet overall task goals, such as:

  • Missed coordination between the ATCOs
  • Lack of training in emergency situations
  • Inadequate time efficiency 

The analyst should propose practical solutions (regarded as hypotheses) that must be tested. This is the only way to guarantee that the analysis is valid.

 

Illustrative example: Ship collision emergency during Strait passage 
In the following example the HTA is used to identify the likely sources of actual or potential failure an Officer of the Watch (OOW) may experience when managing a risk of collision between two ships in narrow waters.

 

 

Application of HTA to ship collision emergency: 

Step 1: Purpose of the analysis – identify a Human Performance issue, e.g. an Officer of the Watch (OOW) error during overtaking of another ship in a TSS (Traffic Separation Scheme).

Step 2: Definitions of Task Goals – decide what the outcomes of the analysis should be

  • design of new, automated tools;
  • recommending appropriate manoeuvring;
  • facilitating officer’s tasks and eliminating ambiguities.

Step 3: Data acquisition – observation of the passage, investigation of records, or even simulators. Monitor the ship navigation activities (ECDIS, VDR, ARPA, GMDSS communication, etc.) and check logbooks, ISM forms, Master’s standing orders and VTS recordings (if possible).

Step 4: Decomposition diagram (Decomposition = the process of breaking tasks down into constituent sub-tasks. It can be continued until the level of detail required is reached).

Step 5: Recheck validity – thinking about the task, the personnel may realize that:

  • events do not always occur in a standard way; 
  • something which has never been known to happen could actually happen. 

Compare simulator training sessions with the judgement of experienced Officers/Masters to obtain the most accurate and complete description of the task.

Step 6: Identify Significant Operations – decide which of the tasks and sub-tasks of the OOW need further detail. Redesign the diagram if necessary. In this case, the task to be performed is managing a ship collision emergency during the Strait passage.

Step 7: Generate and Test Hypotheses concerning Task Performance – the analysis should emphasize likely sources of actual or potential failure to meet overall task goals, such as:

  • Inadequate Manning on Bridge
  • Inadequate Policy, Standards and Application
  • Lack of Communication between ships
  • Poor risk assessment 
  • Operational blindness
  • Mental fatigue
  • Inadequate experience 

FS #T10: Operational Sequence Diagram

 

Background

Operational Sequence Diagrams (OSDs) are generally used for multi-person tasks. They map out each operator’s actions in sequence, showing how information is passed from one crew member to another, and how the team must function both together and with the available equipment and automation to achieve the task. The technique is particularly useful for examining critical operations.

An OSD often follows on from a Hierarchical Task Analysis (HTA), which states functionally what needs to be done, whereas the OSD highlights who does what, when, and with what information. As such, OSDs are useful for considering whether each crew member has the right information and displays/controls at the right time, making them useful from design, training, procedures and staffing-level perspectives. They can also be useful as a precursor to Human Reliability Analysis, as they give an indication of time pressure, and a risk analysis can consider what might happen if a key source of information (including a crew member) were lost or unavailable during an emergency.

An OSD can be developed with or without an HTA, but usually other data collection activities will have taken place such as Critical Incident Technique, Walk-through / Talk-through, Observation, or Real-time simulation, in order to develop the OSD. Discussion with operational people is essential to generate a realistic OSD.

 

 

Key Concepts

The key components of an Operational Sequence Diagram (OSD), usually represented as a table with a number of columns suited to the task, are as follows:

1. Time, indicating the exact moment or the timeframe in which the task is performed.

2. The goal of the operators, if this will evolve during the scenario, for example if the situation degrades significantly as time progresses (e.g. in a ship collision scenario the goal may start as collision avoidance, but may end with evacuation)

3. The different personnel involved in the task, including their location if not co-located

4. The system status - what is happening in the system and/or environment, whether or not the operators are aware of it 

5. The key information sources they have available 

6. Decisions/Actions/Communications (who does what in each time-slot; actions can include ‘monitoring’ if nothing active is happening)

7. Secondary tasks or distractions if appropriate

8. Analyst comments

 

 

Benefits

Operational Sequence Diagrams (OSDs) bring the task to life, making the team’s tasks and the required coordination more understandable, as well as the points in the task which may be vulnerable to error or be under time pressure, stress or significant uncertainty.

An OSD can help model the task and identify the following:

  • The required task ‘rhythm’ for effective coordination between human operators / crew members
  • Discrepancies between operator understanding and system state
  • Key decision points
  • Critical actions
  • Areas where operators don’t know what the system (automation) is doing and/or why
  • Potential ‘pinch points’ where there is little margin for error due to time pressure
  • Critical communications and communication vulnerabilities
  • Critical information sources / displays as well as areas for improvement
  • Staff shortages / criticalities

If the task is overly complex, and particularly if the nature of the task is recursive (so that there are ‘loops’ in the task), then this can become difficult to represent in an OSD. In some cases, e.g. where diagnosis is required or there are complex alarm systems involved, a tabular task analysis may be required to ‘drill down’ into how the interface is used by the human operators. 

 

 

How It Works

The task is described in linear time according to what each person does or needs to do, and what information sources they have available, in relation to how the task or incident unfolds. Usually this requires prior information gathering and/or working with operational experts to develop the OSD (see illustrative example). The table below illustrates the typical format of an OSD table with the key components to analyse. The content that should be included is briefly described in each column. 
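
As an illustrative sketch (Python; the column names follow the Key Concepts list above, while the dataclass, the renderer and the sample rows are invented assumptions), each time-slot can be captured as one row of such a table:

    from dataclasses import dataclass

    @dataclass
    class OSDRow:
        """One time-slot of an OSD, with the columns listed earlier."""
        time: str            # exact moment or timeframe
        goal: str            # the operators' goal, which may evolve
        personnel: str       # who is involved (and where, if not co-located)
        system_status: str   # what is happening in the system/environment
        info_sources: str    # key information sources available
        actions: str         # decisions/actions/communications in this slot
        secondary: str = ""  # secondary tasks or distractions, if any
        comments: str = ""   # analyst comments

    def render(rows):
        """Print the OSD as a plain-text table, one row per time-slot."""
        header = ("Time", "Goal", "Personnel", "System status",
                  "Info sources", "Decisions/Actions/Comms")
        print(" | ".join(header))
        for r in rows:
            print(" | ".join((r.time, r.goal, r.personnel, r.system_status,
                              r.info_sources, r.actions)))

    # Invented rows, loosely inspired by the wake vortex example below.
    rows = [
        OSDRow("t0", "Maintain cruise", "Captain, FO", "A/P engaged, stable",
               "TCAS, R/T, contrails", "Monitoring"),
        OSDRow("t0 + ~10 s", "Recover from wake upset", "Captain, FO",
               "Induced roll; A/P may disengage",
               "Flight parameters, outside view",
               "Captain covers controls; FO monitors flight path"),
    ]
    render(rows)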

 

 

Illustrative Example

Aviation Domain
Wake Vortex Case Example 

An Operational Sequence Diagram (OSD) was used in the SAFEMODE Wake Vortex case study. Incidents relating to wake vortex were reviewed, and then three commercial airline pilots who had experienced such incidents were interviewed, first using the Critical Incident Technique (CIT) and then to develop an OSD (both CIT and OSD were completed during a single day with the pilots), as shown below.

The ‘time’ element in this OSD is shown in qualitative terms, as wake events happen suddenly and are often over in 20-30 seconds. The human operators are the captain and first officer, and key instrumentation here relates to the status of the autopilot (A/P) which is usually flying the aircraft (a/c) in the cruise portion of the flight. ATC (Air Traffic Control) represents an outside agency who are generally directing the flight but who may be unaware of the existence of a wake threat. 

The pilots in this study found the OSD to be an excellent format for representing this type of event and their role and actions within such events. Some of the key learning points were as follows:

  • Currently the pilots build an image of the surrounding airspace by using R/T information, contrail, TCAS information and wind information + geometry of a/c pairs, if they are not informed by ATC. 
  • They would welcome a warning /alert from ATC so they could be prepared for such events, thus avoiding ‘startle response’, and enabling them to secure the cabin and ‘cover the controls’ in case autopilot disengages
  • They do not know the criteria for A/P disengagement, which appears to be aircraft type and age dependent
  • Existing procedures are useful (though there is little time to use them) as long as pilots are aware it is a wake encounter and not an instrument failure
  • The flight crew plan for manual intervention (in case of wake expectation) but keep the A/P engaged unless a corrective action is required. The need for corrective action is down to judgement.
  • During a sudden unexpected wake, the pilots may feel the induced roll is more significant than it actually is.
  • During the event there is less looking outside and more focus on the cockpit instrumentation and Captain-FO exchanges. However, if there is a head-up display (e.g. 787) it still allows for peripheral vision/ outside view.
  • The ‘pilot flying’ checks primarily the flight parameters (e.g. climb/ descend, bank angle, speed, attitude)
  • The Pilot not-flying would continue to monitor the flight path.
  • They also check the Flight Management System to make sure it does not go into a degraded mode.

 

Maritime Domain
Fire Discovery Procedure Example 

OSD can be applied to a great number of operations taking place on board ships, to represent the complex interactions between crew members, passengers and ship systems. In this specific case, existing procedures used by operators have been merged to describe generally established procedures for ship fire emergency evaluation.

As in the aviation domain, time indications are only qualitative, since they depend on a variety of factors such as the extent of the fire, the reaction time of all involved personnel, the time needed to evaluate the situation, etc.

The main actors in this assessment phase are:

  • First Notifier: a passenger or crew member that suspects fire onboard and activates alarm
  • First Responder: a crew member with fire fighting or safety skills on duty, able to locate fire or to assess its non-existence
  • Command Team: the Officers taking decisions. They can be Ship Master, OOW or other Officers.
  • Assessment Team: fire fighting personnel on duty, able to assess extent of fire and severity of the situation

The procedure described in the following table can be used for many purposes, among which we mention training, revision of existing procedures, determination of ‘bottlenecks’ and evaluation of the uncertainties the Command Team may have in any phase, thus triggering the addition of new systems or procedure changes to better address the risks identified.

 

[Table: fire discovery procedure]

 

FS #T11: Critical Incident Technique

 

Background

Critical Incident Technique (CIT) is one of the oldest task analysis techniques and can be used in two ways. The first is to identify the critical components of a job or task (including exceptionally good performance as well as ordinary or ‘nominal’ performance). This means identifying (often through observation and interview) what needs to be done, and what should not be done. The second purpose, more often used in safety circles and the focus of this factsheet, is to examine how people performed during challenging circumstances, e.g. during an incident or abnormal event. This can include: what happened; whether anything went wrong; how it was recovered; and what factors made it difficult or helped it succeed.

CIT can be further elaborated by asking those involved in the incident what could have made it better/worse, thus exploring human performance in variations of what actually happened. Mostly, however, CIT focuses on the facts as remembered, corroborated by other factual details wherever possible.

A weakness of the technique is that it is subjective, and ‘everyone is a hero in their own story.’ Nevertheless, personal renditions of events can often be compared to known facts and to other people’s recollections (i.e. of others who were involved in the event), so the ‘hero’ biasing effect can be countered.

CIT can be done as a one-to-one interview, or one or two people interviewing a group who were involved in the incident. An advantage of group interviews is that other details may come to light during the discussion, as no one person has perfect recall. Group interviews can also reduce the hero bias. The disadvantage of group interviews is ‘group-think’ wherein everyone gravitates towards a unified version of what happened, usually a version that protects the team involved and is more conservative than what actually happened. The obvious solution (see illustrative example) is to conduct some single-person interviews first, and then a group interview. 
(Flanagan, J. C., ‘The Critical Incident Technique’, Psychological Bulletin, Vol. 51, No. 4, July 1954.)

 

 

Key Concepts

CIT is a flexible interview approach – as such it requires no prior training.

A set of questions is asked. These are best posed as open-ended questions, and the interviewees should be encouraged to state what happened and why in their own words (their narrative).

CIT can be a single or a group interview, or both.

A set of ‘factors’ may be used to explore what was affecting human performance during the event. These factors can be used to inform Human Reliability Analysis / Risk Analysis if that is to be carried out. 

The answers require no special codification, though they often feed into other task analysis methods, such as Hierarchical or Tabular Task Analysis or the Operational Sequence Diagram (see the aviation illustrative example), or directly into design considerations.

This photo shows a near crash at the airport of Barcelona in 2014. The aircraft landing pulled up just in time to avoid the collision. The Critical Incident Technique can be used to analyze such near misses and bring to light the factors that influenced the incident.

 

 

Benefits

CIT can really add another dimension to a task analysis that is purely based on what people ‘should’ do. As such it adds important ‘colour’ and understanding of how stress plays a part. It can highlight what information (e.g. from displays or communications or observations) the people in the incident really paid attention to, and what they ignored, which is valuable information for a designer or procedure writer. It can also give useful insights into how actions between different people in the incident were coordinated, and the time it took for things to happen or be affected. It may identify issues or factors the designers, procedure writers and/or trainers had not previously considered. Incidents and accidents are where the design, procedures and training – in fact all the Human Factors in the system – are tested ‘through fire’, and you see what helped them make it through such challenging events.

More generally, CIT can also counter ‘hindsight bias’, whereby people (after the event) wonder why those involved in an incident or accident didn’t simply ‘do the right thing.’ This is usually because, in the heat of the moment, the ‘right thing’ was not clear. A relevant example of this was the landing of a plane on the Hudson River after a bird-strike to both engines. The flight crew were initially criticised for not trying to fly to the nearest airport. Only after further simulations, taking into account the required thinking time to make a decision, was it realised that if they had done so the plane would have crashed into a populated area with considerable loss of life. A CIT would have identified this thinking-time component, often underestimated by designers and procedure-writers.

The disadvantage of CIT is that it is subjective. Therefore, other more objective methods may be employed (e.g. prototyping and simulation) to verify any design decisions informed by CIT. Similarly, CIT focuses on one or two events that have actually happened, and insights should not be given disproportionate weight, as there are likely to be many other potential incident/accident scenarios that have yet to happen. Nevertheless, the fact that something has actually happened means it could happen again, and so some kind of design or other restorative action will be required. CIT will help ensure that the restorative ‘fix’ addresses the human element, making it more acceptable to future operators of the system.

US Airways Flight 1549 landing on the Hudson River. Photo Greg Lam Pak Ng

 

 

How It Works

The basic approach is very simple, and involves asking a short series of open questions, starting with an invitation to narrate the event.

 

If it relates to an incident or accident that has actually happened, the opening is along the lines of the following:
Please tell us in your own words what happened.

Typical follow-up questions are as follows:

  • Can you tell me what you were doing before the event? Were things quiet? Busy?
  • When and how did you notice that something was happening?
  • How did you react?
  • How did others around you act?
  • How was the situation resolved?
  • How did it affect you at the time, and how did you feel about it afterwards? 
  • Were you surprised or startled? 
  • How unsafe did it feel, on a scale of 1 to 10? 

Once the facts have been established, it is possible to ask about the factors that may have influenced their performance. This is best done first with an open question, so that they tell you in their own words, and then they can be shown a list and asked whether any of the factors listed below affected them.

  • Startle/surprise
  • Unfamiliarity with handling a wake event
  • Workload (high or low – please specify)
  • Displays
  • Team coordination
  • Experience
  • Communications/support
  • Vigilance / fatigue
  • Visual circumstances
  • Weather conditions 
  • Time of day / night-time

The final questions shift the interview from what did happen to what could have happened:

  • What could have made it worse on the day?
  • What might have made it better?
  • If a similar event happened again, would you do anything differently?
  • What would you have liked to have been aware of: 
    • before the event 
    • during the event 
    • after the event
  • Anything else you’d like to add?
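
Bringing the phases above together, here is a minimal sketch in Python (illustrative only; the phase names, the runner and the canned answers are assumptions) of how the protocol can be held as structured data so it is applied consistently across interviewees:

    # CIT interview protocol as data, mirroring the question lists above.
    PROTOCOL = {
        "narrative": ["Please tell us in your own words what happened."],
        "follow_up": [
            "Can you tell me what you were doing before the event?",
            "When and how did you notice that something was happening?",
            "How did you react?",
            "How was the situation resolved?",
            "How unsafe did it feel, on a scale of 1 to 10?",
        ],
        "factors": [  # shown only after the open question about factors
            "Startle/surprise", "Workload (high or low)", "Displays",
            "Team coordination", "Communications/support", "Vigilance/fatigue",
        ],
        "counterfactual": [
            "What could have made it worse on the day?",
            "What might have made it better?",
            "If a similar event happened again, would you do anything differently?",
        ],
    }

    def run_interview(answer_fn):
        """Walk the protocol in order and collect free-text answers."""
        record = {}
        for phase, questions in PROTOCOL.items():
            record[phase] = [(q, answer_fn(q)) for q in questions]
        return record

    # Example with canned answers instead of a live interviewee.
    notes = run_interview(lambda q: "<answer recorded here>")
    print(len(notes["follow_up"]), "follow-up answers captured")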

 

 

 

Illustrative Example

Aviation Domain
CIT was used to interview three pilots who had experienced Wake Vortex events, wherein their aircraft encountered the wake from an aeroplane ahead of them, creating moderate to severe turbulence for their aircraft. Specific variants of the CIT generic questions were asked:

  • Were you aware of the other aircraft? 
  • Did ATC warn you?
  • Could you see their condensation trail? Could you see the other aircraft?
  • Did you consider you might encounter a wake or was it a sudden surprise?
  • Did the autopilot disengage automatically?
  • Did you take manual control? How easy was it to control the aircraft?
  • Did ATC give instructions/information during or after the event?
  • Did you inform ATC?

An example of the type of material gained from one of the interviews is as follows:

The event occurred during daytime, and during a period of low alertness.

The captain saw the contrail of the aircraft ahead “coming down” which was a “surprise”.

The captain told the First Officer (FO) to put the seat belt sign on and move the seat forward to get ready for the encounter. No cabin crew warning was given.

The rolls of the vortex were visible: “rotating tubes” in clear air.

On a scale from 1 to 10 (10 being the most unsafe), the captain categorized the event as a 7 or an 8, mentioning “significant stress”. 

The roll was “sensed” as a 15° angle of bank, although it was 8°.

The startle of the encounter had passengers screaming but no injuries were recorded.

After the event, the flight crew contacted ATC. 

Currently the pilots build an image of the surrounding airspace by using R/T information, contrail, TCAS information and wind information + geometry of aircraft pairs, if they are not informed by ATC. 

With prior ATC instructions/warnings, e.g. a verbal alert several minutes before the possible encounter, pilots believe they would overcome or better prepare for the encounter, as it may occur during “low alertness” states.

Training should expose flight crew to en-route wake encounters both theoretically and practically. The pilots think the training will most probably focus on recovery as it is hard to simulate the “startle” effect. 

The top 6 factors (in no priority order) identified by the group of pilots were as follows:

  1. Seatbelt sign on / cabin crew info (would minimize the outcome of the wake in terms of passenger and cabin crew disturbance /potential injuries)
  2. Covering controls (being ready to take control)
  3. Vigilance / Situational Awareness (SA) / circadian rhythm and arousal (monitoring throughout the flight depending on flight crew alertness)  
  4. Startle effect (slower reactions or over-corrective input)
  5. Robust Autopilot (depending on aircraft type it might disengage faster)
  6. IMC (in cloud) / night (less “outside” view / lower alertness)

The CIT in this case was part of a sequence of task analyses for the Wake Vortex study. You can see how it informed the Operational Sequence Diagram (OSD) by reviewing the case study in the Factsheet for OSDs.

 

 

Maritime Domain
While approaching a harbour, a bulk carrier was met by an escort tug that was to assist the vessel to a mooring station. When approaching the harbour entrance, the vessel experienced a sudden loss of steering. A loss of steering is an unexpected event that can be caused by several mechanical issues, e.g. blackout, engine gearbox failure, steering engine failure on the rudder, rudder jam, or loss of starting air pressure (due to many consecutive manoeuvres).

Specific variants of the CIT generic questions were asked:

  • Were you aware of any ongoing repair or maintenance orders that might affect steering control?
  • Were you in communication with the engine control room (ECR)?
  • Were you aware of any bridge alarms that would indicate abnormal conditions?
  • Were you confident about the manoeuvring safety zones and traffic conflicts nearby?
  • Did you communicate the issue at first indication to others on the bridge or in the ECR?

The material gained from the crew interviews identified reasons for steering failure, and ways in which human factors, automation and work practices may provide procedural, pedagogical and/or mediating actions.

Blackout:

The emergency generator should start and provide enough power for essential equipment such as the steering engine. If the power management system is working and there are auxiliary engines ready and in automated mode, they should start and connect, and equipment should start automatically in sequence. The emergency generator stops automatically when the auxiliary engines are up and running and can take the load.

Routines and check lists should be in place to get everything up and running. Communication with the bridge is an immediate action to inform about the situation.

In a critical situation (narrow passage, arrival or departure), the ECR must be manned by the chief engineer, who is usually responsible for informing the bridge about the progress.

Problems with the steering engine:

A hose could break, causing a rapid drop in hydraulic pressure. Usually there are two electrohydraulic motors, and in critical situations both are running. If one stops, the other should start automatically.

Bridge should be informed about this incident.

Loss of remote steering:

If the remote steering from the bridge is not working the steering engine can be operated from the steering gear room. 

The hydraulic valves are then hand operated according to the bridge instructions. A rudder indicator is fitted in the steering gear room and the engineer is following the instructions from the bridge.

The communication is through a fixed mounted emergency phone or using VHF.

Checklists and routines need to be practiced regularly (re. SOLAS)

Problems with the rudder:

Mechanical failure causes the rudder to stick.

Problems with the main engine:

There are numerous reasons for the main engine to stop:

  • Fuel quality 
  • Shutdown alarm on an auxiliary engine
  • Could also be a mechanical failure which forces a stop of engine
  • Leaks of fuel, lubricating oil, cooling water etc.
  • Problems with the gear
  • Mechanical failure
  • High-temperature lubricating oil.

Problems with basically any critical transmitter.

The top factors (in no priority order) identified by the bridge and ECR teams were:

  • An emergency plan with the escort tugs (if within arrival and departure protocols) should be reviewed (similar to a pilotage plan)
  • Better communication between bridge and engine control room
  • Clear understanding of operations and detection of abnormal sensor data outputs
  • Training for immediate control (emergency steering system of the steering engine)

  • Bridge team better aware of the sensor mimics and alarms from the Engine Department.

Photo by Daniel Norris

 

 

FS #T13: Prototyping and Real-Time Simulation HP Measures

 

Background

A real-time simulation (RTS) is an experiment that uses experimental facilities to simulate, in real time, the environment under assessment; for example, ATC simulators replay recorded air traffic in real time. In the US the term human-in-the-loop (HITL) simulation is used for the same technique, which correctly puts the focus on the human operator. A real-time simulation involves the human operator in order to design, improve and validate new concepts and system functionalities.

A prototype is often used in a lower-maturity phase of a project; however, it is not just a simplification. Even a simple prototype can help the developer test new ideas in a relatively straightforward and agile way. While prototypes focus on human-machine interfaces, simulations are used to mimic the most important behaviour of the entire system, which allows the focus to be placed on working methods and communication.

The involvement of the human operator in the experiment facilitates the collection of objective and subjective human performance data, used to evaluate and improve the system design.

Digital tower simulator at Eurocontrol. Photo Eurocontrol

 

 

 

Key Concepts

The real time simulation as an evaluation tool allows human operators to test and validate new concepts and new automation procedures. To be able to draw conclusions on the benefits of the proposed new concepts, procedures and communication means, human performance data have to be collected.

Those data can be:

Objective Measures:

  1. Number of flights controlled in a sector of airspace in a certain time period
  2. Interactions with the system (e.g. number of mouse clicks)
  3. Number of instructions given by controllers to aircraft pilots
  4. Number and length of verbal communications
  5. Data on the user’s gaze (using eye-movement tracking)

Subjective Measures:

  1. Perceived workload (e.g. the Bedford Workload Scale, NASA-TLX (NASA Task Load Index), the ISA (Instantaneous Self-Assessment of workload) rating scale, etc.)
  2. Perceived situational awareness (e.g. the China Lake rating scale and SASHA (Situation Awareness for SHAPE; SHAPE = Solutions for Human Automation Partnerships in European ATM))
  3. User trust (SATI, the SHAPE Automation Trust Index)
  4. Potential for human error
  5. Teamwork and cooperation
  6. User acceptance (CARS, the Controller Acceptance Rating Scale)

This and further information on HP measures can be found on the eHP repository on the Eurocontrol website. 
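
As an illustrative sketch of how such measures feed the reference-versus-solution comparison described under Benefits below (Python; the run records and all values are invented assumptions), objective and subjective measures can be aggregated per scenario:

    from statistics import mean

    # Hypothetical per-run records: one objective measure (number of
    # instructions) and one subjective measure (ISA workload rating).
    runs = [
        {"scenario": "reference", "instructions": 42, "isa_workload": 3.2},
        {"scenario": "reference", "instructions": 45, "isa_workload": 3.5},
        {"scenario": "solution",  "instructions": 38, "isa_workload": 2.9},
        {"scenario": "solution",  "instructions": 36, "isa_workload": 2.7},
    ]

    def summarise(scenario):
        """Average the measures over all runs of one scenario."""
        subset = [r for r in runs if r["scenario"] == scenario]
        return {
            "mean_instructions": mean(r["instructions"] for r in subset),
            "mean_isa_workload": mean(r["isa_workload"] for r in subset),
        }

    for scenario in ("reference", "solution"):
        print(scenario, summarise(scenario))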

 

 

 

Benefits

The benefit of these types of simulations is that, apart from being a safe way to conduct experiments, certain situations can be repeated, and events can be injected to simulate emergency as well as non-nominal situations. The system can simulate the future, which allows the expected environment to be compared with the reference environment and makes quantification of the benefit possible. The data from the simulated reference scenario can be compared with the simulated future scenario in a stable experimental design, allowing conclusions to be drawn on the direct impact of the proposed solution.

A more general benefit can be acceptance by the user community. For example, if controllers or pilots participating in an RTS of a new system design are genuinely pleased with it, this acceptance can spread through the rest of the user community.

There are few disadvantages of RTS and other simulation approaches, other than the fact that they can take a lot of preparation and resources. They are best run with licensed personnel (e.g. air traffic controllers and line pilots), though sometimes for more experimental or futuristic system simulations, test pilots or controllers involved with future systems design may be involved. 

With any RTS, usually a battery of measures (objective and subjective) is used, in an attempt to triangulate between them to determine how performance is being affected and where to improve. An advantage of using real subjects (licensed personnel) is that they can often tell at an overall level whether the system will be an improvement or not, and where it may need to be ‘tweaked’ to deliver the best performance.

Usually simulations are not so useful for safety issue quantification, as even a large-scale simulation may have in total less than a hundred ‘runs’, and unless there is something radically wrong with the design, safety issues do not usually appear during the simulation campaign. Nevertheless, they can happen, or else precursor events may arise which could, if left to develop, lead to safety issues. The point here is that a simulation without any observed safety issues does not guarantee that the concept under development is indeed safe. That is for other approaches to determine (e.g. HRA and risk analysis).

 

 

How It Works

Validation simulation
Pre-simulation

Human performance objectives, success criteria (evidence) and the respective measurements have to be identified and formulated. The simulation is planned as an experiment, ideally comparing a reference scenario with the solution scenario, respecting basic quality indicators for good experimental design. Procedures and operating methods shall be defined before the simulation.

During-simulation

The simulation is conducted with the human operator interacting with the system, applying the defined procedures, working methods, etc. Data on the previously defined measurements are collected. The collection of data can be performed automatically by the system, by observers, and by the human operator. The human operator provides his/her feedback in pre-defined questionnaires and post-run debriefs.

Post-exercise

In post-exercise questionnaires (usually administered immediately after each simulation run), evidence on workload, situational awareness, trust, HMI usability and system errors is collected. A post-simulation questionnaire can be used to gather more qualitative data and insight into the overall perception of the runs on a certain concept (e.g. trust, usability and acceptance). Compared to the post-exercise questionnaire, the post-simulation questionnaire (administered after the entire simulation exercise) contains more open questions, allowing the ATCOs to detail their overall experience. Observations can be carried out for each of the runs for all active working positions. The observers take notes on any system errors, pilot errors and ATCO errors, as well as on the working methods of the ATCOs and their communication. Structured debriefs shall be conducted after every individual exercise run, with a more detailed debrief at the end of the day. The debrief allows the human operators to give feedback on the previous run in relation to the position they were assigned to, supported by the observers’ notes. The final debrief can follow a wider approach, allowing each human operator to give general feedback on the runs completed throughout the simulation.

Post-simulation

The data have to be analysed and interpreted in order to provide a recommendation to the decision maker on the concept implementation.

Prototype session

A prototype session might follow a less tight experimental schedule and be more agile, able to react to proposals for system improvement. The techniques used to collect data can be the same as for the simulations; the topics, however, might focus more on HMI usability.

 

 

Illustrative Example

Aviation domain
The aviation example presented here derives from a study conducted in a project of the SESAR Programme (Single European Sky ATM Research), dealing with the innovation of Air Traffic Management in Europe. In this specific project, the innovation consisted of testing a set of enhanced arrival procedures intended to facilitate a reduction of the environmental impact (e.g. noise, fuel) and an increase of runway throughput.

Real-Time Simulation
One of a series of Real-Time Simulations (RTS1) assessed the application of the Increased Glide Slope (IGS) concept at Paris Charles de Gaulle (CDG) airport and its approach environment. The enhanced arrival procedures were simulated as flown with GLS (GBAS [Ground-Based Augmentation System] Landing System), while the conventional approaches were simulated as flown with ILS (Instrument Landing System). The enhanced arrival procedures were supported by an ATC support tool, the Optimised Runway Delivery (ORD) tool, designed to increase runway throughput by helping the ATCOs determine and achieve the required aircraft spacing/separation when using the enhanced arrival procedures. The Human Machine Interface (HMI) of the ORD tool shows an initial and a final target distance: the initial target distance indicator takes the compression effect into account, while the final target distance indicator shows the distance that has to be achieved at the threshold.

The aim of this exercise was to assess:

  • the usability and acceptability of the IGS procedures in a large and congested airport
  • the usability and acceptability of the sequencing & separation tool (ORD) that facilitates the management of mixed ILS and IGS approach procedures by not degrading workload and situational awareness and without reducing capacity
  • the impact of the IGS communication exchanges/ phraseology proposed for IGS procedures
  • the usability of the HMI for the IGS procedures, and
  • the acceptability of the number of aircraft flying the IGS procedure.

Three traffic samples were simulated:

  • A reference scenario with no enhanced arrival procedures (RECAT)
  • A traffic sample with 25% of the aircraft being capable of flying IGS. (EAP 25%)
  • A traffic sample with 100% of aircraft being able to fly IGS. (EAP 100%) 

The aircraft labels presented on the controller working position contained the information about the aircraft equipment: G for aircraft able to fly IGS and I for aircraft able to fly ILS only.

Objective and subjective data were recorded during the validation exercise: objective data included the number of aircraft on frequency, the number of ATCO instructions, the number of separation infringements, etc. Subjective data included workload, situational awareness and the usability of the concept and tool.

Objectives on Safety, Operational Feasibility, HMI usability, ATC support tool usability and the appropriateness of the phraseology were formulated.  

Validation Objectives
The following objectives were formulated:

  1. To confirm that Increased Glide Slope (IGS) approach procedures do not negatively affect safety from an ATC perspective
  2. To confirm that the IGS is operationally feasible from an ATC perspective
  3. To confirm that ATC separation delivery support functions for IGS are usable and acceptable
  4. To confirm that ATC HMI for IGS is usable and acceptable for the controller
  5. To confirm that the phraseology used by ATCOs for IGS is clearly understandable

For the sake of brevity, only Objective #3 will be considered here for an illustrative purpose.

Success Criteria for Objective #3
The following success criteria were defined:

  • The usability of the support tool (separation tool) is rated as being acceptable 
  • The support tool (separation tool) is rated as being useful  
  • The support tool (separation tool) supports the application of the IGS procedure 
  • The ATCOs trust the support tool (separation tool) that facilitates the application of IGS 

Excerpt of the Results
The SHAPE Automation Trust Index (SATI) was used to provide an assessment of trust and usability of the ORD tool, and was hence applied only for the runs in which the enhanced arrival procedures were instructed. The index encompasses six questions rated on a Likert scale from 0 to 6, corresponding to answers from “never” to “always” (negative to positive).
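
Before looking at the results, here is a minimal sketch (Python; the response matrix is invented, and only the six questions and the 0-6 scale come from the description above) of how per-question and overall SATI scores can be computed:

    from statistics import mean

    # Each row: one respondent's ratings for the six SATI questions
    # (0 = never ... 6 = always).
    responses = [
        [5, 5, 4, 5, 6, 5],
        [4, 5, 5, 4, 5, 4],
        [5, 6, 5, 5, 5, 5],
    ]

    per_question = [mean(col) for col in zip(*responses)]
    overall = mean(per_question)

    for i, score in enumerate(per_question, start=1):
        print(f"Q{i}: {score:.2f} / 6")
    print(f"Overall SATI index: {overall:.2f} / 6")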

After breaking down the statistical data for each of the six questions of the scale, the results indicate a stable response rating for both INI and ITM controllers, who were confident in working with the ORD tool. They found it reliable, robust and easy to understand.

[Table: SATI ratings for the six questions]

Excerpt of the Conclusions
The ATCO HMI had a sequencing and separation tool (the ORD tool) implemented to facilitate the reduction of separation between two aircraft on different glide slopes. The findings show that the tool supported the concept and improved ATCO performance. The ORD tool facilitated common situational awareness, which allows the INI to take over some tasks from the ITM to offload her/him if needed. The ORD tool and the associated procedures were considered useful and easy to use.

Excerpt of the Recommendations
The ORD tool should be fine-tuned to take the wake vortex figures into account in combination with the assigned glide slope. The availability of the ORD tool and its benefits should be supported by further HMI improvements. The HMI should be consistent and indicate, as it does for ILS, the area where the aircraft shall intercept the GLS path. This would reduce the communication/coordination needs between INI and ITM, as well as the communication between ITM and pilot.

 

Maritime Domain
Even though the structure and scope of the simulation may differ, the maritime sector also uses real-time simulations for testing purposes. An example is provided below:

Real-Time Simulation
RTS was used to evaluate the risk of ship collision during passage through the Istanbul Strait. The case is that of the MV Vita Spirit, which collided with a mansion on the coast of the Istanbul Strait after the ship lost control due to main engine failure during passage through the Strait. The case was simulated on the NTPRO-5000 Full Mission Bridge Simulator. The Strait passage procedures and environmental conditions were simulated under the control of the VTS (Vessel Traffic Services) operator, and the enhanced approaching and passage procedures and conditions were supported by the simulator operator (acting as VTS). In the case study, the ship’s particulars (size and type of ship, engine power, manoeuvring ability, thrusters, anchors, rudder type and other specific features), other ships passing through the Strait, passenger ships, fishing boats and sailing boats were inserted into the scenario. The BRM (Bridge Resource Management) team consisted of ship master, pilot, OOW (Officer of the Watch) and helmsman. The simulation was performed in a real-time environment including time of day (early morning), current and wind, weather conditions, traffic congestion and VTS control. The simulation training was performed for one day, followed by four measured sessions. One operator working position was simulated. One bridge team consisted of pilot, master, OOW and helmsman, and the other group consisted of master, OOW and helmsman (without pilot).

Objective and subjective data were recorded during the exercise. Objective data included the number of ships passing through the Strait, the number of passenger ships, fishing vessels and boats, the time intervals of CPA (closest point of approach) and TCPA (time to closest point of approach), response time for collision avoidance, tug positioning distance, etc. Subjective data included situational awareness, communication, quality of collaborative working on the bridge, stress, fatigue (if any), etc.

Validation Objectives
The following objectives were assessed:

  • To confirm that taking a pilot enhances the safety of navigation in the Strait passage.
  • To confirm that VTS does not negatively affect the decisions of the ship master or pilot (if any) during passage or collision-avoidance manoeuvring.
  • To confirm that the TSS (Traffic Separation Scheme) is operationally feasible from the perspective of the ship master.
  • To confirm that the terminology used by the VTS operator is clearly understood by the ship master.

Excerpt of the Results
Criterion-referenced testing was applied to a particular group of pilots and masters during passage through the Istanbul Strait. The scores of the individuals were ranked, and the results permitted comparison of performance among individuals or across a group of individuals assumed to be similar in make-up and knowledge. In view of the findings, it was observed that the ship which took a pilot during the Strait passage completed her passage much more safely than the ship without a pilot.

Excerpt of the Recommendations 

  • The quality of collaborative working on the bridge should be improved to minimize risk. 
  • Situational awareness and communication with VTS should be improved for the BRM. 
  • Coordination between master and VTS operator is critical in order to avoid collision/contact during passage through the Strait. 
  • Tug assistance should be available in case of emergencies.      

FS #T19: Safety Culture

 

Background

The roots of Safety Culture are spread across several industries (oil and gas, nuclear power, space, air transport, rail, medical and maritime), the term being first mentioned officially in a report on the Chernobyl nuclear power accident in 1986. The International Atomic Energy Agency (IAEA) used safety culture to explain the organizational conditions that led to the violations by the front-line operators that created the path to disaster. A weak safety culture is seen as a strong contributory factor in various major accidents, such as the King’s Cross underground station fire (Fennell, 1988), the Herald of Free Enterprise passenger ferry sinking (Sheen, 1987), the Clapham Junction passenger train crash (Hidden, 1989), the Dryden air crash (Maurino et al., 1995), the Überlingen mid-air collision (2002) and, more controversially, the two recent Boeing 737 MAX air crashes, where poor safety culture appeared to apply more to the design, development, validation and oversight phases.

Even though there is no single definition of safety culture, various authors such as the International Nuclear Safety Agency Group (INSAG-4, 1991), Cox and Cox (1991), the UK Health and Safety Commission (HSC 1993) and Guldenmund (2000) agree on what safety culture represents. In broad terms, they all endorse the idea that safety culture embodies the practices, attitudes, beliefs, norms, perceptions and/or values that employees or groups of employees in a company share in relation to managing risks and overall safety. In addition, EUROCONTROL (2008) specified that safety culture can be described as “the way safety is done around here”, suggesting that it needs a practical approach.

 


Traditionally, safety culture is applied to operational organizations such as air traffic organizations (most in Europe undertake periodic safety culture surveys), certain airlines and, more recently, airports. However, the approach has also been applied to a research and development centre where the focus was on designing new operational concepts for air traffic management (Gordon et al., 2007). While it is difficult to directly link equipment design and safety culture, and it is perhaps impossible for a designer to predict the safety culture that may evolve when their design goes into operation, there are some salient points that designers need to consider:

  • Safety culture surveys sometimes do raise issues concerning design and the need for design improvement.
  • Design of equipment, workplaces or interfaces that are cumbersome, difficult to use or complex/confusing may lead human operators to perform ‘workarounds’ that are easier in practice, but possibly less safe in certain circumstances.
  • Whenever design choices are being made and there is a safety element, especially when there is a potential safety-efficiency trade-off, the designer should favour safety. As one senior project manager put it, safety at any cost doesn’t make sense, but safety as a starting point does.
  • In stressful or emergency scenarios, where the operators have to make judgement calls between safety and productivity, if the indications in the interface are not clear and integrated so as to help rapid decision-making that errs on the side of safety and caution, then a different outcome is likely to arise. Designers must ensure that, in the heat of the moment, the required safe action will be as clear as it can be.
  • Designers must be clear as to which equipment and interface elements are safety-critical. Moreover, the chief designer must have a holistic (rather than piecemeal or fragmented) view of the system and of the safety-critical role of the human operators, as well as a clear idea of the competence of the intended operators.
  • The designer cannot ‘leave safety to the safety assessor’, or assume that oversight authorities can pick things up later. This is poor design culture and, frankly, no one understands design intent better than the designer. The safety role of the human operator must be considered from the outset by the designer and be ‘built in’.
  • The designer cannot assume something is fit for purpose based on a few prototyping sessions with a few operational experts. Proper validation with a range of typical end users, and with abnormal as well as normal scenarios, needs to occur.
  • Designers should not underestimate the positive impact of design on safety culture, particularly where new and better equipment (user-friendly, making use of human aptitudes) enters a workplace which has hitherto been dogged by equipment problems.
  • Designers should read accident and incident reports. This is the only way to look ahead, and to see how well-intended designs can end up in accidents.
  • Designers should stay close to real operations and operators as much as possible, and their design organizations should encourage and enable such contact. This is the best way for designers to see around the corner, and to understand the users and their working context and culture as they evolve.

More generally, safety culture (or lack of it) can of course be felt by those working in design organizations. If, for example, as a designer or engineer you think you are being pressurized to produce or accept a poorer design because of commercial pressures, who are you going to tell, and will they take you seriously and support your safety concern, or will they bow to the commercial pressure?

 

 

Key Concepts

Safety culture has a number of dimensions, such as the following (Kirwan et al, 2016):

  • Management commitment to safety (extent to which management prioritize safety);
  • Communication (extent to which safety-related issues are communicated to staff, and people ‘speak up for safety’);
  • Collaboration and involvement (group involvement and attitudes for safety management);
  • Learning culture (extent to which staff are able to learn about safety-related incidents);
  • Just culture and reporting (extent to which respondents feel they are treated in a just and fair manner when reporting incidents);
  • Risk handling (how risk is handled in the organisation);
  • Colleague commitment to safety (beliefs about the reliability of colleagues’ safety behaviour);
  • Staff and equipment (extent to which the available staff and equipment are sufficient for the safe development of work);
  • Procedures and training (extent to which the available procedures and training are sufficient for the safe development of work).

Safety culture is typically evaluated using an online questionnaire of validated questions that is distributed across the organisation and filled in anonymously (usually taking about 10 minutes). Based on the responses, analysts then identify the safety culture ‘hotspots’, which are then discussed in confidential workshops, leading to recommendations for the organisation to improve its safety culture. At a macro level, a ‘spider diagram’ is typically produced from the results, highlighting which safety culture dimensions need most attention. In the example figure, ‘Colleague commitment to safety’ is doing well in safety culture terms, whereas ‘Staff and Equipment’ requires attention.
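
As an illustrative sketch of this macro-level analysis (Python; the dimension names come from the list above, while the item scores and the hotspot threshold are invented assumptions), dimension means can be computed from the anonymised questionnaires and the weakest dimensions flagged for the workshops:

    from statistics import mean

    # Hypothetical mean item scores (1-5 Likert) grouped by dimension.
    scores = {
        "Management commitment to safety": [3.9, 4.1, 3.8],
        "Communication": [3.4, 3.2, 3.6],
        "Colleague commitment to safety": [4.5, 4.4, 4.6],
        "Staff and equipment": [2.8, 3.0, 2.7],
    }

    THRESHOLD = 3.5  # assumed cut-off below which a dimension is a 'hotspot'

    # List dimensions from weakest to strongest, flagging the hotspots.
    for dim, items in sorted(scores.items(), key=lambda kv: mean(kv[1])):
        avg = mean(items)
        flag = "  <-- hotspot, discuss in workshops" if avg < THRESHOLD else ""
        print(f"{dim}: {avg:.2f}{flag}")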

 

 

 

Benefits

Safety culture surveys, particularly when anonymous and carried out by an independent organisation, are often seen as one of the only ways to find out all your risks and what people really think about the state of safety and safety culture in an organisation. They can pinpoint both current issues that can be corrected (so-called ‘quick wins’) without too much trouble, as well as deeper problems that may be entrenched in the organisational culture. Most safety culture surveys do identify the safety culture strengths of an organisation as well as vulnerabilities, so there is always some ‘good news’ as well as areas for improvement. Because the approach uses workshops of operational personnel, usually the recommendations that arise from surveys are useful and practicable, since they come from the organisation itself. 

The survey process is long, however, often taking nine months from the first decision to run a survey to having a report with recommendations. Management need to be committed to the process and realise that it is not a quick fix, and that there will be work to do afterwards. If management are doing the survey just to gain ‘a tick in the box’, it may be better not to do one at all.

 

 

How It Works

In many applications in the aviation domain, two main phases are used to evaluate safety culture in a safety culture survey:

Phase 1: Questionnaire 
The safety culture survey uses an electronic version of the Safety Culture Questionnaire, administered over a period of typically three weeks. The confidential data are anonymised by the London School of Economics and then independently analysed by EUROCONTROL.

Phase 2: (Optional) Workshops 
After an initial analysis of the questionnaire results, there is a visit to the organisation’s premises to hold a number of confidential workshops with front-line staff (without managers present) to clarify and further explore certain aspects emerging from the questionnaires. An example of what might be discussed in such a workshop is indicated by the figure below, which shows results for an organisation in the safety culture dimension ‘Staff and equipment’. With respect to the questionnaire statement ‘We have the equipment needed to do our work safely’, more than 70% of respondents were in agreement (the green bar), while the rest were either unsure or in disagreement. The workshop would explore the equipment issues or concerns underlying this response.
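As a toy illustration of this statement-level breakdown, the snippet below bands invented 1–5 Likert responses to a single statement into agree / unsure / disagree percentages. The banding rule is an assumption for illustration, not the survey’s actual scoring.

```python
from collections import Counter

# Invented 1-5 Likert responses to one questionnaire statement,
# e.g. "We have the equipment needed to do our work safely".
responses = [5, 4, 4, 3, 5, 2, 4, 4, 1, 5, 4, 3, 4, 5, 4]

bands = Counter(
    "agree" if r >= 4 else "disagree" if r <= 2 else "unsure"
    for r in responses
)
for band in ("agree", "unsure", "disagree"):
    print(f"{band:>8}: {100 * bands[band] / len(responses):.0f}%")
```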


Illustrative Example

Aviation domain
It should be noted that the example reported here was seen as an exploratory survey and was focused on a single segment of the organisation – Airbus Design – rather than being company-wide (see Kirwan et al, 2019).

For the Airbus survey, the focus was on the Design part of the organisation, which involves around 2,000 of the 70,000 employees. This survey required considerable tailoring of the questionnaire, as the job of design and systems engineering is somewhat different from airline operations, even if most of the underlying issues (except fatigue) and dimensions remain relevant. The tailoring worked, with the same issues raised in the workshops as via the questionnaires. Participation in the workshops by designers from all three of Airbus’ main design locations (France, UK, and Germany) was intense. The general principle espoused in each workshop was that a safety defect in the design of an aircraft simply could not be allowed to happen; therefore, any safety issue had to be resolved carefully and thoroughly before continuing, even if resolution efforts delayed progress toward production. One observation from the study team was that the managers in the management workshop were connected with safety and detailed safety concerns, more so than is often the case for middle managers. This is because many of the managers had held operational positions or positions ‘close to operations’, so they understood the operational pressures that could affect airlines and pilots. One manager had been to an accident site after an air crash, and noted that after such an experience ‘safety never leaves you.’

The workshops focused a lot on internal communications between the different design departments, as well as the resolution of safety issues that could delay production, and the importance of middle manager support during such resolution periods. 

A number of constructive recommendations were made to Airbus in the final survey report.
 


Maritime domain
A safety culture assessment and an implementation framework were developed by the University of Strathclyde to enhance maritime safety. The focus was on the development of a safety culture assessment tool that covers all of the safety-related aspects of a shipping company, so that measures can be taken both proactively and reactively to enhance the safety of the shipping industry.

Questionnaires were developed with an interdisciplinary group to ensure that they capture the right information for a comprehensive analysis. Each safety factor has specific questions addressing the attitudes and perceptions of crew members and shore staff. A total of 85 questions are asked, under the headings of communication, employer-employee trust, feedback, involvement, problem identification, promotion of safety, responsiveness, safety awareness, and training and competence.

The aim of the surveys is to analyse strengths and weaknesses in the company and to benchmark crew and shore staff’s attitudes towards safety against each other. Survey participants were asked demographic questions, questions on the safety factors, and open-ended questions about company policies. The safety culture assessment framework requires commitment from all of the bodies in a shipping company for successful implementation.

Based on the findings from the questionnaires, a set of recommendations was provided to the shipping company to improve communication, employer-employee trust, feedback, involvement, mutual trust, problem identification, responsiveness, safety awareness, training and competence, and the promotion of safety.


SOAM – The Systemic Occurrence Analysis Method

 

Background

The Systemic Occurrence Analysis Method (SOAM) is a process for conducting a systemic analysis of data collected during a safety occurrence investigation, and for summarising and reporting this information using a structured framework and standard terminology. It aims to remove the spotlight from the errors of individuals and to identify factors at all levels of the organisation or broader system that contributed to the safety occurrence. A correct application of SOAM will identify systemic safety deficiencies and guide the generation of effective recommendations to prevent recurrence of such events. SOAM helps to:

Establish what happened
Identify local conditions and organisational factors that contributed to the occurrence
Review the adequacy of existing system controls and barriers
Formulate recommendations for corrective actions to reduce risk and prevent recurrence
Identify and distribute any key lessons from the safety occurrence
Detect trends that may highlight specific system deficiencies or recurring problems.


 

Key Concepts

The Systemic Occurrence Analysis Method is one of several accident analysis tools based on the principles of the well-known "Swiss Cheese Model" of organisational accidents. SOAM is a process for conducting a systemic analysis of the data collected in a safety occurrence investigation, and for summarising this information using a structured framework and standard terminology. As with some root-cause analysis investigation methods, SOAM draws on the theoretical concepts inherent in the Reason Model, but it also provides a practical tool for analysing and depicting the inter-relationships between all contributing factors in a safety occurrence. SOAM allows the investigator to overcome one of the key historical limitations of safety investigation: the tendency to focus primarily on identifying the errors – those intentional or unintentional acts committed by operators – that lead to a safety occurrence.

Reason's original model has been adapted and refined within SOAM. The nomenclature has been altered in accordance with a "Just Culture" philosophy, reducing the implication of culpability and blame for both individuals and organisations. In SOAM, 'Unsafe Acts' are referred to as Human Involvement, 'Psychological Precursors of Unsafe Acts' as Contextual Conditions, and 'Fallible Decisions' as Organisational and System Factors.


Benefits

SOAM forces the investigation to go deeper than a factual report that simply answers questions such as “What happened, where and when?" First, data must be collected about the conditions that existed at the time of the occurrence which influenced the actions of the individuals involved. These in turn must be explained by asking what part the organisation played in creating these conditions, or allowing them to exist, thereby increasing the likelihood of a safety occurrence. SOAM thus supports the fundamental purpose of a safety investigation - to identify and understand the factors that contributed to an occurrence and to prevent it from happening again.

SOAM is aligned with, and supports, "Just Culture" principles by adopting a systemic approach, which does not focus on individual error, either at the workplace or at management level. It avoids attributing blame by:

Removing the focus from people's actions, and instead seeking explanation for the conditions that shaped their behaviour;
Identifying latent organisational factors that allowed less than ideal conditions to exist, under which a safety occurrence could be triggered.

As with the original Reason’s Model, SOAM can be applied both reactively and proactively. The process can be applied to any new occurrence, and is also suitable for the retrospective analysis of previously investigated occurrences in an attempt to extract additional learning for the promotion of safety. SOAM can also be applied proactively to generic occurrences or hypothetical events. These applications result in a comprehensive analysis of the absent or failed barriers and latent conditions that are commonly found to contribute to such events, thereby identifying areas of organisational weakness that need to be strengthened to improve safety and prevent future occurrences.


How It Works

The SOAM process follows seven steps: gathering data, identifying absent/failed barriers, identifying human involvement, identifying contextual conditions, identifying organisational factors, preparing the SOAM chart, and formulating recommendations.

GATHERING DATA:
While there is no definitive or prescribed method for the gathering of investigation data, it is useful to gather data within some form of broad descriptive framework, to help with the initial sorting of facts. The SHEL Model provides the basis for such a descriptive framework.
Data should be gathered across five areas (the four original areas of the SHEL model, and an extra fifth element - organisation):

Liveware
Software
Hardware
Environment
Organisation

While the data gathering and analysis phases in an investigation are typically depicted as discrete, in reality they are part of a recursive process. After an initial data collection phase, a preliminary analysis can be conducted, which will identify gaps that can be filled by further data gathering. This process will continue until the systemic analysis has eliminated unanswered questions and reached a logical conclusion.

Having collected the data, the first stage of the SOAM analysis involves sorting each piece of factual information into an appropriate classification. This is a progressive sorting activity which can be conducted as a group exercise if the investigation is being conducted by a team.
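As a minimal sketch of this sorting step, the snippet below tags evidence items with one of the five SHEL + Organisation areas. The data structure and the example items are assumptions for illustration; in practice the sorting is an analyst (or team) judgement, not an automated step.

```python
from dataclasses import dataclass

SHEL_O = {"Liveware", "Software", "Hardware", "Environment", "Organisation"}

@dataclass
class EvidenceItem:
    text: str       # one piece of factual information from the investigation
    category: str   # one of the five SHEL + Organisation areas

    def __post_init__(self):
        if self.category not in SHEL_O:
            raise ValueError(f"Unknown category: {self.category}")

# Hypothetical items being sorted during the first stage of the analysis.
evidence = [
    EvidenceItem("Controller had been on duty for 7 hours", "Liveware"),
    EvidenceItem("Ground movement radar out of service", "Hardware"),
    EvidenceItem("Low-visibility procedures not updated", "Organisation"),
]
```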

IDENTIFYING ABSENT/FAILED BARRIERS:
Barriers protect complex socio-technical systems against both technical and human failures. Absent or failed barriers are the last-minute measures that failed or were missing, and therefore did not (a) prevent an action from being carried out or an event from taking place; or (b) prevent or lessen the impact of the consequences.
As with each stage of the SOAM process, a check question is applied to ensure that the item being considered fits within the definition of the category it is being considered for. 


IDENTIFYING HUMAN INVOLVEMENT:
Following identification of the relevant absent or failed barriers, the next step is to identify the contributing human actions or non-actions that immediately preceded the safety occurrence. The question at this stage should not be why people behaved as they did, but simply what their actions/inactions were just prior to the event. 

The information-processing model selected for use with this methodology is Rasmussen's Decision Ladder technique (1982). Like similar models, it assumes that information is processed in stages, beginning with the detection of information and ending with the execution of an action.

The Decision Ladder uses a six-step sequence which has been adapted for use within SOAM to present a simplified view of common operator tasks, such as those of an air traffic controller (ATCO). Using this model, human involvement in a safety occurrence can be analysed in terms of:

Observation 
Interpretation
Choice of goal
Strategy development
Choice of action plan
Execution of action plan


IDENTIFYING CONTEXTUAL CONDITIONS:
Contextual conditions describe the circumstances that exist at the time of the safety occurrence that can directly influence human performance in the workplace. These are the conditions that promote the occurrence of errors and violations. In the occurrence investigation process, contextual conditions can be identified by asking “What were the conditions in place at the time of the safety occurrence that help explain why a person acted as they did?"
Five categories of contextual conditions can be distinguished, two relating to the local workplace, and three to people:

Workplace conditions
Organisational climate
Attitudes and personality
Human performance limitations
Physiological and emotional factors


IDENTIFYING ORGANISATIONAL FACTORS:
The organisational factors (ORFs) describe circumstances which existed prior to the occurrence and produced or allowed the continued existence of contextual conditions, which in turn influenced the actions and/or inactions of staff. The following categories of ORFs are identified:

TR - Training
WM - Workforce Management
AC - Accountability
CO - Communication
OC - Organisational Culture
CG - Competing Goals
PP - Policies and Procedures
MM - Maintenance Management
EI - Equipment and Infrastructure
RM - Risk Management
CM - Change Management
EE - External Environment

THE SOAM CHART:
The final product of the systemic occurrence analysis process is a summary chart depicting:
The individual contributing factors - grouped according to the layers of the methodology as Barriers, Human Involvement, Contextual Conditions and Organisational Factors; and
Horizontal links representing the association between a contributing factor at one level (e.g., a human action), and its antecedent conditions (i.e., the context in which the action took place).
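A minimal sketch of how such a chart can be represented as a layered graph is given below. The data structure and the example factors (loosely inspired by runway-incursion scenarios) are assumptions for illustration, not the official SOAM tooling.

```python
from dataclasses import dataclass, field

LAYERS = ("Barrier", "Human Involvement",
          "Contextual Condition", "Organisational Factor")

@dataclass
class Factor:
    layer: str
    description: str
    antecedents: list["Factor"] = field(default_factory=list)

# Hypothetical chain: each factor links to its antecedent conditions.
org_factor = Factor("Organisational Factor",
                    "EI - Ground movement radar not replaced")
context = Factor("Contextual Condition",
                 "Ambiguous taxiway markings in low visibility",
                 antecedents=[org_factor])
action = Factor("Human Involvement",
                "Crew entered the active runway via the wrong taxiway",
                antecedents=[context])
barrier = Factor("Barrier", "Stop-bar crossing not detected or challenged")

chart = [barrier, action, context, org_factor]  # grouped by layer when drawn
```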

FORMULATING RECOMMENDATIONS:
The formulation of recommendations for corrective action is a critical final element of the occurrence investigation process. In order to be effective, the recommendations should have the following characteristics:

Be directly and clearly linked to the SOAM analysis;
Be focussed on findings that are amenable to corrective action;
Aim to reduce the likelihood of a recurrence of the event, and/or reduce risk.

In formulating recommendations, the SOAM process requires that the following two elements be addressed:

The deficient Barriers (absent or failed), and
The Organisational Factors.


Illustrative Example

Aircraft runway collision
The Linate Airport disaster occurred on 8 October 2001 at Linate Airport in Milan, Italy, when Scandinavian Airlines Flight 686, a McDonnell Douglas MD-87 airliner carrying 110 people bound for Copenhagen, Denmark, collided on take-off with a Cessna Citation CJ2 business jet carrying four people bound for Paris, France. All 114 people on both aircraft were killed, as well as four people on the ground. The subsequent investigation determined that the collision was caused by a number of non-functioning and non-conforming safety systems, standards, and procedures at the airport. It remains the deadliest accident in Italian aviation history.

The SOAM Chart of the Milan Linate accident highlights the role played by a number of latent failures in the chain of events that led to the disaster. For example, in the analysis the boxes highlighted in blue show the path of contextual conditions, organisational factors and other system factors that acted as precursors of the error committed by the flight crew of the Cessna, which caused the runway incursion.

 

Illustrative Example: Collision between Hampoel and Atlantic Mermaid
On 7 June 2001, the Panamanian-registered refrigerated cargo vessel Atlantic Mermaid collided with the Cypriot-registered general cargo vessel Hampoel, off the Varne in the south-west-bound lane of the Dover Strait traffic separation scheme (TSS). An investigation of the accident was conducted by the UK’s Marine Accident Investigation Branch (MAIB).

The SOAM Chart of the collision between Hampoel and Atlantic Mermaid highlights the role played by a number of failures in the chain of events that led to the accident. The SOAM Chart was developed based on the root causes and findings identified by the investigation.

 

FS #T15/B: NEUROID Neurophysiological Indicators

 

Background

NEUROID is a technique that employs neurophysiological signals gathered from the operator to characterise Human Factors (HFs) such as Mental Workload, Stress, Vigilance, and Engagement. NEUROID aims at providing information linked to the internal state of operators while they interact with the surrounding environment (e.g. aircraft cockpit, ship commands) and execute operative tasks, without interfering with those tasks, and it allows the system to adapt itself depending on the operator’s psychophysical state.

NEUROID was developed by UNISAP and validated together with DBL and ENAC in several operational contexts, such as Automotive, Aviation, and ATM, to determine how the combination of the considered HFs can be used to define the Human Performance Envelope (HPE) of the operator, and consequently employed to monitor the operator while interacting with highly automated systems, coping with unexpected malfunctions, or working under challenging and demanding situations.

image courtesy Stress project


Key Concepts

NEUROID is based on data-mining and machine-learning algorithms that fit the neurophysiological HF models to each operator and minimise inter-user variability, thereby obtaining a reliable assessment of the operator. More than 88% of all general aviation accidents are attributed to human error. NEUROID therefore offers an innovative and systematic approach to quantify and objectively measure HFs by taking into account, at the same time, the behaviours, emotions, and mental reactions of the operators themselves, and integrating them with data from accident and incident investigations.

The general concept at the base of NEUROID is that the brain, the body, and the operator’s experience are reciprocally coupled, and that an accurate assessment of the operator’s state can only be achieved through a well-defined combination of all the available data. Every biological activity is regulated by the human nervous system, so variations in biological activity correspond to internal reactions to changes in external (environment) and internal (mental, motivational, emotional, etc.) factors. There are numerous types of neurophysiological measures, such as the Electroencephalogram (EEG, related to brain activity), Electrocardiogram (ECG, heart activity), Electrooculogram (EOG, ocular activity), Galvanic Skin Response (GSR, skin sweating), and so on. Such neurophysiological measures can be seen as the physical interface of the NEUROID technique, enabling insights to be gathered about the HFs of the operator, such as Mental Workload, Stress, Vigilance, and Engagement. The key concept of NEUROID is the Human Performance Envelope (HPE) taxonomy: instead of considering one or two single human factors, the HPE investigates a set of interdependent factors, working alone or in combination, which allows the operator to be characterised completely. In other words, these concepts are proposed as performance shaping factors, which can differentially and interactively affect the successful completion of a task. The HPE theory explicitly states that boundaries exist beyond which performance can degrade, in line with the theoretical underpinnings of the considered HFs.


Benefits

Traditional methods for gathering information about operators’ HFs are usually based on self-reports and interviews. However, it has been widely demonstrated that such measures can suffer from poor resolution due to high intra- and inter-user variability, owing to the subjective nature of the measure itself. In addition, the main limitation of using subjective and behavioural measures alone is the impossibility of quantifying the ‘‘unconscious’’ phenomena and feelings underlying human behaviour; most importantly, it is not possible to capture such unconscious reactions while operators are performing their working activities, since the execution of a task generally has to be interrupted to collect subjective evaluations.

Neurophysiological measures, on the contrary, offer an unobtrusive and implicit way to determine the operator’s affective and cognitive mental states on the basis of mind-body relations. The benefit and advantage of the NEUROID technique is to use neurophysiological measures and machine-learning algorithms to overcome such limitations, and especially to (1) objectively assess the operator’s mental states while dealing with operative tasks; (2) identify the most critical and complex conditions and correlate them with accident and incident investigations; and (3) create a closed loop between systems and operators (i.e. a Joint Human-Machine Cognitive system) to continuously and non-invasively monitor the operators themselves.

Finally, modern wearable technology can help overcome the invasiveness (e.g. many cables, uncomfortable equipment) traditionally associated with collecting neurophysiological signals from operators.


How It Works

NEUROID consists of three main phases, as reported in the figure:

1. The neurophysiological signals are gathered from the operators by wearable technology while they deal with their working tasks.

2. The neurophysiological data are processed through a series of mathematical steps leading to the machine-learning phase, in which each HF is classified.

3. Finally, the considered HFs are combined to define the operator’s HPE, and integrated with the operator’s behaviour and with data related to accidents and incidents (e.g. the SHIELD database and HURID framework) for a comprehensive assessment of the operator, which can be employed for different applications, for example real-time mental state assessment or triggering adaptive automation.
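As a highly simplified sketch of phase 2, the snippet below extracts EEG band-power features and trains a generic classifier on calibration epochs. All concrete choices (band-power features, a random forest, the sampling rate, the synthetic data) are assumptions for illustration; the actual NEUROID models are user-calibrated and far more elaborate.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256  # assumed sampling rate (Hz)

def band_power(eeg, lo, hi):
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band."""
    f, pxx = welch(eeg, fs=FS, nperseg=FS * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

def features(epoch):
    # Theta and alpha band power are classic workload correlates.
    return [band_power(epoch, 4, 8), band_power(epoch, 8, 12)]

# Synthetic calibration epochs "recorded" in known low/high-workload conditions.
rng = np.random.default_rng(0)
X = [features(rng.standard_normal(FS * 8)) for _ in range(40)]
y = [0] * 20 + [1] * 20  # 0 = low workload, 1 = high workload

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_epoch = rng.standard_normal(FS * 8)
print("Predicted workload class:", clf.predict([features(new_epoch)])[0])
```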

 

Illustrative Example

Aviation Domain
To see how the NEUROID technique can be implemented in practice, consider the following scenario, in which an Air Traffic Controller (ATCO) is managing air traffic while we acquire the ATCO’s EEG, ECG, and GSR neurophysiological signals. In the first phase of the scenario, the ATCO can rely on the support of highly automated systems to manage a high-traffic situation (High). Then the traffic demand returns to a normal condition (Baseline), but at a certain moment the automation crashes (Malfunction) and the ATCO has to keep managing the traffic while finding out what is wrong with the automation.

In this context, we can use NEUROID to measure how the ATCO’s HPE changes. The neurophysiological data are employed to estimate the ATCO’s Mental Workload, Stress, Attention, Vigilance, and Cognitive Control Behaviour based on the S-R-K model. Those HFs are then combined to define the ATCO’s HPE in the three operational conditions, as shown in the following figure. In particular, the values of the considered mental states have been normalised within the [0 ÷ 1] range: “0” means Low, while “1” means High. Concerning the S-R-K aspect, “0” represents the S (Skill) level, “0.5” the R (Rule) level, and “1” the K (Knowledge) level.
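A toy sketch of this normalisation step is shown below; the raw scales and values are invented, but the [0 ÷ 1] rescaling and the 0 / 0.5 / 1 mapping of the S-R-K levels follow the description above.

```python
def normalise(value, lo, hi):
    """Rescale a raw estimate to [0, 1], clipping at the bounds."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

SRK = {"Skill": 0.0, "Rule": 0.5, "Knowledge": 1.0}

# Hypothetical raw estimates for the 'Malfunction' condition.
hpe = {
    "Mental Workload": normalise(72, 0, 100),
    "Stress": normalise(65, 0, 100),
    "Attention": normalise(35, 0, 100),
    "Vigilance": normalise(80, 0, 100),
    "Cognitive Control Behaviour": SRK["Rule"],
}
print(hpe)  # one point of the HPE radar plot per mental state
```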

The analysis of the HPE in the different scenario conditions shows interesting variations in the configuration of the mental states. In fact, the failure of the automation (red line) induced higher Vigilance, lower Attention, and a shift from the Skill to the Rule level of Cognitive Control Behaviour with respect to the HIGH (orange line) and BASELINE (blue line) conditions. Furthermore, considering how the HPE changes over time throughout the ATM scenario makes it possible to identify the moments, and thus the situations, corresponding to high (RICH) or low (POOR) performance. For example, for the considered ATCO, the HPEs corresponding to RICH (green plot) and POOR (red plot) performance are reported in the following figure. Generally, during RICH performance conditions vigilance is higher and stress is lower than during POOR performance. In conclusion, NEUROID could be used, for example, to monitor operators and warn them when they are working under low-vigilance and high-stress conditions.


Maritime Domain
To see how the NEUROID technique can be implemented in maritime practice, consider the following scenario, in which a tanker and a livestock carrier collided during an overtaking manoeuvre in the Dardanelles Strait. The VTS (Vessel Traffic Services) was informed about the collision, since it occurred during the Strait passage. At the time of the collision, in the early morning, the weather was partly cloudy, the sea state was calm, and the officer was keeping the navigational watch.

The HF analysis of the ship collision during Strait passage would highlight the following potential root causes: 

Mental Fatigue: rather than physical fatigue, over-alertness and overload (decision making, concentration, mental load). The watchkeeping officer’s continuous workload can affect his/her decisions.

Physical Fatigue: rather than mental fatigue, physical fatigue can arise from the intensive workload on board ship.

Inadequate Manning on the Bridge: depending on the sailing zone, increasing the number of watchkeepers on the bridge may help dangers to be perceived earlier.

Inadequate Policy, Standards and Application: increasing inspections of compliance with sailing rules and minimum CPA/TCPA values, and enforcing the existing rules, may help avoid dangerous situations and maintain clearance from plotted vessels.

Inadequate / Lack of Communication: the necessary manoeuvre could not be applied due to inadequate communication between the vessels. Lack of communication between the vessels and the VTS also caused the vessels to sail in close range. Every communication method on board must be managed in the most efficient manner.

Poor Risk Evaluation: the main causes of the incident were the failure to evaluate the risks posed by all vessels in the sailing area, and the failure to foresee the possible manoeuvres of the other vessel, which was sailing in the outermost lane of the separation scheme.

Inadequate Leadership and Supervision / Planning: other root causes of the incident were the improper planning of watch hours and the failure to take the necessary measures following a timely evaluation of problems.

External/Environmental Conditions: sailing in a narrow zone, congestion, and the weakness of the VTS stations in planning the traffic were the main contributing conditions. To prevent such incidents, environmental conditions must be evaluated sufficiently, and proper speed and watch order must be planned according to the congestion.

Some of those aspects are mainly linked to the management level (e.g. manoeuvres procedures, risk evaluation), but the ones coming from the evaluation of the operators’ status like the mental and physical fatigue can be monitored and measured via the NEUROID during the working activities. 

In particular, along each phase of the considered scenario the neurophysiological measures can characterise the operators in terms of the combination of Mental Workload, Stress, Vigilance, and Engagement, and this information can then be used to:

interact and/or intervene on the system in real time; for example, depending on the Master’s mental workload, a warning could be enabled during overload or low-vigilance conditions to help regain the proper operational status;

analyse the incident/accident or simulation post-hoc by combining the previous task analyses with the operators’ mental state profiles along the scenario, for example to find out what the Mental Workload, Stress, Vigilance, and Engagement of the Master and C/O were right before the collision or during a specific phase that led to the collision, and finally to understand whether the configuration of those mental states was inappropriate (e.g. distraction) or, on the contrary, whether the operators were under excessively demanding and stressful conditions due to the adopted procedure or the situation.


FS #A14: Eye Tracking

 

Background

Experts can tell us a great deal about the job they do, and how they do it – to a certain degree. They can talk us through situations or past experiences and walk us through exactly what they did and how they did it. But this inevitably misses half the picture. It focusses on the procedural, rule-following aspects of their task. It misses the short glances at pieces of information; it misses the bits of incomplete information, gathered minutes or hours before, that only arrange themselves into a complete picture at the precise moment the operator needs them.

When it gets to the point that someone is so skilled and practised at a task that they don’t realise they are doing a particular action or looking at something for information, it becomes impossible for them to verbalise it. But these are the important information aspects we need to capture to fully understand the nuances of that task and expert performance. These are the elements that make up that ‘expert’. This is where eye tracking comes into its own. Eye tracking can be an extremely useful tool in initial skills capture, and also to assist in measuring task performance and mental workload as it can provide objectivity. It can help us to build that complete picture and provide insights that are vital to fully understanding the complexities of tasks, actions and behaviours. For a designer, it completes the picture of how an operator is truly using an interface – what they find useful and what they do not, in a range of situations.


Key Concepts

The key concept of eye tracking is to be able to identify exactly what someone was looking at, when, and for how long. Eye tracking can be used in isolation, or in addition to other methods and techniques such as simulation or as part of a task analysis.


Benefits

• Eye tracking outputs can be particularly powerful when communicating results. This is especially true when working on multi-disciplinary projects. A diagram clearly showing how often someone looked at something or the scan path they followed in order to make a decision can paint a very clear picture to non-specialists. As already stated, eye tracking outputs can be used to give us an insight into peoples’ cognitive processes which they would otherwise be unable to communicate.

• Eye tracking outputs, specifically video outputs, can be used during interviews as an aid to the interviewee: “can you explain what you were doing then?” or “what was going through your mind during this action?” The prompt of seeing their actions again ‘through their own eyes’ can often trigger the interviewee to remember specific thoughts or feelings and allow a deeper insight into the actions at that time.

• The main downside of eye tracking is the requirement for specialist equipment, and the need to acquire a specific software product for analysing the data, depending on the eye tracker used. This is a substantial outlay, but equipment and software are also available for hire via short-term licences. The actual use of the eye tracking equipment also has some downsides. Depending on the product used, some of the head-mounted glasses-type eye trackers can obstruct peripheral vision, as the frames can be quite thick; some newer models are frameless for this reason. Also, the equipment requires calibration before use, and can easily de-calibrate if the user touches the equipment or moves suddenly.

• Another point for consideration is the analysis of data. Eye tracking produces large quantities of data that quite often requires substantial ‘cleaning’ before they can be analysed. This takes time, but when this is done the actual analysis is fairly straightforward and most of the software packages are user friendly.


How It Works

Eye tracking requires specialist software to both gather and analyse data. The good news is that there are now some extremely good quality, affordable options, which has opened up this area of analysis, which in turn has led to more research in the area and an increased understanding of outputs and metrics.

Typically there are two types of eye tracking device: head-mounted, or screen / panel mounted. 

Screen or panel mounted devices are typically used where the operator is in a fixed position and normally looking in one direction. These devices are used a lot for User Experience (UX) and web design applications, to elicit where a user is looking in order to design webpages more effectively. They are also sometimes used in activities such as car driving, as the user is seated in one position and typically looking in one direction.

Head mounted eye trackers are worn by the operator and are a lot more versatile as the operator can move freely. These head mounted trackers can resemble a pair of glasses, and have multiple cameras tracking the operator’s pupils and eye movements, and also filming what they are looking at. Both types of device can be used in lab-based or applied settings since they are relatively non-intrusive. 


Gathering data is fairly straightforward and, depending on the device, can produce many different metrics. The most common eye tracking metrics are based on fixations and/or saccades. Fixations are discrete, stable points where the eye is looking for at least 300 ms (and during which visual information is processed by the brain); saccades are the eye movements between these fixations (during which visual information is not processed). Typically, the first stage of analysis is to separate the area the operator is working in or looking at into ‘areas of interest’ (AOIs), which then allow you to focus your analysis. Fixation metrics can include the number of fixations, fixation durations, the total number of fixations in one area of interest, fixation density, and repeat fixations. Fixation metrics can tell us a lot about the operator’s engagement with certain tasks or information. For example, if an operator is focussing on an area for a long time, it could indicate higher cognitive effort. If an operator is fixating on many different points, it could indicate that the information the operator requires is scattered around, and we should aim to work out why, and whether the presentation of information could be improved.
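As an illustration of how raw gaze samples become fixation metrics, the sketch below implements a simple dispersion-threshold (I-DT-style) detector. The thresholds and the toy data are assumptions; real eye tracking software uses vendor-specific, better-validated algorithms.

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """samples: (t, x, y) gaze points, t in seconds, x/y in degrees of
    visual angle. Returns (t_start, t_end, centroid_x, centroid_y) tuples."""
    def close_window(window, out):
        if window and window[-1][0] - window[0][0] >= min_duration:
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            out.append((window[0][0], window[-1][0],
                        sum(xs) / len(xs), sum(ys) / len(ys)))

    fixations, window = [], []
    for point in samples:
        trial = window + [point]
        xs = [p[1] for p in trial]
        ys = [p[2] for p in trial]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            window = trial            # gaze still within the dispersion limit
        else:
            close_window(window, fixations)  # window ends; keep if long enough
            window = [point]
    close_window(window, fixations)
    return fixations

# Toy example: 60 Hz samples, a steady half-second gaze followed by a jump.
gaze = [(i / 60, 10.0, 5.0) for i in range(30)] + [(0.5, 25.0, 8.0)]
print(detect_fixations(gaze))  # one fixation centred on (10.0, 5.0)
```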

Saccade metrics can provide us with information regarding decision-making, but typically saccades and fixations are used in combination to provide scan-path data. Scan-paths can give us insight into how people go about looking for information, and allow us to analyse how information is gathered and used in certain situations. Scan-paths can be compared between operators, and between novice and expert users, to understand the differences between approaches and the impacts these may have. We may be able to identify scan-paths that are consistently direct and quick, indicating that a task is efficient. Conversely, we may identify erratic scan-paths, which may help us understand aspects of the task or information provision that aren’t currently working as well as they should. These metrics can provide us with some powerful insights into how work is carried out and can greatly assist with the redesign of interfaces, displays, and tasks.
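One common way to compare scan-paths quantitatively (an assumption here, not a method prescribed by this factsheet) is to encode each scan-path as a string of AOI visits and compute the edit distance between the strings: the smaller the distance, the more similar the strategies.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two AOI-visit strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

# Hypothetical AOIs: P = PFD, N = nav display, E = engine display, O = outside
expert = "PNPOPNPE"
novice = "PPPNNEOP"
print(edit_distance(expert, novice))  # smaller = more similar scan-paths
```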


Illustrative Example

Aviation Domain:
As part of a larger project, we wanted to understand how pilots behaved in a complex situation containing multiple incidents. This was carried out in a full motion simulator, in which a complex scenario was played out that resulted in the co-pilot flying the plane, a number of go-arounds, a runway change, and low fuel throughout the scenario. From the other data (cockpit data and voice recordings) and through expert analysis, it was clear that some pilots dealt with the scenario more successfully than others. By using eye tracking outputs, we were able to develop theories as to why some pilots performed better than others, and to attach metrics to the data, such as dwell times on specific instruments, or scan-paths that were repeated time and time again. We were then able to triangulate this with the cockpit transcripts and understand why certain things were looked at at certain times, or indeed explore later, during interview, why they weren’t. This information helped to inform more advanced cockpit instrumentation designs, as well as informing emergency training requirements.


As a specific example of the insights afforded by Eye Movement Tracking (EMT), it was possible to see how the Captain and First Officer were cross-checking each other during the escalating emergency scenario, and how they supported each other. Typically, the FO, whose main fixation target was the PFD (Primary Flight Display), was able to ‘offload’ a degree of the situation awareness to the Captain so that he could focus on diagnosing the problems affecting their flight. EMT was also able to clearly pinpoint the moment when each crew realised they were low on fuel, and how they used various cockpit instruments to analyse an electrical failure. In some areas instruments such as the PFD were fully supporting the flight crew, whereas in others there was clearly room for improvement.

Maritime Domain:
In this study, navigators’ eye movements on a navigation bridge simulator were analysed during passage through the Istanbul Strait. A measurement device called “EMR-8” was used for eye movement tracking. Three examinee groups were selected, divided according to their onboard experience level, each with four examinees in a real-time bridge simulator. Group 1 consisted of deck department students who had up to 2.5 months’ sea experience on board ship as cadets. Group 2 included junior officers who had already completed 12 months’ training on board ship. Group 3 consisted of oceangoing masters with wide experience at sea.

A scenario of passing through the Istanbul Strait was given to each examinee. The visual field was divided into three parts: inside, outside and others. The inside part has three components: instruments, indicators and the engine telegraph. The outside part has three components: the sea condition, navigational equipment and target ships. The scenario was recorded by eye movement tracking. In the light of the findings, Group 1 (beginners) gave minimal attention to navigational equipment, target ships, regulations and sailing routes. In Group 2, the examinees had intermediate characteristics as navigation officers, between professionals and beginners: they had enough knowledge and ability to make use of navigational equipment, target ships, regulations and routes, and they could stand on the bridge for longer than Group 1. Group 3 had high-level experience and knowledge: the examinees gave the utmost attention to navigational equipment, sailing routes, target ships and regulations, and could stand on the bridge for the longest time without losing concentration.


Focus Group

 

Background

A Focus Group (FG) is a form of group interview: a carefully planned series of discussions designed to obtain perceptions on a defined area of interest in a permissive, non-threatening environment. Like any other research or evaluation tool, its purpose is to gather information. Through listening and observing interactions, FGs can help us appreciate how people think and feel about an experience, an issue or a service. Each group session is conducted with 5 to 10 people led by a skilled interviewer. The discussions are relaxed and work best when people feel free to give their opinions without being judged.

The groups are composed of carefully selected individuals who are representative of a wider population for which the topic is relevant or whose input is sought (e.g. representing different roles and levels of experience). The strength of this approach is that discussions are facilitated to elicit ideas and concepts through collaborative evaluation of scenarios and questions, with the views of more than one person being considered at any one time; interaction and the exploration of concepts by different individuals are encouraged. Essentially then, FGs are excellent for quickly eliciting new directions, new ideas, enhanced functions, or different directions.

Possible uses of the FG are:

  • To imagine the characteristics that the new solution must have, in a first creative phase that can be considered of problem setting, rather than problem solving;
  • To identify requirements for the solution (new tool/procedure/operating concept) that is being studied;
  • To predict possible hazards/issues for safety/human performance that may emerge once the new solution is introduced (e.g. a Hazard Identification workshop).

A Focus Group can be used in combination with Scenario-Based Design, where a scenario can act as an initial input for the Focus Group to make the participants understand the idea they are asked to work on. In addition, at the end of the session, the results of the Focus Group can be used to update and enrich the scenario.


Key Concepts

For participants, the Focus Group session should feel free flowing and relatively unstructured, but in reality, the moderator must follow a pre-planned script of specific issues and set goals for the type of information to be gathered. During the group session, the moderator has the difficult job of keeping the discussion on track without inhibiting the flow of ideas and comments. The moderator must also ensure that all group members contribute to the discussion and must avoid letting one participant's opinions dominate. The following is a list of essential tips/guidelines to conduct a proper FG:

  • The moderator should begin by explaining the purpose of the exercise and what is expected by the group, avoiding generating biases about the content of sessions.
  • She/he should also address the question of how any data collected or personal data will be used and how it will not be used.
  • The moderator should try to manage group dynamics and to establish a permissive environment in which:
    • hierarchical relationships are not influential in the discussion;
    • everyone feels free to contribute;
    • everyone sits equally distributed around a table;
    • no one feels judged for expressing thoughts and comments.


  • If a participant begins to dominate, the moderator should gently encourage others to get involved and rein in the dominant participant(s).
  • The moderator should not be tasked with note taking. Ideally one or two observers will do this – they should be introduced to the group and their roles explained as part of the introduction.
  • Badges or name panels for each participant should be used, making it easier for the moderator and the other attendees to refer to each other during the discussion.
  • If video or audio recording is to be used, this should be explained in the introduction too.
  • Refreshments should be made available and, if the Focus Group sessions are lengthy, regular comfort breaks should be given.
  • The moderator’s job is to facilitate, but not participate in, the discussion.
  • The moderator should sum up important points at convenient moments and ensure that the majority of participants have understood them.
  • The moderator (with the observers) should summarise the key themes at the end of each session, check for understanding and ask any questions that the observers feel would be useful.
  • The moderator must bear in mind that the data coming out of an FG are not totally objective, as they reflect the partial perspective of those who took part. The data should be compared with data obtained from other techniques (see for example Factsheet A.15).


Benefits

Fast and cost-effective method: because several subjects can be “interviewed” at the same time, FGs are a fast and cost-effective means of obtaining relevant information, as well as attitudes, feelings, and beliefs.

Provides broad content: FG allow for an open format and they are flexible enough to handle a wide range of topics. They allow taking into account different perspectives on the solution you are imagining or evaluating.

Provides in-depth content: FG allow in-depth exploration of the reasons why the participants think the way they do and often provide insights that can be difficult, time consuming, or expensive to capture using other methods.

Builds new content through interaction: FGs provide participants with the opportunity to react to, reflect on, and build on the experiences of others. This can generate new ideas that might not have been uncovered in individual interviews. It also provides the researcher with the opportunity to clarify responses, ask follow-up questions, and receive contingent answers to questions.


How It Works

Focus Groups can be developed following a process like the one described below:

Choose the facilitator and note taker;
Choose and invite participants:

- If you start from an already mature solution, which allows all the involved stakeholders to be identified, find a representative for each of them;
- If, on the contrary, the solution you are working on is still undefined, identify the people you consider the most expert on the topic, if possible, with different backgrounds.

Identify the appropriate place and room-layout;
Define the goals (Explain the goals of the FG so that all are aligned);
Prepare the input to be given to the participants, such as:

- Guiding questions;
- Scenarios;
- Model of system or process under analysis.

Collect the data (see one of the tips – the note taker should be different from the facilitator);
Analyse the data.

Focus Groups are best analysed immediately after they finish, when things are freshest in the minds of the moderator and the observers. Other participants may be brought into the analysis, and videos/audio recordings reviewed during that process. If possible, a transcript of the audio should be prepared at the same time, as it is useful for later analysis. A simple report of the key findings should be made after each individual FG.

Depending on what is expected from the FG, the output could be (considering the three possible uses of the Focus Group highlighted in the Background section):

a) a set of refined scenarios, if the design process is still in the initial creative phase;
b) a list of user requirements;
c) a list of Human Performance Issues or Hazards potentially associated with the introduction of the new solution, with a proposal for how to mitigate each of them.


Illustrative Example

Aviation domain
Preventing STCA nuisance alerts in military operations

The following example concerns a Focus Group session organised in the context of a project promoted by EUROCONTROL that aimed to improve the performance of a Safety Net on the Controller Working Position (CWP) of an Air Navigation Service Provider (ANSP). The Short Term Conflict Alert (STCA) is a ground-based Safety Net intended to assist the air traffic controller in preventing aircraft from coming too close to one another, by generating, in a timely manner, an alert of a potential or actual infringement of Separation Minima (i.e. the minimum distance allowed between aircraft under the international regulations).

When an STCA alert is activated, the radar tracks of the corresponding aircraft on the controller’s display become red. In this way, the controller’s attention is drawn to that specific situation, which requires an immediate action to solve the conflict, i.e. an instruction to the flight crews of one or more of the involved aircraft to immediately change their trajectories.


Specifically, in this project the Safety Net was installed in a military control centre, but was configured in a way that fitted only civil commercial flights well and did not take into account the specificities of military operations.
Different operational experts were invited to the FG session:
2 Military controllers;
1 Military pilot;
2 Technical experts (with experience in the design of multi-radar tracking systems and safety nets);
1 HF and Safety expert acted as facilitator;
1 Safety expert acted as Note taker.

Several operative scenarios were analysed during the session. One of them, presented in this example, was the Formation Flight, a typical disciplined flight of two or more aircraft under the command of a flight leader. Military pilots use formations for mutual defence and concentration of firepower.


One of the problems that emerged specifically in relation to this scenario was that the STCA triggered alerts even when this was not operationally justified, because flying in close proximity was exactly what the aircraft were supposed to do. This generated many nuisance alerts, which represented a big disturbance for the ATCOs and produced the so-called ‘cry-wolf syndrome’. As also shown in the diagram, during the flight in formation the STCA alerted whenever at least two of the aircraft came too close to each other.

A negative consequence was that ATCOs asked to switch off the STCA in this and other operational situations, thus completely losing the protection given by the safety net.

During the FG, a mitigation for this hazard was identified, which consisted in turning off the transponders of all military aircraft except the flight leader’s, from the start of the formation flight until its end. In this way, during military operations implying a formation flight, a possible proximity of the leading aircraft to other aircraft not involved in the exercise (including civil aircraft in neighbouring airspace sectors) was still brought to the controller’s attention by an STCA alert.

The mix of perspectives and competences offered by the FG played a critical role in helping the project team to identify both the hazards and the mitigations associated with this important scenario, as well as a number of other hazards and mitigations.

 

Illustrative Example – Maritime domain
ECDIS – Position Verification

The following example concerns a Focus Group session organized in the context of a project that aimed to improve the performance of the Electronic Chart Display Information System (ECDIS) on the matter of position verification.

ECDIS is a computer-based navigation system that complies with IMO regulations and can be used as an alternative to paper navigation charts. It is mandatory for all SOLAS vessels (larger than 500 gross tons and engaged on international voyages) to install and operate ECDIS. One of the systems feeding data to ECDIS is the Global Positioning System (GPS), which provides ECDIS with the actual position (coordinates) of the vessel at all times. For safety reasons a second GPS is usually connected to ECDIS and selected either as an alternative system in case of failure, or operated alongside the primary GPS.

An Officer of the Watch (OOW) is obliged to know the ship’s position at all times while the ship is at sea, and in this context ECDIS has proven to be a valuable tool. ECDIS was built to assist the user in developing various skills, among which the most important may be Situational Awareness. When the vessel is navigating in shallow or otherwise confined waters, the OOW’s job is even more difficult, considering the ship needs to be handled swiftly and with precision to avoid dangerous situations.

The problem deriving from the use of ECDIS is that unless the position of the ship is verified by alternative means, such as position lines (bearings, distances, contours, transits etc.) or celestial lines, the OOW cannot safely determine the accuracy of the position of the ship. Because the above-mentioned methods take time and manpower, a useful feature that could be integrated in ECDIS is the ability to cross-check the position of the ship.

To display the position of the ship on the charts accurately, ECDIS may use both GPS receivers to plot the vessel’s position. Because of the reduced accuracy of the GPS, a difference is observed between the two fixes. That distance (usually a few meters) determines the accuracy of the plotted position.

Method implementation: The FG worked towards the development of a feature that calculates the above-mentioned distance. The different operational experts that were invited to the FG session were:

2 Masters;
1 Bridge Simulator Instructor;
1 Designated Person at Shore (DPA);
1 Navigational Auditor;
2 Navigating Officers (OOWs);
1 Technical expert (with experience in the design of ECDIS systems and safety parameters);
1 Human Factors (HF) and Safety expert acted as facilitator;
1 Safety expert acted as Note taker.

The FG concluded that the difference between POSN1 and POSN2 may be displayed and, if the user chooses, may be monitored by a user-set alarm. For example, when the difference between POSN1 and POSN2 is 10 m, the alarm would sound to notify the user of the reduced accuracy of their position. The OOW can then use this information to avoid any hazards that are closer than 10 m. The OOW’s Situational Awareness would thereby be enhanced and safer navigational decisions would be made. The figure shows how the added feature, displaying the distance between the position fixes from the two GPSs, may be integrated into an existing ECDIS system.
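A minimal sketch of the proposed cross-check is given below. The haversine formula is one standard way to convert two latitude/longitude fixes into a distance in metres; the 10 m threshold follows the example above, while the function names and sample coordinates are invented for illustration.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def distance_m(pos1, pos2):
    """Great-circle distance between two (lat, lon) fixes in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*pos1, *pos2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def position_alarm(posn1, posn2, threshold_m=10.0):
    """Return the GPS1-GPS2 difference and whether the user-set alarm fires."""
    d = distance_m(posn1, posn2)
    return d, d >= threshold_m

# Invented fixes from the two GPS receivers, a moment apart.
d, alarm = position_alarm((51.0501, 1.3001), (51.05018, 1.30022))
print(f"GPS1-GPS2 difference: {d:.1f} m, alarm: {alarm}")
```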

 

SBD – Scenario-Based Design

 

Background

A scenario is the fictitious, narrative description of one or more users performing an action or trying to achieve a goal via a product. It allows designers and users to describe existing activities or to predict or imagine new activities that may be produced by interaction with a new tool or procedure. It can be used to structure the data collected by observing an activity (activity analysis) or to imagine the characteristics of the future system and stimulate the creative phase of the design process, also called envisioning (prototyping). Widely used by User Experience and Interaction Design experts, scenarios focus on users’ motivations, and document the process by which the user might use a service or a product.

User scenarios can be used in the ideation phase of a design project to visualise how a user will utilise the future technology or system. At such an early stage, these scenarios offer a designer a lot of initial flexibility. User scenarios can also be used to determine the most important areas to test during usability testing, and to provide guidance on how each test should be done.

Good user scenarios identify the specific roles taking part in the activity and provide context and details in order to be as accurate as possible, and they need to be based on some form of insight into, or research done with, real or prospective users. Consequently, by working through well-thought-out user scenarios, a design team will be able to project a stronger light on their work in progress and expose previously obscure problem areas, which they can then remedy. To make them more effective, scenarios can be enriched by defining specific “Personas”: archetypes of the most common users, with specific characteristics, skills, competencies and organisational roles. A fundamental point is that user scenarios do not represent all possible users; instead, they account specifically for Personas.


Key Concepts

People need to coordinate information sources, to compare, copy, and integrate data from multiple applications. Scenarios highlight goals suggested by the appearance and behaviour of the system, what people try to do with the system, what procedures are adopted, not adopted, carried out successfully or erroneously, and what interpretations people make of what happens to them.

Scenarios include a specific setting with agents or actors: it is typical of human activities to include several to many agents. Each agent or actor typically has goals or objectives. These are changes that the agent wishes to achieve in the circumstances of the setting. Every scenario involves at least one agent and at least one goal. When more than one agent or goal is involved, they may be differentially prominent in the scenario. Often one goal is the defining goal of a scenario, the answer to the question “why did this story happen?”

Scenarios have a plot; they include sequences of actions and events, things that actors do, things that happen to them, changes in the circumstances of the setting, and so forth. Particular actions and events can facilitate, obstruct, or be irrelevant to given goals. For example, in an office application, resizing a spreadsheet and moving it out of the display are actions that facilitate the goal of opening a folder, while resizing and repositioning a memo are actions that facilitate the goal of displaying the memo so that it can be carefully examined.
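The elements above (setting, agents, goals, plot) lend themselves to a simple structured representation. The following is a minimal, hypothetical sketch of how a design team might record a scenario for revision across iterations; the field names are assumptions, not part of any SBD standard:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    setting: str                      # where and when the story takes place
    agents: list[str]                 # actors involved in the activity
    goals: dict[str, str]             # agent -> the change they wish to achieve
    defining_goal: str                # "why did this story happen?"
    plot: list[str] = field(default_factory=list)  # ordered actions and events

# The office-application example above, recorded as a scenario:
memo = Scenario(
    setting="Office workstation, morning",
    agents=["office worker"],
    goals={"office worker": "examine an incoming memo carefully"},
    defining_goal="display the memo so it can be carefully examined",
    plot=["resize the spreadsheet", "move it out of the display",
          "resize and reposition the memo"],
)
```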

 

 

Benefits

Vivid descriptions of end-user experiences evoke reflection about design issues;
Scenarios concretely fix an interpretation and a solution, but are open-ended and can easily be revised;
Scenarios help to identify specific requirements to make sure the envisaged solution will actually sustain the users in performing their tasks;
Scenarios anchor design discussion in work, supporting participation among relevant stakeholders and appropriate design outcomes;
Scenarios can be written at multiple levels, from many perspectives, and for many purposes.

 

How It Works

Scenarios can be developed following a process like the one described below:

1. Draft a narrative description of the scenario starting from your own experience of the activity under analysis or imagining how the future activity will work;

2. Involve other relevant people in the revision of the scenario, e.g. designers, engineers, human factors experts, operations manager, product managers, future end users, either individually or in a dedicated session with all the representatives;

3. Explain to everyone the objectives of the scenario and provide context to make it as accurate as possible, i.e. the who, what, when, where and why detail;

4. Collect data, information, criticisms, suggestions and ideas from each person involved. Try to understand what the strengths and weaknesses of the system are, and what should be removed, maintained or improved;

5. If possible, organize a role-play or simulation to test the scenario and see which elements work properly and which require revision;

6. If needed, rewrite or adjust the scenarios over successive iterations, taking into account the results of the simulation.

 

 

Illustrative Example

A simulated futuristic decision support tool for ATM operations

Context: Futuristic scenario in which an adaptive automation functionality is integrated into the ATM system, capable of understanding in real time the operator's psycho-physical state and matching it with the situation in which the ATCO is operating. The envisaged adaptive automation uses neurometric indicators to derive information on the levels of workload, stress, situation awareness and vigilance.
 

Specific tool: Multiple decision support tool.

Reference scenario (narrative description): An Executive ATCO is controlling air traffic in an en-route sector in the Area Control Centre of a large Air Navigation Service Provider. During her shift there is a significant increase in traffic. The ATCO starts experiencing time pressure. The Adaptive Automation functionality detects an increase in the ATCO's workload and stress; her attention, situation awareness and vigilance are also decreasing. Decision-making and action selection therefore become harder for the ATCO, in particular evaluating and comparing alternative options to solve potential conflicts. The adaptive automation functionality activates the multiple decision support tool, which provides, for each conflict (one at a time, based on relevance), 3 possible solutions. The ATCO goes through them and selects the most appropriate one according to her judgement. In particular, for the conflict between DAL241 and ADR258 she selects the option requiring DAL241 to climb to FL 390. She communicates the instruction to the flight crew via R/T, and then she closes the supporting tool window.

In this simulation, different roles were involved, including ATCOs, pilots, tool designers, human factors experts and psycho-physical activity measurement experts.
Data were collected from the physiological measurements and from the multiple decision support tool; advice and criticisms from the participants in the simulation were also collected.
The designers and the other experts reflected on these findings and proposed changes for the further development of the technology.

 

Illustrative Example for Maritime: A simulated futuristic decision support tool for VTS operations

Context: Futuristic scenario in which an adaptive automation functionality is integrated into the VTS system, capable of understanding in real time the operator's psycho-physical state and matching it with the situation of the VTS centre monitoring and controlling vessel traffic. The envisaged adaptive automation uses neurometric indicators to derive information on the levels of workload, stress, situation awareness and vigilance.


Specific tool: Multiple decision support tool.

Reference scenario (narrative description): VTS is a marine traffic monitoring system established to improve the safety and efficiency of vessel traffic in critical waterways. During a specific shift, a significant increase in traffic is observed. The VTS operator starts experiencing time pressure. The Adaptive Automation functionality detects an increase in both workload and stress; attention, situation awareness and vigilance are also decreasing as the workload grows. Decision-making and action selection therefore become harder for the VTS operator, in particular evaluating and comparing alternative options to solve potential conflicts between vessels approaching the berth at the same time. The adaptive automation functionality activates the multiple decision support tool, which provides, for each conflict (one at a time, based on relevance), possible solutions. The VTS personnel go through them and select the most appropriate solution according to their judgement. In particular, for the conflict between "Pride of Kent" and "Seafrance Manet", they select the option requiring "Seafrance Manet" to wait until further notice before approaching the berth. They communicate the instruction to the vessel crew, and then they close the supporting tool window.

In this simulation, different roles were involved, including VTS operators, crew members, tool designers, human factors experts and psycho-physical activity measurement experts.
Data were collected from the physiological measurements and from the multiple decision support tool; advice and criticisms from the participants in the simulation were also collected.
The designers and the other experts reflected on these findings and proposed changes for the further development of the technology.

 

FS #T04/B CREAM - Cognitive Reliability and Error Analysis Method

 

Background

CREAM was developed by Hollnagel (1998) to analyse cognitive human error and reliability. The method enables users to assess the probability of human error during the completion of a specific task. The aims of the technique are to identify, quantify and minimize human errors. To achieve these aims, the technique supports both retrospective and prospective analyses. The method comprises a basic and an extended version: the basic version performs an initial screening of human interactions, while the extended version performs a comprehensive analysis of human interactions, building on the output of the basic version.

The technique uses four different control modes to ascertain human failure probabilities for various actions. The concept of control modes was derived from the Contextual Control Model (COCOM), whose aim is to yield a practical and conceptual basis for improving human performance (Hollnagel, 1993). The control modes are scrambled, opportunistic, tactical, and strategic. While the strategic control mode represents the lowest human error probability (HEP), the scrambled control mode gives the highest.

 

 

 

Key Concepts

The key concepts of the technique are:

Task analysis: defines the activities required to achieve the goal.

Common Performance Conditions (CPC): Human cognition and action context are determined in accordance with CPCs which provide a well-structured and comprehensive basis for characterizing the circumstances under which performance is expected to occur.

Control modes: Scrambled, Opportunistic, Tactical, Strategic modes. The control modes are linked with different failure probability intervals representing human action failure probabilities.

Cognitive activities: the activities relevant for work in process control applications, each with a pragmatic definition. Communicate, compare, diagnose, execute, monitor and observe are some of the cognitive activities.

Context Influence Index (CII): This value can be calculated by deducting the number of “reduced” CPCs (i.e. CPCs likely to have a negative impact on performance in the situation) from improved CPCs.

Performance Influence Index (PII): specifies weighting factors for cognitive functions such as execution, planning, observation, and interpretation.

Cognitive failure probability (CFP): the probability of failure for each cognitive failure type.

The main focus of the CREAM technique is on two elements:

• human performance - how a system behaves over time;
• cognitive activity - the description of the cognitive activity in the system/process.

 

 

Benefits

The technique presents a quantitative approach to systematically predict human error for designated tasks and ascertain the desired safety control level. It offers a clear and systematic approach to quantification. The quantification problem is, in fact, considerably simplified because CREAM is focused on the level of the situation or working conditions rather than on the level of individual actions. 

The technique is based on a fundamental distinction between competence and control. A classification scheme clearly separates genotypes (causes) and phenotypes (manifestations), and furthermore proposes a non-hierarchical organisation of categories linked by means of the sub-categories.

 

 

 

How It Works

The technique includes both basic and extended versions. 

The purpose of the basic method is to produce an overall assessment of the performance reliability that may be expected for a task. The basic one consists of the following steps:

1. Scenario development: This step develops the scenario, including time availability, workplace conditions, environment, stress, noise level, crew collaboration, etc. The CPCs are determined according to the scenario.

2. Task analysis of the process/work: The objective of this step is to identify the tasks and sub-tasks of the process according to the hierarchical task analysis (HTA).

3. Determine CPC: These are used to characterise the overall nature of the task. The combined CPC scores are needed to determine human performance. There are nine CPCs defined in CREAM: adequacy of organisation, working conditions, adequacy of the man-machine interface, availability of procedures/plans, number of simultaneous goals, available time, time of day, adequacy of training and experience, and quality of crew collaboration. For each task, a CPC level is determined.

4. Determine control mode: The control mode is determined from the combined CPC score. The control mode corresponds to a region or interval of action failure probability.
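A minimal sketch of this final screening step, assuming a simplified reading of Hollnagel's control-mode diagram in which the net CPC score (improved minus reduced) selects the mode; the probability intervals are those commonly cited for CREAM, while the score boundaries are illustrative assumptions rather than the exact regions of the original two-dimensional diagram:

```python
# Action failure probability intervals per control mode (Hollnagel, 1998).
CONTROL_MODES = [
    ("Scrambled",     1.0e-1, 1.0),     # highest human error probability
    ("Opportunistic", 1.0e-2, 0.5),
    ("Tactical",      1.0e-3, 1.0e-1),
    ("Strategic",     0.5e-5, 1.0e-2),  # lowest human error probability
]

def control_mode(improved: int, reduced: int) -> tuple[str, float, float]:
    """Map counts of improved/reduced CPCs to a control mode.

    Simplified: uses the net score only; the original method reads the
    (improved, reduced) pair off a 2-D diagram. Boundaries are assumed.
    """
    net = improved - reduced
    if net <= -4:
        name = "Scrambled"
    elif net <= -1:
        name = "Opportunistic"
    elif net <= 3:
        name = "Tactical"
    else:
        name = "Strategic"
    lo, hi = next((l, h) for n, l, h in CONTROL_MODES if n == name)
    return name, lo, hi

print(control_mode(improved=1, reduced=4))  # ('Opportunistic', 0.01, 0.5)
```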

The purpose of the extended version of CREAM is to produce specific action failure probabilities. The following steps are used:

1. Scenario development

2. Task analysis of the process/work

3. Determine CPC

4. Identify context influence index (CII): The CII is used to quantify CREAM, in particular CPCs, in order to simplify calculation. This value can be calculated by deducting the number of reduced CPCs from improved CPCs (Akyuz, 2015; He et al., 2008).

5. Determine performance influence index (PII): This step provides the PII values, which were generated to specify weighting factors for the cognitive functions such as execution, planning, observation, and interpretation. Each CPC has a different PII value, i.e. a different weighting factor (He et al., 2008).

6. Calculate cognitive failure probability (CFP): The CFP specifies the human failure probability for each cognitive failure type, in order to calculate the HEP value. Once the nominal cognitive failure probability (CFP0) has been assigned to each sub-task, the CFP value (HEP) can be calculated (Akyuz, 2015) using the following equation:

CFP = CFP0 × 10^(0.26 × CII)
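A minimal sketch of steps 4-6, following the definitions given above; note that the sign convention for the CII varies across sources, so the convention of the reference being followed should be checked:

```python
def context_influence_index(improved: int, reduced: int) -> int:
    """CII = number of improved CPCs minus number of reduced CPCs (as defined above)."""
    return improved - reduced

def cognitive_failure_probability(cfp0: float, cii: int) -> float:
    """Adjust the nominal CFP0 for context: CFP = CFP0 * 10**(0.26 * CII).

    Caution: some sources flip the sign of the CII so that a degraded
    context (more reduced CPCs) yields a higher failure probability.
    """
    return cfp0 * 10 ** (0.26 * cii)

# Example with assumed values: nominal CFP0 = 3.0e-3, 2 improved and 4 reduced CPCs.
cii = context_influence_index(improved=2, reduced=4)
print(cognitive_failure_probability(3.0e-3, cii))  # ~9.1e-04
```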

 

 

 

Illustrative Example

Maritime domain
In this illustrative example, the case of a ship collision during passage through a strait is considered. Human factors are analysed to systematically predict human error for each task. The extended version of the CREAM technique is used.

Step 1 Scenario development
The collision occurred between a tanker ship and a livestock ship, during overtaking, in the Dardanelles Strait passage. The VTS (Vessel Traffic Services) was informed about the collision since it occurred in the Dardanelles Strait passage. The weather was partly cloudy and the sea state was calm at the time of the collision. The time was early morning, and the officer was keeping the navigational watch.

Step 2 Task analysis: 
A hierarchical task analysis was created covering the potential activities/tasks during the ship's strait passage.

Step 3 Determine CPC
The assessor determined the appropriate CPCs for each task according to the scenario and crew condition.

Step 4 Identify context influence index (CII)
The assessor determined CII values for each task.

Step 5 Determine performance influence index (PII)
The assessor determined PII values for each task.

Step 6 Calculate cognitive failure probability (CFP)
The equation above was used to calculate the human error probability (CFP) for each task.

The following table shows the task analysis, including the responsible person, relevant cognitive activity, cognitive function and calculated CFP for the case of ship collision during the strait passage (He et al., 2008; Apostolakis et al., 1988).

Step 7: Identify significant human error: 
Analyse which of the sub-tasks of the ship strait passage need further detail, and whether there are any control measures to mitigate human error.

In view of the analysis, sub-task 2.2 (Proceed inside the separation line) has the highest cognitive error rate: the OOW (Officer on Watch) and helmsman failed to monitor the ship's position. Likewise, sub-task 3.3 (Increase ship manoeuvring ability by consuming diesel oil and steering engines) is another critical activity where human error increased: the Master failed to enhance the ship's manoeuvrability, consuming fuel oil (instead of diesel oil) during the strait passage.

 

 

FS #M03/B: LOAT - Levels of Automation Taxonomy

 

Background

The Levels of Automation Taxonomy (LOAT) classifies and compares different kinds of automation. It was originally developed in the context of the SESAR Programme by analysing 26 different automated functionalities for air traffic control and the flight crew. The taxonomy is grounded in the seminal work of Sheridan and Verplanck (1978); a new model was developed to overcome limitations encountered when applying that theory in practical situations, and this formed the basis for a set of Automation Design Principles. The LOAT has two main purposes:

to provide potential design solutions with a lower/higher level of automation;

to help classify automation examples and provide specific human-centred suggestions.

 

 

 

Key Concepts

The LOAT brings into discussion important matters related to systems’ innovation.

Automation is not a matter of simply automating a task entirely or not; it is about deciding the extent to which the task should be automated.

Introducing automation brings qualitative shifts in the way people work, rather than mere substitution of pre-existing human tasks.

The classification of different levels of automation recognises the various ways in which human performance can be supported by automation.

The “optimal” level of automation in a specific task context is about matching the automation capabilities to a number of operational situations, while increasing the overall performance in efficient human-machine cooperation.

It provides potential alternatives for designing the human-machine interaction, taking full advantage of the technical solution.

 

 

 

Benefits

The LOAT can be used to:

Compare different design options and determine the optimal automation level for each operational context. The best level of automation depends on the capabilities of the automation itself and on the operational context in which it is embedded. You can use the LOAT to orient yourself in these choices.


Understand the impact of automation, both positive and negative, on the system. The highest level of automation is not always the best one. Finding the right level helps to make sure that you are taking the full benefit from it, without being negatively affected by side effects, such as nuisance alerts or erroneous directions.


Get insight into how to prevent human performance issues and take full benefit of available technical solutions in various domains. A common taxonomy for different domains means a shared approach to automation design: developed in the Air Traffic Management domain, the LOAT is applicable to all domains in which automation plays an important role in sustaining human performance.


Classify existing implementations and derive lessons to improve automation design. The LOAT classifies the 'nuances' of automation, highlighting the possibilities open to design. Modifying existing tasks or introducing new ones involves the use of different psychomotor and cognitive functions, which implies the adoption of different automation solutions.

 

 

 

How It Works

The LOAT is organized as a matrix:

I. In the horizontal direction: the 4 generic functions, derived from a four-stage model of the way humans process information:
1) Information acquisition: Acquisition and registration of multiple sources of information. Initial pre-processing of data prior to full perception and selective attention;
2) Information analysis: Conscious perception, manipulation of information in working memory. Cognitive operations including rehearsal, integration, inference;
3) Decision and action selection: Selection among decision alternatives, based on previous information analysis. Deciding on a particular (‘optimal’) option or strategy;
4) Action implementation: Implementation of a response or action consistent with the decision previously made. Carrying out the chosen option.


II. In the vertical direction: each cognitive function groups several automation levels (between 5 and 8). All automation levels start from a default level '0' - corresponding to manual task accomplishment - and increase to full automation.
 

The initial methodology that has been defined for the LOAT incorporates 4 steps: 1) Identify the automated tool to be classified; 2) Determine which function is supported (can be more than one); 3) Determine the relevant cluster of automation levels; 4) Identify the specific automation level and consider the design principles associated with it. The main LOAT goals are:

to efficiently identify the most suitable human-automation design;
to provide specific and relevant human factors recommendations, based on the classification of already developed automation examples.
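As an illustration of these four steps, the sketch below encodes a tiny, assumed slice of the LOAT matrix and classifies a hypothetical tool against it; only level A2 is taken from the example later in this factsheet, and the remaining labels are paraphrased assumptions rather than the official taxonomy:

```python
# An illustrative slice of the LOAT matrix: cognitive function -> levels.
# Level '0' is always manual task accomplishment; levels increase to full automation.
LOAT_SLICE = {
    "Information Acquisition": {
        "A0": "manual information acquisition",
        "A1": "artefact-supported information acquisition",
        "A2": "low-level automation support of information acquisition "
              "(with user filtering and highlighting of relevant info)",
        # ... further levels up to full automation
    },
    # "Information Analysis", "Decision and Action Selection" and
    # "Action Implementation" would be encoded the same way.
}

def classify(tool: str, function: str, level: str) -> str:
    """Steps 2-4: record which function a tool supports and at which level."""
    return f"{tool}: {function}, level {level} - {LOAT_SLICE[function][level]}"

print(classify("RTC surveillance cameras", "Information Acquisition", "A2"))
```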

 

 

Illustrative Example

Aviation Domain
To see how the LOAT can be implemented in practice we should consider an example of human-machine interaction in industry.

SCENARIO: The Remote (or virtual) Tower uses an array of sensing and communication technologies on the airfield and in the surrounding airspace, while the actual control functions are provided by certified controllers. In terms of automation, remote towers come with the following technologies:

surveillance is generally provided by an array of video cameras, both conventional and infrared, to replace and improve upon the controller's out-the-window (OTW) view.

communications between controllers at the RTC and aircraft that use the airport are primarily via radio.

displays typically include an “out-the window” display of the airfield, data from radar (if available) and any other surveillance method. They may also include equipment indicating weather conditions (such as a wind rose), a compass rose, and geographical overlays.

controls would include microphones, lighting controls, camera controls, etc.

other technologies include object tracking and alerting (in the air and on the ground) and features such as automatic runway intrusion alerts.

As in a conventional tower, all surveillance and communications information is transmitted to one or more controller positions in the Remote Tower Center (RTC).

ANALYSIS: Following the aforementioned steps, we can use the LOAT to identify the level of automation.

Step 1: Identify the automated tool to be classified. From the technologies of the RTC mentioned in the SCENARIO we choose the one we want to classify: e.g. Surveillance

Step 2: Determine which function is supported. We should then consider the main purposes of surveillance cameras: i.e. they collect images. Considering this, the function supported is: Information Acquisition (A).

Step 3: Determine the relevant cluster of automation levels. Since the cameras collect and present the image to the user, but only on the user's demand, the relevant cluster is: supported by automation.

Step 4: Identify the specific automation level. The user is still in control of deciding whether the information received from the cameras is relevant or not, and can filter the information received, so the automation level is: with user filtering and highlighting of relevant info (A2).

In this way the LOAT can be used to decide upon the optimal level of automation suitable for the user and can help to establish whether or not it is a good choice to automate the system in the first place.

 
 

FS #T03/B: HEART Human Error Assessment and Reduction Technique

Background

HEART is a first-generation technique developed by Williams for the nuclear industry and used in the field of Human Reliability Assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and thereby improve the overall level of safety.

 

 

Key Concepts

The HEART method is based upon the principle that every time a task is performed there is a possibility of failure, and that the probability of this is affected to varying degrees by one or more Error Producing Conditions (EPCs) (e.g. distraction, tiredness, etc.).

Within HEART, those factors which have a significant effect on performance are of greatest interest. These conditions can then be applied to a "best-case-scenario" estimate of the failure probability under ideal conditions to obtain a final error probability.

By forcing consideration of the EPCs potentially affecting a given procedure, HEART also has the indirect effect of providing a range of suggestions as to how the reliability may therefore be improved and hence minimising risk.

Benefits

On the positive side, HEART presents a long list of benefits:

HEART is rapid and straightforward to use.
HEART has a small demand for resource usage.  
It has a good track record in several industries such as nuclear, healthcare, rail, maritime, offshore and aviation industries. 
It can be used early on (i.e. at the development stage).
Several validation studies have been conducted, concluding that this technique produces estimates equivalent to those obtained by applying more complicated methods.

On the negative side, HEART also presents several well-known disadvantages:

The EPC data has never been fully released and it is therefore not possible to fully review the validity of it.
HEART relies to a high extent on expert opinion, first in the point probabilities of human error, and also in the assessed proportion of EPC effect. The final Human Error Probabilities are therefore very sensitive to these judgements.

 

 

How It Works

HEART is an Additive Factors Model (AFM) that applies multiplicative factors in human reliability calculations. The technique works under the assumption that any estimated reliability of task performance may be modified according to the identified EPCs.

The HEART methodology generally consists of two assessment process steps (a qualitative and a quantitative step).

The qualitative step starts by classifying the generic task, based on the accident data report. After the generic task has been identified, it is broken down into smaller parts and the applicable Error-Producing Conditions (EPCs) are identified.

Once this task description has been constructed, the second step of the methodology is the quantitative one, wherein the Human Error Probability (HEP) is calculated, usually by consulting local experts. To obtain the HEP using the HEART methodology, it is first necessary to obtain the nominal human unreliability, which belongs to the generic task. When the EPCs have been determined, the multiplier for each EPC is obtained.
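A minimal sketch of this quantification, using the standard HEART combination rule in which each applicable EPC contributes a factor of (multiplier - 1) x assessed proportion of affect + 1; the numbers in the usage line anticipate the CARA calculation shown later in this factsheet:

```python
def heart_hep(nominal_unreliability: float,
              epcs: list[tuple[float, float]]) -> float:
    """HEART human error probability.

    nominal_unreliability: the generic task's nominal human unreliability.
    epcs: (multiplier, assessed_proportion_of_affect) per applicable EPC,
          each contributing the factor (multiplier - 1) * proportion + 1.
    """
    hep = nominal_unreliability
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1.0) * proportion + 1.0
    return hep

# Matches the worked CARA calculation later in this factsheet:
# 0.003 x [(11 - 1) x 0.2 + 1] = 0.009
print(heart_hep(0.003, [(11.0, 0.2)]))  # 0.009
```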

 

 

 

Illustrative Example

Aviation domain
This example is derived from an application of CARA (Controller Action Reliability Assessment), an adaptation of HEART specific to the Air Traffic Control domain. In this case CARA is used to assess human performance for anticipated landing clearances during low visibility.

During periods of low visibility, a support system was being designed to provide controllers with information that runways were clear and available for aircraft to land on when they might not be visible from the tower. One key area around the runway in this process is the 'Obstacle Free Zone' (OFZ), for which consideration was being given to the form of alert a controller would receive if the following hazard occurred: "The aircraft which landed and is moving clear of the runway stops beyond the trigger line (which enables the next aircraft to start to approach the runway) but not clear of the OFZ, with the landing aircraft less than 200 feet above threshold."

Simple event trees were developed and focused on human error probabilities of two key ATC tasks:

1. Controller identifies the aircraft which has landed is stationary within the OFZ.
2. A warning is issued to the landing aircraft that the aircraft is stationary in the OFZ.

To populate the event tree, CARA values were calculated for these two tasks. The values were produced by two human factors experts and drew on human reliability analysis and event trees from previous hazard analysis work. For task 1, three potential designs were considered and values calculated separately:

1. Use of an audible and visual alert
2. Use of a visual alert only
3. Use of no alert, and relying on controller scanning patterns

Examples of calculations of the human reliability are presented below.

Issue warning to aircraft 2 (Aircraft approaching runway)
Task Description and Assumptions: having identified that aircraft 1 (the aircraft that has already landed) is stationary, the controller must identify and warn aircraft 2. The error under consideration is that the controller does not warn aircraft 2, having identified that aircraft 1 is stationary.

 

Results: HEP = 0.003 x [(11 - 1) x 0.2 + 1] = 0.003 x 3 = 0.009

The analysis provided the following quantified outcomes for the Hazard using the event trees for the three design scenarios:

Use of an audible and visual alert - 9.4E-03
Use of a visual alert only - 1.4E-01
Use of no alert, and relying on controller scanning patterns - 3.1E-01

The analysis identified that much better reliability would be achieved using the first option, an audible and visual alert, as there is no reliance on the controller scanning the display to identify the hazard. It was beyond the scope of this work to validate and take these data forward into the system design.

 

Maritime domain
In this illustrative example, the case of a ship collision during passage through a waterway (narrow canal) is considered. Human factors are analysed to systematically predict human error for each task. The collision occurred between a tanker ship and a livestock ship during the narrow canal passage. The tanker, steering at 11.7 knots on a course of 244° in the Marmara Sea in the direction of the Dardanelles, contacted the port stern quarter of the livestock ship with her starboard bow, within 20 NM of the Dardanelles, at 06:25 LT on 01.10.2013. The event developed as follows: the tanker overtook a bulk carrier while steering close to the middle of the separation scheme in the direction of the Dardanelles. The livestock ship was observed steering on the starboard bow of the bulk carrier, in the direction of the Dardanelles and close to the outermost line of the separation scheme, at a range of about 0.6 NM from the tanker. Another ship (a container ship) was also observed steering on the starboard bow of the tanker within a range of 1 NM.

The investigation of the ECDIS records and the statements of the officer on watch revealed that, after overtaking the bulk carrier at a range of 0.3 NM and heading towards the container ship, the tanker contacted the livestock ship, which was steering on the outermost line of the separation scheme. Even though the tanker made a small course alteration to port, the contact could not be prevented because the livestock ship suddenly made a large course alteration to port. The root causes were attributed to mental fatigue, inadequate manning on the bridge, inadequate standards, lack of communication, poor risk evaluation, delayed collision avoidance manoeuvring, limited time, lack of knowledge and inadequate checking.

The VTS (Vessel Traffic Services) was informed about the collision. The weather was partly cloudy and the sea state was calm at the time of the collision. The time was early morning, and the officer (OOW) was keeping the navigational watch. The Master was informed just before the collision, and the helmsman was called by the Master before the collision. A hierarchical task analysis was performed to understand the relevant activities/tasks during the ship's canal passage. Then, HEART was applied to quantify the human errors. The following table shows the details, including tasks, responsible person, GTT, EPC, APOE and calculated HEP, for the case of ship collision during the narrow canal passage.

 

FS #T02: TRACEr

 

Background

TRACEr and Human Error Reduction in Air Traffic Management (HERA) are Human Error Identification (HEI) techniques developed specifically for use in Air Traffic Control (ATC). They provide means of classifying errors and their causes retrospectively, and of proactively predicting error occurrence. TRACEr was developed in 1999 by National Air Traffic Services (NATS), UK. The HERA technique was then developed by EUROCONTROL in 2003, using the NATS technique as its main input. The aim is to determine how and why human errors contribute to incidents, and thus how to improve human reliability within a high-reliability system.

 

 

 

Key Concepts

The TRACEr technique is based on the human information processing paradigm which claims that the human mind is like a computer or information processor. It contains eight interrelated taxonomies that altogether describe:

The context of the incident – using:
1. Task Error – describing controller errors in terms of the task that was not performed satisfactorily.
2. Information – describing the subject matter or topic of the error.
3. Performance Shaping Factors (PSF) – classifying factors that have influenced/ could influence the controller’s performance, aggravating the occurrence of errors or assisting error recovery.

The cognitive background of the production of an error – using:
4. External Error Modes (EEMs) – classifying the external and observable manifestation of the actual or potential error.
5. Internal Error Modes (IEMs) – linked specifically to the functions of the cognitive domains and describing which of the cognitive functions failed/ could fail and in what way.
6. Psychological Error Mechanisms (PEMs) – describing the psychological nature of the IEMs within each cognitive domain.
7. Error detection – describing the error using specific keywords.

The incident recovery – using:
8. Error correction – describing how the error was mitigated.

 

 

 

 

Benefits

• The TRACEr technique provides feedback on organisational performance before and after unwanted events. Its main strengths are comprehensiveness, structure, acceptability of results and usability.
• It requires moderate resources.
• It offers a possibility to derive error reduction measures.
• It helps to determine what errors could occur, their causes and their relative likelihood of recovery.
• It facilitates the identification and classification of human errors in relation to Human-Machine Interaction (HMI).

 

 

 

How It Works

TRACEr can be used in two different ways: for a retrospective analysis (i.e. following an incident or accident) or for a predictive analysis, anticipating possible errors in the context of a risk analysis. For the sake of brevity only the retrospective use is illustrated here, in two different instances: first the original one, developed in the air traffic control domain, then a modified one adapted to the maritime domain.

Aviation Application
The process for applying TRACEr retrospectively in the ATC domain can be represented as a flowchart identifying six different steps.

 

1. Analyse the incident into 'error events', identifying the task steps in which an error was produced.
2. Task Error classification: classify the error with the Task Error taxonomy.
3. Internal Error Modes (IEMs) classification: decide which cognitive function failed.
4. Psychological Error Mechanisms (PEMs) classification: identify the psychological cause.
5. Performance Shaping Factors (PSFs) classification: select any PSFs related to the error under analysis.
6. Error detection and error correction: identify errors/corrective actions by answering four questions.

Once the analyst has completed step number six, the next error should be analysed. If there are no more “error events” then the analysis is finished.
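A minimal sketch of a record for one classified 'error event', following steps 2-6 above; the field values anticipate the aviation example later in this factsheet, and the structure itself is an assumption rather than part of the TRACEr specification:

```python
from dataclasses import dataclass

@dataclass
class ErrorEvent:
    """One 'error event' classified with the retrospective TRACEr steps 2-6."""
    description: str
    task_error: str                         # step 2: Task Error taxonomy
    internal_error_mode: str                # step 3: which cognitive function failed
    psychological_mechanism: str            # step 4: psychological cause
    performance_shaping_factors: list[str]  # step 5: PSFs related to the error
    detected: str = ""                      # step 6: how the error was detected
    corrected: str = ""                     # step 6: how the error was corrected

# Classification of the ATCO error from the aviation example below:
event = ErrorEvent(
    description="ATCO fails to notice wrong readback of cleared flight level",
    task_error="Controller-Pilot communication error",
    internal_error_mode="Readback error",
    psychological_mechanism="Expectation bias",
    performance_shaping_factors=["Traffic complexity"],
    detected="STCA alert",
)
```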

Maritime Application
The TRACEr methodology was adapted to the maritime sector by the World Maritime University and named TRACEr-Mar. It consists of nine coding steps or classification schemes that can be divided into three main groups, describing the context of the incident, the operator context and the recovery from the incident. The overall process can be summarized with a flowchart, and the table below provides a brief summary of the nine TRACEr-Mar steps.

 

 

Illustrative Example

 

Aviation domain

To see how the technique can be implemented in practice we may consider the following ATCO-Pilot communication scenario which resulted in a loss of separation minima.

SCENARIO:  
A very busy day; a young and inexperienced ATCO, tired because of lack of sleep. The ATCO issues a clearance to aircraft A: "ABC123 descend FL240"; Pilot A reads back: "ABC123 descend FL220". The ATCO issues a clearance to aircraft B: "XYZ789 climb FL230"; Pilot B reads back: "XYZ789 climb FL230".

STCA starts (Short Term Conflict Alert – ground-based safety net intended to assist the controller in preventing collision between aircraft). ATCO: “ABC123 avoiding action turn right.”

A representation of this scenario is illustrated in the figure below.

ANALYSIS focused on the first 'error event' (ATCO’s error): 

Step 1: Develop the sequence of events and identify error events.
. Controller fails to notice error
. Controller is late to respond to STCA
. Controller gives indication in wrong direction

Step 2: Task Error classification 
. Controller Pilot communication error

Step 3: IEM classification
. Readback error

Step 4: PEM classification
. Expectation bias

Step 5: PSF classification
. Traffic complexity

Step 6: Error detection and Error correction

 

 

Maritime domain 

SCENARIO: In December 2012, the dry cargo vessel Beaumont ran aground on Cabo Negro on the north Spanish coast while on passage from Coruna to Aviles. At the time of the grounding she was proceeding at full speed, and the Officer of the Watch (OOW) was asleep. An inspection of the vessel's internal compartments quickly established that, despite being driven hard aground on a rocky ledge, there was no breach of the hull. The MAIB investigation identified that the OOW had fallen asleep soon after sending his night lookout off the bridge. Available bridge resources that could have alerted the crew and/or awoken a sleeping OOW were not used, resulting in Beaumont steaming at 11.5 knots with no one in control on the bridge for over an hour.

The table below presents an example of applying the TRACEr-Mar taxonomy to the task error "Officer of the Watch fell asleep on the bridge".

 

FS #T01 Human HAZOP - Human Hazard and Operability Study

 

Background

HAZOP (Hazard and Operability Study) was developed in the early 1970s by Professor Trevor Kletz and his colleagues at the UK chemical company ICI, as a means of safety assurance of chemical installations. It is essentially a ‘what-if?’ approach, using design and operational experts to rigorously analyse a design or an operational procedure to determine what could go wrong, and how to prevent it going wrong in the future. HAZOP quickly spread to other industries as a robust means of checking a design for safety problems. In the 1990s it transitioned to the consideration of human error and the design of human machine interfaces, for example in the air traffic industry. 

The HAZOP process is exhaustive and has been shown many times to be effective. It does not place high demands on analytic expertise as certain other techniques do, instead offering a structured way of interrogating a design. It results in a table highlighting any potential vulnerabilities (these can be ranked or weighted in terms of severity if they were to occur), and design or operational remedies to mitigate the risk.

Human HAZOP focuses on safety, but it also identifies design issues related to productivity and system performance, and so can also lead to efficiency and effectiveness enhancements. This is reflected in the word ‘Operability’ in the title of the technique. Designers often find HAZOP approaches to be an excellent means of quality assurance for their designs.

 

 

Key Concepts

Human HAZOP is a group method, rather than a single-analyst approach such as TRACEr or STAMP.

It relies on 'structured brainstorming' by a small group (e.g. 6-8) of design, operational and Human Factors experts, led by a HAZOP chairman.

It requires a representation of the system, procedure or interface being evaluated. Typically design documents and drawings (or photos if operational) are present, along with a task analysis (typically a Hierarchical Task Analysis or Tabular Task Analysis) which details how the human operators are intended to interact with the system.

The HAZOP study group proceeds through each step of the procedure or task analysis and considers potential deviations from the expected behaviour, prompted by HAZOP guidewords.

The consequences of deviations from the intended functioning of the system are considered, as well as existing safeguards and recovery means.

Required safety recommendations are stated and recorded by the HAZOP secretary.

HAZOPs are formally recorded and logged.

 

 

 

 

Benefits

HAZOP has survived a long time, and has a long list of benefits:

Human HAZOP provides systematic and exhaustive design review, and can lead to the discovery of new hazards.

It requires limited technical training – it is an ‘intuitive’ method.

The use of a team gives a range of viewpoints.

It has a good track record in several industries.

It is versatile, and can be applied to all sorts of design formats.

It can be used early on (e.g. at the concept design stage).

It is good at finding credible vulnerabilities, including violations.

It has high acceptance by both designers and operators, as they both input to the HAZOP.

It creates a good ‘bridge’ between design and operations.

 

On the negative side, there are several well-known disadvantages:

HAZOP, especially if applied to a complete system, is resource intensive.

The GIGO (garbage in, garbage out) principle applies. If, for example, inexperienced operational people are used as experts, don’t expect too much.

HAZOP tends to concentrate on single deviations or failures – it does not look at how different failures might interact – only risk modelling does this.

 

 

How It Works

Human HAZOP works by applying a set of guidewords to the chosen design representation, be it design drawings and documentation, task analyses, or more dynamic visualisations (e.g. working prototypes) of the system under investigation.

The guidewords are shown in the table below. As an example, the most commonly used guideword is 'No', meaning the human operator does not do something when required (e.g. failing to lower the landing gear). Guidewords such as 'More' or 'Less' could be applied to entering the correct weight into software used to manage balance (aviation) or ballast (maritime). 'Reverse' is rarer but can still happen, as in the AF447 air crash where the co-pilot manoeuvred the controls in the wrong direction, possibly due to startle response and inexperience, putting the plane into an irrecoverable stall; in maritime it could mean setting thrusters in the wrong direction when docking. 'Early' and 'Late' could relate to the use of flaps or deciding when to descend an aircraft, or when to turn to pass another vessel (maritime). The most interesting guideword is 'Other than', wherein a completely unintended action is the result. This requires operational experience rather than fanciful imagination, and typically in a HAZOP only one or two (or none) such possibilities are identified.

In the HAZOP session, the HAZOP Chairman leads the group through each step of the activity associated with the interface or procedure, and each guideword is considered in turn. 

The results of a HAZOP are logged in a HAZOP table by the HAZOP secretary. Mitigations may not always be apparent, in which case actions are given to HAZOP team members or their departments to develop mitigations and report back to the group or design authority to ensure they are sufficient and mitigate the hazard.
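A minimal sketch of the guideword sweep that generates the rows of the HAZOP table; the guidewords are those discussed above, while the task step and the empty columns are illustrative assumptions:

```python
GUIDEWORDS = ["No", "More", "Less", "Reverse", "Early", "Late", "Other than"]

def hazop_rows(task_steps: list[str]) -> list[dict]:
    """Step through the task analysis, prompting one deviation per guideword.

    In a real session the HAZOP group fills in the deviation, consequences,
    safeguards and recommendations for each row; here they are left empty.
    """
    rows = []
    for step in task_steps:
        for guideword in GUIDEWORDS:
            rows.append({
                "task step": step,
                "guideword": guideword,
                "deviation": None,
                "consequences": None,
                "safeguards": None,
                "recommendations": None,
            })
    return rows

for row in hazop_rows(["Lower landing gear"])[:2]:
    print(row["task step"], "-", row["guideword"])
```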

 

 

Illustrative Example

Aviation domain
The new Air Traffic Controller systems shift the focus away from one primary form of information presentation and usage (paper) to a completely different presentation medium (computer screen).  Additionally, although both these media are visual in nature, the computerisation of flight progress strips paves the way for automated ‘up-linking’ of messages to the cockpit of aircraft, reducing the need for oral communication for a number of tasks. This transmission of information is known as ‘data-link’. Given the high performance and proven nature of the current system, it is sensible to evaluate the transition to a new system interface.  A HAZOP analysis was therefore conducted on the proposed HMI (Human Machine Interface) of the new (electronic) flight progress information system. 

The HAZOP team that assessed the implications of the new interface was made up of three designers, one air traffic controller and two human factors specialists. The study identified a number of 'vulnerabilities' in the prototype system and 'opportunities' for error that needed to be designed out or worked around (e.g. via procedures and training). A total of 87 recommendations, covering changes to the HMI, improved feedback and training/procedure improvements, were generated in the three HAZOP sessions.

The HAZOP approach proved surprisingly useful and productive of agreed changes in interface design. Moreover, since the designers were not only present but actively involved, any design changes they thought necessary were simultaneously accepted for implementation. The HAZOP therefore had a very fast and effective impact on the design process. To illustrate the meeting output, part of the study is shown in the table below.

  

Maritime domain

In shipping, the techniques used to represent the geophysical characteristics of the seabed have shifted the focus away from paper-based nautical charts to computerized electronic charts such as sonar charts. A sonar chart is an HD bathymetry map featuring detailed bottom contours for marine areas and lakes, useful for increasing awareness of shallow waters and for locating fishing areas at any depth level.

Although both techniques are visual in nature, by computerising the seabed mapping it is possible to increase the visualization detail and to reduce the need for communication and coordination amongst crew members. Therefore, a Human HAZOP analysis can be conducted on the proposed human-machine interface of the new electronic system. The outcomes of this study were used to identify a number of 'vulnerabilities' in the proposed system and 'opportunities' for error that needed to be designed out or worked around (e.g. via training in the use of sonar charts). To illustrate the HAZOP output, an example is shown in the table below.

 

Would you like to know more?