Many articles and blogs describe how ChatGPT, an AI language model from OpenAI, assists with brainstorming, problem solving, reporting, and organizational communication. Within aviation-based SMS applications, ChatGPT may become the research assistant that many safety managers come to rely on. While industry experts continue to design the next generation of AI-backed SMS software, safety managers can already use ChatGPT to help them treat safety issues and audit findings and classify the hazards contained in their SMS reporting.
I must warn you that ChatGPT is not an end-all, autonomous safety decision-making tool. Unless prompted otherwise, it draws on its own knowledge base to shape responses, and its current depth of knowledge across the aviation domain is lacking. Despite the huge inroads made in machine learning, the full potential of this processing is not even close to being fulfilled in the aviation domain.[i] However, ChatGPT becomes a strategic time saver when used by experienced safety managers for the pre-research modeling and synthesis of issues and events contained in SMS reports.
What are Prompts?
To interact with ChatGPT effectively, users provide it with prompts. Prompts are typed instructions or queries given to ChatGPT to receive responses or information. A prompt is a text string or question that sets the context and guides the conversation with the machine. It can be as simple as asking a question or providing a statement to start a conversation. For example:
“Give me tips for effectively communicating with my flight crews.”
“Explain the importance of flight risk assessments. What is the most important element?”
ChatGPT responds to these prompts by generating text that continues the conversation or fulfills the requested task. It leverages its training data to provide coherent and contextually relevant responses. The quality and accuracy of its responses depend on the clarity and specificity of the prompts provided. Knowledgeable users can refine their prompts and iterate on them to achieve better results and more meaningful interactions with the model.[ii]
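The refinement habit described above can be sketched in code. The helper below is purely hypothetical (it is not part of any OpenAI library); it only illustrates how adding a role and context turns a vague question into a specific prompt:

```python
# Hypothetical sketch: build_prompt() is an illustrative helper, not an
# OpenAI API call. It shows how role + context sharpen a vague request.

def build_prompt(role: str, context: str, task: str) -> str:
    """Compose a specific, context-rich prompt from three parts."""
    return (
        f"You are {role}. "
        f"Context: {context}. "
        f"Task: {task}"
    )

vague = "Explain flight risk assessments."

refined = build_prompt(
    role="an aviation safety manager",
    context="a helicopter operator running an SMS",
    task="Explain the importance of flight risk assessments and "
         "identify the single most important element.",
)
```

The refined string carries the same question as the vague one, but the added role and context steer the model away from generic "babble" toward the operator's situation.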
To demonstrate how a safety manager might use ChatGPT prompts to help process a high-risk safety report, I pulled an actual maintenance error from old safety report data. Using the free version of ChatGPT 3.5, I tried to complete the old report's management analysis section with the prompts below. A highly experienced maintenance manager may find the results elementary, but even so, one new facet contributing to the actual cause may surface. The intent is to use ChatGPT's broad net of responses, while refining prompts to quiet its "babble," to bring a new richness of data to many current SMS platforms.
Problem Scoping and Synthesis
My mechanic inadvertently installed the copilot’s cyclic controls backwards while preparing our ECxxx helicopter for external load operations. Can you help me complete a root cause analysis of how the mechanic could have committed the error? I need to scope my safety inquiry.
Certainly! There could be several factors contributing to this error. It might involve ambiguous or inadequate instructions, distractions during the installation process, lack of proper labeling or checks in place, or even a lack of standardized procedures within the maintenance process. Understanding these aspects can help prevent similar errors in the future.
Fish-bone Cause and Effect
Could you help map a fishbone diagram labeling potential causes?
Absolutely! The fishbone diagram, also known as an Ishikawa diagram, can help in identifying potential causes. In this case, the major categories could be:
1. Human Factors:
Lack of training on specific model variations
Fatigue or distraction during the installation process
Misinterpretation of instructions or diagrams
2. Procedural Factors:
Inadequate labeling of parts or controls
Absence of a double-check system in place
Ambiguity in the installation manual or documentation
3. Equipment/Tool Factors:
Faulty or misleading markings on the controls
Incompatibility between parts or tools used during installation
Lack of standardized components across different models
4. Organizational Factors:
Time pressure to complete the task
Insufficient oversight or supervision during the installation process
Absence of a robust quality assurance process for complex installations
By examining these potential causes within each category, it can help identify areas for improvement in the process to prevent such errors in the future.
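Responses like the fishbone above are easier to carry into an SMS platform if they are captured in a structured form rather than pasted as free text. A minimal sketch, using a plain dictionary (the structure is mine, not a feature of any SMS product):

```python
# The fishbone categories above, recorded as a dictionary so the causes
# can be queried, counted, or exported. An illustrative sketch only.

fishbone = {
    "Human Factors": [
        "Lack of training on specific model variations",
        "Fatigue or distraction during installation",
        "Misinterpretation of instructions or diagrams",
    ],
    "Procedural Factors": [
        "Inadequate labeling of parts or controls",
        "Absence of a double-check system",
        "Ambiguity in the installation manual",
    ],
    "Equipment/Tool Factors": [
        "Faulty or misleading markings on the controls",
        "Incompatibility between parts or tools used",
        "Lack of standardized components across models",
    ],
    "Organizational Factors": [
        "Time pressure to complete the task",
        "Insufficient oversight or supervision",
        "Absence of a robust quality assurance process",
    ],
}

def all_causes(diagram: dict) -> list:
    """Flatten every candidate cause for review or export."""
    return [cause for causes in diagram.values() for cause in causes]
```

Once structured this way, the candidate causes can be tallied, tagged against the operator's own hazard taxonomy, or attached to the report record.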
What would be a 5-why technique to uncover a root cause?
Sure, here's a 5-why analysis to uncover the root cause of a backwards installation of the copilot’s cyclic controls:
1. Why were the copilot’s cyclic controls installed backwards?
Because during the installation, the mechanic connected the controls incorrectly.
2. Why did the mechanic connect the controls incorrectly?
Because the mechanic misinterpreted the orientation or instructions for installation.
3. Why did the mechanic misinterpret the orientation or instructions?
Because the instructions provided were unclear or lacked specificity for the correct orientation.
4. Why were the instructions unclear or lacking specificity?
Because the manual for this variation or the guidelines for installing external load operations were not comprehensive or detailed enough.
5. Why were the guidelines or manuals not comprehensive or detailed enough?
Perhaps there's a lack of standardized procedures or specific training for installing variations like external load operations in the maintenance manuals.
By asking "why" iteratively, it helps to delve deeper into the layers of causation, eventually pinpointing the underlying issues within the system or procedures.
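The 5-why chain above reduces naturally to an ordered list of question-and-answer pairs, where the final answer is the candidate root cause. A small sketch of that reduction (the data structure is illustrative, not part of any SMS tool):

```python
# The 5-why chain as ordered (question, answer) pairs; the last answer
# is the deepest cause the analysis reached. Illustrative sketch only.

five_whys = [
    ("Why were the controls installed backwards?",
     "The mechanic connected the controls incorrectly."),
    ("Why were they connected incorrectly?",
     "The mechanic misinterpreted the orientation or instructions."),
    ("Why were they misinterpreted?",
     "The instructions lacked specificity for the correct orientation."),
    ("Why did the instructions lack specificity?",
     "The manual was not comprehensive for this variation."),
    ("Why was the manual not comprehensive?",
     "No standardized procedures existed for external load installations."),
]

def candidate_root_cause(chain: list) -> str:
    """The last 'because' in the chain is the deepest cause reached."""
    return chain[-1][1]
```

Keeping the chain as data means the safety manager can revise a single "because" after further investigation (as happens in the next prompt) without rewriting the whole analysis.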
Further investigation revealed that the helicopter’s cyclic control mount design permitted either forward or backward placement. Can we drill down any further on the root cause to create future prevention?
Certainly! Given that the cyclic control mount design allowed for either forward or backward placement, let's delve deeper into potential root causes for future prevention:
1. Design Specification:
Were there inadequate design constraints that allowed for backward placement without clear indications or limitations?
2. User Interface and Ergonomics:
Was there a lack of clear markings or indicators on the cyclic control mount itself, leading to confusion during installation?
3. Testing and Validation:
Were the design variations adequately tested and validated to ensure foolproof installation without the possibility of backward placement?
4. Feedback Loop:
Is there a mechanism for gathering feedback from mechanics or users regarding the design's usability or potential confusion during installation?
5. Standardization and Best Practices:
Are there industry-wide standards or best practices for cyclic control mount designs that should be integrated or adhered to in future designs?
Examining these aspects can help identify improvements in the design process, user interface, testing protocols, and feedback mechanisms to prevent such errors in future designs or variations of helicopter controls.
Change Management Planning
Can you help me draft a change management plan to implement policies and procedures for steps 2, 3, and 4 (user interface, testing, feedback loop)?
Sure, here's a general outline for a change management plan to implement policies and procedures for the user interface, testing, and feedback loop:
Assessment and Planning:
Identify Stakeholders: Determine key stakeholders involved in the design, testing, and installation processes.
Current State Analysis: Assess current practices and deficiencies related to user interface, testing, and feedback mechanisms.
Goal Definition: Define specific, measurable objectives for improving user interface, testing protocols, and feedback loops.
Policy and Procedure Development:
User Interface Policy: Develop guidelines for clear and intuitive markings/indicators on cyclic control mounts to prevent installation errors.
Testing Protocol: Establish a comprehensive testing and validation protocol for all design variations to ensure foolproof installation.
Feedback Loop Procedure: Create a systematic process for gathering feedback from mechanics/users regarding design usability and installation concerns.
Implementation:
Training and Communication: Develop training modules to educate stakeholders on new policies and procedures.
Rollout Plan: Implement phased deployment of these policies and procedures, starting with pilot groups or specific departments.
Resource Allocation: Allocate resources (time, personnel, tools) required for successful implementation.
Monitoring and Evaluation:
Performance Metrics: Define key performance indicators (KPIs) to measure the effectiveness of the new policies and procedures.
Regular Audits: Conduct regular audits or assessments to ensure compliance and identify areas needing improvement.
Feedback Integration: Establish a system to integrate gathered feedback into continuous improvement cycles.
Sustainment and Continuous Improvement:
Training Updates: Provide ongoing training and updates on policies and procedures.
Adaptation: Adapt policies and procedures based on lessons learned and evolving best practices.
Recognition and Rewards: Implement a system to recognize and reward stakeholders who contribute positively to the improvement process.
By following these steps, you can systematically introduce and sustain changes that enhance user interface, testing protocols, and feedback loops, thereby reducing the likelihood of installation errors in the future.
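The Monitoring and Evaluation step above ultimately comes down to simple arithmetic on audit findings. A minimal sketch of one such KPI, a compliance rate over pass/fail findings (the checklist items below are invented for illustration):

```python
# Sketch of one KPI from the Monitoring and Evaluation step: the fraction
# of audit findings marked compliant. Checklist items are hypothetical.

def compliance_rate(findings: list) -> float:
    """Fraction of audit findings marked compliant (0.0 to 1.0)."""
    if not findings:
        return 0.0
    passed = sum(1 for f in findings if f["compliant"])
    return passed / len(findings)

audit = [
    {"item": "Cyclic mount markings present", "compliant": True},
    {"item": "Post-installation functional check logged", "compliant": True},
    {"item": "Mechanic feedback form filed", "compliant": False},
    {"item": "Manual revision acknowledged", "compliant": True},
]
```

Here the sample audit scores 0.75; trending this number across regular audits is one way to make the plan's "Performance Metrics" concrete.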
Is ChatGPT the ultimate tool for safety management?
We must remember that comprehensive safety management systems are created and maintained by employees who embrace a sound and active safety culture. While ChatGPT may excel at synthesis and information management, the ultimate toolset for an SMS involves a suite of specialized software, custom data collection and analytics, training platforms, and effective communication systems. Combining these tools ensures a holistic approach to managing safety culture and driving organizational performance.
[i] Amin, N., Yother, T., Johnson, M., & Rayz, J. Exploration of Natural Language Processing (NLP) Applications in Aviation. Collegiate Aviation Review International, 40(1), p. 210. Retrieved from http://ojs.library.okstate.edu/osu/index.php/CARI/article/veiw/8564/7733.