Challenges

Well Paul, a worthy initiative for sure. At least you got me curious, particularly about how we are going to get to a useful forecast. And also about how we will get investments in effort (and perhaps even money) going in anticipation of future threats. (This is a bit of an Achilles' heel for initiatives such as FAST as well.) These would seem to me to be some of the challenges here. And as challenges make life interesting, this is not a bad thing. Do you have some ideas already?

Michel Piers
NLR Netherlands

About Paul Miller

Captain Paul Miller has developed a unique approach to aviation safety by forecasting safety hazards. Yes, that is right, forecasting safety hazards. Most people think of safety as a study of what has already gone wrong. But SafetyForecast looks forward in time at factors that will negatively impact your business in the future. Then, using this unique and very powerful forecasting tool, you will be able to develop a unique Safety Plan. That is right, a Safety Plan. Your tailored Safety Plan will reduce losses of personnel and property to well below your most extraordinary expectations. That is our guarantee. No other safety consultant uses this method, and no other method has achieved such superior results, driving the mishap rate towards zero in so short a time. SafetyForecast's modest consulting fees will be recovered by your business in a short time through the reduction of just the financial and legal losses alone, losses due to mishaps that you do not often think about. No other consultant will provide a unique Safety Plan that will make your company prosper by saving lives and preventing injuries. Additionally, your company will prosper by reducing the damage and destruction of property. Try SafetyForecast and you will see for yourself how well the Safety Plan works.

3 Responses to Challenges

  1. Paul Miller says:

    A forecast is possible due to one very significant safety theory: things start going wrong in small degrees before they seem to go wrong in large degrees. Call this "Miller's Observation of Safety Hazard Degrees." So if a safety manager is observant of the local operation, and is carefully tracking similar operations worldwide, that safety manager will, it seems, get many hints of problems, of operations gone wrong in small degrees, before a safety hazard of a larger degree occurs locally.
    This observation comes from the completion of many thorough aircraft mishap investigations. What Miller observed was that during every mishap investigation it appeared, either from secondary evidence or more often from the testimony of witnesses, that a similar safety hazard, albeit of some smaller degree, was occurring locally within the operation, but was going unaddressed by any supervisor within the local organization. I can't really be sure why this phenomenon recurred so regularly within an organization, but I have some theories and correlations.
    In any event, even though several keen observers within the organization saw the hazard of a lesser degree, for some odd reason no one either of the rank and file or of the supervisor population ever made it a safety goal to directly address this lesser degree hazard. Why?
    I think in some cases the rank and file are intimidated by the standards staff during regular functions and are often just glad to "get a checkride done and survive it." So they either do not feel qualified to make a suggestion, or they feel disenfranchised by the process and withhold or do not communicate their suggestions and observations. This is often the sign of a dysfunctional organization. Everyone puts on the happy face until the day of the big smoking hole. Then no one seems to know why this happened; then one person seems to come forward and say some version of, "I knew that this would happen. I even tried to tell someone, but there was no one to listen to me."
    How odd, yet how common an experience for mishap investigators. Someone knew beforehand, yet for any number of reasons, the information was never communicated.
    Anyway, this is what I call, “Miller’s Observation of Safety Hazard Degrees.”
    So an observant and communicative safety manager will have the benefit of this person's insight. From this insight, the safety manager can make a forecast of this hazard whilst it still is a hazard of a lesser degree, before it becomes the local smoking hole, and develop a safety plan to combat it.
    Now I know that all of this deductive-based thinking leads us to conclude, "If there is no evidence of a hazard, then how can you make a forecast and a plan based on science and evidence?"
    I agree. That is why I searched for other reasons and chose inductive reasoning. What is the difference and why did I choose it?
    First I chose it because I observed that there always was a person who came forward voluntarily during mishap investigation, who saw this coming. In some cases they did tell someone but no one in authority took any actions to alleviate the hazard. In some cases the person spoke only to contemporaries and not to authorities. In some cases the local standards were wrong all along, but people were either intimidated or there was no functioning local safety communications system in which to place the concern.
    With inductive reasoning, the safety manager only needs evidence of a hazard to conclude a loss will eventually occur. This is opposed to the more common deduction which requires a loss to occur, from which you deduce that a particular factor is a hazard.
    Do you see the difference?
    One example: runway surface friction and weather data available to the flight crew are not current, are incomplete, or are in a format which is difficult to process or readily use. Deduction says, "So what? We haven't had any mishaps at this airfield, so aren't you just Chicken Little crying out that 'the sky is falling'? Wait until we actually have a problem before you ring the alarm."
    On the other hand, induction says, "Due to the observation made that there is a safety hazard of a lesser degree at this field, that is, runway friction data and weather data are either unavailable, not current, or in an unusable format, we need to do something to prevent a future mishap."
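    The contrast between the two reasoning modes can be sketched in a few lines of code. This is only a toy illustration (the class and field names here are my own invention, not part of any real safety tool): the inductive manager opens an action item on any hazard evidence, however small, while the deductive manager acts only after an actual loss has occurred.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        description: str
        is_loss: bool  # True only when an actual mishap has occurred

    @dataclass
    class SafetyManager:
        mode: str  # "inductive" or "deductive"
        action_items: list = field(default_factory=list)

        def review(self, obs: Observation) -> None:
            if self.mode == "inductive":
                # Any evidence of a hazard, of whatever degree, triggers a plan.
                self.action_items.append(f"Mitigate: {obs.description}")
            elif self.mode == "deductive" and obs.is_loss:
                # Only an actual loss triggers a plan.
                self.action_items.append(f"Investigate: {obs.description}")

    # A stream of reports like the runway example above:
    reports = [
        Observation("runway friction data not current", is_loss=False),
        Observation("weather data in unusable format", is_loss=False),
        Observation("runway overrun after landing", is_loss=True),
    ]

    inductive = SafetyManager(mode="inductive")
    deductive = SafetyManager(mode="deductive")
    for r in reports:
        inductive.review(r)
        deductive.review(r)

    # The inductive manager has acted on all three reports; the
    # deductive manager acted only after the loss had already occurred.
    print(len(inductive.action_items))  # 3
    print(len(deductive.action_items))  # 1
    ```

    The point of the sketch is where the trigger sits: before the loss (induction) or after it (deduction).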

    At Toronto, Canada, the runway had never been grooved. Funny thing though, very few people saw this as a hazard. One or two did. But Air France and Transport Canada and ICAO did not. So when the Airbus A340 crew, landing in a tailwind from convective activity, tried to stop their airplane on the runway, they skidded off the 4,600 feet of runway remaining after touchdown. Had the runway been grooved, the crew would have been able to stop the A340 in that 4,600 feet, just as they had done many times before at CDG airport, which is grooved.
    So deduction says, after the fact, "Whoops, we have a problem." Induction said, before the mishap, "We have a problem."
    Which system do you think promotes lower mishap rates? Which system of thinking promotes driving the mishap rate towards zero?

  2. Paul Miller says:

    Training is one of the main elements which comes to mind.

  3. rudi den hertog says:

    Paul & Michel

    Agree with Michel, and even if everyone were to make their own safety plan, that still represents in some way an extrapolation from the past. Many R&D programs today do this straight extrapolation from the past, even though paradigm shifts have happened and will continue to happen. The trouble is that we have to come to grips with these paradigm shifts. At the same time, we will have to eradicate these streams of incidents still leading to accidents. Other questions for the future are things like: what has made this system so safe, which safety barriers are gradually being eroded, and which (new) ones do we need to put in place? These and other questions will be, amongst other things, the focus of the FAST restart coming up in June. You are welcome to be there.

    Rudi den Hertog
    Netherlands
