If the functioning of a system depends on the infallibility of a human being, that system is bound to fail. The standard approach to preventing that failure is to establish rules and procedures that restrict the actions of the human.
This is effective, especially in simple systems, but high reliability will always remain just out of reach. Some obvious restrictions are needed, but beyond that the benefits of restricting fade. Restricting the space for action may even lead to more mishaps rather than fewer.
So the next step requires a shift from preventing wrong actions to supporting the right ones, including the associated decision making. Rules are always a means to an end: rules are tools. The same is true of the support that can be offered to the fallible human, such as checklists, procedures and teamwork.
A highly effective tool like a checklist is exactly that: support for the fallible human. Unfortunately, these tools are quite often not used as tools. The moment a checklist becomes merely a list of boxes to tick that must be filed, it turns into an instrument of monitoring and control. Instead of a tool, it is more likely to be a burden.