When initiating or invigorating a management system audit procedure, sheq officials often come up against problems of audit perception, procedure, design, resistance, reporting and corrective delay, writes Mabila Mathebula in a report for SHEQafrica.com.
Audit value test
Kildahl (1998) provides a short test of audit system effectiveness, by way of a list of signs of audit system trouble:
At follow-up for corrective action, they ask ‘What audit?’
Nothing improves after audits.
Listed citations are interchangeable.
Audit staff totals one person, or a part-time auditor.
Opening and closing meetings are not attended.
Audit findings implicate the auditor.
Management uses audits for performance management.
Diversity policy is “our diversity is no worse than anyone else’s”.
Your practical motto is ‘If it is not broke, do not fix it’.
Any three or more of these signs indicate a need for audit renovation.
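Kildahl’s test amounts to a simple tally rule. As an illustration only (the helper name and paraphrased sign labels below are not from the article), it could be sketched as:

```python
# Kildahl's audit value test as a tally: three or more warning signs
# suggest the audit system needs renovation. Sign wording is paraphrased
# from the list above; this sketch is illustrative, not a cited tool.

WARNING_SIGNS = [
    "At follow-up they ask 'What audit?'",
    "Nothing improves after audits",
    "Listed citations are interchangeable",
    "Audit staff totals one, or a part-time auditor",
    "Opening and closing meetings are not attended",
    "Audit findings implicate the auditor",
    "Management uses audits for performance management",
    "Diversity policy is 'no worse than anyone else's'",
    "Motto is 'if it is not broke, do not fix it'",
]

def needs_renovation(signs_present: set, threshold: int = 3) -> bool:
    """Return True when three or more warning signs apply."""
    observed = [s for s in WARNING_SIGNS if s in signs_present]
    return len(observed) >= threshold

# Two signs present: below the threshold, so no renovation flag yet.
print(needs_renovation({
    "Nothing improves after audits",
    "Audit findings implicate the auditor",
}))  # False
```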
Build audit infrastructure
Designing and implementing a diversity audit system must happen on several levels; cascading team infrastructure for involvement, empowerment, problem solving and business process improvement, as well as alignment of organisational systems to support a diversity culture.
A checklist could enable organisations to assess their diversity management infrastructure support and goals. The checklist can be used for planning and evaluation. Following these guidelines can help to jump start a change effort by providing the required infrastructure.
A perfect audit is possible
There is no ‘one size fits all’ solution to creating an audit system. Each system is unique, an artifact of its environment. Problems arise when system elements are implemented without regard to the environment.
People have strong ideas about what an audit should do, but those results would not happen without dedicated effort to create an audit system tailored to the organisation it serves.
‘Begin with the end in mind’ is a dictum borrowed from Stephen Covey (Covey, 1989) in his book ‘Seven Habits of Highly Effective People’. Audit outcomes (Simons, 2002) include:
Assist management in improving executive and implementation decisions.
Identify areas where a system or process is not performing as intended.
Reflect and facilitate realisation and priorities of the organisation.
Generate opportunities for improvement.
Provide benefits to all stakeholders.
Assess current situation in compliance, effectiveness, implementation, accuracy, and processes.
Manage resources more effectively by focusing attention on systems.
The procedure recommended by Simons (2002) is based on first understanding exactly what you want from an audit program, then charting a course to improving the audit system by way of five generic steps:
1. Find out what your audit system should do, and not do.
2. Use policy and procedure guidelines to shape the ideal audit.
3. Plan audits to support goals and policy.
4. Tailor an audit life cycle design to your plan.
5. Plan, do, check, act (PDCA) on the new audit system.
Set audit goals
Audit system features have to be designed from the start. Before you set a procedure or send people to be trained as auditors, decide what your audit system is supposed to do.
Most auditors perceive ‘management indifference’ to audit issues, but this perception is usually a distortion. Management usually does care, and audit problems lie not in report readers, but in the audit, auditor, or audit report.
Prepare a report that is useful to management. By gaining management input up front on how best to present audit data, auditors assure management participation in the audit cycle, and eliminate confusion from audit report formats (Mathebula, 2002).
Audit outcome is not a score
The proper response to a good audit report should not be ‘Did we pass?’ or ‘What is our score?’, but rather, ‘How did non-conformities happen?’ and ‘Where were we at fault?’
Audits should not be geared to generating a score, report card, or key performance area entry. A popular management expectation is that an audit would score departments or plants against one another, or against their previous audit results. Avoid this trap in audit design.
Audit scoring may reflect direct personal involvement in a sheq intervention or diversity program by members of the management team. Take a lesson from Dennis Arter’s book, ‘Quality Audits for Improved Performance’ (Arter, 1994); “If you must score something, create a score based on continuous improvement, the real measure of audit effectiveness”.
Chart a course to audit improvement
Once the target is clearly in focus, define roles and responsibilities for stakeholders. Who is responsible for various aspects of the audit program? Who decides what and where to audit? Who is accountable for corrective action? What are reasonable cycle times? Can audits conflict with moral or legal issues like medical data or consent? Could some auditees refuse internal auditors? What is the authority of internal auditors?
Management participation in the resolution of these issues is a good measure of commitment. Answers to these questions form a basis for audit documentation, policies and procedures.
Formulating recommendations is a persistent issue in audit program design. Should auditors write findings only, or add their recommendations? Quality Audit Handbook (Russell, 1991) restricts auditors to making recommendations in a separate report, and then only when requested. Arter (1994) gives four compelling reasons to avoid specific recommendations. Short of a corrective prescription, what is the poor auditor to do? One answer is to avoid prescription in favour of referring back to what the system is supposed to do. Arter (1994) argues, “Don’t tell the auditee how to fix anything, but describe what the fix must accomplish”.
Rather than address the issues raised in the audit, “some managers”, warned Arter (1994), “avoid the issue by demanding recommendations from the auditor, and then taking issue with the recommendations”. They are never guilty of ignoring the audit, but rather transfer responsibility for corrective action to the auditor. Not only is this tactic an abrogation of responsibility by the manager, but it also serves to delay acting on the issue of the audit report.
Remove system delays
One of the signals of a languishing audit system is protracted delay in resolving the issues raised in the audit. Kildahl (1999) compiled the following list of reasons given for stretching out the audit cycle and delaying action:
Let us not be hasty, here. We need time to consider all the implications.
There isn’t enough evidence to be sure of the right thing to do.
You don’t understand the whole situation.
There is no money in the budget.
You found the only example of that problem.
We are too busy doing it wrong to fix the system.
Now is not a good time – we will do it later.
Why worry about the past when we have real problems today?
Haven’t you got something better to do than bug me?
All these are examples of avoidable behaviour. It is a widespread problem that needs an auditor’s attention as one puts the system together. Resistance to auditors comes mainly from gross ignorance or from deliberate opposition.
Education is key to dealing with ignorance of the need to act on audit results. According to Dennis (1997), the auditee who does not or cannot see the value of using the audit to address important problems needs help to understand the basic logic of auditing. We only audit the stuff that matters (PLAN). We assess objectively, without any hidden agenda or bias (DO). We report the facts and test the implemented system (CHECK). Action by the system owner is required to change the way things are done for the better (ACT). After the Audit (Russell and Regel, 1997) has a wealth of material for one to use in addressing this issue in an audit program design.
Audit is not turf war
Deliberate resistance to audit processes ranges across a spectrum of organisational ‘politics’ that usually has more to do with culture than with systems.
Managers may feel that acknowledging audit merits is an admission of fault, or auditees may see an audit as a challenge to their authority, ego, ambition, self image, pride, qualifications, or experience.
Excellent guidance in these areas is found in Covey’s book, Seven Habits of Highly Effective People (1989), particularly parts on habit four, ‘Think Win-Win’, and habit five, ‘Seek first to understand, then to be understood’.
Plan audits to support a system
Auditors and auditees should focus on system issues, not on evidential details or individuals. Kildahl (1999) opines that auditors should establish a format, content, layout, and timing for an audit report to maximise involvement in resolving issues.
Dennis (1997) argues for creating several different presentation formats to find the one that generates the best response.
There is a widespread perception that an audit report is mere ‘bad news’, instead of useful information and loss prevention. A report may ‘sting’ a little, and should not waste time trying to reward correct behaviour.
Choose auditors carefully. Not everyone is emotionally, psychologically, or motivationally suited to the task. Characteristics to look for are documented in Quality Audit Handbook (Russell 1997) and Arter’s book (1994).
Audit system component failure
Systems do not work to the same rules as processes. A traditional process model of input-transformation-output does not account for systems (Dennis, 1997).
Systems do not fail in the same way that processes do. If you remove a major element of a process, most likely it would just stop. Systems rarely stop; they accommodate change, for better or worse.
System components are interdependent and interactive, but absence or failure of any one piece would not stop the whole. Systems fail by doing less, more, or other than they were intended to do. Auditors must be alert to indications of various failure modes.
In a team function, incentives based on individual achievement could destroy teamwork. Be alert to interventions contradicting system design, and their effects.
Some systems take on a life of their own, diverting from their original intention to facilitate the work of an organisation, some even becoming goals in themselves. Be alert for evidence of activity being more important than results.
Test a new audit process
Russell (1997) argues that one should test a new audit cycle in practice; gather data, collect samples, interview participants, verify records, draft a report, follow up corrective action.
Then conduct a debriefing with auditee management to assess effectiveness of the new system. Build on what works, and what management accepts, and discard what does not.
Evidence presented in an audit report should be a product of the system evaluated. Often the focus of corrective action is on this evidence and not on the system that produced it. To focus on the system, instead of the evidence, Kildahl (1999) advises:
Corrective action procedures must assign responsibility for system oriented action on audit issues.
Proposals for corrective action must include a verifiable prediction of the effect of the action.
Discuss corrective action verification at every exit meeting.
Present in-house seminars on differences between processes and systems.
Publicise success with a new audit approach in newsletters and awards.
Audit plan and schedule
The audit schedule must be linked to priorities of the organisation. Managers often complain that audits have little bearing on things that matter most. Guidelines by Russell (1997) include:
Establish audit priorities based on management strategy.
Establish audit priorities based on budget sizes and priorities.
Interview or survey management during the annual planning phase.
Use state of the art intra company communication, like applications of intranets and electronic media.
Limit audits to the things that matter most, and increase data focus and quality.
Plan, do, check, act (PDCA) your system
The PDCA cycle is usually credited to Deming, even though he repeatedly cited Shewhart as its originator. It is deceptively simple:
Plan what you want to accomplish over a given period and what you need to do to get there.
Do what you planned. Start on a small scale with a pilot program.
Check results to see if you achieved your objective.
Act on your information. If you succeeded, standardise the plan; if not, continue with the cycle to improve.
The essence of the PDCA cycle is an objective assessment of the current status, a clear understanding of the desired state and the gap between the two, and a plan to bridge the gap.
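The cycle above can be sketched as a simple loop. Everything in this sketch beyond the Plan-Do-Check-Act phase names (the callable phases, the `standardise` flag, the iteration cap) is illustrative, not from any cited source.

```python
# A minimal PDCA loop: plan an objective, do the work, check the result,
# and act by standardising a successful plan or feeding lessons into the
# next cycle. The callable-phase design is an assumption for illustration.

def pdca_cycle(plan, do, check, act, max_iterations=5):
    """Run Plan-Do-Check-Act until check() reports the objective is met."""
    for iteration in range(1, max_iterations + 1):
        objective = plan()               # Plan: what to accomplish this period
        result = do(objective)           # Do: carry out the plan (pilot first)
        if check(objective, result):     # Check: did we achieve the objective?
            act(standardise=True)        # Act: standardise the successful plan
            return iteration
        act(standardise=False)           # Act: improve, then cycle again
    return max_iterations

# Usage: each cycle improves 'quality' by one step toward an objective of 3.
state = {"quality": 0}
def do_work(objective):
    state["quality"] += 1
    return state["quality"]

cycles = pdca_cycle(
    plan=lambda: 3,
    do=do_work,
    check=lambda objective, result: result >= objective,
    act=lambda standardise: None,
)
print(cycles)  # 3
```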
It would not impress management to set up an unrealistic audit program with impossible goals. Do not promise world class compliance and effectiveness audits in an environment unfamiliar with auditing and its attendant activity.
Return on investment for audits should be improvement in the activities audited. Report successful corrective actions in brief and clear terms.
Audit value guide
The author, Mabila Mathebula, offers this brief audit quality guide table to SHEQafrica.com visitors. Tick the first column if your organisation has accomplished the statement, or tick the second column if not.
Audit mission
– – The mission to accomplish has been stated
– – Overall scope, quantities and parameters are determined
– – Resources to accomplish the mission are determined
Audit client satisfaction
– – Internal and external customers have been identified
– – Customer needs, wants and expectations are identified and clarified
– – Stakeholders are identified
– – Unrealistic or conflicting client wants, needs, and expectations are managed
Audit goals and outputs
– – Planning tools have been used to identify and develop goals, objectives, specific activities, and resources
– – No more than five goals have been identified
– – Key individuals who are responsible for carrying out the goals have been identified
– – Prioritised goals were achieved through consensus
– – Goals have been connected to outputs
– – One to three objectives have been identified for each goal
– – Specific activities with clear deadlines have been assigned for each objective
– – Responsibilities and resources have been identified for each activity
Audit success criteria
Done / Help
– – Measurable requirements and interpersonal requirements that audit outputs must meet to satisfy clients are determined
– – Who determines acceptance?
– – What approvals are required?
– – When is completion required?
– – What forms and methods should be used?
– – How much or many should be delivered?
– – What strategies should it support?
– – What values should it support?
– – What impact did it create?
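For organisations that want to tally the tick columns, the guide above can be captured as plain data. The structure and helper below are illustrative only; the two sample sections are quoted from the guide, and the function name is an assumption.

```python
# The audit value guide as data: each section maps to its statements,
# and a ticks dictionary records which statements are accomplished.
# Sketch only; not a tool published with the article.

AUDIT_VALUE_GUIDE = {
    "Audit client satisfaction": [
        "Internal and external customers have been identified",
        "Stakeholders are identified",
    ],
    "Audit goals and outputs": [
        "No more than five goals have been identified",
        "Goals have been connected to outputs",
    ],
}

def tally(guide, ticks):
    """Count accomplished (ticked) statements per section."""
    return {
        section: sum(1 for item in items if ticks.get(item, False))
        for section, items in guide.items()
    }

result = tally(AUDIT_VALUE_GUIDE, {
    "Stakeholders are identified": True,
    "Goals have been connected to outputs": True,
})
print(result)
# {'Audit client satisfaction': 1, 'Audit goals and outputs': 1}
```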