Most internal audit functions are better at auditing than they are at documenting how they audit. The methodology (the formal description of how the function plans, executes, reports, and monitors its work) is often the last thing to get written down, if it gets written down at all.
This matters more than it used to. Regulators increasingly expect to see a documented methodology. Audit committees ask for it. And when a QAIP review happens, either internally or by an external assessor, the absence of a documented methodology is usually the first finding.
What a methodology actually needs to cover
The IIA's Global Internal Audit Standards (the 2024 version, which replaced the previous International Standards for the Professional Practice of Internal Auditing) set out what an internal audit function is expected to do. A methodology is the function's documented approach to meeting those expectations. It is not a policy document written at a high level of abstraction; it is a working guide that tells practitioners how things are done.
The essential components are: the planning process, from universe maintenance and risk assessment through to individual audit scoping; the fieldwork standards, covering how work is documented, how samples are selected, and what evidence is required to support findings; the reporting standards, setting out how findings are written, rated, and agreed with management; the follow-up process, defining how management actions are tracked and validated; and the quality assurance process, covering both ongoing supervision of individual audits and periodic assessment of the function as a whole.
Start with the risk assessment and audit universe
The audit universe is the foundation of everything else. A methodology that does not explain how the universe is constructed and maintained is missing its most important element. The universe should reflect the organisation's risk profile, not just its organisational structure. Functions that build their universe from the org chart end up auditing business units; functions that build it from the risk profile end up auditing risks. The latter is more useful.
The risk assessment methodology, how auditable entities are scored and ranked to produce an audit plan, should be transparent enough that the audit committee can understand and challenge it. A black-box model that produces a plan without a clear rationale is difficult to defend and difficult to update when the risk environment changes.
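To make that transparency concrete, the scoring model can be simple enough to show to the audit committee in full. The sketch below is illustrative only: the impact-times-likelihood formula, the 0.5 coverage weighting for time since last audit, and the entity names are all assumptions, not a prescribed model, and each function would calibrate its own.

```python
from dataclasses import dataclass

@dataclass
class AuditableEntity:
    name: str
    impact: int                 # 1 (low) to 5 (severe), from the risk assessment
    likelihood: int             # 1 (remote) to 5 (almost certain)
    last_audited_years_ago: int

def risk_score(entity: AuditableEntity) -> float:
    # Base score is impact x likelihood; a coverage factor nudges
    # entities not audited recently up the ranking. The 0.5 weight
    # is an illustrative assumption.
    return entity.impact * entity.likelihood + 0.5 * entity.last_audited_years_ago

def rank_universe(universe: list[AuditableEntity]) -> list[AuditableEntity]:
    # Highest-risk entities first; the top of this list feeds the audit plan.
    return sorted(universe, key=risk_score, reverse=True)

universe = [
    AuditableEntity("Payments processing", impact=5, likelihood=4, last_audited_years_ago=1),
    AuditableEntity("HR onboarding", impact=2, likelihood=3, last_audited_years_ago=3),
    AuditableEntity("Regulatory reporting", impact=5, likelihood=3, last_audited_years_ago=2),
]
for entity in rank_universe(universe):
    print(f"{entity.name}: {risk_score(entity):.1f}")
```

Because every input and weight is visible, the committee can challenge any individual score and the plan can be re-ranked when the risk environment changes, which is exactly what a black-box model makes difficult.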
Fieldwork standards: less is more
One of the most common methodology failures is over-engineering the fieldwork standards. A methodology that specifies exactly how every type of audit should be executed ends up either ignored or applied mechanically without judgement. The better approach is to set minimum standards (what must be documented, what must be evidenced, what supervisory review is required at each stage) and then allow experienced practitioners to exercise judgement within those standards.
The key fieldwork standards to document are: the requirements for an audit programme before fieldwork begins; the documentation standards for working papers; the criteria for selecting and documenting sample sizes; the requirements for factual accuracy checking before a report is issued; and the standards for grading findings.
Finding ratings: be specific about what they mean
Most methodologies include a finding rating scale (Critical, High, Medium, Low, or similar), but many fail to define what those ratings mean in terms of impact and likelihood. The result is inconsistent ratings across different audits and different auditors, and audit committees that cannot compare findings across the portfolio.
A good rating framework defines each level in terms of specific impact criteria (financial, regulatory, reputational, operational) and specifies the expected management response and timeline for each rating. A Critical finding should trigger a specific governance response, not just a management action plan at the next convenient point.
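One way to force that specificity is to write the framework down as structured data rather than prose, so that every rating carries defined criteria, a deadline, and a governance response. The thresholds, wording, and deadlines below are illustrative assumptions, not a recommended calibration; each function sets these against its own risk appetite.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RatingDefinition:
    rating: str
    impact_criteria: str         # the kind of loss or breach that justifies this rating
    response_deadline_days: int  # maximum time to complete the agreed management action
    governance_response: str     # who must be told, beyond the action owner

# Illustrative definitions only; real criteria would cover financial,
# regulatory, reputational, and operational impact for each level.
RATING_FRAMEWORK = {
    "Critical": RatingDefinition("Critical", "Regulatory breach or material financial loss",
                                 30, "Immediate escalation to the audit committee"),
    "High":     RatingDefinition("High", "Significant control failure in a key process",
                                 90, "Reported to executive management"),
    "Medium":   RatingDefinition("Medium", "Control weakness with a compensating control",
                                 180, "Tracked by the audit function"),
    "Low":      RatingDefinition("Low", "Minor improvement opportunity",
                                 365, "Agreed with local management"),
}

def response_deadline_days(rating: str) -> int:
    # Raises KeyError for a rating outside the defined scale, which is
    # the point: every finding must carry a defined rating.
    return RATING_FRAMEWORK[rating].response_deadline_days
```

A table in this shape makes inconsistency visible: two auditors rating similar findings differently are now disagreeing about which defined criteria apply, not about what the words Critical or High mean.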
The follow-up process
Follow-up is where many audit functions lose credibility. Findings are raised, management commits to actions, and then the tracking process breaks down: actions are closed without adequate evidence, timelines slip without escalation, or findings are downgraded to make the position look better than it is.
The methodology should specify: how management actions are agreed and recorded; what evidence is required to close an action; who has authority to extend a target date; and what escalation path applies when actions are overdue. The audit committee should receive a regular report on the status of open actions, and the methodology should specify what that report contains.
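The escalation path in particular benefits from being unambiguous. A minimal sketch, assuming a simple two-step path (overdue actions escalate to senior management, actions extended more than twice appear in the audit committee report); the field names and the two-extension threshold are illustrative assumptions, not a standard rule.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ManagementAction:
    finding: str
    owner: str
    target_date: date
    extensions_granted: int = 0
    closed: bool = False

def escalation_level(action: ManagementAction, today: date) -> str:
    # Closed or not-yet-due actions need no escalation.
    if action.closed or today <= action.target_date:
        return "none"
    # Repeatedly extended overdue actions go to the audit committee report.
    if action.extensions_granted >= 2:
        return "audit committee"
    # Any other overdue action escalates to senior management.
    return "senior management"

action = ManagementAction("Payments reconciliation gap", "CFO",
                          date(2024, 6, 30), extensions_granted=2)
print(escalation_level(action, date(2024, 9, 1)))  # overdue after two extensions
```

Encoding the rule this way removes the discretion that lets timelines slip quietly: an overdue action cannot sit outside the escalation path, and the audit committee report can be generated from the same data it is tracked in.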
Making it usable
The final test of a methodology is whether practitioners actually use it. A document that lives in a SharePoint folder and is consulted only during QAIP reviews has not done its job. The methodology should be the reference point for how work is done, which means it needs to be written clearly, kept up to date, and embedded in the team's day-to-day practice through training, supervision, and periodic review.