Part 3: Business Knowledge for Internal Auditing

Welcome to Part 3 of The IIA’s CIA Learning System®. The self-study text for the learning system includes the content addressed in The IIA’s CIA syllabus. (You can download the syllabus from the online Resource Center or from The IIA’s website.) However, in some cases, the content has been reorganized to facilitate instruction and understanding. Refer to the Table of Contents for an outline of the content.

To get the most out of the course materials, complete the course in this order:
1. Begin by accessing the course at www.learncia.com.
2. Read the overview and return to the menu. Select Part 3 from the menu.
3. Complete the pre-test and view the report to help focus your study efforts.
4. Read each section and follow the Next Steps directions included at the end of the section.
5. Complete Part 3 as outlined in the online overview.

Note that Part 3 of the CIA exam will consist of 100 multiple-choice questions and test takers are given 120 minutes to complete this portion of the exam. You can go to https://na.theiia.org/certification/CIACertification/Pages/CIA-Certification.aspx to register for the exam separately.
Study Support

The IIA’s CIA Learning System includes online tools to support your study. These tools may be accessed from the menu at any time.
• Glossary—Refer to the glossary for definitions of terms used in all three parts of The IIA’s CIA syllabus.
• Reports—Refer to the reports to review your most recent test scores and progress through the learning system.
• Resource Center—Refer to the Resource Center to access information about The IIA’s International Professional Practices Framework, updates, test-taking tips, printable flashcards, related links, and reference material and to provide feedback to The IIA regarding the learning system.
The IIA’s CIA Learning System®

The IIA’s CIA Learning System® is based on the Certified Internal Auditor® (CIA®) syllabus developed by The IIA. However, program developers do not have access to the exam questions. Therefore, while the learning system is a good tool for study, reading the text does not guarantee a passing score on the CIA exam. Every effort has been made to ensure that all information is current and correct. However, laws and regulations change, and these materials are not intended to offer legal or professional services or advice. This material is consistent with the revised Standards of the International Professional Practices Framework (IPPF) introduced in July 2015, effective in 2017.
Copyright

These materials are copyrighted; it is unlawful to copy all or any portion. Sharing your materials with someone else will limit the program’s usefulness. The IIA invests significant resources to create quality professional opportunities for its members. Please do not violate the copyright.
Acknowledgments

The IIA would like to thank the following dedicated subject matter experts who shared their time, experience, and insights during the development and subsequent updates of The IIA’s CIA Learning System.

Pat Adams, CIA
Terry Bingham, CIA, CISA, CCSA
Raven Catlin, CIA, CPA, CFSA
Patrick Copeland, CIA, CRMA, CISA, CPA
Don Espersen, CIA
Michael J. Fucilli, CIA, QIAL, CRMA, CGAP, CFE
James D. Hallinan, CIA, CPA, CFSA, CBA
Larry Hubbard, CIA, CCSA, CPA, CISA
Jim Key, CIA
David Mancina, CIA, CPA
Al Marcella, PhD, CISA, CCSA
Markus Mayer, CIA
Vicki A. McIntyre, CIA, CFSA, CRMA, CPA
Gary Mitten, CIA, CCSA
Lynn Morley, CIA, CGA
Lyndon Remias, CIA
James Roth, PhD, CIA, CCSA
Brad Schwieger, CPA, DBA
Doug Ziegenfuss, PhD, CIA, CCSA, CPA, CMA, CFE, CISA, CGFM, CR.FA., CITP
Part 3 Overview

This part of The IIA’s CIA Learning System focuses on key areas of knowledge that can help internal auditors directly or indirectly with audit engagements. Some subjects will be directly applicable to any internal audit activity, such as effective management and leadership skills. Knowledge in subjects such as financial management or global business environments can also help the internal auditor to demonstrate to stakeholders that he or she has a firm understanding of the organization’s business practices and industry environment. Internal auditors who are perceived as having business savvy and familiarity with the organization will be in a better position to deliver value and insight. Decision makers will place more weight on recommendations that demonstrate sensitivity to the organization’s strategy and the complexities of its global challenges. In this way, internal auditors can elevate their role in the organization to one that is perceived as adding value.

In brief, the sections in Part 3 are as follows:
• Section I: Business Acumen—organizational objectives, behaviors, and performance; organizational structure and business processes; data analytics
• Section II: Information Security—common physical security controls, various forms of user authentication and authorization controls, data privacy laws and their potential impact, emerging technology practices, existing and emerging cybersecurity risks, and security-related policies
• Section III: Information Technology—application and system software, information technology (IT) infrastructure, IT control frameworks, disaster recovery, and business continuity
• Section IV: Financial Management—financial accounting and finance and managerial accounting

References are made throughout Part 3 to specific external auditing or accounting standards (e.g., U.S. GAAP and IFRS). Your focus should be on the learning point rather than the specific language of the auditing or accounting standard.
Section I: Business Acumen

This section is designed to help you:
• Define objective setting.
• Describe the strategic planning process and key activities.
• Identify globalization and competitive considerations.
• Explain the process of aligning strategic planning to the organization’s mission and values.
• Explain organizational behavior.
• Describe management’s effectiveness in leading, mentoring, and guiding people and in building organizational commitment.
• Describe management’s ability to demonstrate entrepreneurial skills.
• Examine common performance measures.
The Certified Internal Auditor (CIA) exam questions based on content from this section make up approximately 35% of the total number of questions for Part 3. Some topics are covered at the “B—Basic” level, meaning that you are responsible for comprehension and recall of information. (Note that this refers to the difficulty level of questions you may see on the exam; the content in these areas may still be complex.) Other topics are covered at the “P—Proficient” level, meaning that you are responsible not only for comprehension and recall of information but also for higher-level mastery of the content, including application, analysis, synthesis, and evaluation.
Section Introduction

In a tightly competitive market, customer demands for greater value at lower cost have ramifications for organizations beyond simply matching or surpassing competitors. Customers demand more for less and have access to multiple sources of quality goods and services at competitive prices. Organizations are examining every business process with an eye toward improving quality and performance in order to address these rising customer expectations. Proponents of quality also point out that a key long-term benefit of investing in quality is that organizations have a strong potential to improve their revenue and profit due to repeat business from loyal customers. This section examines a number of different techniques and concepts that organizations can use to help them analyze business process performance and be more competitive.
Chapter 1: Organizational Objectives, Behavior, and Performance

Chapter Introduction

Organizational behavior refers to the way individuals and groups behave in the organizational setting. The organization can be thought of as a system with interdependent parts. The culture and other factors influence the way individuals and groups respond. In turn, individual and group dynamics affect the dynamics of the organization. Organizations foster certain behaviors by their operational and motivational frameworks.

This chapter touches on factors that affect how motivated and empowered organizations, groups, and individuals feel. These factors include organizational structure, management style, exertion of power and influence, organizational culture, cultural differences, communication strategies, and employee recognition and reward systems.

Internal auditors need to understand organizational behavior because controls work differently in different control environments and in different organizations. Also, the root cause of a control deficiency may lie in dysfunctional organizational behavior. Auditors will benefit from a broader, enterprise-wide view of organizational behavior, and the internal audit activity can thereby become a knowledge source in the organization.
Topic A: The Strategic Planning Process and Key Activities (Level B)

Objective Setting

An organization’s objectives define what the organization wants to achieve, and its ongoing success depends on the accomplishment of its objectives. For most organizations, a primary blanket objective is to enhance stakeholder value. Objectives also indicate what is expected from a governance, risk management, and internal control perspective. At the highest level, these objectives are reflected in the organization’s mission and vision statements. To generate buy-in, a best practice is to get input from people at all levels of the organization when developing or updating these statements.

The mission statement is a broad expression of what the organization wants to achieve today. It needs to clearly indicate the organization’s purpose, including its reason for being and how it proposes to add value for its customers and other stakeholders. The mission statement serves as a day-to-day guide or charge to the individuals in the organization on how to achieve this purpose. It also serves as a bridge between the organization’s purpose and its vision statement.

The vision statement conveys what the organization aspires to achieve or become in the future. It represents the highest aspirational view and goals of an organization in the context of serving and adding value to its stakeholders.
Types of Objectives

Objectives may fall under several categories. Though these categories are distinct, there is often overlap. An objective may address more than one need or responsibility or may relate to different segments of the business or different individuals.

Strategic Objectives and Strategic Planning

Strategic objectives are goals set by management that specifically relate to stakeholder value enhancement, especially over the long term. They are reflected in the organization’s strategic plans, which are long-term plans for multiple years into the future. The strategic plan is an important source for many types of assurance and consulting engagements, because other plans and objectives need to align with and integrate into these top-level plans. Also, strategic plans are a valuable communications tool that can set the tone for proper governance.

Because strategic objectives and strategic planning are so critical to an organization’s success and growth, this is a key area to consider as part of the audit universe. Too often this area is overlooked and a strategic plan is simply used as an input to audit planning rather than being seen as an opportunity for adding value from a consulting perspective (such as improving the strategic planning process itself) or as an area for providing assurance coverage (such as ensuring effective communication of the plan). Ensuring that an organization has sound strategies and a strategic planning process is an important component of effective governance.

Operational Objectives

Operational objectives relate to the effectiveness and efficiency of operations. This includes but is not limited to operational and financial performance goals and safeguarding of assets.

Reporting Objectives

Reporting objectives relate to financial and nonfinancial reporting, both internal and external, and may include reliability, timeliness, transparency, completeness, or other terms as identified by the standards setters, regulators, or policies of the entity.

Compliance Objectives

Compliance objectives relate to the laws, regulations, policies, and procedures to which the entity is subject and the entity’s adherence to the same. Compliance objective subcategories could include contract compliance, compliance with industry standards and best practices, policy compliance, and so on.
Relationships Between Objectives
There is a direct relationship between the objectives an entity strives to achieve, the components that represent what is required to achieve those objectives, and the entity’s overall structure, including operating units, legal entities, and other organizational structures and substructures. The relationship between these elements can be illustrated in the form of a cube, as depicted in COSO’s Internal Control—Integrated Framework model and shown in Exhibit I-1.

Exhibit I-1: COSO’s Internal Control Framework
COSO’s Internal Control—Integrated Framework is a U.S.-based framework that is used by organizations to evaluate internal control. The cube metaphor shows that each side of the cube relates to and influences the other sides (i.e., the framework has multiple dimensions). The rows represent the five components required for adequate governance, risk management, and internal control: the control environment, risk assessment, control activities, information and communication, and monitoring activities. Adherence to the last four of these components is highly dependent on the quality of the first, the control environment, especially the organization’s values, attitudes, and ethics. The columns represent the three categories of objectives: operations, reporting, and compliance. The third dimension of the cube depicts the entity structure to which internal control relates: the overall entity, divisions, subsidiaries, operating units, and functions, including business processes such as sales, purchasing, production, and marketing.
Globalization and Competitive Considerations

An organization sets a strategy to determine not only what type of organization it wants to be but also how such an organization will be likely to thrive in its environment, which is sometimes called an organizational ecosystem. It might, for example, want to be an agile organization that adapts well to changes or a large organization that can offer economies of scale and thus low prices. The organization’s success in its strategy depends not only on the successful execution of the strategy but also on the opportunities and risks that exist in the organization’s environment. Globalization has expanded most organizations’ environments to include access to larger potential customer bases at relatively low costs (opportunities), but this also results in more potential competitors from all around the world (risks).

The organization will likely have some competitive advantages relative to its competition. A competitive advantage is a relative advantage one organization (or nation) has over its competitors. Here are some potential sources of competitive advantage:
• Labor market. Access to low-cost or high-skill labor, a wide labor pool.
• Suppliers and raw materials. Access to materials at favorable prices, good or long-term relationships with suppliers, some degree of ownership or control of (or independence from) suppliers, supplier proximity.
• Customer base. Established customer base/market share, loyal and satisfied customers.
• Process and methodology maturity. Risk, control, quality, change management, manufacturing, or other frameworks; their maturity level and difficulty in achieving that level of maturity.
• Supply chain and transportation. Relative cost and speed of supply chain, number of options for and level of convenience to customers.
• Competitor maturity and ease of market entry. Relative number of competitors, competitor sophistication, capital investment needed to become a viable competitor.
• Technology. Labor-saving or insight-generating technology, proprietary technology.
• Regional economy, politics, culture, legal, and regulatory environment. Regional economic prosperity, favorable politics and taxation, culture that promotes good values such as hard work or innovation, favorable laws and regulations.

Successful strategies leverage the organization’s competitive advantages relative to its competitors. However, competitors’ strategies will likely rely on their own competitive advantages due to their geographic location, size, access to capital, and so on. The organization’s strategy therefore works to find a way to leverage relative strengths and mitigate relative weaknesses in order to succeed in leveraging opportunities wherever they exist (e.g., in local markets, by expanding globally, by leveraging the online global marketplace) while minimizing the probability or impact of risks, including the threat of competitors taking market share.

Internal auditors may be in a position to help evaluate whether the organization is accurately assessing the current state of its strengths and weaknesses relative to changes in globalization and the competition. For example, this may include assessing whether the organization is altering its strategy in a timely enough fashion to continue surviving and thriving when such factors are changing quickly.
Mission and Value Alignment

Recall that the organization’s mission expresses what the organization wants to achieve today. Part of this mission will be to provide and add value to stakeholders; another part will be to state and live up to the organization’s values. One way organizations align their mission with their organizational values and ethics is to create corporate social responsibility (CSR) or sustainability programs. The basic concept is that organizations are not responsible for just short-term financial results; they are also responsible to
the communities in which they operate, to their workers, and to the environment that sustains all humankind. As organizations implement formal sustainability programs and practices, they develop related performance measures. Internal auditors are starting to play a role in auditing sustainability programs and the design and reliability of the related measures. One way to do this is with a balanced scorecard, which is discussed in the next topic. For more information on CSR, see the discussion in Part 1, Section V, of this learning system or review The IIA’s Practice Guide “Evaluating Corporate Social Responsibility/Sustainable Development.”
The CAE’s Role

The role of the chief audit executive (CAE) related to strategic objectives includes establishing a risk-based plan to determine the priorities of the internal audit activity, aligned with the organization’s goals. To ensure that the risk-based plan is aligned with these goals, the CAE must consult with the entity’s board and senior management and obtain an understanding of the organization’s strategies, key business objectives, associated risks, and risk management processes. Additionally, the CAE must review and adjust the plan as necessary in response to changes in the organization’s business, risks, operations, programs, systems, and controls.
Topic B: Common Performance Measures (Level P)

Internal auditors may need to assess the organization’s performance measurement system or the performance measurement system of an audit area and determine whether it is efficient, effective, and timely. Can it measure whether central organizational objectives are being achieved? Does it provide reliable information in a timely enough fashion to enable decision making and control? The basic considerations in assessing performance are:
• Identifying related standards for performance.
• Assessing the reasonableness of performance standards in addressing organizational and audit area objectives.
• Comparing performance to the identified standards.
• Evaluating performance gaps (deviations or variances from the standards). Required corrective actions should be specified and completed in a timely manner.

Ultimately, an effective performance management system is one that supports the achievement of organizational goals and objectives, audit area objectives, or, for personnel performance measures, individual and personal goals and objectives. The most common weaknesses in performance measurement systems involve using the wrong key performance indicators or the wrong number of indicators.

Key performance indicators (KPIs) focus on accomplishments or behaviors that are valued by the organization and are needed to successfully achieve the organization’s strategy and mission. They are valid indicators of performance if they measure the right things and are understandable to management (who use them to guide and improve performance). An audit of a functional area, for example, may include review of its performance measurement system to ensure that its local or detail-level KPIs align with the organization’s strategic objectives and most recent risk assessment. The CAE may also review the entire organization’s KPIs for continued relevance.

For example, take a manufacturer that sets a strategy to distinguish itself in its market through innovative products built on resource-intensive research and development (R&D) programs. In this case, the CAE may review the organization’s KPIs to ensure that they include measures related to R&D efficiency and/or effectiveness. This could be the number of R&D leads at a certain level of development or the number of ideas used in new products that generated a certain level of revenue. The internal audit activity can also audit for controls on the security of proprietary information.

The CAE should also consider whether the organization is meeting its goals, possible reasons for performance gaps, and the role internal auditing could play in addressing these gaps. For example, if a credit card company has not been able to lower customers’ default rates, the audit activity might evaluate the credit functional area’s KPIs around customer credit approval, timeliness of monitoring delinquent accounts, collection staff productivity, and so on. In addition to determining whether the KPIs are supporting effectiveness toward reaching goals, another part of the assessment can focus on the efficiency of the KPIs in promoting goal achievement.

Too few KPIs might mean a lack of incentive to pursue some of the organization’s objectives, such as managers not being assessed on whether they are supporting or promoting the sustainability policy. Too many KPIs is a more common occurrence, and this can also cause problems. The first word in the phrase is “key,” and, while the organization can have lots of performance indicators, only a small number should be designated as “key.” Too many KPIs can create a situation of information overload. This can confuse or delay decision making or lead to the wrong conclusions, such as allowing a minor criterion to have more weight than it deserves, with an unintended consequence of obscuring the more vital indicators.

Prior to discussing key performance indicators further, this topic first introduces two broad ways of assessing organizational performance.
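As a rough illustration of the basic assessment considerations described earlier (identify standards, compare actual performance, and evaluate gaps), the following sketch flags KPIs whose variance from standard exceeds a tolerance. All KPI names, figures, and the tolerance level are invented for illustration; they are not drawn from any IIA guidance.

```python
# Minimal sketch of a performance-gap check, assuming hypothetical KPIs.

def evaluate_gaps(standards, actuals, tolerance=0.05):
    """Return KPIs whose actual results deviate from the standard by more
    than the allowed tolerance (expressed as a fraction of the standard)."""
    gaps = {}
    for kpi, standard in standards.items():
        actual = actuals[kpi]
        variance = (actual - standard) / standard
        if abs(variance) > tolerance:
            gaps[kpi] = round(variance, 3)  # flagged for corrective action
    return gaps

standards = {"on_time_delivery": 0.95, "defect_rate": 0.02}
actuals = {"on_time_delivery": 0.90, "defect_rate": 0.02}
print(evaluate_gaps(standards, actuals))  # flags on_time_delivery only
```

In practice the interesting work is in the earlier steps — choosing reasonable standards and deciding which deviations warrant corrective action — which this sketch takes as given.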
Organizational Performance

Many of the themes discussed later in this course are examples of things that may affect an organization’s performance:
• Trends in the industry and marketplace
• Life cycle of the product and current demand
• Orientation and skills training for employees
• Cross-cultural communication
• Employee motivation and rewards
• Job design and work group design
• Management styles
• Team effectiveness
• Individual and team communication
• Organizational dynamics such as expectations, organizational structure, politics, workplace ethics, change, and diversity
• Advances in electronic communications technology
• Maturity level of an organization in its use of technologies, processes, frameworks (e.g., risk management), collaboration, or other areas

An organization’s ability to execute its goals and the results it achieves are prime indicators of its overall success in accomplishing its performance objectives. Performance objectives are the goals and activity-based targets related to the organization’s strategy. The performance success factors are indicators of success, which will look quite different from one organization to another. We’ll now discuss two important concepts in this regard—productivity and effectiveness.
Productivity

Productivity is the ability to produce a good or service. In an organization, it refers to the quantity of the outputs (products and services) in relation to the inputs (human and physical resources). Productivity is a way to achieve cost and quality advantages over the competition.

Quality refers to an organization’s standards of excellence related to product or service output. The meaning of quality will vary by the type of organization. Physical product quality factors include features, reliability, durability, serviceability, performance, and conformance. Service quality factors include responsiveness, trust and assurance, reliability, and perceptions of customer care. Performance measures related to quality may include things like the number of defects or rejects located by inspection, the number reported by customers, the response time for recovery (e.g., from customer errors), the degree to which the product or service is meeting customer needs, and so on.

Efficiency refers to minimizing the use of resources in a product or service process as compared to standard expectations. Various ratios generally measure the resources actually used against the resources that were planned to be used. Other measures of efficiency include turnover ratios, such as inventory turnover, or the number of times per year inventory is sold and replenished. Efficiency ratios, however, do not indicate the quality level of the outputs. The standards used for the assessments may also need to be reviewed to see if they are still accurate and realistic yet challenging.

Productivity is also linked to profitability, but it is only one factor. Profitability refers to making a profit, or achieving financial gain from an effort over and above the expenses that were required to generate that profit.
Various profitability measures are generated by determining which expenses to include or exclude from the analysis, such as operating profit, which measures the earnings before interest and taxes (EBIT) and can help show whether core operations are efficient enough and management is competent enough to keep the organization viable. While productivity measures primarily relate to the short term, profitability can relate to both the shorter and the longer term and may take into account other internal and external factors.

The basic guidelines for improving productivity are to:
• Determine where improvements are needed the most and set priorities.
• Select appropriate measurement tools.
• Assess the current level of productivity.
• Identify and analyze the key factors affecting productivity.
• Set new improvement standards (e.g., best business practices) and provide resources (e.g., funding, new technologies) and support.
• Communicate changes and conduct training if necessary.
• Establish procedures to monitor the new efforts.

Performance measures are used to improve productivity. Simply put, if the quality or quantity of products or services increases, there is an increase in productivity for the organization. Or, if there is the same level or quality of product and service outputs but fewer resources are needed, there is an increase in productivity.

There are several ways to measure productivity, and the methods will depend on the circumstances. A few strategies are noted here:
• Time and motion studies determine how much time is involved in an activity.
• Sampling techniques use observation and samples from processes and outputs to assess workflow and quality.
• Capacity planning identifies the capacity for workflow and outputs.
• Volume analysis looks at product volume and ways to meet product demand.
• Task analysis looks at the tasks involved in jobs and the appropriateness of job design.
• Cost analysis studies cost allocation, cost-effectiveness, cost-benefit tradeoffs, and the possible effects of changing costs.

There are other systematic ways to monitor quality and make continual improvements:
• Benchmarking can be used to compare the organization’s practices against the best practices of one or more comparable organizations.
• Quality approaches for continual quality improvement, such as total quality management (TQM), can be used.
• Improvement processes can be implemented, such as Six Sigma, which seeks to improve processes by eliminating defects, and lean, which seeks to improve processes by reducing waste.

It is more difficult to measure performance in nonmanufacturing and knowledge-based industries, such as financial or legal services, because the outputs and value creation are often harder to measure or could include intangible benefits. In other words, performance in some industries may need to be stated more qualitatively and less quantitatively than in others. In these cases, it is important to use more than one performance measure.

An example of a set of operational KPIs for one organization might include the following:
• Gross profit margin
• Net profit and net profit margin
• Debt ratio
• Employee productivity
• Employee adherence to values, ethics, and regulations
• Inventory turnover
• Return on marketing spend
• Customer acquisition cost
• Perfect customer orders (zero defects, correct items, complete items, on time, etc.)
• Customer satisfaction

Note that the first few of these metrics are defined later, in Section IV, Chapter 1.
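Several of the financial metrics in a KPI list like this are simple ratios. As a rough sketch (all figures are invented, and the formulas follow common textbook definitions rather than any single accounting standard), they might be computed as:

```python
# Hypothetical financial figures for illustration only.
revenue = 1_000_000
cost_of_goods_sold = 600_000
net_profit = 120_000
total_liabilities = 400_000
total_assets = 1_000_000
average_inventory = 150_000

# Common textbook definitions of the ratio-based KPIs named above.
gross_profit_margin = (revenue - cost_of_goods_sold) / revenue  # 0.40
net_profit_margin = net_profit / revenue                        # 0.12
debt_ratio = total_liabilities / total_assets                   # 0.40
inventory_turnover = cost_of_goods_sold / average_inventory     # 4.0: sold and replenished 4x/year

print(gross_profit_margin, net_profit_margin, debt_ratio, inventory_turnover)
```

The point of the arithmetic is not the formulas themselves but that each ratio only becomes a KPI once it is paired with a standard (e.g., a target gross margin) against which performance gaps can be evaluated.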
Effectiveness

Effectiveness relates to outputs and the degree to which an organization’s goals and objectives are achieved. Productivity, quality, efficiency, and profitability are all part of the overall effectiveness of the organization.

Today’s organizations need to be adaptive and innovative to respond to unexpected internal or external events. Organizational learning and knowledge management are important. Organizations that learn from their mistakes, formulate best practices, and share knowledge are more likely to be successful. The organization’s structure and its work systems, procedures, and processes make a difference in employee motivation, adaptability, and creativity. The interrelation of the organization’s physical location, external environment, management style, resources, and other considerations directly impacts organizational effectiveness.

Organizations strive to be high-performing. In a high-performance culture, employees not only contribute to the success of the organization but are in a mindset to continually assist the organization with improvements. They are encouraged and rewarded for thinking smarter, making new suggestions, and being innovative. The use of work teams is a strategy in high-performance cultures. Collaborative efforts often produce better problem-solving and decision-making results. Others in the organization are also more likely to accept the outcomes from a team.

Some other ways that an organization might improve effectiveness are to:
• Foster organizational learning and knowledge sharing.
• Encourage improvement on a continual basis.
• Develop a culture of trust, which is critical to individual and team work.
• Provide adequate physical space and workspace flexibility.
• Restructure management and reporting lines.
• Redesign jobs to reflect changes in the internal or external environment.
• Form strategic alliances or outsource.
• Make technology and equipment improvements.

These strategies help measure and monitor organizational effectiveness:
• Determine critical success factors that align with performance objectives.
• Determine ways to measure the critical success factors.
• Use sound data for measurement, monitoring, and control processes.
• Capitalize on information technology and knowledge sharing.
• Make ongoing improvements as necessary.
Key Performance Indicators

The organization and each of its subsets need to take care to identify appropriate performance measures: measures that are aligned to and target the performance necessary to meet the organization's objectives. The IPPF Practice Guide "Measuring Internal Audit Effectiveness and Efficiency" describes a four-step process for establishing an effective performance measurement process for the internal audit activity. This same process can be adapted to help determine whether a functional area (or the organization as a whole) has established effective and efficient performance measurement. We will use an assessment of a credit department's KPIs as an example.
Step 1: Define Effectiveness. The effectiveness of KPIs is based on whether the KPIs can be linked to achievement of the area's objectives and, by tracing upward to summary or aggregate levels of KPIs, whether they can also be linked to the organization's overall objectives. Assessors, and perhaps experts in the area, can determine what constitutes an effective set of KPIs by coming to an internal consensus on whether the KPIs are complete enough to cover all of the needed objectives at the local and overall organization levels. It is also beneficial to discuss how many KPIs is the right number for the area and then to work toward that number. For customer credit, this could include ensuring that the rate of default remains within tolerance levels and that credit is still liberal enough to attract a sufficient number of new and return customers, among other things. These local objectives then roll up to overall organizational objectives related to profit margins and growing the base of trustworthy, profitable, and loyal customers.
Step 2: Identify Key Internal and External Stakeholders. Internal stakeholders may include the board, senior management, operations and support management, and the audit area's internal customers (e.g., areas that rely on the outputs of the functional area being assessed). External stakeholders may include customers, shareholders, third-party vendors, regulators, standards-setting bodies, and external auditors. In-depth interviews and surveys can be conducted to develop a clearer understanding of the needs and expectations of each of these stakeholders. An example of an internal stakeholder for the credit area is the sales functional area, which will want a higher percentage of customers approved. Finance is another internal stakeholder; it will want to limit defaults on credit payments, which tends toward fewer approvals. The chosen performance indicators will need to account for both of these interests and find a way to keep them in balance. External stakeholders include customers, who will naturally want to be approved but should be in a position to repay in a timely fashion if they are. Regulators who work to ensure that credit policies provide equal opportunity and are not predatory are also stakeholders.
Step 3: Develop KPIs for Effectiveness and Efficiency. KPIs are valuable to each functional area in an organization (and to the organization itself) because they allow management to detect shortcomings in execution and plan remedial action. They also allow the functional area to demonstrate its value to its internal and/or external customers. KPIs can be used to support requests for resources needed to support the desired level of performance. Because of the close relationship between the KPIs and the expectations of important stakeholders, it is important that certain stakeholders be consulted about (or at least informed of) the KPIs being considered. This helps ensure that the KPIs focus on meaningful performance that is aligned with the organization's strategic goals.

Whether internal auditors are evaluating KPIs during an audit project or are looking at organization-wide KPIs, they need to get answers to several questions related to effectiveness and efficiency:

• Are the KPIs designed effectively? (Are these the right measures?)
  • Do they cover all the objectives?
  • Can users understand them?
  • Do they ensure that higher-priority objectives get sufficient weight in decision making?
  • Do they consider other priorities to the degree possible while remaining efficient?
• Are the KPIs operating efficiently?
  • Are there just the right number of KPIs to enable timely and methodical decision making?
  • Can the data be collected, prepared, and analyzed in a timely and cost-effective fashion?
  • Are the reports or analysis ready by the time decisions need to be made?
• Are the KPIs operating effectively?
  • Do they result in positive changes in actual performance?
  • Are the calculations accurate?
  • Are the information sources reliable?

Usually, KPIs measure outcomes (e.g., sales, production). Sometimes they measure process characteristics (e.g., timeliness, accuracy). KPIs may be quantitative (e.g., the percentage of customers who repay in full without delinquency) or qualitative (e.g., appropriate use of red flags when evaluating customers who are borderline for credit approval or denial). Sometimes KPIs measure risk (e.g., delinquency rates, the trend in error rates); these are referred to as key risk indicators, or KRIs. KRIs are often used as leading indicators of risk. That is, if a KRI trends dangerously upward or crosses a predefined threshold, management can identify and correct the root cause before actual damage occurs.

Balanced Scorecard

A balanced scorecard approach can be used to develop specific KPIs. A balanced scorecard examines performance from four different perspectives: financial needs, customer satisfaction, business processes required to accomplish the activity's mission, and learning and growth to ensure continuous improvement. Many organizations include customized categories that are meaningful to the industry, organization, or functional area.

A balanced scorecard is often used by organizations that want to embrace sustainability or corporate social responsibility. Increasingly, organizations are reporting their corporate social responsibility performance measures to external stakeholders, and internal auditors are starting to play a role in auditing sustainability programs and the design and reliability of the measures. The idea is to create long-term value for the organization and the communities in which it operates.
Even as organizations work to add long-term value by considering customers, processes, and learning and growth, they need to stay in business to do so; therefore, the financial perspective is still a necessary and vital area of an organization's performance even as the organization expands its perspective. Examples of financial metrics (some of which might be designated as KPIs) are discussed later, in Section IV, Chapter 1.
While financial metrics will be primarily quantitative in nature, the other three balanced scorecard perspectives may contain a mix of quantitative and qualitative measures. Some of these other areas are more difficult to measure, especially over the short term. For example, as organizations implement formal sustainability programs and practices, they are developing related performance measures; some of these may be quantified, while others will be more subjective or require estimation, such as the impact of higher quality or the effect of a particular program on customer loyalty. Exhibit I-2 shows an example of a balanced scorecard that might be developed for the credit functional area of an organization. Note that the sources of the organization's objectives are shown in the center.

Exhibit I-2: Balanced Scorecard for Credit Functional Area
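To make the four-perspective structure concrete, a scorecard for a credit area can be sketched as a simple data structure. The perspective names follow the balanced scorecard discussion above; every measure and target below is a hypothetical illustration, not content taken from the exhibit.

```python
# Minimal sketch of a balanced scorecard for a credit functional area.
# The four perspectives come from the text; the measures and targets are
# hypothetical examples only.
credit_scorecard = {
    "Financial": [
        ("Bad-debt expense as a % of credit sales", "<= 2.0%"),
    ],
    "Customer": [
        ("Credit applications decided within 2 business days", ">= 95%"),
    ],
    "Internal business process": [
        ("Credit files with complete, documented risk scoring", "100%"),
    ],
    "Learning and growth": [
        ("Analysts current on credit-policy training", "100%"),
    ],
}

for perspective, measures in credit_scorecard.items():
    for measure, target in measures:
        print(f"{perspective}: {measure} (target {target})")
```

In practice an organization would attach owners, data sources, and reporting frequencies to each measure; the point here is only that each perspective carries its own small set of measures tied to objectives.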
Step 4: Monitor and Report Results. When providing consulting or assurance related to an audit area’s KPIs, it is important to verify whether or not performance against the KPIs is monitored, considered as the basis for quality improvement, and reported at an agreed-upon frequency to the appropriate levels of management (and perhaps the board, depending on the area) and in the manner desired by the area’s stakeholders (e.g., presentations, automated dashboards, emails). Occasionally, in-depth interviews and surveys should be conducted with
stakeholders. Internal auditors may also want to benchmark the audit area’s KPIs against those of similar functional areas of competitors, of industry leaders in a given functionality, or of similar functional areas in different business units. Assurance or consulting engagements may also assess the quality and accuracy of the data used, the correctness of the calculations or formulas used in ratios, whether automation is being used properly to make data collection and analysis seamless (and more likely to be done on a regular basis), and the risk of errors in the analysis and reporting systems as well as how the errors might be introduced (e.g., a spreadsheet is easy to create but also easy to alter, creating a significant risk of errors being introduced even into a previously error-free spreadsheet).
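The leading-indicator use of KRIs described in step 3 can be illustrated with a short sketch: flag the metric when it crosses a predefined threshold or shows a sustained upward trend. The threshold value, trend window, and delinquency figures are all illustrative assumptions.

```python
# Sketch of using a KRI (e.g., a delinquency rate) as a leading indicator.
# Management would investigate root causes when either condition fires.

def kri_alert(series, threshold, trend_window=3):
    """Return the reasons, if any, that this KRI warrants management attention."""
    reasons = []
    if series[-1] > threshold:
        reasons.append("threshold breached")
    recent = series[-trend_window:]
    # A strictly rising run over the window counts as a dangerous trend.
    if len(recent) == trend_window and all(a < b for a, b in zip(recent, recent[1:])):
        reasons.append("rising trend")
    return reasons

# Monthly delinquency rates (%) for a credit portfolio (hypothetical data):
delinquency = [1.8, 1.9, 2.1, 2.4]
print(kri_alert(delinquency, threshold=2.0))  # -> ['threshold breached', 'rising trend']
```

The value of the trend check is that it can fire before the threshold is breached, which is what makes the indicator "leading" rather than merely reactive.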
Topic C: Organizational Behavior and Performance Management Techniques (Level B)

Internal auditors who understand what motivates people will be in a better position to determine whether a given decision, performance measure, incentive/penalty, policy, procedure, or control may be efficient and effective at encouraging people in the organization to work toward organizational goals. They will also be better versed in human nature and therefore more able to detect when these things are likely to generate unintended consequences (the negative or counterproductive side effects that can result from a decision, measure, control, etc.). An unintended consequence could be a manager not budgeting for sustainability improvements because sustainability is not part of that manager's performance assessment. Or it could be a control weakness that promotes altering, ignoring, or finding loopholes to exploit in the control for personal or organizational gain.

Part 1 of this learning system looks at three conditions (opportunity, motive, and rationalization) that can suggest the possibility of fraud if present in the right proportions. Learning about the motivation of individuals and groups can help internal auditors understand more about each of these conditions.

Human motivation is complex, especially considering that organizations have a certain degree of cultural diversity and contain persons from different generations and age groups. Due to these and other complexities, no one motivational theory has been determined to be the best for predicting organizational behavior in all situations. Many experts have generated motivational theories, and the ones discussed below have had some staying power. Learning about these various motivational theories will provide internal auditors with a grounding in this area of considerable research and debate. This topic will then go on to discuss some ways to understand the organizational environment and the people in it.
This understanding is needed for the effective design of performance management techniques, such as job design or customizing rewards, which form the conclusion to this topic.
Motivational Theories

Basic to human behavior, and thus organizational behavior, is motivation. Motivation is an individual's desire or drive toward a reward or goal. In the workplace, this refers to an individual's self-direction and persistence toward accomplishing work goals and outcomes. Motivation has to do with people's needs, what they value, and their perceptions and feelings. There are two basic types of motivation:

• Intrinsic motivation is internally driven, such as when an action is important or matches personal values.
• Extrinsic motivation is externally driven by factors such as money, public recognition, or other rewards.

There are several historical theories of motivation that have relevant application to behavior in organizations today. A brief account of some of the primary motivational theories follows.
Hierarchy of Needs (Maslow)

Abraham Maslow's hierarchy of needs is generally described as a pyramid with five levels, starting with the most basic physiological needs on the bottom. The basic premise is that only after the lower-level needs are met can the higher levels be met. Exhibit I-3 lists the levels in the hierarchy, examples of what each level includes, and ways to meet individual needs in the workplace.
Exhibit I-3: Maslow's Hierarchy of Needs

Self-actualization needs
Examples: Personal growth and striving to reach one's full potential
Workplace applications: Challenging assignments, professional development opportunities, and leadership responsibilities

Esteem needs
Examples: Internal needs such as self-esteem and self-respect and external needs such as status, reputation, and recognition
Workplace applications: Promotions, job titles, special recognition, and rewards

Social/belonging needs
Examples: Friends, love, and a sense of belonging
Workplace applications: Employee orientation programs, peer and mentor coaching, work teams, and social functions

Safety needs
Examples: Safe environment, protection, and financial security
Workplace applications: Safe physical environment, job security, and job benefits

Physiological needs
Examples: Body functioning such as sleep, food, and water
Workplace applications: A job, sufficient earnings, work breaks, refreshments, and health and wellness programs
Managers have an opportunity to take an interest in those around them and encourage growth toward self-actualization. Understanding where an employee might be in the needs hierarchy helps managers determine what strategies might motivate the employee. It should be noted that needs will vary given current circumstances in an individual’s work or personal life. They will also vary depending on the current business cycle or job market.
Motivation-Hygiene Theory (Herzberg)

Frederick Herzberg developed a "two-factor" theory that says that there are factors in a work environment that cause employee satisfaction or dissatisfaction. Herzberg's theory is that people have two important types of needs: survival and personal growth. In the workplace, survival (or hygiene) factors can become sources of dissatisfaction, while motivator factors are sources of personal growth and satisfaction. Exhibit I-4 provides examples of each.
Exhibit I-4: Hygiene and Motivator Factors

Hygiene (survival) factors:
• Organizational policy
• Manager/supervisor relationships
• Working conditions
• Salary and benefits

Motivator (personal growth) factors:
• Achievement
• Recognition
• Responsibility
• Training/development
• Advancement
An important part of the theory is that a hygiene factor cannot itself provide job satisfaction; it can only prevent dissatisfaction. For example, good working conditions may prevent an employee from being dissatisfied, but they do not in and of themselves provide job satisfaction. A motivator factor can create job satisfaction. However, if the factor is not there, it does not lead to dissatisfaction. For example, added responsibility may increase an employee’s job satisfaction, but if the responsibilities were not added, the employee might not be dissatisfied. With application to motivation in the workplace, the thrust of Herzberg’s theory is that hygiene factors must be provided to prevent dissatisfaction. For job satisfaction, additional motivation factors should be provided. Job enrichment, discussed later in this topic, is a key strategy in this regard.
Theory of Needs (McClelland)

David McClelland's theory describes three types of motivational needs that are learned and acquired over time:

• Achievement. Achievement-motivated individuals need accomplishment. They strive toward goals and want feedback on their progress.
• Affiliation. Affiliation-motivated individuals need interaction with others. They seek acceptance, develop friendships, and cooperate well with others.
• Power. Power-motivated individuals need power and authority. They want to lead, influence, and make an impact. Recognition and status are important.

McClelland's theory is that individuals have some degree of each characteristic. In the organizational setting, managers will want to consider each individual's need motivation when shaping work responsibilities and rewards.
Theory X and Y (McGregor)

Douglas McGregor developed Theory X and Theory Y, espousing that there are two basic approaches to management based on assumptions about employees. Theory X states that the average employee:

• Dislikes work and will avoid it when possible.
• Must be coerced to achieve organizational goals.
• Has little ambition and prefers to be directed.
• Seeks security above all else.

Theory Y states that employees:

• Enjoy work as a natural effort.
• Are motivated by rewards.
• Seek responsibility and, when committed, are self-directed.
• Have creative and intellectual potential that is underutilized.

Theory X lends itself to an authoritarian management style, where managers and supervisors exert a higher level of authority over employees with regard to decision making and work accomplishment. Theory Y relates to a participative style, where managers and supervisors encourage a high level of employee participation and collaboration in decision making and work accomplishment. Most organizations' managers and employees fall somewhere in between these two theories. The theories cultivate awareness about motivation. McGregor implied that either theory could motivate employees but that Theory Y is a more positive approach.
Organizational Management Styles (Likert)

Rensis Likert identified four organizational management styles:

• In the exploitive-authoritative system, leaders have authority, decisions are imposed, and threats are made. There is little communication and no teamwork.
• In the benevolent-authoritative system, leaders have authority and motivation comes through rewards. There is little communication or teamwork.
• In the consultative system, leaders have a good deal of trust in employees and motivation comes through rewards and some involvement. There is some communication and some teamwork.
• In the participative system, leaders have full trust in employees, and rewards and goals are set in mutual discussion. There is much communication and much teamwork.

Likert's overall thrust is that a high level of participation between leaders and employees fosters a high level of motivation among all.
Expectancy Theory (Vroom)

Victor Vroom's expectancy theory is based on the assumption that employees' motivations and actions are choices based on three beliefs:

• Expectancy refers to how high an expectation there is that effort will produce successful outcomes and rewards.
• Instrumentality is how strong the belief is that rewards will actually be received if effort is exerted.
• Valence is how strongly rewards are valued and desired.

In essence, individuals will tend to be more motivated if they have high expectations of success, if they have a high belief that they will receive rewards, and if the intrinsic or extrinsic rewards are highly valued.
The expectancy theory has implications for today’s managers. Managers can encourage individuals toward successful outcomes, act on promises to deliver rewards in a timely manner, and discern which rewards are most valued.
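Vroom's three beliefs are commonly combined multiplicatively (motivational force = expectancy x instrumentality x valence), which captures the idea that a low score on any one belief undermines motivation regardless of the others. The 0-to-1 scales and the example values below are illustrative assumptions.

```python
# Sketch of the multiplicative combination of Vroom's three beliefs.
# The 0.0-1.0 scoring scale is an assumption for illustration only.

def motivational_force(expectancy, instrumentality, valence):
    """A near-zero score on any one belief drives the overall force toward zero."""
    return expectancy * instrumentality * valence

# Strong belief that effort leads to success, but doubt that rewards will follow:
force = motivational_force(expectancy=0.9, instrumentality=0.2, valence=0.8)
print(round(force, 3))  # 0.144 -- low despite high expectancy
```

This is why acting reliably on promised rewards (instrumentality) matters to managers: a broken promise can depress motivation even among employees who are confident in their own ability to perform.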
Equity Theory (Adams)

J. Stacy Adams put forth the equity theory, which refers to employees' expectations that they will be rewarded fairly for their contributions to the organization. Employees want to receive rewards, or "outputs," that align with their contributions, or "inputs," to the organization. Inputs in the equity theory are hard work, dedication, years of service, special skills, flexibility, ambition, and other contributions; outputs are pay, benefits, perquisites, flexible work arrangements, praise, promotions, status, and professional development opportunities.

Individuals seek a fair balance of inputs and outputs, both by their own estimations and by comparison to others. If the reward system feels unfair, individuals may lose confidence and become demotivated. They may reduce their efforts, cause disruption, or resign. If the balance of rewards is perceived as overcompensation, individuals may try to increase their efforts to better match the outputs. Or individuals may become demotivated, given the higher balance of outputs for their current inputs, and consequently decrease their efforts. If circumstances feel fair, employees are likely to be motivated and content and to maintain their contributions.

Related to the equity theory, managers and the human resources area need to carefully design reward systems to be as fair and equitable as possible.
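Adams's comparison is often described as a ratio test: the employee weighs his or her own output-to-input ratio against a referent's. A minimal sketch follows, where the numeric "points" and the tolerance band are illustrative assumptions (real perceptions of inputs and outputs are subjective).

```python
# Sketch of the equity-theory ratio comparison: own outputs/inputs versus a
# referent's. The 10% tolerance band is an assumption for illustration.

def equity_perception(own_outputs, own_inputs, other_outputs, other_inputs,
                      tolerance=0.10):
    """Classify the perceived fairness of the reward balance."""
    own_ratio = own_outputs / own_inputs
    other_ratio = other_outputs / other_inputs
    if own_ratio < other_ratio * (1 - tolerance):
        return "under-rewarded"  # risk: reduced effort, disruption, turnover
    if own_ratio > other_ratio * (1 + tolerance):
        return "over-rewarded"   # may increase effort, or become demotivated
    return "equitable"           # likely to stay motivated and content

# Same perceived inputs (e.g., hours and skills) but noticeably lower rewards:
print(equity_perception(own_outputs=50, own_inputs=100,
                        other_outputs=70, other_inputs=100))  # under-rewarded
```

The comparison runs in both directions, which is the design point for reward systems: perceived over-reward can be almost as destabilizing as perceived under-reward.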
Goal-Setting Theory (Locke and Latham)

Edwin Locke and Gary Latham's goal-setting theory espouses that:

• Setting specific and challenging goals (as opposed to no goals or vague goals) results in improved performance.
• The more challenging the goals, the higher the performance outcomes (unless a goal is unrealistic).
• Feedback helps individuals adjust performance and reach goals (unless an individual is not committed to the goals).
• Having an employee participate in goal setting helps primarily as an information exchange rather than as a way to secure goal commitment.

Goal setting is important to motivation. It helps employees determine the necessary activities and adjust their level of effort to reach the goals. This encourages persistence until a goal is reached. Managers will want to encourage the use of goal-setting techniques to realize organizational objectives.
Reinforcement Theory (Skinner)

B. F. Skinner's reinforcement theory says that behavior is a function of its consequences. Behavior modification techniques are involved in trying to "modify" employee behavior:

• Positive reinforcement delivers a desirable consequence to encourage repeat behavior in the future.
• Negative reinforcement removes or avoids an undesirable consequence to encourage desired behavior in the future. Avoiding a speeding ticket is an example of negative reinforcement: the behavior (following the speed limit) is strengthened because a negative condition is stopped or avoided as a consequence of the behavior.
• Extinction removes a reinforcing consequence to discourage repeat behavior in the future. For example, a behavior might be ignored if it seems to be motivated by giving it attention.
• Punishment delivers a negative consequence to discourage repeat behavior in the future. It should not be confused with negative reinforcement; punishment weakens a behavior, while negative reinforcement strengthens a desired behavior.

Reinforcement has much relevance in organizational behavior. There are distinct challenges, however, in recognizing when and how to use behavior modification techniques. The techniques can be applied on a scheduled basis, such as yearly bonuses, or on an intermittent basis, such as periodic rewards for work well done.
Environmental Factors

Motivated and engaged employees help organizations become more productive and reach overall organizational goals. One may well understand theories of motivation, but they need to be understood in the context of the organizational environment. Strictly speaking, employee motivation must come from within each individual. However, motivation needs the right environment to thrive. For example, for employee empowerment to work, there needs to be a culture of trust and an atmosphere of learning from mistakes. The organizational environment therefore has much to do with shaping employee motivation and outcomes. There are influences in the organizational culture that may work for or against employee motivation and performance improvement.

For purposes of this discussion, the environmental factors that directly influence the design of performance management techniques in an organization include organizational structure and culture, organizational politics, and trait theory.
Organizational Structure and Culture

The organizational structure and culture are the foundations for organizational behavior. As described previously, an exploitive-authoritative organizational management system will look very different in practice than a participative system. There are many ways that organizations can provide a supportive environment:

• Communicate organizational mission, vision, objectives, goals, and expectations clearly and widely.
• Establish and regularly reemphasize the importance of staying committed and adhering to organizational core values and codes of ethics/conduct.
• Develop a culture that welcomes employee participation.
• Provide necessary resources and remove system or process barriers.
• Provide for physical needs such as a clean, safe, and ergonomic work environment.
• Provide options such as flexible work schedules and choices in health plans.
• Support continuing education and professional development activities.
Organizational Politics

Organizational politics describes informal structures of power and influence that can be used to pursue various objectives: to obtain self-interested or other unsanctioned goals, to achieve organizational goals using unsanctioned methods, or to find solutions or compromises when there are multiple competing interests.

Organizational politics could easily become a governance, risk, or control (GRC) issue at an organization when the objectives are unsanctioned or when the ends are used to justify unsanctioned means. For example, a manager's self-interest may be to gain power, get a promotion, or get a bonus. If this person withholds resources required to retain a major client in order to improve his or her department budget and thus get a bonus, the organization is the loser. Similarly, if a person achieves desired organizational goals but does so in a way that violates policies or procedures, this becomes an ethical question of whether the end justifies the means. It also engenders further disregard for policies and procedures. Office politics are most useful when the objective is to help broker compromise or consensus among competing interests.

From a performance management design perspective, it is important for managers to understand the degree to which organizational politics exists at the organization and to keep this in mind as they design performance management techniques so as to minimize governance, risk, and control problems or other unintended consequences. Note that senior management can also be part of the problem when it comes to the negative aspects of organizational politics, and internal auditors should be aware of this possibility. For example, senior managers who engage in organizational politics may develop or modify controls or management techniques to have deliberate deficiencies that enable their continued accumulation of power and so on.

Organizational politics is an extremely important aspect of organizational dynamics, communications, relationship building and maintenance, and so on. It can greatly impact the control environment, in other words, and internal auditors need to make this part of ongoing control environment assessments.
Trait Theory

The term "trait theory" refers to various theories that have been developed to categorize and understand human personality traits. Understanding how to develop effective performance management techniques often depends on the type of person being managed. Trait theory has been applied to people in general and also to help determine whether a person would make a good leader; the latter is addressed in the next topic.

One trait theory for people in general is called the "Big Five" theory, which considers the following key personality dimensions:

• Extroversion, or the degree to which a person is outgoing, assertive, or willing to socialize (or shy, unassertive, or antisocial)
• Agreeableness, or the degree to which a person is cooperative, helpfully disposed, and trusting (or uncooperative, ill-natured, or distrustful)
• Conscientiousness, or the degree to which a person can be persistent, dependable, and reliable (or a quitter, undependable, or unreliable)
• Neuroticism, or the degree to which a person can remain relaxed, secure, and free from worries (or is tense, insecure, and worried)
• Openness, or the degree to which a person is open to new experiences, broad-minded, imaginative, or curious (or set in his or her ways, narrow-minded, unimaginative, or incurious)

Another type of trait theory, popularized by the Myers-Briggs Type Indicator (MBTI) survey that many organizations have used with their
employees, was developed by psychologist Carl Jung. One thing it describes is four different approaches people use to solve problems:

• Sensation-feeling persons are oriented toward human interaction and open communication and so are good at problems requiring empathy and cooperation.
• Sensation-thinking persons are oriented toward technical detail and logic and so are good at problems requiring precision, order, and dependability, such as observing, recalling, and correct execution.
• Intuitive-feeling persons are oriented toward insight, creativity, idealism, and the big picture related to people and are good at problems requiring imagination and elegant solutions.
• Intuitive-thinking persons are oriented toward synthesizing and interpreting ideas and speculating on causes or results using logic and objectivity and are good at problems requiring problem solving, inquiry, or discovery.

Trait theories are interesting to study in general but gain relevance when used to better understand specific workers, managers, leaders, and oneself.
Performance Management Techniques

Given an understanding of organizational behavior and the organizational environment, managers and supervisors can design effective performance management techniques that minimize unintended consequences and organizational risks. Managers and supervisors have numerous ways to manage performance through regular interactions, so this is discussed first. They can also use work group design, job design, and reward systems to manage performance and properly motivate subordinates. Internal auditors can assess the quality and effectiveness of such techniques as well as whether they are generating unintended consequences such as governance, risk, or control issues.
Managers and Supervisors

Managers and supervisors interact with employees on a daily basis. Several studies have indicated that the relationship an employee has with his or her supervisor is very important to workplace attitudes and employee retention. When managers set high expectations and create a positive work environment, employees are more likely to reach those expectations, as long as the expectations were feasible in the first place. (Goals that are clearly unrealistic can be demotivating.)

Goal setting is an effective means to encourage achievement. It is important to provide the resources and feedback necessary to propel employees toward the established goals. Managers should understand the basic concepts of motivational theories and their application to the workplace. However, managers will want to first examine their own beliefs about motivation and consequences.

An atmosphere of trust is built by delegating to persons who have exhibited responsibility in the past and by expressing confidence in such employees' abilities to succeed. That, in turn, builds employee confidence and empowers work groups.

Performance feedback is a vital component in motivational behavior. Constructive feedback should be given on a continual basis so that employees will learn, grow, and take corrective measures. Opportunities should be taken to praise, recognize, and celebrate successes and to otherwise use appropriate reward systems.
Work Group Design

The way an organization organizes work affects employee attitudes and behaviors. Work groups and teams are increasingly used to achieve organizational objectives. They have goals to achieve, and rewards are based on team outcomes. Work groups also support employee affiliation and social needs in the workplace setting. Team members are behavior influencers in an organization. Group norms, dynamics, communication, and other issues may affect individual motivations and outcomes. For example, groupthink causes members to conform without considering a range of alternatives.
Job Design and Motivation
Motivation in the workplace really begins with the selection process. An individual who is a good fit for a role is likely to be more motivated from the start. Appropriate selection and promotion decisions are important to the organizational framework.

Job design, the way a job and its tasks are organized, also impacts employee motivation. A person’s job can be a source of reward in and of itself. Job design includes what the job tasks are, the order in which they are performed, and how they are done, as well as how the job relates to other jobs in the organization. The workplace design for the job is important as well. Employees need certain resources in their work environment to be able to physically do a job, whether that be an ergonomic office arrangement or specific equipment or tools.

Factors to consider in job design include:

• Proper orientation and training.
• Variety in task type and level of challenge.
• Clear links from tasks to organizational outcomes.
• Solicitation of employee input.
• Autonomy to complete the work.
• Work schedule balance, including breaks and vacations.
• Balance of mental and physical exertion requirements.
• Performance feedback opportunities.
• Sense of accomplishment.

An important concept in job design is that adjustments can be made over time to help increase employee satisfaction. These adjustments include job enlargement, job rotation, and job enrichment:

• Job enlargement broadens the scope of a job with an expansion of similar or different tasks. A person’s responsibilities in the organization are not necessarily increased. Job enlargement reduces the risk of boredom and encourages employees to learn and grow.
• Job rotation is a method of job enlargement in which employees move between different tasks and jobs.
• Job enrichment adds depth to a job by adding responsibilities. Employee participation increases with more responsibility, accountability, and independence.
Reward Systems

Employee behavior is influenced by intrinsic and extrinsic rewards. Organizations will want to develop effective reward systems based on guidelines such as these:

• Communicate the organization’s reward systems widely.
• Provide reward options that are meaningful to individuals.
• Ensure that rewards are consistent with levels of accomplishment.
• Ensure that rewards are readily available for distribution.
• Distribute rewards close to the time of accomplishment.
• Clearly communicate reasons for individual or team rewards.
• Make rewards as long-lasting as possible.
• Set policies that are equitable when compared internally and externally.
• Praise publicly but reward privately to reduce perceptions of unfairness.

Reward systems are most effective when managers can customize the types of rewards they provide based on their knowledge of what is currently motivating the individuals under their authority. For example, persons who have young children may appreciate more flexibility in work schedules as a reward for accomplishing annual performance goals. A young employee wanting to jump-start a career may value being enrolled in a sales training seminar as a reward for meeting sales goals.
Performance Appraisals

Many organizations use performance appraisals to encourage desired behaviors and link job performance to the reward system. Traditionally, performance appraisals were done on a set schedule and followed a formal process. Alternative methods are often the result of the motivational theories discussed earlier and the need for employees and organizations to include frequent performance feedback. In either case, the best use of performance appraisals focuses on communication between managers and employees.

Traditionally, input for the appraisal came from management; however, it can also come from sources such as the employee’s peers, customers, the employee himself or herself, or a combination, as in 360-degree feedback. In this case, feedback is received from everyone, including peers, self-ratings, upward assessment, and management.
Topic D: Management’s Effectiveness in Leadership Skills (Level B)

Many factors can be considered in assessing managers’ effectiveness, including, for example, whether they:

• Generate good results rather than just good intentions.
• Provide guidance on worthwhile goals but also are able to inspire the workforce to commit to those goals (organizational commitment) and to work toward them in a proactive and self-motivated manner.
• Have the ability to develop the workforce to meet current and upcoming organizational challenges, such as through mentoring.

Some of these measures of effectiveness will be best developed using management skills; others will require leadership skills. Truly effective managers will acquire and use both skill sets.

Management and leadership are different but complementary skill sets. Internal auditors who take the time to understand the difference will not only be in a better position to recognize when a person is applying one or the other of these skills effectively (or needs improvement) during an assurance or consulting engagement; they will also be able to evaluate these skills in assessments of themselves or others in the internal audit activity.

After defining management and leadership, this topic discusses a number of leadership theories to help internal auditors get a grounding in some of the schools of thought in this field of study. The topic concludes with a discussion of mentoring and coaching.
Management Defined

Management is the conduct of business to achieve organizational objectives by planning, organizing, and controlling activities. A manager implements the organization’s strategy and provides the necessary structure for people and operations on a day-to-day basis. Managers judiciously allocate and control resources and subordinates to effectively and efficiently accomplish goals.

The manager’s activities of planning, organizing, and controlling can be defined as follows:

• Planning is setting the organization’s course by specifying expectations, goals, and performance objectives for the long, medium, and short term. It includes strategic planning, tactical and operational planning, short-term planning and forecasting, and planning for project management.
• Organizing is developing an appropriate organizational structure, a process flow, and policies, procedures, and practices so as to coordinate the organization’s components into an interdependent system. Organizing activities include staffing, resource gathering, and team building.
• Controlling is the use of formal authority in an organizational hierarchy to direct or restrain inputs, processes, outputs, and people. The need for control can be based on a manager’s business knowledge and intuition, or it can be more methodical, such as using observation, measurement, and analysis of variances from plan. Controlling can be thought of as making the course corrections needed to correct variances, get back on plan, or achieve planned results.

Managers who can reliably produce planned results will be judged as having entrepreneurial ability. An entrepreneur is ultimately responsible for success or failure; he or she answers for the bottom line and gets no credit for good intentions.

In addition to formal authority or legitimate power, management tools include the power to reward and promote, the power to coerce (either by threatening punishment or by threatening to withhold rewards or promotions), the power to control who gets what information, and the power to control the steps and order of processes or tasks, not just the results. These are called bases of power and are defined more formally later in this topic.
A key point is that all of these powers could be abused, to the detriment of the organization’s effectiveness. Good managers use these powers appropriately.

Note that these management powers help define the employer-employee relationship. A manager’s relationship with an independent contractor is different. In an independent contractor relationship, managers have the power to direct the end result only, not the means of accomplishing the task. Organizations can face significant liability if employees are misclassified as independent contractors to avoid paying benefits, employment taxes, and so on. In the U.S., classification is based in part on whether management directs and controls work processes as well as the result.

Having formal authority and a few management powers is necessary but not sufficient for management success. Great managers get their subordinates to take on organizational goals as their personal goals and to do so voluntarily and with enthusiasm. They get there by exhibiting leadership qualities.
Leadership Defined

A leader is a person who influences others to accomplish organizational goals and objectives. Leaders hope to inspire employees to follow them on a voluntary basis. The word “inspiration” is from a Latin root that means “to breathe in.” As it relates to leadership, inspiration refers to breathing life into or enlivening the way people think, feel, act, and dream so they are motivated and enthusiastic to accomplish the goals the leader sets.

Leaders are responsible for communicating the organization’s vision and for providing a motivating environment to gain followers. Success in these areas will result in employees feeling strong organizational commitment; they will make the organization’s goals their goals.

An effective organization needs both strong leaders and strong managers. One person can, and should, be both manager and leader. A manager needs to be an effective leader, and a leader needs some task focus in the organization. A good balance of both can inspire others to achieve organizational objectives.
Leadership Skills

Inspiring followers requires a different skill set than being a good planner, organizer, or controller. It requires building personal influence with others. Personal influence is power that is associated with the individual rather than with that person’s position. It may be easier to develop for a person with a certain level of formal authority, but organizational position does not guarantee personal influence. Personal influence can be built in different ways, such as by making wise and fair decisions over time based on knowledge and experience and by using rational persuasion, the use of rhetoric to make goals seem both desirable and achievable.

Also critical to leadership and its related qualities of influence and inspiration is relationship building. Building dynamic relationships involves treating subordinates with respect, living by the organizational and social values that one espouses, and following through on promises. It requires communication skills such as active listening, empathizing with others’ points of view, and empowering and collaborating with subordinates.

Another way to develop into a leader who can influence and inspire followers is to study leadership theories and find methods that work well. What works well will differ for different personality types and work environments.
Leadership Theories

Many leadership theories have evolved over time to form a foundation for organizational leadership and management. A limited number of theories are highlighted here under key classifications.
Trait Theory

Trait theory was introduced in the previous topic. Developed in the 1930s, it is one of the earliest approaches to leadership. Trait theory asserts that some people are born with certain traits or characteristics—decisiveness, energy, intelligence, persistence, self-confidence—that naturally make them good leaders.

Over time, the research showed that while traits are important, traits alone do not make effective leaders. Eventually, the research shifted to focus more on what effective leaders do, and thus the behavior theories emerged.
Behavior Theory

Behavior theory, developed in the 1940s and 1950s, focuses on how effective leaders behave. Key studies in this regard include those conducted by the University of Michigan and Ohio State University and the Leadership Grid.

University of Michigan Research

This research identified two forms of leadership behavior, one focused on the job and the other on the employee. Job-centered leader behavior is when a leader concentrates on the work being done and coaches employees to complete tasks. Employee-centered leader behavior focuses on the person and group performance.

Ohio State Research

Two types of leader behaviors were identified in these studies. Consideration behavior is when a leader is considerate of employees’ feelings and shows a caring attitude. Initiating structure is when a leader uses schedules, rules, and other means to ensure that employees complete their work. Leaders can be high on one behavior and low on the other, high on both, or low on both.

Leadership Grid

The “Managerial Grid,” developed by Robert Blake and Jane Mouton in the 1960s, is frequently referenced in discussions about management in organizations. The model was modified and renamed the “Leadership Grid” in the 1990s by Robert Blake and A. Adams McCanse. The basic premise is that a leader has a management style that relates to his or her concern for people (relationship development and maintenance) and his or her concern for production (getting tasks done), as depicted in Exhibit I-5.

Exhibit I-5: Leadership Grid
This grid is used to characterize leader styles:

• Country club management is low concern for tasks and high concern for people. The environment is friendly, but there is a lack of attention to tasks.
• Impoverished management is low concern for people and low concern for tasks. Here, the work is done with minimal effort and minimal direction to people.
• Authority-compliance management is high concern for tasks and low concern for people. The style is authoritarian and task-oriented and not very collaborative.
• Team management is high concern for people and high concern for tasks. The work is productive, and a supportive individual and team environment is encouraged.
• Middle-of-the-road management is a halfway balance that falls in the middle of the grid. There is middle-level concern for the tasks and the people.

This theory helps managers see themselves and how they attend to the work and to the individuals and teams in the environment. In this theory, the team management style is held up as the one to strive for, although it may not be ideal in all situations.
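Because the grid is simply two 1-to-9 axes with named regions, the classification of styles can be expressed as a small lookup. The sketch below is purely illustrative (the function name and the "high/low" cutoffs of 7 and 3 are assumptions, not part of Blake and Mouton's published model) and shows how a pair of self-assessment scores would map to the five styles described above.

```python
def grid_style(concern_for_production: int, concern_for_people: int) -> str:
    """Classify a (production, people) score pair on the 1-9 Leadership Grid.

    Illustrative sketch only: treating scores of 7+ as "high" and 3 or
    less as "low" is an assumed simplification of the grid's corners.
    """
    def level(score: int) -> str:
        if not 1 <= score <= 9:
            raise ValueError("grid scores run from 1 to 9")
        if score >= 7:
            return "high"
        if score <= 3:
            return "low"
        return "mid"

    corners = {
        ("low", "high"): "country club management",
        ("low", "low"): "impoverished management",
        ("high", "low"): "authority-compliance management",
        ("high", "high"): "team management",
    }
    key = (level(concern_for_production), level(concern_for_people))
    # Anything away from the four corners falls toward the grid's center.
    return corners.get(key, "middle-of-the-road management")
```

For example, `grid_style(9, 9)` returns "team management", while `grid_style(5, 5)` falls through to "middle-of-the-road management".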
Participative Leadership
Participative leadership is an approach that encourages employees to be involved in the decision-making efforts of the organization. When leaders delegate problem solving and decisions to others, employees feel more empowered about their work and their ability to influence organizational outcomes. There are several variations on this theme. At one end of the spectrum is the authoritarian manager with high decision-making power; at the other end is a participative manager who encourages high participation in decision making, up to the point of full delegation to the team. This is shown in Exhibit I-6.

Exhibit I-6: Autocratic to Participative Spectrum
The following theories describe leadership styles along this spectrum.

Likert Leadership Styles

As described in the discussion of motivational theories in the previous topic, Rensis Likert categorized exploitive-authoritative, benevolent-authoritative, consultative, and participative leadership styles that are based around involvement in decision making.

Lewin Leadership Styles

Kurt Lewin identified three leadership styles:

• The authoritarian leader makes all the decisions, selects the team members and the tasks, and does not participate in the group.
• The democratic leader encourages team decision making, allows the team to manage its own tasks, and shares options and ideas with the team.
• The laissez-faire leader allows the team complete freedom in decision-making tasks and assists only by request.

Likert identified the participative style, and Lewin the democratic style, as the most effective. The basic assumption is that participation in decision making makes for better decisions and fosters employee commitment and empowerment.

Ouchi’s Theory Z

In the 1980s, William Ouchi introduced the Theory Z management approach, which modified American individualistic management practices with aspects of Japanese collectivist practices. Theory Z applies to the organizational level and relates to corporate culture. Some of the characteristics of a Theory Z organization are:

• Common cultural values.
• Collaborative environment.
• Consensus decision making.
• Stable and longer-term employment.
• Promotion from within and slower promotions.
• Downplay of titles and rank.
• Work team environment with more participation.
• High level of trust and employee loyalty.
• Recognition of individual contributions.
• Concern for employee well-being.
Contingency and Situational Theories

Contingency models of leadership are yet another way to discuss effective leadership. Contingency models take into account the context or situation the leader is in. A leader who is effective in one environment or set of circumstances may not necessarily be effective in a different environment.

Fiedler’s LPC Model

Fred Fiedler’s least-preferred-coworker (LPC) model asserts that leadership effectiveness is based on the leader’s personality (task or relationship orientation) and how favorable the situation is. Leaders who are task-oriented are similar to the job-centered or initiating structure leader, who values tasks and work completion. Relationship-oriented leaders are similar to the employee-centered or consideration leaders in that developing interpersonal relationships is highly valued.

Fiedler developed an exercise for managers that asked them to think about past work relationships and to identify the coworker they least liked to work with. They were then asked to rate this least-preferred coworker on a scale of 1 through 8 with descriptors at opposing poles. Factors ranged from unfriendly (1) to friendly (8), disagreeable (1) to agreeable (8), closed (1) to open (8), and so forth. Fiedler asserted that leaders who scored high on the LPC scale tended to rate others more positively and were more relationship-oriented. Leaders who scored low tended to rate others more negatively and were more task-oriented. The premise is that either relationship- or task-oriented leaders can be effective, but their orientation must fit the situation.

Fiedler suggested that there are three factors that determine how favorable, or how easy, it is to manage in a situation:

• Leader-member relations refers to how good the trust and relationships are with employees. The better the relationships, the more favorable the situation.
• Task structure refers to how structured tasks are. Structured tasks are favorable because unstructured tasks require more direction.
• Leader position power refers to the manager’s power because of his or her position. Stronger position power is a more favorable situation.
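The LPC scoring itself is simple arithmetic: the 1-to-8 ratings are totaled, and a high total suggests a relationship-oriented leader while a low total suggests a task-oriented one. The sketch below illustrates that scoring; the function name and the average-score cutoffs (5.0 and 4.0) are assumptions for illustration only, not Fiedler's published norms.

```python
def lpc_orientation(ratings: list[int]) -> str:
    """Score a least-preferred-coworker (LPC) questionnaire.

    Each rating runs from 1 (negative pole, e.g., unfriendly) to 8
    (positive pole, e.g., friendly). The cutoffs on the average rating
    below are illustrative assumptions, not Fiedler's actual norms.
    """
    if not ratings or not all(1 <= r <= 8 for r in ratings):
        raise ValueError("ratings must be on the 1-8 scale")
    avg = sum(ratings) / len(ratings)
    if avg >= 5.0:
        return "relationship-oriented"   # high LPC: rates the coworker positively
    if avg <= 4.0:
        return "task-oriented"           # low LPC: rates the coworker negatively
    return "indeterminate (mixed orientation)"
```

A manager who rates even a disliked coworker generously, say `lpc_orientation([7, 8, 6, 7])`, would be classed as relationship-oriented under these assumed cutoffs.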
The theory describes a way to match leader styles and situations for optimal relationships and performance. Fiedler asserted that a leader’s style does not change, so managers should be put in situations that are a good fit for their style, or the situation should be changed.

Path-Goal Theory

Developed by Martin Evans and Robert House, the path-goal theory suggests that effective leaders can motivate employees to achieve goals by:

• Clearly identifying outcomes and paths to the outcomes.
• Removing obstacles that stand in the way.
• Offering incentives and rewards along the way.

The path-goal theory says that leaders can adapt their behavior according to situations. Four leader behaviors are identified:

• Directive leadership conveys expectations, gives specific guidance, and helps subordinates improve performance.
• Supportive leadership shows concern for subordinates and provides a friendly climate.
• Participative leadership consults with subordinates and takes opinions into account.
• Achievement-oriented leadership sets challenging goals and shows confidence in subordinates’ abilities.

A theme in this theory is that leader behaviors can be adapted to the employee situation for increased efficiency and effectiveness. This theory also relates to the expectancy theory (discussed in Topic C): individuals are more motivated if they believe that their efforts can lead to successful outcomes and rewards, that they will actually get the rewards if successful, and that the rewards are something they actually value and desire.

Hersey-Blanchard Situational Leadership Theory

The Hersey-Blanchard situational leadership theory says that leaders should adapt their style to the maturity level of followers. There are four leadership styles placed on a leadership matrix. One axis shows how much relationship and supportive behavior is needed, and the other axis shows how much task and directive behavior is needed. The four leader behaviors are as follows:

• Telling/directing is used for followers who have a low level of readiness and maturity and need guidance. This is a high task and low relationship focus.
• Selling/coaching is appropriate when a follower has a low to moderate level of readiness and needs information, explanation, and encouragement. This is a high task and high relationship focus.
• Participating/supporting applies in cases where the follower has a medium to high level of readiness and can share in decision making. This is a low task and high relationship focus.
• Delegating/observing is appropriate when followers have a high level of readiness to work independently. This is a low task and low relationship focus.

The premise in this theory is that the readiness, or maturity, of an employee will change over time. Managers can adapt their strategies for communicating according to the four leader behaviors to meet the employee’s situation. This approach encourages the employee in a way that fosters self-confidence and motivation.
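The Hersey-Blanchard matrix amounts to a mapping from follower readiness to a (style, task focus, relationship focus) triple. The sketch below is a hypothetical illustration of that mapping; the readiness labels used as keys are paraphrased from the four bullets above, and the function name is an assumption.

```python
def situational_style(readiness: str) -> tuple[str, str, str]:
    """Map follower readiness to a Hersey-Blanchard style.

    Returns (style, task_focus, relationship_focus). The string keys
    are illustrative labels summarizing the four readiness levels.
    """
    styles = {
        "low": ("telling/directing", "high task", "low relationship"),
        "low to moderate": ("selling/coaching", "high task", "high relationship"),
        "medium to high": ("participating/supporting", "low task", "high relationship"),
        "high": ("delegating/observing", "low task", "low relationship"),
    }
    try:
        return styles[readiness]
    except KeyError:
        raise ValueError(f"unknown readiness level: {readiness!r}") from None
```

So a new, inexperienced hire ("low" readiness) gets a telling/directing style, while a seasoned, self-sufficient employee ("high" readiness) gets delegating/observing.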
Influence and Power Theories

An effective leader influences others in the organization to accept changes, make decisions, and implement the results of decisions. Influence is the ability to affect thinking, attitudes, and behavior change in the organization. Power is the ability to influence others. Power and influence theories are more people-oriented than other types of theories.

Transformational Leadership

Transformational leadership involves influencing, or transforming, change in employees and the work environment. Leaders who have a strong vision and inspire others toward that vision are better able to move the organization forward. Employees are more likely to follow leaders who are enthusiastic and can sell their vision. Transformational leadership also helps employees see the larger picture for the organization, inspires individual contributions toward organizational goals, and promotes collaboration for the greater good.

Charismatic Leadership

Charismatic leadership usually goes hand-in-hand with transformational leadership. It describes leaders who possess charisma in the interpersonal way they enthusiastically and energetically communicate the organization’s vision. A leader who is seen as having charisma is more likely to influence others in the organization.

Transactional Leadership

Transactional leadership focuses more on accomplishing the work in the organization through structures and reward systems. This type of leader uses rewards, takes corrective action, and reprimands as necessary.

Bases of Power

J. French and B. H. Raven developed a theory involving the following bases of social power, which are useful for considering relationships in an organization. Note that the terms “managers” and “employees” are used for illustration purposes.

• Legitimate power is the power that managers have because of their authority and position in an organization. Employees may comply but may not necessarily feel committed.
• Reward power is the power that a manager has over resources and rewards in the organization, including promotions. Rewards will need to be of interest and value to employees.
• Coercive power is the power that a manager has to force an employee to comply or to administer punishments for noncompliance. It can involve threats to punish or to withhold rewards or promotions. Coercive power is a last resort because it typically results in only short-term compliance and can generate resentment.
• Expert power is the power a manager or other individual has because of his or her special knowledge or abilities. Managers may choose how much they are willing to share their expertise.
• Referent power is the power a manager has because the employee respects and admires him or her. Referent power develops over time as trust and respect grow.

There are various theories about the types of power, the sources of power, and political behavior. The primary points to remember are that leaders wield influence in a number of ways and that those ways affect the motivation, attitudes, and participation of others in the organization.
Mentoring and Coaching

Mentoring and coaching are techniques to encourage learning, career growth, and participation in the organization. These techniques are useful in developing learning organizations.
Mentoring

Mentoring is a process in which a mentor who has developed certain expertise shares that expertise with a protégé. Mentoring programs can be formally established or accomplished through informal networks and communication. Formal mentor-protégé relationships are usually short-term in nature.

Some of the benefits of mentoring programs are:

• Organizational intelligence and best practices are shared.
• Mentors demonstrate and model for the protégé.
• Protégés are groomed to take on higher-level responsibilities and positions.
• Protégés are given challenging assignments.
• Protégés find encouragement for personal career growth and direction.
• Mentors find new perspectives and a sense of accomplishment.
• Mentors from other countries provide cultural insights.
• Mentors serve as models for behavior in the organization.
• Lifelong friendships are often formed.

The key in mentoring arrangements is to find or make good matches between individuals. Once the agreement is made, both the mentor and the protégé have duties to uphold:

• Mentors need to be patient, be available, adapt their communication style, share personal experiences, provide challenging learning experiences, assess progress, reflect with the protégé on an ongoing basis, and treat communication confidentially.
• Protégés need to accept opinions and advice, show respect, keep appointments, express appreciation, keep the mentor informed, take on challenging assignments, learn from successes and failures, and treat communication confidentially.

Mentor-protégé relationships go through development stages similar to group development. In the end, there is a separation stage, but individuals may keep in contact on an infrequent basis. In some cases, it is determined that there is not a good personality or skill competency match, and the relationship ends early on. As organizations become more networked, the environment naturally encourages the spontaneous development of mentor-protégé relationships.
Coaching

Coaching in the organizational setting refers to specific advising for new learning and improved work performance. While individuals are responsible for their own learning, guidance facilitates faster and smarter learning. Managers and supervisors need to develop good coaching skills to make a positive difference in the performance of individuals and teams. Many of the principles covered in the previous discussions of motivational and leadership theories apply in coaching.

Effective coaching involves:

• Assisting with goal setting and the path to reach the goals.
• Questioning and listening skills.
• Trusting and empowering.
• Demonstrating how to perform tasks.
• Giving positive reinforcement.
• Providing resources and removing obstacles.
• Designing challenging learning opportunities.

Supervisors have a responsibility to help employees succeed. Employees benefit from coaching and feedback on a regular basis, not just in weekly meetings or at the time of performance reviews. Specific benefits of coaching are that it:

• Facilitates self-directed learning.
• Provides information and techniques for problem solving.
• Pulls people out of their comfort zones.
• Encourages and helps motivate.
• Builds confidence and trust.
• Brings about organizational results.
Other coaching situations include executive leadership assessment and coaching programs, peer coaching (such as matching a new hire with a seasoned colleague), and subject matter coaching by internal or external experts. One cautionary note: coaching in the business environment should not be treated as a dependent counseling or therapy relationship. Individuals who need psychological counseling should be referred to appropriate professional help.
Chapter 2: Organizational Structure and Business Processes

Chapter Introduction

Organizational structure is part of an organization’s control environment. The Standards Glossary defines control environment as follows:

The attitude and actions of the board and management regarding the importance of control within the organization. The control environment provides the discipline and structure for the achievement of the primary objectives of the system of internal control. The control environment includes the following elements:

• Integrity and ethical values.
• Management’s philosophy and operating style.
• Organizational structure.
• Assignment of authority and responsibility.
• Human resource policies and practices.
• Competence of personnel.
When auditing the control environment, internal auditors may need to take a critical look at organizational structure to see if it effectively fulfills the organization’s governance objectives and overall business objectives. The introduction to The IIA’s International Standards for the Professional Practice of Internal Auditing states, “Internal auditing is performed in diverse environments and within organizations that vary in purpose, size, and structure” and that such “differences may affect the practice of internal auditing in each environment,” before going on to highlight the mandatory nature of the Standards regardless of these differences. Understanding and documenting the structure of an organization or one of its subdivisions is therefore a necessary preparatory step for an audit engagement.

Different organizational structures will have different audit implications. Each structure will have different risks and will need specialized controls. For example, a decentralized structure may have higher risks related to synchronizing organizational goals. Controls requiring process approvals may require more effort and creativity to implement successfully, such as by getting buy-in from autonomous managers and using distributed, automated control processes to ensure compliance without undue hardship or delay.

When internal auditors show sensitivity to the organizational structure in their workpapers, findings, and recommendations, it demonstrates that they understand the area being audited and have tailored their engagements and findings to the needs and realities of that area. In short, understanding organizational structures is part of showing competence and adding value.
Topic A: The Risk and Control Implications of Different Organizational Structures (Level B)

Organizational structure is the organization’s formal decision-making framework and its way of organizing authority, responsibilities, and performance activities. In the context of organizational structure, chain of command refers to the line of authority in the organization. Span of control refers to the number of employees who report to an individual in the chain of command.
Centralized and Decentralized Structures Organizational structures can be centralized (hierarchical), decentralized (flat), or anywhere in between along a spectrum. There is no one right degree of centralization or decentralization; one is not necessarily better than the other. The optimum structure for a given organization depends on various factors, including its industry, organizational culture and values, management style, national or regional location(s), and global footprint. A centralized structure is one in which there are several levels of authority, a long chain of command, and a narrower span of control. In times past, most organizations used this type of structure, so it is often considered a traditional structure. Decision making is concentrated in the higher levels of the management hierarchy. This structure is more bureaucratic, with a top-down management philosophy. Employees have little autonomy and must gain approval for actions. A decentralized structure is one in which there are fewer levels of authority, a shorter chain of command, and a wider span of control. Decision making is dispersed to the lower levels of the organization, giving employees more freedom to take action. The structure is less bureaucratic, with more bottom-up and lateral communication. Trends are shifting toward decentralized structures to allow more organizational flexibility and adaptability in today’s changing world. In more geographically dispersed organizations, a decentralized structure can provide timely and responsive
decision making that can leverage local expertise. As organizations grow by mergers and acquisitions, a decentralized structure between corporate headquarters and each business unit may become more and more necessary to minimize complexity and allow the leader of each business unit to apply local expertise in decision making. It is common to see hybrid structures forming in large diversified organizations, in which selected functions are managed in a centralized fashion to provide control and economies of scale while other functions are decentralized to reduce bureaucratic complexity and improve local accountability. Each individual business unit could be more or less centralized or decentralized depending on how it was originally formed and what model works best going forward to achieve its objectives.
Departmentalization Traditionally, organizations have been structured vertically, with top-down authority configurations. Such organizational structures are organized around work and job specializations. Departmentalization is a structure for grouping organizational work into specialized units and jobs. Grouping classifications may include product, geographic, process, and customer departmentalization as well as functional, divisional, and matrix structures. • In a functional structure, authority and decision making are arranged by functional groups such as finance, marketing, manufacturing, and research. Advantages are the ability to specialize and control business activities. A disadvantage is narrower perspectives in the organization. • A divisional structure is one in which divisions are fairly autonomous units within the organization. Divisions are specialized and may not even relate to one another. A division may contain all functions for a distinct group of products or services. Overall support is received from the centralized core of the organization. Advantages and disadvantages are similar to those of the functional structure, with the ability to specialize but narrower organizational perspectives. • A matrix structure is a team- and project-based approach between
functions and divisions. An employee from a functional department works with a manager from another department on a special team assignment. In essence, the employee reports to two managers for the duration of the project. The matrix structure permits greater flexibility and use of resources. However, there can be accountability and work conflict issues because of the dual reporting relationships. A matrix assignment can be short or long term. A primary benefit of departmentalization is that efficiencies are gained from grouping common knowledge and skills for a focused effort. Disadvantages may include departmental conflict and the formation of a “silo” mentality, in which artificial barriers between departments nevertheless produce very real effects, so that the overall process suffers from inefficiency and ineffectiveness.
Other Structures A number of other structures exist, including the following: • Hourglass. Hourglass-structured organizations attempt to minimize middle management and instead empower lower levels of management and employees and rely on information technology to perform many tasks traditionally done by middle management. Middle managers who remain are generalists who can handle cross-functional issues. • Network. A network organizational structure is similar to a matrix structure, but team members are much more likely to be contractors who are acquired for a given project only (or they may be remotely based employees). The organization may have a workspace or encourage working remotely. This type of structure depends heavily on technology for communications and may need additional layers of oversight or project management. • Cluster. A cluster organizational structure is very decentralized. Rather than having senior management or even committees, there are cluster groups and task forces. A cluster group is a small number of staff members with a cluster leader. Cluster groups exist for communication and
problem solving. Task forces are also created among cluster groups as needed to work on short-term goals. This might be seen in a hospital, where a cluster group would be all staff that work the same shift in the same ward. • Virtual. A virtual structure, also called a virtual network, involves a company acting as a hub or central core and then forming partnerships with various external organizations to provide specialized services (e.g., design, manufacturing, distribution, accounting, and so on) as a form of out-sourcing. The organizations could be in any country. The headquarters organization acts much like a general contractor would in construction, subcontracting all work to organizations with core competencies in the desired areas of expertise or with the needed regional presence. Networked computers and collaborative software may be needed to achieve seamless operations and communications. Components can be added or removed based on current needs. Exhibit I-7 compares the advantages and disadvantages of the various types of organizational structures discussed in this topic.
Exhibit I-7: Organizational Structure Comparisons

Centralized
• Advantages: Management consistency and control; economies of scale.
• Disadvantages: Slower decision making/responses; low employee participation.

Decentralized
• Advantages: Higher employee participation and satisfaction; faster decision making/responses.
• Disadvantages: Loss of economies of scale; less control over productivity and efficiencies; coordination difficult.

Departmentalization
• Advantages: Focus on common knowledge and skills.
• Disadvantages: Possible “silos,” conflict/inefficiency, and interdepartmental communication barriers between departments.

Functional
• Advantages: Specialization by function.
• Disadvantages: Narrower area perspective.

Divisional
• Advantages: Autonomy by division.
• Disadvantages: Narrower perspectives; loss of economies of scale.

Matrix
• Advantages: Team-based and high flexibility; blend of technical and market emphasis; efficient use of resources; combined strengths and synergy.
• Disadvantages: Dual reporting causes employee confusion and possible manager conflict.

Hourglass
• Advantages: Broader span of control at the bottom for daily decision making.
• Disadvantages: Slower responses through the channels for important decisions.

Network
• Advantages: High flexibility and adaptability; global possibilities.
• Disadvantages: Difficulty in lateral management; strong leaders and communication necessary.

Cluster
• Advantages: Team-based with specializations; encourages motivation and learning.
• Disadvantages: Difficulty in sustaining interest; communication conflict.

Virtual
• Advantages: High adaptability and response times; specialization.
• Disadvantages: Less loyalty; information overload; vulnerabilities in sharing knowledge.
Elements of Effective Organizational Structure A critical consideration in organizational design is how to best facilitate effective communication and coordination to achieve business goals and objectives. Regardless of what an organizational structure looks like on paper, an effective design will: • Reflect the entity’s size and nature of activities. • Establish formal lines of authority. • Define key areas of responsibility. • Establish reporting lines. • Establish relationships among individuals, groups, and departments. • Coordinate diverse organizational tasks.
• Assign responsibilities to specific jobs and departments. • Allocate and deploy organizational resources.
Organizational Structure and Risk Overall, an organization’s structure provides the framework to plan, execute, control, and monitor activities. COSO’s Enterprise Risk Management—Integrating with Strategy and Performance explains how an entity’s structure will specifically impact the following areas. (Note that ISO 31000 terms are presented in parentheses to indicate how the ISO risk management framework addresses similar activities.) • Development of goals and objectives (and sub-objectives). Organizations first set strategic objectives aligned to organizational goals. More specific objectives (sub-objectives) applicable to departments, functions, and individuals can then be developed. No matter what the organizational structure is, the critical aspect in developing these cascading objectives is that they are consistent with and support the strategic objectives. Further, all objectives should be clearly communicated and measurable. Everyone in the organization must understand the objectives related to their sphere of influence—how the functional area’s objectives and goals align with and support the overall organization’s objectives and goals, including what needs to be accomplished and how performance will be measured. • Event identification (or risk identification). As COSO points out, events can have a positive or negative impact (or both) on the implementation of organizational strategy and the achievement of objectives. Management must understand how one event can lead to or relate to others across the organization so that risk management efforts are appropriately coordinated. • Risk response (or risk treatment). Organizational structure is an important consideration when an organization evaluates how to best manage risk. Risk response or treatment should be an iterative process that considers not just the enterprise level but departments and functions as well. For example, the risk tolerance for specific departments may be
individually appropriate but collectively may exceed the risk appetite of the organization as a whole. Internal auditors can play an important role in identifying such situations, particularly in cases where management has not already done so (or has been ineffective in doing so). Or some functions may incur higher risks than others but the collective risk responses end up balancing the organizational risk appetite. • Control activities (or monitoring and review of the framework). Control activities are generally established to ensure that risk responses are appropriately carried out in support of related objectives. As is the case in other aspects of risk management, control activities do not occur in isolation. Many different types of control activities are typically performed by many people at different levels in an organization. It is the range and variety of control activities across an organization that keep all levels tracking toward the achievement of business objectives. Control measures are not transportable across different organizations. COSO makes the point that even if two organizations had identical objectives and similar strategies for achieving them, the control activities would differ based on organizational specifics such as environment and industry, size and complexity, nature and scope of operations, history and culture, and the individual judgments of the people affecting control. The concept that one control may serve multiple purposes is useful to understand in relation to organizational structure, since control activities come with a cost. For example, requiring a receipt to support a business expense may serve to control the accuracy of entries in the general ledger, to comply with tax legislation, and to reduce the likelihood of fraud. Depending on the organizational structure, it may be easier or harder to ensure that multiple benefits are achieved from a given control. For example, a matrix structure may need to clarify which “boss” should receive the receipts and approve the related business expenses; the answer may not seem clear-cut to a manager whose budget ultimately bears the expense. The larger and more complex the organization, the more risk/control issues
and challenges there are to face. Activities are more diverse in larger organizations, and there are exponentially more things to consider than in small, simple organizations with less variation in business activities. On the other hand, smaller organizations often have their own unique control challenges. For example, smaller organizations have fewer personnel and resources and therefore may have limited ability to apply controls such as segregation of duties or dual control. • Information and communication (communication and consultation). Every organization must capture a wide array of information related to internal and external events and activities. In turn, personnel throughout the organization must receive the information they need to efficiently carry out their responsibilities. An information infrastructure must capture data in a timely manner and at a level of detail appropriate to the organization’s need to identify events and respond to risks. The design of the system architecture and the acquisition of technology are critical and must accommodate the reporting relationships contained within the given organizational structure. Data integrity and reliability cannot be compromised. Management (and internal audit from an assurance perspective) needs to consider how a given organizational structure can accommodate challenges such as: • Conflicting functional needs. • System constraints. • Nonintegrated processes. To complement the information infrastructure, internal and external communications should support the organization’s risk management philosophy and approach. For example, all internal personnel should understand the importance of risk management, the organization’s objectives, and the roles and responsibilities to support initiatives. Personnel need to understand how their individual activities relate to the work of others. 
This implies that there must be open channels of communication across an organization as well as a cooperative spirit and a willingness to listen. Centralized organizational structures may face greater challenges in this regard and may need special processes in place to
encourage appropriate communication flows if the root cause of the problem—the centralized structure itself—cannot be changed. Communication with external parties (customers, suppliers, stakeholders, regulators, and others) also needs to be pertinent and timely. For example, meaningful related risk appetite and risk tolerance communication with suppliers can prevent an organization from inadvertently accepting excessive risk from a supplier who has different values. Understanding the organizational structures of each external party can help when evaluating the effectiveness of controls and contractual agreements with that partner. • Monitoring (monitoring and review, continual improvement). Risk management is hardly static. Over time, changes in organizational structures, personnel, processes, business objectives, the competitive environment, and other areas can make current risk responses irrelevant. Control activities may also lose effectiveness. Management must have reasonable assurance that risk management remains effective. The specifics on how this is accomplished will depend on the organization. Typically this involves two monitoring approaches: • Ongoing monitoring—built into normal, recurring activities and performed on a real-time basis • Separate evaluations—conducted after the fact (often by assurance activities independent of management) and intended to take a “fresh look” at risk management effectiveness Requiring an assessment or reassessment of organizational structure at the start of each audit engagement is one way internal audit can help determine how the organization is changing. It encourages a fresh look at the organization’s governance, risks, and controls. Ongoing monitoring of recurring activities could also highlight areas where the organizational structure is creating value or causing problems.
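One risk-response point above is worth making concrete: departmental risk tolerances may each look acceptable in isolation while their sum exceeds the enterprise risk appetite. A minimal sketch (the figures and department names are hypothetical; Python is used only for illustration):

```python
# Hypothetical: each department's approved risk tolerance, expressed as
# potential loss in currency units, versus the enterprise-level risk appetite.
department_tolerances = {"Sales": 400_000, "Operations": 350_000, "IT": 300_000}
enterprise_appetite = 900_000

total = sum(department_tolerances.values())   # 1,050,000
excess = total - enterprise_appetite          # 150,000

if excess > 0:
    # Each department is within its own limit, yet the organization as a
    # whole has accepted more risk than its stated appetite allows.
    print(f"Collective tolerance exceeds enterprise appetite by {excess}")
```

This is the kind of aggregation check internal auditors can perform when management has not already done so.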
Topic B: The Risk and Control Implications of Common Business Processes (Level P) Common business processes are often grouped into functional areas or departments such as human resources (HR), procurement, product development, sales, marketing, production, finance, accounting, IT, and logistics. Each business process might be managed in-house and/or out-sourced in whole or in part. Managing these processes directly and/or as out-sourced functions can carry different risk and control implications. In addition to business processes that are managed by a functional area, some business processes are handled as projects that may or may not cross functional areas. Other business processes may cross between functional areas, requiring close coordination and communication. (Note that project management in general is addressed in the next topic.) Functional areas or projects might also be differentiated as core versus non-core activities. Operations (production or service delivery), product development, sales, or perhaps logistics might be core processes, while HR, finance, and other administrative or support functions typically are designated as non-core processes. The differentiating factor is usually one of competitive advantage. If the organization determines that a business process is capable of providing a competitive advantage, it will typically retain this process in-house because it can provide these functions at lower cost and/or higher quality (i.e., better value) than if they were out-sourced. Conversely, the organization may or may not out-source part or all of its non-core processes, depending on the best overall value. A vendor that provides out-sourced HR services would consider these services part of its own core operations, because HR services are what it sells. 
For example, the top sellers of smartphones and similar devices tend to have core processes of designing and marketing devices; they often out-source manufacturing (but may maintain close control over the manufacturing organizations). Business processes exist to support achievement of one or more business objectives. They are a grouping of sub-processes; it is important to understand why the sub-processes are grouped together in the first place
(and whether some other grouping would make more sense). The various sub-processes are all likely interlinked primarily because it creates economies of scale to plan, direct, monitor, and control them all as one unit. Logistics and supply chain management arose because new methods were needed to address a business process that crossed over multiple functional areas (procurement, warehousing, shipping and receiving, customer service, supplier relationship management, etc.). The new management model created efficiencies and a better customer experience compared with maintaining the departmental “silos” that were once the status quo. Some of the methods discussed next for evaluating business processes or specific functional areas could be used from a big-picture perspective to define engagements in the annual audit plan. Here we will assume that this work has already been done and a given functional area has been selected for an audit in the annual audit plan. Prior to delving into an audit of the area, or perhaps to add detail to the annual audit plan, the next thing to determine may be how thorough the audit should be. For example, this could be: • A routine checkup as part of an audit rotation. • An alignment review to determine how well the area aligns with organizational objectives. • A compliance review. The process explored next helps to determine the overall scope of the engagement, then involves reviewing or analyzing business process or area risks to determine which areas should receive higher priority and more audit resources. The last step of this process involves assessing whether internal controls are appropriate and effective. This topic will use HR as an example.
Understand the Business Process In order to determine the intensity level and areas of focus for an audit engagement of a functional area, internal auditors need to understand the
business process. What are the area’s objectives and how do these trace upward to the organization’s strategy, mission, and vision? What long-term strategy and annual goals were set for this business process? Auditors can start to understand strategic and annual goals by reviewing business process documentation, including plans and budgets for the area, policy and procedure manuals, job descriptions, area organizational charts, and trends in key performance indicators. Reviewing process flowcharts and related narratives is especially valuable. If a process flowchart does not exist, creating one with the help of the process owner can help the auditor understand how various parts of the process interrelate as well as the process inputs and outputs. Taking the time to do this is vital, because it can reveal where one process or sub-process interacts with or impacts other processes. Learning about process interdependencies is key to understanding the impact of various risks and the implications of a control on interrelated processes. It can also help to differentiate between key and support processes. If a key process fails to occur correctly, achievement of a specific objective could be directly and negatively impacted. Even non-core functional areas will likely have key processes, because they may support the achievement of a top-level business objective, such as procurement needing to minimize the cost of goods sold while maintaining agreed-upon quality levels for procured materials (competitive price and customer satisfaction). Note that lack of documentation for an area in question may be a risk in itself that needs to be part of engagement observations, because it can potentially negatively impact new employee orientations, leave roles and responsibilities open to interpretation, make it hard to assess area efficiency, and make risk and control assessments themselves more difficult. 
Depending on the area, documentation review may also include review of external documents. For example, management’s discussion and analysis section of the organization’s external financial statements may discuss the functional area’s objectives and key risks. A regulatory report or finding may have been issued in the past. There could be court cases or settlements to review.
For each process, internal auditors also enlist the help of the process owner to determine: • Why the process exists. • What functional area objective(s) it supports. • Whether it can be linked to achievement of overall organizational objective(s). • What policies and procedures exist to direct how people involved are supposed to act. • What its inputs and outputs are and whether these result in difficulties due to the need for cooperation and communication with other functional areas. • Whether the process provides other important benefits to management. If the process owner is having difficulty describing these elements, one way to get to the important parts of the process is to ask “What part of your job gives you the most satisfaction?” Another question to ask is “What would most endanger organizational success if it were done wrong?” The HR functional area may be a strategic partner that develops the programs and systems necessary to fulfill the organization’s mission and that plays a strong role in shaping the organization’s culture and control environment. HR objectives may include: • Developing and executing HR strategic planning that is effective in realizing the human potential required to achieve organizational strategy. • Ensuring that HR staff are appropriately skilled. • Increasing HR productivity through HR technology while securing sensitive data. • Accurately determining workforce staffing requirements. • Developing and administering effective organizational design.
• Developing and administering an effective recruitment and recruit selection process. • Developing legally defensible contractor management and use policies and processes. • Managing employee turnover and retention (churn) appropriately. • Ensuring compliance with employment regulations. • Accurately assessing training needs and administering effective new employee training, technical area training, and supervisor training. • Developing and administering a training effectiveness assessment process. And this list could go on with compensation and benefits, disciplinary processes, retirement, leave, payroll, employee and labor relations, safety and security, and out-sourcing or co-sourcing. Given an understanding of the business process, its objectives, and its subprocess interactions, the next step is to understand the current state of risks affecting the process so this can guide audit priorities.
Map and Weigh Business Process Risks Assessing risk for a business process involves harnessing the organization’s chosen risk management framework, tools, and techniques. Since the CAE is responsible for ensuring that a risk assessment is done at least annually, an overall assessment will likely exist, and this may have been the reason to include the business process in the annual audit plan in the first place. When determining the risk and control implications of a particular business process, after reviewing the applicable risk management reports, internal auditors may need to evaluate risk at a more detailed level to determine which risks are most likely to negatively impact key processes as well as to update the assessment for any changes in likelihood/impact or to identify new risks. This will help determine the depth of the engagement as well as areas that require prioritization.
The next step after revisiting risk identification and risk prioritization is to determine which risks affect which processes or sub-processes. One way to do this is to use a risk by process matrix, which lists processes or subprocesses in rows and risks in columns. Such a matrix can differentiate between key (K) and secondary (S) links between the process and the risk. There should be only a limited number of key links for a process, perhaps just one. Secondary links between objectives and risks help show how processes are interrelated and affect one another. There could be any number of secondary links. Exhibit I-8 shows an example of a risk by process matrix for the HR functional area. (Note that this matrix is abridged.) Exhibit I-8: Risk by Process Matrix for HR Functional Area (Abridged)
For HR out-sourcing or co-sourcing, the objectives are to develop and
administer appropriate service provider selection and management (this may be called vendor due diligence) and to provide effective change management for the transition period toward the new sourcing model. Key risks for this may include underestimating the time needed for the transition due to the complexity of the process, underestimating organizational resistance to change, HR technology incompatibility, and information security breaches. Other tools may be used to assess and prioritize risks at this point. The final major step in a business process assessment is to determine whether internal controls adequately address the identified and prioritized risks from a design-level perspective.
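A risk by process matrix of the kind described above can be sketched as a nested mapping; the processes, risks, and link designations below are hypothetical, for illustration only:

```python
# Hypothetical, abridged risk-by-process matrix for an HR area.
# Rows are processes; columns are risks; "K" = key link, "S" = secondary link.
matrix = {
    "Recruitment":   {"Noncompliance with employment law": "K",
                      "Poor hiring decisions": "S"},
    "Training":      {"Skills gaps persist": "K"},
    "HR technology": {"Sensitive-data breach": "K",
                      "Skills gaps persist": "S"},
}

def key_links(process):
    """Key risk(s) for a process; a well-formed row has few, perhaps just one."""
    return [risk for risk, link in matrix[process].items() if link == "K"]

def processes_exposed_to(risk):
    """All processes linked (key or secondary) to a risk, showing how
    processes are interrelated and affect one another."""
    return [p for p, risks in matrix.items() if risk in risks]

print(key_links("Recruitment"))               # ['Noncompliance with employment law']
print(processes_exposed_to("Skills gaps persist"))
```

Filtering by link type mirrors how auditors prioritize: key links drive engagement depth, while secondary links reveal interdependencies.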
Assess Internal Controls for Risks One way to assess internal controls against identified risks is to create a risk impact and control matrix. This type of matrix lists each objective and the key risk that might negatively impact achieving that objective. It has columns for assessments of probability and impact, a column for the relevant activity that is performed to implement the objective, and a final column for controls. This could be a listing of controls that exist or of typical controls for the objective. Exhibit I-9 shows an abridged risk impact and control matrix for the HR functional area. It lists the controls that exist as well as ones that might be recommended.
Exhibit I-9: Risk Impact and Control Matrix (HR Example, Abridged)

Objective: Effective HR strategic plans
Key risk: HR strategic plans nonexistent/deficient.
Probability: Low, but will grow over time (see needed controls). Impact: High.
Activity: HR program creation.
Existing controls:
• Strategy linked to organizational strategy, consistent with culture.
• HR operational plan outlines programs, staff, and time lines.
Needed controls:
• Ongoing HR area assessments.
• Monitor legislative changes and alter plans.

Objective: Skillful HR staff
Key risk: HR staff lack appropriate skills, risking noncompliance with employment law.
Probability: Medium. Impact: Medium.
Activity: Recruit and select HR staff.
Existing controls:
• Clear HR position descriptions, tasks, authorities, and competencies.
• Education, experience, and continuing education requirements are adhered to.
Needed controls:
• HR staff encouraged to get HR professional certification (PHR, SPHR, CCP, or CEBS).
• HR staff compensation reflects desired service quality.

Objective: HR technology that enables productivity while controlling sensitive data
Key risk: HR technology privacy risks: legal, financial, and/or employee dissatisfaction and loss of productivity/reputation damage.
Probability: Medium. Impact: High.
Activity: HR staff recruitment and recruit selection.
Existing controls:
• Employee information safeguards exist.
• HR technology system security exists.
Needed controls:
• HR staff training on social engineering scams.

Objective: Effective staffing needs assessment
Key risk: Wrong number of workers are identified, risking unnecessary expense, incorrectly balanced roles, or poor productivity.
Probability: Low now, could grow. Impact: Medium.
Activity: Workforce needs identification process.
Existing controls:
• Workforce plan is linked to organizational strategy and mission.
• HR forecast of number of workers needed per position.
Needed controls:
• Gap analysis of current versus future workforce profile.
• Link staffing forecast to training plans in addition to recruitment.
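Rows of a matrix like Exhibit I-9 can also be represented as simple records so that gaps can be filtered programmatically, e.g., high-impact risks that still have recommended ("needed") controls not yet in place. A sketch with hypothetical, abridged entries:

```python
# Hypothetical rows in the style of a risk impact and control matrix.
rows = [
    {"objective": "Effective HR strategic plans",
     "probability": "Low", "impact": "High",
     "existing": ["Strategy linked to organizational strategy"],
     "needed":   ["Ongoing HR area assessments"]},
    {"objective": "Skillful HR staff",
     "probability": "Medium", "impact": "Medium",
     "existing": ["Clear HR position descriptions"],
     "needed":   ["Encourage professional certification"]},
]

# Flag rows where a high-impact risk still has recommended controls that are
# not yet in place -- candidates for priority audit findings.
gaps = [r["objective"] for r in rows if r["impact"] == "High" and r["needed"]]
print(gaps)  # ['Effective HR strategic plans']
```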
This matrix would continue for each of the area's many objectives. Note that the above matrix was inspired by the Sample HR Risk Impact and Control Matrix that appears as an appendix to The IIA Research Foundation's Auditing Human Resources, 2nd edition, by Kelli Vito. See that publication for more information.

For the out-sourcing or co-sourcing of a business process or a functional area, controls may include the following:
• Statements of work in the request for proposal (RFP) accurately describe scope and scope limitations.
• The process owner and other stakeholders such as budget analysts are involved in RFP creation.
• Bids are evaluated for both best value and service provider competency.
• Sole-source contracts are justified, if used, and the selected provider is capable of providing the full range of services.
• Service provider selection uses an adequate due diligence process, including checking of references.
• The process owner reviews future workforce needs to ensure that the service provider is capable of scaling up to meet future demand.
• Contract negotiations gain agreement on appropriate incentives, penalties, and the definition of specific services to provide in a service level agreement (SLA).
• The service provider contract has appropriate clauses, including a definition of nonperformance, the means of correcting deficiencies, and when and how the contract can be voided by either party.

In addition to determining if existing controls adequately address the prioritized list of risks, internal auditors may need to determine control effectiveness. A risk control map, with risk significance on one axis and control effectiveness on the other axis, can be created to determine which controls may need improving and in what priority. Such a map or other analysis might also identify if a business process has too many controls (i.e., too many controls over low-impact or low-probability risks). If so, the process might be made more efficient by eliminating some unnecessary controls. Reviews such as these may be especially needed during times of change for the business process. Out-sourcing or co-sourcing is one example, but rapid growth or downsizing, implementation of new technology for the area, new regulations, or changes in cultural expectations for the process or area are other examples.
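The prioritization logic behind a risk control map can be sketched in a few lines of code. Everything here is illustrative: the 1-to-5 scoring scale, the threshold, the quadrant labels, and the sample risks are hypothetical, not drawn from the text above.

```python
# Hypothetical sketch: place each risk on a risk control map by comparing
# risk significance against the effectiveness of its existing controls.
def map_quadrant(risk_significance, control_effectiveness, threshold=3):
    """Both inputs scored 1 (low) to 5 (high); returns a suggested action."""
    high_risk = risk_significance >= threshold
    strong_control = control_effectiveness >= threshold
    if high_risk and not strong_control:
        return "Improve controls (top priority)"
    if high_risk and strong_control:
        return "Monitor controls"
    if not high_risk and strong_control:
        return "Possibly over-controlled; review for efficiency"
    return "Accept / low priority"

register = {  # risk: (significance, control effectiveness) -- sample scores
    "Key-person dependency": (5, 2),
    "Vendor nonperformance": (4, 4),
    "Petty cash misuse": (1, 5),
}
for risk, (sig, eff) in register.items():
    print(f"{risk}: {map_quadrant(sig, eff)}")
```

Note how the low-significance, strongly controlled risk surfaces as a candidate for streamlining, matching the point above about processes with too many controls.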
Topic C: Project Management (Level B)

Project management is the process of planning, organizing, directing, and controlling an organization's resources (people, equipment, time, and money) for a temporary endeavor so that project objectives can be met within defined scope, time, and cost constraints. Internal auditors typically have excellent project management skills, since both assurance and consulting engagements are examples of projects. It is therefore incumbent upon newer internal auditors to acquire these skills and upon all internal auditors to continue developing them.

Why use project management techniques? It takes time to set up a project and develop a project plan. Why not just get started on all the work that needs to be done? Project management requires much up-front work to define the problem that needs to be solved and then form a plan to solve it. However, without such a plan, the total effort (including cost) and the project duration may end up being far greater overall because of problems such as scope creep and/or rework.

Scope creep occurs when project objectives are extended by external influences, resulting in unplanned additions to a project's scope or to its time, cost, and quality constraints. It is a common cause of missed deadlines, missed budgets, and unnecessary project features. While project change is necessary to keep the project responsive to changes in the situation and environment, such changes must be controlled using the project objectives as a gatekeeper. Rework may also be needed because the wrong tasks (i.e., audit tests) were performed. Exhibit I-10 shows how more up-front "pain" or effort can reduce total effort, thus reducing the risk of failing to achieve project goals.

Exhibit I-10: More Up-Front Planning Effort Reduces Total Effort Required
Project Management Basics

The basic challenges of successful project management include delivering a project:
• That maintains consistent alignment with project goals and objectives.
• Within defined constraints.
• At a desired performance/quality level.
• By effectively optimizing allocation and integration of the inputs needed to meet the predefined objectives.

Projects can vary in duration and complexity, but the majority of projects share the characteristics listed below:
• A project is a series of tasks and activities with a stated goal and objective.
• It fulfills some need or requirement in an organization.
• It has objectives that outline a path for achieving the goal.
• It has a defined start date, time line, and target completion date.
• It has funding or budget limits and dedicated resources (which also include materials, energy, space, provisions, communication, quality, risk, etc.).
Project Life Cycle

Most projects cycle through similar stages from beginning to end. Although the terms and specifics of the cycles vary from industry to industry, they generally include these stages:
• Conception or project initiation is where the project is born and the project goals and objectives are established. Stakeholder expectations must be clearly identified. It is vital to obtain support from senior management at this stage. During this stage, the nature and scope of the project are determined in a project charter and the project manager and project team are selected.
• The planning, design, and scheduling stage is where the project schedule is outlined and resources are assigned.
• The execution and production stage is when the work takes place.
• During monitoring and control, the project manager is responsible for overseeing the quality of the work being produced, the progress against the schedule, and the use of resources necessary to complete the project. Project control systems keep a project on track, on time, and within budget. Each project is assessed for the appropriate level of control needed. Internal auditors can help determine how important specific projects are to an organization's bottom line, the types of controls that exist, and any additional controls necessary.
• The completion and evaluation stage typically involves some culminating event, for example, the launching of a new line of software. Evaluation often includes assessing the project's effectiveness at the end of the process. Administrative activities include archiving files and documenting the lessons learned.

Exhibit I-11 shows the project life cycle and the tasks associated with each phase.

Exhibit I-11: Project Life Cycle

Conception or project initiation:
• Analyze project and spell out organizational needs in measurable goals.
• Develop project charter, including costs, objectives, tasks, deliverables, and schedules.
• Gain approval for the project charter and acquire funding.
• Conduct review of current operations.
• Complete conceptual design of finished project.
• Prepare financial analysis, cost and benefits, budget.
• Prepare list of assumptions, risks, and obstacles.
• Select stakeholders, including users and support personnel, and develop an understanding of their expectations.

Planning, design, and scheduling:
• Define work requirements.
• Establish basis for performance measurement.
• Determine quantity and quality of work.
• Determine and allocate resources needed.
• Establish major timetable milestones.
• Define deliverables (can include feasibility study, scope statement, project plan, communications plan, issue log, resource management plan, project schedule, status report).
• Generate a project management plan and get formal approval for it, including approval for the required resources.

Execution and production:
• Launch the project management plan.
• Confirm availability of adequate and appropriate project resources.
• Document work teams.
• Teams do work, provide status updates, and produce deliverables.
• Project managers lead, direct, and control.
• Managers and stakeholders receive progress reports and review action plans for correcting differences between plan and actual.

Monitoring and control:
• Track progress, especially during execution but also during planning.
• Compare actual and predicted outcomes.
• Analyze impact.
• Make adjustments to meet project objectives and acceptance criteria.

Completion and evaluation:
• Obtain client acceptance based on acceptance criteria.
• Issue final project report and communicate lessons learned.
• Install project deliverables.
• Complete project documentation such as lessons learned.
• Complete evaluation/post-implementation audit such as measuring stakeholder satisfaction.
Project Teams

Project plans and their execution are only as successful as the manager and the team who implement them. Building effective teams is critical to the success of any project. Projects commonly include the following roles and team members:
• Project stakeholders are individuals and organizations (both internal and external) who are actively involved in the project or whose interests may be affected as a result of project execution or completion. Key stakeholders can include the project manager, the customer or end user (e.g., the board for internal audit projects), the people executing the project, and many others.
• The project sponsor is the person or group who wants the project to occur, who champions support for the project, and who commits the necessary financial resources, in cash or in kind, for the project.
• The project manager is the leader of the project. He or she is responsible for coordinating and integrating activities across multiple functional lines in order to reduce the risk of overall failure or scope creep. A project manager is often a client representative who must determine and implement the client's needs.
• The project team is the core group of people who come together for a specific project and then disband when the project is over.
Constraints

Projects need to be performed and delivered under what has traditionally been known as the "project management triangle," as shown in Exhibit I-12. One side of the triangle cannot be changed without impacting the others. As continuous quality and performance initiatives like TQM have become increasingly important in performance management, quality and performance are sometimes separated from scope, turning quality into a fourth constraint.

Exhibit I-12: Project Management Triangle
• Time is the amount of time available to complete the project. It is broken down into the time required to complete each component of the project, which is then broken down further into the time required to complete each task that contributes to the completion of each component.
• Cost refers to the budgeted amount available for the project. It depends on variables such as labor rates, material rates, risk management, plant, consultant rates, equipment, and profit.
• Quality and performance of the final product are major components of scope. The amount of time put into individual tasks and the amount of cost expended on resources influence the overall quality of the results. Over the course of a large project, meeting a defined quality level can have a significant impact on time and cost. Often, organizations define what quality should be from the start, thus fixing the size of this side of the triangle and requiring juggling of the other constraints to meet this requirement as defined by customer acceptance criteria.
• Scope means what must be done to produce the project's end result. It is sometimes represented as the area of the triangle to show that scope is strongly affected by the time, cost, and quality inputs. This is the overall definition of what the project is supposed to accomplish and a specific description of what the end result is supposed to be or accomplish.

These constraints often compete with each other. Increased scope or quality typically means increased time and increased cost. A tight time constraint might mean increased costs and reduced scope. A tight budget can mean increased time and reduced scope. Quality project management is about providing the tools and techniques that enable the entire project team to organize their work and meet these constraints.
Project Management Techniques

Project managers and their team members can use a variety of tools and techniques to plan, schedule, and manage their projects. Tools commonly associated with project management include Gantt charts and two types of network analysis: the critical path method and the program evaluation review technique.

The essential concept behind these tools is that during a project, some activities, known as "sequential" or "linear" activities, need to be completed in a particular sequence, with each stage being completed before the next activity or task can begin. Other activities are not dependent on the completion of any other tasks and can be completed at any stage during the time line. These are known as nondependent or "parallel" tasks.

In addition to these planning and schedule management tools, two other essential project management techniques are the project budget for budget planning and control and change management to control the scope of a project. The project budget can be used as a baseline against which variances from intended project costs can be measured. Because it is similar to other budgets that are discussed elsewhere in these materials, it is not discussed further here. The change management process is discussed at the end of this topic.
Gantt Chart

The Gantt chart (also known as a horizontal bar chart, a milestone chart, or an activity chart) is a project scheduling technique that divides each project into sequential activities with estimated start and completion times. It allows the decision maker to visually review a schematic presentation of the project time budget and compare it with the actual times.

To create a Gantt chart, the project manager plots the steps of the project and their sequence and duration. The list includes the earliest start date for each task, the estimated length of time it will take, and whether it is parallel or sequential. This forms the basis of the scheduling chart shown in Exhibit I-13. A Gantt chart's simplicity allows for easy schedule modifications.

Exhibit I-13: Gantt Chart
A Gantt chart:
• Helps plan tasks that need to be completed.
• Provides a basis for scheduling when tasks will be executed.
• Helps plan the allocation of resources necessary to complete the project.
• Helps determine the critical path for a project that needs to be completed by a specific date.
• Is appropriate for internal audit scheduling because the audit process does not often require sequence revisions.
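The scheduling step behind a Gantt chart can be sketched as follows. The task names, durations, and dependencies are hypothetical; the point is that a sequential task starts when its latest predecessor finishes, while a parallel task can start at day 0.

```python
# Each task lists (duration in days, [predecessor tasks]); predecessors
# appear in the dict before their successors.
tasks = {
    "Plan audit":    (3, []),
    "Fieldwork":     (10, ["Plan audit"]),
    "Draft report":  (4, ["Fieldwork"]),
    "Client survey": (5, []),                 # parallel task
    "Final report":  (2, ["Draft report", "Client survey"]),
}

start, finish = {}, {}
for name, (dur, preds) in tasks.items():
    start[name] = max((finish[p] for p in preds), default=0)
    finish[name] = start[name] + dur

for name, (dur, _) in tasks.items():          # crude text Gantt chart
    print(f"{name:13s} day {start[name]:2d}  {' ' * start[name]}{'#' * dur}")
```

Bars for "Plan audit" and "Client survey" both begin at day 0, showing the parallel tasks the text describes.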
Network Analysis (CPM/PERT)

A project network is the graphical representation of a project's tasks and schedule. Network analysis involves evaluating the network of tasks and functions that contribute to a project in order to determine the most efficient path for reaching the project goals. It can help project managers carry out their scheduling activities during projects that consist of many separate jobs or tasks performed by a variety of departments and individuals. It can also help identify possible ways to revise or shorten the sequence of activities to expedite the project and/or lower costs. Network analysis computer programs can help complete project scheduling, including tracking resource costs and usage. In industries like construction and aircraft manufacturing, an understanding of networks is critical to an internal auditor.

Although developed independently, two of the most common types of network analysis, the critical path method (CPM) and the program evaluation review technique (PERT), are so similar as to be nearly synonymous. This type of network analysis is now often referred to as some variant of PERT/CPM. These methods are used to schedule, organize, and coordinate tasks, generally for large, complex projects with a high degree of inter-task dependency. Internal auditors may be called on to use these tools in evaluating efficiency and adherence to an organization's policies and procedures.

A PERT/CPM chart illustrates a project flow graphically. A number of circles or rectangles represent project milestones that are linked by arrows that indicate the sequence of tasks. Constructing a PERT/CPM network requires three inputs: the tasks necessary to complete the project, the time required to complete these tasks, and their sequence (i.e., the degree to which one task's completion depends on the completion of a separate task).

The goal of the PERT/CPM chart is to identify the critical path—the sequence of tasks that will take the longest time to complete, without any slack time between activities. All of the activities on the critical path must be completed in order; a delay in any activity will delay the entire project. Tasks that are not dependent on any other tasks, which can be completed simultaneously with other tasks, are referred to as parallel or concurrent tasks.
Generally, the critical path is defined as the path for which the earliest start time (ES) equals the latest start time (LS) and the earliest finish time (EF) equals the latest finish time (LF), where:
• ES is the soonest an activity can start after any necessary preceding steps that must be finished first.
• EF is the ES plus the time needed to finish the activity.
• LF is the latest an activity can finish without delaying the project.
• LS is the LF less the time needed to finish the activity.

Exhibit I-14 shows an example of a PERT/CPM chart.

Exhibit I-14: PERT/CPM Chart
Source: Sawyer’s Internal Auditing, fifth edition, by Lawrence B. Sawyer, et al. Used with permission.
In Exhibit I-14, there are five possible paths to reach the project endpoint (7):
• 1-2-4-7 (98 days)
• 1-2-3-5-7 (100 days)
• 1-2-4-5-7 (108 days)
• 1-3-5-7 (102 days)
• 1-6-7 (92 days)

Path 1-2-4-5-7, requiring 108 days, is the critical path. It includes all the required activities, in the necessary sequence of completion, without slack time. Activities B and D (which end in node 3) and activity C (which ends in node 6) have slack and could be delayed or their durations extended without affecting the total project duration—up to a point.
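The path arithmetic above can be verified with a simple forward/backward pass. Since the text gives only the five path totals, the individual activity durations below are hypothetical values chosen to reproduce those totals (for example, 20 + 30 + 48 = 98 days for path 1-2-4-7).

```python
# Critical path via forward pass (earliest times) and backward pass
# (latest times). Node numbers already follow the task sequence, so
# sorting the edges processes them in a valid order.
activities = {  # (from_node, to_node): duration in days (hypothetical)
    (1, 2): 20, (2, 4): 30, (4, 7): 48,
    (2, 3): 22, (3, 5): 30, (5, 7): 28,
    (4, 5): 30, (1, 3): 44, (1, 6): 40, (6, 7): 52,
}
nodes = sorted({n for edge in activities for n in edge})

es = {n: 0 for n in nodes}                      # earliest start per node
for (i, j), d in sorted(activities.items()):    # forward pass
    es[j] = max(es[j], es[i] + d)

lf = {n: es[max(nodes)] for n in nodes}         # latest finish per node
for (i, j), d in sorted(activities.items(), reverse=True):  # backward pass
    lf[i] = min(lf[i], lf[j] - d)

# An activity is critical when it has zero slack.
critical = sorted(e for e, d in activities.items() if lf[e[1]] - es[e[0]] - d == 0)
print("Project duration:", es[max(nodes)])      # 108 days
print("Critical activities:", critical)         # the 1-2-4-5-7 path
```

With these durations the zero-slack activities are (1,2), (2,4), (4,5), and (5,7), reproducing the 108-day critical path 1-2-4-5-7.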
Due to unexpected delays or tight deadlines, a project manager can use PERT/CPM charts to help identify ways to shorten a project's time line. The project manager can:
• Allow for lead time. Lead time is when a scheduled task begins before its predecessor task is completed. For example, the original time line for an advertising brochure may call for the graphics to be completed after the writer finishes the first draft. However, if the illustrator receives the list of necessary graphics two weeks prior to the first draft completion date, the illustrator will have two weeks' lead time to finish the graphics and will be able to deliver them at the same time the writer completes the first draft.
• Identify slack time. Slack time is the amount of additional time that an activity can consume without delaying the project past the expected completion date. Slack is the difference between the earliest expected time and the latest allowable time for each task. By definition, all activities in the critical path have a slack of zero. But other activities not in the critical path will often have slack. In our brochure example, the marketing activities are ancillary to the critical path activities. This means that there is slack in the start date for the marketing activities.
• Assign additional resources. Depending on the project, it may be possible to increase the resources committed to a task on the critical path. Assigning two people to write the first draft of the advertising brochure could cut the writing time in half (assuming no learning curve). The process of adding resources to shorten the length of a task on the critical path is called "crashing." The length of the project could also be shortened by "fast tracking," or performing certain tasks simultaneously.
• Schedule overtime. Any of the tasks may be shortened by scheduling project members for overtime.

If the critical path is shortened, a different sequence of tasks could become the new critical path.
PERT/CPM offers several benefits:
• They identify and prioritize tasks that must be completed on time for the whole project to be completed on time.
• They identify sequential and parallel tasks.
• They identify which tasks can be delayed or accelerated without jeopardizing the overall timing of a project.
• They assess the shortest time in which a project can be completed.
• They form the basis for all planning and predicting.
• They help in scheduling and managing complex projects.
• They provide management with the ability to plan for the best possible use of resources to achieve a given goal within time and cost limitations.

These methods also have disadvantages:
• They do not make the relation of tasks to one another as obvious as in Gantt charts. (Gantt charts may still be necessary with CPM/PERT.)
• They can help a project manager determine only an approximation of project scheduling. There are a number of uncontrollable unknowns that can impact a schedule, such as delays in the availability of critical resources.

PERT and CPM are very similar. However, there are a few key differences:
• PERT is a variation of CPM that takes a slightly more conservative view of time estimates for each project stage.
• PERT was developed to address projects with uncertain task times; it allows task times to be forecast based on a range of possible values, from worst-case to best-case scenarios.
• PERT is appropriate in projects that involve new or unique situations, where task times cannot be accurately forecast.
• CPM was developed for factory-type projects where task time is already known.
• CPM is able to relate costs to rewards because task times are known. Rewards for shortening the completion time of a contract, for example, may be substantial. In return, the costs associated with moving up the completion date (additional resources, overtime pay, etc.) can be tracked.
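The "range of possible values" that PERT uses is conventionally handled with three estimates per task. The (O + 4M + P) / 6 weighting below is the classic PERT formula, though it is not spelled out in the text above, and the sample durations are hypothetical.

```python
# Classic PERT three-point estimate: optimistic (O), most likely (M),
# and pessimistic (P) durations combined with a beta-distribution weighting.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6   # common rule of thumb
    return expected, std_dev

# Hypothetical task: best case 4 days, most likely 6, worst case 14.
exp, sd = pert_estimate(4, 6, 14)
print(f"Expected duration: {exp:.1f} days (std. dev. {sd:.2f})")   # 7.0 days
```

The pessimistic tail pulls the expected duration (7.0 days) above the most likely estimate (6 days), reflecting the "slightly more conservative view" of time estimates mentioned above.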
Scope Control: Change Management

While schedules and budgets can be used as baselines against which to measure variances and exercise cost and time control, an additional tool is needed to ensure that the project remains on scope. Serious problems can occur if stakeholders are allowed to add requirements to a project without also providing additional money and time (or additional human and material resources) to get the extra work done. This scope creep (called gold plating when staff add to the scope without authority) has caused numerous projects to fail. Adding to the scope not only consumes staff time and other resources, but it also throws schedules and plans into confusion because people are working on things that are not even in the schedules or plans.

The way to prevent scope creep/gold plating is to create and enforce a disciplined change management process. All stakeholders need to be informed in advance of the process that is required for requesting changes to the scope as agreed upon and evidenced by the signatures on the project charter. Project team members need to be trained to avoid doing more work than is in the plan, because the client may not even appreciate this work and the organization will definitely not appreciate the project going off schedule or off budget for unnecessary or avoidable reasons.

A formal change management process (also called change control) involves the project manager or a change control board for the project first assessing the technical merits of a proposed change (including how it impacts any interrelated components) and then assessing the impact of the change on the schedule, budget, or other constraints such as quality. If the change is deemed to have technical merit, the project manager must insist on the project sponsor approving additional resources as needed to make the change. If the additional resources are not provided, the project manager should reject the change.
Project managers might create a list or “parking lot” for requested changes to be considered later or included in a future project.
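The decision sequence of the change control process described above can be sketched as follows. The field names and the simple yes/no logic are hypothetical; a real change control board would apply its own criteria and documentation requirements.

```python
# Sketch of change control: assess technical merit first, then resources;
# unfunded but worthwhile requests can go to the "parking lot".
def evaluate_change(request):
    if not request["has_technical_merit"]:
        return "reject"
    if request["sponsor_approved_resources"]:
        return "approve"            # added work is funded, so scope can grow
    if request["defer_candidate"]:
        return "parking lot"        # revisit in a future project
    return "reject"                 # merit alone is not enough

request = {
    "has_technical_merit": True,
    "sponsor_approved_resources": False,   # sponsor declined extra resources
    "defer_candidate": True,
}
print(evaluate_change(request))            # parking lot
```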
Topic D: Forms and Elements of Contracts (Level B)

Internal auditors may need to provide assurance or consulting in relation to external business relationships (EBRs), which are often formalized using contracts. Audits of contracts regulating EBRs are often called contract audits. Learning about the various forms and elements that contracts may include will provide internal auditors with the knowledge needed to determine whether the contract is the most appropriate type for the given relationship and situation and whether the details are appropriate, complete, and correct.

A contract is a legally binding written or verbal agreement between two or more competent parties that provides legal recourse if the terms, conditions, responsibilities, or scope of work defined and agreed to in the contract fail to be performed or complied with by any party to the contract. There are numerous varieties of contracts, such as purchase orders, sales orders, labor agreements, and licensing agreements. Often, severe penalties are enforced if one or more parties fail to perform their responsibilities. Having appropriate internal controls in place to ensure that all parties maintain compliance with the stated provisions of contracts is a consideration for an internal audit activity when establishing its audit universe for purposes of risk assessment and annual planning.

Contracts identify the rights and obligations of all parties, along with the consequences of noncompliance should terms and/or conditions be breached. As an internal auditor, it is important to understand the ramifications associated with each contract and to discern which contracts pose significant risks to the organization's ability to achieve its objectives. This will help determine which contracts should receive priority assurance emphasis by internal audit.
Note that some of the content that follows is reproduced from the Part 2 materials in this learning system (from the topic on types of assurance engagements). The information presented here on categories of contracts is not found in Part 2.
Contract-Specific Risks

A major risk of contracts is the risk of lawsuits related to perceived contract breach on the part of one party or the other. Major misunderstandings occur when contracts are worded in a way that allows product or service requirements to be interpreted differently by different parties. Lawsuits are expensive, and even a successful outcome may be more costly than the benefit gained. They can also result in significant delays or damage to reputation.

Contracts are classified in a variety of ways; the following classifications can be used to describe some inherent risks:
• Express and implied. An express contract is one in which the terms are expressed verbally, either orally or in writing. Implied contracts are not expressed in words. An informal verbal agreement can be as binding and legally valid as a written contract. The risk is that an organization can be found to have unwittingly entered into an express or implied contract.
• Bilateral and unilateral. A bilateral contract is most common, and it is one in which both parties make a promise. In unilateral contracts, one party makes a promise (such as an insurance or reward contract). Risks involve being liable for the performance of promised work that is more costly than the agreed-upon payment or that cannot be supplied, such as in the case of a disaster; receiving products or services of unacceptable quality; or the other party defaulting on or delaying delivery or payment.
• Void, voidable, and unenforceable. Void contracts are considered never to have come into existence (such as being based on an illegal purpose). A voidable contract is one in which one of the parties has the option to terminate the contract (such as a contract with a minor). An unenforceable contract is one in which neither party may enforce the other's obligations (if it violates the statute of frauds, for example). The risks here involve developing a contract that is void or unenforceable. One control for this risk is to include contract language to the effect that if one element is found to be unenforceable, the rest of the contract remains in force. (Legal wording will differ.) Voidable contracts should be entered into knowingly and willingly rather than through a loophole.

Other risks specific to particular contract types are discussed later in this topic.

A valid contract typically requires the following elements:
• Mutual agreement—There must be an express or implied agreement with evidence that the parties understand and agree to the details, rights, and obligations of the contract.
• Consideration—Each party exchanges something of value (cash, goods, or a promise to do something).
• Competent parties—The parties must have the capacity to understand the terms of the contract. Minors and mentally disabled people do not have this capacity.
• Proper subject matter—The contract must have a lawful purpose.
• Mutual right to remedy—Both parties must have an equal right to remedy a breach of terms by the other party.

While even a verbal contract can be enforceable, ensuring that these elements exist reduces the risk of a contract being successfully contested.
Categories of Contracts

Contracts regulate the day-to-day activities of external business relationships. They are the means of describing, identifying, and assigning both the responsibilities and the risks to all parties. The main categories of contracts identified here include product contracts, services contracts, solutions contracts, turnkey contracts, and out-sourcing.
Product Contracts

Product contracts are contracts for the sale or purchase of products. These contracts frequently have simpler concerns than service contracts or contracts that encompass both products and services. Contracts to purchase products should be written to limit potential disputes and to manage the consequences of potential risks. Additionally, product contracts should clearly identify delivery terms that suit the business purpose and should set boundaries around warranties and performance requirements.

Product contracts also assist in identifying guidelines for intellectual property (inventions, patents, computer programs, product and service names, technical and business information, logos, artwork, geographic indication of source, industrial design, and the like). Intellectual property needs to be protected just as much as physical property does; however, intellectual property can be a difficult and sensitive topic.

The termination and remediation of product contracts are often discussed together, since there are common remedies implemented when a contract is terminated for cause. Termination provisions are significant and may require more detail than is often included; alternately, they could include details the organization would find undesirable.

Import and export contracts are unique in that they must incorporate regulations from multiple regions or countries. Additionally, multiple entities, including customers and suppliers, add a level of complexity for issues such as payment of import and export duties, taxes, clearance documentation, licenses, and permits.
Services Contracts

Services contracts can be relatively brief and, at a minimum, should identify who within the supplier organization is providing the services. Additionally, services contracts should include the scope of work to be performed and the development process, including design, development, installation and testing, and processes around excusable delays and recoverable damages. A significant issue can be limitations on liability (and the impact these could create); buyers and sellers often have radically different views of liability, and coming to a mutually agreeable balance can be challenging. Finally, milestones and methods of payment should be included in services contracts.
Solutions Contracts

A solutions contract implies a higher standard of deliverables than a product or service contract. Solutions contracts differ from other types of contracts in that the buyer is also purchasing the seller's expertise in the areas of needs analysis, design, engineering, or consulting. The buyer expects guidance and troubleshooting capabilities to be included in what they are purchasing. Solutions contracts can include, but are not limited to, systems or networking integration or management; optimizing and managing customer resources, facilities, or networks; or the implementation and operation of one or more management systems, such as marketing, billing, distribution, or inventory. These contracts require more of a partnership between buyer and seller, along with a more seamless flow of information between the organizations to effectively realize a solution.
Turnkey Contracts

As the name implies, with a turnkey contract the buyer needs only to "turn the key" to implement the contract. The integral elements of a turnkey contract include the terms and conditions for system or facility acceptance and how and when payments will be made. A turnkey contract must also identify the requirements for a system or facility to be considered satisfactorily operational, along with the functionality of the system or facility needed for the buyer to take ownership. Once the buyer is satisfied, the seller can be paid.
Out-Sourcing Out-sourcing functions or activities has become increasingly common since the end of the 20th century. Out-sourcing provides the opportunity to shift performance responsibilities and the day-to-day operational controls and expenses from an entity to a supplier or vendor who will be responsible for those services for a set amount of time. While the supplier or vendor has responsibility for the performance of agreed-upon controls and procedures, managers in the organization receiving the services still retain overall responsibility for the quality and effectiveness of the controls as well as for supplier relationship management. Though out-sourcing had its origins in the IT arena, it is now being used in areas such as logistics, human
resources, building facilities management, and back-office accounting services. Advantages of out-sourcing include leveraging scale and capabilities, reducing the risks of managing complex relationships, and establishing an identified source of investment, ideas, and energy. Disadvantages of out-sourcing include loss of control, increased costs, lack of flexibility in addressing change within the source organization, dissatisfied customers, extensive disputes and lengthy resolution processes, loss of data and in-house skills, and the potential lack of a suitable remedy in the event of a failure. Out-sourcing contracts also introduce a range of issues that are not as common to product, services, or solutions contracts. These issues include, but are not limited to, labor laws, data ownership, corporate policy and procedure adherence, long-term change management, data protection and security laws, and compliance with government regulations in all countries of operation.
Types of Contracts
Evaluating the soundness of contracts from cost and contract compliance standpoints is an increasingly important aspect of an internal auditor’s job. Choosing the appropriate contract type helps ensure that an organization meets its strategic objectives and avoids the risks associated with excessive costs, project delays, and quality issues. Several common contract types are discussed next.
Fixed-Price (Lump-Sum) Contracts
A fixed-price contract (lump-sum contract) requires a contractor to successfully perform the contract and deliver supplies or services for a price agreed to up front. A firm fixed-price contract is appropriate when goods/services can be described in sufficient detail to ensure that both parties fully understand the contract requirements and inherent performance risks. Fixed-price contracts often include methods of reducing risk:
• Economic price adjustment factors to allow for volatile market prices
• Escalation clauses to increase prices per a schedule or against an economic index
• Re-pricing provisions to permit fixed-price orders but with later reasonableness checks
• Incentives for good performance or penalties for poor performance
• A specified level of effort
These contracts are commonly used when the work required is uncomplicated. If a contract is completed as agreed upon, there is little reason to audit it. If the scope changes and additional expenses arise while the work is occurring (the contract must specify who bears the risk of these additional expenses), an audit may be warranted. The major risk of using fixed-price contracts is receiving inferior-quality goods or services. Expectations of quality need to be explicit in the form of acceptance criteria or specific materials to be used, or the contractor could substitute materials of lower quality. Fixed-price contract audit review areas also include:
• Inadequate insurance and bond coverage.
• Charges for equipment not received or activities not completed.
• Escalation clauses or re-pricing provisions.
• Authorization for extras, revisions, or change orders.
• Overhead expenses charged separately.
• Certification of completion before work has actually been completed.
• Inadequate inspection relative to specifications or inadequate completion.
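As an illustration of the escalation-clause mechanism noted above, the following Python sketch adjusts a contract price against an economic index. The index values, the 10 percent cap, and the function name are hypothetical assumptions, not a prescribed method.

```python
def escalated_price(base_price, base_index, current_index, cap=None):
    """Adjust a fixed contract price against an economic index (e.g., a
    producer price index); an optional cap limits total escalation, as
    many contracts require."""
    adjusted = base_price * (current_index / base_index)
    if cap is not None:
        adjusted = min(adjusted, base_price * (1 + cap))
    return round(adjusted, 2)

# Hypothetical: a $100,000 contract priced when the index stood at 250.0,
# re-priced when the index reaches 262.5 (a 5% rise), with a 10% cap.
print(escalated_price(100_000, 250.0, 262.5, cap=0.10))  # 105000.0
```

An auditor reviewing an escalation clause would recompute the adjusted price this way and compare it to what was actually billed.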
Cost Reimbursement (Cost-Plus) Contracts
A cost reimbursement contract (or cost-plus contract) is an economical way of handling pricing difficulties when there are numerous unknown factors. It is appropriate when the uncertainties of performance will not permit a fixed price to be estimated with sufficient accuracy. In a cost reimbursement contract, the contractor is reimbursed for allowable costs incurred plus a fee, either a fixed fee or a fee based on a percentage of costs. In the latter arrangement, a risk is that the contractor has an incentive to escalate costs. This type of contract places the least cost and performance risk on the contractor and requires the contractor’s “best efforts” to complete the contract. Significant risks of cost reimbursement contracts include being charged over the market value or for goods that were not actually delivered. A common control for these risks is to set a predetermined ceiling on costs and to require that costs allocated to the contract be allowable within cost standards and reasonable. This cost ceiling is a key control that should be audited to ensure that such contracts cannot be used to overcharge or underdeliver to the organization. Other risks for auditors to consider when auditing cost reimbursement contracts include:
• Direct billing of overhead costs.
• Inadequate cost controls on the contractor’s part and no effort to obtain best prices.
• Unreasonable charges for contractor-owned equipment or idle rented equipment.
• Excessive hiring and poor work practices (e.g., absences, excessive overtime).
• Excess billing over contractor costs.
• Failure to pass along discounts, refunds, salvage, etc.
• Duplication of effort between headquarters and field offices.
• Inadequate job site supervision, inspection, follow-up from headquarters, etc.
• Unreliable cost accounting (e.g., billing supervision as labor in violation of the contract).
• Extravagant use or early arrival of materials and supplies.
• Quality or grade issues: excessively high or low standards for materials or equipment.
• Poor physical protection of materials or equipment.
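The fee arrangements and cost-ceiling control described above can be illustrated with a short Python sketch. The dollar figures, fee percentage, and function name are hypothetical assumptions for illustration only.

```python
def cost_plus_billing(costs, fixed_fee=0.0, fee_pct=0.0, ceiling=None):
    """Contractor billing under a cost-reimbursement contract: allowable
    costs plus a fixed fee and/or a percentage-of-cost fee, capped at a
    predetermined ceiling -- the key control discussed above."""
    total = costs + fixed_fee + costs * fee_pct
    if ceiling is not None:
        total = min(total, ceiling)
    return round(total, 2)

# Hypothetical: $80,000 of allowable costs, a 6% fee, and a $90,000 ceiling.
print(cost_plus_billing(80_000, fee_pct=0.06, ceiling=90_000))  # 84800.0
```

Note how the ceiling binds only when costs plus fee exceed it; without the ceiling, the percentage fee gives the contractor an incentive to let costs grow.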
Unit-Price Contracts
In unit-price contracts, a price per unit of work is agreed upon. These contracts are best for a large number of identical products or services. Total cost is the per-unit price times the number of units (e.g., the number of brochures printed). The following risks are important for the auditor to consider:
• Excessive progress payments
• Improper reporting of units completed
• Prices unrelated to actual costs or improper extension or escalation of unit prices
• Improper changes to the original contract
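The unit-price arithmetic and the excessive-progress-payment risk noted above can be illustrated with a brief Python sketch; the prices, quantities, and function names are hypothetical assumptions.

```python
def unit_price_total(unit_price, units):
    """Total cost under a unit-price contract: price per unit times units."""
    return round(unit_price * units, 2)

def progress_payment_excess(paid_to_date, unit_price, units_completed):
    """Flag payments exceeding the value of units actually completed --
    one of the unit-price audit risks listed above."""
    earned = unit_price_total(unit_price, units_completed)
    return max(paid_to_date - earned, 0)

# Hypothetical: 10,000 brochures at $0.85 each; $9,000 paid to date
# with only 9,500 brochures actually printed.
print(unit_price_total(0.85, 10_000))               # 8500.0
print(progress_payment_excess(9_000, 0.85, 9_500))  # 925.0
```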
Joint Venture Contracts
Joint venture contracts are often based on cost-, revenue-, or profit-sharing or profit-and-loss-sharing arrangements. In audits of these contracts, the engagement objective is often to evaluate compliance with financial and nonfinancial terms and conditions. Financial terms may include:
• Reliability of cost allocation and billing systems and data.
• Reliability of revenue management and distribution.
Nonfinancial terms may include:
• Safeguarding of assets, including information, reputations, and brands.
• Proper governance and compliance with laws, regulations, and contractual obligations with third parties, such as corporate social responsibility policies and procedures.
• Reliability of nonfinancial information.
• Reasonableness of budgets and forecasts.
Additional Contract Types
Additional contract types include:
• Time and materials contracts—Fixed rate for services; materials at cost plus a handling fee.
• Letter contracts—A preliminary instrument letting a contractor begin work prior to contract finalization (only for circumstances of unusual and compelling urgency).
• Indefinite delivery contracts—A delivery or task order initiates delivery.
Chapter 3: Data Analytics
Chapter Introduction
Data analytics is the process of gathering and analyzing data and then using the results to provide business information for making better organizational decisions and implementing more relevant policies and procedures. A more refined definition relevant to CAEs is that data analytics is the process of quantifying and highlighting potential risks and opportunities using operational, financial, and other data. Data analytics can also refer to automated, repeatable processes, such as scripts that search for patterns and identify anomalies, or to data mining: gathering information from multiple sources to produce results on which management can make better-informed decisions.
Topic A: The Value of Using Data Analytics in Internal Auditing (Level B)
Each functional area in an organization needs to justify its own existence by showing that it adds more value than it costs to maintain. This is as true for internal auditing as it is for production, sales, or finance. One way to add organizational value is to find ways to operate more efficiently, or do more with less. Another is to find ways to operate more effectively, or do the right things in the first place. Still another is to identify cost-saving or revenue-generating opportunities for the organization, or add consulting value. Data analytics has the potential to assist an audit review by transforming what otherwise might be a surplus of data into useful and actionable information in a timely fashion. Indeed, because internal audit has access to data from multiple areas of the organization, the function is uniquely positioned to transform data into information valuable to the organization. Data analytics will only become more common in internal auditing; therefore, the CAE may want to be proactive and sell the organization on making these strategic investments sooner rather than later. After all, identifying even a single major area for cost savings could pay for the investment in software and training. Here are some other specific benefits that can be gained by adopting data analytics in internal auditing:
• Spend less time on data preparation, formatting, or doing calculations and more time on value-added analysis.
• Fully or partly automate previously manual audit tests and perform them on more (or all) of the items in a population, reducing the need to rely on random or judgmental sampling.
• Better filter out false positives or false negatives from results.
• Set rules such as a threshold for an invoice amount.
• Plan better audits by using analytics to better understand which areas or processes would receive the most benefit from an audit.
• Identify, categorize, prioritize, monitor, and manage risk more efficiently and effectively.
• Better detect fraud, errors, inefficiencies, and anomalies.
This topic starts by addressing the four Vs of data—the qualities that are needed for data to become useful. This discussion helps show why data analytics is becoming increasingly necessary for internal auditing. Then the topic addresses a framework for building data analytics into an internal audit function. The topic concludes with the definition and importance of data governance.
The Four Vs
As stated in Data Analytics: Elevating Internal Audit’s Value, the four Vs of data are volume, velocity, variety, and veracity. Within this context, volume refers to the amount of data, which is significantly greater than it has ever been due to our ever-increasing ability to capture data from virtually unlimited sources via the Internet. Velocity refers to the speed at which data is generated and collected; with the growing number of connected devices around the world, information can be gathered from anywhere at increasingly rapid speeds. Variety is the numerous types of data being identified, captured, and stored. This can include categorizations such as data formatted for a particular type of software or for a given functional area such as finance. One broad categorization is structured versus unstructured data. Structured data is data formatted for ease of use, such as into columns and rows, much like a well-ordered spreadsheet. This will include data from databases and information systems such as functional area modules in an enterprise resource planning (ERP) system or an audit software package. Unstructured data is data that has not been formatted (i.e., data that is not easy to sort through or tabulate). According to 2016 research by the International Data Corporation, a global market intelligence firm, unstructured data may already account for almost 80 percent of all enterprise data. This could include data from social
media, blogs, emails, word-processing documents, court proceedings, etc. Finally, veracity is the truth of the data. Veracity is key, as data analytics is only as good as the underlying data. The adage “garbage in, garbage out” is never more true than in data analytics, yet veracity is often the most overlooked aspect of data analytics. Without veracity, organizations risk making faulty decisions based on incomplete records, entry errors, or inconsistent data.
Data Analytics Framework
An effective data analytics framework should answer questions such as “What are the top issues facing the organization?” or “How can the audit add more value?” Answering these questions allows for developing a framework that is both achievable and aspirational, with smaller milestones that show progress toward the long-term objective. When building a data analytics framework, an entity develops its vision and then determines how to progress in building data analytics capabilities, including what steps should be taken to elevate performance. Part of this process includes evaluating current capabilities and identifying people, processes, and technologies to enhance those capabilities. This can include spending money in two critical areas: talent, such as training and staffing, and technology, such as hardware and software. Once the data analytics framework is established, the entity should progress to implementing and monitoring the new plan. Implementation should be addressed in stages so as not to overwhelm current resources. Monitoring has a two-part role: to gauge the level of adoption in each affected department and to act as an independent party assisting other areas in improving their data analytics. As the organization implements its data analytics framework and the entity evolves, the organization’s strategies should also advance to meet those changes.
Data Governance Data governance involves the organization’s policies and procedures,
controls, and related information technologies regarding the collection, use, storage, usability (e.g., formatting for ease of use), analysis, deletion, and safeguarding of data. Safeguarding of data includes ensuring data availability (protection from loss), integrity (protection from corruption), access (role-restricted access to sensitive organizational or customer data), and compliance with relevant laws and regulations, such as those governing privacy. A shorter definition of data governance is that it is a way of ensuring and continually improving data quality. Management develops, authorizes, directs, manages, and monitors the organization’s data governance policies, procedures, controls, and information systems to ensure alignment with the organization’s strategy, objectives, mission, vision, and ethics statements. For example, management may want to ensure that data analytics enables confident and timely decision making, that staff can do their work efficiently and effectively, and that the organization leverages data to maximize profit potential. As with all types of governance, the board of directors and its relevant committees provide oversight of the organization’s data governance plans and activities. The board has a fiduciary responsibility to the organization’s stakeholders and, as such, must understand stakeholders’ needs related to data governance. However, data governance is management’s day-to-day responsibility. Internal auditors play an important role in assessing the effectiveness of data governance activities.
Topic B: The Data Analytics Process (Level B)
Data analytics allows internal auditors to focus their efforts on items that have been identified as requiring a higher level of assurance due to higher risk. A proven process for data analytics uses the following five steps.
• Define the questions. The first step is to define the potential achievements and the anticipated value the data analyst is trying to attain. One approach is to develop a solid question that needs to be answered. For example, if a function of internal audit is to determine the locations and parties involved in potential fraud within the organization, asking “How can we identify where potential fraud is occurring and what parties are involved?” establishes a solid starting point and provides a base from which multiple sources of data can be pulled.
• Obtain the data. The next step is information discovery, the process of obtaining access to the data needed to perform the analysis. Getting access to data and making the data usable can be difficult and expensive, and internal audit executives have identified obtaining data as the greatest challenge to incorporating data analytics into internal audit functions.
• Cleanse and normalize the data. Cleansing data includes identifying and removing duplicate data and determining whether identically named data fields from different systems have identical or different meanings. Normalizing data is the process of organizing data to reduce the potential for redundancy and to facilitate the use of the data for specific purposes. Normalizing also allows for the identification of anomalies, which might represent actual problems or potential opportunities.
• Analyze the data. After the data has been cleansed and normalized, it should be analyzed. The analysis process may differ depending on the type of data being analyzed. However, once analyzed, all data should be interpreted: Have patterns emerged? Are identified anomalies errors in a feature, system, or process? Is senior management aware of the feature and its consequences? This preliminary analysis can provide initial results and assist in determining whether anomalies reflect errors, violations of company policies, or red flags for fraud.
• Communicate the results. The final step is to communicate the results to the board and senior management. Because data analytics results are often heavy in numbers and data tables, data visualization and graphical representations are excellent ways to inform leadership and enhance decision-making processes.
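The cleanse-and-normalize step described above can be sketched in a few lines of Python. The field names, sample invoice records, and normalization rules (trimming whitespace, lowercasing) are illustrative assumptions, not a prescribed method.

```python
def cleanse(records, key_fields):
    """Drop exact duplicates (by the chosen key fields) and normalize
    string values so identically named fields compare consistently."""
    seen, cleaned = set(), []
    for rec in records:
        # Normalize: trim whitespace and lowercase all string fields.
        norm = {k: v.strip().lower() if isinstance(v, str) else v
                for k, v in rec.items()}
        key = tuple(norm[f] for f in key_fields)
        if key not in seen:  # keep only the first occurrence of each key
            seen.add(key)
            cleaned.append(norm)
    return cleaned

# Hypothetical vendor-invoice extracts from two systems; the second
# record is the same invoice entered with different capitalization.
invoices = [
    {"vendor": "Acme Corp ", "invoice_no": "INV-001", "amount": 500.0},
    {"vendor": "acme corp", "invoice_no": "INV-001", "amount": 500.0},
    {"vendor": "Beta LLC", "invoice_no": "INV-002", "amount": 750.0},
]
print(len(cleanse(invoices, ["vendor", "invoice_no"])))  # 2
```

Without the normalization step, the duplicate would slip through, which is why cleansing and normalizing precede analysis in the process above.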
Topic C: Data Analytics Methods (Level B)
Data analytics is making great strides across industries, and the list of possibilities for its use is ever-increasing. There are several types of data analytics, including descriptive, diagnostic, predictive, and prescriptive. Internal audit uses of data analytics can also be grouped into four common categories: compliance, fraud detection and investigation, operational performance, and internal controls. Other types of data analytics include network and text analysis. These ways of describing data analytics methods are discussed next.
Types of Data Analytics
Data analytics exists on a continuum from the most straightforward to the most complex and probabilistic.
• Descriptive analysis. A descriptive analysis gathers information and uses hindsight to identify “what happened.” It is the easiest analysis, but it also provides the least information value. However, even descriptive analysis can be used for anomaly detection—identifying the outliers, exceptions, duplicates, or gaps in a set of data that require further review. For example, internal auditors for a utility company used data analytics to generate automated reports on drivers’ fuel use; an exception report was automatically emailed to the drivers’ managers, which dramatically reduced the number of weekly exceptions. Anomaly detection may also take the form of pre-developed scripts that can be run against standard data sets (or internal auditors with the right training can create customized scripts for nonstandard data sets). These scripts can also apply numeric analysis.
• Diagnostic analysis. Diagnostic analysis also uses hindsight and examines specific data or content to uncover the answer to the question “Why did this happen?” It commonly uses techniques such as drill-down, data discovery, data mining, and correlations.
• Predictive analysis. Predictive analysis uses insight to turn data into actionable information to determine “what will happen”—the probability of an event, situation, or outcome occurring.
• Prescriptive analysis. Prescriptive analysis involves the highest level of difficulty and results in the greatest value. It uses foresight and optimization to build and test scenarios around different policies, combining data, business rules, and mathematical models to determine which courses of action would lead to which potential outcomes.
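As a minimal illustration, two of the descriptive-level anomaly tests mentioned above — outliers and gaps — can be sketched in Python. The z-score threshold, fuel-use figures, and check numbers are illustrative assumptions.

```python
from statistics import mean, stdev

def outliers(values, z=2.0):
    """Flag values more than z standard deviations from the mean --
    a simple descriptive-analytics exception test."""
    m, s = mean(values), stdev(values)
    return [v for v in values if s and abs(v - m) > z * s]

def sequence_gaps(numbers):
    """Gaps in what should be a continuous sequence (e.g., check or
    invoice numbers) -- missing items that may warrant follow-up."""
    full = set(range(min(numbers), max(numbers) + 1))
    return sorted(full - set(numbers))

# Hypothetical weekly fuel use (gallons) for one driver, and a run of
# check numbers with one missing.
print(outliers([40, 42, 41, 39, 43, 90]))       # [90]
print(sequence_gaps([1001, 1002, 1004, 1005]))  # [1003]
```

A production exception report would wrap tests like these in a script run on a schedule, with results routed to the responsible managers, as in the fuel-use example above.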
Internal Audit Uses for Data Analytics
Internal audit most commonly uses data analytics in assessments of compliance and operational performance, fraud detection and investigation, and internal control analysis.
• Compliance uses. Data analytics helps in assessing whether the data used to determine compliance is sound or contains quality or integrity issues. Another use is evaluating expense reports, purchasing cards, or vendor invoice line items for trends or anomalies. Data analytics can also be used to assess regulatory requirements, such as by doing key word searches.
• Fraud detection and investigation uses. Data analytics can detect “ghost” employees by looking for gaps in the various records that should exist. The same can be done to detect fake suppliers or service providers. Data analytics can create exception reports that are prioritized by those most likely to result in financial or reputation risk to the organization. Such systems can also do some of the root cause analysis after fraud has been detected, answering questions or providing short lists related to who, what, where, and when.
• Operational performance uses. Data analytics may aid in the identification of the following types of errors and/or inefficiencies:
• Duplicate payments
• Foregone payment discounts or failure to assess late collection penalties
• Slow-moving inventory or inventory held in quantities that are too high
• Cost escalation that is unusual or is not allowed in the contract
Data analytics could also highlight better KPIs or help similar areas converge on the best KPIs.
• Internal control analysis uses. Data analytics can be used to analyze proper user access privileges, proper segregation of duties, and whether control performance is effective. As stated earlier, anomaly detection is a powerful tool that can be leveraged to find areas of control weaknesses or failures.
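The “ghost employee” and duplicate-payment tests described above amount to simple comparisons across records that should agree. The following Python sketch uses hypothetical employee IDs and payment records; real tests would draw on actual payroll, HR, and accounts payable extracts.

```python
def ghost_candidates(payroll_ids, hr_ids, badge_ids):
    """Employees being paid who are missing from HR or badge-access
    records -- candidates for follow-up, not proof of fraud."""
    return sorted(set(payroll_ids) - (set(hr_ids) & set(badge_ids)))

def duplicate_payments(payments):
    """Payments sharing vendor, invoice number, and amount -- a common
    operational-performance exception test."""
    seen, dupes = set(), []
    for p in payments:
        key = (p["vendor"], p["invoice_no"], p["amount"])
        if key in seen:
            dupes.append(p)
        seen.add(key)
    return dupes

# Hypothetical extracts: E9 is paid but appears in neither HR nor
# badge records; the second payment repeats the first exactly.
print(ghost_candidates(["E1", "E2", "E9"], ["E1", "E2"], ["E1", "E2"]))  # ['E9']
pays = [{"vendor": "Acme", "invoice_no": "A1", "amount": 100},
        {"vendor": "Acme", "invoice_no": "A1", "amount": 100}]
print(len(duplicate_payments(pays)))  # 1
```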
Other Types of Analytics
Data analytics can be applied to some specialty applications such as network analysis and text analysis.
• Network analysis. Network analysis refers to the mathematical analysis of complex work activities in terms of a network of related activities. This can pertain to the components and dependencies of all factors within the network.
• Text analysis. Text analysis involves extracting machine-readable facts from the text of various sources and creating sets of structured data out of large compilations of electronic and print documentation. This process dissects the data into smaller, more manageable pieces. Corporations can use text analysis as a starting point for managing content from a data-driven approach. This assists in automating processes such as decision making, product development, marketing optimization, business intelligence, and more.
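The text-analysis idea above — extracting machine-readable facts from unstructured text into structured records — can be sketched minimally in Python. The regular expressions and sample memo are illustrative assumptions, not a production approach.

```python
import re

def extract_facts(text):
    """Pull dollar amounts and ISO-style dates out of unstructured text,
    producing a small structured record for downstream analysis."""
    amounts = [float(a.replace(",", ""))
               for a in re.findall(r"\$([\d,]+(?:\.\d+)?)", text)]
    dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
    return {"amounts": amounts, "dates": dates}

# Hypothetical memo text.
memo = "Invoice approved 2024-03-15 for $12,500.00; follow-up due 2024-04-01."
print(extract_facts(memo))
```

Run over a large document set, extracts like this become structured data that the descriptive and diagnostic techniques discussed earlier can then analyze.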
Next Steps You have completed Part 3, Section I, of The IIA’s CIA Learning System®. Next, check your understanding by completing the online section-specific test(s) to help you identify any content that needs additional study. Once you have completed the section-specific test(s), a best practice is to reread content in areas you feel you need to understand better. Then you should advance to studying Section II.
You may want to return to earlier section-specific tests periodically as you progress through your studies; this practice will help you absorb the content more effectively than taking a single test multiple times in a row.
Index The numbers after each term are links to where the term is indexed and indicate how many times the term is referenced. Adams’s equity theory 1 balanced scorecard 1 bases of power 1 behavior leadership theories 1 modification 1 organizational 1 bilateral contracts 1 business processes 1 CAE (chief audit executive) 1 centralized organizational structure 1 chain of command 1 change control 1 management 1 charismatic leadership 1 chief audit executive 1 cleansing data 1 cluster organizational structure 1 co-sourcing 1 coaching 1, 2 Committee of Sponsoring Organizations frameworks Enterprise Risk Management—Integrating with 1 Internal Control—Integrated Framework 1 communications organizational 1 competitive advantage 1 compliance assessments 1 objectives 1 contingency/situational leadership theories 1 contracts 1 bilateral 1
cost reimbursement 1 cost-plus 1 express 1 fixed-price 1 implied 1 joint venture 1 lump-sum 1 product 1 risks in 1 services 1 solutions 1 turnkey 1 unenforceable 1 unilateral 1 unit-price 1 void/voidable 1 control activities 1 control environment 1 controlling, as management function 1 internal 1, 2 core activities 1 corporate social responsibility 1 COSO frameworks Enterprise Risk Management—Integrating with Strategy and Performance 1 Internal Control—Integrated Framework 1 cost reimbursement contracts 1 cost-plus contracts 1 cost(s) in project management 1 CPM (critical path method) 1 critical path method 1 CSR (corporate social responsibility) 1 culture, organizational 1 data analytics 1 cleansing 1 governance 1 normalizing 1
obtaining 1 structured/unstructured 1 decentralized organizational structure 1 departmentalization 1 descriptive analysis 1 diagnostic analysis 1 divisional organizational structure 1 documentation 1 effectiveness 1 efficiency 1 equity theory 1 event identification 1 expectancy theory 1 export/import contracts 1 express contracts 1 feedback, on performance 1 Fiedler’s LPC (least-preferred-coworker) model 1 fixed-price contracts 1 four Vs of data 1 fraud detection/investigation 1 functional organizational structure 1 Gantt charts 1 globalization 1 goal-setting theory 1 goals 1, 2 governance of data 1 Hersey-Blanchard situational leadership theory 1 Herzberg’s motivation-hygiene theory 1 hierarchy of needs 1 hourglass organizational structure 1 implied contracts 1 import/export contracts 1 independent contractors 1 influence/power theories 1 information management 1 intellectual property 1
internal controls 1, 2 International Organization for Standardization ISO 31000, “Risk management—Guidelines” 1 job design 1 enlargement 1 enrichment 1 rotation 1 joint venture contracts 1 Jung’s trait theory 1 key performance indicators 1 key risk indicators 1 KPIs (key performance indicators) 1 KRIs (key risk indicators) 1 leaders 1 leadership 1, 2 Leadership Grid 1 participative 1 theories of 1 least-preferred-coworker model 1 Lewin’s leadership styles 1 Likert’s organizational management/leadership styles 1, 2 Locke and Latham’s goal-setting theory 1 LPC (least-preferred-coworker) model 1 lump-sum contracts 1 management 1 of performance 1 of projects 1 managers 1 role in performance management 1 See also management 1 Maslow’s hierarchy of needs 1 matrix organizational structure 1 McClelland’s theory of needs 1 McGregor’s Theory X/Y 1 mentoring 1 mission, organizational 1, 2 motivation 1, 2 motivation-hygiene theory 1
network analysis 1, 2 network organizational structure 1 non-core activities 1 normalizing data 1 objectives 1, 2, 3 Ohio State University leadership research 1 operational objectives 1 organizational behavior 1 organizational culture 1 organizational management/leadership styles 1, 2 organizational politics 1 organizational structure 1, 2, 3 and risk 1 centralized 1 cluster 1 decentralized 1 departmentalization 1 divisional 1 functional 1 hourglass 1 matrix 1 network 1 virtual 1 organizing, as management function 1 out-sourcing 1, 2, 3 participative leadership 1 path-goal theory 1 performance appraisals 1 assessment of 1 in project management 1 management 1 measurement systems 1 measures 1 organizational 1 PERT (program evaluation review technique) 1 planning, as management function 1 politics, organizational 1 power, bases of 1
power/influence theories 1 predictive analysis 1 prescriptive analysis 1 product contracts 1 productivity 1 profitability 1 program evaluation review technique 1 projects constraints 1 life cycle of 1 management of 1 teams 1 quality 1, 2 reinforcement theory 1 reporting objectives 1 reward systems 1 risk and internal controls 1 and organizational structure 1 business process 1 identification 1 impact and control matrix 1 in contracts 1 responses to 1 treatment of 1 scope 1 control 1 creep 1, 2 services contracts 1 situational/contingency leadership theories 1 Skinner’s reinforcement theory 1 solutions contracts 1 span of control 1 stakeholders 1 strategic objectives 1 strategic planning 1 structure, organizational. See organizational structure supervisors, role in performance management 1
sustainability 1 teams 1 text analysis 1 theory of needs 1 Theory X/Y 1 Theory Z 1 time, in project management 1 trait theory 1, 2 transactional leadership 1 transformational leadership 1 turnkey contracts 1 unenforceable contracts 1 unilateral contracts 1 unit-price contracts 1 University of Michigan leadership research 1 variety, as one of four Vs of data 1 velocity, as one of four Vs of data 1 veracity, as one of four Vs of data 1 virtual organizational structure 1 vision, organizational 1, 2 void/voidable contracts 1 volume, as one of four Vs of data 1 Vroom’s expectancy theory 1 work group design 1 “Big Five” theory of personality 1
Contents
Part 3: Business Knowledge for Internal Auditing
The IIA’s CIA Learning System®
Part 3 Overview
Section I: Business Acumen
Section Introduction
Chapter 1: Organizational Objectives, Behavior, and Performance
Topic A: The Strategic Planning Process and Key Activities (Level B)
Topic B: Common Performance Measures (Level P)
Topic C: Organizational Behavior and Performance Management Techniques (Level B)
Topic D: Management’s Effectiveness in Leadership Skills (Level B)
Chapter 2: Organizational Structure and Business Processes
Topic A: The Risk and Control Implications of Different Organizational Structures (Level B)
Topic B: The Risk and Control Implications of Common Business Processes (Level P)
Topic C: Project Management (Level B)
Topic D: Forms and Elements of Contracts (Level B)
Chapter 3: Data Analytics
Topic A: The Value of Using Data Analytics in Internal Auditing (Level B)
Topic B: The Data Analytics Process (Level B)
Topic C: Data Analytics Methods (Level B)
Index
Section II: Information Security
This section is designed to help you:
• Differentiate types of common physical security controls.
• Differentiate various forms of user authentication.
• Identify various types of authorization controls.
• Identify potential information security risks.
• Explain the purpose of various information security controls.
• Define the use of information security controls.
• Recognize data privacy laws.
• Define the potential impact data privacy laws have on data security policies and procedures.
• Identify emerging technology practices.
• Define the potential impact emerging technology practices have on security.
• Describe existing cybersecurity risks.
• Identify emerging cybersecurity risks.
• Describe cyber- and information security-related policies.
The Certified Internal Auditor (CIA) exam questions based on content from this section make up approximately 25% of the total number of questions for Part 3. All topics are covered at the “B—Basic” level, meaning that you are responsible for comprehension and recall of information. (Note that this refers to the difficulty level of questions you may see on the exam; the content in these areas may still be complex.)
Section Introduction
The goal of systems security is to maintain the integrity of information assets and processing and to mitigate and remediate vulnerabilities. COBIT, formerly known as Control Objectives for Information and Related Technology, is an internationally accepted framework created by ISACA that helps enterprises achieve their objectives for the governance and management of information technology. With the release of COBIT 4.1 in 2007, 11 systems security objectives were identified that reflect the breadth and complexity of the systems security environment:
• Manage IT security, as aligned with business requirements.
• Implement an IT security plan that balances organizational goals and risks and compliance requirements with the organization’s IT infrastructure and security culture.
• Implement identity management processes to ensure that all users are identified and have appropriate access rights.
• Manage user accounts through appropriate policies and processes for establishing, modifying, and closing accounts.
• Ensure security testing, surveillance, and monitoring to achieve a baseline level of system security and to prevent, identify, and report unusual activity.
• Provide sufficient security incident definition to allow problems to be classified and treated.
• Protect security technology by preventing tampering and ensuring the confidential nature of security system documentation.
• Manage cryptographic keys to ensure their protection against modification and unauthorized disclosure.
• Prevent, detect, and correct malicious software across the organization in both information systems and technology.
• Implement network security to ensure authorized access and flow of information into and from the enterprise.
• Ensure that sensitive data is exchanged only over trusted paths or through reliable media, with adequate controls to ensure authenticity of content, proof of submission, proof of receipt, and proof of nonrepudiation of origin.
(COBIT 5, the current version of the framework, was released in 2012 and is addressed in more detail later, in Section III.)
Systems security is made up of controls general to the organization and specific to IT and physical security systems. Because a system is only as
strong as its weakest link, systems security must start with use of a control framework such as COSO’s Internal Control—Integrated Framework. While this section covers only the general controls specific to IT security, other controls such as proper segregation of duties are a prerequisite for IT systems security. When auditors find a weakness in general or application controls, pointing out the issue is only part of the task. Auditors also need to explain to management the risk exposure that the deficiency is causing. The auditor should recommend the best system that can address the control given the particulars of the organization. Continual monitoring is required for controls to be effective. For example, whenever a software application is reviewed for controls, the security administration procedures and password controls around it should be reviewed, including whether the right people have the right authority to access appropriate areas or data in the system (“user roles”). When auditing for computer-related fraud, auditors trained in computer controls should try to think like a thief or a hacker in determining areas of greatest vulnerability and considering how they could be exploited, how the audit trail might be covered up, what level of authority would be needed to enact the cover-up, and what explanations could be used if the issue were detected. While this is not an easy task, it is important to determine what fraud would “look like” in the particular area under review so as to design the audit for maximum impact.
Chapter 1: Information Security Chapter Introduction Auditors not only need to understand information security principles and controls in general; they should also understand the security needs of the particular facet of the business where the controls and information security systems reside. Both are needed to gain a full appreciation of information security risks and controls. This chapter starts, in Topic A, with a discussion of systems security, which is founded on a strong set of general controls. Topic B addresses various forms of user authentication and authorization controls. Topic C covers information security controls. Topic D provides an overview of data privacy laws and their potential impact on data security policies and procedures. Topic E addresses emerging technology practices and how those practices can impact security. Topics F and G cover cybersecurity risks and how those risks affect security-related policies.
Topic A: Systems Security and IT General Controls (Level B) Systems security needs to be a holistic endeavor so that a high level of protection in one area is not simply bypassed in some other way, such as an outside person bypassing strong external access security by sneaking into an unguarded office and accessing the network through a computer with weak protections (or stealing a laptop with sensitive data) or an unscrupulous programmer adding a backdoor into a computer system during systems development or a system update. According to COBIT, ensuring systems security involves both creating security policies and continuously monitoring and responding to security threats. Security policies are part of IT general controls (ITGCs), which are a framework for ensuring that systems security is comprehensive. ITGCs apply to all system components, processes, and data in the organization or the system environment. The effectiveness of ITGCs is measured by the number of: • Incidents that damage the enterprise’s public reputation. • Systems that do not meet security criteria. • Violations in segregation of duties. ITGCs are classified in the Practice Guide “Information Technology Risks and Controls,” second edition, previously Global Technology Audit Guide 1 (GTAG® 1), as follows:
Due to their importance, the first two of these categories are addressed in more detail later in this chapter. Logical access controls are addressed in the next topic, while systems development life cycle controls are addressed in Section III, Chapter 1. The remaining four categories are addressed next.
Program Change Management Controls

Changes in the IT environment may be frequent and significant. The auditor should look for adequate change controls, including security, audit trail, quality assurance, provision for emergency changes, source, and tracking. According to the Practice Guide “Change and Patch Management Controls: Critical for Organizational Success,” second edition (previously GTAG® 2), change management includes application code revisions, system upgrades, and infrastructure changes such as changes to servers, routers, cabling, or firewalls. The process and results should be predictable, defined, and repeatable.

Patch management updates applications that are already in production and involves installing a patch—a bundled set of fixes to a software’s code to eliminate bugs or security vulnerabilities. It should be handled as its own category of change. High-performing organizations perform far fewer patches than low-performing organizations.

Organizations with poor change management controls have a low success rate for IT changes due to project delays or scope creep. They suffer from unexpected outages and may frequently be in crisis mode, with many emergency or unauthorized changes. (For the latter, even one is too many.) Constant crisis creates stress and high turnover for IT staff, indicates a lack of control over problem escalation, and increases the risk that a change will cause unintended consequences. If IT staff has no time for new projects, service deteriorates. If a change results in downtime or, even worse, a material error in system data (such as in financial reporting data), it could carry a higher risk of loss than even that of a system attack.

When a possible patch or change comes up, IT staff and management should perform triage, sorting out the true emergency situations from those that can be handled as routine. Criteria
should be based on business need and the relative risk of waiting. The end user should test planned changes using a robust testing plan in a sandbox environment first. A sandbox environment is a copy of the system that is not the live version. It is a test environment that helps determine if there will be unintended consequences of installing a patch or making another change. To make the change management process cost-effective, multiple changes are bundled. Production changes should be performed in off-hours.
Change Management Process Steps

“Change and Patch Management Controls” lists the following change management process steps:
1. Identify the need for change.
2. Prepare. Document the step-by-step procedure for the change request, the change test plan, and a change rollback plan.
3. Justify the change and request approval. Determine the impact and cost-benefit; review associated risks and regulatory impact.
4. Authorization. Reject, approve, or request more information. Set priorities relative to the overall schedule.
5. Schedule and implement change. Schedule a change implementer and a change tester, test in preproduction, communicate to affected parties, get final approval, and implement the change.
6. Review implemented change. Measure change success, use of process, variances, and regulatory compliance. Report lessons learned.
7. Back out the change if unsuccessful.
8. Close the change request and report to stakeholders.
9. Document the final changes that were made.
10. Revisit the change management process for improvement.
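The process steps above can be sketched as a simple state machine that enforces their order. This is an illustrative Python sketch only, not part of the Practice Guide; the state names and the `ChangeRequest` class are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical states, roughly following the numbered process steps above.
STATES = ["identified", "prepared", "justified", "authorized",
          "implemented", "reviewed", "closed"]

@dataclass
class ChangeRequest:
    description: str
    state: str = "identified"
    history: list = field(default_factory=list)  # audit trail of transitions

    def advance(self, new_state: str) -> None:
        # Enforce the order of the process: a change may only move to the
        # next state in the sequence. Backing out an unsuccessful change
        # (implemented -> prepared) is the one allowed exception.
        if new_state == "prepared" and self.state == "implemented":
            pass  # back out the change and rework it
        elif STATES.index(new_state) != STATES.index(self.state) + 1:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state

cr = ChangeRequest("Apply security patch to payroll server")
for s in ["prepared", "justified", "authorized", "implemented", "reviewed", "closed"]:
    cr.advance(s)
print(cr.state)  # closed
```

The point of the sketch is that implementation cannot be reached without authorization, mirroring the approval gate in steps 3 and 4.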
Reducing Change Risks
Complex production environments require more independent controls. Adherence to development methodologies such as the systems development life cycle (discussed in Section III, Chapter 1) is critical. Routine maintenance changes are easier to audit because their results can be objectively determined and the risk of management override is low. Software controls that detect when other controls are being overridden need more scrutiny, because the risk of management override is higher and auditors must judge the controls’ effectiveness subjectively. Software applications also have detective controls to verify production changes against authorizations.

The development department should report to a high enough level of management to keep department heads from scheduling low-priority projects at a higher priority than they deserve. Reporting to higher levels will also help ensure that limited technology resources are used effectively. Top management needs to set the proper tone. Other supervisory controls include preventive controls, such as enforcing change and patch management policies and having key stakeholders assess change risks, and detective supervisory controls, which involve measuring and correcting poor performance, such as by measuring mean time to repair.

Exhibit II-1 summarizes risks, controls, and related metrics for change and patch management.
Exhibit II-1: Metrics for Determining Change and Patch Management Success

Risk: Unauthorized changes
Controls:
• Policy for zero unplanned changes
• Detective software
• Proactive management
Metrics:
• Number of unplanned changes
• Number of unplanned outages
• Number of changes authorized
• Number of changes implemented

Risk: Changes fail to be implemented or are late
Control:
• Change management process
Metrics:
• Greater than 70% change success rate
• Percentage of projects delivered late

Risk: Unplanned work displaces planned work
Controls:
• Triage
• Planned changes bundled
• Patches treated as a normal process to expect
Metrics:
• Less than 5% of work is unplanned
• Percentage of time on unplanned work
• New work created by change
• Percentage of patches installed in a planned software release
Source: Practice Guide “Change and Patch Management Controls: Critical for Organizational Success,” second edition.
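The two numeric thresholds in the exhibit (greater than 70% change success, less than 5% unplanned work) can be checked mechanically from change records. A minimal Python sketch, using invented sample numbers:

```python
# Illustrative only; the sample figures below are made up. The thresholds
# (70% success rate, 5% unplanned work) come from the exhibit above.
def change_success_rate(successful: int, implemented: int) -> float:
    return successful / implemented

def unplanned_work_share(unplanned_hours: float, total_hours: float) -> float:
    return unplanned_hours / total_hours

rate = change_success_rate(46, 60)      # ~76.7% of changes succeeded
share = unplanned_work_share(30, 800)   # ~3.75% of hours were unplanned

print(f"change success rate {rate:.0%} "
      f"(target > 70%): {'ok' if rate > 0.70 else 'investigate'}")
print(f"unplanned work {share:.1%} "
      f"(target < 5%): {'ok' if share < 0.05 else 'investigate'}")
```

An internal auditor could apply the same ratios to the organization's actual change log to test whether reported performance matches the metrics management claims.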
Physical Security Controls Prior to discussing physical security controls, this topic first presents some basic information on physical security in general.
Physical Security

Physical security involves the physical and procedural measures used to protect an organization’s buildings, the occupants, and the building contents. The goal in workplace security is to eliminate or reduce the risk of harm to facility occupants first, followed by risk of loss of organizational assets—tangible and intangible—from human and natural disasters.

Physical Security Vulnerabilities

There are many sources of physical security vulnerabilities. Examples include:
• Unauthorized access to facilities, systems, etc.
• Natural disasters (e.g., fires, floods, hurricanes, tornadoes, earthquakes).
• Service disruptions (e.g., telecommunications, network, Internet access, electrical power, or equipment failures).
• Human error.
• Theft and vandalism.
• Terrorism.
• Sabotage.

Ideally, physical security begins with workspace design. A few obvious examples are:
• Smoke alarms.
• Adequate lighting throughout a facility.
• Installation of an electronic security system for building entry.
• A reception area with staff or a security guard, sign-in sheets, and visitor badges.
• Restricted areas, such as the data center.

Preemployment background reference checks, postemployment security clearances, and separation of job duties are additional measures that can help mitigate physical security risks.

Security Risk Management Process

It is not possible to mitigate all information or physical security risks. An organization needs to ensure that it has a risk management process to manage its exposure to potential information or physical losses. Security risk management encompasses the processes an organization puts into place so that security controls and expenditures are appropriate and effective at mitigating the risks to which the organization is exposed. Typical security risk management steps include identification, probability determination, quantification of potential loss, and selection. Exhibit II-2 provides an overview of these steps.
Exhibit II-2: Risk Management Steps

Step: Identification
Description: Identifies the exposure to loss in terms of threats (an object, a person, or another entity that represents a risk of loss) and vulnerabilities (a weakness or fault in a system or protection mechanism that exposes information or physical assets to an attack, damage, or theft).

Step: Probability determination
Description: Determines the probability that a threat or vulnerability will materialize; includes a spectrum from high to low, such as:
• Virtually certain.
• Highly probable.
• Moderately probable.
• Improbable.

Step: Quantification of potential loss
Description: Quantifies the potential loss in terms of financial and nonfinancial impact; involves cost factors such as:
• Temporary replacement of lost or damaged assets.
• Permanent replacement of lost or damaged assets.
• Related losses due to inability to conduct normal business operations.
• Loss of investment income due to short-term expenses incurred to meet the replacement costs or restore normal operations.
• Loss/damage to reputation due to the inability to conduct business.

Step: Selection
Description: Evaluates the feasibility of alternative risk management techniques; results in the selection of the best technique(s).
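One common way to combine the probability-determination and quantification steps is an expected-loss calculation. The sketch below is illustrative only: the numeric mapping of the qualitative probability scale and the cost figures are assumptions for the example, not values from the exhibit.

```python
# Assumed mapping of the qualitative scale to likelihoods; a real risk
# assessment would calibrate these to the organization's own data.
LIKELIHOOD = {
    "virtually certain": 0.95,
    "highly probable": 0.70,
    "moderately probable": 0.40,
    "improbable": 0.05,
}

def expected_loss(probability_label: str, impact: float) -> float:
    """Expected loss = likelihood of the threat materializing x quantified impact."""
    return LIKELIHOOD[probability_label] * impact

# Impact sums invented figures for the cost factors in the exhibit:
# temporary replacement, permanent replacement, business interruption,
# lost investment income, and reputation damage.
impact = 50_000 + 120_000 + 30_000 + 5_000 + 40_000   # 245,000 total
print(f"{expected_loss('moderately probable', impact):,.0f}")  # 98,000
```

Ranking exposures by expected loss supports the selection step, since it shows which risk management techniques are worth their cost.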
These steps are just one possible approach to security risks. The security risk management process should be appropriate for the organization and its security objectives.

The internal audit activity may perform an assessment of security risks by employing the following techniques and tools:
• Analysis of reported incidents. Records can provide valuable information about potential and actual losses.
• Review of exposure statistics. Statistics from insurance carriers, industry associations, and regulatory agencies can provide guidance about where to look for potential risk exposures.
• Mapping key processes. Developing process maps and identifying potential risk points provide helpful insights.
• Periodic inspections. Health and safety inspections can surface compliance lapses and also uncover opportunities to decrease risks.
• Periodic process and product audits. Such internal audits can incorporate specific questions to identify potential risks.
• Assessments of management system effectiveness. Beyond internal audits conducted to verify compliance and conformance to one or more standards or to assess continual improvement, this technique can identify gaps in management systems that expose the organization to potential losses.
• Scenario analysis. Tools such as brainstorming and mind mapping are effective to identify all the consequences that could occur in a worst-case scenario.

This list is not all-inclusive. The point is to do whatever is necessary to identify and prioritize risks.

Special Information Security Considerations

Implementation Guide 2130 notes that:

[The CAE] should first consider the risk appetite, risk tolerance, and risk culture of the organization. It is important for internal auditors to understand the critical risks that could inhibit the organization’s ability to achieve its objectives, and the controls that have been implemented to mitigate risks to an acceptable level.
The CAE determines whether the internal audit activity possesses, or has access to, competent audit resources to evaluate information reliability and integrity and associated risk exposures. This includes both internal and external risk exposures and exposures relating to the organization’s relationships with outside entities. If specialized knowledge and skills are required, the organization may need to secure external service providers. Guidance recommended by The IIA includes specific responsibilities for the internal audit activity. As Implementation Guide 2130 further states: It is important for internal auditors to obtain a thorough understanding of the control framework(s) adopted either formally or informally by the organization and to become familiar with globally recognized, comprehensive control frameworks.
To fulfill this standard, the CAE determines whether information reliability and integrity breaches and conditions that might represent a threat to the
organization will promptly be made known to senior management, the board, and the internal audit activity. Internal auditors assess the effectiveness of preventive, detective, and mitigation measures against past attacks, as appropriate, and future attempts or incidents deemed likely to occur. Internal auditors determine whether the board has been appropriately informed of threats, incidents, vulnerabilities exploited, and corrective measures. While the primary monitoring role over information security (and other areas) is with management, rather than internal audit, internal audit’s role is to periodically monitor the effectiveness of management in the area of information security. This includes assessing the organization’s information reliability and integrity practices and recommending, as appropriate, enhancements to, or implementation of, new controls and safeguards. Such assessments can either be conducted as separate stand-alone engagements or integrated into other audits or engagements conducted as part of the annual audit plan. The nature of the engagement will determine the most appropriate process for reporting to senior management and the board. Determine Disposition of Security Violations It is reasonable to expect that the internal audit activity will monitor whether and how well security violations are corrected when they are discovered (similar to corrective action plans in response to internal audits). In doing so, the focus of the internal auditor should be to ensure that the root cause of the security violations is addressed. Disposition of all security violations should be reported to the board periodically, including the number and type of violations as well as management’s actions to resolve the root cause. Report on Compliance The internal audit activity can report to management and the board on the level of compliance with security rules, significant violations, and their disposition. 
With regard to information security, high-level compliance can be achieved
through the implementation of codes of practice for information security compliance. An example is ISO/IEC 27002:2013, which establishes guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization. The focus of ISO/IEC 27002 is information security controls. It contains best practices for control objectives and controls that can be applied by any organization, regardless of size or industry. Organizations adopt ISO/IEC 27002 to develop organizational security standards and effective security management practices, address legal and regulatory concerns, and better manage compliance.
Controls for Physical Security Physical security controls include physical access controls, environmental hazard controls, and fire and flood protection. Physical access controls are the real-world (tangible) means of providing and limiting access to buildings, data centers, record rooms, inventory areas, and key operational areas to only authorized persons (and denying access to unauthorized persons). Note that many of these same types of access controls can be used to provide or deny access to computer systems or other devices, as is discussed later in this topic. Access controls could include keys or keycards, some type of code or password, and/or a biometric scan. Higher levels of security may be provided by increasing the complexity of one of these levels (also called factors). For example, preventing access to an asset could use a lock and a physical key, but there would be no definitive audit trail of who accessed that door (except perhaps for security camera footage). Keycards use swipe or radio frequency identifiers to identify a particular user badge. A security computer checks the badge against a list for access and also maintains an access log (indicating which badge was used and when). Biometric devices can check a user’s identity through fingerprints, palm scans, iris photos, face recognition, and/or other unique physical identifiers. The scan is compared to a copy in a security database, so there is also an audit trail here. Even greater security could require two-level identification (or even three-level identification): a keycard
and a password, a keycard and a biometric scan, etc. In addition to authentication for access, all areas of a building should be covered by a general security system, including motion sensors and cameras in key areas as well as devices to detect break-ins. Physical security can also be role-based, with certain areas more secure than others, even to IT staff. Hardware not in a data center, such as laptops or PCs, can be physically secured with locks and have their own small uninterruptible power supplies (UPSs) and surge suppressors. Exposed wiring should be minimized using wiring closets or patch panels. Data centers should not be located along an exterior wall but should be in an inconspicuous location with as few doors as fire codes allow. Media storage should be fire-rated, and backup and disaster contingency measures should be in place. Fire alarms and moisture detectors should be used. If the data is extremely sensitive, the walls may need to extend all the way to the permanent ceiling above and be made of reinforced material. Heating, venting, and air conditioning (HVAC) are vital, because servers function better in cool, low-humidity rooms. UPSs and surge suppression should be employed. Devices need to be grounded and the floor covered with static takeoff. The air must be clean and free from smoke and particles, especially metallic particles, which can ruin tapes or CPUs. Other physical risks include electromagnetic interference from outside devices, which can be minimized by proper shielding. Maintenance and housekeeping schedules for dust removal should be set and adhered to as per manufacturer recommendations. Logs of hardware cleaning and malfunctions should be kept. Internal auditors can check to see if actual maintenance patterns match suggested patterns; they can also check on the lag between when issues are reported and when they are fixed. 
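Two-level identification plus an access log can be sketched as follows. This is an illustrative Python sketch; the badge IDs, PIN, and door names are invented, and a real installation would use dedicated access-control hardware and properly salted credential storage.

```python
import hashlib

# Hypothetical access list: each badge maps to the doors it may open and a
# hash of the holder's PIN (second factor). Data below is invented.
ACCESS_LIST = {
    "badge-1041": {
        "doors": {"data-center"},
        "pin_hash": hashlib.sha256(b"4821").hexdigest(),
    },
}
access_log = []  # the audit trail: every attempt, granted or denied

def request_access(badge_id: str, pin: str, door: str) -> bool:
    entry = ACCESS_LIST.get(badge_id)
    granted = (entry is not None
               and door in entry["doors"]
               and hashlib.sha256(pin.encode()).hexdigest() == entry["pin_hash"])
    # Log the attempt either way, so auditors can review denied attempts too.
    access_log.append({"badge": badge_id, "door": door, "granted": granted})
    return granted

print(request_access("badge-1041", "4821", "data-center"))  # True
print(request_access("badge-1041", "0000", "data-center"))  # False
```

Note that access is denied unless both factors check out, and that the log records failures as well as successes, which is what makes the audit trail useful.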
Hardware Controls Hardware controls are built-in controls designed to detect and report hardware errors or failures. Hardware is becoming more reliable but is still
a possible source of errors. After determining the existence of hardware controls, auditors should put more effort into finding out how the organization reacts to hardware errors than into checking the controls themselves. The controls will report the issue but will not fix the resulting output errors, so a process needs to be in place. The following are types of hardware controls:
• Redundant character check. Each transmitted data element receives an additional bit (character) of data mathematically related to the data. Abnormal changes will void the mathematical relationship.
• Equipment check. These are circuitry controls that detect hardware errors.
• Duplicate process check. A process is done twice and results are compared.
• Echo check. Received data is returned to the sender for comparison.
• Fault-tolerant components. Fault-tolerant components have redundancies in hardware or software to allow continued operations if a system fails.
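The redundant character check can be illustrated with the simplest case, a single even-parity bit. This is a minimal sketch of the idea, not how any particular hardware implements it:

```python
# Even parity: append one bit so the total count of 1 bits is even.
# Any single flipped bit then "voids the mathematical relationship."
def add_parity(bits: str) -> str:
    return bits + ("1" if bits.count("1") % 2 else "0")

def parity_ok(frame: str) -> bool:
    return frame.count("1") % 2 == 0

frame = add_parity("1011001")   # four 1s already, so parity bit is "0"
print(parity_ok(frame))         # True

corrupted = "0" + frame[1:]     # simulate a single flipped bit in transit
print(parity_ok(corrupted))     # False -> the check reports an error
```

A limitation worth noting: a single parity bit detects any odd number of flipped bits but misses an even number, which is why stronger redundant checks (checksums, CRCs) are used in practice.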
System and Data Backup and Recovery Controls Backup methodologies include the grandfather-father-son concept, in which the son is the most recent backup followed by the father and grandfather backups. As a new backup is made, it becomes the new son, the old son becomes the father, and so on. The old grandfather may be marked for overwriting. The number of generations retained is set by policy. The organization defines a backup period for a particular data set (hourly, daily, monthly), determined by the frequency with which the data changes. For example, payroll data that is changed twice a month would need biweekly updates. Different permanent or secondary storage devices exist, but they can be classified generally by how they access data. Sequential access means that the data must be accessed in the order it was recorded, such as for tape
storage. Note that tape storage for backups is becoming rarer as cloud backups become more common, but the medium is still in use. Direct or random access means that the system can go to any location for faster retrieval, such as for magnetic and optical disks. Another differentiator is whether the system is designed only for full-volume backup or whether it allows incremental backup of just the changes.
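The grandfather-father-son rotation described above can be sketched as follows. This is an illustrative Python sketch (three generations, per the description); a real tape management system tracks far more metadata per volume.

```python
from collections import deque

class GfsRotation:
    """Keep the N most recent backups; the oldest ("grandfather")
    falls off and is marked for overwriting."""

    def __init__(self, generations: int = 3):
        # deque with maxlen discards the oldest entry automatically
        self.tapes = deque(maxlen=generations)

    def record_backup(self, label: str):
        # Capture which tape (if any) is about to be displaced.
        displaced = self.tapes[0] if len(self.tapes) == self.tapes.maxlen else None
        self.tapes.append(label)  # the new backup becomes the "son"
        return displaced          # tape now eligible for reuse, or None

rotation = GfsRotation()
reusable = None
for day in ["mon", "tue", "wed", "thu"]:
    reusable = rotation.record_backup(day)

print(list(rotation.tapes))  # ['tue', 'wed', 'thu']  ('thu' is the son)
print(reusable)              # mon  (the old grandfather, marked for overwriting)
```

The number of generations retained would come from the retention policy, as the text notes; passing a different `generations` value models that.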
Off-Site Storage

Data should be backed up to an off-site storage facility physically distant from primary operations to keep area catastrophes from affecting both sites. Physical controls for an off-site storage facility might include:
• Revealing the location of the facility to as few people as possible.
• Ensuring that the outside of the facility does not reveal its purpose or use.
• Securing all access points and eliminating windows.
• Providing appropriate controls on environmental conditions (e.g., raised platforms, waterproofing, fire alarms, and climate monitoring and control).
• Keeping inventory of the contents.
Cloud Backup The use of cloud-based backup methods satisfies the physical distance and secret location criteria, because the cloud is a network of distributed databases and servers in which data is placed wherever there is available capacity rather than having designated storage areas. In this method, backups are electronically transmitted to the cloud, which could be internally owned or a third-party system. Internally owned clouds need to ensure that the physical distance criterion is satisfied for backups.
Electronic Vaulting Electronic vaulting involves electronically transmitting changes to data to an off-site facility and then creating backup long-term storage, eliminating physical transportation. It is a hybrid solution, combining physical off-site
vaulting with electronic journaling. Electronic journaling is a log of the transactions or changes that have happened since the last regular backup. The recovery point is the time after the last safe backup up to the point of failure. Traditional daily off-site backups offer, at worst, a recovery point between 24 and 48 hours. For businesses that see this delay as an unacceptable risk, electronic vaulting can provide a shorter recovery point.
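Electronic journaling and the shortened recovery point can be illustrated with a minimal sketch. The account data and journal format below are invented for the example:

```python
# The last full off-site backup (could be up to a day old).
last_backup = {"acct-1": 100, "acct-2": 250}

# Electronic journal: changes transmitted off-site since that backup.
# With journaling, the recovery point is the last journaled change,
# not the (much older) last full backup.
journal = [("acct-1", 120), ("acct-3", 75)]

def recover(backup: dict, journal_entries: list) -> dict:
    state = dict(backup)              # restore the full backup first
    for key, value in journal_entries:
        state[key] = value            # then replay each logged change
    return state

print(recover(last_backup, journal))
# {'acct-1': 120, 'acct-2': 250, 'acct-3': 75}
```

Everything journaled before the failure is recovered; only changes made after the last journal transmission are lost, which is why vaulting narrows the recovery point compared with daily tape rotation.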
Backup Data Controls In addition to physical and logical security, backup systems need to have a methodology for labeling and storing backups and application library items if they are in physical form such as tape, CD, or disk. The labels should be internal (digital) and external (physical) and use a logical file-naming convention to prevent files from being deleted accidentally. This will prevent restoration delays or inadvertent restoration to the wrong point. In addition, the methodology should cover rotating the files from the data center to an off-site location. Large data centers may use a tape management system rather than external labels. The tape management catalog itself must be backed up to prevent disruption of the process. If the backup is on permanent disks, the operating system manages the backups. Such systems need to be closely monitored for disk capacity, and files not used for a given period should be purged and stored instead in the cloud or perhaps on tape backups. To safeguard against storage media failure, critical data should be stored on two separate types of media.
Ethics in Data Storage Data storage becomes an ethical issue if data needed for audits or evidence of compliance is deleted. Electronic data such as emails are considered legal evidence (in the U.S., this is covered under the Federal Rules of Evidence), and some companies have received large fines for denying access to or deleting such evidence. Other issues include safeguarding data for privacy. Internal auditors need to develop an awareness of these and other ethical implications when assessing and providing assurance or consulting in relation to the IT security and control environment.
IT Operational Controls

IT operational controls include planning controls; policies, standards, and procedures; data and program security; insurance and continuity planning; and controls over external service providers (vendor risk management).

Segregation of IT duties should follow the IAM (identity and access management) principle of allowing access only if the job function requires it. Information on applications also needs to be restricted. Initiation, authorization, input, processing, and validation of data should all be done by different individuals and often by different departments. The other basic separation is between systems development and operations. Programming and change deployment should be organizationally and physically separate from users with access to production systems, and neither should be able to do the others’ tasks. Neither should have access to file libraries (a function of a system librarian) or input/output controls (a function of the systems controller). Other segregations include systems analysis, IT audit, and data entry. Smaller organizations may not have the luxury of this level of segregation of duties, but, if this is the case, combined roles require greater scrutiny. Inadequate segregation of duties could heighten the potential for the commission of fraud, including misappropriation of assets and fraudulent financial reporting or statements. It could also result in data tampering and loss of data privacy.

Operational controls might involve:
• Ensuring that adequate audit trails exist.
• Reviewing exception reporting and transaction logs.
• Minimizing the number of users with administrative privileges.
• Using software tools and direct observation by supervisors to monitor the activities of users with administrative privileges.
• Setting policy guidelines for all employees to take a certain number of minimum consecutive days off at least annually, for example, as vacation, with special emphasis and/or required job rotations for persons with sensitive roles or access privileges, such as systems controllers.
• Separating testing environments and production environments by formal data migration processes.
• Ensuring that employees with physical custody of assets do not have access to the related computer records or have any other related authorization rights or privileges.

Audit trails log the functions performed and the changes made in a system, including who made the change and when. The trail is either kept in a separate file or sent to the system activity log file. The audit trail must be secure from as many users as possible, and access restrictions should be reviewed. For example, an audit log could show repeated incorrect password entries to investigate. Comparisons of users to their activities can highlight unusual activities. Use of sensitive or powerful command codes should be reviewed.

Preventive maintenance should be performed on hardware and software systems and on their controls, because doing so is almost always less expensive than dealing with problems arising from poor maintenance. An operations control group should also be formed to monitor the results of production, including record keeping and balances of input and output.
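An audit-log review for repeated incorrect password entries, as described above, might be sketched like this. The log format and the alert threshold are assumptions for the example; real systems have their own log schemas.

```python
from collections import Counter

# Invented audit-log entries; a real log would include timestamps,
# source addresses, and more event types.
audit_log = [
    {"user": "jdoe", "event": "login_failed"},
    {"user": "jdoe", "event": "login_failed"},
    {"user": "jdoe", "event": "login_failed"},
    {"user": "asmith", "event": "login_ok"},
    {"user": "asmith", "event": "login_failed"},
]

def flag_repeated_failures(log: list, threshold: int = 3) -> list:
    """Return users whose failed-login count reaches the threshold."""
    failures = Counter(e["user"] for e in log if e["event"] == "login_failed")
    return [user for user, count in failures.items() if count >= threshold]

print(flag_repeated_failures(audit_log))  # ['jdoe']
```

The same counting approach extends to the other review items mentioned, such as tallying uses of sensitive command codes per user to highlight unusual activity.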
Operational Data Security Controls In addition to controls for the backup of data, organizations need controls over data as it is being used. In general, data security must be maintained:
Data policies are enforced through data standards, which define how things need to be done to meet policy objectives. Enforced standards keep systems functioning efficiently and smoothly. Standards should be set for systems development processes (see Section III, Chapter 1), software configuration,
application controls, data structures, and documentation. All of these relate to data security, but only standards for data structures are covered here. Data structure standards are rules for consistency of data definitions, or the programming tags that define what a data item is used for and its place in a data hierarchy. If all applications use the same data standards, seamless interfaces can be created and security controls will be uniformly applied regarding data privacy and security. Some controls over data security have already been mentioned. A few others are covered briefly here. End-user training in the proper use of email and the Internet is important but should be backed up by logical controls such as not allowing end users to install new software. Applications should be safeguarded by keeping them in computer program libraries, which should be restricted by physical and logical access controls. Another example of data security is ensuring that deleted files are really deleted. This can be accomplished through special file deletion software or through physical means, such as electromagnetic wiping. This should be performed on any hard drives or backup tapes being resold.
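The file-deletion control described above can be sketched as follows. This is a simplified illustration only: real secure-deletion tools must also account for wear leveling on solid-state media, file system journaling, and slack space, and physical or electromagnetic wiping remains appropriate for media being resold.

```python
import os

def overwrite_and_delete(path, passes=3):
    """Toy secure delete: overwrite the file with random bytes several
    times before unlinking it, so the contents are not trivially
    recoverable from the disk. Illustrative only."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random data
            f.flush()
            os.fsync(f.fileno())       # force the write to the device
    os.remove(path)

# Demonstration with a hypothetical file name
with open("secret.tmp", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.tmp")
```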
Other Considerations in Systems Security Security Levels Not every system needs the highest level of security. The cost of the security measures should be commensurate with the level of risk mitigation required. To determine appropriate network security levels, first the organization must assess its data repositories and assign security risk levels. This can be done by categories of data, but the highest security data in a database defines the security level. Assessing the availability, integrity, and confidentiality requirements for a group of data is a start. Security for vital assets, such as key R&D project data, is also elevated. The data could be categorized, for example, as low, medium, or high. • Low security data is data that would not have a great deal of impact in terms of reputation or productivity losses or lost assets if it were
compromised. Note that even low security data must be safeguarded. Data on public servers such as web pages fits in this category. Extraordinary measures aren’t necessary. • Moderate security levels are used for data that would have a serious impact on the organization’s mission and could cause market losses if stolen. Major damage to assets or resources could occur. Most data for an organization fits into this category, including enterprise resource planning (ERP) data, data required to comply with government agency requests, and personal data such as medical records. • High security data is data that, if compromised, could cause the organization to be in jeopardy of catastrophic losses to reputation, productivity, or market share, for example, contingency plan data listing off-site storage locations, loss of R&D data to a competitor, or accumulated evidence for a court trial. Once the security level of the data is known, a multi-tiered security system can be designed, including provisions for physical, software, program library, and application security. Security levels must be customized to the particular organization and its risks. Low security would still have firewalls, hardware locked in a data center, and off-site or cloud backup storage. Moderate security would include all of the low security items plus items such as electronic vaulting or a redundant data center. High security would also include biometric devices, perhaps a physical security checkpoint, and other considerations.
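The classification rule described above, where the highest security data in a database defines the security level for the whole repository, can be sketched as follows. The numeric ratings and category names are illustrative assumptions.

```python
# Each data item is rated 0 (low) to 2 (high) on confidentiality,
# integrity, and availability; the worst rating governs the item,
# and the worst item governs the repository.
LEVELS = ["low", "moderate", "high"]

def item_level(confidentiality, integrity, availability):
    """The highest of the three C-I-A ratings sets the item's level."""
    return LEVELS[max(confidentiality, integrity, availability)]

def repository_level(items):
    """A repository inherits the highest level of any item it holds."""
    return max((item_level(*i) for i in items), key=LEVELS.index)

# Hypothetical repositories matching the categories above
public_web = [(0, 0, 0)]                 # public web pages
erp = [(1, 1, 1), (0, 0, 1)]             # ERP and personal data
contingency = [(2, 2, 1), (1, 1, 1)]     # off-site storage locations, etc.
```

Once each repository has a level, the multi-tiered protections described below (firewalls for low, electronic vaulting for moderate, biometrics for high) can be mapped to it.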
Computer Forensics (e-Discovery) Computer forensics is a scientific discovery process applied to computer records, needed for information to be admissible evidence in a court. When fraud or material misstatements are suspected, the organization may need to delegate discovery to computer and physical forensics specialists. Computer forensics attempts to discover three things: how, why, and who. Finding out how a fraud was committed can lead to determination of likely motives. Understanding possible motives and the required level of access or computer proficiency will lead to a list of suspects.
A mirror image backup, or bit stream backup, is an exact copy of a hard drive, primarily used for forensic auditing, not as a way of backing up data for recovery. Properly trained forensic auditors must be used to avoid corrupting the data that needs to be studied.
Role of IT in Control Self-Assessment Control self-assessment (CSA) presumes that the scope of control for an organization is so broad and continually changing that it takes the efforts of the entire organization to make a timely and adequate assessment. CSA generally takes place in group settings, not in an individual survey form. However, once CSA teams have met and compiled a list of issues, they can use an intranet survey or electronic voting technology to vote on the issues that they think need to be addressed. The conclusions of the CSA should be reported to participants as soon as possible, with IT potentially being able to help speed distribution.
Topic B: User Authentication and Authorization Controls (Level B) The risks of failing to properly authenticate users or systems or to provide proper authorization controls include but are not limited to the following: • Inappropriate employee or contractor access to confidential information (e.g., payroll) • Access from external persons or entities into organizational information systems to steal proprietary information (e.g., patented formularies for drugs at a pharmaceutical company); modify, corrupt, or encrypt data; install malware or spyware; gain access to other systems; or delete information • Compliance risk such as material breach of privacy • Loss of customer trust (reputation risk) and loss of market share (market risk) User authentication and authorization controls for applications are sometimes called application authentication. With application authentication, a software application is able to grant access only to authorized users or systems and prevent unauthorized access. As with physical access authentication, user authentication can require up to three levels of authentication, which is discussed next. Application authentication also depends on implementing logical access controls, which are basically a framework for allocating appropriate access.
Levels of Authentication The three basic levels, or factors, for authenticating an individual to provide physical access, access to a device, or access to an application are: • Something the person has, such as a key, a keycard/badge, a credit card, a cryptographic key, or a registered mobile device.
• Something the person knows, such as a user name and alphanumeric password or a numeric code. • Something unique to the individual, in other words, a biometric trait (e.g., fingerprint). One form of application authentication, possible in Microsoft Windows, for example, is the creation of role-delimited accounts for authorized users with required identification (something the person knows). Web applications can also authenticate users, who may be assigned to roles, such as customer, user, manager, etc., and assigned a log-in code, which is sent to the web server for verification. This verification process creates an audit trail. As described in the previous topic, greater security may be provided by increasing the complexity of one of these levels or by requiring two or more levels. Two-level (or two-factor) authentication is usually “adequate to meet the highest security requirements,” according to NIST Special Publication 800-63, “Digital Identity Guidelines.” (NIST is a U.S. national standards-setting body). A common example of two-level identification for some types of access is a person entering a password (something he or she knows) but also receiving an access code on a mobile device (something that is registered to him or her). Many mobile devices and laptops also now have built-in fingerprint or facial recognition as an alternate level of authentication.
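The common two-factor example above, a password plus a code sent to a registered device, can be sketched as follows. The salted-hash scheme, names, and sample values are illustrative assumptions, not a specific product's method.

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt):
    """Store only a salted, slow hash of the password, never the password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment (illustrative values)
salt = secrets.token_bytes(16)
stored_hash = hash_password("Tr0ub4dor&3", salt)

# One-time code issued to the user's registered device (something they have)
issued_code = secrets.token_hex(3)

def authenticate(password, code):
    """Both factors must pass: something known AND something possessed."""
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(code, issued_code)
    return knows and has
```

Note the use of constant-time comparison (`hmac.compare_digest`) so that response timing does not leak how much of a guess was correct.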
Digital Signatures Another type of user authentication is a digital signature, which uses public key encryption (discussed in the next topic) and a hashing algorithm, which produces a digest of the transmitted data, to prevent a message from being reconstructed or altered. It provides not only user authentication; it also provides proof of message integrity and nonrepudiation, because the digital signature is an encrypted digest computed over the entire message being sent or received. Digital signatures carry the same legal standing as physical signatures in the U.S. and in many other countries. They rely on something the person has (an application or an account in a cloud-based system designed to generate digital signatures, which stores a private key for the person) and, usually, something a person knows (a password used to access the private key). Private keys are described in the next topic.
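The hash-then-sign mechanism can be illustrated with textbook RSA on deliberately tiny numbers. This is a teaching sketch only: the key values are far too small for real use, and production systems rely on vetted cryptographic libraries rather than hand-rolled math.

```python
import hashlib

# Toy RSA key pair (illustrative, insecure by design)
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % lcm(p-1, q-1) == 1

def digest(message):
    """Hash the message, reduced into the toy modulus's range."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message, private_key):
    """Signing = encrypting the message digest with the PRIVATE key."""
    return pow(digest(message), private_key, n)

def verify(message, signature, public_key):
    """Anyone with the PUBLIC key can check digest and signature agree."""
    return pow(signature, public_key, n) == digest(message)

msg = b"Approve payment of 1,000"
sig = sign(msg, d)
```

Because only the private-key holder could have produced a signature that verifies, the receiver gets authentication, message integrity (any alteration changes the digest), and nonrepudiation.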
Logical Access Controls Logical access controls are the ways that computer program logic can identify authorized users—a challenging task in a large and complex enterprise in which many groups must have access to data. The various policies, procedures, activities, and technologies used to identify authorized users comprise a process called identity and access management (IAM). The Practice Guide “Identity and Access Management” (previously Global Technology Audit Guide 9 [GTAG® 9]) poses three fundamental questions whose answers should inform access decisions and management: • Who has access to what information? • Is the access appropriate for the job being performed? • Are the access and activity monitored, logged, and reported appropriately? The IAM process is designed to allocate identities and provide appropriate access. An “identity” is defined as a unique descriptor (or combination of descriptors) of a person or machine—for example, a name, a password, an ID number, or a biometric identifier. Proper identity provides access to information systems and data. “Access” may be defined as the right to perform certain transactions (e.g., copying or transferring data). These access rights are termed the user’s “entitlements.” Three processes are involved in an IAM system: • Provisioning. The most visible aspect of IAM is provisioning—the creation, changing, termination, validation, approval, propagation, and communication of an identity. • Identity management. Identity management refers to the establishment, communication, and management of IAM strategies, policies, and processes. It entails monitoring, auditing and reconciling, and reporting system performance.
• Enforcement. Enforcement occurs automatically, through processes or mechanisms, as identities are authenticated and authorized and activity logged. Exhibit II-3 illustrates the way in which the IAM process manages identity and access. Exhibit II-3: IAM Process
Source: Practice Guide “Identity and Access Management.”
The primary logical access control is password authentication. Authentication techniques include digitally enforcing use of alphanumeric passwords, enforced password changes, and password management such as deleting unused passwords and user accounts (provisioning) or detecting user accounts that have no password or use a default password. Unlike a physical signature, use of a valid password doesn’t prove the authenticity of a user. Authentication can be reinforced by a physical device such as an access card or by software designed to recognize a user’s keystrokes. Also, password protection can be bypassed if there are other access points, such as a logical/software backdoor created by a flaw in design or on purpose. End-user security training can make a huge difference to application authentication security. Password and log-on methodology training teaches
users to avoid common mistakes. Users will be trained to avoid storing their password near their computer or using easily deduced passwords such as their child’s name or the word “password.” Under the concept of least privilege, users and/or departments are assigned roles or profiles granting them access only to areas where there is a genuine business need. Access rights are based on a role name set in a hierarchy, which should be audited to see if roles are too broad and some users get unnecessary rights. Roles can be used to enforce laws and regulations, such as preventing a nurse role from creating prescriptions. Finally, roles can allow for some users to have read-only access (no modifications). Other logical access controls include: • Automatic log-off procedures. • Monitoring and controlling access to computers with remote control privileges (e.g., help desk). • Access logs (application and Internet logs). • Single-use access codes or codes with defined start and end dates for contractors.
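The least-privilege, role-based model described above can be sketched as a simple mapping from roles to entitlements, with a default-deny check. The role names, entitlements, and user IDs are illustrative assumptions; they echo the example of preventing a nurse role from creating prescriptions and of read-only roles.

```python
# Each role carries only the entitlements its job genuinely needs.
ROLE_ENTITLEMENTS = {
    "nurse": {"read_chart", "update_vitals"},
    "physician": {"read_chart", "update_vitals", "create_prescription"},
    "auditor": {"read_chart"},   # read-only access: no modifications
}

# Hypothetical user-to-role assignments
USER_ROLES = {"nlee": "nurse", "dkim": "physician", "avega": "auditor"}

def is_authorized(user, action):
    """Default deny: allow only if the user's role carries the entitlement."""
    role = USER_ROLES.get(user)
    return action in ROLE_ENTITLEMENTS.get(role, set())
```

An audit of the role hierarchy would review `ROLE_ENTITLEMENTS` for roles that are too broad and `USER_ROLES` for assignments that outlived a genuine business need.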
Topic C: The Purpose and Use of Various Information Security Controls (Level B) Information protection is a management responsibility. This responsibility includes all the critical information of the organization, regardless of how the information is stored. The internal audit activity should ensure that: • Management recognizes this responsibility. • The information security function is effective in preventing and detecting breaches. • Management is aware of any faulty security provisions. • Corrective measures are taken to resolve all information security problems. • Preventive, detective, and corrective measures are in place to ensure information security.
Elements of Information Protection An organization’s data can be one of its most important assets. As such, information security is a critical control. There are three universally accepted elements of information security: • Confidentiality. Policies and practices for privacy and safeguarding confidential information and protections against unauthorized access or interceptions. • Integrity. Provisions to ensure that data is complete and correct, including how it relates to financial reporting. • Availability. Actions to ensure that there is very little downtime and to enhance recovery of data after disruptions, disasters, and corruptions of data/services. IT general controls and application controls such as passwords and
privileges are the basis for information protection. Information security is the foundation for most other IT controls, and it has two aspects: data and infrastructure. Data security should ensure that only authorized users can access a system, their access is restricted by user role, unauthorized access is denied, and all changes to computer systems are logged to provide an audit trail. Security infrastructure can be part of end-user applications, or it can be integral to servers and mainframes, in which case it is called security software. When the focus on security is primarily at the application level, such as for small environments, user access and role-based access controls are generally strong, but controls over expert programmers tend to be weak. Security software resides at the server, client, or mainframe level and provides enhanced security for key applications. One typical control provided by security software is allowing only certain transactions to be entered at specific terminals, such as being able to change the list of authorized employees only from within the payroll department. Such terminals can also be set to be available only during normal business hours, automatically time out, or require reentry of a password for each transaction. Finally, such systems can tell users when they last accessed the system so they can know if their user ID is being used illicitly. Errors introduced into a computer system can be just as costly as malicious attacks. One key control that will help is setting a clear policy on the use of hardware and software and training personnel to address the most common errors. The policy should also address ethics, such as computers being used for personal goals or even illegal acts.
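The terminal-restriction control described above, allowing the payroll list to be changed only from payroll-department terminals during normal business hours, can be sketched as follows. Terminal IDs and the business-hours window are illustrative assumptions.

```python
from datetime import time

# Hypothetical terminals located within the payroll department
PAYROLL_TERMINALS = {"PAY-01", "PAY-02"}
BUSINESS_HOURS = (time(8, 0), time(17, 0))  # illustrative window

def may_change_payroll(terminal_id, now):
    """Permit the sensitive transaction only from an authorized terminal
    and only during normal business hours."""
    start, end = BUSINESS_HOURS
    return terminal_id in PAYROLL_TERMINALS and start <= now <= end
```

Combined with automatic timeouts and per-transaction password reentry, a rule like this narrows both where and when a sensitive transaction can occur.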
Internal Auditing and Vulnerability Management Internal audit may perform an assessment of information vulnerabilities and follow with recommendations for improvements related to information security and vulnerability management. Internal auditors should assess the effectiveness of preventive, detective, and mitigation measures against past attacks, as deemed appropriate, and future attempts or incidents deemed likely to occur. They should confirm that the board has been appropriately informed of threats, incidents, vulnerabilities exploited, and corrective
measures. The Practice Guide “Managing and Auditing IT Vulnerabilities” (previously Global Technology Audit Guide 6 [GTAG® 6]) lists six indicators of poor vulnerability management: • A higher-than-acceptable number of security incidents within a given time period • An inability to identify IT vulnerabilities systematically, resulting in exposure of critical assets • An inability to assess risks associated with vulnerabilities and to prioritize mitigation efforts • Poor working relationships between IT management and IT security • Lack of an asset management system • Lack of a configuration management process integrated with vulnerability mitigation efforts To improve management of vulnerabilities, this document recommends: • Enlisting senior management support consistent with the enterprise’s risk appetite. • Inventorying all IT assets and identifying their associated vulnerabilities. • Prioritizing mitigation/remediation steps according to risk. • Remediating vulnerabilities by presenting planned work projects to IT management. • Continually updating asset discovery, vulnerability testing, and remediation processes. • Using automated patch management and vulnerability discovery tools as much as possible. These steps are represented in the vulnerability management life cycle, a
process for managing IT vulnerabilities, shown in Exhibit II-4. Exhibit II-4: Vulnerability Management Life Cycle
Source: Practice Guide “Managing and Auditing IT Vulnerabilities.”
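The risk-based prioritization step recommended above can be sketched as a simple scoring pass over an asset inventory. The scoring scales (asset criticality times vulnerability severity) and the sample findings are illustrative assumptions, not the Practice Guide's prescribed formula.

```python
def prioritize(vulns):
    """Rank vulnerabilities so the highest risk score is remediated first."""
    return sorted(vulns,
                  key=lambda v: v["criticality"] * v["severity"],
                  reverse=True)

# Hypothetical findings from asset discovery and vulnerability testing
findings = [
    {"id": "V-1", "asset": "web server",  "criticality": 2, "severity": 3},
    {"id": "V-2", "asset": "ERP system",  "criticality": 5, "severity": 4},
    {"id": "V-3", "asset": "test server", "criticality": 1, "severity": 5},
]
```

Ranking by risk rather than by raw severity keeps remediation effort pointed at critical assets, which is the point of integrating asset management with vulnerability mitigation.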
The following are examples of various information security controls that can be used to manage IT vulnerabilities.
Encryption Encryption uses a mathematical algorithm to scramble data so that it cannot be unscrambled without a numeric key code. Encryption is used on stored and physically transmitted data (e.g., on a flash drive) and electronically transmitted data. Server access control is the use of internally encrypted passwords to keep technical persons from browsing password files. Wireless data can also be encrypted to prevent compromise if it is
intercepted. Two basic types of encryption exist: private key encryption and public key encryption. • Private key encryption (or symmetric key encryption) is a method where a sender creates an encryption key and sends it to a trusted recipient, who can use it to decrypt all messages in that session. The method of sharing the key needs to be controlled, since the key could be intercepted in transit (though the key might itself be encrypted). Poor controls at the receiver’s end could allow the key to be compromised as well. The advantage of private key encryption is its simplicity: There is only one key for both encryption and decryption. • Public key encryption (or asymmetric key encryption) is more secure. Public key methods create two keys, a private key and a public key. The sender places the public key in a directory, or an application automatically applies it to lock sent data. To decrypt the data, the private key must be used. The private key needs controls to keep it secret, but since it never needs to be shared, it is more secure. The public key is known by many, but the private key is known by only one system. Public key encryption is generally used for brief messages since it is resource-intensive. Another consideration is the number of users of the public key. If the key needs to be changed, all of these users must be informed and the new key distributed. Digital signatures verify the authenticity of a user of a public key (including non-repudiation) and the integrity of the message itself. Similarly, a server certificate can establish the authenticity of a site. The relative security of a key is determined by its bit length. When passwords are used to create keys, effective password creation rules must be applied. External aids include cryptographic module testing (CMT) labs and validation programs for cryptographic modules and their algorithms. To illustrate public and private key encryption, review Exhibit II-5, which presents both of these processes.
Exhibit II-5: Public and Private Key Encryption
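The contrast between the two methods can also be shown in a toy sketch. The XOR cipher and the tiny RSA numbers below are for illustration only; real systems use vetted algorithms (such as AES for symmetric and RSA or elliptic-curve schemes for asymmetric encryption), and the key values are illustrative assumptions.

```python
def xor_cipher(data, key):
    """Private (symmetric) key: the SAME key both encrypts and decrypts,
    so the key itself must be shared securely."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"s3cr3t"                       # must be transmitted securely
ciphertext = xor_cipher(b"wire $500", shared_key)

# Public (asymmetric) key: data locked with the public key (n, e) can be
# opened only with the private key d, which is never shared.
n, e, d = 3233, 17, 2753                     # textbook RSA, tiny modulus
plaintext_number = 42
locked = pow(plaintext_number, e, n)         # anyone can encrypt
```

The symmetric round trip needs the one shared key; the asymmetric round trip lets anyone encrypt while only the private-key holder can decrypt, which is why key distribution is the symmetric method's main weakness.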
Auditing Issues Evaluating encryption includes evaluating physical controls over computers that have password keys, testing policies to see if they are being followed, and implementing and monitoring logic controls. Each security domain should be able to share its local identity and security data without compromising its internal directories.
Firewalls Perpetually available broadband connections need constant monitoring. A firewall is a hardware/software combination through which all communications to or from the outside world are routed; the firewall compares access rules (controlled by network administrators) against the IP addresses, names, files, and applications attempting to enter the system and
blocks unauthorized traffic. Firewalls can: • Improve security by blocking access from certain servers or applications. • Reduce vulnerability to external attacks (e.g., through viruses) and ensure IT system efficiency by limiting user access to certain sites. • Provide a means of monitoring communication and detecting external intrusions (through intrusion detection systems, described below) and internal sabotage. • Provide encryption internally (within an enterprise). Corporate firewalls are often multi-tiered: A firewall is placed before the web server and any other public access servers, and another firewall is placed between the public access servers and the private network areas. Additional firewalls can be used to protect sensitive data such as payroll. An organization’s firewalls should be installed on dedicated hardware that has no unnecessary software. Several types of firewalls exist. They can be located at the network or transport layers. These are layers 3 and 4 of the seven layers of the Open System Interconnection (OSI) model, which is a framework used to describe how a network is built and where security can be located, from the physical layer of wires and hardware (layer 1) up to the end-user application layer (layer 7). The following are descriptions of layer 3 and 4 firewall types. • Packet filtering. This type of firewall works by comparing source and destination addresses to an allowed list, specifically examining headers and other fields in packets of data. Because it examines packets in isolation, it can miss many types of attacks. Packet filtering can be enhanced in the following ways: • Stateful inspection. This firewall enhances packet filtering by monitoring packet flows in general. State tables are used to track the data flowing through multiple packets, and the firewall analyzes whole conversations for appropriateness and legitimacy. • Network address translation (NAT). Firewalls with packet filtering and stateful inspection can use NAT to hide their internal host computer IP addresses from packet sniffer utilities (a software monitoring tool that captures and logs web-browser-to-web-server requests and responses). • Gateways. A gateway firewall stops traffic flowing to a specific application such as file transfer protocol (FTP), e.g., rules may block outgoing FTPs but permit incoming FTPs. One common type of gateway is the application gateway/proxy server. Proxy servers are an intermediary for communications between the external world and private internal servers. They intercept external packets and, after inspection, relay a version of the information, called a proxy, to private servers, and vice versa. Proxy servers are specific to an application. Auditors need to determine if firewalls can be bypassed or the controls overridden by alternative transactions. User prompts for allow/deny communications can be the most risky. Auditors should work with the network administrator to determine the efficacy of a firewall, how specific its rules are, and whether the lists of acceptable users, IP addresses, and applications are kept up-to-date such as by promptly removing terminated employees. Because a firewall is a chokepoint, it can be used to audit controls or trace the source of an incoming attack. Firewall logs could be used as legal audit evidence if the data was collected, processed, and retained properly. A firewall has limitations. For example, data can still be stolen via telephone, CD, DVD, or USB flash drive. Employees or visitors could have a conflict of interest (industrial espionage), or they could simply be gullible and “help” someone by providing access. Firewalls can be configured incorrectly; they can also be circumvented by using a personal modem on a voice line. Auditors should assume that firewalls are always being probed for weaknesses and that they cannot prevent all attacks.
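The packet-filtering comparison described above can be sketched as a default-deny check of each packet's source, destination, and port against an allow list. The addresses use RFC 5737 documentation ranges and the rules are illustrative assumptions.

```python
# Allow list compared against packet headers (source prefix,
# destination host, destination port). Illustrative rules only.
ALLOW_RULES = [
    ("203.0.113.", "198.51.100.10", 443),   # HTTPS to the web server
    ("203.0.113.", "198.51.100.25", 25),    # SMTP to the mail gateway
]

def filter_packet(src, dst, port):
    """Default deny: forward the packet only if an allow rule matches.
    Like a real packet filter, this examines each packet in isolation."""
    return any(src.startswith(prefix) and dst == host and port == p
               for prefix, host, p in ALLOW_RULES)
```

Because the check looks at each packet alone, it can miss attacks spread across a conversation, which is the gap that stateful inspection's state tables are meant to close.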
DMZs (demilitarized zones, a term borrowed from military jargon) are portions of a network that are part of neither the Internet nor the internal network, such as the segment between the Internet access router and the host. If the access router has an access control list, it creates a DMZ that allows only recognized traffic even to contact the host.
Intrusion Detection/Prevention Systems Systems are now vulnerable through the multiple browsers at the application layer (layer 7 of the OSI model) of a network. Normal firewalls cannot process the vast amount of data at this layer. Intrusion detection systems (IDSs) reside at layer 7 to monitor systems for intrusions. An IDS combined with an application layer firewall is called an intrusion prevention system (IPS). Host IPS (HIPS) software functions at the operating system kernel level to detect and block abnormal application behavior before it executes. HIPS assumes that abnormal behavior is an unknown form of attack. Network IPS (NIPS) are hardware and software systems on a network that analyze incoming packet content, dropping malicious packets. These types of intrusion detection/prevention systems usually are more conservative than other types of firewalls and provide more detailed reports.
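The detect-versus-prevent distinction above can be sketched with a simple signature-based check of application-layer payloads: an IDS only alerts on matches, while an IPS also drops the traffic. The signatures shown are simplified illustrations of common attack patterns, not a real product's rule set.

```python
# Simplified attack signatures: directory traversal, script injection,
# and a classic SQL injection fragment. Illustrative only.
SIGNATURES = [b"../../", b"<script>", b"' OR '1'='1"]

def inspect(payload, prevent=False):
    """Return (allowed, alerts). With prevent=False this behaves like an
    IDS (alert but allow); with prevent=True, like an IPS (alert and drop)."""
    alerts = [sig for sig in SIGNATURES if sig in payload]
    allowed = not (prevent and alerts)
    return allowed, alerts

ok, clean_alerts = inspect(b"GET /index.html", prevent=True)
blocked, hits = inspect(b"GET /../../etc/passwd", prevent=True)
```

Real NIPS devices perform far deeper packet analysis, and HIPS works at the operating system kernel rather than on network payloads, but the alert-versus-drop decision is the same.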
Controls for Malicious Software (Malware) Malware is malicious software designed to gain access to a computer system without the owner’s permission for the purpose of controlling or damaging the system or stealing data. While the public perception of malware perpetrators is of computer-savvy teens with only mischief as a motive, the actual situation is much worse. Writing malware is now a lucrative organized-crime business. According to a Malwarebytes white paper titled “The New Mafia: Gangs and Vigilantes,” malware that targets businesses is on the rise. For example, the rate of ransomware attacks shot upward by 289% in 2016 from the prior year. These professional criminals have profit as a motive, and therefore the types of attacks that are increasing are those that gain unrestricted access to user systems and data or gather network passwords and financial data. Purely destructive malware is becoming relatively less common. Also, while malware used to be confined mostly to the Microsoft® platform, the growth in popularity of other platforms such as .NET® and Linux® correlates to growing attacks on these systems. Types of malware include the following:
• VirWare. VirWare is a grouping of malware that includes viruses, worms, and ransomware: • A virus attaches itself to storage media, documents, or executable files and is spread when the files are shared with others. One type is a macro virus, which uses the macro function of software such as Microsoft Word® to create executable code. In response, Microsoft created new file extensions to indicate whether a file could contain macros (e.g., .xlsx—no macros allowed, .xlsm—macros allowed). • Worms are self-replicating malware that can disrupt networks or computers. Unlike a virus, a worm does not attach itself to an existing program or to code. It spreads by sending copies of itself to terminals throughout a network. Worms may act to open holes in network security. They may also trigger a flood of illegitimate denial-of-service data transmissions (in which a system is flooded with false requests or messages from many sources) that take up system bandwidth. • With ransomware, software encrypts all of the files on a computer or network of computers and the criminal party sends the user a demand indicating that the encryption key won’t be released unless a payment is made quickly, usually through a cryptocurrency. Avenues of attack include links or attachments in unsolicited emails as well as malvertising, or malicious advertising on websites that can direct users to criminal servers even if the user never clicks on an ad. Ad-blocking software is one of several types of defense that may partially protect users from the latter avenue. The number of new types of VirWare has been decreasing. Instant message (IM) worms, worms for mobile devices, and net-worms have been increasing, because these are relatively new areas for attack and they don’t need to rely on users opening email. Email worms have been decreasing, partly due to the rapid response system and improved antivirus software. Cybercriminals have shifted to using more Trojan horses. • Trojan horses. 
Trojan horses are malicious programs disguised to be innocuous or useful using social engineering. Social engineering is a set of rhetorical techniques used to make fraudulent messages seem inviting; it is
initiated through deceptive emails, instant messages, or phone contact. Once installed, Trojan horses can install more harmful software for long-term use by the writer, such as spyware. Trojan horses are cheaper to develop because writers do not need to create a malicious program capable of self-delivery. They are also smaller and easier to transmit. Therefore, the growth of Trojan horses exceeds that for all other types of malware combined. Trojan horses are defined by how they are initiated. For example, Trojan-clickers require clicking on a hyperlink. Trojan horses include: • Banker programs, which steal bank account data. • Backdoors, or trapdoors, which bypass normal authentication for remote access; backdoors can also be installed by worms. • Root kits, which are tools installed at the root (administrator) level. • Trojan-proxies, which use an infected computer as a proxy to send spam. • Piggyback malware, which allows unauthorized users to enter a network by attaching data to an authorized packet. • Logic bombs, which are dormant malware activated by a specified variable, such as an action, the attainment of a certain size in a database, or a date. They could also be triggered by a message or a lack of action—for example, failure to log in within a certain period of time. Logic bombs destroy data but can also be used as a threat or for extortion. • Other malware. When criminals have compromised a number of computers, they set up botnets, which use chat programs to send simultaneous instructions to all systems or upload malware upgrades. SpamTools gathers email addresses for future spam mailings. A key logger records keystrokes to steal passwords and anything the user types on his/her keyboard. A dialer automatically dials a 900 number (a high-fee line) to generate huge debts. Adware creates pop-up advertisements; spyware gathers information on the user’s machine for marketing or illicit purposes.
Both are technically legal and openly developed, but some versions make use of Trojan horses, infect executables, use rootkits, or use other exploits to self-install.

• Other external threats. A hacker is anyone who gets into a computer system without authorized access; a hacker with criminal intent is sometimes called a cracker. Unethical organizations employ hackers to perform industrial espionage, and organized crime uses them for profit. A third motive for hacking is cyberterrorism or cybervandalism, the intentional disruption or destruction of computer data, a website, or a network; one example is a denial-of-service attack. Hacktivism is hacking for political purposes.

Phishing, a form of spoofing, involves creating a website that appears identical to an organization's site and then luring the organization's users to that site through social engineering, thus capturing IDs and passwords, including social security numbers or other government IDs. Pharming is a more sophisticated attack that uses Trojan horses or worms to redirect a valid URL entered in the browser address bar to the hacker's site. An evil twin is a Wi-Fi network operated by a cybercriminal that mirrors a legitimate network. Piggybacking is either physically following someone through a secure door or using someone's legitimate password to access a network. A key control is to educate users to initiate all contact themselves (i.e., don't click on an email link; go to the site directly).

Identity theft is the illegal use of sensitive information to impersonate an individual over computer networks in order to defraud the person or commit a crime without the perpetrator's true identity being known. Most identity theft occurs in the human-to-browser phase of transactions, not in the space between browser and web server, and most of the problem is due to poor password controls and scams that lure users into initiating a compromising transaction. One potential solution is the use of virtual information cards, in which user information is encrypted and hardened against spoofing.
Wireless networks and the extensive use of laptop computers also pose threats to information security. Wardriving software allows intruders to drive through an area and locate vulnerable wireless networks; the intruder can then eavesdrop or use overheard data to break encryption codes. Wi-Fi piggybacking refers to the practice of using another's access to enter a network. The practice may be harmless or unintended but may also be malicious.

• Internal threats: illegal program alterations. Hackers or, more likely, legitimate users with programming privileges but malicious intent can alter the code of programs, usually to perpetrate fraud or theft. The following are examples of data manipulation techniques:

• Asynchronous attacks exploit the gap between an initial system action and a subsequent system reaction. For example, after a system has been shut down and before it restarts automatically, changes may be made to the restart parameters that weaken security; when the computer restarts, intrusion is easier.
• Data diddling is intentionally manipulating data in a system.
• Data hiding is the manipulation of file names, extensions, or other attributes to conceal a file from its normal location so that it can be manipulated at leisure (e.g., hiding an audit log).
• Backdoors/trapdoors can be installed by direct code manipulation.
• "Rounding down" and the "salami technique" skim funds by manipulating code to round off the fractional remainder of multiple monetary transactions or to alter the final digits of a number, redirecting the small amounts to a bank account.

• Server/mainframe malware. The percentage of attacks on mainframes is extremely low (almost nonexistent) because of the specific knowledge needed for each particular mainframe. Publicly available servers (servers connected to the web), however, should be assumed to be under a constant barrage of attacks. When it comes to server attacks, there are two types of hackers: "real" hackers and script kiddies. Real hackers are very knowledgeable about the targeted server system, network, and organization. They collect data on the organization and passively monitor traffic in both directions, probing for a security flaw. Script kiddies are inexperienced hackers who search the Internet for scripts that will do the hacking for them and apply them randomly or to servers with known flaws.
When they fail, they simply move on to easier targets.
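One of the data manipulation techniques above, the "rounding down"/salami skim, can be illustrated with a minimal sketch. The transaction amounts and account handling here are hypothetical, not drawn from any real incident:

```python
from decimal import Decimal, ROUND_DOWN

def settle(transactions):
    """Settle each amount at whole cents, diverting the sub-cent
    remainders (the classic 'salami' skim). Amounts are hypothetical."""
    skimmed = Decimal("0")        # attacker's hidden running total
    settled = []
    for amount in transactions:
        kept = amount.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        skimmed += amount - kept  # fractional remainder is redirected
        settled.append(kept)
    return settled, skimmed

# Interest postings carrying sub-cent fractions (hypothetical data)
txns = [Decimal("10.0275"), Decimal("3.1419"), Decimal("7.0086")]
settled, skimmed = settle(txns)
```

Individually each diverted remainder is negligible, which is why such skims evade casual review; a detective control is to reconcile posted totals against source totals rather than inspecting individual transactions.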
Server attacks start with an attempt to gain low-security access, followed by an attempt to elevate security levels. Once inside, attackers hide their tracks, steal data, and break or take control of the system. Microsoft servers have security issues that are regularly patched and publicly announced, but script kiddies will exploit systems that aren't updated. Linux® servers also have flaws that are regularly patched, but because the flaws and updates are less publicized, more servers may be left needing updates. Linux servers are not less prone to attack; they are commonly attacked by real hackers who often know more about a server's configuration than the administrators themselves.

In addition to system attacks, publicly available servers can also be attacked through their applications. For example, an intranet server might use a distributed application to allow employees to check customer data. Hackers find flaws in such applications and then publish their findings for use by script kiddies.

The number and frequency of network attacks is increasing, sometimes with several versions of the same type of malware appearing in one day, so much so that antivirus vendors have had to increase their update frequency from several times a day to hourly. The antivirus industry has developed a rapid response system for new threats, but organized criminals have also developed their own structure to scan for and infect vulnerable systems. For example, a network sniffer may detect credit card number formats in streams of data, and a packing program can make malware harder to detect.
Protecting Systems from Malicious Software and Computer Crime

All operating systems contain bugs that create vulnerabilities and affect overall system performance. The use of homogeneous operating systems allows wide-scale exploitation of bugs, which is why operating systems receive frequent updates and patches. In addition to installing these updates promptly, other measures should be pursued, such as running systems with administrative privileges turned off. Also, most systems allow any code executed on a system to receive all the rights of the system user,
called over-privileged code. To address this security flaw, the operating system would need to restrict the rights given to code, such as by using a virtual area or sandbox.

A key tool for combating viruses is antivirus software, which maintains lists of known viruses, prevents them from being installed, and helps remove a virus and recover the computer once one is found. Such software scans both incoming and outgoing data. Automated downloads and regularly scheduled scans are important controls for keeping such systems up to date. Some antivirus programs use heuristic (behavior-based) models that look for any unusual code and can therefore detect new viruses. Basic policies can also help, such as allowing downloads only from reputable locations with security seals. Other tools to consider include blockers for spyware, spam, macros, and pop-ups.

One method of self-protection from malware in general is to follow a minimum set of agreed-upon controls, called baseline controls. One example is the VISA® Cardholder Information Security Program (CISP), which has made a set of security guidance rules available to credit card network users. This advice, called the "Digital Dozen," can be found in the Practice Guide "Information Technology Risks and Controls" (previously GTAG® 1). Other broad controls that can make a difference include taking sensitive information offline and performing background checks on new employees and users with security clearance. Newer browsers contain phishing filters, which send data to the browser manufacturer for validation.

Controls associated with proper user identification and authentication of identity are critical. Authentication mechanisms must be secured and assessed, and users must be made aware of the dangers of sharing passwords, failing to secure them, or creating weak ones. The best means of securing access to data may be through the use of biometric controls, which use unique physical or behavioral traits to authenticate identity.
Such controls might focus on a user’s fingerprints, the palm of the hand, patterns in the iris of the eye, or facial or vocal features.
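The signature-based scanning at the heart of the antivirus software described above can be sketched minimally as a lookup of known byte patterns. The signature database and threat names below are hypothetical:

```python
# Hypothetical signature database: byte patterns mapped to threat names.
KNOWN_SIGNATURES = {
    b"\xde\xad\xbe\xef": "Trojan.Example.A",
    b"EVIL_MACRO_V2": "MacroVirus.Example.B",
}

def scan(data):
    """Return the names of all known signatures present in the data."""
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in data]

clean = scan(b"an ordinary document")
hits = scan(b"header \xde\xad\xbe\xef payload EVIL_MACRO_V2 trailer")
```

A pure signature lookup only catches malware that is already cataloged, which is why the text notes that heuristic models, looking for unusual code rather than known patterns, are needed to detect new viruses.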
Topic D: Data Privacy Laws (Level B) Adherence to data privacy laws and regulations requires having robust data security policies and practices, because such laws specify the need to properly secure all end-user data. Also, many laws and regulations place additional emphasis on certain types of sensitive data, such as medical, credit card, or other financial data.
Privacy

Privacy is essentially the right to be left alone, free from surveillance by individuals, organizations, or the government. From an IT standpoint, privacy is the right to have a say over how personal information is collected and used. Personal information is any information that can be linked back to a particular individual. Any transaction entered into a computer, from a simple purchase to a medical record, can be stored indefinitely and potentially used for marketing or crime fighting as well as for illegal activities such as blackmail. IT can make invasions of privacy easy and inexpensive.

Privacy is an issue for corporate data, employees, and customers. Corporate data must be safeguarded for a business to stay viable. Employees and employers can come into conflict over privacy, because organizations want to protect their interests and guard against improper activity, while employees want to feel that they have a measure of privacy at work. Programs can be used to log websites visited and track every keystroke a user makes. Higher levels of monitoring can provide control but at the possible price of lower morale. Clear communication of the privacy policy will help with morale; the policy should inform employees what is and isn't monitored as well as what is expected of them, such as using the Internet only for specific activities. Logical controls over the sites that can be visited can reduce the need to monitor employee activities.
Privacy Laws and Regulations
The privacy laws in Europe, the U.S., Canada, and other countries are based in part on fair information practices (FIPs). FIPs acknowledge that the two parties in a transaction have obligations to each other: individuals have rights to privacy but need to prove their identity, while organizations have responsibilities regarding the collection and use of information. FIPs include:

• Notice. Prior to collecting data, websites must disclose who is collecting the data, its uses, other recipients, what is voluntary, and what will be done to protect the data.
• Choice. Consumers should be able to choose how the information is used beyond support for the current transaction.
• Access. Consumers should be able to access and modify their personal information without great expense or hardship.
• Security. Data collectors must ensure that they have adequate data controls.
• Enforcement. FIPs must be enforced via self-regulation, legislation giving recourse rights to consumers, and other laws.

A number of laws exist to protect privacy against government intrusion, such as the Canadian Privacy Act, which sets rules for the government's ability to collect and use information about its citizens. Far fewer regulations apply to the private sector, where self-regulation is the general tendency. One example of a private-sector law is the U.S. Health Insurance Portability and Accountability Act (HIPAA), which governs the disclosure of medical records.

Because many nations have privacy laws that may differ considerably, the Organisation for Economic Co-operation and Development (OECD) and similar organizations are working to create consistency in privacy laws and in laws on the transborder flow of information.

In the European Union (EU), the General Data Protection Regulation (GDPR), a binding regulation, became effective on May 25, 2018. The GDPR obliges EU member states to protect the fundamental rights and freedoms of persons, in particular their right to personal data privacy. The GDPR is related to Article 8 of the EU Charter of Fundamental Rights, on the protection of personal data. Much like the FIPs described above, the GDPR gives individuals the right to be informed of how organizations are using their personal data (e.g., through a privacy policy), the right of access to one's personal data, the right to rectification of incorrect information, the right to be forgotten (individuals can request deletion of their personal information), the right to data portability (individuals can request a copy of their personal information), and the right to object to or opt out of future data collection at any time. While this is an EU regulation, any organization in any part of the world that collects or holds the personal data of persons residing in the EU needs policies, procedures, and IT systems in place to comply with it. Many organizations that do business globally have welcomed the GDPR as a gold standard for privacy that may spare them from complying with a patchwork of national regulations.
Auditors and Privacy

The primary role auditors fill with regard to privacy is to ensure that relevant privacy laws and other regulations are communicated to the responsible parties. Personnel must be told what is expected of them and what the individual and organizational penalties are for noncompliance. Auditors may need to work with legal counsel to identify other steps that should be performed to meet all requirements. Note that proof of compliance is required, not just compliance, so documentation must be addressed. Auditors must also determine whether management is spending more on privacy controls than is warranted (e.g., expensive encryption for routine data).

Some company policies may also need to be reviewed for privacy risks. For example, a bring-your-own-device (BYOD) policy addresses whether an employee or contractor can bring a personal laptop or mobile device to the workplace and use it for work purposes. The risks include that such devices may lack adequate security protections or patch updates and could become an avenue for an external breach by a third party who has compromised the user's device. Note that prohibitions on personal laptops or tablets might be enforceable so long as a suitable device is provided to the employee or contractor, but prohibitions on mobile phones would be feasible only in very high security environments. An acceptable use policy can be created along with a clear indication of penalties for noncompliance, and some basic security training can be provided, such as ensuring that user devices have user authentication turned on (e.g., a numeric code) in case a device is stolen.
Topic E: Emerging Technology Practices (Level B)

Technology is constantly advancing, and practices that seemed new and amazing last year can feel very dated or "old school" this year. No sooner is one malicious attack thwarted than another starts. How can organizations keep up with and get ahead of the attackers?

Beginning with tried-and-true methods of security is a start. Biometric, electromechanical, fail-safe, fail-secure, and mechanical locks all help maintain the physical security of an organization. Security badges, identification cards, and closed-circuit television (CCTV) are also designed to verify identities and movement within buildings. Additional environmental controls include motion detectors, thermal detectors, and vibration sensors. But what other practices can be used?

• The Internet of things (IoT) refers to a system of interrelated physical devices around the world connected to the Internet, collecting and sharing data. It allows for the transfer of data over a network without human action. IoT has emerged to allow machine-generated data to be analyzed for insights that drive improvements. It is big and getting bigger: analyst firm Gartner calculated that around 8.4 billion IoT devices were in use in 2017, and it has been estimated that more than 24 billion Internet-connected devices will be installed globally by 2020. The benefit to businesses is that IoT allows more access to data about an organization's products and internal systems and a greater ability to make changes as a result. However, this raises new concerns about data privacy and security: the increase in connected devices gives hackers and cybercriminals more entry points and leaves sensitive information vulnerable. Establishing a standardized security protocol to address the scope and diversity of devices will continue to be a central challenge.
• Hardware authentication incorporates authentication into a user's hardware.
This means that an organization’s IT department can require end users to use two or three different methods of authentication in tandem.
For example, an end user may be required to provide a biometric identifier, such as a fingerprint, along with entering a PIN and a code sent to their mobile device in order to authenticate. The idea behind this level of authentication is that the more validation options required, or the more sophisticated they are, the more certain the organization can be that end users are who they say they are.

• User-behavior analytics operates on the premise that by identifying activity that does not fit within the normal routine of an employee, IT can identify a malicious attacker posing as an employee.
• Data loss prevention ensures that end users do not send sensitive or critical data outside their corporate network. The key to successful data loss prevention is technology such as encryption and tokenization, which can provide data protection down to a subfield level.
• Deep learning encompasses numerous technologies, such as machine learning and artificial intelligence. Instead of looking at the actions of the end user, the system looks at "entities" and can be used to distinguish between good and bad software, providing an advanced threat detection and elimination solution.
• Cloud computing security refers to the vast set of controls, technologies, and policies in place to protect the data, applications, and infrastructure of cloud computing. Cloud security architecture can use numerous controls, including deterrent, preventive, detective, and corrective controls, to safeguard against potential system weaknesses. In addition, cloud access security brokers (CASBs) provide software that sits between end users and cloud applications to monitor activity and enforce security policies. For further reference, COSO has an ERM guidance document from 2012 on cloud computing, and ISO 27017 focuses on the protection of information in cloud-based services.
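The user-behavior analytics premise above can be sketched as a simple comparison of each event against a per-user baseline. The user profile, countries, and login hours below are hypothetical:

```python
# Hypothetical per-user baseline of normal countries and working hours.
BASELINE = {
    "jdoe": {"countries": {"US"}, "hours": range(7, 19)},  # 07:00-18:59
}

def is_anomalous(user, country, hour):
    """Flag a login that falls outside the user's established baseline."""
    profile = BASELINE.get(user)
    if profile is None:
        return True  # no baseline yet: treat the unknown user as anomalous
    return country not in profile["countries"] or hour not in profile["hours"]

routine = is_anomalous("jdoe", "US", 10)  # in-country, business hours
suspect = is_anomalous("jdoe", "RU", 3)   # off-hours, from an unusual country
```

Production systems learn these baselines statistically rather than hard-coding them, but the detection logic is the same: flag departures from each user's normal pattern for investigation.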
Topic F: Existing and Emerging Cybersecurity Risks (Level B)

Cybersecurity, also referred to as computer or IT security, is the protection of computers, networks, programs, and data from attack, unauthorized access, damage, change, or destruction. Cyber risks (or cyber threats) involve persons or entities that seek unauthorized access to a system, network, or device, either remotely or via inside access. These persons or entities could harm the organization's employees, contractors, customers, and other stakeholders and its competitive advantage. They could also cause direct monetary loss as well as reputation damage if certain information were made public.

Cybercriminals are often motivated by the prospect of monetary gain; this is a growing area of organized crime, and organized crime groups may have large-scale operations in nations that suffer from poor enforcement or from graft and corruption. Beyond profit as a motive, perpetrators may include hackers who may or may not have understandable reasons for their actions.

There are generally three main types of computer crime:

• Those where the computer is the target of the crime
• Those where the computer is used as an instrument of the crime
• Those where the computer is not necessary to commit the crime but is used because it makes committing the crime faster, allows a greater amount of information to be processed, and makes the crime more difficult to identify and trace

Cyberterrorism is a premeditated, politically motivated attack against information, computer systems, computer programs, and data. Cyberterrorists and hacktivists may also spread propaganda. Cyberterrorists are more likely to commit violence against noncombatant targets; for example, this might mean probing a public utility's electrical grid in order to bring it down. Hacktivists are more likely to attempt nonviolent methods to increase their notoriety while causing reputation damage to their victims. Nation-states may also engage in espionage or cyberwarfare, such as a national government hacking another government's systems or creating fake news to improperly influence a foreign election or vote.

Two other sources of cybersecurity risk are insiders and service providers, especially service providers who develop substandard offerings that have security vulnerabilities or who do not promptly patch known vulnerabilities. Aside from negligence, insiders and service providers could use their inside knowledge and access to take advantage of inside information or to perpetrate or conceal fraud.

Exhibit II-6 lists common cybersecurity terms. Some of these terms were covered earlier in this section; the exhibit can serve as a summary of computer security terminology.
Exhibit II-6: Cybersecurity Terminology

• Adware: Malware intended to provide undesired marketing and advertising, including pop-ups and banners on a user's screen.
• Boot virus: Also known as a boot sector virus; a virus that targets the boot sector or master boot record (MBR) of a computer system's hard drive or removable storage media.
• Botnet: A network of remotely controlled systems used to coordinate attacks and distribute malware, spam, and phishing scams.
• Denial-of-service attack: An attack designed to consume so much of a shared resource that none of the resource is left for other users.
• Distributed denial-of-service attack: A variant of a denial-of-service attack that uses a coordinated attack from a distributed system of computers rather than from a single source; uses worms to spread to multiple computers (or devices in the Internet of things) that simultaneously request services, causing the target to crash.
• Macro virus: A virus written in a specific macro language to target applications that use that language; it is activated when the application's product is opened. A macro virus typically affects documents, slide shows, emails, or spreadsheets created by Microsoft products.
• Malware: Malicious software designed to gain access to a computer system without the owner's permission for the purpose of controlling or damaging the system or stealing data.
• Malvertising: Malicious Internet advertising that can collect information on a user's computer, sometimes without the user even clicking on an ad, for later use in a malware attack after probing the device for weaknesses; more prevalent on less trustworthy websites.
• Memory-resident virus: A virus that is capable of installing itself in a computer's operating system, starting when the computer is activated. Also known as a resident virus.
• Non-memory-resident virus: A virus that terminates after it has been activated, infected its host system, and replicated itself. Also known as a non-resident virus.
• Patch: A bundled set of fixes to a software product's code to eliminate bugs or security vulnerabilities.
• Pharming: A method used by phishers to deceive users into believing that they are communicating with a legitimate website.
• Phishing: A social engineering scam meant to trick the recipient of an email into believing that the originator is a trustworthy person or organization even though the message is from another party; the intent is to deceive people into disclosing information such as credit card numbers, bank account information, passwords, or other sensitive information.
• Polymorphic threat: Malware (e.g., a virus or worm) that over time changes the way it appears to antivirus software programs, making it undetectable by techniques that look for preconfigured signatures.
• Ransomware: Malicious software that encrypts all of the files on a computer or network of computers; the criminal party then sends the user a demand indicating that the encryption key won't be released unless a payment is made.
• Security posture: The current status of the organization's cybersecurity defense or timely reaction capabilities for information systems, networks, and data, based on the organization's resources and staffing, training, software systems, policies, and controls.
• Spamming: Unsolicited commercial email advertising, possibly linking to sites or servers that deliver malware.
• Spoofing: Creating a fraudulent website to mimic an actual well-known website run by another party.
• Spyware: Malware installed without the user's knowledge to surreptitiously transmit data to an unauthorized third party.
• Trojan horse: A malicious program disguised as something innocuous or useful, distributed using social engineering.
• Virus: Malicious code that attaches itself to storage media, documents, or executable files and is spread when the files are shared with others.
• Virus hoax: A message that reports the presence of a nonexistent virus or worm and wastes valuable time as employees share the message.
• Worm: Self-replicating malicious software that can disrupt networks or computers.
• Zero-day attack: An attack that makes use of malware that is not yet known to the anti-malware software companies.
Topic G: Policies Related to Cybersecurity and Information Security (Level B)

An effective information security policy should provide guidelines for preventive and detective controls to address a variety of risks, including unauthorized access, disclosure, duplication, modification, misappropriation, destruction, loss, misuse, and denial of use. Information security policies guide management, users, and system designers in making information security decisions.

The International Organization for Standardization (ISO), the world's largest developer and provider of international standards, has established guidelines and general principles for initiating, implementing, maintaining, and improving information security management within organizations. ISO provides the 27000 family of standards for the development of organizational security standards and effective security management practices and to help build confidence in inter-organizational activities. An ISO 27001-certified organization can realize improved enterprise security, more effective security planning and management, more secure partnerships and e-commerce, enhanced customer confidence, more accurate and reliable security audits, and reduced liability.

For internal auditors, a key resource is The IIA's supplemental guidance "Assessing Cybersecurity Risk: Roles of the Three Lines of Defense." Some information from this guidance is discussed in this topic, including an overview of how the three lines of defense apply to cybersecurity.

To begin designing an information security policy, the organization should assess its security needs. This allows for an understanding of the organization's business needs and its security objectives and goals. Common questions this assessment should ask include:

• What information is considered business-critical?
• Who creates that critical information?
• Who uses that information?
• What would happen if the critical data were lost, stolen, or corrupted?
• How long can the business operate without access to this critical data?

As information crosses multiple lines within an organization, so too does information security. Therefore, an information security policy should be coordinated with multiple departments, including systems development, change control, disaster recovery, compliance, and human resources, to ensure consistency. Additionally, an information security policy should state Internet and email ethics and access limitations, define the confidentiality policy, and identify any other security issues. Good policies also provide precise instructions on how to handle security events and, if necessary, escalation procedures, including how to escalate situations where a risk is likely to exceed the organization's risk appetite. One essential step is to ensure that the organization's three lines of defense also cover information security roles and responsibilities, as discussed next.
Three Lines of Defense Applied to Information Security The first line of defense for an organization is operational management; the second is the risk, control, and compliance oversight functions of the organization; and the third is the internal audit activity. Senior management objective and strategy setting and board governance are considered prerequisites to the three lines of defense. As applied to cybersecurity, operational management is accountable for developing, monitoring, and controlling data administration, data processes, data risk management, and data controls. This is usually accomplished by delegation to qualified systems administrators (who will in turn recruit and train certified and qualified staff) and investing a sufficient budget in these areas. Systems administrators need to implement cybersecurity procedures, including training and testing of these procedures. They also need to:
• Keep all systems up to date and securely configured, including restricting accounts to least-privilege access roles (i.e., not over-privileged).
• Use intrusion detection systems.
• Conduct penetration testing and internal and external scans for vulnerability management.
• Manage and protect network traffic and flow.
• Employ data loss prevention programs, including encrypting data when feasible.

The risk, control, and compliance functions assess whether the first-line controls are functioning adequately and whether they are complete. This line of defense also needs qualified, talented, and certified individuals who can conduct cyber risk assessments and gather intelligence on cyber threats, and it needs adequate policies, including for ongoing training. These functions may be involved in helping management design roles with least-privilege access, assess external business relationships, and plan and test business continuity and disaster recovery.

Internal audit maintains its independence and objectivity in part so that it can properly function as the third line of defense. In the event that the first two lines of defense fail to provide adequate protection, have an incomplete strategy, or fail to implement recommended remediation, internal auditors will be in a position to report these observations to senior management and/or the board. This might entail evaluating cybersecurity preventive and detective controls for adequacy and completeness, evaluating the IT assets of privileged users to ensure that they have standard security configurations and are free from malware, and conducting cyber risk assessments of external business relationships.
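The least-privilege restriction mentioned above can be sketched as an explicit role-to-permission check, where each role is granted only the permissions it strictly needs. The role names and permission strings below are hypothetical:

```python
# Hypothetical role definitions: each role carries only the permissions
# it strictly needs (least privilege), rather than broad admin rights.
ROLES = {
    "payroll_clerk": {"payroll:read"},
    "payroll_admin": {"payroll:read", "payroll:write"},
}

def authorize(role, permission):
    """Allow an action only if the role explicitly grants the permission."""
    return permission in ROLES.get(role, set())

can_read = authorize("payroll_clerk", "payroll:read")    # granted
can_write = authorize("payroll_clerk", "payroll:write")  # denied
```

The design choice worth noting is the default-deny posture: an unknown role or unlisted permission is refused, so privileges must be granted deliberately rather than revoked after the fact.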
NIST Cybersecurity Framework

To assist organizations in addressing cyber concerns, the U.S. National Institute of Standards and Technology (NIST) has created a set of best practices. The NIST Cybersecurity Framework (CSF) provides a risk-based, iterative approach to the adoption of a more vigilant cybersecurity stance for organizations in the public and private sectors. It also includes guidance on self-assessment.

One of the strongest features of the NIST CSF is the Framework Core, shown in Exhibit II-7. The core includes cybersecurity activities, desired outcomes, and references from industry standards, guidelines, and practices. The Framework Core is made up of five functions, which are further divided into 23 categories.
Exhibit II-7: NIST CSF Framework Core

Identify
Description: Identify and communicate cybersecurity objectives and goals. Develop organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.
Categories:
• Asset management
• Business environment
• Governance
• Risk assessment
• Risk management strategy
• Supply chain risk management

Protect
Description: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
Categories:
• Identity management and access control
• Awareness and training
• Data security
• Information protection processes and procedures
• Maintenance
• Protective technology

Detect
Description: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.
Categories:
• Anomalies and events
• Security continuous monitoring
• Detection processes

Respond
Description: Develop and implement the appropriate activities to take action regarding a cybersecurity event.
Categories:
• Response planning
• Communications
• Analysis
• Mitigation
• Improvements

Recover
Description: Outline appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.
Categories:
• Recovery planning
• Improvements
• Communications
Source: “Framework for Improving Critical Infrastructure Cybersecurity,” Version 1.0. NIST (National Institute of Standards and Technology), 2014.
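Because the Framework Core is a simple function-to-categories hierarchy, it can be loaded into a data structure and used as a self-assessment checklist. A sketch, assuming an assessor supplies the set of categories for which documented controls exist (the category names follow the exhibit above):

```python
# The five CSF functions and their 23 categories, per Exhibit II-7.
CSF_CORE = {
    "Identify": ["Asset management", "Business environment", "Governance",
                 "Risk assessment", "Risk management strategy",
                 "Supply chain risk management"],
    "Protect": ["Identity management and access control", "Awareness and training",
                "Data security", "Information protection processes and procedures",
                "Maintenance", "Protective technology"],
    "Detect": ["Anomalies and events", "Security continuous monitoring",
               "Detection processes"],
    "Respond": ["Response planning", "Communications", "Analysis",
                "Mitigation", "Improvements"],
    "Recover": ["Recovery planning", "Improvements", "Communications"],
}

def coverage(assessed):
    """Per-function tally of categories with documented controls, e.g. '1/6'."""
    return {fn: f"{sum(c in assessed for c in cats)}/{len(cats)}"
            for fn, cats in CSF_CORE.items()}
```

For example, `coverage({"Asset management", "Data security"})` reports 1/6 coverage for both Identify and Protect and 0 for the other functions, giving a quick per-function view of assessment gaps.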
Next Steps You have completed Part 3, Section II, of The IIA’s CIA Learning System®. Next, check your understanding by completing the online section-specific test(s) to help you identify any content that needs additional study. Once you have completed the section-specific test(s), a best practice is to reread content in areas you feel you need to understand better. Then you should advance to studying Section III. You may want to return to earlier section-specific tests periodically as you progress through your studies; this practice will help you absorb the content more effectively than taking a single test multiple times in a row.
Index The numbers after each term are links to where the term is indexed and indicate how many times the term is referenced. antivirus software 1 application authentication 1 application gateway proxy servers 1 audit trails 1 authentication controls 1 authorization controls 1 backup controls 1 baseline controls 1 biometric controls 1 bring-your-own-device (BYOD) policies 1 BYOD (bring-your-own-device) policies 1 change management 1 cloud backup 1 security 1 COBIT 1 compliance 1 computer crime 1 computer forensics 1 control self-assessment 1 controls authentication 1 authorization 1 backup 1 baseline 1 biometric 1 for malicious software 1 general 1 hardware 1 information security 1
information technology 1 logical access 1 operational 1 physical access 1 physical security 1 program change management 1 recovery 1 user authentication/authorization 1 crime, computer 1 CSA (control self-assessment) 1 CSF (Cybersecurity Framework), NIST 1 cybersecurity 1 Cybersecurity Framework, NIST 1 cyberterrorism 1 data loss prevention 1 security 1, 2, 3 standards 1 storage 1 deep learning 1 digital signatures 1 DMZs 1 electronic vaulting 1 encryption 1 ethics in data storage 1 fair information practices 1 FIPs (fair information practices) 1 firewalls 1 fraud detection/investigation 1 gateways 1 GDPR (General Data Protection Regulation), European Union 1 general controls 1 General Data Protection Regulation, European Union 1 hackers 1 hardware authentication 1 controls 1
IAM (identity and access management) 1 identity and access management 1 identity theft 1 IDSs (intrusion detection systems) 1 Implementation Guide 2130 1 information security/protection 1, 2, 3, 4, 5 controls 1, 2 International Organization for Standardization ISO 27000 family of standards 1 ISO/IEC 27002 1 Internet of things 1 intrusion detection/prevention systems 1 IoT (Internet of things) 1 IPSs (intrusion prevention systems) 1 ISO. See International Organization for Standardization logic bombs 1 logical access controls 1 mainframes 1 malicious software 1 malware 1 NAT (network address translation) 1 network address translation 1 NIST Cybersecurity Framework 1 off-site data storage 1 operational controls 1 packet filtering 1 passwords 1 patch management 1 physical access 1 physical security 1 privacy 1 private key encryption 1 program alterations 1 program change management controls 1 proxy servers 1 public key encryption 1 ransomware 1 recovery
controls 1 risk cybersecurity 1 in authentication/authorization 1 management 1 of change 1 script kiddies 1 security 1 cybersecurity 1 information/data 1, 2, 3 levels of 1 physical 1 risk management 1 systems 1 violations 1 servers 1 software antivirus 1 malicious 1 standards data structure 1 stateful inspection 1 systems security 1 three lines of defense 1 Trojan horses 1 user authentication/authorization controls 1 user-behavior analytics 1 violations, security 1 viruses 1 VirWare 1 vulnerability management 1 worms 1
Contents Section II: Information Security Section Introduction Chapter 1: Information Security Topic A: Systems Security and IT General Controls (Level B) Topic B: User Authentication and Authorization Controls (Level B) Topic C: The Purpose and Use of Various Information Security Controls (Level B) Topic D: Data Privacy Laws (Level B) Topic E: Emerging Technology Practices (Level B) Topic F: Existing and Emerging Cybersecurity Risks (Level B) Topic G: Policies Related to Cybersecurity and Information Security (Level B) Index
Section III: Information Technology
This section is designed to help you:
• Recognize the core activities in the systems development life cycle and its delivery.
• Explain basic database and Internet terms.
• Identify key characteristics of software systems.
• Explain basic IT infrastructure.
• Describe basic network concepts.
• Define the operational roles of IT positions, including network administrator, database administrator, and help desk personnel.
• Show how various functional areas of IT operations should be organized for efficiency and segregation of duties.
• Recognize the purpose and application of IT control frameworks.
• Describe the basic purpose of and tools used within common IT control frameworks.
• Explain basic concepts related to disaster recovery planning sites.
• Define the need for systems and data backups.
• Describe systems and data recovery procedures.
The Certified Internal Auditor (CIA) exam questions based on content from this section make up approximately 20% of the total number of questions for Part 3. Topics are covered at the “B—Basic” level, meaning that you are responsible for comprehension and recall of information. (Note that this refers to the difficulty level of questions you may see on the exam; the content in these areas may still be complex.)
Section Introduction Access to relevant and reliable information is key to business decision making. Relevance includes timeliness of information and an appropriate level of detail. Successfully applied information technology speeds the availability of information, automates aggregation and sorting of data, and ensures information accuracy. Unsuccessfully applied information technology gives away a business’s competitive advantage to better-informed competitors. IT is successfully applied when the organization is able to use
it to fulfill business objectives, measure and address risks appropriately, grow and adapt fluidly, communicate effectively internally and externally, and react quickly to business opportunities as they arise. IT and auditing are primarily concerned with information risk, which includes the risk that inaccurate information is used to make a business decision. However, widespread use of IT for all business processes has led auditing away from a focus on assurance regarding historical data at a specific point in time to assurance about the reliability of processes. This is because IT generates the historical data almost automatically, so, if the process is wrong, the data will be, too, and vice versa. Therefore, auditing can have an effect on mitigating information risk. Note that this does not preclude auditing transactions to determine the impact on the business.
Risks Specific to the IT Environment
IT can potentially remove risks from a manual system, but it introduces its own risks. In addition, because of the nature of IT activities, these risks may also affect each other.
• Physical audit trail replaced by data trail. Many physical documents are eliminated for audits, and controls must be used to compensate.
• Hardware/software failure. Permanent loss of data, e.g., from environmental damage, outages, civil disruption, and disasters, is costly.
• Systematic errors. IT reduces random errors such as in data entry, but automated systems can uniformly duplicate errors, e.g., via faulty code.
• Fewer human inputs/less segregation of duties. Many IT systems reduce labor costs through automation. Mitigating controls include reviewing segregation of duties and requiring end users to review their output at a low enough level of aggregation to catch problems.
• Access authorization. Increased ability to access sensitive information remotely also increases the risk of unauthorized access.
• Automated transaction authorization. Transactions that formerly required review and authorization, such as credit decisions, can be entirely regulated by a computer application. Authorization assurance rests on software controls and master file integrity.
• Deliberate harmful acts. Dishonest or disgruntled employees with access as well as outside individuals with profit or destructive motives can cause significant harm to an organization. Trusted insiders are a source of significant risk.

The Institute of Internal Auditors Practice Guide "Management of IT Auditing," second edition (previously Global Technology Audit Guide 4 [GTAG® 4]), states that IT risks exist in each component of the organization and vary greatly. For an internal audit to be effective, the risks of each IT layer need to be considered and prioritized, and audit resources should be allocated to each layer according to those risks. While each organization is different, the following identifies the critical IT processes (layers) in most organizations:
• IT management. The set of people, policies, procedures, and processes that manage IT services and facilities. This component focuses on the people and tasks rather than a technical system setting.
• Technical infrastructure. The technology that underlies, supports, and enables primary business applications. In general, this includes operating systems, files and databases, networks, and data centers.
• Applications. Programs that perform specific tasks related to business operations. They are typically classified into two categories: transactional and support.
• External connections. The corporate network connections to other external networks (e.g., via the Internet, cloud computing, software as a service, third-party linked networks).

When specific IT audit work is planned, it may be organized into categories based on the organization's processes or a standardized framework. There is no need for a distinct methodology for addressing IT-related risks. Using the same methodology for all risk types is important to ensure that there is one consistent internal audit risk assessment process that is used across the internal audit function.
Challenges of IT Auditing
To identify and assess the control of IT risks properly, an internal auditor must:
• Understand the purpose of an IT control, what type of control it is, and what it is meant to accomplish, for example, whether it is preventive, detective, or corrective and the degree to which it is directive in terms of allowed behaviors.
• Appreciate the significance of the control to the enterprise, both the benefits that accrue to the enterprise through the control (e.g., legal compliance or competitive advantage) and the damage that a weak or nonexistent control can cause.
• Identify which individuals or positions are responsible for performing what tasks.
• Balance the risk posed with the requirements of creating a control.
• Implement an appropriate control framework and auditing plan.
• Remain current with methodologies and business objectives.

Exhibit III-1 summarizes the challenges internal auditors must master in conducting IT audits.
Exhibit III-1: The Challenges of IT Auditing

Assessing IT Controls

Understanding IT controls (covered in Chapter 2, Topic C):
• Governance, management, technical
• General, application
• Preventive, detective, corrective
• Degree to which controls are directive
• Information security

Importance of IT controls:
• Reliability and effectiveness
• Competitive advantage
• Legislation and regulation

Roles and responsibilities (see Chapter 2, Topic B):
• Governance
• Management
• Audit

Risk:
• Risk analysis
• Risk response
• Baseline controls

Monitoring and techniques (covered in Chapter 2, Topic C):
• Control framework
• Frequency

Assessment:
• Methodologies
• Audit committee interface
Source: Practice Guide "Information Technology Risks and Controls," second edition.
Guidance
Exhibit III-2 identifies International Professional Practices Framework guidance related to IT auditing.

Exhibit III-2: IT Auditing Guidance

Standards:
• Standard 1210.A3: Internal auditors must have sufficient knowledge of key information technology risks and controls and available technology-based audit techniques to perform their assigned work. However, not all internal auditors are expected to have the expertise of an internal auditor whose primary responsibility is information technology auditing.
• Standard 1220.A2: In exercising due professional care, internal auditors must consider the use of technology-based audit and other data analysis techniques.
• Standard 2110.A2: The internal audit activity must assess whether the information technology governance of the organization supports the organization's strategies and objectives.

Practice Guides—General:
• "Auditing Privacy Risks," second edition

Practice Guides—Global Technology Audit Guides (GTAG):
• "Understanding and Auditing Big Data"
• "Assessing Cybersecurity Risk: Roles of the Three Lines of Defense"
• "Auditing Application Controls"
• "Auditing IT Governance"
• "Auditing IT Projects"
• "Auditing Smart Devices: An Internal Auditor's Guide to Understanding and Auditing Smart Devices"
• "Auditing User-Developed Applications"
• "Business Continuity Management"
• "Change and Patch Management Controls: Critical for Organizational Success," second edition
• "Continuous Auditing: Coordinating Continuous Auditing and Monitoring to Provide Continuous Assurance," second edition
• "Data Analysis Technologies"
• "Developing the IT Audit Plan"
• "Fraud Prevention and Detection in an Automated World"
• "Identity and Access Management"
• "Information Security Governance"
• "Information Technology Outsourcing," second edition
• "Information Technology Risks and Controls," second edition
• "Management of IT Auditing," second edition
Role of CAE in IT Auditing
The CAE is responsible for ensuring a balance between the enterprise and its IT controls and proper implementation of a control framework. This involves:
• Understanding the organization's IT control environment.
• Being aware of all legal and regulatory requirements.
• Assessing whether roles related to IT controls are appropriate.
• Developing and implementing an appropriate internal audit activity IT risk assessment process for the purposes of annual audit planning. (IT management should be developing its own risk assessment process independent of this.)
• Identifying all internal and external monitoring processes.
• Establishing appropriate metrics for control success and policies for communicating with management.
• Communicating IT risks and controls to the board and executives in an understandable manner.
Ethics in IT
There is also an ethical dimension to the design and implementation of an IT control framework. IT systems generate significant information about individuals, making the privacy of employees and customers a highly sensitive issue. The interests of an organization's stakeholders (e.g., shareholders, communities, governments) pose an additional obligation: internal controls must be in place that are robust enough to remove the temptation of fraud or management manipulation of financial results. Executives have an ethical obligation to understand IT controls at a high level and to make sure that everyone knows their roles and responsibilities.
Chapter 1: Application and System Software Chapter Introduction The first topic in this chapter explores the core activities in the systems development life cycle and delivery process. The second topic explores common database and Internet terminology, and the chapter concludes with an outline of key characteristics of software systems.
Topic A: Core Activities in the Systems Development Life Cycle and Delivery (Level B)
IT systems have a life cycle, from design through implementation to maintenance. Early systems designs were left largely to IT specialists. A better approach is team design. The purpose of team design is to ensure that the needs of all stakeholders are considered. The steps in the process are:
• Feasibility study.
• Request for system design.
• High-level design.
• Detailed systems design.
• Program coding and testing.
• Conversion (of old data files).
• Implementation.
• Maintenance.

Internal audit has a strong role to play, especially when reviewing the feasibility and system study, such as being assured that the team is adequately staffed, control deficiencies are remedied, the system can accommodate growth, budgets are reasonable, users agree to the change, and so on.

The use of a formal or normative model for systems development helps developers in much the same way that the use of project management keeps a project progressing toward its goals while handling problems in an orderly fashion rather than as emergencies. Internal auditors can use a normative model to observe where actual practice differs from expected practice in the model. One such normative model is the systems development life cycle.
Systems Development Life Cycle Steps
A development methodology is a vital tool because it forces management to be involved rather than relegating IT to specialists. Requiring a feasibility study, policies, objectives and standards, and testing forces IT to be treated as a resource that must be managed. Formal processes help managers understand how they can be involved. In fact, all stakeholders for a system should be involved in the formal process. Indicators of effective IT controls for systems development include the ability to execute new system plans within budget and on time. Resource allocation should be predictable. The traditional systems development life cycle (SDLC) is a sequential process, moving between formal stages, where one step is completed before the next is begun. In the traditional SDLC, end users are not involved in the process other than as interviewees and reviewers of completed work. Systems analysts and programmers design and build the system. Many organizations have altered the traditional SDLC because they have found that engaging end users thoroughly from the start results in a better product that is “owned” by its users. The traditional process is still used by some organizations for complex, multidepartment projects such as an ERP system, but even these benefit from organized user involvement. Exhibit III-3 shows the traditional SDLC. Each step is described in detail following the exhibit. Exhibit III-3: Systems Development Life Cycle
Systems Planning In the systems planning phase, executives and IT management establish a long-term technology strategy that measures success by its fulfillment of business strategy. Capital investments are allocated in accordance with business priorities. Systems planning is often conducted by an IT steering committee, made up of members from top management and IT. While management alone may not be able to assess if standards are adequate, the committee should be able to do so collectively. They set IT policy, approve both long- and short-term plans, provide monitoring and oversight, and assess the impact of new IT. A master plan schedules resources for all approved projects. Needs are defined, and related business processes are streamlined. The basic question asked at this level is “What problems exist, and are they worth fixing by use of scarce resources?”
Systems Analysis
While systems planning is used to identify problems or challenges that are worth addressing in the design and development of new systems, systems analysis is used to point out deficiencies and opportunities in existing IT systems. Systems analysis could indicate that modifying an existing system is more cost-effective than developing a new one, or vice versa. The result of systems analysis is a request for systems design or selection. This is a written request submitted either to the steering committee (for large projects) or to IT management (for smaller projects). The committee catalogs the request and, if it is approved, allocates money for a feasibility study.

Feasibility studies indicate the benefits to be obtained if a proposed system is purchased, leased as a service, or developed, including its operational impact. Off-the-shelf software and outsourced software development are evaluated against internal development costs and time to market. Feasibility studies:
• Identify the needs of all related parties (management, IT professionals, users) and develop metrics for future assessment (e.g., time frame, functionality, cost).
• Analyze the proposed system against:
  • Needs.
  • Defined resources (e.g., budget, personnel).
  • Additional costs and future impacts (e.g., impact on existing systems/hardware, additional training/staffing).
  • Technology trends.
  • Alignment with enterprise strategies and objectives.
• Perform cost-benefit analysis.
• Identify the best risk-based alternative (e.g., no change, development of a new system, reengineering of an existing system, purchase of an off-the-shelf product, purchase and customization, lease of online software as a service).

Feasibility study conclusions should provide the basis for a go/no-go decision. The feasibility study results require written approval of the committee or IT management. Internal auditors should be involved in the process at this point to ensure that control and auditability requirements are included in the scope of the project. Specific controls are defined in the next step.
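The cost-benefit analysis step can be made concrete with a net-present-value comparison of alternatives. A sketch; the discount rate and cash flows below are purely illustrative, not from the text:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical feasibility-study alternatives at a 10% discount rate:
# a larger in-house build vs. a cheaper off-the-shelf purchase.
build_in_house = npv(0.10, [-500_000, 180_000, 180_000, 180_000, 180_000])
buy_off_shelf  = npv(0.10, [-300_000, 120_000, 120_000, 120_000, 120_000])
preferred = "buy" if buy_off_shelf > build_in_house else "build"
```

With these illustrative figures the off-the-shelf option has the higher NPV; in practice the go/no-go decision would also weigh the qualitative factors listed above, such as technology trends and alignment with enterprise strategy.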
Systems Design/Systems Selection
Systems design occurs in two phases: high-level design and detailed design. In between these steps, prototyping (rapid creation of an experimental bare-bones system) is sometimes performed. Prototyping makes a functioning model for users to interact with; they can then suggest improvements. The prototype may have more than one revision.

High-level systems design has four steps:
1. Analyze inputs, processing, and outputs of the existing or proposed system.
2. Break down user requirements into specifics, such as support for a particular inventory valuation method or costing technique.
3. Define functional specifications to accomplish business goals, e.g., accounts receivable data updates customer credit.
4. Compare make-or-buy alternatives, including any needed configuration or customization.

Flowcharts showing the path of inputs/outputs can help clarify processing tasks and ensure that user needs are being met. Structural design can facilitate development by identifying and organizing sub-processes. At this time, data files and the database structure must also be considered, as well as how existing files and databases can be converted to the new system. If the decision is made to buy a system, systems selection begins.

Assuming approval, a detailed systems design is created both for internally developed systems and for purchased software that needs modification. This is a blueprint including program specifications and layouts for files, reports, and display screens. Planners flowchart each process, including the method of implementation and testing. Specific areas of customization are authorized (controls need to minimize this), and configuration settings are determined.
Programming and Customization/Configuration
Typically, organizations purchase "off the shelf" software. These systems should be configured rather than customized due to cost, time, and licensing considerations as well as the risk of incompatibility with newer versions of the systems. Another option is for organizations to subscribe to software hosted on a cloud-based service, which automatically keeps the software up to date with the latest version. Customization is not an option for cloud-based software, but some degree of configuration may be available. Off-the-shelf and cloud-based systems also incorporate best practices and well-developed controls and feature complete documentation.

Programmers should follow a detailed systems blueprint when writing or reusing code, debugging code, converting existing data and processes to the new system, reconfiguring and acquiring hardware as needed, and training staff. Online programming allows programmers to write and compile code using real data. It also speeds development time. However, it introduces risks that must be controlled:
• Creation of multiple versions of programs
• Unauthorized access
• Overwriting of valid code

Programmers must get sign-off from superiors at appropriate milestones. Source code must be protected during the project by a librarian.
Testing
Testing involves creating a testing plan, collecting or creating testing scenarios, executing the tests and managing test conditions, collecting and evaluating feedback, and reporting the results. Testing and quality assurance are done in two phases: unit testing and system testing. Unit (or performance) testing exercises the application in isolation to find internal bugs. It is useful to conduct unit testing as early as possible to prevent errors from affecting ongoing work in other units. System testing strings together all programs in the application to find intercommunication bugs. In addition, the new or acquired system's operation must be tested in an interface with all other systems with which data is transferred. Another type of testing, regression testing, determines the degree to which older elements of the programming are still compatible with new code revisions. Before implementation, the system faces final acceptance testing for quality assurance purposes and user acceptance.

Testing terminology includes the following:
• Debugging—checking software for "bugs," or errors in software code that can cause aberrant behavior or worse
• Load testing—examining a system's performance when running under a heavy load (e.g., a large number of simultaneous users)
• Throughput testing—validating that a system can process transactions within the promised time
• Alpha testing—conducted by developers
• Beta testing—conducted by users
• Pilot testing—a preliminary and focused test of system function
• Regression testing—confirming that revisions have corrected problems and not introduced new ones
• Sociability testing (SOCT)—testing the system in its intended environment, with actual hardware and limited resources, while running with competing and collaborating applications
• Security testing—validating the ability to control vulnerabilities

In some instances, testing may be conducted automatically during off-peak use times, thus speeding testing and development. Teams not involved in programming deliberately try to make the system fail. Security applications should be tested by deliberately trying to hack into the system. Auditors must guard against testing being shortchanged on resources, time, or attention. In addition, review of test results, identification of potential issues, and follow-up on test results are vital to ensure that testing leads to practical improvements.
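The distinction between unit and regression testing can be illustrated with Python's unittest module. The function under test here is hypothetical; the regression test pins a previously fixed rounding defect so a later revision cannot silently reintroduce it:

```python
import unittest

def invoice_total(lines, tax_rate=0.0):
    """Sum quantity * price line items, then apply tax. (Hypothetical function.)"""
    subtotal = sum(qty * price for qty, price in lines)
    return round(subtotal * (1 + tax_rate), 2)

class TestInvoiceTotal(unittest.TestCase):
    # Unit test: exercises the function in isolation to find internal bugs.
    def test_basic_total(self):
        self.assertEqual(invoice_total([(2, 10.0)]), 20.0)

    # Regression test: guards a rounding bug fixed in an earlier revision.
    def test_tax_rounding_regression(self):
        self.assertEqual(invoice_total([(3, 19.99)], tax_rate=0.07), 64.17)
```

Run with `python -m unittest` as part of the testing plan; a failing regression test signals that a new revision broke previously working behavior rather than the new feature itself.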
Conversion and Implementation
Conversion is the process of migrating any data to the new system and going "live." This area is of particular concern to audits because errors can be introduced at this point (after testing) and not detected until they cause material harm. Errors include incorrectly converted code, truncated fields, use of the wrong decimal place in calculations, or loss of records. Manual conversion is physical data entry of old records and should be avoided if possible. To reduce data entry errors, hash totals, record counts, and visual inspections should be used. Both automated and manual data migration should include a data cleansing step. Adequate preparation and training of staff and end users must be planned and implemented as well.

Implementation is turning on the new system. Management must sign off on the conversion review. Different implementation approaches can be used:
• Big bang (cutover) approaches have the entire system go "live" at the same time.
• Phased approaches are implemented by department or plant.
• Pilot approaches implement a test version and run it for a given period prior to full implementation.
• Parallel approaches run the old and new systems simultaneously for a period, requiring double entry of all transactions. This safeguards business continuity and provides independent system verification through comparison of process totals.

Regardless of the method, internal auditors should ensure that a backout procedure exists. User support, such as help desks and documentation, must be available at the time of implementation.

After implementation, the new/acquired system and project should be reviewed, using the metrics defined at the beginning of the project. Attention should focus on whether:
• The system has met user requirements (in terms of resource use and performance delivered).
• Specified controls have been created and are adequate.
• The development process was conducted in compliance with policy.
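The record counts and hash totals used to control conversion can be sketched as a before-and-after reconciliation. Here the control total is a sum over a numeric field; the field names and sample records are illustrative:

```python
def batch_controls(records):
    """Record count plus a control (hash) total over the 'amount' field."""
    return len(records), round(sum(r["amount"] for r in records), 2)

old_system = [{"id": 1, "amount": 100.50}, {"id": 2, "amount": 250.00}]
migrated   = [{"id": 1, "amount": 100.50}, {"id": 2, "amount": 250.00}]

# Totals computed before and after migration must reconcile exactly;
# a mismatch flags truncated fields, decimal errors, or lost records.
assert batch_controls(old_system) == batch_controls(migrated)
```

The same comparison underlies the parallel implementation approach, where process totals from the old and new systems are compared for a period before cutover.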
Systems Change Control, Operation, and Refinement (Feedback) Operations and maintenance are ongoing activities that continue for the life of the software. It is important that management schedule and communicate the need for system downtime for routine maintenance. Change controls can keep numerous noncritical changes from swamping productivity and budgets while allowing for problem escalation in emergencies. Changes must be approved by management, follow development standards, and be tested in a sandbox environment. Change control can also prevent unauthorized changes from being implemented. Changes might be unauthorized because they are not in the scope of currently planned work; because they require thorough design, planning, and testing before being included in updates; or because they require a technical review as part of an internal control step (e.g., to detect whether changes provide system backdoors or other opportunities for programmer malfeasance). In addition to ensuring that changes are orderly and follow required review, testing, and approval procedures, change control involves maintaining thorough documentation on each change in a change log. A system librarian is an IT role that provides control over original documentation and maintains and controls the change logs, which show how the software has changed at each version. This practice helps track down the root causes of issues and facilitates software rollbacks to prior versions as needed.
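The change log maintained by the system librarian can be modeled as an append-only record in which every entry carries evidence of approval and sandbox testing. A minimal sketch; the field names are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEntry:
    change_id: str
    description: str
    approved_by: str          # management approval required by change control
    tested_in_sandbox: bool   # evidence of pre-production testing
    version: str              # software version the change ships in

change_log = []

def record_change(entry):
    """Reject unapproved or untested changes; otherwise append to the log."""
    if not entry.approved_by or not entry.tested_in_sandbox:
        raise ValueError(f"{entry.change_id}: missing approval or sandbox test")
    change_log.append(entry)
```

An entry such as `record_change(ChangeEntry("CHG-042", "Patch AP module", "IT manager", True, "3.2"))` succeeds, while an entry with no approver raises an error, mirroring the control objective of blocking unauthorized changes while preserving a per-version audit trail for rollbacks.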
SDLC Documentation The change log is only part of the documentation produced by the traditional SDLC. Large amounts of other documentation and formal specifications—covering, among other things, the software, the related business process, security features, and backup processes—are also produced. Documentation can be a boon to auditors if it is easy to use, so it should be clear and concise and follow a structured and well-communicated methodology.
The problem with documentation and the traditional SDLC appears when a long-duration project needs to be changed due to shifting business requirements, new technologies, or releases of an application. In this case, the documentation becomes yet another hurdle, as all of it needs to be updated. Therefore the urge to fix design flaws discovered later in the process is sometimes suppressed by freezing the specifications, which could result in a less-than-useful tool. Another risk is that programmers could shirk their documentation duties, preferring to move on to the next task. Early auditor involvement and having a designated person review the documentation as it is submitted can help lower this risk. Asking developers for personal notes can help fill in some blanks. Attempting to change a system without documentation can be made even more difficult if turnover occurs. Documentation is also a control for preventing fraud, but it is useful only if all valid changes are recorded.
Rapid Application Development (RAD) Methods

The SDLC can create inefficiencies through its rigidly enforced sequence of events. Simultaneous development efforts, in which portions of the development effort are begun as soon as possible instead of waiting for a prior step to finish, are one adaptation of the SDLC. Tools such as CPM/PERT (see Section I, Chapter 2) can help determine the earliest start times and the shortest project duration. Another method is to create the new system module by module, releasing each into production as soon as it is ready. Many programmers are also employing reusable software code to speed development efforts. Rapid application development (RAD) is a set of methodologies and tools for fast software development. With RAD, users participate in design, source code can be automatically generated, and documentation is reduced. RAD uses a process called joint application development (JAD), in which an executive champions meetings between IT and other stakeholders to work out the requirements for the system rather than each working independently. Such groups often use group support software to encourage
participation. Agile development also uses frequent in-person meetings between users and developers to allow system blueprints to change during development. Agile development can reduce the risk that a long project will be outdated before it is finished. Exhibit III-4 highlights some RAD methods. Exhibit III-4: Rapid Application Development Methods
In auditing RAD projects, weaknesses to watch for include lower quality due to the emphasis on speed. Poor documentation can weaken an audit trail. Information may have been missed, and the system may function but not provide the right functions for business needs. Gold plating can occur, meaning that the project’s budget or scope has ballooned because the project has too many requirements or because too many are added during the project. Naming conventions could be inconsistent in simultaneous development. The system could have poor scalability. To demonstrate success early on, projects may favor easier systems and push the difficult ones back. All of this makes audits of faster methods more difficult than audits of formal systems.
Topic B: Internet and Database Terms (Level B) The Internet The Internet is a network of networks that have devoted a portion of their processing power and data to public use, the ultimate distributed network. The World Wide Web (www), or the web, is the largest subset of the Internet. The Internet has forever changed every aspect of our lives, including the way we do business. No longer do we exist in corporate silos, working solely on a single computer in a single office. Now, organizations can have employees working all around the world sharing information through globally interconnected systems. However, one of the problems in Internet use lies in these connections to the outside world. They can be a source of risk; organizations are vulnerable to viruses and intruders who enter their internal network of computers through transferred files or email attachments. Internet access increases the risk of inappropriate or illegal use of company assets for personal activity. Another difficulty is sorting out the good information on the Internet from its vast selection of data.
Internet Terminology

An intranet is an internal network for employees built using the tools, standards, and protocols of the World Wide Web and the Internet. Intranets empower employees by giving them remote access to company information and possibly even by giving business units responsibility over their own content. Both of these benefits, of course, require improved controls to prevent misuse. An extranet is a similar service designed for customers, external partners, or suppliers. Extranets require even greater controls over user authentication and privacy. Other Internet infrastructure terminology follows.
• 10.4 password rule. An industry recommendation for password structure and strength that specifies that passwords should be at least 10 characters long and should contain at least one uppercase letter, one lowercase letter, one number, and one special character.
• Address restrictions. Firewall rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.
• Browser. A program with a graphical user interface, or GUI, for displaying HTML files.
• Click-through. The action of following a hypertext link to a particular website.
• Cloud computing. The practice of using a network of remotely located servers hosted on the Internet to store, manage, and process data rather than storing the data on a local server or computer.
• Cookies. A package of data sent by an Internet server to a browser and then returned by the browser each time it is accessed by the same server. Cookies are used to identify users or track their access to a server.
• Data. Items of fact collected by an organization. Data includes raw numbers, facts, and words.
• Database. A collection of related data stored in a structured form and usually managed by a database management system. A database can be a physical or virtual system.
• Domain name. A plain language label referring to a numeric IP address.
• Domain name system (DNS). A hierarchical server network that maintains the domain names for conversion to IP addresses. Per US NIST guidance, DNS security extensions authenticate the origin of DNS data, ensure data integrity, and provide authenticated denial of existence.
• Electronic data interchange. The transfer of data from one computer system to another by standardized message formatting, without the need for human intervention. EDI permits companies to exchange documents electronically.
• Email. Electronic messages.
• Field. A part of a record that represents an item of data.
• File Transfer Protocol (FTP). A protocol that allows transfer of large files between computers on a network or the Internet.
• Hacker. A person who accesses systems and information, often illegally and without authorization.
• HTML. Hypertext Markup Language, a standardized system for tagging text files to achieve font, color, graphic, and hyperlink effects on Internet pages.
• HTTP/HTTPS (Hypertext Transfer Protocol/Secure HTTP). Regular and encrypted versions of the communications standard for Internet message formatting and transmission.
• Instant messaging. Text message services that can be co-opted by hackers as an avenue for remotely controlling user computers.
• Internet protocol (IP) address. Numeric address for a specific computer located on the Internet, e.g., 128.6.13.42.
• Object. A data construct that provides a description of something that may be used by a computer, such as a processor, peripheral, document, or data set, and defines its status, method of operation, and how it interacts with other objects.
• Record. A number of related items of information that are handled as a unit.
• Schema. A representation of a plan or theory in outline form.
• Telnet. One way of gaining remote control over a computer.
• Uniform Resource Locator (URL). The combination of transfer protocol, domain name, directory path, and document name. (See domain name system [DNS] above.)
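As an illustration, the 10.4 password rule defined above can be checked programmatically (a minimal Python sketch):

```python
import re

def meets_10_4_rule(password):
    """Check the 10.4 rule: at least 10 characters, with at least one
    uppercase letter, one lowercase letter, one number, and one
    special character."""
    return (
        len(password) >= 10
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

assert meets_10_4_rule("Str0ng!Pass99")
assert not meets_10_4_rule("short1!A")        # too short
assert not meets_10_4_rule("alllowercase1!")  # no uppercase letter
```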
Internet Structure

The Internet backbone is a series of high-capacity trunk lines owned and operated by network service providers (e.g., long-distance telephone companies or governments). The remainder of the backbone is owned by regional telephone and cable organizations, which lease access to organizations and Internet service providers (see below). The points of connection between the backbone and the regional areas are called network access points (NAPs) or metropolitan access points (MAPs). Other than this physical infrastructure, the Internet is neither owned nor managed. Internet organizations such as the World Wide Web Consortium (W3C) set programming standards and protocols, but they do not control the Internet; they just work to improve its efficiency or security. Because no central body controls the Internet, some nations heavily regulate or outright ban its use. An Internet service provider (ISP) is an organization that provides connection to the Internet via a TCP/IP (Transmission Control Protocol/Internet Protocol) connection or provides network services (IP network). Control issues for ISPs include choosing a reliable service from a reputable organization to minimize the risk of business interruptions. Use of an IP network is inexpensive, but because the data flows over the Internet, the company’s data is only as secure as its encryption. Broadband involves high-speed transmission methods over a single high-capacity medium or multiple lower-capacity media. Broadband access includes satellite, cable modem, and digital subscriber lines (DSL). Narrowband refers to standard telephone/modem service. Controls to protect intranets or extranets include using a virtual private network (VPN).
Browser Security Browsers, like other applications, have bugs and associated patches. Browser security flaws create vulnerabilities for attack; upgrade processes
must be controlled. Even when browsers are up-to-date, a number of security risks still exist. Internal auditors and/or their designated IT auditor counterparts should be aware of such risks and be able to identify gaps or related control weaknesses. In general, administrators should disable all unnecessary browser features. Pages using active content languages such as ActiveX or Java allow more interactivity but could conceal malicious code in their scripts, which operate independently of the browser. Java, for example, operates in a sandbox environment that limits interaction with the rest of the system, but this protection can be compromised by attackers. Active content that runs inside the browser, or plug-ins, should also be treated as suspect. Many organizational sites block such interactivity and allow viewing only in plain text. Websites create cookies on a user’s computer, which, as we learned earlier, are used to identify users or track their access to a server. In general, administrators should allow cookies only for “trusted” sites, or sites allowed normal access. Other browser security measures include using a utility program to block pop-up windows, which could contain malicious programs. Administrators should set browser security for external sites to “high.” With this setting, administrators need to define trusted sites. These will often include only Secure Sockets Layer (SSL) or HTTPS sites that can verify their authenticity, plus a few sites such as the operating system software provider’s update site. Other trustworthy sites should still not be set as trusted in case they are compromised. An example of such a compromise is cross-site scripting, in which an attacker injects malicious script into a vulnerable site so that the script executes with that site’s trusted status. Intranet sites can have lower security, but this content is not immune from attacks. Finally, although it isn’t a complete control, a set of unsafe sites can be designated as restricted.
Management should perform ongoing monitoring to ensure that the restricted list is expanded over time as additional unsafe sites are identified. Internal auditors should be aware of whether management has adequate controls in place to identify or restrict unsafe sites.
Web Services and Service-Oriented Architecture (SOA)
Web services use open Internet protocols and standards to create standalone, modular software services that are capable of describing themselves and integrating with other similar services. Web services work independently of platform, operating system, or computer language, and the offerings of other providers can be leveraged without any middleware. Web services can work with traditional applications by creating a universal wrapper around the message content. They speed software development efforts because common services such as a credit check tool can be found on a registry server. Web services are especially good for making automated or one-time connections such as with trading partners. A service-oriented architecture (SOA) is a software system design that allows for sharing of web services as needed. A service consumer sends out requests for services to service providers, which either provide the service or forward the request. SOA has an architectural goal of loose coupling, which means that the data is separated from the application and each service says what it needs another service to do, not how to do it. Advantages include the ability for remote users to access ERP systems using mobile devices and for various applications to work together to synthesize data into information faster. In addition, developers have easier and faster upgrades. SOA packages include Microsoft .NET as well as offerings from IBM® and each of the ERP vendors. What does this all mean for internal auditors? Despite the many advantages of this set-up, control issues abound. Internal governance models that were created for traditional software will not suffice and will need to be reengineered. This is especially true if the organization must comply with the rules of Section 404 of the U.S. Sarbanes-Oxley Act or an international equivalent on internal controls. The openness of SOA creates new risks to internal controls.
For example, in a traditional IT system, segregation of duties would safeguard electronic sales documents by creating barriers between the sales, credit, and billing modules. The barrier would rely on logical access controls and role-based access to lock out unauthorized users. Customers entering through a web portal would be assigned a customer role and a
temporary unique ID. Furthermore, their access would be restricted to the web portal, and moving further would require knowledge of the proprietary interface that resides between the portal and the rest of the ERP system. Customers could create a purchase but not modify it or change their credit. In SOA architecture, all modules such as sales, credit, billing, and the general ledger are web services connected to the web. The system would still have a firewall and other protections, but the SOA would be like a trunk line to which each set of modules and databases is connected. The entire ERP system would become a web service. Now the customer’s ERP system gets approval for and establishes a direct link to the organization’s ERP system. The two parties can automate their trading. Therefore, some of the segregation of duties created by user interaction will be missing. A compensating control is to designate the machine or system making the interface as a user in its own right, with its own role-based access. The ID of the user commanding that “user” also needs to be mapped to prove compliance with controls (e.g., nonrepudiation, authentication, segregation of duties). Auditors may need to seek external assurance that the SOA system can either authenticate the external system, the system user, and the user’s role or deny all service. In the worst-case scenario, an organization with this set-up could conceivably allow the SOA modules, such as the general ledger, to communicate over port 80, which is an open channel that bypasses the firewall for direct Internet access. Any service anywhere could then modify the general ledger. Horrifying as this seems, it is how some systems have been set up. Greater emphasis must be placed on application level controls than with a traditional set-up. General audit recommendations include implementing SOA in stages, starting with nonfinancial business functions. 
The organization can then assess risks and upgrade controls using less-sensitive data.
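The compensating control described above (designating the connecting machine as a user with its own role-based access and mapping the ID of the commanding human user) might be sketched as follows; all role names and permissions here are hypothetical:

```python
# Each external system gets a role with narrowly scoped permissions:
# a customer system may create an order but not modify it or change credit.
ROLE_PERMISSIONS = {
    "customer_system": {"create_order"},
    "billing_clerk": {"create_invoice", "view_order"},
}

def authorize(system_id, role, action, commanding_user, audit_log):
    """Allow or deny an action, logging system, mapped human user,
    role, action, and outcome to support nonrepudiation and
    segregation-of-duties reviews."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append((system_id, commanding_user, role, action, allowed))
    return allowed

log = []
assert authorize("partner-erp-01", "customer_system", "create_order", "u.patel", log)
assert not authorize("partner-erp-01", "customer_system", "modify_credit", "u.patel", log)
```

The audit log is what lets an auditor trace each machine-initiated action back to an accountable human user.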
Databases A database is any repository of data in a computer system. A database management system (DBMS) is an application that links users and
programs to the database and allows the database to be manipulated by multiple applications. A DBMS serves as a buffer between the various applications or modules and the database. Database and DBMS combinations are often just called a database. Skilled database administrators are required to keep a DBMS working. Audit uses of databases include:
• Audit programs.
• Risk and control inventories.
• Personnel data related to staff members.
• Departmental or organizational fixed assets.
• Record retention.
• Histories of audit findings.
• Data on organizational units or audit sites.
Advances in software and hardware technology make ever-larger databases possible, allowing storage of graphics, audio, and video as well as documents. Databases that are shared among multiple applications, such as an ERP system’s database, have more robust controls than a series of databases for each application, because the database can be centrally located and fewer avenues of access need to be protected. Data can be used in strategic analysis, redundant files are eliminated, modifications are easier, and standards and a framework for control can be applied consistently. Because data is independent of the applications, applications gain some consistency and ease of programming. Another option is to use a distributed database system such as a cloud, which creates a virtual centralized database. This has the advantages of a single source of data storage and geographic diversification to reduce some risks, but it creates its own set of control risks, especially if the distributed database is outsourced and the organization therefore cannot maintain
complete control over the data. Management and oversight must typically be increased in such scenarios, and consideration must be given to the countries in which the data is stored. Some countries have fewer intellectual property protections or less enforcement, for example. In either method, controls must be put in place to limit access to sensitive data by user role, such as allowing only payroll personnel access to payroll files. A key assurance coverage activity for internal auditors is a review of user access controls. Other database drawbacks include greater complexity and expense and the fact that failure of the database or the transmission method to and from the database can halt all computer work. Use of backup procedures is vital. Auditors need to understand how DBMSs are structured, including the underlying rules used to ensure proper controls.
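As a simple illustration of such underlying rules, the sketch below (with hypothetical tables) tests two basic database integrity controls: primary keys must be unique and non-blank, and every relational link must lead to an existing record.

```python
def check_integrity(table, key_field, links=()):
    """Check basic integrity rules for a table of row dicts.
    links: (link_field, referenced_table, referenced_key) triples."""
    errors = []
    keys = [row.get(key_field) for row in table]
    if any(k in (None, "") for k in keys):
        errors.append("blank primary key")
    if len(keys) != len(set(keys)):
        errors.append("duplicate primary key")
    for link_field, ref_table, ref_key in links:
        valid = {r[ref_key] for r in ref_table}
        if any(row[link_field] not in valid for row in table):
            errors.append("dangling link on " + link_field)
    return errors

# Hypothetical tables: every sale should link to an existing customer.
customers = [{"id": 1}, {"id": 2}]
sales = [{"sale_id": 10, "cust_id": 1},
         {"sale_id": 11, "cust_id": 3}]   # customer 3 does not exist
assert check_integrity(customers, "id") == []
assert check_integrity(sales, "sale_id",
                       links=[("cust_id", customers, "id")]) == ["dangling link on cust_id"]
```

In a production DBMS these rules are enforced declaratively (primary key and foreign key constraints); an auditor might run checks like these against extracts when constraints cannot be relied upon.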
Database Terminology

Databases are at the top of a hierarchy: bit, character, field, record, file, database. Each item listed is a larger and larger grouping of data. A bit is a binary digit, a character is any alphanumeric key, a field is a business object such as a name or an asset, a record is a logical grouping of fields, a file is a collection of related records, and a database is a collection of files. When a record relates to a person, place, or thing (i.e., a noun), it is called an entity. The fields relating to entities are called attributes. An employee entity would have a first name attribute. A key field is the field used to identify an entity, such as employee number. Data items are the specific data in fields, while a primary key is a unique key field value (i.e., a proper noun) used to identify a specific entity, such as employee ID #12345. Other database terminology includes the following:
• The data definition language describes the data and the relationships between data in a database, including logical access paths and records.
• Schema and subschema contain the specifics. Schema, from “schematic,” are the overall rules for the database; subschema are files describing a portion of a database, including authorized read-only/full access users.
• The data dictionary is a master record concerning the data in the database (metadata), e.g., pseudonyms, lists of users responsible for maintenance, ranges of values, and other controls. Auditors can use the data dictionary to check facts if it is up-to-date.
• The data manipulation language has commands for viewing or changing the database.
• The data query language is a user-friendly method of querying the database for information. Ad hoc queries are possible. A popular language is structured query language (SQL), which allows users to select data from a particular location and qualify search parameters.
Relational Databases

Older database types were rigid and resulted in data redundancy. A spreadsheet, for example, is a flat database, which is fine for simple single-user work but becomes untenable for vast amounts of data. Most databases are now relational databases, so this is the only type discussed here. A relational database is a DBMS that is arranged into two-dimensional files called tables, with links between tables that share a common attribute. A table, or relation, is a file with rows and columns similar to a spreadsheet. Each table contains a business entity such as those in Exhibit III-5—CUSTOMER, SALES_TRANSACTIONS, or ACCOUNTS_RECEIVABLE. Any row in the table is an entity (also called a tuple), while columns contain attributes for entities. Exhibit III-5: Relational Database
The key to a relational database is that any particular data field is entered in only one place in the database and then its relationships are mapped to all relevant tables. Links are the relationships between tables or within the same table that share at least one common attribute. The exhibit shows CUSTOMER_NUMBER and SALES_NUMBER attributes linking tables. As many links are created as necessary, as shown with the PART_NO attribute linking to the PART table. Relational databases require more processing power than older types, but they provide more useful ways of manipulating data. Using a data query language such as SQL, a manager could create a query that eliminated irrelevant rows (entities) from the report, called selecting. Or he or she could pare down the number of columns to make the data more relevant, called projecting. Finally, a query could combine data from two or more tables, called joining. Relational databases are intuitive and allow new links or relationships to be formed easily without reprogramming the model.
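The three query operations just described can be demonstrated with SQLite and a stripped-down version of the exhibit’s tables (the column names here are illustrative, not the exhibit’s exact attributes):

```python
import sqlite3

# In-memory sketch of CUSTOMER and SALES_TRANSACTIONS tables,
# linked on their shared attribute, customer_number.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (customer_number INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE sales_transactions (
    sales_number INTEGER PRIMARY KEY,
    customer_number INTEGER REFERENCES customer(customer_number),
    amount REAL
);
INSERT INTO customer VALUES (1, 'Acme', 'West'), (2, 'Globex', 'East');
INSERT INTO sales_transactions VALUES (100, 1, 250.0), (101, 2, 90.0);
""")

# Selecting: keep only the rows (entities) that match a condition.
west = con.execute("SELECT * FROM customer WHERE region = 'West'").fetchall()

# Projecting: keep only the columns (attributes) of interest.
names = con.execute("SELECT name FROM customer").fetchall()

# Joining: combine tables on their shared attribute.
joined = con.execute("""
    SELECT c.name, s.amount
    FROM customer c JOIN sales_transactions s
      ON c.customer_number = s.customer_number
""").fetchall()
```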
Batch Versus Real-Time Processing
Batch processing is the processing of records at specific intervals; real-time processing is the processing of a record as soon as it is submitted. Batch processing can be less expensive and is therefore still used for many types of data. Batch controls can be more robust. Real-time processing is used for data that could have a real effect on company efficiency if received immediately, such as inventory levels. Halfway between the two is memo posting, used by banks and others for financial transactions, which creates real-time entries that are posted to a temporary memo file. The memo file allows the updated information to be viewed; at a designated time, the memo file is batch-processed to update the master file. This way, data is available immediately for viewing, but batch-processing controls are applied before the changes become permanent.
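Memo posting can be sketched in a few lines (a toy illustration with hypothetical accounts): entries are immediately visible through the memo file, but the master file changes only when the batch run, with its record-count control, executes.

```python
master = {"ACCT-1": 500.0}  # the master file
memo_file = []              # temporary memo entries, visible immediately

def post_memo(account, amount):
    memo_file.append((account, amount))

def current_balance(account):
    # Viewers see the master balance plus pending memo entries.
    pending = sum(amt for acct, amt in memo_file if acct == account)
    return master.get(account, 0.0) + pending

def batch_post():
    # Batch control: record count of memo entries actually processed.
    expected = len(memo_file)
    processed = 0
    for account, amount in memo_file:
        master[account] = master.get(account, 0.0) + amount
        processed += 1
    assert processed == expected
    memo_file.clear()

post_memo("ACCT-1", -120.0)
assert current_balance("ACCT-1") == 380.0  # visible before the batch run
batch_post()
assert master["ACCT-1"] == 380.0           # permanent after the batch run
```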
Database Controls

Database controls focus on maintaining the integrity, efficiency, and security of the database. As with other major aspects of IT controls, properly prioritizing the review of database controls is an important planning task. As is appropriate and applicable, internal auditors may wish to do this in coordination with designated IT audit professionals. This helps ensure adequate overall assurance coverage in this area. A review of database controls may involve:
• Enforcing attribute standards and ensuring that data elements and database relationships are accurate, complete, and consistent.
• Managing concurrent access to the same data by different users to maintain data integrity and availability.
• Integrity controls to ensure that all primary key fields are unique and none are left blank and that all relational links lead to where they should.
• Protecting against loss of data during processing through the use of data checkpoints and restart routines.
• Protecting against loss of stored data through specified backup routines.
• Optimizing database size and efficiency by periodic reorganization
(confirming that all data relationships remain accurate and functional).
• Managing access to ensure proper authorization for access to data and rights to update the database and to restrict access by those outside.
• Monitoring and reporting on database performance and conditions.
The following are controls in specific areas:
• Access management. Organizational databases support role-based access, so each user should be assigned a role and a unique ID and password to enforce accountability. Various areas of the database should be segregated by checkpoints, such as the payroll area. Fine-grained access control restricts access at the level of the data itself. In a relational database, attributes (columns) can be programmed with the controls, such as numerical checks, range tests, or drop-down menu choices. The attribute’s domain is the description of all of its controls. Schema, subschema, tables, rows, and views can also have similar protections. A view is like a stored query, or a presentation of data from several tables. Changing data in a view changes the data in the underlying tables.
• Performance monitoring. Regular audits are an integral part of database controls. Audits should review any data needing extra access controls and verify that the controls are functioning properly. Audit procedures should be designed to include an “alarm bell” that is triggered when access or other controls fail.
• Database maintenance/utility programs. Database maintenance is the use of transaction processing systems to add, delete, review, monitor, or change data. For example, this could be customer or account profile maintenance changes. For management, there should be some form of segregation of duties in reviewing any maintenance changes, and internal auditors may need to provide related assurance coverage. Maintenance change access should typically be segregated from traditional transaction processing operator access.
Internal auditors should be aware of such potential user access conflicts. Independent utility programs such as data cleansing tools (see below) can monitor a database for inconsistencies.
• Data cleansing. Data cleansing is the removal of redundancies and errors in a database. It is vital when two or more databases are integrated, such as for integration with an external partner. Data cleansing may be outsourced or kept in house. It is not a one-time affair but a regularly scheduled process. The following are data cleansing terms:
  • Concatenation is linking fields and columns.
  • Standardization is expanding abbreviations and the use of common terms, prices, and units of measure.
  • Taxonomy is the use of standard names, while normalization is the application of taxonomy standards such as the United Nations Standard Products and Services Code (UNSPSC).
  • Deduping removes duplicate data, such as one supplier with two records.
  • Categorization puts items in classes and groups for proper aggregation.
  • Enhancement is combining additional internal and external data sources to improve data quality.
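Two of these steps, standardization and deduping, can be illustrated with a toy sketch (the abbreviation table and supplier records are hypothetical):

```python
# Standardization: expand abbreviations so the same supplier is
# spelled the same way everywhere.
ABBREVIATIONS = {"corp.": "corporation", "intl": "international"}

def standardize(name):
    words = name.lower().split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

# Deduping: remove duplicate records that standardize to the same name.
def dedupe(suppliers):
    seen, cleaned = set(), []
    for s in suppliers:
        key = standardize(s["name"])
        if key not in seen:
            seen.add(key)
            cleaned.append({**s, "name": key})
    return cleaned

suppliers = [
    {"name": "Acme Corp.", "terms": "net 30"},
    {"name": "ACME Corporation", "terms": "net 30"},  # same supplier, second record
]
assert dedupe(suppliers) == [{"name": "acme corporation", "terms": "net 30"}]
```

Real cleansing tools apply fuzzier matching than exact string comparison, which is one reason cleansing is a scheduled process rather than a one-time step.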
Data Warehouses Data warehouses are databases designed to collect the information from one or more transactional databases for purposes of multiyear storage of records, planning and analysis, and reporting. Queries to databases can generate pertinent information for planning and decision making but can also slow down the transactional database due to the processing power required. Data warehouses are critical for organizations that have grown through merger and acquisition and cannot always integrate all of their transactional databases in a cost-effective manner. Queries regarding the entire organization can be done with a data warehouse. Data warehousing can provide management with an array of reporting and monitoring capabilities. Internal auditors should be aware of the capabilities of data warehouse tools and how much reliance management is placing upon them. A core control objective is to ensure the completeness and integrity of warehouse data coming from applicable source system(s).
A data mart is a subset of a data warehouse or database that contains focused information for a particular function, such as customer relationship management. Virtual databases are data partitions in a database, i.e., a virtual data mart.
OLAP and Data Mining Online analytical processing (OLAP) is software that allows multiple perspectives for a set of data to be analyzed. Analysis of complex data is fast, and users can aggregate the data into useful information. OLAP draws a set of data to the user’s computer and allows the user to manipulate that data in multiple ways or dimensions without having to perform a new query. This is useful because querying a data warehouse can often involve some delay. With OLAP, data can be compared in three or more dimensions, such as sales by item, sales by region, and actual versus planned sales. OLAP allows these multidimensional databases to be rotated to show different relationships and sliced and diced or drilled down or up in aggregation level. Data mining software is designed to look for unforeseen similarities and correlations among large amounts of seemingly unrelated data. An internal auditor could use a data mining tool to look through every record in a set of data for potential fraud.
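The idea of viewing the same facts along different dimensions can be sketched without any OLAP software (a toy illustration with hypothetical sales facts):

```python
from collections import defaultdict

# One set of facts, analyzed along different dimensions without
# issuing a new query to the data warehouse.
facts = [
    {"item": "widget", "region": "East", "actual": 120, "planned": 100},
    {"item": "widget", "region": "West", "actual": 80,  "planned": 110},
    {"item": "gadget", "region": "East", "actual": 200, "planned": 180},
]

def rollup(dimension):
    """Aggregate actual and planned sales along one dimension."""
    totals = defaultdict(lambda: {"actual": 0, "planned": 0})
    for f in facts:
        totals[f[dimension]]["actual"] += f["actual"]
        totals[f[dimension]]["planned"] += f["planned"]
    return dict(totals)

by_item = rollup("item")      # sales by item
by_region = rollup("region")  # sales by region: the data "rotated"
assert by_item["widget"] == {"actual": 200, "planned": 210}
assert by_region["East"] == {"actual": 320, "planned": 280}
```

Real OLAP tools precompute such aggregates across many dimensions at once, which is what makes slicing, dicing, and drilling fast.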
Topic C: Key Characteristics of Software Systems (Level B) This topic looks at operating systems along with software for customer relationship management (CRM), enterprise resources planning (ERP), and governance, risk, and compliance (GRC).
Operating Systems

The operating system (O/S) is the software that, in essence, runs the computer. Microsoft Windows, Unix, Linux, and the Mac OS are examples of operating systems. The operating system mediates between the computer hardware and the applications accessed by a user. Windows allows the user to create a slide show in PowerPoint or a spreadsheet in Excel. Different operating systems, or different versions of an operating system, may be appropriate for different types of computers. Operating systems exist for computers, devices, servers, and mainframes. Computers also have a basic input/output system (BIOS), hardware-resident firmware that initializes the hardware and starts the software O/S. The O/S performs a variety of critical functions that coordinate hardware (e.g., keyboard and mouse, display, scanner, webcam, microphone), memory storage and access (both internal and external), processing tasks, and network communication. These include:
• Creation of a user interface for user interaction with the computer—e.g., a graphical user interface (GUI).
• Operation of computer hardware (e.g., keyboard).
• Communication with application programs.
• Allowing network capabilities (sharing of network peripherals and data).
• Managing memory to ensure rapid retrieval of data and processing.
• Scheduling of resources.
• File management and tracking. (The O/S tracks where files are stored and who is allowed to view/change them.) • Control of access—even by multiple simultaneous users. (Access controls restrict access by ID/password and keep a log of users, the duration of their use, and any attempted security breaches.) • System recovery after failure. Process management includes allocating resources to processes and users for multiprogramming/multitasking (simultaneous tasks). Memory management determines how much random access memory (RAM) and virtual memory to allocate to an application and locates data by its physical address (location in memory) given a logical address (the address label used by the program). Applications interact with the O/S via an application program interface (API), which can be programmed without needing to understand hidden O/S features. The auditor should pay special attention to operating systems, since a crashed operating system can leave a great many employees without access to their work (if it’s on a mainframe or a network). Auditors of operating systems should be IT audit specialists. Internal auditors reviewing the controls over operating systems security face the challenge that such systems are continually evolving, requiring continuous training. General areas of review include monitoring change procedures, checking that O/Ss are up-to-date, determining if system programmers have enough training, checking the update status of O/S tables (e.g., employee information), and ensuring that an adequate system of error tracking exists. O/S controls include error notification for failed hardware and detection of abnormalities in the system using ZAP programs, or programs that change or fix data or programs but can bypass security safeguards and may not leave an audit trail. Two examples are data handling utilities (DHUs) and data file utilities (DFUs) (e.g., registry fix applications).
These utilities are designed to automatically correct some errors caused by ABENDS (abnormal endings), crashes, and data corruption. They can make changes to files without the use of processing programs. Sometimes, no record of the changes or transactions is kept, creating a potential source for errors or
opportunities for abuse. Restricting access to system programmers who must get approval and provide documentation for each use is one method of controlling use. Security software may or may not detect when these utilities are used. Internal auditors may need to assess the potential impact of audit trail limitations. Another area to control is changes to operating systems, usually by update or replacement. Operating system programmers should not be allowed to perform applications programming, because they could commit and conceal fraud. Because an O/S affects an entire data center, it is high risk, and programming should be performed in a sandbox area first or done at night with a backout plan available to reverse the changes. A log of all changes is key. Sometimes O/Ss are customized with software called “hooks,” and these will need to be reinstalled at each upgrade.
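Of the O/S functions described above, memory management's translation of logical addresses to physical ones lends itself to a compact sketch; the page size and page-table contents below are purely illustrative assumptions.

```python
PAGE_SIZE = 4096  # bytes per page (illustrative)

# A toy page table: logical page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Map a logical address to a physical one, as an O/S memory manager might."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # logical page 1, offset 4 -> physical frame 2
```

The application sees only logical addresses; the O/S resolves them to physical locations (or raises a fault so it can load the missing page), which is why applications need no knowledge of where their data physically resides.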
Customer Relationship Management Software Customer relationship management (CRM) is an operating philosophy of putting the customer first, a set of methodologies to support this philosophy, and software that supports this philosophy by enabling consistent and well-informed interactions with customers. CRM software can be installed or cloud-based. The general intent of such software is to ensure that all customer information is consolidated and organized for ease of access so that all contacts with the customer, from salespersons to customer service and beyond, can see information on the latest communications with that customer, the status of orders or complaints, and preferences or special needs. Often this type of software tracks prospective customers in the sales pipeline, prompts salesperson follow-up based on territories, helps prepare and release emails or other marketing materials, performs lead scoring and customer segmentation, helps manage quotes or responses to requests for proposal, and automatically moves converted leads into customer accounts. The systems may contain customer chat room features with logs of these communications, social media integration, and mobile access features. Auditing CRM software could involve operational audits of efficiency and
effectiveness. This could include auditing a CRM software implementation project to determine if it is meeting its objectives, auditing the true costs of customization, whether the system exceeds the organization’s actual needs, or whether an existing system will continue to scale with the business into the future. Audits could also be done to uncover the root cause(s) of inefficiencies or problems, and such audits may also need to address whether underlying processes are compatible with the software and enable the CRM philosophy of being customer-friendly. Often the root cause is an outdated process that is confusing, unnecessary, or contradictory to the method used in the software. Lack of training could also create inefficiency or ineffectiveness. Assurance audits could look at the security of customer data, the status and frequency of backups, the availability and quality of audit trails (some systems have a maximum number of data fields that can be tracked per object), and whether the system complies with privacy regulations such as the GDPR. IT audits will be specific to the type of software, but here are examples: • Customized fields that rely too much on coding and not enough on formulas • Validation rules that require filling out all fields (prompting users to enter “junk” data) • Profusion of checkboxes and dropdown lists for system administrators to maintain • Systems with numerous “mystery fields” or objects that are no longer used • Systems with too many report types that lead to poor maintenance and confusion • Screens that are complex and/or require endless scrolling • System data and metadata that need to be checked for integrity and usefulness
Enterprise Resource Planning (ERP) Software An enterprise resource planning (ERP) system is installed or cloud-based software designed to have a module for every business process at the organization (accounting, sales, CRM, warehousing, finance, human resources, etc.). A key advantage of ERP software is that there is a single integrated database at its core, so there are no duplicate records or different versions of the “truth.” The records are updated frequently or in real time. For example, if one salesperson sells the last unit of a particular type of inventory, the next salesperson will see that there is a stockout. Furthermore, interactions between business processes are fully integrated and automated, so internal controls can be configured into the system. For example, a segregation of duties/dual control in the system would not allow the same person to create and approve a purchase requisition. The requisition would be automatically forwarded to the supervisor as an electronic requisition. The supervisor would be able to drill down into the details of the prior parts of the transaction as needed prior to granting approval. ERP software implementations are multimillion-dollar endeavors, and updates to new versions mean that expenses can recur often, especially if the organization has decided to customize the system. Even configuration takes significant time and expense. ERP software is therefore high stakes, and identifying ways to improve the efficiency or effectiveness of these systems can result in huge cost savings. All of the types of operational or assurance audits discussed for CRM also apply to ERP software, only on a larger scale.
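The segregation-of-duties control described above can be sketched as a simple check an ERP system might enforce; the requisition fields and user names are illustrative assumptions, not any vendor's actual data model.

```python
class ApprovalError(Exception):
    """Raised when a configured control (here, segregation of duties) is violated."""

def approve_requisition(requisition, approver):
    """Dual-control check: the creator of a requisition may not approve it."""
    if approver == requisition["created_by"]:
        raise ApprovalError("segregation of duties: creator cannot approve own requisition")
    requisition["approved_by"] = approver
    return requisition

req = {"id": 1001, "created_by": "alice", "amount": 2500.00}
approve_requisition(req, "bob")  # a different person approves: allowed
print(req["approved_by"])        # bob
```

Because the control lives in the system itself rather than in a manual procedure, it is applied to every transaction automatically and leaves an electronic record the supervisor (or an auditor) can drill into.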
Governance, Risk, and Compliance (GRC) Software Governance, risk, and compliance (GRC) software enhances existing governance, risk, and compliance frameworks and programs. The software is intended to automate many of the documentation and reporting activities related to risk management and compliance activities, so the end users of such software include audit committees, executives, internal auditors, and
risk and compliance managers. Internal auditors use GRC software to manage working papers, schedule audits and audit tasks, manage their reporting and time management requirements, and access and review applicable organizational GRC documentation as part of ongoing assurance and consulting activities. Risk and compliance managers can create, review, distribute, and update policies and map them up to business objectives and down to risks and controls. Compliance professionals in particular can use the software to document, visualize, and report on control objectives, controls, and related risks as well as control self-assessments. Risk management professionals can use GRC software for the identification and analysis of risks in a consolidated view that simplifies communication and reporting. The systems may also have data analytics, such as credit risk or market risk tools. While GRC software is available in cloud-based systems, adoption of this method is slower than with installed software, partly due to privacy concerns. Again, much of the same auditing for operational efficiency and effectiveness can be done as was discussed for CRM, perhaps with a focus on whether data analytics are being harnessed sufficiently and are meeting objectives and goals for insight generation. Security and privacy can be prime areas for audits.
Chapter 2: IT Infrastructure and IT Control Frameworks Chapter Introduction System infrastructure is part of the design of an information system. The primary components of a system infrastructure are the database and its management system; networks such as a local area network or telecommunications networks; hardware, including workstations, servers, and mainframes; software, including operating systems and general software; and configuration, which refers to set-ups such as a client/server configuration of computers or the use of a cloud. While configuration is not addressed further in these materials, this chapter addresses the other components just listed. In addition, there is content on IT control frameworks and IT job roles.
Topic A: IT Infrastructure and Network Concepts (Level B) Clients and Servers The client-server model is one in which servers provide storage and processing power to a number of clients, which are workstations or other devices such as printers on the network. Client workstations (also called microcomputers) include desktop or laptop computers. (For example, personal computers [PCs] available from multiple manufacturers and Macintosh® computers [Macs] from Apple Computer® are varieties.) Workstations have their own processing power and memory and can stand on their own, but, in the client-server model, they rely on a network connection to other workstations, servers, and peripherals to generate capabilities beyond what the workstation could provide on its own. The decision rule as to what should be on the workstation versus what is on the server is to put any applications dedicated to a single user on the workstation while all resources that need to be shared are on one or more servers. Servers are powerful, specialized computers with much larger memories, multiple processors, and other dedicated hardware as well as dedicated backup systems and protocols. Servers provide specialized services to multiple internal and/or external clients simultaneously and often serve specialized functions such as a web server or an internal database and host for a powerful and complex shared application. Workstations also may have specialized functions in an organization’s information system, for example, data-entry workstations, end-user workstations such as for accounting, computer-aided audit testing (CAAT) workstations, or computer-aided design (CAD) workstations. Workstations may also be connected to several terminals as part of a mainframe system or may serve as the central computer for smaller organizations. Hand-held computer devices may also be considered workstation equipment. Some hand-held devices may be specialized (e.g., for data input only),
while others have a full range of functionality. Many such devices are specialized for a certain organizational function, such as directing a warehouse worker on how to pick inventory or to assist in retail sales. These devices may have specialized interfaces such as bar code readers or RFID readers. Both servers and clients (workstations and peripherals) are part of the hardware of the IT system infrastructure and therefore may be included in audits of hardware controls.
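The client-server division of labor above can be sketched with a minimal shared service over a network socket; the loopback address, ephemeral port, and the "upper-case echo" service are illustrative assumptions, not a real deployment.

```python
import socket
import threading

def serve_once(sock):
    """A minimal shared service: accept one client and echo its request upper-cased."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

# The server binds to an ephemeral local port so the sketch is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The client (a workstation) requests the shared service over the network.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"inventory status")
    reply = client.recv(1024)
server.close()
print(reply)
```

The decision rule in the text maps directly onto this split: the client keeps only what serves its single user, while anything shared (here, the echo service) lives on the server and is reachable by any number of clients.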
Networks Networks are needed to enable the client-server system to operate. A network consists of physical wires and wireless data transmission hardware as well as other dedicated hardware and software. A common type of network is a local area network (LAN), which is a network that can be physically interconnected using wires or fiber optics. This implies a reasonable geographic limit, although a wireless LAN, which uses wireless networking technology, may extend this range somewhat. When a network is distributed over a wider geographic area, such as among several campuses in different regions, it is called a wide area network (WAN). A variant that has evolved in part due to the larger number of remote workers is a virtual private network (VPN), which is a set-up of software and/or hardware to allow remote users a secure (encrypted) channel for accessing a network as a full-fledged internal user with appropriate role-based access. Another option for trusted internal users who may or may not be remote is to set up an intranet site, or a website designed to provide some shared services to internal users such as time card entry or the ability to check sales and inventory information. These are in contrast to parts of a network that are designed to allow limited access to external users, such as accessing a public website that an organization hosts or a password-controlled extranet site for use by external business partners. The Internet itself is essentially a network of networks, and certain hardware is needed to make this or things like WANs possible. One of these is a gateway, which is hardware and related software that provides a
common interface for two dissimilar networks so they can intercommunicate. Another example of network hardware is a router. A router is hardware and associated software that decides how to route network or Internet traffic to ensure efficiency and security based on a routing protocol. The Open Systems Interconnection (OSI) seven-layer model shows how networks comprise systems and related controls and protocols that need to be at the correct level to enable robust security and efficient networking. These layers are as follows: • Layer 7 (top layer): Application layer, i.e., where software resides. • Layer 6: Presentation layer, i.e., how application data is encoded while in transit. • Layer 5: Session layer, i.e., control of dialogue between end systems. • Layer 4: Transport layer, i.e., enabling reliable end-to-end data transfer; firewall location. • Layer 3: Network layer, i.e., routers, switches, subnetwork access, and firewalls. • Layer 2: Data link layer, i.e., data transfer across a single physical connection (or a series of connections). • Layer 1: Physical layer, i.e., wires, wireless devices, and means to activate connections. Network security issues include threats to the physical security of the wired connections and/or access to the wireless components. Wireless components require an additional set of security elements that are not used with purely wired networks; when properly implemented, these can make the wireless portions more secure than the wired portions of the network. Risks to the physical network could include sabotage or improper access by direct connection or wireless eavesdropping. Auditors need to verify that countermeasures are in place, including network traffic analyzers (packet sniffers) and encryption of data while in transit and in storage. Risks higher in the layers could include missing, inadequate, or
poorly patched firewalls or security holes due to inconsistent policies or implementation of policies. For example, this could be leaving default passwords on hardware such as routers or not replacing hardware that has known flaws. Another avenue of attack is with incompatible systems that do not allow the normal security configuration to be implemented.
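The layer 3/4 firewall controls mentioned above can be illustrated with a simplified packet filter; the rule format, addresses, and first-match/default-deny policy are illustrative assumptions rather than any particular product's behavior.

```python
import ipaddress

# Illustrative firewall rules, evaluated first-match-wins; anything unmatched is denied.
RULES = [
    {"action": "allow", "port": 443, "source": "any"},         # HTTPS from anywhere
    {"action": "allow", "port": 22,  "source": "10.0.0.0/8"},  # SSH from internal only
]

def filter_packet(source_ip, dest_port):
    """Return True if a packet is allowed, applying rules in order."""
    for rule in RULES:
        if rule["port"] != dest_port:
            continue
        if rule["source"] == "any":
            return rule["action"] == "allow"
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"]):
            return rule["action"] == "allow"
    return False  # default deny: anything not explicitly allowed is dropped

print(filter_packet("203.0.113.9", 443))  # True  (HTTPS allowed from anywhere)
print(filter_packet("203.0.113.9", 22))   # False (external SSH blocked)
```

An auditor reviewing such a ruleset would check that the default is deny, that internal-only services are not reachable from external addresses, and that the rules match written policy.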
Mainframes A mainframe computer is a large computer capable of supporting massive inputs and outputs and many concurrent users. The mainframe is the grandfather of business computers. Mainframes are powerful and are generally connected to a large number of terminals and peripheral devices, such as high-volume printers. They are primarily used for processing and storing large amounts of data. Most systems at organizations are not handled on mainframe computers due to cost considerations and the volume of transactions; instead, these are handled by either servers or cloud-based services. A data terminal, or dumb terminal, is an input/output node for a mainframe system consisting of either just a display and entry devices or a workstation running terminal emulation software. (It acts as if it has no processing capacity.) Mainframes were once the mainstay of business computing. Now, however, servers and clients have taken over the role of the mainframe and terminals for a large number of applications. The mainframe has evolved into a niche application, such as for handling actual transfers of funds for banks. Modern mainframes specialize in highly stable and reliable operations that can continue uninterrupted processing for long periods, which is achieved in part by containing redundant hardware and having strict backward compatibility with older operating system versions. For example, system maintenance, such as adding more hardware capacity, can occur while the mainframe continues normal processing. Mainframes are also capable of vast data throughput because they have extensive input and output hardware. Mainframes have high security, and the specialized nature of the operating systems and other features makes them difficult to hack into;
instances of successful attacks are rare. However, internal auditors should not make direct assumptions about the strength of a system’s security without sufficient technical assurance and validation (e.g., provided by IT auditors). Mainframes also allow running multiple operating systems on the same unit, so one mainframe can act as a set of virtual servers that can perform very different tasks. Controls associated with mainframes include locating them in a secure data center, with proper heating, venting, and air conditioning; electrostatic control; and properly trained system engineers. Other controls include automated log-off of inactive users and placing data terminals where they will not be left unattended. Internal auditors with sufficient technical expertise, or designated IT auditors, should prioritize assurance over mainframe controls through review, testing, and validation.
Auditing Hardware Some ways the auditor can evaluate hardware controls are: • Interviewing operators and users to obtain reliable information about equipment. • Determining what actions operators or software takes in the event of hardware malfunction. • Confirming oral statements by cross-checking against maintenance reports and error logs. • Checking temperature and humidity control devices to see that they are installed, functional, and adequate. • Reviewing failure logs to determine loss of time due to malfunction. • Reviewing daily and periodic computer logs and reports to determine whether maintenance schedules conform to manufacturers’ specifications. • Determining whether the timing of maintenance is appropriate. • Comparing actual downtime with normal expectations.
• Checking fire detection and suppression systems. It is critical that internal auditors understand that they must cross-check what is actually done by the organization against what it should be doing.
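One check from the list above, comparing actual downtime with normal expectations, can be sketched as a simple exception report; the monthly figures and the 10% tolerance are illustrative assumptions.

```python
def downtime_exceptions(downtime_hours, expected_max, tolerance=0.10):
    """Flag periods where actual downtime exceeds the expected maximum by more
    than the tolerance, cross-checking failure logs against expectations."""
    flagged = {}
    for month, hours in downtime_hours.items():
        if hours > expected_max * (1 + tolerance):
            flagged[month] = hours
    return flagged

# Illustrative downtime, in hours, drawn from (hypothetical) failure logs.
actual = {"Jan": 1.5, "Feb": 4.0, "Mar": 2.1}
print(downtime_exceptions(actual, expected_max=2.0))  # {'Feb': 4.0}
```

The point of such a cross-check is the closing sentence of this section: the auditor compares what the logs show actually happened against what the organization says should happen, and follows up on the exceptions.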
Topic B: Operational Roles of the Functional Areas of IT (Level B) Internal auditors must understand the IT environment to identify—and fully appreciate the roles and responsibilities of—the departments and individuals involved in IT activities. As explained in the Practice Guide “Management of IT Auditing,” second edition (previously Global Technology Audit Guide 4 [GTAG® 4]), IT has four layers: • IT management comprises the people, policies, procedures, and processes that manage the IT function. This includes system monitoring (to identify failures or other exception conditions), programming, planning to align IT resources and activities with the organization’s strategic goals and objectives, managing out-sourced vendors, and assuring IT governance. • Technical infrastructure refers to the systems involved in business processes: operating systems, databases, and networks. • Applications are programs that perform specific tasks related to business processes. They may be transactional or support applications. Transactional applications perform buy-side activities (e.g., procurement), sell-side activities (e.g., order processing), back-office activities (e.g., invoicing for payables, recording receivables), and enterprise resource planning, which integrates some of the other functions. Support applications include such software as email, imaging, and design tools. Standard application controls include input, processing, and output controls. • External connections include external networks, such as the Internet, EDI systems, and data warehousing providers.
IT Management and Organization The top level of managerial responsibility often lies with the chief information officer (CIO), who reports directly to the chief executive officer (CEO). The CIO is responsible for IT in relation to business strategy and
compliance. The CIO designs and maintains IT internal controls, IT resources, and IT metrics and determines which new IT to pursue. He or she manages an IT domain that includes a variety of functions, depending on the enterprise. Exhibit III-6 shows a generic chart for an organization’s IT area. Note that not all of the following positions are found in all organizations. Positions can be combined, and, if so, internal auditors need to verify that segregation of duties is appropriate. Exhibit III-6: IT Organizational Chart
Operations Operations supports all business units, with a focus on efficiency. The operations manager is responsible for capacity planning, or the efficient allocation of IT resources and the elimination of waste. The following functions are included in operations: • The help desk provides on-demand end-user assistance for IT issues. Providing a little training as part of the solution can reduce persistent system interaction errors by users. • The telecommunications network administrator programs telephones. • Web operations administers websites, extranets, and intranets.
• The change controller makes judgment calls as to whether to escalate an issue or to schedule it. A librarian holds the master versions of applications. • Data entry personnel format data for computer use. Systems should minimize manual entry by capturing data at the point of the transaction. • Each department will have end users with specialized job roles. Training is a key control to prevent input errors.
Technical Support Technical support keeps back-end systems functioning and trains end users: • The data center is a secure location where servers or mainframes are kept, including controls over electricity, HVAC, and physical access. • The information center is a centralized location for support staff, traditionally relating to end-user training and ongoing technical support. • The network/LAN administrator monitors and maintains a network on a daily basis, including monitoring network use. This operational role needs to be staffed by an IT expert with sufficient technical knowledge to keep the network operating correctly and with acceptable cybersecurity. Daily tasks involve installing and maintaining software and hardware for LANs, WANs, intranets, and/or Internet access. • The web administrator develops the company website, monitors it for inappropriate use by employees or others, and maintains appropriate bandwidth and availability. • User training may take place in computer classrooms with a “sandbox” environment or in an area in which applications can be used in a testing mode.
Data Database administrators (DBAs) are trained to design, implement, and maintain databases; set database policy; and train users. The DBAs help
auditors review raw data (e.g., finding payees named “CASH”). Data administrators monitor data use and set policies on how data can be stored, secured, retained for archives, and released. They plan for future data needs and oversee database design and data dictionary development.
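The "payees named CASH" review mentioned above can be sketched as a query against an in-memory SQLite database; the table layout and records are illustrative assumptions.

```python
import sqlite3

# Build an in-memory payments table with illustrative data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (id INTEGER, payee TEXT, amount REAL)")
db.executemany("INSERT INTO payments VALUES (?, ?, ?)", [
    (1, "Acme Supplies", 1200.00),
    (2, "CASH",          500.00),
    (3, "cash",          75.00),
    (4, "Beta Services", 310.50),
])

# An auditor's query: payments to generic payees that merit follow-up.
rows = db.execute(
    "SELECT id, payee, amount FROM payments "
    "WHERE UPPER(TRIM(payee)) = 'CASH' ORDER BY id"
).fetchall()
print(rows)
```

Normalizing the payee (upper-casing and trimming whitespace) matters here: a review that matched only the literal string "CASH" would miss variants entered in a different case.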
Systems and Application Development Systems development functions include systems analysts, programmers, and testers. Systems analysts determine the necessary system outputs and how to achieve these outputs, either by hardware/software acquisition, upgrade planning, or internal development. Programmers translate the systems analysts’ plans by creating or adapting applications. Categories include: • Application developers (end-user applications). • Systems developers (back-end systems and networking). • Web developers (web functionality, web-based applications). Testers test at the unit and system level. Programmers should not be used to test code that they have written themselves. Internal auditors need to stay alert to possible conflicts between systems and application development roles, which should be kept separate (segregation of duties).
IT Security and Quality IT security sometimes oversees other areas and external threats. Security staff enforce password and other security policies. They may also deal with business continuity. A quality assurance (QA) officer may be designated in some organizations to determine whether IT systems satisfy the needs of executives and end users. He or she may head a data quality audit, which tests all or a subset of the data for accuracy, integrity, and completeness.
Out-Sourced or Co-Sourced IT Out-sourcing or co-sourcing (partly out-sourced) IT is common, especially
for application development. If the software vendor is reputable, no further audit activity may be needed unless mandated by policy or law, but the vendor’s support services can aid the auditor’s understanding of the system. Managed security service providers (MSSPs) take on out-sourced security work, monitoring network activity for intrusions and running simulated attacks. Internal auditors will need to work with the provider to assess security risks.
IT Role of Senior and Non-IT Management IT governance begins at the top with the board of directors and key executives. Oversight, approval, and understanding of the basic infrastructure are the responsibilities of these parties. The Practice Guide “Information Technology Risks and Controls,” second edition (previously GTAG® 1) notes that an organization’s management layer has significant responsibility for and effect on IT policy, resources, and activities. • The board of directors approves enterprise strategies, in which IT plays an important role. The board must be aware of IT issues and projects and how they affect corporate strategies. Board committees play additional IT roles. Examples include the following: • To fulfill its governance responsibilities, the audit committee must ensure that appropriate financial reporting and ethics monitoring controls are in place and are assessed and tested adequately. • The compensation committee can reflect the importance of IT performance goals in the compensation packages it approves. • The governance committee must include oversight of IT activity and ensure board attention to IT oversight and compliance with external regulations. • The risk management committee must ensure that IT-related risks have been identified, assessed in terms of the enterprise’s risk appetite, and appropriately addressed. • The finance committee relies on IT for data used in preparing financial reports and making financial decisions, such as the replacement or repair of the IT system.
• Management implements enterprise strategies. It includes: • The chief executive officer (CEO), who defines objectives and metrics for IT, approves resources, directs issues to the board, and holds ultimate responsibility for the adequacy of IT controls. • The chief operating officer (COO), who ensures that the organization’s IT fits with the organization’s business plans and business model. • The chief financial officer (CFO), who must understand the role of IT in the enterprise’s financial management and who holds ultimate responsibility for IT controls related to financial systems and data. • The chief security officer (CSO), who is responsible for all security, including IT continuity planning. The CSO documents and enforces the security policy, is responsible for all external network connections and logical and physical security controls, and is involved in compliance, legal, and audit. • The chief information officer (CIO), the senior IT officer who assesses which technologies would add value to business processes and who ensures that they are implemented and integrated correctly to realize that benefit. • The chief information security officer (CISO), who works under the CSO and with the CIO to develop the IT security policy, control IT resources, and oversee IT security. The CISO aligns security with business objectives and risk and educates key executives on security. • The chief legal counsel (CLC), who helps set policy on information disclosures, advises on legal risks for IT, and checks financials. • The chief risk officer (CRO), who manages risks, including IT risk exposures, and measures how they relate to overall business risk. • The chief ethics officer, who looks at privacy issues and proper use of data. • The chief compliance officer, who oversees compliance within the organization by establishing compliance-related policies and procedures as well as monitoring activities to ensure compliance with laws, regulations, and so on.
• The chief technology officer, who explores new IT that may fulfill organizational needs. • The director of contingency planning/continuity planning, who oversees contingency planning. • The chief audit executive (CAE) and audit staff ensure that IT is included in the audit universe and annual plan, advise on the development of controls, provide objective auditing of all types of controls, and monitor the IT risk management plan. External auditors perform audits of the IT system and related controls in some circumstances, for example, as part of a detailed Sarbanes-Oxley Act (SOX) engagement or an internal controls over financial reporting (ICFR) engagement.
Topic C: The Purpose and Applications of IT Controls and IT Control Frameworks (Level B) The Internal Control—Integrated Framework of the Committee of Sponsoring Organizations of the Treadway Commission (COSO) defines an internal control as: A process, effected by an entity’s board of directors, management and other personnel, designed to provide reasonable assurance regarding the achievement of objectives relating to operations, reporting, and compliance.
A key control concept is that IT controls must provide continuous assurance for internal controls. A related concept is that auditors must provide independent assurance of this coverage. After describing some IT control objectives and placing IT controls in a system of classification, this topic discusses IT control frameworks in general and then gives several examples of common frameworks, including COBIT 5, eSAC, ISO/IEC 38500, the ISO 27000 series of standards, and ITIL. The IIA’s Practice Guides are also discussed at the end of the topic.
IT Controls Effective IT controls provide continuous assurance supported by a reliable and continuous trail of evidence. In addition, this assurance is itself assured through the internal auditor's independent and objective assessment of the control. According to the Practice Guide "Information Technology Risks and Controls," second edition (previously GTAG® 1), the goals of the IT controls and the control framework are to provide and document:
• Compliance with applicable regulations and legislation.
• Consistency with the enterprise's business objectives.
• Continuity with management's governance policies and risk appetite.
Control Objectives
IT internal control objectives include:
• Protecting assets/resources/owners' equity.
• Ensuring that information is available, reliable, and appropriately restricted.
• Holding users accountable for functions performed.
• Protecting customer privacy and identity.
• Providing support and evidence of employee job performance. (Employees can prove that they did the right things.)
• Maintaining data and system authenticity and integrity.
• Assuring management that automated processes are controlled.
• Providing an audit trail for all automated and user-initiated transactions.
Exhibit III-7 lists some indicators of effective IT controls.
Exhibit III-7: Indicators of Effective IT Controls
• Ability to execute and plan new work (e.g., IT infrastructure upgrades to support new products/services)
• Clear communication to management of key indicators of effective IT control
• Projects that come in on time and within budget, saving the organization time and resources and improving its competitive position
• Ability to protect against new threats and vulnerabilities and to recover from disruptions quickly and efficiently
• Ability to allocate resources predictably
• Efficient use of a customer support center or help desk
• Heightened security awareness throughout the organization
• Consistent availability of reliable information and IT services across the organization and with customers, partners, and other external interfaces
Source: Practice Guide “Information Technology Risks and Controls,” second edition.
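One of the IT control objectives listed earlier, providing an audit trail for all automated and user-initiated transactions, can be illustrated with a minimal sketch. This is a hypothetical Python example, not a mechanism prescribed by the Practice Guide; the record fields and the SHA-256 hash chaining used to make the trail tamper-evident are illustrative assumptions:

```python
import hashlib
import json

class AuditTrail:
    """Append-only transaction log. Each entry is chained to the
    previous one by a hash, so altering any record breaks
    verification of the chain from that point on."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # seed value for the first entry

    def record(self, user, action, details):
        entry = {"user": user, "action": action, "details": details,
                 "prev_hash": self._last_hash}
        # Hash the canonical JSON form of the entry, which includes
        # the previous entry's hash, to chain the records together.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute every hash; True only if the chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("user", "action", "details", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In use, every automated or user-initiated transaction would call `record()`, and an auditor (or an automated detective control) would call `verify()` to confirm that the evidence trail is complete and unaltered.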
Control Classification "Information Technology Risks and Controls" describes a hierarchy of controls that affect an organization from the top down, encompassing governance, management, and technical controls. This hierarchy is depicted in Exhibit III-8. Exhibit III-8: Hierarchy of IT Controls
Source: Practice Guide “Information Technology Risks and Controls,” second edition.
• Policies are IT governance controls. Governance controls are oversight rather than performance controls; they rest with the board of directors and its committees, such as the audit committee, in consultation with executives. Examples include setting security policies about the use of IT throughout the organization, including privacy, ownership, level of autonomy to create and use applications, and measures to assure business continuity. These policies must be approved by management (and the board of directors, as appropriate) and communicated throughout the organization to set the "tone at the top" and expectations. They also need to be monitored using metrics and evaluated. An organization may have a technology steering committee consisting of IT, key business functions, and internal audit. The committee prioritizes user technology requests given limited resources.
• Management controls occupy the next three levels and focus on identifying, prioritizing, and mitigating risks to the organization, its processes and operations, its assets, and its sensitive data. Such controls have a broad reach over many organizational areas, requiring collaboration between executives and the board. They include:
• Standards for systems development processes (both those developed internally and those acquired from vendors), systems software configuration, applications controls, data structures, and documentation.
• Organization and management of lines of responsibility and reporting, incorporating separation of duties as appropriate, financial controls for IT investment, IT change management, and personnel controls.
• Physical and environmental controls to mitigate risks from hazards such as fire or unauthorized access.
• Technical controls form the remaining three levels and are the foundation of almost all other organizational IT controls. Technical controls are the specific controls that must be in place for management and governance controls to be effective. Automated technical controls implement and demonstrate compliance with policies. Technical controls include:
• Systems software controls such as those controlling access rights, enforcing division of duties, detecting and preventing intrusion, implementing encryption, and managing change.
• Systems development controls such as documentation of user requirements and confirmation that they have been met, a formal development process that incorporates testing, and proper maintenance.
• Application-based controls that ensure that all input data is accurate, complete, authorized, and correct and is processed as intended; all stored and output data is accurate and complete; and all data processes are tracked from input, through storage, to eventual output.
Controls may be classified in other ways, for example, according to the way they are viewed throughout the organization. Exhibit III-9 classifies controls by different perspectives. Exhibit III-9: Control Classifications
Source: Practice Guide “Information Technology Risks and Controls,” second edition.
Since governance, management, and technical controls were addressed above, the other two sides of the cube are addressed in relation to IT next.
• General controls and application controls
• A general control applies generally to the IT environment or the overall mix of systems, networks, data, people, and processes (the IT infrastructure). The use of an IT control framework requires implementing a general control framework such as the COSO Internal Control—Integrated Framework.
• An application control is related to the specific functioning (inputs, processing, outputs) of an application system that supports a specific business process. Balancing of process totals is an example.
• Preventive controls, detective controls, and corrective controls
• Preventive controls are designed to stop errors or fraud before they occur. Examples include using a firewall or a drop-down menu or assigning access privileges by job role.
• Detective controls are triggered after an error (an exception condition) occurs, e.g., automated flagging of inactive users or review of exception reports for completed transactions to detect credit limit overrides.
• Corrective controls are used once errors, fraud, or other control issues have been detected. They need their own preventive and detective controls to ensure that the process isn't corrupted. Corrective controls range from automated error corrections to business continuity plans.
In addition, controls may be directive to one degree or another, perhaps prescribing particular actions or prohibiting particular behaviors. Other controls will specify the result to achieve without specifying the means.
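The preventive and detective controls described above can be sketched in application code. This is a hypothetical Python illustration; the role names, privileges, and the 90-day inactivity rule are invented for the example, not taken from the Practice Guide:

```python
from datetime import date, timedelta

# Preventive control: access privileges are assigned by job role, so an
# unauthorized action is stopped before it occurs.
ROLE_PRIVILEGES = {
    "clerk": {"enter_invoice"},
    "supervisor": {"enter_invoice", "approve_invoice"},
}

def authorize(role, action):
    """Raise PermissionError if the role lacks the privilege."""
    if action not in ROLE_PRIVILEGES.get(role, set()):
        raise PermissionError(f"{role} may not perform {action}")

# Detective control: an exception report, produced after the fact, that
# flags user accounts with no activity in the last 90 days for review.
def flag_inactive_users(last_login_by_user, today, limit_days=90):
    cutoff = today - timedelta(days=limit_days)
    return sorted(user for user, last_login in last_login_by_user.items()
                  if last_login < cutoff)
```

For example, `authorize("clerk", "approve_invoice")` would raise `PermissionError` (the error never occurs), while `flag_inactive_users({"amy": date(2024, 1, 2), "bo": date(2024, 6, 1)}, today=date(2024, 6, 30))` returns `["amy"]` for follow-up review. A corrective control would then act on that report, for example by disabling the flagged accounts.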
Control Frameworks According to “Information Technology Risks and Controls,” a control framework is an outline that identifies the need for controls but does not depict how they are applied. IT control frameworks are internal control systems that help managers set IT control objectives, link IT to business processes and overall control frameworks, identify key IT areas to leverage, and create a process model that logically groups IT processes. Control frameworks help determine the appropriate level of IT controls within the overall organizational controls and ensure the effectiveness of those controls. Why are control frameworks needed? Managers need assurance that their IT processes are contributing to business objectives and competitive advantage. The organization needs assurance that it is resilient because it can mitigate risks of fraud or cyber attacks. Stakeholders need to know that the organization can be trusted. One way to gain such assurance is for management to increase their understanding of IT operations without getting bogged down in the increasingly complex execution details. Breaking systems down into understandable processes helps managers combine business with IT strategy, align organizational structures, and set performance goals and metrics. Control frameworks provide a methodology for seamlessly linking objectives to requirements and requirements to actual performance. A process model breaks IT down into easy-to-understand activities organized around the control objectives to be achieved and identifies resources to be leveraged. Control frameworks provide a foundational structure upon which effective regulatory compliance can be reasonably addressed and assured, such as for the U.S. Sarbanes-Oxley Act or the U.S. Health Insurance
Portability and Accountability Act (HIPAA). Use of standardized, well-accepted frameworks means that there is a body of literature available for guidance and that users can benchmark against the standards or against competitors using similar methods. IT controls need to be everyone's responsibility, and the framework should clearly communicate specific roles. IT controls should provide a "defense in depth," meaning that multiple layers of controls reduce the likelihood of a control failure.
Selecting an IT Control Framework Selecting an IT control framework involves deciding which model will benefit the entire organization, since the model will be used by a large number of employees with control responsibilities. Frameworks are generalized for broad appeal, but no framework encompasses all business types or all IT. "Information Technology Risks and Controls" states that each organization should "examine existing control frameworks to determine which of them—or which parts—most closely fit its needs." Control frameworks can be formal, as discussed in this topic, or informal, meaning that they are not written down but are communicated verbally and through action. Informal systems are not appropriate once an organization has moved past the earliest stages of organizational development, and satisfying regulatory requirements requires the use of formal approaches. The CAE should work with management to select a framework or portions of several frameworks. Any model, once selected, must be customized. Properly understanding risks is a prerequisite for selecting a control framework. The CAE should determine the organization's risk appetite, defined by COSO as: The degree of risk, on a broad-based level, that a company or other organization is willing to accept in pursuit of its goals.
Risk appetite is paired with risk tolerance, also defined by COSO: The acceptable level of variation relative to the achievement of objectives. In setting specific risk tolerances, management considers the relative importance of the related objectives and aligns risk tolerances with its risk appetite.
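The COSO notion of risk tolerance, the acceptable level of variation relative to the achievement of objectives, can be expressed as a simple check. The following sketch is purely illustrative; the metric names, objective values, and tolerance percentages are invented for the example:

```python
def within_tolerance(objective, actual, tolerance_pct):
    """True if the actual result deviates from the objective by no
    more than the stated tolerance (the acceptable variation)."""
    deviation_pct = abs(actual - objective) / objective * 100
    return deviation_pct <= tolerance_pct

# Hypothetical tolerances management might set for two objectives,
# both measured in percent:
checks = {
    # availability deviates ~0.4% from target, inside the 1% tolerance
    "availability": within_tolerance(objective=99.5, actual=99.1,
                                     tolerance_pct=1.0),
    # delivery deviates ~11% from target, outside the 5% tolerance
    "on_time_delivery": within_tolerance(objective=90.0, actual=80.0,
                                         tolerance_pct=5.0),
}
```

The point of the sketch is that risk tolerance only becomes operational once each objective has a measurable target and an explicit band of acceptable variation around it; results outside the band should trigger escalation consistent with the organization's risk appetite.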
The COSO Internal Control—Integrated Framework was updated in 2013. It is widely used in the U.S. among public companies to provide a structured approach to achieving compliance with financial reporting provisions, such as Sarbanes-Oxley. Its main features from a technology perspective, also shown in Exhibit III-10, are:
• Monitoring (e.g., metrics, cost and control performance analysis, internal audit).
• Information and communication (e.g., IT performance surveys, help desks, IT and security training, internal corporate communication).
• Control activities (e.g., review board for change management, analysis of return on IT investment, enforcement of IT standards, assessment of compliance with business continuity risk assessment).
• Risk assessment (e.g., assessment of IT risks and inclusion in corporate risk assessment, IT internal audit assessment, IT insurance assessment).
• Control environment (e.g., management support of IT control environment, overall policies, corporate technology governance committee, technology and architecture standards committee).
Exhibit III-10: COSO Model for Internal Control Frameworks
Source: Practice Guide “Information Technology Risks and Controls,” second edition.
The following are examples of common frameworks.
COBIT 5® "COBIT 5: A Business Framework for the Governance and Management of Enterprise IT" (hereafter called the COBIT 5 framework) is a family of products developed by ISACA and available at its website, www.isaca.org. Version 5 was released in 2012. It helps management understand the role of IT and its place in organizational strategy, it helps users be more satisfied with IT security and outcomes, and it sets clear lines of responsibility. It also helps managers create more value from IT resources, meet regulatory compliance, and control IT risks by providing enhanced risk awareness so that informed risk decisions can be made. In addition to the framework document, the COBIT 5 family of products includes published guidance related to enabling processes (these are defined later) and other types of professional guidance such as an implementation guide. There is also an online collaborative environment for networking and group problem solving. The COBIT 5 framework is built on a generic set of five key principles and seven enablers that can be adapted for use by any size or type of organization to set and achieve separate governance and management objectives for its information systems. Since the enablers are referred to in each of the five key principles, the seven enablers are listed here first, in Exhibit III-11. Exhibit III-11: COBIT 5 Enablers
Exhibit III-12 illustrates the five key principles that form the COBIT 5 framework. Each key principle is explained next.
Exhibit III-12: COBIT 5 Principles
Source: “COBIT 5: A Business Framework for the Governance and Management of Enterprise IT.” © 2012 ISACA. All rights reserved. Used with permission.
• Principle 1: Meeting stakeholder needs. Stakeholder needs drive value creation in an organization. Since the objective of governance is the creation of value, governance defines value creation as the realization of the benefits expected by stakeholders while optimizing the use of resources and the management of risks. The needs of stakeholders often conflict, such as shareholders' need for profit versus regulators' or society's need for environmental sustainability. Therefore, the COBIT 5 framework promotes governance as a process of negotiating among stakeholders' value interests and then deciding how best to create optimum value for stakeholders overall. Also, since this is a generic framework, what constitutes value for stakeholders may differ considerably, such as between for-profit and not-for-profit organizations. To help organizations define value, the COBIT 5 framework includes a goals cascade, essentially a set of tables that start with a set of 17 generic enterprise goals, for example, financial transparency. Organizations select from among these generic goals, which cascade down to 17 IT-related goals, for example, transparency of IT costs, benefits, and risk, which in turn cascade down to a set of enabler
goals. Enabler goals are the goals for COBIT 5’s enabling processes, such as people, skills, and competencies. The point is to translate stakeholder needs and the derived governance goals into priority-weighted IT goals and from there to easily implementable processes, policies, and procedures. • Principle 2: Covering the enterprise end-to-end. The second principle is that IT governance must be wholly and completely part of the organization’s overall governance and internal control framework. The COBIT 5 framework integrates the most current governance models and concepts. It also applies to processes that have been out-sourced or are part of an extended enterprise of partners in a supply chain. Because the seven enablers are organization-wide in scope, focusing on each of them allows governance to be top-to-bottom and end-to-end. The last part of this principle involves defining governance roles as well as relationships and activities. Owners or shareholders delegate to a governing body such as the board, who sets the direction for management, who provide instruction to operations so that it remains aligned to stakeholder goals. Each relationship also includes a feedback process of reporting, monitoring, and accountability. • Principle 3: Applying a single integrated framework. The COBIT 5 framework is designed to integrate seamlessly into other governance frameworks to provide a single source of organizational guidance. It avoids getting into technical details and integrates all guidance from prior ISACA publications and is designed to integrate with other governance frameworks, such as ISO/IEC 38500, described below. • Principle 4: Enabling a holistic approach. The seven enablers are used to implement each goal determined using the goals cascade. The first enabler, “principles, policies, and frameworks,” is central, because these provide practical guidance on how to shape desired behavior by doing specific management activities. 
The processes; organizational structures; and culture, ethics, and behavior enablers are governance-directed management organizing activities that help ensure successful adoption of the principles, policies, and frameworks. Governance direction over culture, ethics, and behavior is a critical success factor in achieving goals, although the influence of these factors is often underestimated. The
remaining enablers (information; services, infrastructure, and applications; and people, skills, and competencies) are resource management enablers that support the basic principles and framework. These enablers are interconnected and rely on one another to succeed. For example, processes need proper information, skills, and behavior to make them effective and efficient. The COBIT 5 framework has a set of enabler dimensions that ensure that each of the following is considered for each enabler:
• Does measurement of leading indicators (predictive metrics) show that the proper inputs, practices, and outputs are being followed?
• Does measurement of leading indicators show that the proper system development life cycle is being used (e.g., feedback is incorporated)?
• Does measurement of lagging indicators (historical metrics) show that internal and external stakeholder requirements were met?
• Does measurement of lagging indicators show achievement of enabler goals (e.g., quality, efficiency, effectiveness, security, accessibility)?
• Principle 5: Separating governance from management. The governance body of an organization, typically its board of directors, needs to see itself as a separate discipline from the management of the organization. The COBIT 5 framework outlines five governance processes and 32 management processes that are developed in detail in a supporting document, "COBIT 5: Enabling Processes." For each governance process, the key roles are to evaluate, direct, and monitor. Governance processes include ensuring that the governance framework is in place and maintained, stakeholder benefits are delivered, risk responses and resource use are optimized, and transparency exists. The management processes are divided into the following categories that reflect a cyclical set of management roles:
• Align, plan, and organize. Processes include managing strategy, systems infrastructure, risk, security, human resources, and relationships.
• Build, acquire, and implement.
Processes include project and change management, defining requirements, identifying and building solutions, and managing configuration, changes, knowledge, and assets.
• Deliver, service, and support. Processes include managing operations, incidents and problems, continuity, security, and process controls.
• Monitor, evaluate, and assess. Processes include monitoring, evaluating, and assessing performance and conformance, the control infrastructure, and compliance with external requirements.
The COBIT 5 framework and family of products, taken as a whole, can help organizations get the best value for their investments in IT by finding the optimum balance between achieving stakeholder benefits, effectively managing risks, and efficiently managing resource usage.
Electronic Systems Assurance and Control (eSAC) The IIA's Electronic Systems Assurance and Control (eSAC) model was designed and published in 2001 to allow auditors to express opinions on the reliability of information created by IT. This framework is a risk-assessment-based, process-oriented methodology. eSAC facilitates communications between auditors, the board, and other audit clients. eSAC starts with strategic inputs and ends with measurable results, enhanced reputation of the firm, and opportunities for improvement. The center of the model is COSO's broad control objectives (e.g., safeguarding of assets), followed by IT business assurance objectives:
• Availability—Transactions can be performed at all times.
• Capability—Transactions are reliably completed in a timely manner.
• Functionality—Systems are user-friendly and responsive and fulfill all business requirements.
• Protectability—Unauthorized access is denied through logical and physical security controls.
• Accountability—Data is nonrefutable, accurate, and complete.
Each specific process is related to one or more of these objectives and to the building blocks of people, technology, processes, investment, and communication. The model also covers internal and external forces, or the
risks and control environment, as well as their maturity, or how quickly such relationships change and evolve. Finally, monitoring and oversight is key.
ISO/IEC 38500 ISO/IEC 38500:2015, "Information technology—Governance of IT for the organization," is an international standard framework document that provides top management, boards of directors, and other owners with a set of guiding principles to ensure that IT at their organizations is acceptable, effective, and efficient. It specifically relates to management processes and decisions regarding information systems, regardless of whether the actual processes are carried out as internal activities or are out-sourced. ISO/IEC 38500 also provides guidance for senior managers; controllers and other resource managers; legal, accounting, and other business specialists; hardware, software, and communications suppliers; consultants and other internal or external service providers; and IT auditors. This guidance is designed to help these business professionals provide better advice and insight.
ISO 27000 Series The ISO 27000 series of standards relates to information security management systems (ISMS). An ISMS is a systematic framework for ensuring that sensitive organizational information remains secure. The series applies a risk management process to information security. ISO 27001:2013 sets the requirements for an ISMS to ensure that the system is appropriate for the organization, is established correctly, and is maintained and continually improved to stay relevant. It provides a code of practice for information security controls to help organizations select and implement those that are relevant to them and also develop customized information security management guidelines. The standard includes control objectives, individual controls, and security control clauses in the areas of information security policies; human resource security; asset management; access control; cryptography; physical and environmental security; operations security;
communication security; system acquisition, development and maintenance; and supplier relationships. There are numerous other standards in this family that relate to specialized areas such as ISMS auditing (ISO 27007), network security, application security, and so on.
ITIL ITIL 2011 offers a five-tiered certification. It was formerly called the IT Infrastructure Library (ITIL) but now goes by the acronym alone. ITIL is a framework for management of IT as a portfolio of out-sourced services using service level agreements (SLAs) and ongoing processes for monitoring and controlling availability, capacity, configurations, issues or problems, patches, change management, and so on. It addresses the concept and life cycle of IT service management, from service strategy and design to operations and continuous improvement.
IIA Practice Guides The IIA’s Practice Guides (formerly GTAGs®) are not control frameworks, but they can help in selecting the proper framework for an organization. The Practice Guide “Information Technology Risks and Controls,” second edition (previously GTAG 1), covers IT controls as an executive needs to understand them, including organizational roles and structure and how the IT controls fit within the overall control framework. The other GTAG documents cover specifics such as change and patch management controls. These guides contain advice for set-up, management, and measurement of application-level controls. The GTAG documents can be used to create a unique framework or to supplement an existing one. One example of a tool that can be used to plan for sufficient audit coverage is the CAE checklist shown in Exhibit III-13. Studying the questions CAEs should raise for each of the actions listed shows how a general risk-based framework would be customized for each organization. For further study, the Practice Guides can be found at https://na.theiia.org/standards-guidance/recommended-guidance/practice-
guides/Pages/Practice-Guides.aspx. Exhibit III-13: IT Control Framework Checklist
Source: Practice Guide “Information Technology Risks and Controls,” second edition.
Chapter 3: Disaster Recovery and Business Continuity Chapter Introduction An important risk management consideration for an organization is to have a plan in place to deal with crises and disasters as they arise. Having a plan in advance helps to mitigate losses as effectively as possible. Crisis management plans incorporate plans to deal with the immediate crisis and stakeholder communication as well as longer-term plans to ensure the continuity of the organization. Some organizations may refer to these as disaster recovery (DR) and business continuity management (BCM).
Topic A: Disaster Recovery Planning Concepts (Level B) A disaster recovery plan indicates the who, where, when, and how of restoring systems and processes after an organization suffers an outage or a disaster so that critical systems are prioritized and other systems are restored in a logical and efficient order. A crisis could include events such as the unexpected death of a CEO or product tampering. Such interruptions can have significant financial and operational ramifications. Crises distract attention from the status quo of operating the business and have the potential for productivity and profitability losses and reduced stakeholder confidence. Auditors should evaluate the organization’s readiness to deal with such business interruptions. The Practice Guide “Business Continuity Management” (previously GTAG® 10) defines business continuity management (BCM) as a “process by which an organization prepares for future incidents that could jeopardize the organization’s core mission and its long-term viability.” Business continuity is made up of enterprise-level and end-to-end solutions, from design and planning to implementation and management, with the focus on being proactive. To ensure that an organization can remain functional during and after disasters, it must have a plan for continued operation. A business continuity plan is a set of processes developed for the entire enterprise, outlining the actions to be taken by the IT organization, executive staff, and various business units in order to quickly resume operations in the event of a business disruption or service outage. 
A comprehensive plan would provide for emergency response procedures, stakeholder communications protocols, alternative communication systems and site facilities, information systems backup, disaster recovery, interim financing, insurance claims, business impact assessments and resumption plans, procedures for restoring utility services, and maintenance procedures for ensuring the readiness of the organization in the event of an emergency or a disaster. Internal auditors can play two distinct roles:
• Contributing to effective risk management and controlling enhancement efforts for the organization through proactive and responsive assurance and consulting services before disaster strikes • Evaluating the efficiency and effectiveness of function and control system restoration in the aftermath of a risk event
Internal Audit's Role Before a Disaster The internal auditor's role during normal operations is to determine whether the organization could survive a disruption of business or IT and how well it is equipped to mitigate the effects of a disaster. One of the questions for which the auditor will be seeking answers is "How well can the organization function when access to information systems has been disrupted?" The answer varies considerably with the type of organization. Stock brokerage, for instance, is difficult without computer, phone, and network access. Retail outlets may be less dependent upon continuous access to information systems. Other questions the auditor will be concerned with are:
• Is there a disaster plan in place?
• What is the organization's current disaster capacity?
• Have the critical applications been defined?
• Does the disaster plan provide for all contingencies, for instance, fire, earthquake, floods, or water damage from leaks or activated sprinklers?
• Has the plan been tested?
• Are the backup facilities adequately equipped and readily available?
The answers to these questions will determine whether the organization is well prepared and, if it isn't, what it can do to improve the situation. The internal auditor should observe the off-site testing process and realistically consider any gaps that may result in technical issues and
potentially delay start-up.
Keeping Plans Up-to-Date The internal auditing activity should assess the organization's business continuity management process on a regular basis to ensure that senior management is aware of the state of disaster preparedness. To support an organization's readiness to deal with business interruptions, the internal audit activity can:
• Assist with the risk analysis.
• Evaluate the design and comprehensiveness of the plan after it has been drafted.
• Perform periodic assurance engagements to verify that the plans are kept up-to-date.
• Observe and provide feedback on tests of the plan.
Because business continuity and disaster recovery plans can become outdated quickly (due to turnover of managers and executives and changes in system configurations, interfaces, software, and the like), such audits should provide assurance that the plans are not outdated. The audit objective is to verify that the plans are adequate to ensure the timely resumption of operations and processes after adverse circumstances and that they reflect the current business operating environment.
Internal Audit’s Role After a Disaster An organization is extremely vulnerable after a disaster occurs and it is trying to recover. Internal auditors have an important role during the recovery period. Internal auditors should monitor the effectiveness of the recovery and control of operations. The internal audit activity should identify areas where internal controls and mitigating actions should be improved and recommend improvements to the entity’s business continuity plan.
The internal audit activity should participate in the organizational learning process following a disaster. After the disaster, usually within several months, internal auditors can assist in identifying the lessons learned from the disaster and the recovery operations. Those observations and recommendations may enhance activities to recover resources and update the next version of the business continuity plan. The CAE determines the degree of the internal auditors’ involvement in assurance regarding disaster recovery and business continuity management processes. Management may request consulting services in these areas.
Best Practices of IT Contingency Planning Since organizations are becoming increasingly dependent upon access to information systems, business continuity planning must include IT contingency planning as part of the overall information systems security package. The goal of IT contingency planning is to mitigate the business risk of a mission-critical functional failure caused directly or indirectly by hardware, software, an embedded device, a purchased package, a vendor or supplier, or an external interface or environment. Business interruptions can be accidental, or they can be deliberate acts. In either case, not having an IT contingency plan risks the loss of business continuity and possibly the demise of the organization. From an IT perspective, an IT contingency plan within a BCM framework entails a system of internal controls for managing the availability of computer and other resources and data after a processing disruption. It would include: • Regaining access to data (e.g., records, applications) and equipment. • Reestablishing communications (e.g., email, phone). • Locating new workspace. IT contingency planning consists of forming and testing a plan, incident handling, and disaster recovery. Incident handling includes procedures for dealing with a problem as it is occurring; disaster recovery includes procedures for restoring business processes in the order of their priority.
Exhibit III-14 illustrates the model for the BCM process from the “Business Continuity Management” Practice Guide. Note that IT contingency planning is integral to several steps in this process. Exhibit III-14: BCM Process
Source: Practice Guide “Business Continuity Management.”
The BCM process encompasses the following steps: • IT management gains commitment from senior management to ensure material and organizational support for the plan. • Probable high-impact events (e.g., natural disasters, employee errors, fraud, computer virus or denial-of-service attack) are identified by the organization, and mitigation strategies are developed. • A business impact analysis (BIA) is conducted to: • Identify and define critical business processes. • Define recovery time objectives (RTOs) and recovery point objectives (RPOs) for processes, resources, and so on. • Identify resources and partners that can assist in recovery. • A recovery and continuity plan is developed that defines alternative
sources for staff, functions, and resources and identifies alternative locations for operations (e.g., alternative information networks, backup data centers). • The BCM program is communicated throughout the organization, and all employees are trained in crisis procedures and communications strategies. BCM plans and staff performance are tested.
Developing an IT Contingency Plan IT contingency planning begins by creating a contingency planning team. Contingency plans cannot be the responsibility of just one individual, but the team must have a project leader whose responsibilities include orchestrating the plan document and explaining the plan to management. A contingency plan must delegate specific responsibilities and roles to those who are closest to the associated risks. The team should have adequate authority and also visibility, meaning that plan development is communicated clearly. The process of developing a contingency plan document can be out-sourced, or it can be developed in-house, but on-site employees must be used for actual incident handling and therefore need training. Whether the plan is out-sourced or developed in-house, management must take full ownership and accountability for it, with oversight by the designated organizational group such as the board or its audit committee.
Setting Objectives and Determining Risks IT contingency planning must be integrated with IT systems planning methodologies and overall BCM and risk management frameworks. Management must be educated on effective disaster recovery procedures. Contingency plans start with a risk assessment and follow with a business impact analysis or assessment, which may be performed as part of the risk management framework. For each risk, a probability is assessed as well as the impact it would have on each separate facility, line of business, IT system, and so forth.
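The probability-times-impact scoring described above can be sketched in a few lines of code. The risks, probabilities, and dollar impacts below are invented purely for illustration:

```python
# Hypothetical illustration of the risk-assessment step: each risk gets a
# probability and an impact estimate, and the product gives an exposure
# score used to rank mitigation priorities in the business impact analysis.

risks = [
    {"risk": "Data center flood", "probability": 0.02, "impact": 5_000_000},
    {"risk": "Ransomware attack", "probability": 0.10, "impact": 2_000_000},
    {"risk": "Key server hardware failure", "probability": 0.25, "impact": 300_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Rank highest exposure first to drive restoration priorities.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["risk"]}: expected annual loss ${r["exposure"]:,.0f}')
```

Each line of business, facility, and IT system would be scored the same way, producing the ranked list that the business impact analysis then maps to recovery time objectives.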
Determining Systems Relevance and Setting Risk-Based Priorities After determining risks, the order of restoration of services and the level of protection for each service are worked out. Each category has an acceptable downtime and a cost associated with that downtime. This becomes the organization’s benchmark data. Each organization will have different priorities or categories, such as: • Critical systems—Telecommunications and core processes such as payroll, order processing, invoicing, and shipping. • Vital systems—Finance (accounts receivable, accounts payable, general ledger), customer service. • Sensitive systems—Payroll, end-user data restoration. • Noncritical systems—Human resources, budgeting, purchasing. When making a plan, organizations combine the risks ranked by severity and likelihood with their restoration priorities. Each type of disruption has an appropriate response. Events that endanger employees may require employee evacuation plans, with row or area leaders to ensure that everyone remains safe and no one is left behind. Part of the plan is to provide adequate business interruption insurance to cover operational losses (opportunity costs of lost work and sales) and adequate equipment and property insurance to cover physical losses. Evidence of data backup and recovery controls and business continuity plans will likely reduce insurance costs. Recovery methods include redundant systems at multiple sites, identifying and training backup staff in other parts of the organization who can perform critical functions, and out-sourcing critical IT processes (including staffing). In terms of IT components that may need to be replaced, “Business Continuity Management” lists the following. • IT systems: • Data center
• Applications and data needed by the enterprise • Servers and other hardware • Communications devices • Networks, including third-party networks • IT infrastructure (e.g., log-on services, software distribution) • Remote access services • Process control systems used in manufacturing, such as supervisory control and data acquisition (SCADA) or a distributed control system (DCS) • Information management systems: • File rooms • Document management systems (both electronic and manual) Recovery strategies must meet the business’s needs; they must be complete and elements must work together, leaving no significant gaps and allowing access to all users. The goal is to find the best and most cost-effective solution for each affected system—even if the solution is unconventional. Off-site storage and libraries are used for all data, operating systems, documentation, etc. Such sites may not be available for resuming operations, so organizations usually arrange a space for operations to resume. The following are types of off-site facilities: • A hot site is a dedicated location that is kept fully stocked with the hardware needed for operations configured to specifications. Hot sites will not have the organization’s data, so the first step is to load the most current backup from off-site storage. Hot sites can be fully functional within 24 hours after a business interruption. Fixed hot sites require the firm to relocate; portable sites deliver a trailer where needed. • A warm site provides many of the same services and options as a hot site, but it frequently does not include the actual applications that a company needs. For example, a warm site may include computing equipment and servers but not client workstations.
• A cold site is a space that has no computers but is set up and ready to become a data center, including raised flooring and specialized heating, ventilation, and air conditioning (HVAC). The organization is responsible for providing computers. Cold sites can take days to go online. • A reciprocal agreement can be made with one or more organizations to share resources if one party suffers a failure. Auditors must ensure that all parties stay technically synchronized. • A time-share is a continuity strategy in which an organization co-leases a backup facility with a business partner organization. This allows the organization to have a backup option while reducing overall costs. Such services have a cost, so recovery priorities may require noncritical systems to use temporary manual workarounds. For example, a hot site may be used while a cold site is being prepared. Often, management decides not to mitigate a particular type of risk at all because the cost of mitigation exceeds the estimated loss or the likelihood of occurrence is extremely low.
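The cost-based mitigation decision noted above (accepting a risk when every recovery option costs more than the expected loss) can be sketched as follows; the systems, site options, and figures are all hypothetical:

```python
# Sketch of the mitigation decision: pick the cheapest recovery option that
# costs less than the expected annual loss; otherwise accept the risk.
# All names and dollar amounts are invented for illustration.

expected_annual_loss = {"order processing": 400_000, "HR reporting": 20_000}
annual_option_cost = {"hot site": 250_000, "cold site": 60_000}

def choose_option(system, acceptable_options):
    """Return the cheapest viable option, or 'accept risk' if none is viable."""
    loss = expected_annual_loss[system]
    viable = [(cost, name) for name, cost in annual_option_cost.items()
              if name in acceptable_options and cost < loss]
    return min(viable)[1] if viable else "accept risk"

print(choose_option("order processing", {"hot site", "cold site"}))
print(choose_option("HR reporting", {"hot site", "cold site"}))
```

For the order-processing system the cold site is the cheapest option below the expected loss; for the low-impact HR system, no option is cheaper than the loss, so the risk is accepted, mirroring the management decision described in the text.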
Documenting the Plan An IT contingency plan has several components: • A clear and simple introduction • A list of team responsibilities and emergency contact information • Backup schedules and locations of off-site backups • An escalation procedure for problems • Action plans, including recovery time frames, recovery strategy, and subplans for hardware, software, networking, and telecommunications • Insurance documentation An out-of-date action plan with incorrect phone numbers or hot sites that haven’t been informed of a necessary hardware upgrade can be entirely ineffective, so the plans must have an owner who is responsible for keeping them current. Plan documents contain confidential information, and therefore
appropriate access controls should be considered.
Testing the Plan According to “Business Continuity Management,” a testing plan should include the following elements: • Tests should be held at periodic intervals, set by the BCM steering committee and based on business goals and objectives. Intervals will vary according to the nature of the business activities. Most organizations test plans one or two times a year, but testing might be more frequent based on: • Changes in business processes. • Changes in technology. • Changes in BCM team membership. • Anticipated events that could result in business interruption (e.g., an anticipated pandemic). • Tests should address a variety of threats/scenarios and different elements within the BCM plan (i.e., broad-based exercises or targeted site or component exercises). • A method should be established for identifying performance gaps and tracking their successful resolution. Exhibit III-15 describes some types of BCM tests and their characteristics.
Exhibit III-15: Types of BCM Tests
• Desk check or plan audit—Written plan is reviewed in detail and updated. Involves only the plan owner and an objective assessor; ensures relevancy and currency of the plan.
• Orientation or plan walkthrough—All BCM team members meet to review their roles in the plan; does not constitute a “test.”
• Tabletop exercise (boardroom-style exercise)—BCM team participates in a brief (two- to four-hour) simulation of a scenario; includes group self-assessment of the ability to meet exercise objectives, performance gaps, and planned remediation.
• Communication testing—Actual contact is established with all key stakeholders (as opposed to simply compiling a list of stakeholders to be contacted in case of a disaster). Helps validate stakeholders’ contact information, train participants in how to use mass communication tools, configure communication tools, and identify communication gaps/bottlenecks.
• IT environment (systems and applications) walkthrough—Participants walk through an announced or unannounced simulation and execute system recovery procedures. This type of test is a less costly and disruptive alternative to a full test; verifies that critical systems and data can be recovered; identifies the impact of the loss of multiple systems/applications; coordinates resources across multiple locations and lines of business; and ensures adequacy of resources.
• Alternate site testing—Participants test the ability to transfer staff to an alternate site, restore processes, and recover data, as designed. This type of test demonstrates the actual capacity of the alternate site; trains staff in processes and equipment at the site; identifies whether privacy and security can be maintained at the alternate site; and confirms the sufficiency and effectiveness of IT assets at the alternate site.
• End-to-end testing—All stakeholders participate, including IT, business partners, suppliers, and customers; demonstrates the ability to perform key processes at an agreed level.
Source: Practice Guide “Business Continuity Management.”
Internal auditors should regularly assess the IT contingency plans. The best evidence that contingency planning is working is a test of the plan. Internal auditors typically either observe plan testing and its results or review after-the-fact evidence of that testing. Either approach can provide assurance on the adequacy and effectiveness of plan testing, results, and
follow-up. The test should indicate the organization’s current disaster recovery capacity, or the time it takes to load all systems and data and get running again. Variance is determined by comparison to the organization’s benchmarks. The test should duplicate typical transaction volumes, and auditors should record processing times. This will lead auditors to ask questions such as “Was the replacement telecommunications system adequate?” Mainframes rarely restore correctly, even on identical hardware, so in such situations the auditor should measure progress toward the goal rather than the immediate result. Other tests are physical, such as a fire drill. The test results could be used to set realistic benchmarks or as a call for more resources to get to the desired benchmark.
Incident Handling/Disaster Recovery Determining the severity of a disaster is the first task of the employees in charge of incident handling. Employees follow their plan, starting by contacting all persons on the contact list and communicating the issue and what they need to do. Alternate workspaces or equipment are accessed if they are part of the plan. Organizations with a public presence should have a designated spokesperson who has guidelines on permissible communications with the press. In the aftermath of a disaster, internal auditors play a vital role in assessing what parts of the plan worked and what parts need to be revisited.
Topic B: The Purpose of Systems and Data Backup (Level B) The purpose of maintaining a systems and data backup process is to allow an organization to restore files and folders in case of data loss due to circumstances such as computer viruses, hardware failure, file corruption, theft, or natural disasters such as a fire or a flood. System-specific security policies (SysSPs) are organizational policies that often function as standards or procedures to be used when configuring or maintaining systems. SysSPs can be separated into two main groups: management guidance and technical specifications. The SysSPs can be written as a unified SysSP document.
Causes of Systems Failure When a DBMS fails, the data can become corrupt and the system may not function properly. Typical causes of a system failure include application program errors, end-user errors, operator errors, hardware faults, network transmission errors, environmental failures, and malicious attacks, among others. The four major types of system failures are: • Transaction failure. Transaction failures occur when a transaction is not processed and the processing steps are rolled back to a specific point in the processing cycle. In a distributed database environment, a single logical database may be spread across several physical databases. Transaction failure can occur when some, but not all, physical databases are updated at the same time. • Systems failure. Bugs, errors, or anomalies in the database, operating system, or hardware can cause a systems failure. In each case, transaction processing is terminated without control of the application. Data in memory may be lost, though data stored on disk may remain intact. Systems failures may occur as frequently as multiple times per week.
• Communications failure. As systems have advanced to global networks that are constantly interconnected, successful transfer of information is of utmost importance. Uninterrupted transfer is critical to the reliability, integrity, and completeness of information, particularly financial information; the loss of transactional activity in the financial environment could mean substantial losses to investors. • Media failure. A media failure could be a disk crash or controller failure, which could be caused by a disk-write virus in an operating system release, hardware errors in the controller, head crashes, or media degradation.
Backup Process The process of backing up data is a complex series of actions that involves selecting the backup type, establishing a suitable backup schedule that minimizes interference with the organization’s operations, and determining whether duplicate data should be created automatically using a redundant array of independent disks (RAID). The three basic types of backups are full, differential, and incremental. A full backup takes a complete duplicate of an organization’s system. While this method creates the most detailed backup, it is also the most time-consuming and requires a large amount of system space. The other two methods are faster and require less space, because they both back up only the data that has changed. Let’s assume a full backup is done once a week on a Sunday and a differential or incremental backup is done on each of the other days of the week. A differential backup copies only those files that have changed since the last full backup, so the amount of data to back up grows each day since the full backup (i.e., Monday has one day of changes to back up, Tuesday has two days, and so on). An incremental backup copies only those files that have been modified since the last backup of any kind, so it needs to capture only the changes since that point. Using the same daily backup example, this results in only one day’s worth of changes to back up each time. A major component of the backup process is the scheduling and storing of the backup data. The most common schedule is a daily on-site incremental
or differential backup, combined with a weekly off-site full backup. Typically, backups are conducted overnight, when system activity is at its lowest, which greatly limits the probability of user interruption. The methods for selecting files to back up and determining backup file storage locations are as varied as the businesses that require backups. It is up to each organization to choose which method or methods best balance its security needs against the desire to readily access those files. For example, is the need for a full backup more important than the organization’s need to have access to data 24/7? Or is constant access to data of primary importance, so that the system backup should occur incrementally? Each organization must determine which set of criteria is most important for meeting its business objectives.
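The growth pattern of differential versus incremental backups described above can be illustrated with a short sketch; the daily change sizes are invented for the example:

```python
# Weekly full backup on Sunday; daily changes (in GB) for Monday..Saturday.
daily_changes_gb = [3, 2, 4, 1, 5, 2]

# Differential: everything changed since the last FULL backup, so the
# amount grows each day of the week.
differential = [sum(daily_changes_gb[: d + 1]) for d in range(len(daily_changes_gb))]

# Incremental: only what changed since the LAST backup of any kind, so
# each day captures just that day's changes.
incremental = daily_changes_gb[:]

print("Differential sizes Mon-Sat:", differential)  # [3, 5, 9, 10, 15, 17]
print("Incremental sizes Mon-Sat:", incremental)    # [3, 2, 4, 1, 5, 2]
```

The trade-off is visible in the numbers: differential backups grow through the week but restoring needs only the full backup plus the latest differential, while incremental backups stay small but restoring requires replaying every incremental since the last full backup.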
Topic C: The Purpose of Systems and Data Recovery Procedures (Level B) Many organizations have online computer systems that must maintain constant functionality. Most online applications have numerous application programs that access data concurrently and, as such, databases must be correct and up to date at all times. Since information is an essential tool used by all levels of an organization, the security, availability, and integrity of information are of the utmost importance. When a system fails, recovery procedures must be in place to restore and validate the system and return it to normal. The purpose of data recovery is to restore database operations to their prefailure status. A data recovery plan provides detailed guidelines for the recovery of the entire system.
DBMS Recovery Process IT professionals play a key role in data recovery and in restoring the DBMS to its pre-failure status. By identifying the type of failure that occurred, the organization as a whole, and the IT team specifically, can define the state of activity to return to after the recovery. This means that the organization must determine the potential failures, including the reliability of the hardware and software, in order to accurately design the database recovery procedures. The four main recovery actions include the following: • Transaction undo. A single transaction aborts itself or is aborted by the system during routine execution, and its partial effects are rolled back. • Global redo. When recovering from a system failure, the effects of transactions that committed before the failure but were not yet fully written to disk must be reapplied. This means that the system must contact all linked DBMSs to retransmit missing, incomplete, or lost information across communication networks. • Partial undo. While a system is recovering from a failure, the effects of transactions whose execution was terminated in an uncontrolled manner (and which therefore never completed) must be rolled back out of the database. This often requires the recovery component to be repeated. • Global undo. If the database is completely destroyed, such as by fire or flood, a copy of the entire database must be reloaded from the backup source. A supplemental copy of logged transactions is necessary to roll the state of the database forward to the present. This means that the system must be able to contact all linked DBMSs to retransmit missing, incomplete, or lost information across all communication networks.
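As an illustration of transaction undo, the following sketch uses SQLite’s rollback behavior: a simulated failure mid-transaction causes the partial update to be undone, returning the database to its pre-failure state. The table and amounts are hypothetical:

```python
# Minimal illustration of "transaction undo": an error in the middle of a
# transaction triggers a rollback, so the partial update never persists.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('checking', 1000), ('savings', 500)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 200 WHERE name = 'checking'"
        )
        raise RuntimeError("simulated system failure mid-transaction")
except RuntimeError:
    pass  # the failure was "handled"; the DBMS undid the partial work

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'checking'"
).fetchone()[0]
print(balance)  # 1000 -- the partial update was undone
```

The same principle scales up: in a production DBMS the rollback is driven by the transaction log, which records the before-images needed to undo incomplete work.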
Database Recovery Disaster and incident recovery processes both provide detailed guidance in the event of a recovery event, including details about the roles and responsibilities of the people involved and the personnel and agencies that need to be notified. Once the full extent of the recovery needed has been determined, the recovery process can begin. Full recovery requires the organization to: • Identify and resolve the vulnerabilities that allowed the incident to occur and spread. • Install, replace, or upgrade the safeguards that failed to stop or limit the incident or that were missing from the system in the first place. • Evaluate the monitoring capabilities that are present and, if needed, improve their detection and reporting methods or install new monitoring capabilities. • Restore the data from backups. • Restore services and processes in use. Compromised services and processes must be examined, cleansed, restored, and brought back online. • Continuously monitor the system so the incident does not recur.
• Restore confidence to the organization’s community of interest. This requires honesty and transparency in order to prevent panic and confusion from causing additional disruptions to the organization’s operations. Finally, an after-action review should be conducted before returning to routine duties. All key players should review and verify that all data recovery documentation is accurate and precise and should then document any changes or edits. This new document can be used as a training case for future staff within the organization.
Next Steps You have completed Part 3, Section III, of The IIA’s CIA Learning System®. Next, check your understanding by completing the online section-specific test(s) to help you identify any content that needs additional study. Once you have completed the section-specific test(s), a best practice is to reread content in areas you feel you need to understand better. Then you should advance to studying Section IV. You may want to return to earlier section-specific tests periodically as you progress through your studies; this practice will help you absorb the content more effectively than taking a single test multiple times in a row.
Index The numbers after each term are links to where the term is indexed and indicate how many times the term is referenced. access management 1 agile development 1 application controls 1 applications 1, 2 data 1, 2 systems 1 batch processing 1 BCM (business continuity management) 1 board of directors, role in information technology 1 broadband 1 browsers 1 business continuity management 1 change control 1 logs 1 chief information officer 1 CIO (chief information officer) 1 client-server model 1 co-sourcing 1 COBIT 1 cold sites 1 Committee of Sponsoring Organizations frameworks Internal Control—Integrated Framework 1 communications failures 1 configuration, in systems development life cycle 1 contingency planning 1 control frameworks 1 COBIT 1 eSAC 1 ISO 27000 series 1 ISO/IEC 38500 1 ITIL 1
controls application 1 classification of 1 corrective 1 databases 1 detective 1 general 1 governance 1 information technology 1 internal 1 management 1 preventive 1 technical 1 conversion, in systems development life cycle 1 corrective controls 1 COSO frameworks Internal Control—Integrated Framework 1 CRM (customer relationship management) 1 customer relationship management 1 customization, in systems development life cycle 1 data administrators 1 backup 1, 2 mining 1 recovery 1 warehouses 1 database management system 1, 2 databases 1 administrators 1 controls 1 maintenance 1 recovery 1 relational 1 terminology 1 DBAs (database administrators) 1 DBMS (database management system) 1, 2 detective controls 1 disaster recovery 1
documentation 1 DR (disaster recovery) 1 Electronic Systems Assurance and Control (eSAC) model 1 enterprise resources planning software 1 ERP (enterprise resources planning) software 1 eSAC (Electronic Systems Assurance and Control) model 1 ethics in information technology 1 extranets 1 feasibility studies 1 gateways 1 general controls 1 global redo 1 global undo 1 governance controls 1 software 1 GRC (governance, risk, and compliance) software 1 hardware auditing 1 hot sites 1 implementation, in systems development life cycle 1 incident handling 1 information risk 1 information technology 1 applications 1 auditing 1 client-server model 1 contingency planning 1 controls 1 ethics in 1 infrastructure 1 mainframes 1 management 1 networks 1 policies 1 quality 1 roles in 1 security 1
servers 1 workstations 1 internal controls 1 International Organization for Standardization ISO 27000 family of standards 1 ISO/IEC 38500 1 International Standards for the Professional Practice of Internal Auditing 1210.A3 1 1220.A2 1 2110.A2 1 Internet 1 backbone 1 service providers 1 structure 1 terminology 1 intranets 1 ISPs (Internet service providers) 1 IT. See information technology ITIL 1 JAD (joint application development) 1 joint application development 1 LANs (local area networks) 1 local area networks 1 mainframes 1 management controls 1 role in information technology 1 media failures 1 memo posting 1 mining, data 1 networks 1 O/Ss (operating systems) 1 OLAP (online analytical processing) 1 online analytical processing 1 Open Systems Interconnection model 1 operating systems 1 operation in systems development life cycle 1 operations, in information technology area 1
OSI (Open Systems Interconnection) model 1 out-sourcing 1 partial undo 1 performance monitoring 1 policies, information technology 1 Practice Guides 1 preventive controls 1 processing, batch vs. real-time 1 programmers 1 programming, in systems development life cycle 1 quality assurance officer 1 RAD (rapid application development) 1 rapid application development 1 real-time processing 1 recovery data 1 database management systems 1 systems 1 refinement, in systems development life cycle 1 regression testing 1 relational databases 1 risk information 1 routers 1 SDLC (systems development life cycle) 1 security browsers 1 information/data 1 senior management, role in information technology 1 servers 1 service-oriented architecture 1 SOA (service-oriented architecture) 1 standards See also International Standards for the Professional Practice of Internal Auditing 1 SysSPs (system-specific security policies) 1 system testing 1 system-specific security policies (SysSPs) 1
systems analysis, in systems development life cycle 1 systems backup 1 systems change control 1 systems design, in systems development life cycle 1 systems development in information technology area 1 life cycle 1 systems failure 1 systems planning, in systems development life cycle 1 systems recovery 1 systems selection, in systems development life cycle 1 technical controls 1 technical support 1 testing business continuity plans 1 in systems development life cycle 1 transaction failures 1 undo 1 unit testing 1 virtual private networks 1 VPNs (virtual private networks) 1 WANs (wide area networks) 1 warehouses, data 1 warm sites 1 web services 1 wide area networks 1 workstations 1
Contents Section III: Information Technology Section Introduction Chapter 1: Application and System Software Topic A: Core Activities in the Systems Development Life Cycle and Delivery (Level B) Topic B: Internet and Database Terms (Level B) Topic C: Key Characteristics of Software Systems (Level B) Chapter 2: IT Infrastructure and IT Control Frameworks Topic A: IT Infrastructure and Network Concepts (Level B) Topic B: Operational Roles of the Functional Areas of IT (Level B) Topic C: The Purpose and Applications of IT Controls and IT Control Frameworks (Level B) Chapter 3: Disaster Recovery and Business Continuity Topic A: Disaster Recovery Planning Concepts (Level B) Topic B: The Purpose of Systems and Data Backup (Level B) Topic C: The Purpose of Systems and Data Recovery Procedures (Level B) Index
Section IV: Financial Management
This section is designed to help you:
• Identify the concepts and underlying principles of financial accounting.
• Recognize advanced and emerging financial accounting concepts.
• Identify different types of debt or equity, the classifications for debt, and the basic means of using derivatives and hedging transactions.
• Interpret financial analysis.
• Describe the revenue cycle.
• Define asset management activities and accounting.
• Describe supply chain management.
• Describe capital budgeting, capital structure, basic taxation, and transfer pricing.
• Explain general concepts of managerial accounting.
• Differentiate costing systems.
• Distinguish various costs and their use in decision making.
The Certified Internal Auditor (CIA) exam questions based on content from this section make up approximately 20% of the total number of questions for Part 3. Almost all of the topics are covered at the “B—Basic” level, meaning that you are responsible for comprehension and recall of information. (Note that this refers to the difficulty level of questions you may see on the exam; the content in these areas may still be complex.) One topic is covered at the “P—Proficient” level, meaning that you are responsible not only for comprehension and recall of information but also for higher-level mastery, including application, analysis, synthesis, and evaluation.
Section Introduction Accounting is the framework that provides financial control over the actions and resources of an organization. Finance, on the other hand, is primarily concerned with funding and maintaining sources of funds, managing bank relationships, conducting financial planning and analysis and releasing funds for internal or external business investments and expenses, ensuring that current obligations are met, and ensuring that the organization has sufficient liquidity or cash available at the right time to meet
obligations. Accounting includes financial accounting and managerial accounting. Financial accounting is primarily concerned with external financial reporting. This requires external auditors to provide assurance that financial statements fairly present the actual financial situation of an organization. With the passage of the U.S. Sarbanes-Oxley Act (SOX), providing assurance for internal controls over financial reporting (ICFR) has been added to the list of necessary assurances for those companies required to adhere to SOX (namely, those that are SEC-registered and publicly traded). Internal auditors can play a key role in helping support management’s assessment of ICFR and can also help coordinate coverage and reliance efforts with external auditors and other assurance providers. While financial accounting still focuses heavily on external financial reporting, COSO’s Internal Control—Integrated Framework has been updated to include reference to and consideration of both external and internal financial and nonfinancial reporting. Managerial accounting is internal financial reporting, and it is primarily concerned with providing timely information to managers and other decision makers so they can make wise choices regarding how available finances should be expended in pursuit of organizational goals. Managerial accounting may make use of methods and processes that are not allowed for external financial reporting. Organizations have great flexibility in choosing appropriate managerial accounting methods since the primary objective is to enhance decision-making ability. For example, a company that uses lean manufacturing methods may wish to use lean accounting methods that have been developed to show the value of reducing inventories and so on. (Lean accounting is not addressed in these materials.)
Chapter 1: Financial Accounting and Finance Chapter Introduction Financial Accounting and External Financial Reporting Financial accounting involves identifying, recording, and communicating the organization’s economic events to interested parties. Economic events include credit sales, collecting cash from accounts receivable, recording payments due to vendors or employees, making such payments, and so on. Accountants need to systematically record the monetary impact of these events. They can also classify these events to better understand what funds the organization has and what it is doing with those funds. Organizations use three basic steps to record financial transactions: 1. Identify and analyze individual transactions for their effect on financial accounts. 2. Enter the transaction data into a journal. 3. Transfer the journal data to the correct accounts in the ledger. The journal is used as the book of original entry for financial transactions. There is a general journal as well as journals for specific purposes. Journals help to show the full effects of a transaction in chronological order and help prevent or reveal errors because they use dual-entry accounting (see Exhibit IV-1 for a definition). The ledger and subledgers are used to keep all data regarding specific account balances in one place. Entries are made in chronological order. At the end of a year or other financial reporting period, accountants prepare a trial balance, which is a summary of the organization’s accounts and their balances at a specific point in time. The trial balance, after making certain adjustments, is used to prepare a key output or communication: external financial reports.
The objective of external financial reporting is preparation of relevant and reliable financial statements that fairly and accurately represent the activities of the organization in accordance with U.S. Generally Accepted Accounting Principles (GAAP) or International Financial Reporting Standards (IFRS). Risks related to financial reporting objectives should form the basis for the majority of internal controls, such as risks of erroneous valuation, incomplete disclosure, or overstatement of assets. The internal controls set reliable financial reporting as a key objective because of its importance not only in satisfying legal and regulatory issues but also in ensuring efficiency and stewardship over the organization’s resources. The financial statements may be the starting point for management when setting general objectives. Specific objectives related to the business processes that can materially affect the statements will then logically follow. Management identifies risks in financial statement assertions for accounts and disclosures, for accounting IT systems, and for business units. Changes such as accounting system upgrades, unusual account variances, or others would trigger greater scrutiny.
Financial Statement Assertions
As noted by the American Institute of Certified Public Accountants (one of the governing bodies for accounting in the U.S.) in its Statements on Auditing Standards, there are several general assertions that management should be able to make regarding its financial statements: • Existence or occurrence. This assertion concerns timing: assets, liabilities, and ownership exist at a specific date, and reported transactions represent events that actually occurred during a defined period. • Completeness. All transactions that occurred during a period and should have been recognized during that period have been recorded. • Rights and obligations. On a given date, assets are the rights and liabilities are the obligations of an entity. • Valuation or allocation. Assets, liabilities, revenues, and expenses are
recorded at appropriate amounts in accord with appropriate accounting principles. Transactions are mathematically correct, appropriately summarized, allocated to appropriate accounting periods, and properly recorded in the entity’s books and records. • Presentation and disclosure. Items in the financial statements are properly described, sorted, and classified. To meet financial reporting objectives, management must ensure that each transaction, account, or disclosure is evaluated according to each of these assertions, at a level appropriate to the assessed level of risk.
Finance and Treasury Finance can be personal, corporate, or public; this text relates primarily to corporate finance. The goal of corporate finance is to achieve the goals of the organization. A for-profit organization’s goal in financial terms is to maximize the economic value of the organization over the long term. For a publicly owned organization, this means maximizing the value of the organization’s common stock. Shareholders expect to receive a return on their investment in stock (from dividends and/or share price growth, since stock pays no interest) that exceeds the rate that can be earned from other investments of similar risk. A not-for-profit organization’s goal in financial terms is to achieve the goals set in the organization’s charter in a manner that maximizes the benefits to stakeholders while making the most efficient use of resources possible. A public institution or government’s goal in financial terms is to provide the maximum benefits to its constituents while making the most efficient use of resources possible. Finance professionals closely monitor a company’s performance by comparing expected outcomes with actual results (budget to actual comparisons) and calculating and comparing financial ratios. These tasks and the resulting information influence the key decisions of corporate finance. Corporate finance is broadly concerned with three major types of decisions:
The goal of investment decisions is to invest in assets that generate a return in excess of the cost of those funds expended. This involves capital budgeting and financial planning and analysis to make strategic long-term capital budgeting decisions as well as short-term decisions on investments that balance the need for capital preservation, liquidity, and a reasonable return on investment. Liquidity is the ability of an organization to meet its current and future obligations in a cost-effective and timely fashion. Liquidity can also refer to how quickly an asset can be converted to cash. The goal of financing decisions is to provide sufficient funds to support an organization’s strategic goals. Long-term financing includes the issuance of stocks or bonds or entering into long-term debt arrangements or leases. Short-term financing includes the issuance of commercial paper (short-term commercial loans) and arranging for credit lines and revolving credit. Corporate treasury is involved in managing the organization’s liquidity and cash position. An organization must have sufficient cash on hand to meet its obligations that are coming due. If, for example, an organization were to lure retail customers by offering a “no payment for six months” deal, it should be certain it has sufficient liquidity during this time period to pay its debts, expenses such as payroll, and dividends, or it may have to resort to high-cost borrowing while waiting for customers to pay their obligations. The incremental revenue from new customers from the promotion may be less than the incremental financing expense (and it increases risks; some organizations have gone out of business in this way). Financing decisions also include bank and shareholder relationship management. Maintaining good relations with these providers of funds is critical. Reviewing bank relationships periodically can ensure that the organization is getting the best deals possible. 
The goal of dividend decisions is to decide what portion of after-tax profit should be distributed to shareholders in the form of a dividend and what
portion should be allocated to retained earnings, or the funds used for the future sustenance and growth of the organization. Another task of finance is to perform financial risk analysis, which involves measuring, managing, and responding to the organization’s exposure to many types of risk, including risks of capital budgeting decisions; risks related to the ability of others to fulfill their financial or contractual obligations to the organization; risks from foreign exchange, interest rates, and other international trading risks; and risks derived from the use of complex financial instruments.
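The liquidity trade-off in the earlier "no payment for six months" example can be framed as simple arithmetic. The following Python sketch uses purely hypothetical figures (the sales volume, margin, and borrowing rate are assumptions, not values from the text):

```python
# Hypothetical "no payment for six months" promotion: the seller books
# incremental sales now but must finance the receivables until customers pay.
incremental_sales = 500_000   # assumed new revenue attracted by the promotion
gross_margin = 0.30           # assumed profit margin on those sales
annual_borrow_rate = 0.12     # assumed rate on high-cost short-term borrowing
months_financed = 6

incremental_profit = incremental_sales * gross_margin
# Simple interest on the financed receivable balance for half a year.
financing_cost = incremental_sales * annual_borrow_rate * (months_financed / 12)

net_benefit = incremental_profit - financing_cost
print(incremental_profit, financing_cost, net_benefit)  # 150000.0 30000.0 120000.0
```

With these assumptions the promotion is still profitable; with a thinner margin or a higher borrowing rate, the net benefit can turn negative, which is the risk the text describes.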
Terminology
Financial accounting and finance have their own specialized terminology. Familiarity with these terms will aid your understanding of the topics in this chapter. Exhibit IV-1 lists key financial accounting and finance terms.
Exhibit IV-1: Common Accounting Terms Used in Financial Accounting and Finance
Accounting: Recording and reporting of an entity's financial activity, including assets, liabilities, equity, revenues, expenses, and earnings.
Accrual basis accounting: An accounting system that records transactions as they occur, recognizing revenue when earned and expenses when incurred, regardless of when the related cash is actually received or paid.
Amortize: To allocate acquisition costs of intangible assets to the periods of benefit. Called depreciation for plant assets and depletion for wasting assets (natural resources).
Asset: Company-owned economic resource that can be expected to provide future economic benefits; must be quantifiable within a reasonable degree of accuracy.
Balance sheet: A financial statement that shows, at a given point in time, what an organization owns, what it owes (or its obligations) to others, and its capital position (retained earnings and owner/shareholder investments).
Capitalize: To record an expenditure that will benefit future periods as an asset rather than treating it as an expense during the period of its occurrence.
Chart of accounts: Numerical listing of all accounts used to record an entity's transactions, including assets, liabilities, owner's equity, revenue, expenses, gains, and losses.
Closing: The process of transferring account balances from subledgers to trial balance accounts at the end of an accounting period; typically associated with income statement accounts.
Credit: To make an entry on the right-hand side of a journal.
Debit: To make an entry on the left-hand side of a journal.
Depreciation: A method of allocating the cost of tangible assets over the periods of expected use. Includes accelerated, activity, and straight-line methods.
Dual-entry accounting: An accounting system in which each transaction is recorded in at least two places: a debit to one account and a credit to another account; also known as double-entry accounting.
Equity: The residual ownership interest in an organization's assets after deducting all of its liabilities.
Expenses: Money spent or liabilities incurred resulting from an organization's efforts to generate revenues from ongoing operations.
Financial reporting: The process of presenting information about an entity's financial position, operating performance, and cash flow for a specified period.
Financial statements: Balance sheet, income statement, statement of cash flows, and statement of retained earnings.
General ledger: Listing of all of an entity's financial transactions, through offsetting debit and credit accounts.
Impaired/impairment: Process of recording and presenting the fair value of an asset for which the value has depleted faster than the calculated depreciation or amortization. Also, the amount by which stated capital (outstanding shares multiplied by stated value) is reduced by distributions (e.g., dividends) and losses.
Income statement: A summary of the profitability or success of an organization over a period of time, such as a year.
Intangible assets: Assets that have no physical substance; exclude financial instruments by definition.
Journal entry: Recording of a financial transaction (as a debit and then as a credit) by date; eventually posted to a ledger.
Ledger: Accounting book of final entry, in which transactions are listed under separate accounts; subledgers provide more detailed information about individual accounts (e.g., sales, purchases).
Liability: Obligation or debt that must be paid with assets or services in the future.
Lower of cost or market (LCM): An asset valuation principle in which "cost" is the original cost and "market" refers to the market-determined asset value, the lower of which becomes the new value.
Minority interests: The stockholders' equity of a subsidiary company that may be allocated to those owners who are not part of the controlling (majority) interest.
On-top adjustments: Adjustments made by management to reflect judgment calls and deviations from calculated results and estimates; typically made after a first draft of the financial statements.
Statement of cash flows: A financial statement that reconciles the income statement to the beginning and ending cash on the balance sheet.
Statement of shareholders' equity: A financial statement that starts with the balances from the end of the prior period and shows changes due to net income (loss) and dividends for the period or any new issuances or repurchases of stock.
Trial balance: Total of all debits and credits; if debits do not equal credits, an error has occurred (e.g., mistake in entry, omission, double posting).
Topic A: Concepts and Principles of Financial Accounting (Level B)
Accounting Standards
To understand why financial statements are presented the way they are, internal auditors need to understand the audience that the statements are made for and the objectives that form the basis of financial reporting standards. In the United States, the primary standards-setting body is the Financial Accounting Standards Board (FASB), an independent, nonprofit group under the authority of the U.S. Securities and Exchange Commission (SEC). Generally Accepted Accounting Principles (GAAP) is an accounting term describing both the broad guidelines and the specific procedures that have substantial authoritative support in the business community. GAAP evolved from both published standards and conventional practices where no standard existed. The FASB's Accounting Standards Codification® has been the official source for all GAAP standards since September 15, 2009. This online codification resource collected and renumbered all standards formerly maintained by multiple parties using a new, logical numbering system. In the U.S., financial statements of public companies must conform to U.S. GAAP. GAAP has two main categories of principles: recognition and disclosure. Recognition principles involve the timing and measurement of financial items accounted for; disclosure principles require inclusion in the financial statements or the notes to the financial statements of descriptive nonfinancial elements that, if omitted, could be misleading. Internationally, the most significant standards-setting body is the International Accounting Standards Board (IASB), an independent private-sector body formed from the accountancy bodies of numerous countries. The IASB is responsible for developing the International Financial Reporting Standards (IFRS).
The IFRS is a set of standards required or permitted for use in over 115 jurisdictions, including supranational bodies such as the European Commission. The objective of the IASB when forming the IFRS was to create harmony among the regulations and accounting standards related to financial reporting across national boundaries. In addition to comparability issues, multinational organizations needing to prepare financial information in multiple countries want to avoid the multiplication of costs in preparing different reports for each country. To partly alleviate this issue, the SEC now allows foreign private issuers listing on U.S. exchanges to report using only IFRS. While the FASB and IASB standards are more similar than different, there are still gaps that the two organizations are working to close. However, standards are not set in a vacuum but as a result of continuing contributions and pressure from individuals, organizations, nonprofit standards-setting associations, politicians, lobbyists, and many others. Therefore, compromises have been made allowing more than one method of accounting for a particular subject. Where appropriate, the differences will be noted.
Objectives of External Financial Reporting The FASB has created a set of objectives for external financial reporting as the underlying basis for standards being set, and, though these Statements of Financial Accounting Concepts (SFACs) are not binding, they form the basis for the standards and so are reviewed briefly here. External financial reporting is designed not as an end in itself but to furnish information useful in making business and economic decisions. Financial reporting: • Provides specific business data (not macroeconomic data). • Includes estimates and judgments. • Is primarily historical. • Is not intended to be used as the sole source of organizational information. • Has a procurement cost.
Users of financial statements are assumed to have a reasonable understanding of business and economics and to be willing to apply reasonable diligence in the study of the information. Users include present and potential investors, creditors, managers, financial advisors, brokers, and auditors; they should be able to use financial reports to assess the amounts, timing, and uncertainty of planned cash receipts from dividends, securities, or loans as well as net cash inflows. Financial reporting should also provide information on an organization’s economic resources (assets) and claims to those resources (liabilities and equity). Earnings and their components are of primary importance to financial reporting. Financial reporting should include the results of financial performance over an accounting period and an indication of how management has provided stewardship to owners, but it will not directly show the organization’s value. Users must make their own estimate by applying their own analyses to the information. Finally, management is assumed to have better information on the organization than others and is therefore expected to increase the value of the information by identifying key events and transactions (and disclosing them as appropriate).
Accounting Concepts The goal of financial reporting is to provide stakeholders with information to exercise due diligence in decision making. Management may use financial reports to develop strategy, gauge performance, and allocate economic resources. Investors and lenders may use financial reports to make decisions about the size, conditions, timing, and risk level of investments and loans. To ensure the reliability, clarity, and usefulness of financial statements, GAAP describes:
Fundamental Qualities of Accounting Information To ensure that financial statements are truly useful, GAAP requires the information in financial statements to reflect the following fundamental accounting qualities: • Relevant. Relevant information has feedback value and/or predictive value and timeliness. Information with feedback value helps confirm or correct the results of prior expectations; information with predictive value helps in making decisions about past, present, or future events. Both can occur simultaneously, because learning from past decisions helps managers make better future decisions. Timeliness is the concept that information must be available at the time the decisions need to be made or it will be of no value. • Reliable. Reliability is a measure of the neutrality of the sources of information, the faith that the information represents what it purports to represent, and the information’s independent verifiability. Neutrality refers to making choices that are free from bias toward a predetermined result and placing the relevance and reliability of information above other concerns. Representational faithfulness is the assurance that descriptions of events and financial transactions correspond closely to what occurred in reality. Verifiability is the extent to which a high degree of consensus can be formed between independent measurers when using the same techniques. • Comparable. Comparable statements reflect the use of standards and techniques similar to those used in other organizations so that users can differentiate real similarities and differences from those caused by
divergent accounting rules. • Consistent. Consistent means that the same standards are applied over time so that financial statements from differing periods can be compared. For this reason, organizations must show that new accounting methods adopted are preferable to prior methods.
Accounting Constraints GAAP also describes four basic constraints on those preparing financial statements: • Cost-benefit relationship. Gathering information involves significant costs, and while each individual user of the information will place more or less value on some types of data, statement preparers must select a level of information that will provide perceived benefits to a wide enough body of users to outweigh the perceived costs of furnishing this information. In addition to the direct costs such as preparation, this includes the cost of the information entering the public domain (use by competitors). • Materiality. Materiality is a threshold level above which items would make a difference to a decision maker (material) and below which the items are insignificant (immaterial). It is a judgment call that includes not only the magnitude of monetary amounts relative to some overall amount but must also consider the nature of the item and the circumstances in which decisions must be made. Therefore, no specific materiality guidelines can be set. Materiality may vary due to a number of factors, including the person, firm, industry, or transaction in question. • Industry practices. Accounting procedures should follow applicable industry practices. • Conservatism. Conservatism involves prudence and adequate consideration of the risks and uncertainty in business situations when presented with situations that require judgment. It includes selecting the accounting method that is least likely to overstate net income and financial position and doesn’t anticipate gains or losses. Consistent understatement must also be avoided. Conservatism implies a pessimistic frame of mind that does
not recognize revenue until it has been earned and that recognizes expenses when incurred.
Dual-Entry Accounting
An account is simply a place to record transactions that fit within a specific category (i.e., a "bucket" for those types of transactions). Dual-entry accounting (or double-entry accounting) is the international standard. In a dual-entry system, each transaction is recorded in at least two places: a debit to one account and a credit to another account. For assets, expenses, and dividends, a debit increases the account balance and a credit decreases the account balance. For liabilities, revenues, capital stock, and retained earnings, a debit decreases the account balance and a credit increases the account balance. For example, when a customer pays an account receivable (an asset), the accounting entry will debit cash (increase) and credit accounts receivable (decrease). Or, for example, to record the cash proceeds from a bank loan, the organization records an increase in assets with a debit to cash and records an increase in liabilities with a credit to notes payable. Exhibit IV-2 illustrates the rules of debit and credit.
Exhibit IV-2: Rules of Debit and Credit
Assets, expenses, dividends: a debit increases the balance; a credit decreases it.
Liabilities, revenues, capital stock, retained earnings: a debit decreases the balance; a credit increases it.
The dual-entry accounting system is a self-checking system; the sum of all debits must equal the sum of all credits, because each debit entry should
have a corresponding credit entry. Preparing periodic trial balances can ensure that the accounts balance at that specific moment in time. Furthermore, accounts with a beginning balance and an ending balance, namely balance sheet accounts, will have the following relationship: the beginning balance plus increases minus decreases equals the ending balance. The use of dual-entry accounting has given rise to the use of special accounts that either increase or reduce a primary account. On the balance sheet, accounts that reduce an asset, liability, or equity account are called contra accounts, for example, discount on bonds payable or sales returns and allowances. Conversely, an adjunct account increases an asset, a liability, or an equity account, for example, premium on bonds payable. These types of accounts allow the primary account's value and the amount of the adjustment to be known.
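The self-checking property of dual-entry accounting and the trial balance can be sketched in a few lines of Python. The account names and amounts below are hypothetical; this is an illustration of the mechanics, not an accounting system:

```python
from collections import defaultdict

# Hypothetical journal: each entry debits one account and credits another
# for the same amount, so total debits always equal total credits.
journal = [
    ("Cash", "Notes Payable", 10_000),        # cash proceeds of a bank loan
    ("Accounts Receivable", "Sales", 2_500),  # a credit sale
    ("Cash", "Accounts Receivable", 2_500),   # the customer pays the receivable
]

debits = defaultdict(float)
credits = defaultdict(float)
for debit_account, credit_account, amount in journal:
    debits[debit_account] += amount
    credits[credit_account] += amount

# Trial balance: the sum of all debits must equal the sum of all credits;
# an inequality would reveal an entry error, omission, or double posting.
total_debits = sum(debits.values())
total_credits = sum(credits.values())
assert total_debits == total_credits

# Cash is an asset, so debits increase it and credits decrease it.
cash_balance = debits["Cash"] - credits["Cash"]
print(total_debits, cash_balance)  # 15000.0 12500.0
```

Note that a balanced trial balance only shows the books balance at that moment; as the text observes, it cannot detect errors such as posting a correct amount to the wrong account.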
Accrual Versus Cash Basis Accounting Accrual basis accounting relies on the principles of revenue recognition and matching. It records transactions as they occur, recognizing revenue when earned and expenses when incurred, regardless of when the cash is actually paid. Accrual basis accounting is the accepted norm for most organizations, and it is considered a better indicator of an organization’s continuing viability than is cash basis accounting. In cash basis accounting, the organization recognizes revenue only when cash is received and recognizes expenses only when cash is paid out. Items promised to be paid or received, such as accounts payable and receivable, are ignored. Cash basis accounting is not allowed under GAAP or IFRS. However, it may be used for tax purposes.
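The timing difference between the two bases can be shown with a small, hypothetical example: one sale delivered in January but collected in February, and one delivered and collected in January. The months and amounts are assumptions for illustration only:

```python
# Hypothetical sales: (month earned/delivered, month cash is received, amount)
sales = [
    ("Jan", "Feb", 1_000),  # delivered in January, cash collected in February
    ("Jan", "Jan", 400),    # delivered and collected in January
]

def accrual_revenue(month):
    """Accrual basis: recognize revenue when earned, regardless of cash timing."""
    return sum(amount for earned, _, amount in sales if earned == month)

def cash_revenue(month):
    """Cash basis: recognize revenue only when the cash is received."""
    return sum(amount for _, received, amount in sales if received == month)

print(accrual_revenue("Jan"), cash_revenue("Jan"))  # 1400 400
print(accrual_revenue("Feb"), cash_revenue("Feb"))  # 0 1000
```

Both bases report the same 1,400 of total revenue eventually; they differ only in which period the revenue appears, which is why accrual accounting is considered the better indicator of continuing viability.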
Accounting Assumptions The basic accounting assumptions used in the preparation of financial statements include the following: • Economic entity. An economic entity is any entity that has separately
identifiable accounting and accountability. An entity could be an individual, a type of corporation, or a business unit. Economic entities differ from legal entities, which are legally separate businesses or subsidiaries. • Going concern. An entity is assumed to be operationally viable going into the future, at least for long enough to validate the use of accounting methods such as asset capitalization, depreciation, and amortization. If liquidation of the entity is certain, this assumption does not apply and different accounting methods must be used. • Monetary unit. A stable currency will be used to measure and record economic activity. GAAP ignores the effects of inflation or deflation for most measures at present. • Periodic reporting. Operations can be separated artificially and recorded in different periods (e.g., week, month, quarter, year), thus allowing for comparisons of past and present performance. Even end-of-year inventory levels must include items still in production, which is called work-in-process inventory.
Accounting Principles
Finally, those preparing financial statements face many questions about what, how, and when to report activity. Their decisions are guided by four recognized accounting principles: • Historical cost. Most assets and liabilities are reported at the levels at which they were acquired or incurred rather than at fair market value. For example, fuel costs are reported at the level at which the fuel purchases were made, not at the level in effect at the time of the financial report or at some point in the future. The historical value should be supported with evidence, such as receipts or records of payment. An exception to this is the reporting of certain marketable securities. • Revenue recognition. Revenue should be recognized when it is realized or realizable and earned.
• “Recognized” means revenue has been recorded as a journal entry. • “Realized” means assets such as goods or services have been exchanged for cash or claims to cash (e.g., an invoice for an account receivable). • “Realizable” means that the assets can be readily converted to cash without significant extra expense through sale in an active market at prices that can be easily determined. It can also relate to a determination that an account receivable (or some portion of it) is still considered collectible. • “Earned” means that the organization has done a substantial amount of what it promised to do (provided goods or services). Thus a prepaid service contract is recorded as a liability until that service has been substantially performed. For example, a company that sells a client a new information system bills the client on completion of installation and testing. The revenue is recognized at this point, although payment may not be received from the client for 60 days. Revenue recognition is addressed more later in this topic, as it is an area where there are high risks of improper reporting. • Matching. When practical to do so, expenses should be recognized in the period in which the corresponding revenues are recognized. Therefore, payroll and material expenses incurred to produce a shoe aren’t recognized when the shoe is produced but when it is sold. Depreciation and amortization are ways to apply the cost of a long-lived asset over the periods in which the benefits are received. Expenses that can be matched specifically to the normal costs of production are called product costs and are expensed in the period in which the revenue is earned; expenses that affect the organization as a whole but cannot be specifically allocated to a product or product line are called period costs. Period costs are expensed immediately because they cannot be matched against specific revenues. • Full disclosure. 
If information is aggregated at too high a level or is overly detailed, its usefulness can be reduced. The full disclosure principle recognizes that statement preparers must make compromises between a level of detail sufficient to help users with their decisions and condensing that information enough to keep it understandable. Regardless of the
degree and extent of disclosure deemed appropriate, no material or potentially significant information that could impact user decision making should be omitted from disclosure. Supplementary information may be presented outside the main body of the statements—for example, in footnotes.
Revenue Recognition According to U.S. Government Accountability Office (GAO) studies of U.S. public filings, revenue recognition has been a common reason for required restatements, ranking in first or second place for many years (trading off with cost or expense recognition). Improper revenue recognition can take the form of either deliberately overstating revenues, such as by recording false receivables, or understating revenues, such as by improperly recognizing revenue in a later period. It is an important area for internal auditors to consider when assessing internal controls over financial reporting (ICFR). Auditing Standards Board AU-C Section 240 (formerly Statement on Auditing Standards [SAS] No. 99), “Consideration of Fraud in a Financial Statement Audit,” directs auditors to assume that improper revenue recognition is part of fraud risk. Note that the FASB has issued new financial accounting and reporting standards on revenue recognition. Public organizations applied the new standards to annual reporting periods starting after December 15, 2017, and nonpublic organizations will do the same after December 15, 2018. The changes reflect converged guidance from both the FASB and the IASB on how to recognize revenue in contracts with customers so that different industries or geographies no longer record economically similar transactions in different manners. Revenue is usually recognized at the point of sale (i.e., at delivery) because only then is it realized or deemed realizable and earned. Some situations can allow recognition at other times.
Point-of-Sale Recognition Recognition at the point of sale is usually straightforward, but some
exceptions exist. A repurchase agreement is the sale of product or inventory with an agreement to buy back the goods in the future. If a repurchase agreement has set prices that cover the temporary buyer's total costs, the arrangement is in substance a financing, so the inventory and a matching liability stay with the seller. When this isn't the case, revenue can be recognized at the time of sale. Two related misuses of revenue recognition exist and are actively discouraged. Trade loading or channel loading (or channel stuffing) is the practice of manufacturers inducing their wholesalers to carry more inventory than they can reasonably sell. The practice inflates current-period profits at the expense of future profits. In another situation, some retailers experience a high ratio of returned items to sales, and so, even after the sale, they delay revenue recognition until all return warranties have expired. Or they record the sale and either create an allowance for returns or simply record returns as they happen. Revenue can be recognized when the sale occurs only if: • Sales aren't on consignment. • Prices are easily determined. • The payment obligation cannot be revoked by theft or loss. • It isn't part of a transfer payment. • Return levels can be estimated.
Recognition after Delivery Revenue is recognized after delivery when there is no reasonable assurance that cash collections will equal the sale price. Two methods of deferring revenue are the installment sales method and the cost recovery method. Installment Sales Method The installment sales method recognizes revenue as cash is collected from prior sales. This method is used for sales on installment where title for the goods is held until the final payment is collected. At the time of sale,
revenue up to the cost of sales plus other direct expenses (selling and administrative) is recognized, but the remainder, or gross profit, is deferred until cash is collected. Special accounts must be set up for all installment sales transactions, for gross profit on sales on installment, and for each year’s deferred gross profit. Ordinary expenses are treated as normal and are closed to the income summary account each year. Only the deferral of gross profit will affect calculation of net income. Cost Recovery Method The cost recovery method is used when there is no reasonable basis for making an estimate of collectability. This method defers recognition of profit until cash collections exceed the cost of goods sold (COGS). At sale, total revenue and COGS are reported and a journal entry records the deferred gross profit. A separate account, realized gross profit, is used in the period when the cash collections exceed costs.
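The deferral mechanics of the two methods can be sketched as follows. The function names and dollar amounts are hypothetical, and Python is used purely for illustration:

```python
# Sketch of deferred gross profit recognition under the installment sales
# and cost recovery methods. All figures are hypothetical.

def installment_profit(sale_price, cost, cash_collected):
    """Installment sales method: gross profit is recognized in
    proportion to cash collected from the installment sale."""
    gross_profit_rate = (sale_price - cost) / sale_price
    return cash_collected * gross_profit_rate

def cost_recovery_profit(cost, cumulative_cash_collected):
    """Cost recovery method: no profit is recognized until cumulative
    collections exceed the cost of goods sold."""
    return max(0.0, cumulative_cash_collected - cost)

# $10,000 sale, $6,000 cost (40% gross profit rate), $2,500 collected so far.
print(installment_profit(10_000, 6_000, 2_500))   # 1000.0
print(cost_recovery_profit(6_000, 2_500))         # 0.0 (cost not yet recovered)
print(cost_recovery_profit(6_000, 7_500))         # 1500
```

Under the installment method a portion of gross profit follows every dollar collected, while under the cost recovery method profit is entirely back-loaded until costs are fully recovered.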
Recognition Prior to Delivery Certain long-term construction contract situations span years and require early recognition of income. Methods include the completed contract method and the percentage-of-completion method. Completed Contract Method This method recognizes revenues and gross profits only at project completion. Accumulated construction costs are recorded in a construction-in-process account (an inventory account), and a billings-on-construction-in-process account (a contra inventory account) records billings to date. There are no interim credits or charges to revenues, costs, or gross profit (income statement accounts). This method is to be used only when the percentage-of-completion method (see below) is inappropriate, such as if most contracts are short-term or the percentage of completion cannot be reasonably estimated. Percentage-of-Completion Method The percentage-of-completion method recognizes revenues and gross profit
based on that period’s construction progress. The same two accounts as discussed above accumulate billings and costs, except that the construction-in-process account also holds any to-date gross profit. This method is appropriate when both parties have enforceable rights and both can be expected to perform their obligations. One method of estimating the percentage complete is the cost-to-cost basis method: the costs incurred to date are divided by the most recent estimate of the total costs of the project. This percentage is multiplied by the total revenue to find the amount of revenue to recognize to date, and the current period revenue is this amount less any revenue already recognized in prior periods.
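The cost-to-cost calculation can be sketched as follows; the function name and contract figures are hypothetical, with Python used for illustration:

```python
# Cost-to-cost percentage-of-completion sketch (hypothetical figures).

def revenue_to_recognize(costs_to_date, total_estimated_costs,
                         contract_revenue, revenue_recognized_prior):
    """Current-period revenue = (percent complete x total contract revenue)
    minus revenue already recognized in prior periods."""
    percent_complete = costs_to_date / total_estimated_costs
    revenue_to_date = percent_complete * contract_revenue
    return revenue_to_date - revenue_recognized_prior

# $1,000,000 contract; $800,000 estimated total cost; $200,000 spent to date
# (25% complete); $100,000 of revenue recognized in earlier periods.
print(revenue_to_recognize(200_000, 800_000, 1_000_000, 100_000))  # 150000.0
```

Note that the subtraction of prior-period revenue means revisions to the cost estimate automatically flow through as a cumulative catch-up in the current period.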
The Accounting Cycle Quarterly and annual external financial reports are the result of the period-end financial reporting process, which is the critical end point of what is called the accounting cycle. The accounting cycle includes entering transaction totals into the general ledger; initiating, authorizing, recording, and processing general ledger journal entries; and recording recurring and nonrecurring consolidating adjustments, combinations, and classifications. Auditors need to be concerned with the inputs, processing, and outputs used in the accounting cycle to produce the financial statements. Standard, nonstandard, eliminating, and consolidating adjusting entries must also be examined. The auditor must gain an understanding of the accounting cycle and its relation to other business processes in order to determine risks and test relevant controls. The accounting cycle repeats, so when the last step is finished, accountants return to the beginning for the next period. Keep in mind that most of the ledgers and transactions described here take place within an IT accounting system. Exhibit IV-2 shows the steps in the accounting cycle. Each step is described more fully below. Exhibit IV-2: Accounting Cycle
• Step A—Identification and analysis. This step involves determining what internal and external events (including transactions) to record using revenue recognition and matching principles and accounting assumptions. (Nonfinancial data is not recorded.) • Step B—Recording in journal (journalizing). Most transactions affect two or more accounts; a sale creates a reduction in inventory and an increase in sales, accounts receivable, and cost of goods sold. Transactions may be recorded in a journal, which is totaled and posted to the general ledger at regular intervals. Journals include a general journal plus special journals for cash receipts, cash disbursements, purchases, and sales. Journal entries consist of a debit, a credit, a date, a journal entry identification number, a description, and an approval. Journal entries should be supported by original source documents. • Step C—Posting to general ledger. The general ledger is the primary ledger for an organization, containing all asset, liability, equity, revenue, and expense accounts. Each of these subcategories has its own subsidiary ledger. Posting is recording an item from a journal in the general ledger, including summarizing and classifying the items. For tracking and completeness checks, the general journal contains a ledger account number referring to where each specific account was posted to the general ledger. • Step D—Trial balance and working papers. Usually prepared at the end
of the period, the trial balance displays a debit column and a credit column listing the balances for each account at a specific moment in time. The debit and credit columns must balance. Discrepancies can reveal journalizing and posting errors. Correct reconciliation of the two columns cannot detect when transactions have not been journalized or are entered for the wrong amount or when incorrect or duplicate entries are posted in both columns. Errors can be corrected by tracing accounts between the journal and the ledger and looking for a specific dollar amount. Because posting a debit as a credit (or vice versa) doubles the error amount, the auditor divides the amount out of balance by two and searches the journal for this amount. Transpositions (e.g., 14 instead of 41) or slides (79 instead of 790) will result in evenly divisible numbers when dividing the difference by nine (e.g., 41 – 14 = 27; 27/9 = 3). Worksheets or working papers are paper or electronic documents arranged in a columnar format for accumulating and recording adjusting entries when preparing financial statements. Accountants use worksheets to arrive at the figures needed for the financial statements before all of the journalizing and posting has been officially accomplished. Therefore, worksheets can be used to verify amounts in the journals and financial statements. Columns found on a worksheet include debit and credit columns for: • The trial balance (both the trial balance and the adjusted trial balance). • Adjustments (all adjusting entries, as described previously). • The income statement and the balance sheet. (Items from the adjusted trial balance are moved to their respective financial statement column, either the income statement or the balance sheet.) • Step E—Adjusting entries and adjusted trial balance.
To show the correct application of the matching and revenue recognition principles on the financial statements, accountants make adjusting entries so that expenses and their related revenues are matched to the same period. Because of the nonstandard nature of many (if not most) adjusting entries, such entries may require added assurance coverage or emphasis from
auditors in terms of assessing ICFR. Adjusting entries include recurring adjusting journal entries such as depreciation and amortization as well as accruals and prepayments. Accruals are either accrued revenues, which are earned revenues yet to be received as cash, or accrued expenses, which are incurred but unpaid expenses. When unrecorded accruals exist, the revenue and related asset accounts as well as the expense and related liability accounts will be understated. For accrued revenues, the adjustment will debit (increase) the asset account (e.g., interest receivable) and credit (increase) the revenue account (e.g., interest revenue). For accrued expenses, the relationship will be the same, except that it will involve the expense and liability accounts. Prepayments are either prepaid expenses, which are cash paid for goods or services prior to their consumption and treated as assets, or unearned revenues, which are cash received from customers as prepayment for goods or services and treated as liabilities or deferred revenues. Prepayments require adjusting entries because they expire through the passage of time but no recurring entry is made to record this expiration. Prepaid insurance or rent are examples. The adjusting entry would credit the asset account (decreasing it) and debit the expense account (increasing it). The adjusted trial balance is the trial balance after all adjusting entries have been made, reflecting the proper balance of each account. • Step F—Closing accounts and post-close trial balance. Closing is the process of reducing all temporary or nominal accounts to zero so they are ready to be used in the next period. On the income statement, such accounts include revenue and expense accounts by subcategory, such as sales or interest revenue accounts or expense accounts such as cost of goods sold or selling and administrative expenses. The accounts are closed to an income summary account. 
Revenues would be debited and income summary credited; expenses would be credited and income summary debited. Assuming that revenues exceed expenses, net income or a credit balance would exist, and this balance is transferred from income summary to retained earnings.
The post-close trial balance is an adjusted trial balance prepared after closing to show that debits and credits of the real accounts (assets, liabilities, and shareholders’ equity) are equal. • Step G—Preparing external financial statements. The external financial statements are prepared. A complete set of financial statements comprises a balance sheet, an income statement, a statement of cash flows, a statement of shareholders’ equity, and accompanying notes (such as management’s discussion and analysis). • Step H—Reversing. Some adjusting entries made to prepare the financial statements need to be reversed as of the beginning of the next accounting cycle.
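The Step D error-locating arithmetic (divide the out-of-balance amount by two to find a flipped debit/credit, and test divisibility by nine for transpositions and slides) can be sketched as follows; the function name and amounts are hypothetical, and real accounting systems flag such conditions automatically:

```python
# Heuristics for locating trial balance discrepancies (Step D).
# diff = the absolute difference between total debits and total credits.

def diagnose(diff):
    hints = []
    if diff % 2 == 0:
        # A debit posted as a credit (or vice versa) doubles the error,
        # so search the journal for an entry equal to diff / 2.
        hints.append(f"possible flipped debit/credit of {diff / 2:.2f}")
    if diff % 9 == 0:
        # Transpositions (14 vs. 41) and slides (79 vs. 790) always
        # produce a difference evenly divisible by nine.
        hints.append("possible transposition or slide error")
    return hints

print(diagnose(27))  # 41 recorded as 14: 27/9 = 3, so a transposition hint
print(diagnose(84))  # a 42.00 debit posted as a credit: flipped-entry hint
```

These heuristics only narrow the search; the auditor still traces the suspect amount between the journal and the ledger to confirm the error.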
External Financial Statements and Terminology The terminology in external financial statements has been precisely defined by the FASB and the IASB. The terminology presented here conforms to these standard definitions. Collectively, the financial statements described in Exhibit IV-3 capture transactions that reflect the operations and activities of an entity over a period and its financial position at a point in time. All transactions are supported by appropriate source documents. The exhibit also describes the general order and process used to generate external financial statements. First, net income and changes in shareholders’ equity are determined (on the income statement and statement of shareholders’ equity); then assets and liabilities are determined and presented on the balance sheet. The statement of cash flows is used to reconcile the income statement to the balance sheet and ties out the beginning-of-period and end-of-period cash balances. Exhibit IV-3: External Financial Statements
Income Statement (Statement of Operations) The income statement is a summary of the profitability or success of an organization over a period of time, such as a year. The following are important income statement terms: • Revenues are enhancements or inflows of assets and/or settlements of liabilities generated when an organization makes or delivers goods or services as part of its primary ongoing operations. • Expenses involve the depletion or outflows of assets and/or the incurrence of liabilities resulting from an organization’s production or delivery of goods or services as part of its primary ongoing operations. • Gains are increases in net assets (equity) due to incidental or peripheral transactions except those resulting from investments by or distributions to owners. Gains are usually reported net of related expenses. • Losses are decreases in net assets (equity) due to incidental or peripheral transactions except those resulting from investments by or distributions to owners. • Income is the combination of revenues and gains. While GAAP recognizes a difference between revenues and gains, IFRS does not consider them to
be separate elements. Similarly, IFRS groups losses within expenses. The income statement should separately present revenue, results of operations, finance costs, share of profit or loss from joint ventures (defined by use of equity method), minority interests, ordinary profit or loss, tax expense, extraordinary items, and net profit or loss. The income statement can be presented in two different formats. • Multiple-step statements (see Exhibit IV-4, the first in a set of ABC, Inc., financial statements that will be used as a running example) separate operating from nonoperating expenses and deduct the matching costs and expenses from each revenue or income category. Intermediate components of income can be highlighted. • Single-step statements (see Exhibit IV-5) deduct the total of all expenses from the total of all revenues in a single step, eliminating classification issues. Since this is a common format for IFRS financial statements, an IFRS statement of profit and loss (P&L) and other comprehensive income is presented as an example. (This is how an income statement is referred to in IFRS.) Note that this is a real statement from a company and contains many complexities that are beyond the scope of this text. Real-world GAAP statements can be equally complex, but Exhibit IV-4 and the other ABC, Inc., statements are somewhat simplified for ease of understanding. Exhibit IV-4: ABC, Inc., Consolidated Multistep Income Statements
Exhibit IV-5: Single-Step Consolidated Statement of P&L Prepared Under IFRS
Additional Items on Income Statement After income from continuing operations, any irregular items should be reported. Here are some irregular items that might be listed: • Discontinued operations. (Note that Exhibit IV-5 shows a discontinued
operation.) Assets to be reported as part of an operation or segment of a business that is or will be discontinued must be clearly distinguished from other activities and assets. Each discontinued operation would report its gain (loss) from continuing operations and its gain (loss) from the disposal of the operation on separate lines. • Extraordinary items. To qualify as an extraordinary item, an event/transaction should be both unusual in nature (highly abnormal for the particular operations, type of business, industry, or geographic region) and infrequent in occurrence (not reasonably expected to occur again, given the particular environment). Some items are always considered extraordinary; others never. Foreign currency gains and losses are never extraordinary; material gains and losses from early extinguishment of debt used to be extraordinary but are now subject to the above tests (unusual in nature, infrequent in occurrence). Extraordinary items are defined by accounting standards and may vary by industry. • Cumulative effect of change in accounting principle. When a different accounting principle is adopted from one in current use, the effect on net income is disclosed separately. Such changes to the principles or the methods of applying them must be justified by management unless externally required.
Balance Sheet (Statement of Financial Position) The balance sheet shows what an organization owns and owes and where the money for the ownership originated. Let’s look at some important terms: • Assets are resources obtained, owned, or controlled by an organization as a result of past transactions or events that will probably result in future economic benefits to the organization. The assets are arranged from most to least liquid. Typical categories include current assets; plant, property, and equipment (PPE); long-term assets; and “other” assets. Current assets include cash and cash equivalents and assets held for sale or expected to be realized in the current operating cycle or within one year of the balance sheet date. Cash, marketable securities, prepaid items, accounts receivable, and inventory are examples. Noncurrent or long-term assets have an
ongoing value and are not readily convertible to cash. “Other” line items include cash and cash equivalents, inventories, accounts receivable, intangible assets, general financial assets, equity method investments, and liquid assets (if material). • Liabilities are an organization’s present obligations due to past transactions or events requiring the future transfer of assets or provision of services—or what companies owe to others. Liabilities are listed in order of the time frame in which they are due. Current liabilities, such as accounts payable or sales commissions payable, are expected to be settled within the normal operating cycle or one year of the balance sheet date and include the portion of long-term debt expected to be paid in this period. Long-term liabilities (e.g., mortgages, bonds) are any liabilities not qualifying as current liabilities or other liabilities (those liabilities that are not material individually). • Equity (shareholders’ equity or net assets) is the ownership interest in an organization’s assets after deducting all of its liabilities. Investments by owners (contributed capital) are increases in an organization’s equity by transfer of assets (or satisfaction or conversion of liabilities) from entities wanting to increase ownership interest (their equity). Undistributed earnings (retained earnings) are the accumulated net incomes (losses) that have been retained in the organization. Distributions to owners (dividends) are decreases in an organization’s equity by transfer of assets to owners. The balance sheet should separately present minority interest, issued capital, and reserves. The relationship between these three topics on the balance sheet is illustrated by the accounting equation: Assets = Liabilities + Equity. Note that income statement accounts are zeroed out at period end and the net income (loss) for the period is recorded in retained earnings.
Balance sheet account balances for asset, liability, and equity accounts are carried forward as beginning balances in the next period. Exhibit IV-6 shows a set of balance sheets for ABC, Inc.
Exhibit IV-6: ABC, Inc., Consolidated Balance Sheets (Statements of Financial Position)
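The accounting equation (Assets = Liabilities + Equity) can be verified mechanically for any balance sheet. Below is a minimal Python sketch with hypothetical balances (not ABC, Inc.’s actual figures):

```python
# Accounting equation check: Assets = Liabilities + Equity.
# All balances are hypothetical.

assets = {"cash": 50_000, "accounts_receivable": 30_000, "inventory": 20_000}
liabilities = {"accounts_payable": 25_000, "long_term_debt": 40_000}
equity = {"capital_stock": 10_000, "retained_earnings": 25_000}

total_assets = sum(assets.values())
total_claims = sum(liabilities.values()) + sum(equity.values())

assert total_assets == total_claims, "balance sheet does not balance"
print(f"Assets {total_assets:,} = Liabilities + Equity {total_claims:,}")
# Assets 100,000 = Liabilities + Equity 100,000
```

This is essentially what a general ledger reconcilement confirms at the level of individual accounts.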
Statement of Shareholders’ Equity (Retained Earnings) The statement of shareholders’ equity (or statement of retained earnings) starts with the balances from the end of the prior period and shows changes due to net income (loss) and dividends for the period or any new issuances or repurchases of stock. The following terminology is associated with this statement. • Capital stock is the par value of issued shares. (Par value is a nominal price per share, set at issuance, usually at a low price to make it unlikely that the stock price will go below this value. No-par stock has a par value of zero.) There may be several classes of stock such as Class A and Class B. Capital stock can have the following subcategories: • Common stock is the default classification for an organization’s public shares granting a portion of ownership. Different classes of common stock will carry different rights, such as voting rights.
• Preferred stock has both debt and equity qualities. Organizations have no obligation to repay the principal amount (equity quality). Although preferred stock usually has a fixed dividend (debt quality), the organization is not obliged to pay the dividend unless it is declared. If a preferred dividend is declared, the dividend can go into arrears. The organization must pay the arrears before paying any common stock dividends. Preferred stock is rare; some organizations have authorized preferred shares, but few issue them. • Treasury stock is stock that has been reacquired by the company. It is broken out separately because a company cannot “own” itself. • Additional paid-in capital is the difference between par value and the amount actually paid for a share of stock when the stock is issued. (It is unrelated to later trading of shares on the stock market.) It is also called contributed capital. • Retained earnings are the undistributed earnings of the organization, calculated using the following formula.
Ending retained earnings = Beginning retained earnings + Net income (− Net loss) − Dividends declared
Note that revenues less expenses equals net income (net loss). The dollar amount of a cash dividend is deducted from retained earnings (not from additional paid-in capital) at the time the board declares the dividend. Exhibit IV-7 shows a statement of shareholders’ equity (here called owners’ equity) for ABC, Inc. Exhibit IV-7: ABC, Inc., Statements of Shareholders’ Equity
Statement of Cash Flows The statement of cash flows is used to show cash levels as of two moments in time: the beginning of the period and the end. It is derived from the income statement and the balance sheet and is used to reconcile these statements. The cash flow statement is therefore always the final step in the process of generating external financial reports. The following terms are used in the statement of cash flows. • Net cash flows from operations is net income converted from an accrual to a cash basis to show the cash effects of transactions, omitting any investing or financing items. Net income includes items that don’t involve actual cash transactions, such as depreciation, and these noncash revenues and expenses must be removed. Paper gains and losses refer to gains and losses that have no effect on operating cash flows in the current period. An increase in current assets such as accounts receivable would be subtracted from net income because, under accrual accounting, these revenues are included in net income even though there was not the same increase in cash. • Net cash flows from investing includes acquisition and disposal of debt and equity securities for investment purposes, from both an issuing and a collection standpoint. Property, plant, and equipment are also included. • Net cash flows from financing involves capital structure transactions, including borrowing and repaying loans from creditors as well as obtaining
and repaying equity capital from/to owners and providing a return on equity. The change in cash and beginning and ending cash balances are also listed on the statement of cash flows. Exhibit IV-8 shows a statement of cash flows for ABC, Inc. Exhibit IV-8: ABC, Inc., Consolidated Statements of Cash Flows
The net increase (or decrease) in cash calculated on the statement of cash flows should match the change in cash on the balance sheet. The beginning cash is taken from the ending cash on the prior year’s balance sheet; adding the net change in cash for the period to this beginning balance gives the ending cash for the current year, which should match the cash listed on the current year’s balance sheet.
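The indirect-method adjustments and the cash tie-out described above can be sketched with hypothetical amounts (Python used purely for illustration):

```python
# Indirect-method operating cash flow and statement of cash flows tie-out.
# All amounts are hypothetical.

net_income = 80_000
depreciation = 15_000              # noncash expense: add back
increase_in_receivables = 10_000   # accrual revenue not yet collected: subtract

cash_from_operations = net_income + depreciation - increase_in_receivables
cash_from_investing = -25_000      # e.g., an equipment purchase
cash_from_financing = 5_000        # e.g., net new borrowing

net_change_in_cash = (cash_from_operations + cash_from_investing
                      + cash_from_financing)
beginning_cash = 40_000            # prior year balance sheet ending cash

ending_cash = beginning_cash + net_change_in_cash
print(ending_cash)  # 105000 -> should tie to the current balance sheet cash
```

If the computed ending cash does not tie to the balance sheet, either the statements do not articulate or a noncash item has been handled incorrectly.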
Statement Interrelationships The financial statements have numerous interrelationships, which can be useful for auditors and analysts when verifying amounts on the statements.
Exhibit IV-9 shows some of these direct relationships, specifically: • Net income from the statement of operations (income statement) is the starting point for both the statement of shareholders’ equity (after prior balance information) and the statement of cash flows. • Totals for each account on the statement of shareholders’ equity are used on the balance sheet. • The final cash and cash equivalents balance on the balance sheet will tie to the statement of cash flows. • Certain items on the statement of shareholders’ equity are also used in the financing activities section of the statement of cash flows. Also, as shown in the walkthrough: • Cash flows from operating activities on the statement of cash flows comprise items listed on the income statement. • Cash flows from investing activities generally comprise changes in long-term assets found on the balance sheet. • Cash flows from financing activities generally comprise changes in long-term liability and equity items found on the balance sheet and the statement of shareholders’ equity. Exhibit IV-9 shows some of the financial statement interrelationships for ABC, Inc., and indicates values that tie between or within the various statements. Exhibit IV-9: ABC, Inc., Statement Interrelationships
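One of these interrelationships, net income flowing through the statement of shareholders’ equity into the balance sheet, can be checked mechanically. The figures below are hypothetical, not ABC, Inc.’s:

```python
# Articulation check: net income from the income statement drives the
# statement of shareholders' equity, whose ending balance must tie to
# the balance sheet. All figures are hypothetical.

net_income = 120_000                   # per the income statement
beginning_retained_earnings = 500_000  # prior-period ending balance
dividends_declared = 30_000

ending_retained_earnings = (beginning_retained_earnings
                            + net_income - dividends_declared)

balance_sheet_retained_earnings = 590_000  # as reported on the balance sheet
assert ending_retained_earnings == balance_sheet_retained_earnings
print("retained earnings tie:", ending_retained_earnings)  # 590000
```

Auditors and analysts perform the same kind of tie-out for the cash balance and for equity items that reappear in the financing section of the statement of cash flows.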
Note that a spreadsheet version of the financial statement examples summarized above is available for download in the Resource Center. This spreadsheet has tabs for each statement as well as a tab with financial ratios.
Uses of the Financial Statements The financial statements are intended to be used by interested parties to assess the amount, timing, and uncertainty of future cash flows or, in other
words, to assess the liquidity and financial viability of an organization. Balance Sheet Uses The balance sheet shows assets, liabilities, and equity as of a moment in time, typically the end of the fiscal year. It can give users an indication of liquidity. Other uses of the balance sheet include: • Calculating rates of return. • Evaluating capital structure. • Assessing solvency, the ability to pay debts as they mature. (This involves examining current assets to estimate whether the organization has enough cash and cash equivalents to meet its short-term obligations as well as considering long-term debt. High long-term debt relative to assets lowers relative solvency.) • Comparing relative inventory levels to show whether the organization has sufficient stock to meet short-term sales goals or if it has an excess of inventory and thus risk of obsolescence. • Determining financial flexibility, the ability of an organization to respond to unexpected opportunities by changing amounts and timing of cash flows, a key element in insolvency risk. • Noting increases in accounts receivable that can show a shift in customers’ ability or willingness to pay. When auditing a balance sheet, evidence such as samples and counts of items should be accumulated as close to the balance sheet date as possible, because items such as inventory or marketable securities are always fluctuating. The balance sheet is also the primary statement auditors use when performing tests of details of general ledger balances, such as physical examination of inventory or vendor monthly statements for accounts payable. General ledger reconcilements are typically considered a key control in overall ICFR. Each balance sheet account should require periodic reconcilement by management (at least monthly but potentially on a more frequent basis if needed based on the nature of the account). The
reconcilement process and resident controls should receive direct assurance coverage emphasis by auditors. The items on the balance sheet would also be confirmed by contacting banks for cash balances, customers for accounts receivable, note makers for notes receivable, and so on. Income Statement Uses Net income, also called net earnings or profit, is useful in total, but when the income statement follows a multistep format, net income can be even more useful, because it identifies operating income or loss that shows the undiluted or unaugmented results of the firm’s primary activities. Each subtotal in a multistep statement can illustrate the results with or without that item. For example, revenues less the cost of goods sold equals gross profit, and this could be used for evaluating a manufacturer or retailer. The income statement can be used to evaluate an organization’s use of debt versus equity (leverage) and its earnings per share and earnings per share assuming dilution (a more conservative estimate) to show profitability to shareholders. In addition, it can serve as a long-term measure of a company’s value. The income statement is also used to determine: • Creditworthiness. • Past performance, benchmarked against competitors. • Future performance potential and risk levels of meeting future cash flows, also benchmarked against competitors. Audits of income statement items are more reliable if the auditor can gather samples from the entire period in question rather than for just the end of the period. Analytical review procedures may also play a key role in overall auditor assurance coverage related to select income/expense line items. Statement of Shareholders’ Equity Uses Comparing equity at the end of the period to the beginning of the period can help form a picture of the organization’s prospects and priorities. If equity increased in the period, what was the primary source of that
increase? New shares? Profitable operations? Similarly, financial statement users sometimes study a company’s dividends over time. Regular dividends are considered the norm, so abnormal decreases or lack of dividends in particular years can be perceived negatively by the market. Statement of Cash Flows Uses Net income is the primary long-term measure of success; cash flow is the primary short-term measure, especially for small or young companies. Positive net income but poor cash flow can still bankrupt an organization. The net increase or decrease in cash is a key liquidity measure. A low cash balance at any point is cause for concern because the organization may not be able to meet immediate obligations. Creditors and other users examine cash flow from operating activities because organizations are better able to repay debt over the long term if they are generating funds for these payments from their operations. The opposite example might be firms that have to borrow more or attract more equity investment to provide cash for debt service, which can be a downward cycle. Cash flows from the investing activities section of the statement can highlight major capital expenditures or, in other words, the organization’s potential and strategy for long-term growth. The cash flow statement can also be used to show if and where cash misappropriation may have occurred. The third section, cash flows from financing activities, can show whether a company’s growth is financed more through operating profits, debt, or equity. Exhibit IV-10 summarizes the uses of financial statements. Exhibit IV-10: Summary of the Uses of Financial Statements
Disclosures/Footnotes A complete set of financial statements can help a reasonably informed user form an opinion as to an organization’s creditworthiness, profitability, or overall value, but the statements alone can be misleading. The notes or disclosures to the financial statements should be considered an integral part of the statements, especially when comparing two or more entities. Financial statements are not complete without disclosures as mandated by the appropriate accounting standards. Disclosures include schedules that drill down to a more useful level of detail than presented on the statements, such as the inventory valuation method that was used (e.g., last-in, first-out [LIFO]) or a schedule of inventory by classification type, as shown in Exhibit IV-11. Exhibit IV-11: Inventory Schedule Presented in the Notes Section
Acceptable Methods of Disclosure
Disclosures are sometimes referred to as footnotes or notes. They are acceptable if made either in the body of the statements (parenthetical explanations), as footnotes, or as notes appended after the statements. Required Disclosures The following are some examples of required disclosures. • Contingent liabilities (loss contingencies). Contingencies are events that have an uncertain outcome but that are likely to be resolved in the future. Gain contingencies, or those contingencies likely to result in a gain, are not reported. Contingent liabilities, such as pending litigation, must be recorded when they can be reasonably estimated and are likely to occur. • Subsequent events. Events that occur after the balance sheet date (usually the end of the fiscal year) but before the financial statement issuance date should be disclosed if material (i.e., useful to users), for example, the sale of a plant. Subsequent events could be additional information that affects the estimates used in preparing the financial statements. If the condition existed at the balance sheet date, the statements are adjusted; if after the balance sheet date, a footnote disclosure is made. • Contractual obligations. Contractual obligations include covenants on liabilities (or assets) requiring that certain balances be maintained, etc. • Accounting policies and valuation methods used. Accounting policies where more than one method is available should be disclosed. These include valuation methods for inventory; depreciation methods; property, plant, and equipment; and other items involving estimates. Disclosure requirements include the accounting method used, the method of valuation, balances by class of assets, and basic assumptions made. • Change in accounting policies. Changes in accounting policies must be disclosed, including an explanation by management of why the new method is preferable. Departures from GAAP or IFRS should be noted. • Capital stock disclosures. 
For each class of stock, the organization should disclose the number of shares authorized, issued (fully paid versus not fully paid), and outstanding (beginning and ending balances), as well as:
• Par value, if any.
• Treasury stock held.
• Nature and purpose of any equity reserves.
• Board actions regarding dividend declarations.
• Off-balance-sheet accounting. Off-balance-sheet accounting (OBSA) methods allow organizations to acquire funds without reporting a related liability on the balance sheet. For example, two or more organizations may jointly create a subsidiary for the sole purpose of financing a project. The subsidiary takes out a construction loan cosigned by the parent companies, and proceeds from the project are used to repay the loan. The parent companies do not record the debt on their balance sheets, improving the appearance of their statements at a high level. Although all OBSA methods used are required to be disclosed in the notes to the statements, allowing such methods can reduce the usefulness of the balance sheet for analysis.
• Other disclosures. Other required disclosures include but are not limited to credit claims (schedule of obligations), claims of equity holders (contracts, senior securities), restricted cash, deferred taxes, lease information, and pension assets and liabilities. Financial reporting requirements in the laws of certain countries (e.g., the U.S. Sarbanes-Oxley Act of 2002 or the Financial Instruments and Exchange Law in Japan, commonly known as J-SOX) may necessitate other disclosures.
Limitations of the Statements

Balance Sheet Limitations

The balance sheet cannot provide the true value of an organization because it cannot include nonfinancial measures, such as the value of employees, in its calculations. Most of the assets and liabilities reported on the balance sheet are valued at their historical cost, which can be significantly different from their current market values. Exhibit IV-12 compares balance sheet values to current market values; the differences can be material. Note also that estimates are used for items such as net accounts receivable, another limitation affecting the usefulness of the statements.
Exhibit IV-12: Asset Valuation Methods

Asset | Balance Sheet Valuation | Market Valuation
Cash | Stated (or face) value | Same
Short- and long-term investments | Held-to-maturity investments: measured on the balance sheet at cost (net of amortized premium/discount); available-for-sale securities: marked to market (adjusted to market value) with a market valuation adjustment (from cost) in the equity section of the balance sheet (other comprehensive income) | Same
Accounts receivable | Stated value or estimated collectible amount | Could be estimated incorrectly
Inventories | Cost (or lower of cost or market if impaired) | Could be understated or overstated due to changes in demand, inflation
Prepaid items | Cost (or historical cost) | Can be understated due to inflation
Property, plant, and equipment (PPE) | Cost less accumulated depreciation; if value is impaired, write down | Often understated due to long-term inflation, demand changes
Equity | Cumulative amount raised in stock issuance plus reinvested net income (retained earnings) | Shares of stock outstanding times price per share
Income Statement Limitations

The primary drawbacks of the income statement are that judgments and estimates may be used, and different accounting methods, principles, and criteria can be applied, making statements from two different firms less comparable. From an internal auditing standpoint, judgments and estimates pose higher risk, because estimates must be tested and the standards against which to test them are themselves subject to interpretation. Differing accounting methods are less of an issue for internal auditors because the method used can be tested for validity; the most likely intervention would be a suggestion to use a more appropriate method. One other limitation of the income statement is that some items are omitted because they are very difficult to value, such as unrealized gains and losses on some securities or even more amorphous concepts such as the value of customer service or customer satisfaction.

Statement of Cash Flows Limitations

Since the statement of cash flows can be prepared in two different ways (direct or indirect), the statements may be difficult to compare. When the direct method is used, a separate schedule is required showing the reconciliation of net income to cash flows from operating activities. These statements can also become fairly complex when items such as the following are included:
• Allowance for doubtful accounts used for accounts receivable
• Purchase of short-term available-for-sale securities (reducing cash but not net income)
• Material noncash transactions (included only in the notes)
• Gains from sale of assets (deducted to avoid double-counting of the gain)

General Limitations of Financial Statements

Voluntary accounting method changes can be used to increase reported net income, but such changes must be disclosed and organizations must report the impact of the accounting changes on earnings. In addition, accounting changes, when adopted, should reflect management's decision or intention (as reflected by the underlying rationale or support provided) to use the adopted changes going forward for the organization's financial reporting. Accounting changes are not intended to function as temporary management tools that would allow management to continually make preferential adjustments; rather, they need to reflect a consistent, conservative application of U.S. GAAP or IFRS. Exhibit IV-13 summarizes the limitations of financial statements.

Exhibit IV-13: Summary of Limitations of Financial Statements
Manipulation of Financial Statement Elements to Conceal Fraud

Although there are accounting principles and standards, various tactics have been applied to financial statements to achieve certain objectives. Earnings and income may be "smoothed." Accounting principles may be interpreted or prioritized differently. This can make it difficult to determine when the line has been crossed from "creative accounting" into fraudulent financial reporting.

According to Standard 1210.A2, "Internal auditors must have sufficient knowledge to evaluate the risk of fraud and the manner in which it is managed by the organization, but are not expected to have the expertise of a person whose primary responsibility is detecting and investigating fraud." See also Standard 1200, "Proficiency and Due Professional Care," and Standard 1210, "Proficiency."

Internal auditors looking for additional information can consult "Consideration of Fraud in a Financial Statement Audit," which gives auditors guidance for detecting material fraud. It emphasizes maintaining professional skepticism, discussing issues with management, applying audit tests unpredictably, and following up on management override of controls. It also elaborates on the elements generally present in a fraud, known as the fraud triangle:
• Incentive or pressure to commit fraud
• Opportunity to commit the fraud
• An attitude or rationalization to justify the fraud

The opportunity to commit fraud generally arises due to inadequate, ineffective, or missing internal controls. The internal auditor adds great value to the organization by providing ongoing internal control assurance, such as identifying control gaps, control deficiencies, or opportunities for control enhancement. Evidence of any one factor is enough to justify greater scrutiny. Furthermore, internal auditors should use their judgment in assessing the risk of misstatement due to fraud according to four risk attributes:
• Type of risk involved
• Significance of the risk (materiality)
• Likelihood of the risk causing a material misstatement
• Pervasiveness of the risk, or whether it applies to statements in general or to a particular class of transactions

Internal auditors must be alert to the possibility of fraud and set risk-based priorities for their tests to detect the three types of deliberate misstatements possible on financial statements: fraudulent financial reporting, misappropriation of assets, and corruption.

Fraudulent Financial Reporting

Fraudulent financial reporting is falsified reporting designed to mislead financial statement users, usually by understating expenses or liabilities or by overstating revenues or assets. It can occur in three ways:
• Manipulation of the accounting records or supporting documents
• Omission of events, information, or transactions
• Intentional misapplication of accounting principles (via altering amounts, estimates, classification, method of presentation, or disclosure)

Auditors should discuss with the audit team likely methods of perpetrating and concealing fraud and likely incentives for management and others to commit and/or rationalize fraud. In general, internal auditors should be alert to:
• Unusual concentrations of authority in one area or individual, especially when coupled with inadequate controls.
• Evasiveness.
• A history of dishonesty.
• Potential for significant financial reward from issuing fraudulent financial reports.

Exhibit IV-14 lists common red flags associated with specific areas of fraudulent financial reporting.
Exhibit IV-14: Fraudulent Financial Reporting Red Flags

Fictitious revenues:
• Unusual growth in income or profitability
• Earnings growth despite negative cash flows in some parts of the organization
• Transactions occurring just before the end of the reporting period
• Sales or income attributed to unknown customers
• Lack of documentation for posted sales
• Fictitious sales accounts

Improper asset valuation:
• Changes made to inventory counts
• Fictitious assets backed by forged documents
• Recording expenses as assets
• Relying on subjective valuations

Concealed liabilities:
• Unposted invoices from vendors
• Unacknowledged and/or unrecorded liabilities
• Unusually low expenses or purchases
• Levels of loss lower than for comparable companies
• Errors that reduce tax liabilities

Improper disclosures:
• Highly complex transactions
• Poor communication of standards about disclosure
• Ineffective boards of directors
Internal auditors conduct analytical review procedures to identify possible indicators of fraud. Analytical reviews (also referred to as analytical auditing procedures or analytical procedures) examine relationships among information. In particular, examining relationships among information that is often overlooked can provide valuable insights. Analytical review procedures are addressed in Part 2 of this learning system, in Section III, Chapter 2. The common element in analytical review procedures is comparison: comparison to prior periods, to budgets or forecasts, of financial versus nonfinancial information, to expected ratios or relationships, to other organizational units, or to other organizations. Additionally, financial statement analysis, including common-size statements and ratios, is used to detect potential fraud.

Fraud will often leave evidence behind because it is difficult to gain access to all of the related accounts at once. For example, "Consideration of Fraud in a Financial Statement Audit" notes that management could record a fictitious receivable and revenue but not be able to manipulate cash. Comparing net income on the income statement to cash flows from operations on the statement of cash flows using analytical procedures should detect such an unusual relationship. Similarly, inventory, accounts payable, cost of goods sold, and sales are all interrelated, and discrepancies between these accounts require further investigation. Sales volume in accounting records may similarly fail to match records maintained by operations.

Misappropriation of Assets

Misappropriation of assets is theft of a material amount of an organization's assets; it includes unauthorized acquisition, use, and/or disposition of assets or resources. Because fraud is usually concealed, auditors should maintain professional skepticism and determine the strength of internal controls over management and others who have the potential means to hide evidence of misappropriation.

Corruption

Corruption includes conflicts of interest such as purchasing or sales schemes (e.g., acts in restraint of free trade), bribery such as invoice kickbacks or bid rigging, illegal gratuities, and economic extortion. Testing for corruption is similar to that described previously for fraudulent financial reporting.
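The net-income-versus-operating-cash-flow comparison described in this section can be sketched in a few lines of Python. The period figures and the flagging threshold below are hypothetical, chosen only to illustrate the procedure:

```python
# Hypothetical period data; a real review would pull these figures from the
# income statement and statement of cash flows for each period.
periods = {
    "Year 1": {"net_income": 1_000_000, "operating_cash_flow": 950_000},
    "Year 2": {"net_income": 1_400_000, "operating_cash_flow": 600_000},
}

def flag_divergence(periods, threshold=0.75):
    """Flag periods where operating cash flow falls well below net income.

    A persistently low ratio can indicate revenue recognized without
    matching cash (e.g., a fictitious receivable). The threshold is a
    matter of auditor judgment.
    """
    return [
        name for name, p in periods.items()
        if p["operating_cash_flow"] / p["net_income"] < threshold
    ]

print(flag_divergence(periods))  # ['Year 2']
```

Here Year 2 shows earnings growth while operating cash flow declines, the kind of unusual relationship that warrants further investigation.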
Depreciation Methods

As noted earlier, depreciation is a method of allocating the cost of tangible assets over the periods of expected use. It reflects the fact that assets decline in value over time; land is not depreciated because, unlike the buildings on it, land rarely declines in value. Depreciating an asset starts with the original cost and then moves on to determining the asset's depreciable base, which is the original cost less its salvage value. Salvage value is the estimated value of an asset if it is sold at the end of its depreciation period or service life; it can be zero. The service life of an asset differs from its functional life because service life reflects not only wear and tear but also the economic viability of the asset given obsolescence.

In Exhibit IV-6, the consolidated balance sheet, depreciation of the enterprise's assets is reflected in the line titled "Net property, plant, and equipment." Net PPE has been calculated by subtracting accumulated depreciation from fixed assets, yielding net fixed assets. Different depreciation methods exist, and accountants should choose the method that most closely fits the use pattern and service life of the asset. This section covers some of the most common methods.
Straight-Line Method

The straight-line depreciation method assumes that the asset has the same usefulness and repair expense in each year. This may be unrealistic, but the method is popular because it is straightforward. The straight-line method determines the amount to depreciate per year by simple division:

Annual Depreciation Expense = (Original Cost - Salvage Value) / Years of Service Life
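As a worked sketch, the division can be expressed directly. The $200,000 cost, $20,000 salvage value, and four-year life are the same figures used in the exhibits later in this section:

```python
def straight_line_depreciation(cost, salvage_value, service_life_years):
    """Annual expense = depreciable base / service life."""
    depreciable_base = cost - salvage_value
    return depreciable_base / service_life_years

# $200,000 cost, $20,000 salvage value, 4-year service life:
print(straight_line_depreciation(200_000, 20_000, 4))  # 45000.0 per year
```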
Activity Method

Unlike the straight-line method, the activity method is based not on the passage of time but on a measure of productivity relative to the total expected productivity of an asset such as production equipment. The measure can be either an output unit (parts produced) or an input unit, such as employee or machine hours. The following formula calculates activity method depreciation:

Depreciation Expense = (Original Cost - Salvage Value) × (Units of Activity This Period / Total Expected Units of Activity)

This method results in faster depreciation in the periods of higher use and vice versa.
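A minimal sketch of the activity method using machine hours as the input unit; the hour figures are hypothetical:

```python
def activity_method_depreciation(cost, salvage_value, total_expected_units,
                                 units_this_period):
    """Depreciable base allocated in proportion to the period's activity."""
    depreciable_base = cost - salvage_value
    return depreciable_base * units_this_period / total_expected_units

# Hypothetical: $200,000 cost, $20,000 salvage value, 90,000 expected
# machine hours over the asset's life, 15,000 hours used this period.
print(activity_method_depreciation(200_000, 20_000, 90_000, 15_000))  # 30000.0
```

A period with double the usage would carry double the charge, which is the behavior the text describes.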
Accelerated Depreciation Methods

Accelerated depreciation methods have a steadily decreasing charge, so assets are depreciated quickly in the early years, which can match the use patterns of many assets. Items that have increasing maintenance costs will have more balanced total costs if an accelerated depreciation method is applied.

Sum-of-the-Years'-Digits Method

The sum-of-the-years'-digits method starts with the depreciable base and reduces it by a fraction based on the number of remaining years of service, calculated as follows:

Depreciation Expense = Depreciable Base × (Remaining Years of Service Life / Sum of the Years' Digits)

For a four-year life, the sum of the years' digits is 4 + 3 + 2 + 1 = 10. Exhibit IV-15 shows how the sum-of-the-years'-digits method is applied. Note that the depreciable base used for calculating the depreciation fraction is kept constant and the book value starts at original cost and ends at salvage value.
Exhibit IV-15: Sum-of-the-Years'-Digits Depreciation

Year | Depreciable Base (USD) | Remaining Life (Years) | Depreciation Fraction | Depreciation Expense (USD) | Book Value at End of Year (USD)
0 | | | | | $200,000
1 | $180,000 | 4 | 4/10 | $72,000 | $128,000
2 | $180,000 | 3 | 3/10 | $54,000 | $74,000
3 | $180,000 | 2 | 2/10 | $36,000 | $38,000
4 | $180,000 | 1 | 1/10 | $18,000 | $20,000
Total | | 10 | 10/10 | $180,000 |
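The schedule in Exhibit IV-15 can be reproduced with a short routine; a sketch:

```python
def syd_schedule(cost, salvage_value, life_years):
    """Sum-of-the-years'-digits schedule as (year, expense, ending book value)."""
    depreciable_base = cost - salvage_value
    digits = life_years * (life_years + 1) // 2   # 4 + 3 + 2 + 1 = 10
    book_value = cost
    rows = []
    for year in range(1, life_years + 1):
        remaining_life = life_years - year + 1
        expense = depreciable_base * remaining_life / digits
        book_value -= expense
        rows.append((year, expense, book_value))
    return rows

for row in syd_schedule(200_000, 20_000, 4):
    print(row)
# (1, 72000.0, 128000.0) ... (4, 18000.0, 20000.0): ends at salvage value
```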
Declining Balance Method

The declining balance method of depreciation starts with the straight-line rate and accelerates it by applying a multiple, generally 1.5 times (a 150% declining balance) or 2 times (a double-declining, or 200%, declining balance) the straight-line rate. Under straight-line depreciation, an asset with a 20-year life would have a 5% per year rate (1/20). In our example, the straight-line rate is 25% (1/4 per year), so a 150% declining balance would be 1.5 × 25% = 37.5% per year, and a double-declining balance would be 50%. Unlike the other methods, this method applies the rate to the beginning book value (starting at original cost), not to the depreciable base, and then depreciates the asset down to the salvage value. (The final year may have a lower depreciation charge than calculated to ensure that the salvage value remains.) As shown in Exhibit IV-16, this method may result in the depreciation ending earlier or later than with straight-line depreciation. Therefore, organizations sometimes switch to straight-line depreciation near the end of an asset's life.
Exhibit IV-16: Declining Balance Depreciation (150% Declining Balance)

Year | Beginning-of-Year Book Value (USD) | Rate | Depreciation Charge (USD) | End-of-Year Book Value (USD)
1 | $200,000 | 37.5% | $75,000 | $125,000
2 | 125,000 | 37.5% | 46,875 | 78,125
3 | 78,125 | 37.5% | 29,297 | 48,828
4 | 48,828 | 37.5% | 18,311 | 30,517
5 | $30,517 | 37.5% | *10,517 | $20,000
Total | | | $180,000 |

* An extra year of depreciation was required over straight-line due to the decreasing charge. The depreciation in Year 5 calculated to U.S. $11,444 but was reduced to U.S. $10,517 to reflect salvage value.
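A sketch of the declining balance computation behind Exhibit IV-16, including the final-year cap at salvage value. The code keeps full precision, so the figures match the exhibit once rounded:

```python
def declining_balance_schedule(cost, salvage_value, life_years, multiple=1.5):
    """Declining balance: apply (multiple / life) to book value each year,
    never depreciating the asset below its salvage value."""
    rate = multiple / life_years              # 1.5 / 4 = 37.5% per year
    book_value = cost
    rows = []
    while book_value > salvage_value:
        # Cap the charge so the ending book value cannot fall below salvage.
        charge = min(book_value * rate, book_value - salvage_value)
        book_value -= charge
        rows.append((charge, book_value))
    return rows

for charge, end_value in declining_balance_schedule(200_000, 20_000, 4):
    print(round(charge), round(end_value))
# Year 1: 75000 125000 ... Year 5's charge is capped so book value ends at 20000
```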
Asset Disposal

Assets can be voluntarily disposed of through sale, exchange, or abandonment or through involuntary conversion, such as a fire. Depreciation is prorated for the portion of the year up to the date of disposal. The depreciated book value of a disposed asset will not always equal its value at disposal, because depreciation is primarily a method of cost allocation and the salvage value was an estimate made in the past. The gain or loss on disposition is an adjustment to correct net income over the period the asset was depreciated. These gains or losses are displayed on the income statement as part of normal operating activities unless a business segment is being disposed of; business segments would need to report the results of continuing and discontinued operations in separate accounts. Losses from involuntary conversion may be reported as extraordinary items if criteria are met. Auditors reviewing asset disposals will be interested in the adequacy and effectiveness of internal controls over these disposals.
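The gain-or-loss adjustment described above reduces to a simple comparison of proceeds with book value; the dollar amounts here are hypothetical:

```python
def disposal_gain_or_loss(cost, accumulated_depreciation, proceeds):
    """Gain (positive) or loss (negative) = proceeds - book value at disposal."""
    book_value = cost - accumulated_depreciation
    return proceeds - book_value

# Hypothetical: equipment costing $200,000 with $150,000 of accumulated
# depreciation (including the prorated partial-year charge) is sold for $60,000.
print(disposal_gain_or_loss(200_000, 150_000, 60_000))  # 10000 (a gain)
```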
Measuring Financial Elements

When measuring the value of financial elements, a key factor is the reliability of the valuation or estimate. Valuation also has a time factor, because money can be invested to earn a return over time.

Historical Cost versus Fair Market Value

According to the principle of historical cost, using the values actually paid or received is more reliable than estimates of current value. For example, until an asset is actually sold, its value to the organization remains uncertain. Aside from the high cost of constantly reassessing the values of all assets and liabilities, such a practice would also make manipulation of financial statement elements easier. The alternative is presenting the fair market value of the items, as is applied to most short- and long-term securities with readily determinable market values that are not intended to be held to maturity (available-for-sale or trading account securities). Fair value, or fair market value, is the amount for which an asset could be acquired (or sold) or a liability incurred (or settled), assuming willing parties that are not involved in a liquidation. For nonmonetary exchanges of assets, the fair value is the current market value of either the asset given up or the asset received in the exchange, whichever is easier to determine.
Time Value of Money

The time value of money is the concept that money received today is worth more than money received tomorrow, because money on hand can be invested to earn a return. This occurs through an investment that earns interest, through an agreed-upon payment for the use of resources, or because the money is used in business to generate profits. The longer the time period, the greater the future value.

Future value is the value of an investment at a particular date in the future, assuming that compound interest is applied. Compound interest is interest computed against the principal plus any previously accrued interest (or, from the lender's perspective, interest not withdrawn). The opposite of future value is present value, which is the value at the present moment of a sum to be received in the future, assuming discounting using compound interest. For assets to be received or liabilities to be paid in the future as part of a contractual agreement, the asset would be recorded at its present value on the financial statements.

Future Value of a Single Sum

Calculating the future value of a sum that is invested for a certain number of periods at a given interest rate involves a formula that factors in the interest rate (or likely earnings rate of a business venture) and the number of periods:

Future Value = Present Value × (1 + i)^n

where i is the interest rate per period and n is the number of periods. In actual practice, this calculation is often done in a spreadsheet application using preset formulas, which allows the practitioner to simply enter the input values and determine the answer. A spreadsheet could also be formatted to include these results as part of a larger analysis or report. Alternatively, a "Future Value of a Single Sum" table can be used, as shown in Exhibit IV-17, which gives the future value of one dollar or other monetary unit for multiple time periods and interest rates. Multiplying any present value by the amount in the table will result in the same answer as the future value formula. Note how all amounts are greater than 1.0, which means that the future value is always higher than the initial present value.
Exhibit IV-17: Future Value of a Single Sum

Future Value of 1 (Future Value of a Single Sum)

Periods | 6% | 8% | 9% | 10% | 11% | 12%
1 | 1.06000 | 1.08000 | 1.09000 | 1.10000 | 1.11000 | 1.12000
2 | 1.12360 | 1.16640 | 1.18810 | 1.21000 | 1.23210 | 1.25440
3 | 1.19102 | 1.25971 | 1.29503 | 1.33100 | 1.36763 | 1.40493
4 | 1.26248 | 1.36049 | 1.41158 | 1.46410 | 1.51807 | 1.57352
5 | 1.33823 | 1.46933 | 1.53862 | 1.61051 | 1.68506 | 1.76234
6 | 1.41852 | 1.58687 | 1.67710 | 1.77156 | 1.87041 | 1.97382
7 | 1.50363 | 1.71382 | 1.82804 | 1.94872 | 2.07616 | 2.21068
8 | 1.59385 | 1.85093 | 1.99256 | 2.14359 | 2.30454 | 2.47596
9 | 1.68948 | 1.99900 | 2.17189 | 2.35795 | 2.55803 | 2.77308
10 | 1.79085 | 2.15892 | 2.36736 | 2.59374 | 2.83942 | 3.10585
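The tabulated factors are simply (1 + i)^n; a short sketch reproduces a table entry and applies it to a principal amount:

```python
def future_value_factor(rate, periods):
    """Future value of 1: (1 + i) ** n, the quantity tabulated in Exhibit IV-17."""
    return (1 + rate) ** periods

def future_value(present_value, rate, periods):
    return present_value * future_value_factor(rate, periods)

# The factor for 5 periods at 8% matches the table entry 1.46933:
print(round(future_value_factor(0.08, 5), 5))   # 1.46933
# $10,000 invested for 5 periods at 8%:
print(round(future_value(10_000, 0.08, 5), 2))  # 14693.28
```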
Present Value of a Single Sum

Calculating the present value of a single sum similarly could use a spreadsheet formula or a manual calculation that requires knowing the interest rate and the future value:

Present Value = Future Value / (1 + i)^n

Another way would be to multiply the future amount by the appropriate factor in Exhibit IV-18. Note how all the values are less than 1.0, which means that the present value, or value today, will always be less than the future value.
Exhibit IV-18: Present Value of a Single Sum

Present Value of 1 (Present Value of a Single Sum)

Periods | 6% | 8% | 9% | 10% | 11% | 12%
1 | .94340 | .92593 | .91743 | .90909 | .90090 | .89286
2 | .89000 | .85734 | .84168 | .82645 | .81162 | .79719
3 | .83962 | .79383 | .77218 | .75132 | .73119 | .71178
4 | .79209 | .73503 | .70843 | .68301 | .65873 | .63552
5 | .74726 | .68058 | .64993 | .62092 | .59345 | .56743
6 | .70496 | .63017 | .59627 | .56447 | .53464 | .50663
7 | .66506 | .58349 | .54703 | .51316 | .48166 | .45235
8 | .62741 | .54027 | .50187 | .46651 | .43393 | .40388
9 | .59190 | .50025 | .46043 | .42410 | .39092 | .36061
10 | .55839 | .46319 | .42241 | .38554 | .35218 | .32197
Annuities

Annuities are an example of using the time value of money. An annuity is a security that requires periodic payments in equal amounts over equal time periods, with interest compounded over the same interval. An ordinary annuity requires payment at the end of each period, while an annuity due requires payment at the start of each period. Due to compounding, these annuities will result in differing values over time. The present and future values of annuities can be calculated using spreadsheet formulas, manual formulas, or the appropriate present and future value tables for an ordinary annuity or annuity due. Note that the full present and future value tables for a single sum and an ordinary annuity are available in the Resource Center.
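A sketch of the ordinary annuity versus annuity due comparison, using the standard closed-form present value factors (the payment figures are hypothetical):

```python
def pv_ordinary_annuity(payment, rate, periods):
    """Present value of equal payments made at the end of each period."""
    return payment * (1 - (1 + rate) ** -periods) / rate

def pv_annuity_due(payment, rate, periods):
    """Start-of-period payments: each payment is discounted one period less,
    so the ordinary annuity value is multiplied by (1 + i)."""
    return pv_ordinary_annuity(payment, rate, periods) * (1 + rate)

# $1,000 per period for 5 periods at 10%:
print(round(pv_ordinary_annuity(1_000, 0.10, 5), 2))  # 3790.79
print(round(pv_annuity_due(1_000, 0.10, 5), 2))       # 4169.87
```

The annuity due is always worth more at a positive interest rate, since each payment arrives one period earlier.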
Accounting for Selected Financial Activities

Now we'll look at how accounting is handled for several types of financial activities: bonds, leases, pensions, intangible assets, research and development, and contingent liabilities.
Bonds

Bonds are debt instruments issued by governments or companies to raise funds from lenders (investors). They are the most common type of long-term asset or liability on the balance sheet. A bond indenture (agreement) details the bond issuer's promise to pay a sum of cash at a set maturity date plus a specific rate of periodic interest on the face value. The face value, or par value, of a bond is the amount owed at maturity. Interest on bonds can be a fixed or, less commonly, a floating rate, usually with semiannual payments, or it can be set at zero. Zero-coupon bonds are bonds that carry zero or very low interest but are instead issued at a substantial discount from par value, resulting in amortized discounts (a tax deduction) to maturity and no payments until maturity.

Types of Bonds

There are a number of types/qualities of bonds:
• Government bonds are issued by government entities and repaid either through general tax revenues (general obligation bonds) or through revenues of the item financed (revenue bonds). They are backed by the full faith and credit of the government and are considered to have less risk.
• Industrial revenue bonds are tax-exempt bonds issued by state or local governments to finance public projects. They are not backed by the full faith and credit of the government.
• Corporate bonds are issued by corporations.
• Debenture bonds have no collateral (they are unsecured).
• First mortgage bonds/mortgage bonds are secured by real estate. Generally, bonds secured by assets, such as mortgage bonds, are considered to have less risk.
• Callable bonds are bonds that the issuer can call and retire before maturity, such as when market interest rates have fallen below the bond's stated rate.
• Subordinate bonds have a lesser claim to cash in a default situation than other bonds.
• Serial bonds have staggered maturity dates.
• Term bonds all have the same maturity date.
• Income bonds pay interest only when the organization has profits.

The following are some other bond features:
• Restrictive covenants. Covenants or indentures are the rights and obligations of the bond issuer and the bondholder, including restrictions on management actions such as not selling receivables (negative covenants) or keeping selected ratios above a benchmark level (affirmative covenants).
• Sinking fund requirements. These are requirements to invest in a bond sinking fund each period, accumulating enough funds to pay off the bonds at maturity.
• Stock warrants. Bonds may be issued with stock warrants (options to purchase stock at a set price for a given time) attached as an incentive.

Valuation, Premiums, and Discounts on Bonds

Bond values are found by determining the present value of the principal amount (par value) and of the interest payments. However, bond values are set in reference to the market interest rate for bonds of a similar risk level. The market value, or yield-to-maturity, of a U.S. $1,000,000 8% bond due in six years at a time when similar bonds are trading at 10% is calculated as follows:
• First, the fixed annual interest payment, or annual coupon, is determined: 8% × U.S. $1,000,000 = U.S. $80,000. (The 8% is called the coupon rate.)
• Next, the market rate is used to calculate the present value of both the principal and the interest payments.
• The present value of the principal (using "Present Value of a Single Sum" tables, as in Exhibit IV-18) = PV of 6 periods at 10% = U.S. $1,000,000 × 0.56447 = U.S. $564,470.
• The present value of the interest payments (an annuity, which uses "Present Value of an Ordinary Annuity" tables, not shown here) = PVOA of 6 periods at 10% = U.S. $80,000 × 4.35526 = U.S. $348,421.

The yield-to-maturity (YTM) or market value is the combination of the present values of each separate cash flow: YTM = U.S. $564,470 + U.S. $348,421 = U.S. $912,891. If this amount were paid for the bond, the purchaser would receive a 10% yield over the six-year period.

Bonds can be issued and resold at par, at a discount, or at a premium. Selling at par means that the stated bond rate and the market rate are equal. Selling at a discount means that the stated bond rate is lower than the market interest rate. The discount on the bond discussed above would be calculated as the face value less the market price of the bonds: U.S. $1,000,000 - U.S. $912,891 = U.S. $87,109 discount on bond issued. In the opposite situation, where the stated bond rate is greater than the current market rate for a similar-risk bond, the stream of interest payments over the remaining life of the bond will be worth more than a similar-risk investment in the market at the current time, so the investor must pay a premium to purchase the bond.

Discounts and premiums must be amortized to the interest expense (income) account over the life of the bond issue. By recording the bond interest expense (income) as the amount of interest paid (received) and recording the amortization of the bond premium or discount, the resulting total interest expense (income), stated as a percentage of the face value or principal amount of the bond, will equal the market rate at the time of issue (purchase).

Note that many bonds pay semiannual coupons. A U.S. $1,000,000 8% bond due in six years with semiannual coupons would actually have 12 periods at 4%, since the rate quoted is an annual rate. All present value calculations would use the 12-period, 4% factors. The semiannual interest payment would be U.S. $40,000 per payment (still U.S. $80,000 per year).
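The two-part valuation above (present value of the principal plus present value of the coupon annuity) can be sketched as follows. Computing with full precision gives roughly U.S. $912,895, a few dollars off the text's $912,891 because the text uses five-decimal table factors:

```python
def bond_price(face_value, coupon_rate, market_rate, periods):
    """Price = PV of the principal + PV of the coupon annuity,
    both discounted at the market rate for similar-risk bonds."""
    coupon = face_value * coupon_rate
    pv_principal = face_value / (1 + market_rate) ** periods
    pv_coupons = coupon * (1 - (1 + market_rate) ** -periods) / market_rate
    return pv_principal + pv_coupons

# The running example: $1,000,000 face, 8% annual coupon, 6 years to
# maturity, 10% market rate.
print(round(bond_price(1_000_000, 0.08, 0.10, 6)))  # 912895

# Semiannual coupons: 12 periods at half the annual rates (4% coupon,
# 5% market per period).
print(round(bond_price(1_000_000, 0.04, 0.05, 12)))
```

Because the coupon rate (8%) is below the market rate (10%), the price comes out below face value, i.e., the bond sells at a discount.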
Leases Leases are contracts providing a lessee (renter) with less than total interest in a property or good owned by the lessor (the lender of the item). The lessor provides the lessee with specific rights in exchange for periodic payments. The GAO study mentioned previously includes lease accounting issues in the category of cost and expense recognition, which is one of the most common reasons for required restatements. Therefore, internal auditors need to be able to determine if leases are properly categorized. From the lessee’s perspective, there are two types of leases: operating and
capital. • Operating leases. Operating leases are generally short-term, pure rental agreements, where the asset and the related liability remain off the lessee’s books (lease expense is debited and cash is credited). • Capital leases. Under Accounting Standards Codification ASC 840-10-251 (for U.S. GAAP), a capital lease is any lease meeting at least one of the following capitalization criteria: • The lease transfers ownership of the property to the lessee by the end of the lease term. • The lease has a bargain purchase option allowing purchase at a significantly reduced price. • The lease term spans 75% or more of the estimated economic life of the leased property. • The present value of minimum lease payments, less executor costs, at the beginning of the lease term equals 90% or more of the excess of the fair market value of the leased property over any investment tax credit to be realized by the lessor. Accountants cannot use this criterion for the last 25% of the estimated economic life of the property. Note that this calculation uses the present value of an annuity due because payments are due at the beginning of each period. Capital leases are called financing leases under International Accounting Standard (IAS) 17, which defines them as leases that transfer substantially all of the risks and rewards of owning the asset, whether title is or is not eventually transferred. Capital leases are similar to purchases: The lessee records an asset (e.g., leased equipment) and a related liability (a short-term liability for the current year’s payments and a long-term liability for payments beyond the current year) when the lease is initiated. The lessee records accumulated depreciation for the asset and divides the lease payment into two accounts: an interest expense account and an obligations under capital leases account. The interest expense portion is calculated by using the lessee’s incremental
borrowing rate or, if known and lower, the lessor’s implicit rate of return on the asset. The total payment less this interest portion equals the amount to record in the latter account, and this amount reduces the total lease obligation each year. (The remaining total lease obligation is used to calculate the interest portion for the next period.) These amounts are all required disclosures, along with the gross amount of assets held under capital leases by major class, contingencies, depreciation, and future lease payments for the next five years. IFRS disclosures under IAS 17 are similar but also include a reconciliation between total minimum lease payments at the balance sheet date and their present value. From the lessor’s perspective, four types of leases exist:
• Operating leases. These are the same as the operating leases discussed above.
• Direct financing leases. In these leases, the lessee uses the lease to finance the purchase of an asset. The lessor keeps title to the asset, but the transaction is otherwise similar to a loan with the asset as collateral. To qualify, the sales price of the asset must equal the cost of the asset. Therefore, the lessor recognizes only interest revenue.
• Leveraged leases. These are direct financing leases where there is an intermediary between the lessor and the lessee (a long-term creditor). To qualify, the lessor must have substantial financial leverage in the transaction.
• Sales-type leases. These are alternative sales tools for manufacturers and dealers of an item. If the sales price (fair value) of the asset is more (or less) than the cost of the asset (i.e., generates a profit or loss), a lease can qualify as a sales-type lease.
The FASB has issued Accounting Standards Update (ASU) No. 2016-02, “Leases (Topic 842),” which changes lease accounting effective after December 15, 2018, for public companies.
All other companies have two dates: December 15, 2019, for fiscal year reporting and December 15, 2020, for reporting on interim periods within a fiscal year. The FASB states that
lessees need “to recognize on the balance sheet the assets and liabilities for the rights and obligations created by those leases.” Lessor accounting is largely unaffected. While the new methods retain both operating and capital leases, after the dates above, operating leases with lease terms of more than 12 months need to be recognized on the balance sheet, with lease payments listed as a lease liability and the asset listed as a right-of-use asset for the lease term. Leases of 12 months or less can still be recognized as a straight-line expense. The practical result of retaining these two lessee categories is that there will be little change to the statement of cash flows or to the statement of comprehensive income (a statement that differs somewhat from the income statement, intended to provide a more holistic view of income).
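The 90% capitalization criterion above turns on the present value of an annuity due, since payments fall at the beginning of each period. A minimal sketch follows; the payment, rate, and fair value figures are hypothetical, not from the text:

```python
def pv_annuity_due(payment, rate, periods):
    """Present value of an annuity due: payments occur at the START of each
    period, so the ordinary-annuity PV is multiplied by (1 + rate)."""
    ordinary = payment * (1 - (1 + rate) ** -periods) / rate
    return ordinary * (1 + rate)

def meets_90_percent_test(payment, rate, periods, fair_value, investment_tax_credit=0.0):
    """Capitalization test: PV of minimum lease payments (net of executory
    costs) >= 90% of (fair market value - lessor's investment tax credit)."""
    return pv_annuity_due(payment, rate, periods) >= 0.9 * (fair_value - investment_tax_credit)

# Hypothetical lease: $10,000/year for 5 years at an 8% incremental borrowing rate
pv = pv_annuity_due(10_000, 0.08, 5)  # about $43,121
print(round(pv, 2), meets_90_percent_test(10_000, 0.08, 5, fair_value=45_000))
```

With a $45,000 fair value, 90% is $40,500, so the $43,121 present value triggers capitalization; at a $50,000 fair value the same payments would fail this criterion (though one of the other three could still apply).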
Pensions
While the number of organizations offering pensions is shrinking, it is still common for government employees to have a pension. Pensions are deferred employee compensation to be paid during retirement. A pension fund serves as a legally separate intermediary between the sponsor (the employer) and the beneficiaries. Organizations record pension expense over the duration of the employee’s service. Two basic types of plans are defined contribution plans and defined benefit plans.
• A defined contribution plan defines the required annual contribution to the plan but makes no guarantee of the ultimate benefit level paid. Contribution formulae reflect years of service and other factors such as profit sharing. The employer’s annual pension expense equals the calculated required contribution. A liability is recorded only if the employer fails to fund the plan to this level. An asset is recorded for overpayments.
• Defined benefit plans are more complex, because the employer promises a specific level of benefits starting at retirement. Actuaries determine the minimum pension liability by calculating the actuarial present value of expected minimum payments to be made upon retirement (actuarial
because compensation for deaths, early retirements, etc., is included). To do this, they use one of three base measurements of service cost:
• Vested benefit obligation—Benefits are calculated only for vested employees, disregarding future salary increases.
• Accumulated benefit obligation—Benefits are calculated for all employees regardless of vesting, disregarding salary increases.
• Projected benefit obligation—Benefits are calculated for all employees regardless of vesting, and future salary levels are accounted for. (This is the preferred method, as it is the most conservative and provides the largest liability balance.)
The service cost is a liability that accrues interest expense based on a settlement rate determined by actuaries, reflecting the interest rate that would be needed to settle the obligations if the plan were terminated. The pension expense is then reduced by a positive (or increased by a negative) actual return on plan assets (dividends, interest earned, and market value changes, calculated as the net change in the market value of the pension fund plus benefits paid and less plan contributions). After certain other additions or reductions (e.g., amortization of prior service cost), the result is the period’s net pension expense. Separately, if the accumulated benefit obligation is greater than the fair value of the plan assets, the employer records a liability. IAS 19, “Employee Benefits,” requires reporting the present value of defined benefit obligations and the plan assets’ market value at each balance sheet date. It encourages the use of an actuary in calculating obligations.
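A highly simplified sketch of the main expense components described above (all figures hypothetical; real calculations also involve amortization items determined by actuaries):

```python
def net_pension_expense(service_cost, obligation_begin, settlement_rate, actual_return):
    """Simplified net periodic pension expense: service cost plus interest on
    the beginning obligation at the settlement rate, less the actual return
    on plan assets (a positive return reduces expense)."""
    interest_cost = obligation_begin * settlement_rate
    return service_cost + interest_cost - actual_return

# Hypothetical: $50,000 service cost, $400,000 beginning obligation,
# 6% settlement rate, $30,000 actual return on plan assets
expense = net_pension_expense(50_000, 400_000, 0.06, 30_000)
print(expense)  # 44000.0
```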
Intangible Assets
Accounting problems frequently surround improper recognition of intangible assets. Monitoring, valuing, and auditing intangibles can be a challenge, because many details of intangibles are subject to interpretation or estimation. Intellectual property such as patents is a key area for internal audit attention because cost and expense recognition could be improperly recorded. Intangibles are assets that have no physical substance; financial instruments are excluded by definition. While financial instruments get their value from their claim on resources, intangibles get their value from the benefits and
rights the organization gets from their use. Two basic types of intangibles exist:
• Purchased intangibles are recorded at their acquisition cost plus any costs required to make the intangible ready for use (e.g., legal fees).
• Internally developed intangibles expense costs as they are incurred, including all research and development costs. However, directly traceable costs such as legal fees can be capitalized.
The following are examples of intangible assets:
• Copyrights are government protections granted to authors and artists of all types. They expire 70 years after the author’s or artist’s death.
• Patents are exclusive rights to sell, use, or manufacture something for a period of 20 years.
• Trade names and trademarks are symbols or words that distinguish an organization or product. They remain in force while in continuous use.
• Contracts are arrangements guaranteeing the rights and obligations between parties, including franchises, licenses, and service contracts.
• Leases are intangibles but are classified as part of property, plant, and equipment (PPE).
• Customer intangibles are data with value regarding customers, such as customer lists and contracts with customers.
• Goodwill is the excess of the price paid for a subsidiary over the fair value of the subsidiary’s net assets. From a purchase accounting perspective, goodwill is not amortized but is instead tested for impairment each year, with a write-down recognized if the value has declined.
Intangibles can have either a limited or an indefinite life. Limited life intangibles are not expected to be used indefinitely (e.g., high annual expense or projected obsolescence), are attached to wasting resources (e.g., rights to use a mine), or are bound by law or contract to a finite life (or
renewal is prohibitively expensive). Indefinite life intangibles are expected to contribute to cash flows for an indefinite period because they are not restricted by contract, law, or regulation to a finite life. Limited life intangibles amortize their cost over the period of expected use. (Amortization is analogous to depreciation, except that it is used for intangible assets.) Like tangible assets, limited life intangibles can have a residual value if they can be sold at the end of their use, and this amount would not be amortized. The system used to determine the amortization amount per period should reflect the asset’s pattern of consumption. Each period’s amortization amount is treated as an expense by crediting the proper asset account or a separate accumulated amortization account per asset class. Limited life intangibles are tested for impairment by means of a recoverability test. If the sum of expected future cash flows from the asset is less than the carrying amount, an impairment exists and a write-off of the impaired amount is required. Different methods are used to measure the loss. Indefinite life intangibles (e.g., goodwill) are not amortized but are instead tested for impairment annually using the fair value test. If the fair value is less than the carrying amount of the intangible, then the asset is impaired by this amount and should be written down. The classification, treatment, and amortization of intangibles are guided by accounting rules and standards (e.g., GAAP or IFRS).
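The straight-line amortization and recoverability test described above can be sketched as follows (the patent cost, life, and cash flow figures are hypothetical):

```python
def annual_amortization(cost, residual_value, useful_life_years):
    """Straight-line amortization of a limited life intangible; any residual
    value is not amortized."""
    return (cost - residual_value) / useful_life_years

def is_impaired(carrying_amount, expected_future_cash_flows):
    """Recoverability test: an impairment exists when the sum of expected
    future cash flows from the asset is less than its carrying amount."""
    return sum(expected_future_cash_flows) < carrying_amount

# Hypothetical patent: $200,000 cost, no residual value, 10-year useful life
print(annual_amortization(200_000, 0, 10))  # 20000.0 per year

# Carrying amount $120,000 vs. only $80,000 of expected future cash flows
print(is_impaired(120_000, [30_000, 30_000, 20_000]))  # True -> write-off required
```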
Research and Development
Research and development (R&D) consists of research, which is a methodical process or search designed to discover new knowledge, and development, which is the use of research to develop new processes and products or significantly improve existing ones. Incremental improvements to processes or products are not considered to be R&D. R&D costs may be expensed or capitalized as they are incurred, depending on the accounting standards used: U.S. GAAP prescribes expensing R&D costs as incurred, while IFRS (IAS 38) requires expensing research costs but capitalizing development costs once specified criteria, such as technical feasibility, are met.
R&D is not considered an intangible asset. If materials, PPE, or purchased intangibles are used in R&D, they are expensed unless they have an alternative future use, in which case they are treated as normal inventory, PPE, etc. Personnel and contract services are always expensed. Indirect costs are expensed to the extent that they can be reasonably allocated, except general and administrative costs, which are expensed as R&D only if clearly related.
Contingent Liabilities
Contingencies are situations or circumstances with an uncertain potential for gain or loss, called gain or loss contingencies; they are tied to certain future events that may or may not occur. Auditors must determine completeness, or whether all contingencies are recognized. The most common contingencies are pending lawsuits (e.g., discrimination, civil rights, consumer privacy, rate hearings for regulated industries). Following the accounting principle of conservatism, loss contingencies are recorded but gain contingencies are not. Contingent liabilities satisfy two criteria: The amount of the loss can be reasonably estimated, and all available information implies that it is probable that a liability had been incurred as of the financial statement date. The FASB gives the term “probable” a specific meaning as one of three possible likelihood states for contingencies:
• “Probable” means that the event(s) are likely to happen.
• “Reasonably possible” means that the likelihood of occurrence is somewhere between probable and remote.
• “Remote” means that there is only a slight chance the event(s) will occur.
Probable events that cannot be reasonably estimated in value should not be recorded, but a recorded contingent liability could still have an uncertain payee or date of payment. Examples of loss contingencies include assessments, environmental liabilities, product recalls, the collectability of accounts receivable, and warranties, guarantees, or coupons.
Topic B: Advanced and Emerging Financial Accounting Concepts (Level B) This topic looks at a number of advanced and emerging financial accounting concepts: earnings per share; dividends; deferred taxes; equity security investments; partnerships, combinations, and consolidations; consolidated financial statements; and foreign currency transactions.
Earnings per Share (EPS)
Earnings per share (EPS) must be disclosed on the income statement. This section first examines how to calculate basic earnings per share and then compares it to earnings per share with a complex capital structure.
Basic Earnings per Share
Basic earnings per share is calculated as income available to common shareholders per weighted average share of common stock outstanding:

Basic EPS = (Net income − Preferred dividends) / Weighted average common shares outstanding

Let’s look at an example of this calculation using amounts from the financial statements shown in the prior topic. (The statements are rounded to millions, but whole numbers are used here so that the calculations are more precise.) ABC, Inc., had 10 million shares at the start of 2018 and issued 1 million more shares on June 1 (after 5 months). Weighted average shares outstanding is determined by multiplying each number of shares outstanding by the prorated number of months outstanding (months outstanding/12 months) and then summing the amounts, in this case (10 million × 5/12) + (11 million × 7/12), which results in a weighted average of 10,583,333 shares. Basic EPS is then income available to common shareholders divided by this weighted average.
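The weighted-average computation can be reproduced as a quick sketch:

```python
def weighted_average_shares(periods):
    """periods: list of (shares_outstanding, months) tuples covering 12 months."""
    assert sum(months for _, months in periods) == 12
    return sum(shares * months for shares, months in periods) / 12

def basic_eps(net_income, preferred_dividends, weighted_avg_shares):
    """Income available to common shareholders per weighted-average share."""
    return (net_income - preferred_dividends) / weighted_avg_shares

# ABC, Inc.: 10 million shares for 5 months, then 11 million for the remaining 7
was = weighted_average_shares([(10_000_000, 5), (11_000_000, 7)])
print(round(was))  # 10583333
```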
Note that ABC, Inc., had no preferred dividends. (These are fairly uncommon.) Notes to the financial statements include a schedule disclosing weighted average common shares outstanding, as in Exhibit IV-1. Exhibit IV-1: Note Disclosing Earnings and Dividends Per Share
EPS with a Complex Capital Structure
A firm may have securities such as preferred stock or bonds that can be converted into common shares at the option of the owner. These securities are considered dilutive because they will increase the number of shares outstanding and therefore reduce earnings per share. Other items that could dilute EPS include the impact of warrants and other options. (Warrants are certificates allowing the holder to purchase stock shares at a set price over a given period of time.) Companies with dilutive securities must report diluted earnings per share on the income statement so that investors can judge the impact of these items on EPS. Exhibit IV-1 shows that in Year 6, 199 incremental common shares were included in diluted EPS.
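As a sketch of the mechanics (all figures hypothetical, and ignoring numerator adjustments such as the add-back of convertible bond interest), diluted EPS enlarges the denominator by the incremental common shares from dilutive securities:

```python
def diluted_eps(income_available_to_common, weighted_avg_shares, incremental_shares):
    """Diluted EPS: the denominator grows by the incremental common shares
    that would result from conversion of dilutive securities, so diluted EPS
    is always at or below basic EPS."""
    return income_available_to_common / (weighted_avg_shares + incremental_shares)

basic = 1_000_000 / 400_000                          # 2.5 basic EPS
diluted = diluted_eps(1_000_000, 400_000, 100_000)   # 2.0 diluted EPS
print(basic, diluted)
```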
Dividends
Dividends are a distribution of profits, not an expense. The dividend payout ratio is the percentage of earnings paid out in cash to shareholders. Dividend policy balances income to be reinvested in the company as retained earnings versus income distributed to shareholders as dividends. For internal auditors, the primary tasks involving dividends would be to audit
the registrar and the actual disbursement process for proper application of internal controls. Key dates for dividends include: • Date of declaration—The board of directors announces a dividend, creating a liability. (Retained earnings are debited; dividends payable are credited.) • Ex-dividend date—Subsequent stock purchases do not benefit from a previously declared dividend. • Date of record—Stockholders who own stock on this date will receive the dividend. • Date of payment—This is the date when the dividend will be paid. (Dividends payable are debited; cash is credited.) There are several types of dividends: • Cash dividends are the most common type of dividend, paid in cash. • Liquidating dividends are paid as a return of the stockholders’ investment rather than from retained earnings (e.g., a liquidation). • Property dividends are paid in the form of property, investments, etc., accounted for at the fair value of the assets given. • Stock dividends pay shares of stock, reclassifying a portion of retained earnings as paid-in capital instead of reducing total assets or shareholders’ equity. Stock splits aren’t dividends but are intended to reduce the stock price, resulting in no net change to stockholders’ equity. A three-for-one stock split would triple each shareholder’s shares and divide the par value by three.
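The three-for-one split arithmetic mentioned above can be checked directly (the share count and par value below are hypothetical):

```python
def stock_split(shares, par_value, ratio):
    """An N-for-1 split multiplies shares outstanding by N and divides par
    value by N, leaving total par value (and stockholders' equity) unchanged."""
    return shares * ratio, par_value / ratio

# Hypothetical: 1,000,000 shares with a $3.00 par value, split 3-for-1
shares, par = stock_split(1_000_000, 3.00, 3)
print(shares, par, shares * par)  # 3000000 1.0 3000000.0 -> total par unchanged
```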
Deferred Taxes
Organizations often choose to use one set of regulations for determining taxable income for tax authorities and a different set of standards (e.g.,
GAAP or IFRS) for determining pretax financial income for financial reporting. When this occurs, pretax financial income on the financial statements will differ from taxable income on the tax return. Some of these differences will be resolved in later periods; these are called temporary differences, meaning that they will reverse in future periods. Items that cause temporary differences can include depreciation, long-term construction contracts, goodwill amortization versus impairment testing, estimated costs such as warranty expense, and other cash basis versus accrual basis differences when cash basis is required for taxes. Other differences, called permanent differences, will never reverse themselves. Examples of permanent differences include deductions for dividends received (income recognized under financial reporting standards but partly or wholly nontaxable under tax codes) and government tax exemptions and special deductions beyond those allowed by GAAP or IFRS; an effective tax rate change can also make a portion of a temporary difference permanent. A permanent difference would have no deferred tax consequences because it affects only the period in which it occurs. A temporary difference is a net difference between taxable income and pretax financial income that results in a deferred tax liability (a reduction in current taxes paired with an increased taxable amount in future periods) or a deferred tax asset (an increase in current taxes paired with a decreased taxable amount in future periods). For example, if the tax code requires the use of a modified cash basis but financial accounting requires accrual accounting and revenue recognition, uncollected sales on account would be included in pretax financial income but not in taxable income. If in year 1 uncollected sales were U.S. $10,000 for financial reporting but U.S. $0 for taxable income, with a 40% tax rate, year 1 taxes payable would be U.S. $4,000 less than tax expense as recorded on the financial statements.
A deferred tax liability for U.S. $4,000 would also be recorded. If in year 2 those accounts receivable were all paid in cash, the year’s taxable income would be increased by U.S. $10,000 and the U.S. $4,000 liability would be removed when the increased tax was paid. These differences can span multiple years, with portions being reversed each year.
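The uncollected-sales example above works out as follows (a sketch of the arithmetic only):

```python
def deferred_tax_balance(pretax_financial_income, taxable_income, tax_rate):
    """Temporary difference x tax rate: positive means a deferred tax
    liability (financial income exceeds taxable income); negative means a
    deferred tax asset (the reverse)."""
    return (pretax_financial_income - taxable_income) * tax_rate

# Year 1: $10,000 of sales on account recognized for books but not yet taxed, 40% rate
dtl = deferred_tax_balance(10_000, 0, 0.40)
print(dtl)  # about 4000 -> deferred tax liability, reversed in year 2 when collected
```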
Equity Security Investments
When organizations purchase equity interests in other organizations in the form of common or preferred stock, accounting treatment for the investment depends on the amount of influence the investor can exercise over the investee, as explained below and summarized in Exhibit IV-2. If the investor has a passive interest (less than 20% ownership):
• The investment is recorded at cost.
• The investment is valued using the fair value method. The fair value method compares the cost to each security’s market value, and the net gain or loss on all similar securities is recorded to unrealized holding gain or loss and to securities fair value adjustment.
• If the shares are available-for-sale (the intended use is flexible), unrealized holding gains and losses are recorded in other comprehensive income and as a separate component of stockholders’ equity.
• If the shares are held as trading shares (planned to be sold in the near future), they are reported at fair value on the balance sheet, and unrealized holding gains and losses are recognized as part of net income.
If the investor has significant influence (between 20% and 50% ownership):
• The investment is recorded at cost, adjusted every period by the investor’s share of the investee’s net income and dividends.
• The equity method is required. This method acknowledges a relationship with substance between investor and investee. The investor’s proportional share of the investee’s net earnings increases the investment carrying amount, while net losses and dividends paid to the investor decrease the carrying amount. (Dividends reduce the investee’s owners’ equity.)
• Unrealized holding gains and losses are not recognized.
If the investor has a controlling interest (greater than 50% ownership):
• Consolidated financial statements are required. The investee is considered
a subsidiary and the investor the parent company. Statements treat both as if they were a single entity. • Unrealized holding gains and losses are not recognized.
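The equity method adjustments described above can be sketched with hypothetical figures:

```python
def equity_method_carrying_amount(cost, ownership_pct, investee_net_income, investee_dividends):
    """Equity method: carrying amount = cost + investor's share of investee
    net income - investor's share of dividends received."""
    return (cost
            + ownership_pct * investee_net_income
            - ownership_pct * investee_dividends)

# Hypothetical: a 30% stake bought for $100,000; the investee earns $50,000
# and pays $20,000 in total dividends during the year
carrying = equity_method_carrying_amount(100_000, 0.30, 50_000, 20_000)
print(round(carrying))  # 109000: cost + 15,000 share of income - 6,000 dividends
```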
Exhibit IV-2: Equity Security Investments

Ownership Level | Valuation/Reporting | Unrealized Holding Gains and Losses
Passive interest: <20% | Fair market value | Recognized (in other comprehensive income for available-for-sale shares; in net income for trading shares)
Significant influence: 20%–50% | Equity method | Not recognized
Controlling interest: >50% | Consolidated financial statements | Not recognized
Partnerships, Combinations, and Consolidations
Companies form partnerships and mergers to increase their influence over a market or over the company in which they are purchasing an interest. Benefits include economies of scale and other efficiencies and cost savings, diversification for financial stability, and a stronger international presence. Antitrust laws exist to prevent mergers that would significantly reduce competition. There are various types of partnerships, mergers, and consolidations:
• Partnerships. A partnership (business type) is an association between two or more persons or corporations to be co-owners in a business for profit, such as a law firm. General partnerships carry unlimited personal liability for the actions of all partners; limited liability partnerships (LLPs) limit liability to each partner’s own actions. A joint venture is an agreement between two separate organizations to accomplish a single project together.
• Business combinations and mergers. A business combination is when the operations of two or more organizations are brought under common control. A friendly takeover is performed when the boards of directors of the organizations work out a mutually agreed-upon deal to present to shareholders; in a hostile (aggressive) takeover, when the investee attempts to avoid takeover, it is likely that the investor will make a tender offer, which bypasses the organization and works directly with shareholders. In a horizontal merger, organizations in the same industry merge; a vertical merger is a supply chain merger (customers and suppliers). Mergers between publicly held companies are accomplished through stock purchases. The purchaser tenders an offer for stock at a certain price. In a two-tier tender offer, the purchaser offers to buy stock at a certain price per share until a certain point (e.g., acquisition of controlling interest) is reached. After this point, the offer price drops. The higher price rewards sellers who move quickly but also moves the stock acquisition along more quickly.
• Special purpose entities (SPEs) or variable interest entities (VIEs). A special purpose, or variable interest, entity is a subsidiary created by the parent company to perform a specific task, often part of an off-balance-sheet accounting arrangement. Many organizations have misused SPEs, such as Enron’s use of multiple SPEs to hide massive amounts of debt. Enron also had to divert income to failing SPEs, creating liabilities unknown to shareholders.
Internal audit activities should include SPEs in their audit universe and periodic risk assessments. FASB ASC 810-10-15 includes provisions for the use of SPEs.
Asset acquisition and stock acquisition are two methods of obtaining ownership of an organization. In an asset acquisition, 100% of the investee’s assets must be purchased; in a stock acquisition, only 50% or more of the common stock must be owned. In the former case, acquisition results in the investee ceasing to be a business entity, and all accounts are rolled into the investor’s books as a statutory merger or statutory consolidation. A statutory consolidation results in a new corporation that issues new common stock, replacing both old stocks. In a statutory merger, one survivor organization keeps its stock and the other subsidiaries convert their stock into shares of the survivor. In a stock acquisition, the investee can be maintained as a separate entity with consolidated financial statements.
Purchase Accounting All business combinations under GAAP and IFRS must use purchase accounting. On the consolidated balance sheet, purchase accounting records assets and liabilities at their fair market values, recording the excess of cost over fair market value as goodwill. The acquired retained earnings of the investee are not recognized. Any equity securities issued as consideration are recorded at the issuer’s fair market value. On the balance sheet, depreciable or amortizable assets have their excess of market over book values depreciated or amortized, reducing future earnings on the income statement. The investee’s earnings subsequent to the date of acquisition are included in the investor’s books; prior earnings are not recognized. Direct expenses from the combination are included in the purchase price of the investee company and are therefore capitalized by charging them to an asset account, while indirect costs (e.g., a merger department, manager time and overhead allocated to the merger that would have been incurred even without the merger) are expensed as incurred. Finally, any security issuance costs are used to reduce the value of the security on the books.
Consolidated Financial Statements
Consolidated financial statements present the results of operations and the financial position of a parent and its subsidiaries as if they were a single entity. The subsidiaries can remain as legally separate entities.
Steps in consolidation include:
1. Determine the ownership percentage and minority interests of each subsidiary.
2. Combine the assets, liabilities, revenues, and expenses of each organization. The investee’s net assets multiplied by the investor’s ownership percentage equals the investor’s share of the subsidiary’s book value. Differences between the purchase price and this book value are allocated to the appropriate underlying asset or liability accounts. Accounts valued at historical cost can be increased until they reach market value. If all such accounts are marked up to market value, any excess becomes goodwill. When cost exceeds book value, the assets are adjusted upward or liabilities downward; when book value exceeds cost, the assets are adjusted downward or liabilities upward. This step requires estimates, an area of concern for internal audit.
3. Record eliminating entries to reverse all intercompany transactions and balances. Investment accounts as well as all stockholders’ equity from prior partial acquisitions must be eliminated to avoid double-counting. Eliminating entries are discussed in more detail after Exhibit IV-3.
4. Issue consolidated statements. The four primary statements are required. Adjusting entries could be required to reverse entries made by one organization and not another.
Exhibit IV-3 shows an example of a consolidated balance sheet working paper. It assumes that InvestorCo purchased 80% of InvesteeCo’s stock for U.S. $103,600. InvesteeCo’s book value is U.S. $112,000, and 80% of this is U.S. $89,600, thus making a U.S. $14,000 excess of cost over book value. This amount is used to adjust plant and equipment upward.
Exhibit IV-3: Working Paper for Consolidated Balance Sheet
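The InvestorCo/InvesteeCo arithmetic from the example above can be verified with a short sketch:

```python
def excess_of_cost_over_book_value(purchase_price, investee_book_value, ownership_pct):
    """Excess of cost over the acquired share of book value; allocated to
    undervalued assets or liabilities, with any remainder becoming goodwill."""
    acquired_book_value = investee_book_value * ownership_pct
    return purchase_price - acquired_book_value

# InvestorCo pays $103,600 for 80% of InvesteeCo, whose book value is $112,000
excess = excess_of_cost_over_book_value(103_600, 112_000, 0.80)
print(round(excess))  # 14000 -> used to adjust plant and equipment upward
```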
Eliminating Entries
Note that Exhibit IV-3 includes a section for eliminating entries that would not appear in any of the journal entries. Eliminating entries help avoid presenting redundant information between a parent and its subsidiaries for stock ownership of a subsidiary and intercompany debt, revenue, and expenses. For stock ownership, the portion of the subsidiary’s stock purchased by the parent is the parent’s asset, and this portion is eliminated from the subsidiary’s balance sheet and statement of equity. The remaining portion not acquired is reported in the subsidiary’s equity account. Intercompany debt occurs when a parent makes a loan to a subsidiary. The parent’s financial statement lists this as an asset (a note receivable), and the subsidiary lists it as a liability (a note payable). Elimination entries remove both the asset and the liability to show that this is essentially a cash transfer between the entities.
Intercompany revenues and expenses occur when the entities sell products or services to one another, enter into leasing arrangements, or otherwise transfer assets. For example, company X sells its subsidiary, company Y, saleable goods for $400,000 on May 1. The cost of these goods for company X was $250,000. The following journal entries record the sale:
May 1—Company X’s journal entries to record the sale to company Y
Dr. Cash $400,000
  Cr. Sales—intercompany $400,000
Dr. Cost of goods sold—intercompany $250,000
  Cr. Inventory $250,000

May 1—Company Y’s journal entry to record the purchase of goods
Dr. Inventory $400,000
  Cr. Cash $400,000
Now, assume that on November 1 company Y sold the goods at a markup:
November 1—Company Y’s journal entries to record the sale of goods originally purchased from company X
Dr. Cash $500,000
  Cr. Sales $500,000
Dr. Cost of goods sold $400,000
  Cr. Inventory $400,000

To eliminate double-counting on the consolidated statements, the
transactions are essentially reversed at the balance sheet date, December 31. The balances to be eliminated are company X’s U.S. $400,000 credit to sales—intercompany (reversed with a debit in the same amount) and a matching U.S. $400,000 of cost of goods sold: company X’s U.S. $250,000 debit to cost of goods sold—intercompany plus the U.S. $150,000 intercompany markup carried in company Y’s cost of goods sold, reversed together with a single credit:
December 31—Final elimination entry
Dr. Sales—intercompany $400,000
  Cr. Cost of goods sold—intercompany $400,000
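The net effect of these eliminations on the consolidated totals can be checked arithmetically, using the amounts from the example above:

```python
# Company X sold goods costing $250,000 to company Y for $400,000; Y resold
# them to outsiders for $500,000. Combined (unadjusted) totals double-count
# the intercompany sale:
combined_sales = 400_000 + 500_000   # 900,000
combined_cogs = 250_000 + 400_000    # 650,000

# The December 31 elimination removes $400,000 from both sales and COGS:
consolidated_sales = combined_sales - 400_000
consolidated_cogs = combined_cogs - 400_000

print(consolidated_sales, consolidated_cogs)  # 500000 250000
# Consolidated gross profit equals the $250,000 the group earned from outsiders
assert consolidated_sales - consolidated_cogs == 250_000
```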
Foreign Currency Transactions Organizations conducting international trade or those with branches or subsidiaries located in other countries will likely need to conduct transactions in multiple currencies. This results in the need for foreign currency exchange and also raises issues of valuation for multinational organizations, such as currency fluctuations between the primary and other currencies. For example, consider the situation of a U.S. company’s investment in a European venture denominated in euros. If the value of the euro declines, the value of the U.S. company’s investment would decline as well. If a European company invests in a U.S. enterprise with U.S. dollars and the dollar declines in value, the value of the European company’s investment declines. International taxation and other legal and political factors also must be accounted for but are not covered here.
Foreign Exchange (FX) Exposure
Foreign exchange (FX) rates are quotations of the number of units of one currency needed to exchange for a unit of a different currency. Exchange rate risk is the volatility of exchange rates between an organization’s primary currency and any currencies used by its subsidiaries and trading
partners. The main uncertainty is the actual amount of money that will be received from or paid for any foreign-denominated transaction. Translation exposure is the risk that fluctuations in exchange rates will affect reported income. Hedging is a common means of offsetting the risk of translation exposure. Economic exposure is the risk that fluctuations in exchange rates will affect the future cash flows or value of the organization. Measuring economic exposure requires determining the cash inflows and outflows for each currency. Then an organization could work to reduce the total number of cash flows through multiple means such as consolidating payments or matching cash inflows to cash outflows denominated in the same currency. Exchange rates are listed in reference to a primary currency. Exhibit IV-4 uses the U.S. dollar (USD) as the primary currency, and other currencies are listed in relation to it. The “USD ($) Equivalent” column shows how many dollars are needed per one unit of foreign currency; the “Currency per USD ($)” column shows the inverse, or the amount of foreign currency needed to buy one U.S. dollar.
Exhibit IV-4: Foreign Currency Quotation Format for U.S. Dollars

Currency             USD ($) Equivalent    Currency per USD ($)
British pound (₤)    $1.783/₤              ₤0.5609/$
Euro (€)             $1.185/€              €0.8439/$
Currency dealers provide bid-offer quotes such as €0.8400 – 0.8499/US $, meaning that the dealer would buy (bid) one U.S. dollar for €0.8400 and sell (offer) one U.S. dollar for €0.8499. The two types of FX markets are the spot and forward markets. Spot markets are for transactions that settle in one or two days; forward markets are quotations for foreign exchange where the transaction will take place more than two days in the future, commonly within a year. When the spot and forward rates are equal, they are at par. When the spot market values the currency higher than the forward market, the currency is trading at a discount; the opposite situation is a premium. These differences are caused by interest rate differences between the two countries: The currency with the higher interest rate will sell at a discount in the forward market. A forward exchange contract is an agreement to buy foreign currency in the future at a price determined by the forward market, often as a hedge against the cash flow variability caused by changes in exchange rates.
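The quotation arithmetic above can be sketched in a few lines. This is an illustrative sketch only: the rates come from the examples in the text, and the helper names (`inverse_quote`, `forward_premium_or_discount`) are invented for this example, not a standard API.

```python
# Sketch of the FX quotation arithmetic described above. Rates are the
# illustrative examples from the text, not market data.

def inverse_quote(rate: float) -> float:
    """Convert 'USD per unit of foreign currency' to 'foreign currency per USD'."""
    return 1.0 / rate

def forward_premium_or_discount(spot: float, forward: float) -> str:
    """Classify a forward quote relative to spot.

    If the spot market values the currency higher than the forward market,
    the currency trades at a discount in the forward market; the opposite
    is a premium; equal rates are at par.
    """
    if forward == spot:
        return "par"
    return "discount" if forward < spot else "premium"

# USD equivalent of one euro is $1.185/euro; the inverse is euro per USD
eur_per_usd = inverse_quote(1.185)
print(round(eur_per_usd, 4))  # ~0.8439, matching Exhibit IV-4

# Dealer bid-offer: the dealer buys USD at EUR 0.8400 and sells at EUR 0.8499
bid, offer = 0.8400, 0.8499
print(round(offer - bid, 4))  # the dealer's spread

# A forward quote below spot means the currency sells at a discount forward
print(forward_premium_or_discount(spot=1.185, forward=1.170))
```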
Consolidated Financial Statements Where Currency Translation or Remeasurement Is Required

When a parent company is preparing consolidated financial statements, it must translate the statements of subsidiaries into a common currency. Reporting currency is the currency in which the parent company has chosen to present its financial statements. Functional currency is the currency of the subsidiary’s primary economic environment; it could be the parent company’s reporting currency, if the subsidiary is primarily an arm of the parent’s operations, or it could be the local currency of the country in which the subsidiary is located, for relatively self-contained, integrated operations. When a subsidiary uses the parent’s reporting currency as its functional currency, the statements must be remeasured. Remeasurement uses the temporal method: monetary items (cash, claims to cash, and obligations to pay cash) are remeasured at the current exchange rate, while nonmonetary balance sheet accounts, along with their associated expenses, are remeasured at historical exchange rates for the period. When a subsidiary uses the local currency of its primary economic environment as its functional currency, the current rate method is used to translate the statements. This method translates all assets and liabilities using the exchange rate as of the balance sheet date, while paid-in capital accounts use the historical exchange rates for the period. Under both remeasurement and translation, income statement accounts are translated using the average exchange rate for the period. If the subsidiary keeps its records in a currency that is neither the parent’s reporting currency nor its own functional currency, its statements would first be remeasured into the functional currency and then translated into the reporting currency. For example, a British subsidiary of a U.S. company whose functional currency is the euro, due to its many continental clients, would fall into this category: its pound-denominated records would be remeasured into euros and then translated into U.S. dollars.
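As a rough sketch of the rate-selection logic described above (the account classifications and the `rate_for` helper are simplified assumptions for illustration, not a complete treatment of remeasurement and translation):

```python
# Illustrative sketch of which exchange rate applies to an account under
# the current rate method (translation) vs. the temporal method
# (remeasurement). Account names are hypothetical examples.

CURRENT, HISTORICAL, AVERAGE = "current", "historical", "average"

def rate_for(account: str, method: str) -> str:
    """Return the rate basis for an account under 'current_rate' or 'temporal'."""
    income_statement = {"sales", "expenses"}
    monetary = {"cash", "receivables", "payables"}  # cash, claims to cash, obligations
    if account in income_statement:
        return AVERAGE  # both methods use the average rate for the period
    if method == "current_rate":
        # All assets and liabilities at the balance sheet date rate;
        # paid-in capital at historical rates.
        return HISTORICAL if account == "paid_in_capital" else CURRENT
    if method == "temporal":
        # Monetary items at the current rate; nonmonetary items
        # (e.g., inventory, fixed assets) and capital at historical rates.
        return CURRENT if account in monetary else HISTORICAL
    raise ValueError(f"unknown method: {method}")

print(rate_for("inventory", "current_rate"))  # current
print(rate_for("inventory", "temporal"))      # historical
print(rate_for("sales", "temporal"))          # average
```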
Topic C: Financial Analysis (Ratio Analysis) (Level P)

While analytical audit procedures include many types of analysis (for example, regression analysis), this topic focuses on financial ratio analysis. See Part 1 of this learning system for information on other audit tools. Financial ratios quantitatively relate two or more numbers for comparison, expressed as a percentage or as a number of times or days. Ratios are a primary decision-making tool for lenders, credit reporting agencies, investors, regulators, and others in understanding an organization’s risks and returns; they are also very useful in helping auditors find irregularities. Comparing relevant ratios can greatly enhance the analyst’s understanding, moving from data to information. Ratios can be used to value stocks, such as finding a stock’s intrinsic value as opposed to its market value; to judge the value and continued viability of organizations or assets (e.g., by bond valuation agencies); as a management tool to judge performance against a standard; or as an auditing tool to detect unusual variations from expectations.
Benchmarking and Comparative Analysis

Ratios take on the most significance when compared to internal and external benchmarks. Benchmarking is the comparison of an organization or project to similar internal or external organizations or projects. Internal benchmarking includes comparing divisions against the best division in an organization or comparing the results of one division against its past performance record. External benchmarking includes comparing an organization against either industry averages or specific competitors. For effective benchmarking, the project/organization being assessed and the source project/organization must be reasonably comparable. Methods for ensuring a like comparison include common-size financial statements and inflation, historical cost, and accounting method adjustments.
Common-Size Financial Statements
Common-size financial statements express all account balances as percentages of one relevant aggregate balance. Both income statements and balance sheets may be put in a common-size format. Common-size statements may be horizontal or vertical.
• Horizontal common-size financial statements express the results for the same organization over several periods as a percentage of a base year, with other years shown as the percentage increase or decrease from the base year. Each account is set at 100% for the base year. Horizontal statements help determine how an organization is changing over time, because percentages are simpler to compare than raw currency amounts. The basic equation is:

Trend percentage = (Analysis period amount ÷ Base period amount) × 100

Exhibit IV-1 shows a horizontal common-size analysis performed on income statements.

Exhibit IV-1: Horizontal Analysis Performed on Income Statements
• Vertical common-size financial statements express the amounts in a
statement as a percentage of a chosen base, such as sales or cost of goods sold on the income statement or total assets on the balance sheet. The base is set at 100%, and other amounts on the statement are expressed as a percentage of that total. Such proportional weightings allow two organizations to be compared even if they have very different amounts of capital, such as comparing an organization with U.S. $1 million in total assets to an organization with U.S. $100 million. The percentages make it easy to determine which has a higher relative proportion of inventory, for example. Exhibit IV-2 shows a vertical common-size statement. Exhibit IV-2: Vertical Analysis Performed on Balance Sheets
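Both common-size approaches can be sketched mechanically. The statement figures and function names below are invented for illustration; this is a sketch, not a reproduction of the exhibits.

```python
# Sketch of horizontal and vertical common-size analysis as described above.
# All income statement figures are invented for illustration.

def vertical_common_size(statement: dict, base_account: str) -> dict:
    """Express each balance as a percentage of the chosen base (base = 100%)."""
    base = statement[base_account]
    return {acct: round(100.0 * amt / base, 1) for acct, amt in statement.items()}

def horizontal_common_size(base_year: dict, later_year: dict) -> dict:
    """Express each later-year balance as a percentage of the base year (= 100%)."""
    return {acct: round(100.0 * later_year[acct] / base_year[acct], 1)
            for acct in base_year}

year1 = {"net_sales": 500_000, "cost_of_goods_sold": 300_000, "operating_expenses": 120_000}
year2 = {"net_sales": 550_000, "cost_of_goods_sold": 341_000, "operating_expenses": 126_000}

print(vertical_common_size(year2, "net_sales"))
# cost of goods sold is 62.0% of year-2 net sales

print(horizontal_common_size(year1, year2))
# net sales grew to 110.0% of the base year
```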
Inflation, Historical Cost, and Accounting Method Adjustments

Financial statements are not adjusted for inflation. Therefore, when comparing two sets of financial data separated by a wide gap in years, the statement amounts should be adjusted. The most current year is set as a base year, and the other statements are adjusted by the inflation rate so they can be expressed in base-year amounts. Similarly, fixed assets are valued on a statement at historical cost, but this value becomes more distorted over time. Adjustments to fair value may be appropriate in some situations. When it is possible, any significant differences in accounting policies should be reconciled by calculating what the amounts would be if the same policies were followed for each statement. Inventory valuation methods, depreciation methods, classification of leases, pension costs, and choices concerning capitalization versus expensing of costs all need to be standardized.
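A minimal sketch of the inflation adjustment described above, assuming a hypothetical price index (the index values and the `restate` helper are invented for this example):

```python
# Sketch of restating a prior-year amount in base-year (most recent)
# purchasing power using a price index. Index values are hypothetical.

def restate(amount: float, year_index: float, base_index: float) -> float:
    """Restate an amount from its year's price level to the base year's level."""
    return amount * base_index / year_index

# Suppose sales of 400,000 were reported when the price index stood at 92,
# and the base (current) year index is 105.
print(round(restate(400_000, year_index=92.0, base_index=105.0)))  # ~456,522
```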
Auditor’s Financial Statement Analysis Procedures

According to Standard 2320, “Analysis and Evaluation,” “Internal auditors must base conclusions and engagement results on appropriate analyses and evaluations.” Auditors can use analytical procedures to detect:
• Differences that are not expected.
• The absence of differences when they are expected.
• Potential errors.
• Potential irregularities or illegal acts.
• Other unusual or nonrecurring transactions or events.
Internal auditors should compare the results of ratio analysis to related nonfinancial information, to the results of other organizational units, and to relationships among the elements, such as by using a segmented audit cycle. Prior to an engagement, ratios can help determine areas of greatest risk. During an engagement, ratios help the auditor evaluate data to support engagement results. From a substantive testing perspective, analytical review procedures can be used by auditors to compile support and evidence regarding the reasonableness of selected stated account balances or financial statement line items. The scope of analysis should match the risk assessment and significance of the area and the availability and reliability of the data. At the end of an engagement, ratios can serve as a reasonableness test. Unexpected results should be followed up through interviews with management or application of other procedures until internal auditors are satisfied. One warning: Auditors or others can use numbers selectively to support a preconceived bias, especially where some elements must be adjusted or unexpected variables occur. Auditors should also use analysis to look for unexpected relationships in ratios.
Examples of Ratios

Now we will look at several key types of financial ratios:
• Leverage ratios. Much as a physical lever multiplies the amount of force applied, financial and operating leverage are ways to multiply gains from equity or fixed costs. However, they also increase risks.
• Liquidity/short-term debt ratios. Liquidity ratios primarily show an organization’s ability to pay its short-term obligations without undue hardship. For each of the liquidity or short-term debt ratios discussed below, the higher the ratio, the stronger the liquidity.
• Debt management ratios. Debt management ratios include various ways of measuring the degree of financial leverage and debt coverage. Essentially, these show how much debt is in use and the associated repayment risk.
• Profitability ratios. Profitability ratios measure an organization’s earning power. They help judge operating performance (sales versus related expenses), leverage, and risk. Gross, operating, and net profit margin are three measures often compared. For example, say that, compared to industry averages over several years, gross profit margin has been holding steady but operating profit margin and net profit margin have been declining. The cause must lie in indirect costs, since gross profit equals net sales less cost of goods sold, while operating profit and net profit deduct both cost of goods sold and a number of indirect items.
• Return on investment (ROI) ratios. ROI is simply return divided by investment. Any amount greater than 1.0 indicates a positive return. There are many common variations on ROI, depending on how the numerator and the denominator are defined.
• Investment valuation ratios. Determining the value of an investment actively traded on an exchange (i.e., a stock market) can use long-term debt and dividend ratios. Other methods to value an investment include residual income and Economic Value Added (EVA®). (Neither of these is covered in these materials.)

An additional type of financial ratio, asset management ratios, measures how efficiently an organization’s assets are used to generate income. These are discussed in the next topic.

Exhibit IV-3 describes common ratios in each of the categories listed above. In some cases, examples are included that are drawn from the ABC, Inc., financial statements presented earlier. The examples show ratios for the most current statement year, though in some cases they may also use information from the previous year, such as for calculating averages.

Exhibit IV-3: Summary of Ratios Used in Analyzing Financial Statements

Leverage: Debt vs. Equity

Operating leverage
  Calculation: Percentage change in operating income ÷ Percentage change in sales
  What it measures: Proportion of fixed costs required to produce goods or services. The higher the operating leverage, the greater the impact of changes in price and variable costs, up or down. Higher operating leverage means higher risk, because fixed charges must be met regardless of sales levels.

Financial leverage index
  Calculation: Return on common equity ÷ Return on assets
  What it measures: If the return on assets is lower than the return on common equity, the organization is trading on the equity at a gain, a greater return than the interest related to its fixed-cost debts. Values over 1 indicate that the firm is using debt efficiently.

Financial leverage
  Calculation: Percentage change in earnings per share ÷ Percentage change in operating income
  What it measures: Financial leverage, or trading on the equity, is the relative use of fixed-interest funding (debt or preferred stock). Shareholders prefer a higher degree of financial leverage: the alternate source of funds multiplies their equity investment, assuming profitable operations. If operations are unprofitable, the loss is equally magnified. Generally, invested capital should exceed borrowed capital.

Liquidity/Short-Term Debt

Current ratio
  Calculation: Current assets ÷ Current liabilities
  What it measures: Proportion of assets to liabilities at one point in time. ABC, Inc., has U.S. $8.98 in current assets for each dollar of its current liabilities, or 8.98 times the current liabilities. The current ratio cannot provide data on cash flow timing, however. A falling current ratio over time shows declining liquidity, but a ratio that is too high could show that the firm has too much invested in low-yield short-term assets.

Cash ratio
  Calculation: (Cash + Marketable securities) ÷ Current liabilities
  What it measures: Proportion of cash or easily convertible securities to liabilities at one point in time. A more conservative measure of liquidity used to determine if an organization can pay its obligations over the short term. However, firms can use sources other than cash to pay current liabilities.

Quick ratio (acid-test ratio)
  Calculation: (Cash + Cash equivalents + Receivables) ÷ Current liabilities
  What it measures: Like the current ratio, but eliminates inventory, the least liquid of current assets and therefore the least available source of cash to reduce current debts. It is the proportion of the most liquid sources of funding (cash, cash equivalents, receivables) to liabilities. Since inventory is excluded, a stable current ratio combined with a declining quick ratio could indicate a temporary or permanent increase in inventory.

Net working capital
  Calculation: Current assets − Current liabilities
  What it measures: Not actually a ratio. Measures the relationship of short-term debt to short-term assets by simply subtracting current liabilities from current assets. A larger number indicates a greater ability to pay current debts, that is, greater liquidity.

Debt Management

Debt ratio
  Calculation: Total liabilities ÷ Total assets
  What it measures: How much of an organization’s assets are financed by debt. A lower debt ratio is better, because it implies that relatively fewer liabilities exist. A relatively low debt ratio means that an organization finances its activities more through equity, but it also implies relatively low financial leverage.

Debt to equity ratio
  Calculation: Total liabilities ÷ Total shareholders’ equity
  What it measures: An organization’s proportion of liabilities to equity. A reasonable ratio of debt to equity varies among industries, with capital-intensive organizations generally carrying more debt in relation to owners’ equity. Under 100% is desirable.

Profitability

Gross profit margin
  Calculation: Gross profit ÷ Net sales, or (Net sales − Cost of goods sold) ÷ Net sales
  What it measures: Gross profit is the money remaining from sales revenues after deducting the cost of goods sold; gross profit margin is the proportion of net sales minus cost of goods sold to net sales. When organizations are compared, a higher ratio indicates more effective management of pricing and control of costs. For the organization’s profitability over time, a rising trend indicates increases in operational efficiency. This ratio relates sales to production costs. For each dollar of sales, ABC, Inc., generates U.S. $0.49 in gross profit.

Operating profit margin
  Calculation: Operating profit ÷ Net sales, or (Net sales − Cost of goods sold − Operating expenses) ÷ Net sales
  What it measures: Operating profit is net sales less cost of goods sold and operating expenses (also called selling, general, and administrative expenses). For each dollar of sales, ABC, Inc., makes U.S. $0.26 in operating profit. The higher the operating profit margin, the greater the company’s operating efficiency.

Net profit margin
  Calculation: Net profit ÷ Net sales
  What it measures: Net profits are calculated by subtracting interest and taxes from operating profits; net profit margin, therefore, is the portion of each sales dollar remaining as net profit after all expenses are covered. It measures the effectiveness of debt and tax management, operations, pricing, and cost controls. A normal net profit margin depends on the industry; a relatively low margin could mean that competitors are forcing price cuts or that cost controls are poor.

Return on Investment (ROI)

Return on assets (ROA)
  Calculation: Net income ÷ Total assets
  What it measures: Proportion of net earnings to total assets. Shows how well the company has used its assets to produce value. Net income should include only income from continuing operations. For each dollar invested in total assets, ABC, Inc., makes U.S. $0.19 in net income. A variation, total return on assets, adds interest expense to net income to give firms with high debt financing a more appropriate ratio.

Investment Valuation

Dividend yield
  Calculation: Annual dividends per share ÷ Market price per share
  What it measures: The rate of return of one share of stock per period. Dividend yield shows the percentage of stock value returned as dividends in the period.

Dividend payout ratio
  Calculation: Dividends per share ÷ Earnings per share
  What it measures: Proportion of a company’s earnings paid out as dividends.

Price/earnings (P/E) ratio
  Calculation: Market price per share ÷ Earnings per share
  What it measures: Proportion of a share’s price to its earnings. A higher number is better; a declining trend may indicate poor growth potential.

Book value per common share
  Calculation: Common shareholders’ equity ÷ Number of common shares outstanding
  What it measures: Shareholders’ portion of all assets as stated on the balance sheet if the organization were liquidated.
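Several of the ratios in Exhibit IV-3 can be computed mechanically. The sketch below uses invented figures; they are chosen so that the gross margin, operating margin, and ROA reproduce the U.S. $0.49, $0.26, and $0.19 figures cited for ABC, Inc., but the remaining results are not ABC’s.

```python
# Sketch computing several Exhibit IV-3 ratios from a toy balance sheet and
# income statement. All figures are invented for illustration.

balance_sheet = {
    "cash": 40_000, "marketable_securities": 10_000, "receivables": 60_000,
    "inventory": 70_000, "total_assets": 400_000,
    "current_liabilities": 50_000, "total_liabilities": 160_000,
    "shareholders_equity": 240_000,
}
income_statement = {"net_sales": 500_000, "cost_of_goods_sold": 255_000,
                    "operating_expenses": 115_000, "net_income": 76_000}

current_assets = sum(balance_sheet[k] for k in
                     ("cash", "marketable_securities", "receivables", "inventory"))

current_ratio = current_assets / balance_sheet["current_liabilities"]
quick_ratio = (current_assets - balance_sheet["inventory"]) / balance_sheet["current_liabilities"]
net_working_capital = current_assets - balance_sheet["current_liabilities"]
debt_ratio = balance_sheet["total_liabilities"] / balance_sheet["total_assets"]
gross_profit_margin = ((income_statement["net_sales"] - income_statement["cost_of_goods_sold"])
                       / income_statement["net_sales"])
operating_profit_margin = ((income_statement["net_sales"]
                            - income_statement["cost_of_goods_sold"]
                            - income_statement["operating_expenses"])
                           / income_statement["net_sales"])
return_on_assets = income_statement["net_income"] / balance_sheet["total_assets"]

print(round(current_ratio, 2))            # 3.6
print(round(quick_ratio, 2))              # 2.2
print(round(gross_profit_margin, 2))      # 0.49
print(round(operating_profit_margin, 2))  # 0.26
print(round(return_on_assets, 2))         # 0.19
```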
Limitations of Ratios

Ratios do have some limitations. We’ve already covered some of them, for example, inflation or different accounting methods. A primary limitation is that ratios should not be relied upon as a sole decision-making factor but should be combined with nonfinancial data such as consideration of strategy or management talent. Furthermore, management tracks ratios themselves and has the power to adjust certain ratios by initiating transactions that may improve the ratio even if the action isn’t in the long-term best interest of the organization. For example, if management uses some current assets to pay current liabilities, both numbers will be reduced. If the current ratio was already above 1.0, the result will be an apparent increase and, if below, an apparent decrease. Such equal changes to both numerator and denominator can alter the appearance of a ratio without any actual change in performance. Auditors should be wary of such “window dressing.” Difficulties can arise when trying to find a benchmark organization due to the wide differences even among companies supposedly in the same industry. Many organizations are highly diversified or own multiple subsidiaries that have nothing to do with a particular industry. On the other hand, industry averages are available for most industries, but these are just what they imply: a point halfway between the best and worst performers, not an ideal state for an organization. Using a few close competitors may be more appropriate.
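The “window dressing” effect described above is easy to demonstrate numerically. The amounts below are invented; the point is only the arithmetic of equal reductions to numerator and denominator.

```python
# Numeric illustration of window dressing: paying down current liabilities
# with current assets changes the current ratio's appearance even though
# nothing substantive has changed. Amounts are invented.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities

# Ratio already above 1.0: paying 20 of liabilities raises the ratio
before = current_ratio(120, 100)       # 1.20
after = current_ratio(100, 80)         # 1.25 -- apparent improvement

# Ratio below 1.0: the same action lowers it
before_weak = current_ratio(80, 100)   # 0.80
after_weak = current_ratio(60, 80)     # 0.75 -- apparent deterioration

print(before, after, before_weak, after_weak)
```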
Comparing statements across national boundaries adds a new level of difficulty: Languages and currencies need translating, and accounting practices need to be standardized. Ratios themselves have many variations, and precalculated ratios published in financial statements may not all use the same numerators or denominators or may use beginning amounts, averages, or ending amounts. The best course is to recalculate all ratios when comparing statements so that each set of ratios will be based on the same assumptions.
Topic D: Revenue Cycle, Current Asset Management Activities and Accounting, and Supply Chain Management (Level B)

Looking at business activities as cycles allows internal auditors or other professionals to see how processes are interconnected. While an internal purchasing officer may not think much about the problems of an accounts payable manager, their processes are in fact connected by the need for available cash at the right time, which the accounts payable manager may be able to influence. Looking at business cycles helps people see these interconnections and develop ways to make the overall system function more effectively. Internal auditors should have a basic understanding of a number of business cycles, including high-level cycles, such as the operating cycle, and more detailed cycles, such as the procurement or knowledge cycles. This topic focuses on two specific cycles: the revenue cycle and the supply chain management cycle. It also covers management activities and accounting related to current assets and, since inventory is a key part of current assets, inventory management and valuation.
The Revenue Cycle

The revenue cycle (sometimes referred to as the sales and collections cycle) is a process that starts with a salesperson, a customer service specialist, or the customer placing a customer order. As Exhibit IV-1 shows, a sales order is then generated, which typically results in a credit check unless the customer is paying in cash. If credit is approved or cash is received, the goods are shipped or otherwise handed off to the customer. Finally, an invoice is sent to the customer and the sale is recorded in the accounting records.

Exhibit IV-1: Revenue Cycle
Audit Objectives

The following accounts are used to capture relevant data from the sales and collections cycle: sales, cash, accounts receivable, sales returns and allowances, allowance for doubtful accounts, and bad debt expense.
Internal auditors examine transactions posted to each of these accounts as well as sales recorded in the sales journal and the accounts receivable subsidiary ledger to determine whether key controls are operating effectively. Key controls are determined using a risk-based approach. A control is determined to be key or not key without consideration of resources. The sufficiency and availability of resources are then considered in relation to the key controls identified as requiring assurance coverage, and a determination is made regarding whether there are sufficient existing resources at the right skill and knowledge levels to effectively test these key controls within the required timing. The CAE will need to address situations and options available when there are resource gaps. This risk-based methodology forms the basis for the audit objectives. Key controls in the sales and collections cycle may include proper segregation of duties,
proper authorizations, use of proper documents that are prenumbered if appropriate, and use of proper internal verification procedures. Since most sales and collections activities take place using electronic systems, a key audit objective is often to determine that IT general controls are working effectively. It will be difficult to rely on the specific application controls if IT general controls, such as access controls or software change controls, show weaknesses. While most controls in the sales and collections cycle relate to transactions rather than specific account balances, management may employ high-level detective controls and selected analytical review procedures (such as select interest and fee-based income/expense yield analyses) for reasonableness monitoring related to account balances. Such procedures can also potentially provide additional substantive reasonableness monitoring support. So for those controls that are proven to be operating effectively, the tests of details regarding these controls can usually be reduced. This can lower audit costs without sacrificing audit quality. A check of any unusual transactions is often appropriate to make sure they were approved and reported. In enterprise resource planning systems or systems that perform a similar function for sales and collections, internal auditors also examine the customer master file and the transaction history file when performing comparisons.
Substantive Tests

Internal auditing for the sales and collection cycle includes substantive testing of transaction-level controls. This is because revenue is received in many forms in this cycle and internal auditors provide assurance that the transactions are complete, accurate, and properly recorded, that payments are credited to the correct customer accounts, and that cash and goods are not misappropriated. Depending on defined internal audit engagement objectives, internal auditors may test five financial statement control assertions, known by the acronym PERCV: Presentation and disclosure, Existence and occurrence, Rights and obligations, Completeness, and Valuation and allocation. All five of these financial statement control assertions are applicable to internal audit testing in relation to ICFR (internal control over financial reporting), such as for Sarbanes-Oxley Act (SOX) Section 404 compliance, especially presentation and disclosure and rights and obligations. For other types of engagements, presentation and disclosure and rights and obligations are not often used. However, existence and occurrence, completeness, and valuation and allocation are important areas of internal audit attention for many types of engagements. For example, existence tests include making sure that sales invoices have a supporting bill of lading. One completeness test is to make sure that there are no gaps in the numerical sequence of shipping documents, which can double as a test for timing (part of valuation). For valuation and allocation, internal auditors look at accuracy, proper classification, timing, and posting and summarization. For example, to test accuracy, sales orders can be compared against approved price lists to ensure that the prices match. For classification, one test would be whether liabilities are improperly recorded as sales. For posting and summarization, a useful test is to examine reconciliations, such as by comparing listings of cash receipts to deposits recorded on bank statements. When auditing sales, completeness is less of an issue, but it can be tested by tracing the prenumbered shipping documents forward to the journal. The accuracy and timing of sales are generally more of a concern, with the exception that if controls are shown to be operating effectively, accuracy may also be less of an issue.
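The completeness test on prenumbered shipping documents can be sketched as a simple gap-and-duplicate check. The document numbers and the `sequence_gaps` helper are invented for illustration, not a standard audit tool.

```python
# Sketch of a completeness test: flag gaps (and duplicates) in the numerical
# sequence of prenumbered shipping documents. Document numbers are invented.

def sequence_gaps(doc_numbers: list) -> tuple:
    """Return (missing numbers, duplicated numbers) within the observed range."""
    seen = sorted(doc_numbers)
    full_range = set(range(seen[0], seen[-1] + 1))
    missing = sorted(full_range - set(seen))
    duplicates = sorted({n for n in seen if doc_numbers.count(n) > 1})
    return missing, duplicates

shipping_docs = [1001, 1002, 1004, 1005, 1005, 1007]
missing, duplicates = sequence_gaps(shipping_docs)
print(missing)     # [1003, 1006] -- follow up: unrecorded shipments?
print(duplicates)  # [1005]       -- follow up: possible duplicate billing?
```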
When there is a risk of a control weakness, accuracy tests involve selecting a sample of sales invoices and comparing them to the price lists, recalculating extensions and footings (checking extensions involves verifying that the unit volume times the unit cost agrees with the total dollar amount for each line item; checking footings involves summing the extensions and verifying the total against the invoice), and tracing invoices to journal entries for sales and accounts receivable. Timing of sales is tested to ensure that sales are recorded in the proper period. This is done by tracing shipping documents to the sales journal. Taking a sample of sales transactions two weeks prior and two weeks after the period end date can be an effective procedure.
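The extension and footing recalculation just described can be sketched as follows. The invoice data and the `check_invoice` helper are invented for illustration.

```python
# Sketch of recalculating extensions and footings on a sales invoice.
# An extension is quantity x unit price for one line item; the footing is
# the sum of the extensions. Invoice data is invented.

def check_invoice(line_items, reported_total):
    """Return (indexes of line items with extension errors, footing_ok).

    line_items: list of (quantity, unit_price, reported_line_total) tuples.
    """
    extension_errors = []
    recomputed_total = 0.0
    for i, (qty, price, reported_line) in enumerate(line_items):
        extension = round(qty * price, 2)
        recomputed_total += extension
        if extension != reported_line:
            extension_errors.append(i)
    footing_ok = round(recomputed_total, 2) == reported_total
    return extension_errors, footing_ok

invoice = [(10, 12.50, 125.00),
           (3, 40.00, 120.00),
           (7, 9.99, 69.00)]     # extension error: 7 x 9.99 = 69.93
errors, footing_ok = check_invoice(invoice, reported_total=314.00)
print(errors)      # [2]
print(footing_ok)  # False -- the recomputed footing is 314.93
```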
Testing Scope

Specific audit tests for each of the types of accounts in the revenue cycle include the following.
• Sales. Specific documents that are generated in the sales cycle and that may need to be examined include customer purchase orders, sales orders, credit applications, picking sheets, bills of lading or other shipping documents, invoices and remittance advices, cash receipts journals, bank statements, and monthly billing statements. Audit procedures for such documents include tracing a transaction forward or vouching backward to associated records and accounts. For example, vouching sales entries back to shipping documents could reveal nonexistent sales or possible duplicate sales. Auditors should note that further testing may be required to determine if shipments were made to nonexistent customers. Independent verification of a customer’s existence may be necessary, as accounts receivable generated from sales to phantom customers may be written off as uncollectible accounts.
• Cash receipts. Internal auditors typically review the cash receipts journal when testing the adequacy and effectiveness of internal controls around the receipt of customer payments, the depositing of payments in bank accounts, and the proper recording of transactions in the accounting records. Since there should be proper segregation of duties among mail room activities, handling of cash receipts, and recording of the related accounting entries, internal auditors can use observation to validate their understanding of these processes and control points. Internal auditors may examine remittance advices, deposit tickets, bank statements, and postings to cash receipts journals and the related customer subsidiary accounts receivable ledgers. Internal auditing should perform audit tests designed to detect fraud related to cash receipts. Auditors may perform a “proof of cash” to validate that cash received was deposited to the organization’s bank account and that transactions were properly recorded in the accounting records. Other controls include the following:
  • Vacation or rotation of duties policies are enforced.
  • Checks are prelisted and restrictively endorsed.
  • Statements are mailed each month.
  • Deposits and batches are reconciled.
  • Customer correspondence and returned statements are part of document retention practices.
Note that payments made by credit card are not handled the same way as other accounts receivable. This is because the creditor is actually the bank issuing the card. Payments, less the bank fee (which must be recorded as an expense), are quickly collected electronically, thus becoming cash receipts.
• Sales returns and allowances. When a customer returns an item, the item is received and a credit memo is issued to accounts receivable. In addition to examining these credit memos, internal auditors can examine the sales returns and allowances journal. Internal auditors can make sure that returns are being tracked separately and recorded in the appropriate general ledger account (sales returns and allowances) rather than simply reducing the sales totals.
• Allowance for doubtful accounts and bad debt expense. Unlike the more straightforward accounts, the allowance for doubtful accounts (a contra-asset account) is based on management’s estimate of the amount of uncollectible accounts receivable for the coming period, generally one year. Management should have a documented methodology for calculating this estimate. It may be based on the accounts receivable aging schedule, what management knows about customers’ financial conditions (in the aggregate and individually for major customers), economic trends and regulatory implications, etc. The offsetting entry to increase the allowance for doubtful accounts is recorded as bad debt expense. Internal auditors should evaluate the adequacy and reasonableness of management’s process for estimating the allowance for doubtful accounts.
• Charge-off of uncollectible accounts. When an account receivable is deemed to be uncollectible, it could be sold at a deep discount to a third-party collection agency, which may or may not have recourse back to the organization if it is unable to collect, depending on the terms of the contract. Internal auditors can verify that the terms of such contracts are in order and that any payments from collection agencies are properly recorded. The amount of accounts receivable that is not recovered is then charged off as a reduction of the allowance for doubtful accounts. Internal auditors should verify that accounts receivable charge-offs are properly authorized and that they are treated separately from credit memos. Note that there is no special journal for charge-offs. In addition, monitoring the level of internal operating charge-offs in a unit can be a helpful planning tool in identifying areas where operational problems may exist and that require further attention from an assurance coverage standpoint.
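One of the cash receipts tests mentioned above, comparing the cash receipts journal to deposits on the bank statement, can be sketched as a two-way matching exercise. All figures and the `unmatched_items` helper are invented for illustration.

```python
# Sketch of comparing the cash receipts journal to bank statement deposits,
# flagging items that appear on only one side. Figures are invented.

from collections import Counter

def unmatched_items(journal_amounts, bank_deposits):
    """Return (journal entries with no matching deposit,
               deposits with no matching journal entry)."""
    journal, bank = Counter(journal_amounts), Counter(bank_deposits)
    only_in_journal = sorted((journal - bank).elements())
    only_on_bank = sorted((bank - journal).elements())
    return only_in_journal, only_on_bank

cash_receipts_journal = [1500.00, 2200.00, 310.50, 975.25]
bank_statement_deposits = [1500.00, 2200.00, 975.25, 480.00]

in_journal_only, on_bank_only = unmatched_items(cash_receipts_journal,
                                                bank_statement_deposits)
print(in_journal_only)  # [310.5] -- receipt recorded but never deposited?
print(on_bank_only)     # [480.0] -- deposit with no recorded receipt?
```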
Current Asset Management Activities and Accounting

Current assets include cash and cash equivalents, inventory, and accounts receivable. Management of current assets is handled by several different functional areas. Treasury manages cash and cash equivalents, investing cash in safe short-term investment vehicles to generate a return without risking loss of capital. Inventory managers set policies for inventory levels and ensure inventory level adequacy. Accounts receivable is responsible for ensuring that accounts are collected and for monitoring the average age of accounts receivable. One way to gain a good grasp on why current assets often need to be looked at holistically is to examine asset management ratios. These ratios help determine if current assets are being leveraged sufficiently to generate profits. Note that while accounts payable is not a current asset (it is a current liability), it is considered along with current assets because the timing of these cash outflows directly impacts the revenue cycle, as already discussed. Exhibit IV-2 provides examples of common asset management ratios. As with the ratios presented in the previous topic, examples from the ABC, Inc., financial statements are included. Note that fixed asset turnover is included here to keep the asset management ratios together even though fixed assets are not current assets.
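The turnover-and-days arithmetic used throughout these ratios can be sketched as follows. The figures are invented, but the receivables numbers are chosen so they reproduce the 5.25 A/R turnover and 70-day collection period cited for ABC, Inc.

```python
# Sketch of turnover and days calculations for asset management ratios.
# Figures are invented for illustration (not ABC, Inc.'s actual statements).

def receivables_turnover(net_credit_sales, avg_receivables):
    """Number of times accounts receivable are created and collected per year."""
    return net_credit_sales / avg_receivables

def days_from_turnover(turnover, days_in_year=365):
    """Convert any turnover ratio into an average period in days."""
    return days_in_year / turnover

ar_turnover = receivables_turnover(net_credit_sales=525_000,
                                   avg_receivables=100_000)
print(round(ar_turnover, 2))                   # 5.25
print(round(days_from_turnover(ar_turnover)))  # ~70 days to convert A/R to cash

# Purchases for A/P turnover: COGS adjusted by the change in inventory
cogs, ending_inv, starting_inv = 300_000, 80_000, 60_000
purchases = cogs + ending_inv - starting_inv   # 320,000
ap_turnover = purchases / 40_000               # invented average A/P
print(round(ap_turnover, 2))                   # 8.0
```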
Exhibit IV-2: Asset Management Ratios

Average A/R turnover
Calculation: Net credit sales ÷ Average accounts receivable, where average A/R is the average of the current and prior years' A/R
What it measures: The number of times accounts receivable are collected each year. Increasing A/R turnover indicates effective credit extension and collection processes; if too high, credit policies may be restricting sales. A declining ratio indicates lax collections or that bad debts need to be written off sooner. A/R at ABC, Inc., are created and collected 5.25 times in the year.

Receivables collection period (average days' accounts receivable)
Calculation: 365 ÷ A/R turnover
What it measures: The length of time required to convert accounts receivable to cash. Should be compared with the company's credit terms to detect issues with collection. It takes ABC, Inc., an average of 70 days to convert A/R to cash; if the credit term is less than this amount, the organization has trouble collecting or has lax credit.

Average inventory turnover
Calculation: Cost of goods sold ÷ Average inventory
What it measures: The proportion of goods sold to goods in inventory, indicating how efficiently a company converts inventory into sales. A relatively high ratio indicates efficiently managed inventory, while a declining ratio could show an inventory build-up due to poor demand or obsolescence. Too high a ratio could mean lost sales due to stockouts.

Inventory processing period (average days' sales in inventory)
Calculation: 365 ÷ Inventory turnover
What it measures: How many days it would take an organization to process and sell a single inventory turn. A lower number of days is generally better.

Accounts payable turnover
Calculation: Purchases ÷ Average accounts payable, where Purchases = COGS + Ending Inventory – Starting Inventory
What it measures: How many times a company's accounts are generated and paid in a year. A lower ratio is preferable (as long as accounts are paid in a timely fashion). ABC, Inc., generated and paid its A/P 8.68 times during the year.

Accounts payable payment period (average days' payables)
Calculation: 365 ÷ A/P turnover
What it measures: How long it takes to pay the average account. An increasing period might indicate cash flow issues. It takes ABC, Inc., an average of 42 days to pay an account payable.

Cash conversion cycle
Calculation: Days' sales in inventory + Days' accounts receivable – Days' payables
What it measures: The average number of total days it takes to convert money from a cash outflow (start of production) to a cash inflow. ABC, Inc., has cash invested in its operating cycle an average of 157 days.

Fixed assets turnover
Calculation: Net sales ÷ Net fixed assets, where net sales is sales minus sales discounts, returns, and allowances
What it measures: How efficiently a company uses its fixed assets (property, plant, and equipment) to generate sales. The higher the number, the more efficiently fixed assets are being used (or, possibly, there is a need to replace older assets). Might be used to measure the effectiveness of significant investments in PP&E. ABC, Inc., generates U.S. $5.18 of sales per dollar invested in net fixed assets.
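The day-count relationships behind these ratios can be sketched in a few lines of Python. This is an illustrative sketch only: the turnover figures are the ABC, Inc., results quoted in the exhibit, while the 365-day year and the 129-day inventory figure are assumptions chosen to show how the 157-day cash conversion cycle reconciles.

```python
# Sketch of the asset management day-count relationships.
# The 5.25 and 8.68 turnovers are the ABC, Inc., figures quoted above;
# the 129-day inventory period is an assumed input (not from the text).

DAYS_IN_YEAR = 365

def days_outstanding(turnover):
    """Convert a turnover ratio into an average number of days."""
    return DAYS_IN_YEAR / turnover

ar_days = round(days_outstanding(5.25))   # receivables collection period
ap_days = round(days_outstanding(8.68))   # payables payment period
inventory_days = 129                      # assumed days' sales in inventory

# Cash conversion cycle = days in inventory + days of receivables - days of payables
ccc = inventory_days + ar_days - ap_days

print(ar_days, ap_days, ccc)   # 70 42 157
```

Note how the same 365-day conversion links every turnover ratio to its companion "days" ratio, which is why the exhibit presents them in pairs.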
Here are a few relationships between current asset accounts or ratios to note:
• Accounts receivable turnover should be paired with an accounts receivable aging schedule to determine how long receivables have been outstanding.
• Profits drop when inventory increases faster than sales. Increasing accounts receivable combined with stable overall inventory but an increasing finished goods inventory indicates that sales are lagging; inventory turnover ratios would also be declining.

Accounting for current assets can involve the following current asset balance sheet accounts:
• Accounts receivable
• Allowance for doubtful accounts (a contra account)
• Cash
• Due from accounts (amounts of deposits currently held at another company)
• Marketable securities (if held less than one year)
• Interest receivable
• Inventory, including raw materials, work-in-process, and finished goods
• Prepaid insurance
• Prepaid rent
• Stock and bond investments (if available for sale or maintained in a trading account)
• Supplies
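The aging-schedule pairing mentioned above can be sketched as a simple bucketing exercise. All invoice data below is invented for illustration; only the bucket structure (a common 30/60/90-day layout, which the text does not prescribe) is the point.

```python
from datetime import date

# Hypothetical sketch: bucketing open receivables into an aging schedule,
# the companion view to A/R turnover. All invoices and dates are invented.
invoices = [("INV-1", date(2024, 1, 5), 12_000),
            ("INV-2", date(2024, 2, 20), 8_000),
            ("INV-3", date(2024, 3, 28), 5_000)]
as_of = date(2024, 3, 31)

buckets = {"0-30": 0, "31-60": 0, "61-90": 0, "90+": 0}
for _, inv_date, amount in invoices:
    age = (as_of - inv_date).days          # days the receivable has been open
    if age <= 30:
        buckets["0-30"] += amount
    elif age <= 60:
        buckets["31-60"] += amount
    elif age <= 90:
        buckets["61-90"] += amount
    else:
        buckets["90+"] += amount

print(buckets)
```

A concentration in the older buckets is the kind of signal that a healthy-looking turnover ratio alone can hide.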
The Supply Chain Management Cycle

A supply chain (sometimes referred to as a logistics network) is a global network used to deliver products and services from raw materials to end customers through an engineered flow of information, physical distribution, and cash. A supply chain can most accurately be viewed as a set of linked processes or business cycles that take place in the extraction of materials for transformation into products or services for distribution to customers. These processes are carried out by the various functional areas within the organizations that comprise the supply chain. The most basic supply chain includes the supplier, the producer, and the customer.

Four basic flows connect the entities in a supply chain:
• Physical materials and services flow from suppliers through the intermediate entities.
• Cash from customers flows back "upstream" toward the raw material supplier.
• Information flows back and forth along the chain.
• There is a reverse flow of products returned for repair, recycling, or disposal.

Exhibit IV-3 shows these four basic flows.
Exhibit IV-3: Basic Supply Chain Flows
Supply Chain Management Processes

Supply chain management processes are used to efficiently design, plan, execute, monitor, and integrate every link in the supply chain so that goods and services are produced and distributed in the right quantities, in the right place, and at the right time, minimizing system-wide costs while satisfying customers. A goal of supply chain management is to create net value for customers and other key stakeholders. This is accomplished by building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally. Another goal of supply chain management is to manage supply chain risks, including risks to the availability or quality of suppliers and the goods and services they supply as well as to the proper distribution of goods and services to customers. Each of the supply chain management processes discussed next plays a role in supply chain risk management.

The following are key processes that help managers collaborate across functional and organizational boundaries within a supply chain:
• Customer relationship management (CRM) is described below.
• Customer service management involves managing details related to the product service agreements worked out during the CRM process. The central task of this process is to keep customers satisfied and loyal, thus reducing the risk of loss of market share.
• Demand management keeps demand and supply in balance to avoid the risks of unnecessary accumulations of inventory or stock shortages. This is accomplished through demand forecasting and tracking technologies.
• Order fulfillment involves delivering the right product or service at the right time in the right amounts. This is primarily a logistics function and also involves the ability to determine customer needs and build the infrastructure required to source, make, and deliver the desired goods. Customer satisfaction may well depend on ensuring the highest level of quality in order fulfillment possible (while remaining cost-effective).
• Manufacturing flow management facilitates producing all the required products in quality condition on schedule. This involves the logistics team ensuring that supplies arrive from suppliers when they are needed. It also means that the sales and operations team must develop schedules that fit sales and production requirements and remain consistent with available capacity.
• Supplier relationship management (SRM) is described below.
• Product development and commercialization involves successfully developing new products or services and then marketing them. It depends on excellent relationships with both suppliers and customers. The CRM team identifies the needs of customers. The SRM team develops relationships with suppliers who can reliably deliver quality materials and components. And finally, the research and development team designs the product or service with the needs of manufacturing, logistics, purchasing, and sales in mind.

Successful supply chain management depends on all of these processes working interactively.
CRM and SRM

The most important processes in supply chain management are customer relationship management and supplier relationship management. These two processes create and manage the links between adjacent partners in the supply chain and provide the context for the other processes mentioned above.

The CRM process encompasses activities designed to locate customers, assess their potential needs, and determine the products and services necessary to build and maintain a loyal customer base. During the CRM process, cross-functional teams work with internal and external customers to determine their product and service needs and develop product service agreements (PSAs) that define the nature of those relationships. Teams can also work with customers to improve order and delivery processes and reduce unpredicted variation in demand. These teams might include representatives from product design, operations, finance, and other areas.

Just as CRM focuses on building loyalty with key customers, SRM develops long-term relationships with key suppliers. Together, CRM and SRM provide the links that hold the supply chain together. Depending on the industry or organization, building relationships with external suppliers can take many different forms. Some will be custom-tailored to provide critical, high-quality goods, while others might be handled through standardized PSAs that are nonnegotiable. Determining an organization's supply needs is a cross-functional project involving marketing, research and development, production, logistics, and finance.

SRM is a key risk management activity because even an independent supplier's actions and ethics can affect the organization's reputation risk and other aspects of enterprise risk. Finding reliable and trustworthy suppliers and ensuring that they operate according to sound governance, risk management, and control frameworks and ethics are key aspects of SRM. Organizations must also be aware of stakeholder demands for greater transparency and accountability at every level. From supply chain to customer and from employee to investor, a company needs to develop responsible business policies and practices and make them an integral part of its organizational strategy and mission.
Types of Supply Chain Management

The two primary types of supply chain management are vertical integration and horizontal (or lateral) integration.

Vertical integration, or vertical supply chain management, refers to the practice of bringing the supply chain inside one organization. This strategy involves ownership of many or all parts of a supply chain. A vertically integrated enterprise can grow from an entrepreneurial base to which departments and layers of management are added to accommodate expansion, or it may grow through mergers and acquisitions. An example is a wireless phone company that manufactures phones, stocks them at retail outlets, sells them, provides coverage, and handles warranty service. The primary benefit of vertical integration is control.

Lateral or horizontal integration has replaced vertical integration as the favored approach to managing the myriad activities of the supply chain. As organizations have become larger and the supply chain's reach has become more global, it has become difficult for one company to have the expertise necessary to excel in all elements of the chain. In lateral supply chain management, various aspects of a business are outsourced, and the challenge becomes synchronizing the activities of a network of independent organizations. The primary reasons organizations depend on the lateral supply chain include:
• To achieve economies of scale. (The potential capacity of an independent provider to achieve economies of scale is always greater.)
• To improve business focus and expertise. (This can lead to lower pricing and higher quality.)
• Because it is possible. (Advanced communication technology has erased many of the barriers to doing business at a distance.)

The vertical and lateral approaches are the two most common supply chain management approaches around the world, but they are not the only methods in existence. Japanese companies favor an intermediate form of integration called "keiretsu," in which suppliers and customers are not completely independent but instead own significant stakes in one another.
Changes in Supply Chain Management
Because many organizations rely heavily on outsourcing processes and components, an audit team will need to evaluate the various quality management system/environmental management system risks associated with the supply chain. International standards require oversight of a company's suppliers to ensure that the products it sells meet customer expectations. And the boundaries separating the inside and the outside of organizations are blurring thanks to new technological interconnectivity, a fact that has major implications for internal auditors. Organizations are replacing their contract-driven supply chains with free markets: instead of specifying PSAs or contracts years in advance, organizations buy products instantly and rely more on freelance workers.

In light of these changes, supply chain risks include:
• An organization depending too heavily on a single supplier of a critical component. Failure of timely delivery can seriously affect profit and loss.
• Off-shore suppliers raising language, management, and transportation issues that might affect an organization's profitability and reputation.
• Ineffective supply chain management systems not addressing CRM and SRM concerns, to the detriment of an organization's competitiveness.
Strategic Marketing and Supply Chain Management

CRM helps organizations become more customer-driven, with the goal of understanding the customer's requirements and preferences in order to develop long-term relationships. An important component of supply chain management is developing an effective promotion and distribution strategy, both domestically and globally. This depends on organizations successfully informing people about the products and services they offer and persuading buyers, distribution channel members, and the public at large to purchase their brands.

Marketing Communications Mix

Organizations spend a great deal of money promoting their products and services. An organization's promotional strategy generally describes the set of interrelated communications activities (the marketing communications mix) that the organization uses to communicate with its customers, distributors, and other relevant audiences. Internal auditors could provide assurance that an organization's promotional strategy will help it achieve management's desired objectives and, ultimately, a competitive position.

Strategic marketing is driven by customer needs. Marketers often think in terms of the four Cs of marketing:
• The customer is the primary focus.
• Cost analysis must take into account all the issues customers consider before making a purchase.
• Convenience and cost/value are interrelated. For example, being able to order online may make purchasing more convenient, thus increasing sales.
• Communication means having a dialogue with customers. Rather than telling customers what they need, organizations must listen to what customers want through their actions and words.

An organization's promotional communications mix typically includes advertising, sales promotions, public relations, personal selling, and direct marketing.

Distribution Channels and Systems

A key aspect of the marketing mix as it relates to supply chain management is the use of effective and efficient distribution channels to create and sustain an organization's competitive advantage. A distribution channel is a group of interrelated and interdependent institutions and agencies that pool their efforts to distribute a product to end users. A distribution channel is frequently a chain of intermediaries who pass a product or service to the next organization before it reaches the end user. Distribution channels include:
• The producer.
• Customers.
• Organizational buyers.
• Marketing intermediaries.
• Retailers.
• Wholesalers (who break down bulk items into smaller packages for resale by retailers).
• Agents (used primarily in international markets; they secure an order for a producer and then take a commission).
• Distributors.
• Direct sale (from producer to user without an intermediary).
• Mail order (Internet and telephone).

Exhibit IV-4 shows an example of common distribution channels.

Exhibit IV-4: Common Distribution Channels
Organizations face many different distribution channel decisions and challenges, including:
• Developing an overall channel strategy.
• Comparing costs of using intermediaries to achieve wider distribution.
• Determining channel membership (types of distribution).
• Monitoring and managing channels.
• Determining whether to use direct and/or indirect channels (direct to consumer and/or indirect via a wholesaler).
• Determining whether to use multilevel marketing channels and single or multiple channels.
• Determining the length of the channel (levels of distribution).
• Determining who should control the channel.
• Determining which types of intermediary should be used.
• Deciding whether electronic distribution should be used.
• Deciding whether it makes sense (in terms of cost) to keep an inventory of products in the pipeline.

Strong distribution channels perform a variety of value-added activities in moving products and services through the channel from producer to end user. Exhibit IV-5 presents a list of different distribution channel activities and their functions. The nature of the industry, the target market, the product or service, and numerous other factors determine which of the functions are necessary to support a channel and which organizations in the supply chain will be responsible for providing them.
Exhibit IV-5: Distribution Channel Activities and Functions
• Marketing intermediaries: Reduce the number of transactions for producers and end users.
• Product inventory: Helps meet buyers' time-of-purchase and variety preferences.
• Transportation: Eliminates geographic/location gaps between buyers and sellers.
• Financing: Facilitates the monetary or currency exchange function.
• Processing and storage: Separates large quantities into individual orders; maintains inventory and assembles orders for shipment.
• Advertising and sales promotion: Communicates product availability, location, features, and benefits.
• Pricing: Sets the basis of exchange between buyer and seller.
• Risk reduction: Provides mechanisms such as insurance, return policies, and futures trading.
• Personal selling: Provides sales, product information, and supporting services.
• Service and repairs: Provides essential customer support and service.
Channel Types and Structure

Organizations typically choose between two major types of distribution channels: conventional channels and vertical marketing channels.
• In a conventional distribution channel, independent organizations are linked vertically. Each organization fends for itself, with minimal cooperation or concern for the total performance of the channel. The focus is transactional (buyer-seller transactions) rather than close collaboration throughout the channel.
• In a vertical marketing system (VMS) approach, the channel is managed as a coordinated or programmed system. One organization is designated the channel manager and is responsible for directing channel activities, setting operating rules and guidelines, and providing management assistance to other organizations participating in the channel. VMS channels dominate the retail sector and are becoming more popular in the business, industrial, and service sectors as well.

Factors Influencing Channel Design

The type of channel influences how many levels of organization to include in the channel and the specific kinds of intermediaries. For example, an industrial products producer might choose between independent manufacturing agents and a chain of distributors. Several factors can influence channel design:
• End-user preferences (where customers want to purchase products or services)
• Product or service characteristics (complexity, features, service requirements, etc.)
• Manufacturer's core capabilities and resources (smaller producers will have more channel constraints)
• Required functions (what is necessary to move the product or service from the producer to the customer, such as storage, transportation, and servicing)
• Availability, experience, and skills of intermediaries

Organizations that have very different products or services might select a different distribution channel for each category of product. Or organizations with very different types of customers that use the same product may choose a different distribution channel for each customer segment. Effectively this means having multiple supply chains, each one tailored to the needs of the customers and the other factors listed above.

International Considerations

To remain competitive, many organizations pursue distribution channels with a global reach. These distribution practices range from a minimal number of intermediaries in the U.K. to elaborate distribution systems in Japan. Organizations interested in global expansion:
• Study distribution trends and patterns in nations of interest.
• Explore trends in technology (radio frequency identification, satellite communications), regional cooperatives (the European Union, for example), and transportation services.
• Assess the likelihood and impact of terrorism or civil unrest.
• Investigate currency matters and banking institutions.
• Determine cost and capital requirements.
• Evaluate the product or service fit with different distribution strategies.
Inventory Management and Valuation

Most organizations find it necessary to maintain inventories that are either sold to customers or consumed within the organization. (In service organizations, inventory sometimes takes the form of queues, such as lines of waiting customers or projects scheduled to start in the future.) The reasons for holding inventory include:
• To meet future demand.
• To cover fluctuations in supply or demand. (This is also called safety stock; such inventory is held as a buffer against miscalculations of timing or quantity.)
• To fill the pipeline. (This is called pipeline or transportation inventory; it covers the transportation time required for new inventory to reach its destination.)
• To hedge against price fluctuations by increasing inventory when prices are favorable and holding back when they aren't.
• To achieve economies of scale, when purchases in large quantities may qualify for discounts that offset the extra cost of holding or storing the inventory.
Inventory Management

Inventory management focuses on reducing the costs of holding and transporting inventory without sacrificing customer service. Successful inventory management requires a systematic approach combined with accurate record keeping. Improvement in inventory management and control is important at all stages of operations, including purchasing, production, distribution, and sales. Here we discuss basic inventory management concepts, with an emphasis on those techniques that are focused on continuous improvement.

KPIs for Inventory
There are two key performance indicators (KPIs) for inventory:
• Reduction of inventory costs related to holding, ordering, and transporting materials, supplies, and finished goods
• Achievement of customer satisfaction targets related to the quality, availability, and on-time delivery of products and services (which may depend on the availability of supplies)

Types of Inventory

There are four basic types of inventory:
• Raw materials inventory or cycle stock may be purchased and held for a period in advance of the time it is needed for production.
• Work-in-process (WIP) inventory consists of raw materials that have been only partly transformed into their finished state or components that have not been installed or connected. For accounting purposes, WIP is an account holding all inventory in production but not yet complete as of the balance sheet date.
• Finished goods are products that are ready-to-wear, ready-to-eat, ready-to-drive, or ready-to-use and are waiting to be purchased.
• MRO (maintenance/repair/operations) or supplies inventory includes those supplies required for repairs and maintenance of machinery, computers, and so on.

Inventory Costs

The following costs are associated with inventory:
• Purchasing costs are the costs of goods acquired from suppliers.
• Ordering costs are incurred when placing orders for more inventory. These include all material and labor involved in order processing, office supplies, clerical labor, and so on. Use of electronic forms and payment transfers can reduce ordering costs.
• Carrying costs (also called holding costs) are the costs of housing the inventory. These include rent, depreciation, taxes, insurance, material handling, labor, investment costs, and so on. These costs may be as high as 40% of the value of the inventory.
• Stockout (or shortage) costs are incurred when an organization runs out of a particular item for which there is customer demand. This can result in back orders, lost sales, a damaged reputation, and lost customers.
• Set-up costs result from the process of preparing to go into production to fill an order. These include labor for cleaning and adjusting machinery.

To find the lowest overall cost for inventory, organizations use various inventory decision models to determine when to order or manufacture inventory and how much to buy or make. Balancing these costs is important: holding costs tend to go up with larger order quantities, while set-up or order costs tend to go down, since they respond to economies of scale.

Some organizations practice lean or just-in-time manufacturing, which focuses on the reduction or elimination of waste in all areas. (Excess inventory is a key waste to eliminate.) Such organizations may also employ lean accounting methods that help reward managers who reduce inventories (unlike traditional accounting, which can treat an unnecessary inventory build-up as positive work getting done).

Challenges in Inventory Management

The challenges in inventory management include:
• Reducing variability in the quality, amount, and timing of supply deliveries.
• Balancing the cost of holding more inventory against the cost of holding less.
• Reducing production cycle times.
• Maintaining production equipment.
• Improving demand forecasting.
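The trade-off between ordering/set-up costs and carrying costs is often formalized with the classic economic order quantity (EOQ) model. The text mentions "inventory decision models" without naming one, so this is an illustrative sketch; all demand and cost figures are invented.

```python
import math

# Illustrative EOQ sketch. EOQ is a classic inventory decision model (named
# here as an example; the text does not prescribe it). Figures are invented.
annual_demand = 10_000   # D: units demanded per year
order_cost = 50.0        # S: cost per order placed (ordering/set-up cost)
holding_cost = 4.0       # H: cost to carry one unit in inventory for a year

# EOQ balances ordering cost (falls with larger orders) against holding cost
# (rises with larger orders): EOQ = sqrt(2 * D * S / H)
eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)
orders_per_year = annual_demand / eoq

print(eoq)              # 500.0 units per order
print(orders_per_year)  # 20.0 orders per year
```

At quantities below the EOQ, ordering costs dominate; above it, carrying costs dominate, which is exactly the balancing act described above.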
Inventory Valuation
Inventory valuation is important to auditing because it is an estimate that can be manipulated with material effect on the financial statements. Manufacturers will have three types of inventory: raw materials, WIP, and finished goods, as described earlier. Retailers will have just one category: merchandise. Service companies will have little or no inventory.

The inventory cycle is related to the warehousing cycle; the former records the related costs while the latter records the physical flow of goods. Controls should address both.

Inventory accounting is performed either on a perpetual basis, which keeps a continuous record of inventory changes as they occur, or using the periodic inventory system, which determines the inventory on hand only by a physical count at the end of a period.

Under perpetual inventory accounting, raw material and merchandise purchases are debited to inventory. Each sale includes a debit to the cost of goods sold account and a credit to inventory. Discounts, freight-in, and returns and allowances are included in the inventory account. Most computer-based systems are perpetual because they can reflect the changes to the cost of goods sold account, inventory control account, and all subsidiary ledger inventory accounts instantaneously and simultaneously.

In a periodic inventory system, purchases are debited to a purchases account. Beginning inventory cost plus the purchases account total equals the period's cost of goods available for sale. Ending inventory is determined by physical count, and only then can ending inventory be subtracted from the cost of goods available for sale to determine the cost of goods sold. Because this method is becoming obsolete, it is not covered further in this text.

We'll now look at four types of perpetual inventory valuation: FIFO, LIFO, moving average cost, and specific identification. Each uses a different method for calculating ending inventory and cost of goods sold.
Examples are included; assume that, for the month of June, the organization in the examples has no beginning inventory and makes the following purchases and sales:
• June 7: Purchase 6,000 units @ U.S. $20/unit, for a balance of 6,000 units.
• June 14: Purchase 18,000 units @ U.S. $22/unit, for a balance of 24,000 units.
• June 20: Sell 12,000 units, for a balance of 12,000 units.
• June 21: Purchase 6,000 units @ U.S. $23.75/unit, for a balance of 18,000 units.

The organization's cost of goods available for sale is beginning inventory (U.S. $0) plus the cost of all purchases, which equals U.S. $658,500.

FIFO

The first-in, first-out (FIFO) inventory valuation method assumes that the oldest goods are used or sold first. Ending inventory will consist of the most recent purchases, meaning that this method best approximates current cost for held inventory. However, current revenues will be matched against older costs, violating the matching principle and possibly distorting net income and gross profits. This method is appropriate when the physical flow of goods follows the assumed cost flow, although such a match is not required. Income cannot be manipulated under this method if the proper methodology is followed. Exhibit IV-6 shows how the FIFO method applies the data in our example.
Exhibit IV-6: FIFO Method

| Date | Transaction (USD) | Cost (USD) | Balance Calculation (USD) | Balance (USD) |
|---|---|---|---|---|
| June 7 | Purchase 6,000 × $20.00 | $120,000 | 6,000 × $20.00 | $120,000 |
| June 14 | Purchase 18,000 × $22.00 | $396,000 | (6,000 × $20.00) + (18,000 × $22.00) | $516,000 |
| June 20 | Sale: (6,000 × $20.00) + (6,000 × $22.00) | ($252,000) | 12,000 × $22.00 | $264,000 |
| June 21 | Purchase 6,000 × $23.75 | $142,500 | (12,000 × $22.00) + (6,000 × $23.75) | $406,500 |

Cost of Goods Available for Sale – Ending Inventory = COGS
U.S. $658,500 – U.S. $406,500 = U.S. $252,000
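The FIFO mechanics can be replayed as a short Python sketch that keeps cost layers in a queue and relieves the oldest layer first. The transaction data is the June example above; function names are illustrative.

```python
from collections import deque

# Sketch of perpetual FIFO costing, replaying the June example above.
layers = deque()   # cost layers, oldest first: [units, unit_cost]
cogs = 0.0

def purchase(units, unit_cost):
    layers.append([units, unit_cost])

def sell(units):
    """Relieve inventory from the oldest layers first; return the sale's cost."""
    cost = 0.0
    while units > 0:
        take = min(units, layers[0][0])
        cost += take * layers[0][1]
        layers[0][0] -= take
        units -= take
        if layers[0][0] == 0:
            layers.popleft()   # oldest layer fully consumed
    return cost

purchase(6_000, 20.00)     # June 7
purchase(18_000, 22.00)    # June 14
cogs += sell(12_000)       # June 20: 6,000 @ $20 + 6,000 @ $22
purchase(6_000, 23.75)     # June 21

ending_inventory = sum(units * cost for units, cost in layers)
print(cogs)              # 252000.0
print(ending_inventory)  # 406500.0
```

The queue makes the "oldest goods sold first" assumption literal: the $20 layer is exhausted before the $22 layer is touched.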
LIFO

The last-in, first-out (LIFO) inventory valuation method assumes that the newest purchases are used or sold first. Ending inventory will consist of the oldest purchases, possibly including purchases made years ago, so this method undervalues held inventory, assuming inflation. The LIFO method is not allowed under international standards (IFRS) but is allowed under U.S. GAAP. Exhibit IV-7 shows how our example would be applied under LIFO.
Exhibit IV-7: LIFO Method

| Date | Transaction (USD) | Cost (USD) | Balance Calculation (USD) | Balance (USD) |
|---|---|---|---|---|
| June 7 | Purchase 6,000 × $20.00 | $120,000 | 6,000 × $20.00 | $120,000 |
| June 14 | Purchase 18,000 × $22.00 | $396,000 | (6,000 × $20.00) + (18,000 × $22.00) | $516,000 |
| June 20 | Sale: 12,000 × $22.00 | ($264,000) | (6,000 × $20.00) + (6,000 × $22.00) | $252,000 |
| June 21 | Purchase 6,000 × $23.75 | $142,500 | (6,000 × $20.00) + (6,000 × $22.00) + (6,000 × $23.75) | $394,500 |

Cost of Goods Available for Sale – Ending Inventory = COGS
U.S. $658,500 – U.S. $394,500 = U.S. $264,000
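LIFO can be sketched the same way, except that the cost layers behave as a stack: sales relieve the newest layer first. The data is again the June example; function names are illustrative.

```python
# Sketch of perpetual LIFO costing: sales relieve the NEWEST cost layer first
# (replaying the June example above).
layers = []        # cost layers as [units, unit_cost]; newest at the end
cogs = 0.0

def purchase(units, unit_cost):
    layers.append([units, unit_cost])

def sell(units):
    """Relieve inventory from the newest layers first; return the sale's cost."""
    cost = 0.0
    while units > 0:
        take = min(units, layers[-1][0])   # take from the newest layer
        cost += take * layers[-1][1]
        layers[-1][0] -= take
        units -= take
        if layers[-1][0] == 0:
            layers.pop()                   # newest layer fully consumed
    return cost

purchase(6_000, 20.00)     # June 7
purchase(18_000, 22.00)    # June 14
cogs += sell(12_000)       # June 20: all 12,000 units come from the $22 layer
purchase(6_000, 23.75)     # June 21

ending_inventory = sum(units * cost for units, cost in layers)
print(cogs)              # 264000.0
print(ending_inventory)  # 394500.0
```

Note that the oldest $20 layer survives untouched in ending inventory, which is exactly why LIFO undervalues held inventory when prices are rising.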
Moving Average Cost

The average cost method is called the "moving" average when applied to perpetual inventory (and the "weighted" average for periodic inventory). The moving average method is simple, and income cannot be manipulated using this method. Since it is difficult to specifically identify every inventory flow, proponents argue that the use of averages is required. Exhibit IV-8 shows how our example is applied when a new average is calculated each time a purchase is made. The average is applied to any sales prior to the next purchase.
Exhibit IV-8: Moving Average Cost Method

Date       Transaction (USD)             Cost (USD)    Unit Balance    Average Cost (USD)    Balance (USD)
June 7     Purchase 6,000 @ $20.00 =     $120,000      6,000           $20.00                = $120,000
June 14    Purchase 18,000 @ $22.00 =    $396,000      24,000          $21.50*               = $516,000
June 20    Sale 12,000 @ $21.50 =        ($258,000)    12,000          $21.50                = $258,000
June 21    Purchase 6,000 @ $23.75 =     $142,500      18,000          $22.25**              = $400,500
*Average cost = (U.S. $120,000 + U.S. $396,000)/24,000 = U.S. $21.50, the amount applied to sales until new purchases are made.
**New average cost = (U.S. $258,000 + U.S. $142,500)/18,000 = U.S. $22.25

Cost of Goods Available for Sale – Ending Inventory = COGS
U.S. $658,500 – U.S. $400,500 = U.S. $258,000
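The recalculation in Exhibit IV-8 can be sketched in a few lines: each purchase changes the running balance and unit count (and hence the average), and each sale is costed at the average in effect at that moment. The function name and event format below are illustrative:

```python
def moving_average_cogs(events):
    """Perpetual moving average: a new average unit cost arises at each
    purchase and is applied to any sales before the next purchase.

    events: ("purchase", units, unit_cost) or ("sale", units) tuples.
    Returns (cost of goods sold, ending inventory value)."""
    units_on_hand = 0
    balance = 0.0
    cogs = 0.0
    for event in events:
        if event[0] == "purchase":
            _, units, cost = event
            units_on_hand += units
            balance += units * cost  # new average = balance / units_on_hand
        else:
            _, units = event
            average = balance / units_on_hand
            cogs += units * average
            balance -= units * average
            units_on_hand -= units
    return cogs, balance

events = [("purchase", 6_000, 20.00), ("purchase", 18_000, 22.00),
          ("sale", 12_000), ("purchase", 6_000, 23.75)]

moving_average_cogs(events)  # COGS $258,000, ending inventory $400,500
```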
Specific Identification Method

With the specific identification method, each specific item in inventory held or sold is tracked separately. This is most often used for special order or low-volume, high-cost goods. While specific identification matches cost flow to the physical flow of goods, it can be used to manipulate net income because the seller could select, from otherwise identical inventory, the lot that has the lowest or highest cost, affecting both ending inventory and cost of goods sold. Also, indirect costs such as storage or discounts cannot easily be specifically identified. Calculating ending inventory and cost of goods sold is shown in Exhibit IV-9. The example assumes that the 18,000 units of ending inventory were made up from half of each of the first two purchases and all of the June 21 purchase (meaning that the June 20 sale specifically identifies half of each of the first two purchased lots).
Exhibit IV-9: Specific Identification Method

Purchase Date       Units Purchased    Cost (USD)    Total (USD)
June 7              3,000              $20.00        $60,000
June 14             9,000              $22.00        $198,000
June 21             6,000              $23.75        $142,500
Ending Inventory =  18,000                           $400,500
Cost of Goods Available for Sale – Ending Inventory = COGS
U.S. $658,500 – U.S. $400,500 = U.S. $258,000
Adjusting Inventory

Inventory is sometimes adjusted for shrinkage, which occurs when the physical count is lower than the accounting total due to theft, error, or deliberate overstatement. The opposite would indicate an accounting error or a deliberate understatement. Because of this, internal auditors should be alert to such possibilities when reviewing adjustments in this area.

When inventory value is impaired due to obsolescence or other factors, inventory is no longer valued at original cost. Instead it is valued at the lower of cost or market (LCM), where cost is the original cost and market refers to the market-determined cost to reproduce or replace the item, the lower of which becomes the new value.

Determining market value for an LCM calculation has two restrictions. These restrictions are related to the net realizable value (NRV), which is the sales price of an asset, usually inventory, less the costs of completion and transportation or disposal that can be predicted within reason. The first restriction is that the market value cannot be greater than the inventory’s NRV (a ceiling); the second is that it cannot be less than the NRV less an allowance for an ordinary profit margin (a floor). This ceiling and floor are controls to prevent inventory from being overstated or understated. If the ceiling were not there, inventory could be reported at replacement cost, which for damaged or obsolete inventory would be considerably higher than the funds received from a sale less the selling costs. Without the floor, inventory could be understated and the loss overstated.
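The ceiling-and-floor logic reduces to a small bounded-minimum calculation. A minimal sketch, using hypothetical dollar figures (the function names and the example amounts are not from the text):

```python
def designated_market(replacement_cost, nrv, normal_profit):
    """Market value for LCM: replacement cost, bounded above by the NRV
    ceiling and below by the NRV-less-normal-profit floor."""
    ceiling = nrv
    floor = nrv - normal_profit
    return max(min(replacement_cost, ceiling), floor)

def lower_of_cost_or_market(cost, replacement_cost, nrv, normal_profit):
    """Carry inventory at the lower of original cost or designated market."""
    return min(cost, designated_market(replacement_cost, nrv, normal_profit))

# Hypothetical item: original cost $100, replacement cost $120, NRV $110,
# normal profit margin $20. The ceiling (NRV = $110) caps market at $110,
# so the item stays at its $100 cost.
lower_of_cost_or_market(100.0, 120.0, 110.0, 20.0)  # 100.0

# Obsolete item: replacement cost $60 sits below the floor ($95 - $20 = $75),
# so the floor prevents understating inventory; the item is written down to $75.
lower_of_cost_or_market(100.0, 60.0, 95.0, 20.0)    # 75.0
```

The second call shows exactly the situation the text describes: without the floor, the write-down would overstate the loss.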
Topic E: Capital Budgeting, Capital Structure, Taxation, and Transfer Pricing (Level B)

Capital Budgeting

An effective budgeting system serves two primary functions in an organization: planning and control. A budget is a detailed plan that helps an organization deal with uncertainty and the future. It is key to helping an organization achieve specific goals and objectives. A budget plan must be aligned with an organization’s strategy to match its strengths with opportunities in the marketplace in order to accomplish organizational goals over the short and long term. A budget also sets standards that can control the use of an organization’s resources and motivate employees. It provides a process of checks and balances on the actions of people within the organization who are responsible for different aspects of the budget to ensure that all parts of the organization are working together to achieve its overall strategic goals.
Master Budget

A master budget is a summary of an organization’s plans that sets specific targets for sales, production, distribution, and financing activities over a year, an operating period, or a shorter duration. It generally culminates in a cash budget, a budgeted income statement, and a budgeted balance sheet. It sets quantitative goals for all operations, including detailed plans for raising the necessary capital for long- and short-term investments. The master budget is a map showing where an organization is heading. If it is properly designed, it will show the company heading in the same direction as the company’s strategy and long-term plan. The master budget is broken down into three different components:

• Operating budget. Identifies resources needed for operations and is concerned with the acquisition of these resources through purchase or manufacture. (Operating budgets are discussed in Chapter 2, Topic A.)
• Financial budget. Matches sources of funds with uses of funds in order to achieve the goals of the organization. It includes budgets for cash inflows, cash outflows, financial position, and operating income.
• Capital budget. Evaluates and selects projects that require large amounts of funding and provide benefits far into the future. The capital budget feeds into the cash budget and other financial budgets.

Often, the capital budget is considered a separate entity from the master budget, but all of an organization’s different budgets comprise an interrelated system. This topic focuses on capital budgeting.
Capital Budgeting Process

Managers use a capital budgeting process to plan significant outlays on projects that have long-term implications for the organization. Such a process consists of three successive steps:

1. Identify and define potential projects; define clear boundaries for an investment project. Understand what the project will do and what it will not do.
2. Evaluate and select the projects; analyze project revenues and benefits (both financial and nonfinancial), costs, and cash flows for the project’s entire life cycle.
3. Monitor and review the projects selected and make modifications and alterations as new developments warrant.
Investment Evaluation Analysis

Capital budgeting involves investment. Investments include stocks and bonds, facilities, inventory, equipment, research, and hiring and training staff. They all require a commitment of funds in the present with the expectation of future returns through additional cash inflows or reduced cash outflows. Typical capital budgeting decisions include:
• Cost reduction. (Should new equipment be purchased to reduce costs?)
• Expansion. (Should a new plant or warehouse facility be acquired to increase capacity and sales?)
• Equipment selection. (Which machine would be the most cost-effective to buy?)
• Lease or buy. (Should new equipment be leased or purchased?)
• Equipment replacement. (Should old equipment be replaced now or later?)

To make capital investment decisions, managers must estimate the quantity and timing of cash flows, assess the risk of the investment, and consider the impact of the project on the organization’s profits. There are many different methods to guide managers in accepting or rejecting potential investments. We will discuss four here: net present value, internal rate of return, payback period, and accounting rate of return. These capital investment decision models can be classified into discounting models and nondiscounting models. The use of discounting models has increased over the years; however, some organizations still use the nondiscounting models, and many organizations use both types. Auditors should be familiar with both categories of models and the information they provide to managers for making capital investment decisions.

Discounting Models

Discounting models recognize the time value of money, a concept that takes into account that a dollar today is worth more than a dollar a year from now. Discounting models also acknowledge that projects promising earlier returns are preferable to projects promising later returns. The two approaches to making capital budgeting decisions using discounted cash flows are the net present value method and the internal rate of return. Under the net present value (NPV) method, the present value of a project’s cash inflows is compared to the present value of the project’s cash outflows. The difference between these values, called the net present value, determines whether or not the project is an acceptable investment.
For example, the manager at David’s Cafe is considering purchasing a new espresso machine to make coffee that is now being made by two older models. The machine will cost U.S. $5,000, and it will last for five years. At the end of the five years, it will have zero scrap value. Using the machine will reduce labor costs by U.S. $1,800 per year. (Fewer employees will be necessary during peak times.) David’s Cafe requires a minimum pretax return of 20% on all investment projects. Should the manager buy the new espresso machine?

The manager must determine whether the U.S. $5,000 cash investment now is justified if it will reduce labor costs by U.S. $1,800 each year over the next five years. The total cost savings is U.S. $9,000 (5 × U.S. $1,800); however, the company can earn a 20% return by investing its money elsewhere. So the cost reductions must not just cover the cost of the machine; they must also yield at least the 20% return, or the company should invest the money elsewhere.

To determine whether the espresso machine is a wise investment, the stream of annual U.S. $1,800 cost savings is discounted to its present value and then compared to the cost of the new machine. The 20% minimum return rate is called the discount rate and is used in the discounting process. The present value of an annuity of U.S. $1,800 at the end of each period for five periods at 20% is U.S. $5,383. This is usually calculated using a spreadsheet formula, but a manual calculation or a “Present Value of an Ordinary Annuity” table (available in the Resource Center) could be used instead. The factor from the table for 20% for five periods is 2.9906. This assumes that the cost savings will occur at the end of each year rather than during the year. (The present value would be greater with cost savings occurring during the year.) The analysis is shown below.

Present value of cost savings: U.S. $1,800 × 2.9906 = U.S. $5,383
Present value of required investment: (U.S. $5,000)
Net present value: U.S. $383

As the analysis shows, David’s Cafe should buy the new espresso machine, because the present value of the cost savings is U.S. $5,383 as compared to the present value of U.S. $5,000 for the required investment (the cost of the machine). Deducting the present value of the required investment from the present value of the cost savings gives a net present value of U.S. $383. The project’s return exceeds the discount rate. Whenever the NPV is zero or greater, an investment project is acceptable.

The internal rate of return (IRR) is the rate of return promised by an investment project over its useful life. It is sometimes simply called the yield on a project. To compute the internal rate of return, a manager finds the discount rate that equates the present value of a project’s cash outflows with the present value of its cash inflows. The IRR is the discount rate that will cause the net present value of a project to equal zero. A simple way to determine the IRR for the previous example is to divide the investment by the equal annual cash inflow to find the required present value factor:

U.S. $5,000 ÷ U.S. $1,800 = 2.7778

Referring to a “Present Value of an Ordinary Annuity” table (not shown), the present value factor for a 24% return for five periods is 2.7454; for 22%, it is 2.8636. Therefore, the internal rate of return is slightly less than 24%. Once the IRR for a project is computed, it is compared with the firm’s required rate of return. If the IRR is greater than the required rate, the project is acceptable. If the IRR is equal to the required rate, managers must decide whether to accept or reject it. The project is rejected if the IRR is less than the organization’s required rate of return. In this example, the nearly 24% internal rate of return is well above the 20% required by David’s Cafe. Note that this shortcut works only for annuities; for projects with variable returns, the IRR must be found by trial and error, discounting each year’s cash flow separately and summing the present values.

Both the NPV and IRR methods have gained widespread acceptance as decision-making tools. In comparing the two models, it is important to keep in mind that:

• The NPV method is often simpler to use because the IRR method requires a process of trial and error. 
However, computer spreadsheets can be used to automate the IRR method.
• The NPV method makes a more realistic assumption about the rate of return that can be earned on cash flows from a project. If the NPV and IRR methods disagree about the worthiness of a project, it might be wiser to use the data from the NPV method.

Nondiscounting Models

Two nondiscounting models are still commonly used and are preferred by many managers for project evaluation: the payback period and the accounting rate of return. The payback period is the time required for an organization to recover its original investment. If the cash flows of a project are an equal amount each period, then the following formula can be used to compute the project’s payback period:

Payback Period = Investment Required ÷ Annual Net Cash Inflow

If the cash flows are unequal, the payback period is computed by adding the annual cash flows until such time as the original investment is recovered. If a fraction of a year is needed, it is assumed that cash flows occur evenly within each year. Using the figures from David’s Cafe, U.S. $5,000/U.S. $1,800 = 2.7778, so the payback period is about two years and nine months.

Some organizations set a maximum payback period for all projects and reject any that exceed that level. This provides a rough measure of risk, with the notion that the longer a project takes to pay for itself, the riskier it is. Also, in some industries the risk of obsolescence is high. Organizations in these industries would be interested in recovering initial investments quickly. Additional information provided by the payback method can help managers:

• Control the risks associated with the uncertainty of future cash flows.
• Minimize the impact of an investment on a firm’s liquidity problems.
• Control the effect of the investment on performance measures.
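The NPV, IRR, and payback calculations above can be checked against the David’s Cafe figures. A minimal sketch (the bisection search for the IRR stands in for the trial-and-error process the text describes):

```python
investment = 5_000.0      # cost of the espresso machine
annual_savings = 1_800.0  # labor cost savings, assumed to arrive at year-end
years = 5
required_rate = 0.20      # David's Cafe's minimum pretax return

def npv(rate):
    """Net present value of the project at a given discount rate."""
    pv_inflows = sum(annual_savings / (1 + rate) ** t
                     for t in range(1, years + 1))
    return pv_inflows - investment

project_npv = npv(required_rate)  # about $383, so the project is acceptable

# IRR by bisection: find the discount rate at which NPV equals zero.
low, high = 0.0, 1.0
for _ in range(60):
    mid = (low + high) / 2
    if npv(mid) > 0:
        low = mid   # NPV still positive: the true IRR is higher
    else:
        high = mid
irr = (low + high) / 2  # a bit under 24%, above the 20% required rate

# Simple payback period for equal annual cash flows.
payback_years = investment / annual_savings  # about 2.78 years
```

Bisection converges on roughly 23.4%, consistent with the table-based answer of "slightly less than 24%"; a spreadsheet IRR function automates the same search.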
Note that there is a calculation called the discounted payback period that discounts the annual cash flows using the same present value calculations already described and therefore avoids the simple method’s failure to discount. For this reason, the nondiscounting method is sometimes called the simple payback period.

Unlike the other capital budgeting methods discussed so far, the accounting rate of return (ARR) method (also known as the simple rate of return) measures the return on a project in terms of net operating income, as opposed to using a project’s cash flow. The approach is to estimate the revenues that will be generated by a proposed investment and then to deduct from these revenues all of the projected operating expenses associated with the project. The net operating income is then related to the initial investment as shown in the following formula:

Accounting Rate of Return = Annual Net Operating Income ÷ Initial Investment

The ARR method does not consider a project’s cash flows. And, like the payback period, it ignores the time value of money; this is a critical deficiency in both methods. It can lead a manager to choose investments that do not maximize profits (thus the reason for the discounted payback period method). However, both the payback period and the ARR are useful as screening measures. In the case of the ARR, it can ensure that new investments will not adversely affect financial accounting ratios, specifically those that may be monitored to ensure compliance with debt covenants. Payback period can help identify investment proposals that managers should consider further. If a proposal doesn’t provide a payback within some specified period, the potential project can be rejected without additional consideration.

Strengths and Weaknesses of Decision-Making Models

Each of the decision-making models has its strengths and weaknesses. Internal auditors might provide assurance regarding whether an organization is using the correct method(s) to decide about a particular investment.
Auditors might also encourage an organization to use more than one method to supplement the primary analytical method for evaluating capital projects. Exhibit IV-1 summarizes the strengths and weaknesses of the four capital investment decision-making models discussed.
Exhibit IV-1: Strengths and Weaknesses of Capital Budgeting Decision-Making Models

Net present value
  Strengths:
  • Considers time value of money.
  • Additive for combined projects.
  • Uses realistic discount rate for reinvestment.
  Weaknesses:
  • Not meaningful for comparing projects requiring different amounts of investment.
  • Favors large investments.

Internal rate of return
  Strengths:
  • Considers time value of money.
  • Easy to compare projects requiring different amounts of investment.
  Weaknesses:
  • Assumption of reinvestment rate of return could be unrealistic.
  • Complex to compute if done manually.

Payback period
  Strengths:
  • Simple to use and understand.
  • Measures liquidity.
  • Allows for risk tolerance.
  Weaknesses:
  • Ignores time value of money (unless discounted payback period method is used).
  • Ignores cash flows beyond payback period.

Accounting rate of return
  Strengths:
  • Data readily available.
  • Consistent with other financial measures.
  Weaknesses:
  • Ignores time value of money.
  • Uses accounting numbers rather than cash flow.
Post-Audit of Capital Projects

A key element of the capital investment process is a follow-up analysis once the capital project has been implemented. A post-audit compares the actual benefits of the investment with the projected benefits and the actual operating costs with the projected operating costs. The post-audit also evaluates the overall outcome of the investment and proposes corrective action if necessary. It is important that auditors take into account that the assumptions driving the initial analysis might be invalidated by changes in the actual operating environment. Post-audits can be expensive to conduct; however, their benefits can often outweigh the cost. These benefits include:
• Evaluating profitability to ensure that resources are used wisely.
• Positively impacting the behavior of managers: holding managers accountable makes it more likely that they will make capital investment decisions in the best interests of the organization.
• Providing feedback to managers in order to improve capital budget decisions in the future.
Capital Structure

While capital budgeting is how businesses determine the best projects to invest in to ensure growth and future profitability, capital structure tells you where the money for capital projects comes from. Capital structure is a term used in finance to refer to how a business is structured and financed. Basically, it details the way a company finances its assets through a combination of cash, equity, and liabilities (debt). A business can get money from two sources: its owners (including outside investors) and lenders (including suppliers who extend credit to the company). Money from owners and investors is called equity financing; this includes common stock, preferred stock, and retained earnings. Borrowed money, called debt financing, is funds that have to be repaid, often with interest. It grants no ownership interest and can include bank loans or bonds.

The greater the proportion of the business financed by debt, the more highly leveraged the company is. For example, a business that sells U.S. $30 billion in equity and incurs U.S. $70 billion in debt is said to be 30% equity-financed and 70% debt-financed. The company’s ratio of debt to total financing, 70% in this example, is referred to as the firm’s leverage. Usually, companies that are heavily financed by debt have more problems when there are issues in the money markets, as they struggle to fund their assets. Different types of capital impose different types of risks for an organization. For this reason, capital structure affects the value of a company, and
therefore much analysis goes into determining what an organization’s optimal capital structure is.
Taxation

The goal of federal economic policy is to exert a stabilizing influence on the economy in order to minimize the severity of the peaks and recessions of economic cycles. The responsibility for control of the economy in the U.S. is split between Congress, using fiscal policy, and the Federal Reserve Board, using monetary policy. Fiscal policy refers to a government’s use of taxes and spending to achieve its macroeconomic goals. Fiscal policy can be discretionary—a deliberate action taken by Congress to control a swing in the economy—or nondiscretionary—long-term policy that has a built-in tendency to exert a correcting action on economic swings.

Government taxation plays a significant policy role, and governments use different kinds of taxes and tax rates to achieve different objectives. They may:

• Decrease the demand for goods and services in order to contract the economy.
• Raise money for public spending for items such as infrastructure projects, education, health care, unemployment benefits, social security, welfare, defense spending, and transportation.
• Distribute the tax burden among individuals or classes of the population involved in taxable activities such as businesses.
• Redistribute resources between individuals or classes in the population.
• Fund foreign aid and military aid.
• Modify patterns of consumption or employment within an economy by making some classes of transactions more or less attractive.

Everything a taxpayer earns, spends, and owns is called the tax base. Taxes are most often levied as a percentage of the tax base (a percentage of a taxpayer’s income or a percentage of the value of a good, service, or asset). This percentage is called the tax rate. Taxes can be classified as:

• Progressive—High-income taxpayers pay a larger fraction of their income than do low-income taxpayers. The U.S. federal tax system is progressive.
• Proportional—High- and low-income taxpayers pay the same fraction of income.
• Regressive—High-income taxpayers pay a smaller fraction of their income than do low-income taxpayers.

An important distinction when exploring tax rates is between the marginal rate and the effective rate. The effective tax rate is the total tax paid divided by the total amount the tax is paid on, or taxable income. The marginal tax rate is the rate paid on the last dollar of income earned. In a progressive tax code like that in the U.S., which has progressively higher tax rates for higher income earners, it is the last dollar of income that puts someone into a higher tax bracket; thus marginal tax rates refer to a progressive system with tax brackets. In contrast, the effective tax rate is also called the average tax rate because it is the tax that would be due if the taxpayer were subject to a constant rather than a progressive tax rate. For example, IAS 12, “Income Taxes,” requires a reconciliation disclosure in IFRS filings of the tax that would be expected if the current tax rate were applied to the accounting profit or loss, a type of effective or average tax rate.
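The marginal-versus-effective distinction can be made concrete with a small bracket schedule. The brackets below are illustrative only, not actual U.S. rates:

```python
# Illustrative progressive brackets: (upper bound of bracket, rate on that slice).
BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

def tax_due(income):
    """Tax each slice of income at its bracket's rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def marginal_rate(income):
    """Rate paid on the last dollar earned."""
    for upper, rate in BRACKETS:
        if income <= upper:
            return rate

income = 50_000
tax = tax_due(income)             # 1,000 + 6,000 + 3,000 = 10,000
effective = tax / income          # 0.20: the average rate across all income
marginal = marginal_rate(income)  # 0.30: the rate on the last dollar
```

The taxpayer is "in the 30% bracket" (marginal) yet pays only 20% of total income (effective), which is why the effective rate is also called the average rate.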
Types of Taxes

In the U.S., the federal income tax accounts for nearly 42% of federal revenues. It is imposed on the incomes of individuals and organizations and is paid on all types of income, including wages, salaries, dividends, interest, rents, and capital gains. It includes:

• Payroll tax. Levied directly on wages and salaries. This is the second most important source of federal revenue.
• Corporate income tax. Levied on the accounting profits of corporations.

The following are other types of taxes:

• Sales tax. Percentage of the amount paid for some purchases of goods and services. Sales taxes are the most important source of revenue for states. They are credited to a sales taxes payable account (cash is debited) rather than being reported as part of revenue. When the tax is remitted, the transactions are reversed. The tax is not reported as an expense.
• Use tax. Tax that is collected for a particular need, such as a gas tax levied to maintain roads.
• Value-added tax (VAT). Applies the equivalent of a sales tax to every operation that creates value.
• Property tax. Based on the value of taxable property, including residential housing, farms, factories, and business equipment. Local governments rely heavily on property taxes.
• Ad valorem tax. Any tax for which the tax base is the value of a good, service, or property. Sales taxes, tariffs, property taxes, inheritance taxes, and value-added taxes are different types of ad valorem tax.
• Capital gains tax. Tax levied on the profit realized upon the sale of a capital asset.
• Excise tax. A specific cash amount levied on a particular commodity, such as liquor. Excise taxes are based on the quantity, not the value, of the product purchased.
Tax Minimization Strategies

In a global economy, tax minimization strategies are particularly important. One of the most important components in such a strategy is transfer pricing; this is discussed below. Additional tax minimization strategies include:

• Merging and restructuring organizations in an attempt to reduce cost and risk while increasing operational efficiency.
• Structuring the organization for tax efficiency in areas such as cross-border mergers, spin-offs, foreign acquisitions, divestitures, and joint ventures to optimize after-tax cash flow.
• Off-shoring aspects of the business to the same or another company in another country to lower the cost of operations in the new location; alternately, moving headquarters to a low or zero income tax country.
• Using tax incentives for exporters.
• Using cross-border financing strategies.
• Maximizing benefits through cash repatriation, including dividends, interest, and royalties.
• In the U.S., using the modified accelerated cost recovery system (MACRS), which allows for accelerated depreciation based on the life of the asset. Faster acceleration allows a taxpayer to deduct greater amounts during the first few years of an asset’s life.
Transfer Pricing

Many organizations are decentralized, with various divisions or departments comprising the organization as a whole. Often these organizations use output from one division as the input to another. This raises a significant accounting issue: how is the transferred good or service valued?

Transfer pricing is a system for pricing products or services that are transferred from one organizational subunit (responsibility center or strategic business unit) to another within the same organization. A good or service that is transferred between two segments of an organization is called an intermediate product. For example, let’s say that one division in a candy manufacturer makes the vanilla cream (the intermediate product) that goes inside the company’s line of specialty chocolates. The transfer price in this case is the internal charge that the specialty chocolate division pays to the vanilla cream division. The
transfer payment does not necessarily involve an exchange of cash between the two divisions, but an accounting entry is made to reflect a cost to the specialty chocolate division and corresponding revenue to the vanilla cream division.

Transfer pricing:

• Affects the strategic objectives of an organization. If an organization wants the business units to behave independently and keep managers motivated to achieve organizational goals, transfer prices should be similar to those set for an external customer.
• Requires coordination among the marketing, production, and financial functions.
• Affects sourcing and, possibly, the marketing of the final and intermediate products.
• Impacts the overall revenues of a parent organization with subsidiaries or franchising operations. The parent organization can transfer significant funds to or from franchisees and subsidiaries by changing the prices for supplies and franchise fees.

Transfer pricing can also play a role in tax planning in that it allows an organization to shift income to a division in a lower-tax country. The objective of transfer pricing used in this way is to lower the company’s effective worldwide income tax obligations. Creative transfer pricing approaches applied in the context of acquisitions, divestitures, plant relocations, research and development activities, and global restructuring transactions assist in the management and minimization of global tax rates. Firms have some discretion in setting transfer prices; however, they are also constrained by existing tax laws and treaties. The method used to set transfer prices must be carefully considered.

Transfer pricing is a crucial issue for organizations with a high degree of vertical integration. A corporation that owns farms, food warehouses, distributors, and grocery stores will need to set prices for each service that will allow each portion of the business to be financially flexible.
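The income-shifting effect of a transfer price can be sketched with hypothetical figures (the divisions, tax rates, and prices below are invented for illustration):

```python
# Hypothetical two-division multinational: the selling division operates in a
# country with a 15% corporate tax rate, the buying division in a 30% country.
# The buying division sells the finished product externally for $20 per unit;
# the selling division's unit cost for the intermediate product is $6.
UNITS = 10_000
FINAL_PRICE, SELLER_COST = 20.0, 6.0
LOW_RATE, HIGH_RATE = 0.15, 0.30

def worldwide_tax(transfer_price):
    """Total income tax across both countries for a given transfer price."""
    seller_income = (transfer_price - SELLER_COST) * UNITS  # taxed at 15%
    buyer_income = (FINAL_PRICE - transfer_price) * UNITS   # taxed at 30%
    return seller_income * LOW_RATE + buyer_income * HIGH_RATE

worldwide_tax(8.0)   # low price leaves most profit in the high-tax country
worldwide_tax(14.0)  # higher price shifts income to the low-tax country
```

Total pretax profit is identical in both cases; only its allocation between jurisdictions changes, which is why tax authorities constrain the choice with the arm's-length standard discussed below.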
Setting Transfer Prices

There are three primary factors to consider in setting transfer prices: control, decentralized planning decisions, and international issues.

Control

In a decentralized organization that is partitioned into responsibility centers, managerial performance and compensation are often linked to a responsibility center’s profitability. Transfer pricing is used to provide incentives and performance measures for managers of different responsibility centers, to ensure that costs are assigned to the business unit manager responsible for the costs, and to ensure that managers are not impacted negatively or positively by transfer prices used by managers of other responsibility centers.

Decentralized Planning Decisions

The transfer pricing method used should encourage managers to make decisions about purchasing internally and externally supplied services and products that are consistent with the organization’s overall goals. Domestically, the choice of the “best” transfer price involves considering the effect of transfer pricing on the selling and buying units’ incentives. Transfer pricing:

• Should provide each business unit with the relevant information necessary to determine the optimum tradeoff between organization costs and revenues.
• Should help measure the economic performance of individual business units.
• Should be simple to understand and easy to administer.

International Issues

Transfer pricing becomes even more complicated for multinational organizations. Transfer pricing should:

• Minimize tax liability. (When transferring products or services between two countries with different corporate income tax rates, it is important that
transfer prices are set to minimize the total tax liability in both countries.)
• Minimize risks of expropriation. (When a government takes ownership and control of assets a foreign investor has invested in that country, measures must be implemented such as limiting new investment and setting the transfer price so that funds are removed from the foreign country as quickly as possible.)
• Minimize taxes, tariffs, and currency restrictions. (These and other political considerations will affect where a multinational organization operates and which transfer pricing method it chooses.)
• Incorporate alternative performance measurements if transfer prices are set in order to minimize taxes. (Managers will have to be motivated by tying performance measures to revenues, production costs, and market share rather than accounting profits.)
• Comply with all national laws and regulations.

When choosing one transfer pricing model over another, it is important to keep in mind that the fundamental objective in setting transfer prices is to motivate managers to act in the best interests of the overall organization. Typically, an organization balances that objective among the three factors described above by choosing the transfer pricing method that best fits its structure, goals, and long-term strategy.
Transfer Pricing Models There are four common transfer pricing models used by organizations to set transfer prices for products and services being “bought” and “sold” between internal divisions: market price, full cost (absorption), variable cost, and negotiated price. • Market price model. The market price model is a true arm’s-length model, because it sets the internal transfer price for a good or service at the going market price. This model can be used only when an item has a market; items such as work-in-process inventory may not have a market price. The market price keeps business units autonomous, forces the
selling units to be competitive with external suppliers, and is preferred by tax authorities. Businesses that use this model should account for the reduced selling and marketing costs in the price. Multinational organizations most often use the arm’s-length standard to set transfer prices that reflect the price that would be set by unrelated parties acting independently. • Full cost model (absorption model). The full cost (absorption) model starts with the seller’s variable cost for an item and then allocates fixed costs to the prices. Some organizations allocate standard fixed costs, because this allows the buying unit to know the cost in advance and keeps the seller from becoming too inefficient due to a captive buyer that pays for inefficiencies. Adding fixed costs is relatively straightforward and fair. However, it can alter a business unit’s decision-making process. Although fixed costs should not be included in the decision to purchase items internally or externally, often managers will purchase the “lower cost” external item even though internal fixed costs will still be incurred. • Variable cost model. The variable cost model sets transfer prices at the unit’s variable cost, or the actual cost to produce the good or service less all fixed costs. This method will lower the selling unit’s profits and increase the buying unit’s profits due to the low price. It is advantageous for selling units that have excess capacity or for situations when a buying unit could purchase from external sources but the company wants to encourage internal purchases. Tax authorities prefer that organizations not use this method because lowering the profits of a profit center can cause the unit to underreport taxable income. • Negotiated price model. The negotiated price model sets the transfer price through negotiation between the buyer and the seller (managers of different business units). 
When different business units experience conflicts, negotiation or even arbitration may be necessary to keep the organization as a whole functioning efficiently. Negotiated prices can make both buying and selling units less autonomous by forcing them to negotiate among themselves.
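The four models can be sketched side by side as follows. The variable cost (8.00), allocated fixed cost (4.00), and market price (15.00) are hypothetical figures, and the `transfer_price` helper is illustrative only, not a standard API:

```python
# Sketch of the four transfer pricing models for one hypothetical item:
# variable cost 8.00, allocated fixed cost 4.00 per unit, market price 15.00.

def transfer_price(model, variable_cost=8.0, fixed_cost_per_unit=4.0,
                   market_price=15.0, negotiated=None):
    if model == "market":
        return market_price                        # arm's-length price
    if model == "full_cost":
        return variable_cost + fixed_cost_per_unit # absorption model
    if model == "variable_cost":
        return variable_cost                       # excludes all fixed costs
    if model == "negotiated":
        return negotiated                          # set by the two managers
    raise ValueError(model)

print(transfer_price("market"))         # 15.0
print(transfer_price("full_cost"))      # 12.0
print(transfer_price("variable_cost"))  # 8.0
```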
Each transfer model has its advantages and disadvantages, as shown in Exhibit IV-2.
Exhibit IV-2: Advantages and Disadvantages of Transfer Pricing Models

Market price
Advantages:
• Helps to preserve unit autonomy.
• Provides incentive for the selling unit to be competitive with outside suppliers.
• Has arm’s-length standard desired by taxing authorities.
Disadvantages:
• Intermediate products often have no market price.
• Should be adjusted for cost savings such as reduced selling costs and no commissions.

Full cost (absorption)
Advantages:
• Easy to implement.
• Intuitive and easily understood.
• Preferred by tax authorities over variable cost.
Disadvantages:
• Overstates opportunity cost if excess capacity exists.
• Irrelevance of fixed costs in decision making; fixed costs should be ignored in the buyer’s choice of whether to buy inside or outside the organization.

Variable cost
Advantages:
• Causes buyer to act as desired (to buy inside).
Disadvantages:
• Unfair to seller if seller is a profit or investment business unit.

Negotiated price
Advantages:
• Can be the most practical when significant conflict exists.
Disadvantages:
• Need negotiation rules and/or arbitration procedure, which can reduce autonomy.
• Potential tax problems; might not be considered arm’s length.
Choosing a Transfer Pricing Method An organization must periodically reevaluate whether to make internal price transfers and, if so, which transfer price should be set. Choosing a transfer pricing model depends on the individual circumstances of a specific organization. Key factors to consider are: • Is there an outside supplier? If not, there is no market price, and the best transfer price is based on cost or negotiated price.
• Is the seller’s variable cost less than the market price? If not, the seller’s costs are likely too high, and the buyer may be more likely to buy outside. • Is the selling unit operating at full capacity? In other words, will the order from the internal buyer cause the selling unit to deny other sales opportunities? If not, the selling division should provide the order to the internal buyer at a transfer price somewhere between variable cost and market price. An internal auditor evaluates an organization’s transfer pricing systems to ensure that they meet its transfer pricing objectives. These include performance evaluation for management and business units, tax minimization, management of foreign currencies and tax compliance risks, and other strategic objectives.
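The three decision questions above can be sketched as a simple rule of thumb. The function name and return strings are illustrative only:

```python
# Sketch of the transfer-price decision factors (illustrative only).

def suggest_transfer_price_basis(has_outside_supplier, seller_variable_cost,
                                 market_price, seller_at_full_capacity):
    if not has_outside_supplier:
        # No market price exists; fall back to a cost-based or negotiated price.
        return "cost-based or negotiated price"
    if seller_variable_cost >= market_price:
        # Seller's costs are too high; the buyer is likely to buy outside.
        return "buy externally at market price"
    if seller_at_full_capacity:
        # The internal order displaces an external sale; charge market price.
        return "market price"
    # Excess capacity: any price between variable cost and market price works.
    return "between variable cost and market price"

print(suggest_transfer_price_basis(True, 8.0, 15.0, False))
```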
Chapter 2: Managerial Accounting Managerial accounting originated as a pure cost accounting discipline: collecting information on costs and reporting it to management. The development of information systems that automatically capture cost information has turned managerial accounting into a strategic discipline, concerned not just with accounting for costs but with assessing the best use of funds and the most efficient use of resources.
Topic A: General Concepts in Managerial Accounting (Level B) This topic starts by differentiating managerial accounting from financial accounting, discusses responsibility accounting, and then goes into significant detail on budgeting and budgets. The topic concludes with discussions of cost accounting and break-even analysis.
Managerial Accounting Versus Financial Accounting Perhaps the best way to understand managerial accounting is by comparing it to financial accounting. Exhibit IV-1 summarizes the differences in focus, aggregation, reports, standards, data, and audit. The primary difference, however, is the audience, or the users of the information produced. The information needs of the users or stakeholders direct all inputs, processes, and outputs of managerial and financial accounting. Note that much of the detailed information produced in managerial accounting can be used in financial accounting, but only those sources of information that conform to external accounting standards can be used for financial reporting purposes. Information not prepared using these standards is generally for management control systems and decision-making purposes only. Exhibit IV-1: Managerial Accounting versus Financial Accounting
Note also that while external auditors are the primary auditors of financial accounting and reporting, it should be clear from earlier in these materials that internal auditors can and often do play a major assurance role on the financial accounting and reporting side as well.
Responsibility Accounting In a decentralized organization, decision making is spread among managers at different levels through subunits known as responsibility centers (areas of the organization that are empowered to make their own decisions but are held responsible and accountable for the costs and spending under their direct control). Responsibility accounting is the process of recognizing those subunits (responsibility centers), assigning responsibilities to the managers of those subunits, and evaluating the performance of those managers. It is an important concept for an effective profit planning and control system. The central premise is that managers must be held responsible for those line items (revenues or costs)—and only those line items—that they can actually control. Responsibility centers can be a single individual, a department, a functional area, or a division. They are any portion of an organization in which the manager is given responsibility for costs, profits, revenues, or investments.
Responsibility accounting: • Identifies responsibility centers based on the extent of a manager’s individual responsibilities. • Holds managers responsible for deviations between budgeted goals and actual results. • Encourages managers to correct unfavorable discrepancies and to communicate feedback to higher management regarding sources of favorable and unfavorable discrepancies. • Links specific responsibilities and specialized knowledge to specific performance measures. • Personalizes accounting information by looking at costs from the perspective of personal control. A manager’s responsibilities dictate the type of responsibility center and the type of appropriate performance measure for him or her. Managers with more responsibilities typically make more complex decisions and have more control over factors that affect an organization’s value.
Responsibility Centers Responsibility centers are classified by their primary effect on an organization as a whole: Cost centers generate costs (expenses but no revenues), revenue or profit centers generate profit (revenues and expenses), and investment centers make investments (revenues, expenses, and investment return). A cost center such as a service department may generate some revenues, but the department usually has a net cost. Exhibit IV-2 describes the different responsibility centers and the responsibilities of their managers.
Exhibit IV-2: Responsibility Centers and Manager Responsibilities

Cost centers (data processing, human resources, accounting, customer service):
• Has fewest responsibilities because department generates little or no revenue and has control over a limited amount of assets.
• Must control costs through efficient use of resources.
• Rewarded for minimizing costs without sacrificing quality.
• Must follow up on cost variances. Success at removing unfavorable cost variances and analyzing favorable variances is often tied to compensation.
• Performance measures include total costs and the amount and quality of the output.

Profit/revenue centers (sales departments, bank branches, restaurants, retail shops):
• Since profit margin is a function of both revenue and costs, manager is responsible for generating revenues and controlling costs.
• Responsible for both cost and pricing of products.
• Can decide what products to produce, the quality level, and how to market the products.
• Limited to the use of a pre-specified amount of assets; does not have control over investments.
• Primary performance measures are the profit generated, quality, and customer satisfaction.

Investment centers (usually contain several profit centers and can be primarily focused on either internal or external investments):
• Has responsibilities of profit center manager in addition to the right to expand or contract the size of operations.
• Responsible for investments, costs, and revenues.
• Responsible for reviewing and approving capital budgeting and other investments such as R&D.
• Responsible for reviewing and approving temporary and long-term investments for capital maintenance, return on investment, and strategic investment.
• Can request more funds to increase capacity, develop new products, and expand geographically.
• Performance measures are more difficult to identify because of the individualized nature of the manager’s responsibilities and lack of control over many aspects of operations. Strategic investments are evaluated for their fit with the organizational strategy, while other investments are judged on their return on investment and preservation of capital.
Performance Measures Managers of responsibility centers are evaluated on the basis of
performance measures that are both accounting-based and nonfinancial. Evaluations should be based solely on factors within a particular manager’s control. Effective performance measures lead to a desired strategic result by causing a manager and other employees to strive for organizational goals. Performance measures all have their own strengths and weaknesses and are most effective when used in combination. The following are common performance measures: • Return on investment (ROI)—Divides the profit (excluding interest expense) generated by an investment center by the total assets of the investment center. • Residual income/Economic Value Added (EVA)—Calculates the difference between an investment center’s profits or operating income and the opportunity cost of using its assets. • Transfer pricing—Recognizes the interactions of different responsibility centers through the system of pricing products or services transferred within the same organization. • Productivity—A ratio measuring output against input. • Revenues, market share, and operating costs. • Profitability analysis—For products, business units, and customers. • Benchmark values—From other managers or organizations. • Critical success factors—Specific measurable goals that must be met to achieve an organization’s strategies; include both financial and nonfinancial measures. Effective performance measures must be aligned with organizational strategies, objectives, and goals. They should be tailored to the audience and the level of management to which they are directed. Used well, performance measures can act as incentives for managers and all employees; used poorly, performance measures can discourage superior performance, undermine morale, and result in organizations that are counterproductive.
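A minimal sketch of the first two measures, using hypothetical figures (operating income of 200,000, total assets of 1,000,000, and a 12% required rate of return):

```python
# Hypothetical investment center: ROI and residual income calculations.

def roi(operating_income, total_assets):
    """Return on investment: profit generated per unit of assets employed."""
    return operating_income / total_assets

def residual_income(operating_income, total_assets, required_rate):
    """Income left over after charging for the opportunity cost of assets."""
    return round(operating_income - total_assets * required_rate, 2)

print(roi(200_000, 1_000_000))                    # 0.2  (i.e., 20%)
print(residual_income(200_000, 1_000_000, 0.12))  # 80000.0
```

ROI makes centers of different sizes comparable as a percentage, while residual income shows the absolute value created above the required return; the two can rank the same centers differently.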
Budgeting As introduced in Chapter 1 in this section, operating plans and budgets are financial plans for the future; they identify organizational objectives and the actions needed to achieve them. An organization’s strategic plan can be translated into long- and short-term objectives. These objectives form the basis for an organization’s various budgets and related operating plans.
Operating Plans and Budgets This topic focuses on operating budgets, which describe the income-generating activities of an organization—its sales, production, and finished goods inventories. Operating budgets are plans that identify needed resources and the way these resources will be acquired for all day-to-day activities, including sales and services, production, purchasing, marketing, and research and development. The ultimate outcome of the operating budgets is a pro forma or budgeted set of financial statements. Operating budgets: • Are tools for short-term planning and control. • Typically cover a one-year period and state revenues and expense planning for that year. (However, some organizations use continuous rolling budgets, e.g., adding a new month to the end of the budget as each month passes.) • Fine-tune an organization’s strategic plan. • Help to coordinate the activities of several parts of an organization. • Assign responsibility to managers, authorize the amounts they are permitted to spend, and inform them of performance expected. • Are the basis for evaluating a manager’s performance. The operating budget consists of a budgeted income statement accompanied by a number of supporting budgets. These supporting budgets are used in conjunction to develop an overall operating budget:
Sales Budget The sales budget is the projection showing expected sales in units and their expected selling prices. It is the basis for all of the other budgets, so it is important that the sales budget be as accurate as possible. Preparation for the sales budget usually begins with an organization’s forecasted sales level, its long- and short-term objectives, and its production capacity. The sales budget defines the capacity needed throughout the organization, including production costs and selling and administrative costs. A sales forecast is a subjective estimate of the future sales of an organization’s products or services. Without an accurate sales forecast, all other budget elements will be inaccurate. Many organizations generate several independent sales forecasts from different sources such as marketing, managers, and the sales department. Forecasters consider: • Historical sales trends. • Economic and industry condition indicators. • Competitors’ actions. • Rising costs. • Pricing policies. • Credit policies. • Amount of advertising and marketing. • Unfilled back orders.
Exhibit IV-3 shows an example of a sales budget. Exhibit IV-3: Sales Budget
Production Budget Once the desired level of sales is determined, the production budget is created to satisfy the expected demand. The production budget is the plan for acquiring resources and combining them to meet sales goals and maintain a specific level of inventory. It is calculated by adding budgeted sales to the desired ending inventory minus the beginning inventory. Inventory levels should be kept as low as possible without constricting sales. The production budget must also take into account: • Policies regarding stabilizing production versus flexible production schedules that minimize finished inventories. • Conditions of production equipment. • Availability of production resources such as materials and laborers. • Experience with production yields and quality. Exhibit IV-4 shows an example of a production budget for four quarters. Exhibit IV-4: Production Budget
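The production budget formula (budgeted sales + desired ending inventory − beginning inventory) can be sketched as follows. The quarterly sales figures, the opening inventory, and the 10%-of-next-quarter's-sales inventory policy are hypothetical assumptions:

```python
# Sketch: required production per quarter, assuming (hypothetically) that
# desired ending inventory is 10% of the NEXT quarter's sales.

quarterly_sales = [1000, 1200, 1500, 1100]  # units, hypothetical
beginning_inventory = 100                   # units on hand entering Q1
q4_target_ending = 100                      # assumed year-end target

production = []
inventory = beginning_inventory
for q, sales in enumerate(quarterly_sales):
    if q + 1 < len(quarterly_sales):
        desired_ending = int(0.10 * quarterly_sales[q + 1])
    else:
        desired_ending = q4_target_ending
    units = sales + desired_ending - inventory  # the budget formula
    production.append(units)
    inventory = desired_ending  # this quarter's ending = next beginning

print(production)  # [1020, 1230, 1460, 1090]
```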
Merchandise Purchases Budget A merchandising organization does not have a production budget. Instead, the production budget is replaced by a merchandise purchases budget, which shows the amount of merchandise an organization needs to purchase during the period. The basic format of a merchandise purchases budget is the same as the production budget. Instead of budgeted production in units, as shown in Exhibit IV-4, the last items in a merchandise purchase budget are budgeted purchases. A merchandising organization would prepare an inventory purchases budget for each item carried in stock. Direct Materials Budget The production budget becomes the basis for preparing several other budgets for the period. The first is the direct materials budget, which determines the required materials and the quality level of the materials used to meet production. The direct materials budget is often broken down into a direct materials usage budget and a direct materials purchase budget. While the production budget specifies only the number of units to be produced, the direct materials usage budget specifies the material components and the cost of these materials. The direct materials purchase budget is concerned with direct purchases of material components and finished goods. Exhibit IV-5 shows an example of the direct materials usage budget; Exhibit IV-6 shows an example of the direct materials purchase budget. Exhibit IV-5: Usage Budget—Direct Materials
Exhibit IV-6: Purchase Budget—Direct Materials
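A minimal sketch of the usage and purchase calculations, assuming (hypothetically) a single material at 3 lb per unit produced and 2.00 per lb:

```python
# Sketch of the two direct materials budgets, with hypothetical figures.

units_to_produce = 1020         # from the production budget
lbs_per_unit = 3                # assumed material content per unit
price_per_lb = 2.00
beginning_materials = 500       # lb on hand
desired_ending_materials = 400  # lb

# Usage budget: materials consumed by planned production.
usage_lbs = units_to_produce * lbs_per_unit
usage_cost = usage_lbs * price_per_lb

# Purchase budget: buy enough to cover usage plus the inventory change.
purchase_lbs = usage_lbs + desired_ending_materials - beginning_materials
purchase_cost = purchase_lbs * price_per_lb

print(usage_lbs, purchase_lbs, purchase_cost)  # 3060 2960 5920.0
```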
Direct Labor Budget The direct labor budget is prepared by the production manager and human resources. It can help an organization plan production processes to smooth out production and keep a consistent workforce size throughout the year. Organizations that have unions or that need to use contract employees can plan accordingly to avoid emergency hiring, labor shortages, or layoffs. Labor budgets are usually broken down into categories like semiskilled, unskilled, and skilled. Organizations using just-in-time manufacturing techniques can use the direct labor budget to plan for maintenance, minor repair, installation, training, and other activities. Exhibit IV-7 shows an example of a direct labor budget. Exhibit IV-7: Direct Labor Budget
Overhead Budget The overhead budget (also called the factory or manufacturing overhead budget) includes all the production costs other than direct materials and direct labor. This is sometimes called a fixed costs budget because most of the costs in this category do not vary with the rise and fall of production. This includes things like rent and insurance. Variable costs that are included in this budget are those that may vary with production levels, such as batch set-up costs and the costs of electricity and other utilities. Fixed costs are easy to budget, but the variable costs require forecasting the number of units to be produced, the production methods used, and other external factors. Many organizations separate the overhead budget into variable and
fixed items. Exhibit IV-8 shows an example of an overhead budget. Note that in this example, total direct labor hours (DLH) is used as the cost driver to allocate factory overhead. The DLH listed in the first row is the combination of steel girder and rebar hours from Exhibit IV-7. Exhibit IV-8: Overhead Budget
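The overhead calculation with DLH as the cost driver can be sketched as follows; the fixed amount and variable rate are hypothetical figures:

```python
# Sketch: overhead split into fixed and variable parts, with total direct
# labor hours (DLH) as the cost driver. All figures are hypothetical.

budgeted_dlh = 5000           # total direct labor hours for the period
fixed_overhead = 40_000.0     # rent, insurance, and similar fixed items
variable_rate_per_dlh = 3.0   # utilities, set-ups, supplies per DLH

total_overhead = fixed_overhead + variable_rate_per_dlh * budgeted_dlh
overhead_rate = total_overhead / budgeted_dlh  # rate used to apply overhead

print(total_overhead)  # 55000.0
print(overhead_rate)   # 11.0
```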
Cost of Goods Sold Budget The cost of goods sold budget includes the total and per-unit production cost that is budgeted for a period. This budget is sometimes called the cost of goods manufactured and sold budget, since it often also includes items budgeted to be in inventory. It is created only after the production, direct materials, direct labor, and overhead budgets are formed, since it is basically a summary of those budgets. Exhibit IV-9 shows an example of a cost of goods sold budget. Exhibit IV-9: Cost of Goods Sold Budget
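Since the cost of goods sold budget summarizes the earlier budgets, it can be sketched as below. The figures are hypothetical carry-overs from such budgets, and changes in work-in-process inventory are ignored for simplicity:

```python
# Sketch: cost of goods sold as a summary of the production-cost budgets.
# Hypothetical figures; work-in-process inventory changes are ignored.

direct_materials_used = 6_120.0
direct_labor = 24_000.0
factory_overhead = 55_000.0

cost_of_goods_manufactured = (direct_materials_used
                              + direct_labor
                              + factory_overhead)

beginning_finished_goods = 9_000.0
ending_finished_goods = 7_000.0

cost_of_goods_sold = (beginning_finished_goods
                      + cost_of_goods_manufactured
                      - ending_finished_goods)

print(cost_of_goods_sold)  # 87120.0
```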
Selling and Administrative Expenses Budget Nonmanufacturing expenses are often grouped into a single budget called a selling and administrative expenses budget or nonmanufacturing costs budget. Sales expenses are included in this category, because they are not allocated to production processes but must be expensed in the period in which they occur. Exhibit IV-10 shows an example of a selling and administrative expenses budget. Exhibit IV-10: Selling and Administrative Expenses Budget
Budget Period Organizations must prepare budgets for a set time period. A typical budget is established for the one-year period that corresponds to the fiscal year of an organization. Annual budgets are often broken down into quarterly and monthly time periods to give managers regular opportunities to compare actual data with budgeted data. This process can highlight any problems and allow managers to remedy them more quickly. However, as noted earlier, an increasingly popular budgeting method is continuous budgeting (also called a rolling budget), which is a 12- to 18-month budget system that rolls forward one period as the current period is completed. A continuous budget may be prepared on a monthly, quarterly, or annual basis. As each period ends, the upcoming period’s budget is revised and another period is added to the end of the budget. This system has the advantage of keeping managers focused on the future at least one year ahead and ensures that the budgets remain up-to-date with the operating environment. Special software makes continuous budgets feasible to implement.
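The mechanics of a rolling budget can be sketched with a fixed-length queue: as each month closes, it is dropped and a newly planned month is appended. The periods and amounts are hypothetical:

```python
# Sketch of a 12-month rolling (continuous) budget. Hypothetical amounts.
from collections import deque

# Twelve months of planned amounts; maxlen=12 keeps the window fixed.
budget = deque([("2024-%02d" % m, 10_000) for m in range(1, 13)], maxlen=12)

def roll_forward(budget, new_period, planned_amount):
    """Drop the completed month and append the newly budgeted one."""
    budget.append((new_period, planned_amount))  # maxlen evicts the oldest

roll_forward(budget, "2025-01", 10_500)
print(budget[0][0], budget[-1])  # 2024-02 ('2025-01', 10500)
```

The `maxlen` eviction mirrors the key property of the method: the planning horizon always extends twelve months ahead, regardless of where the fiscal year stands.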
Budget Approaches Most organizations construct next year’s budget by starting with the current year’s budget as a baseline and then adjusting each line item for expected price and volume changes. This traditional approach to budgeting is known as incremental budgeting. In incremental budgeting, a manager starts with last year’s budget and adds to it (or subtracts from it) according to anticipated needs. But alternative budgeting approaches are also being used in many organizations. When used properly, these approaches can greatly improve budget effectiveness. These alternative budget approaches include: • Project budgeting. Project budgets are used when a project is completely separate from other elements of an organization or is the only element of a company. This includes projects like a movie, a road, or an aircraft. Project budgets are also used for smaller projects. When these projects use resources and staff that are committed to an entire organization, care must be taken that the project budget contains links to different cost centers and lines of responsibility.
• Activity-based budgeting. An activity-based budget (ABB) focuses on activities. ABB proponents feel that traditional budgeting, which focuses on departments or products, obscures the relationship between costs and outputs by oversimplifying the measurements into categories like labor hours, machine hours, or outputs for an entire process or department. Instead of using only volume drivers as a measurement tool, ABB also uses activity-based cost drivers, such as the number of set-ups in a process or an operation, to make a clear connection between resource consumption and output. This allows managers to see how resource demands are affected by changes in the products being offered, product designs, manufacturing processes, market share, and customer base. Exhibit IV-11 highlights some of the differences between activity-based budgeting and traditional budgeting.
Exhibit IV-11: Differences Between Activity-Based and Traditional Budgeting

Activity-based budgeting:
• Emphasizes value-added activities and expresses budgeting units in terms of activity costs.
• Encourages teamwork, continuous improvement, and customer satisfaction.
• Provides opportunities for cost reduction and elimination of wasteful activities.
• Identifies value-added vs. non-value-added activities.
• Coordinates and synchronizes activities of the entire organization to serve customers.

Traditional budgeting:
• Emphasizes input resources and expresses budgeting units in terms of functional areas.
• Encourages increasing management performance.
• Relies on past (historical) budgets and often continues funding items that would be cut if their cost-effectiveness (or lack thereof) were known.
• Minimizes variances and maximizes individual responsibility unit performances.
• Zero-based budgeting. A zero-based budget helps organizations avoid situations in which ineffective elements of the business continue to exist simply because they were part of a previous budget. Such a budget starts with zero dollars allocated to budget items rather than making incremental changes to already existing allocations. These budgets focus on constant cost justification by forcing managers to conduct in-depth reviews of each area under their control. Zero-based budgets can create efficient and lean
organizations by encouraging regular, periodic review of all activities and functions. This type of budgeting approach is popular with government and nonprofit organizations. However, there are serious drawbacks to zero-based budgeting. The primary drawback is that it encourages managers to exhaust all their resources during a budget period for fear that they will be allocated less during the next budget cycle. Other drawbacks to zero-based budgeting include: • It can encourage a significant amount of waste and unnecessary purchasing if a manager has incorporated budget slack into the budget. • The annual review process can be time-consuming and expensive. • Not using prior budgets can lead to ignoring lessons learned from previous years. These drawbacks can be mitigated by performing zero-based budgeting only on a periodic basis or by performing this type of budget process for a separate department each year, perhaps following an internal audit of that department. • Kaizen budgeting. Continuous improvement (“kaizen” in Japanese) has become a common practice for organizations operating in a globally competitive environment. A kaizen budget is a budgeting method that incorporates continuous improvement by focusing on planned future operating processes rather than current operating practices. Kaizen budgeting starts by identifying areas of improvement and determining expected changes needed to achieve the desired improvements. Budgets are prepared based on the improved practices or procedures, which typically results in more efficient, lower-cost budgets. The benefits of a kaizen budget include its proactive changes, which are often mandated by organizational policies that attempt to lower costs without sacrificing productivity. The drawbacks are that managers may lower quality levels or move production processes to cheaper labor markets to reduce costs. Also, unlike zero-based budgets, kaizen budgets emphasize improving existing expenditures. 
Some projects that should have been cut or added may be ignored in favor of incremental improvements.
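A kaizen budget's planned-improvement logic can be sketched as below; the 2% per-period improvement rate and the starting cost are assumed figures:

```python
# Sketch: a kaizen budget plans a small cost reduction each period rather
# than extrapolating current costs. The 2% rate is a hypothetical target.

def kaizen_budget(current_cost, periods, improvement_rate=0.02):
    """Planned cost for each upcoming period under continuous improvement."""
    costs = []
    cost = current_cost
    for _ in range(periods):
        cost *= (1 - improvement_rate)  # each period must beat the last
        costs.append(round(cost, 2))
    return costs

print(kaizen_budget(100_000.0, 3))  # [98000.0, 96040.0, 94119.2]
```

Note how the budget embeds the improvement target directly: managers are held to the declining cost line, not to last period's actuals.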
Budgeting Concerns General Concerns A common concern expressed by managers and managerial accountants is that the budgets that are produced over the course of several months before the start of the next year are out of step with the actual operating environment either before they are produced or soon after they are produced. In an APQC (American Productivity and Quality Center) survey, managers were asked “At what point in the year do the assumptions used to develop budgets become so materially different that they lose their effectiveness?” The respondents’ answers were as follows: • Never happens—17.1% • Before year begins—5.6% • During first quarter—20.2% • During second quarter—28.9% • During third quarter—22.7% • During fourth quarter—5.6% Traditional budget relevance is a function of the volatility of the economic environment, the sensitivity of the organization’s products and services to changes in its economic and operating environment, and the complexity of the organization. Smaller or less complex organizations are more likely to be able to adapt to changes and are therefore less affected. More complex organizations base their budgets on more assumptions, and the further these assumptions are from actual results, the less useful a static budget becomes over time. These budget challenges are a common reason why some organizations are moving toward a continuous or rolling budget process. However, this is a
major change for an organization, and many unwritten rules or organizational values are based on the traditional budget. These unwritten rules, such as always inflating a budget because it will go through a round of cuts prior to the final budget, or always meeting the budget but never beating it, can undermine any change effort in this area. Therefore, such a change might be treated as business reengineering, with major change management initiatives and realignment of measurements and incentives. Simply duplicating the current budget process every month will only create more work without necessarily improving results. International Concerns Organizations operating in an international setting must pay special attention to adapting their budgets to the specific environments in which they operate. A multinational company faces several unique budgeting concerns, including: • Cultural and language differences. • Fluctuating monetary exchange rates. • Local economic conditions. • Dissimilar political and legal environments. • Discrepancies in the inflation rates of different countries. • Governmental policies affecting labor costs, equipment purchases, cash management, and other budget items. It is important for managers to be aware that budgeting procedures acceptable in one country may not be acceptable in another. In addition, fluctuating currency exchange rates and different inflation rates must be incorporated into the budget, because changes in these rates can affect an organization’s budgeted purchasing power, operating income, and cash flows. Organizations operating in high-inflation countries should also reduce budget lead times and revise budgets frequently in light of the actual inflation they experience.
Cost Accounting Managers classify costs depending on how the costs will be used—for preparing external reports, predicting cost behavior, assigning costs to cost objects, and decision making. To understand managerial accounting, you need to be familiar with basic cost terminology. It is important to note that the terms used for different costs and the ways these costs are classified and measured can vary from organization to organization.
Basic Cost Terms and Concepts The following terms and concepts will be discussed in the remainder of this chapter: • Cost. Any resource that must be given up to obtain some objective. Costs can be money paid for a good or service, a new liability, or giving up an asset. They include actual (historical) and budgeted (forecasted) costs. • Cost object. Any object that can have a cost applied to it and can be used to determine how much a particular thing or activity costs. These include products, services, customers, projects, departments, and activities. • Cost driver (also called an allocation base). Any factor that has a cause-and-effect relationship with costs, such as a rise in sales volume that affects a rise in sales commissions. • Actual costs. The historical cost paid for goods or services. • Direct costs. Any costs that can be easily and accurately traced to a cost object (usually direct labor and direct materials). Direct costs for a fast-food hamburger might include 0.15 labor hours and the cost of the ground beef and the bun. Direct costs for a provider of a service, like a professional consulting services firm, might be the labor costs of professionals who provide client services. • Indirect costs. Any costs that are related to a cost object but cannot be easily and accurately traced to the product (such as overhead). Indirect costs for a hamburger include maintenance costs of the fast-food machinery, utility costs of the franchise building, and the franchise
manager’s salary. Indirect costs for a professional services firm include the cost of its office furniture and cubicles. These indirect costs are allocated (assigned to a cost object) through reasonable estimation. • Differential costs (also called incremental costs). The difference in costs between any two alternatives. • Opportunity costs. The potential benefits given up when one alternative is selected over another. These costs are not typically entered into the accounting records of an organization but must be considered in management decision making. • Sunk costs. Any costs that have already been incurred and that cannot be changed by any decision made now or in the future. These are not differential costs and should be ignored when making a business decision. Costs are associated with all types of organizations, including manufacturing, merchandising, and service providers. Manufacturing organizations purchase or extract materials and combine or convert them into new finished goods. Merchandise organizations (retailers, wholesalers, and distributors) buy goods for resale at markup without changing the basic form of the items. Service organizations provide intangible services to customers (health care, insurance, banking, etc.). These organizations differ in some of the specific costing information they need for planning, controlling, and decision making. For example, manufacturing organizations are more complex than merchandising organizations because a manufacturer must produce goods as well as market them. Because of this, manufacturing organizations have the most complicated costs of the three types of organizations. Despite their differences, however, these organizations also share many of the same basic activities. For that reason, an understanding of the basic cost principles in manufacturing companies can be helpful in understanding the costs in other types of organizations.
Product vs. Period Costs For purposes of valuing inventories and determining expenses for an organization’s balance sheet and income statement, costs are classified as either product costs or period costs.
Product Costs Product costs (also known as inventoriable costs or manufacturing costs) are those costs associated with the manufacture of goods or the provision of services. These costs are assigned to inventories and are considered assets until the products are sold. At the point of sale, product costs become cost of goods sold on the income statement. Product costs may be categorized as follows. • Prime costs. The combination of direct labor and direct materials costs. Direct materials are those that become an integral part of the finished product and that can be physically traced to it. Direct labor costs are those that can be easily traced to individual units of product. • Conversion costs. The combination of direct labor and overhead costs. Manufacturing overhead is a conversion cost. It includes all costs of manufacturing except direct materials and direct labor. This includes depreciation of factory equipment and buildings, maintenance and repairs of equipment, utility costs, property taxes, supervisor costs, and other costs associated with operating the manufacturing facilities. Product costs differ for manufacturers, merchandisers, and service providers. Manufacturers consider only the costs needed to complete a product to be product costs (direct materials, direct labor, and overhead). Merchandise companies buy their goods in a finished state. Their product cost is whatever they pay for the products purchased, including freight costs. These are typically charged into a single inventory account called merchandise inventory. Service companies have little or no inventory. For inventory that does exist, if the service organization manufactures the goods, the inventory is treated as if the organization were a manufacturing company. If the service organization buys the goods already made, the inventory is treated as merchandise inventory. 
Period Costs Period costs (also called operating expenses or nonmanufacturing costs) are all the expenses that cannot be included in product costs and must be expensed in the period in which they occur. Costs that cannot be reasonably
allocated to a specific product are expensed (and not inventoried) because they are not expected to provide measurable future benefits. Period costs include: • Marketing or selling costs. All costs necessary to secure customer orders and get the finished product into the hands of the customer. These include advertising, shipping, sales commissions, and storage costs in shipping warehouses. • Administrative costs. All executive, organizational, and clerical costs associated with the general management of the organization as a whole. These include executive compensation, public relations, and secretarial costs. In a manufacturing organization, period costs can often be 25% of sales revenue, so controlling these costs can achieve measurable savings. The same is true in service or merchandise organizations, which may incur significant marketing costs.
Cost Behavior For planning purposes, managers must be able to predict how certain costs will behave in response to changes in the level of business activity. As the activity level rises or falls, a particular cost may rise or fall as well. Or it may remain constant. This is known as cost behavior. For example, a manager at a shoe manufacturing company who expects sales to jump by 10% next year will need to know how that will affect the total costs budgeted for the factory. The amount of raw materials and labor will increase, but the factory building itself won’t expand, nor will a new custodian or secretary be necessary. In order to develop an accurate budget for next year, the manager needs to understand the behavior of all the different costs affected. To help make distinctions about which costs will change and by how much, costs are often categorized as variable, fixed, or mixed. Variable Costs Variable costs rise and fall as the output level rises and falls. An example
of a variable cost is direct materials. The cost of direct materials used during a period will vary in direct proportion to the number of units produced. In our tennis shoe company example, each pair of tennis shoes uses two shoelaces. If the output of tennis shoes increases by 10% next year, so will the number of shoelaces used. Shoelaces are a variable cost. Variable costs are normally expressed with respect to the total amount of goods and services an organization produces. In a manufacturing organization, variable costs include direct labor and raw materials, utilities, and waste disposal. In a merchandising organization, the costs of goods sold, commissions to salespeople, and billing costs are variable costs. In a hospital, the costs of supplies, drugs, meals, and nursing services are variable costs. As output increases, variable costs increase at different rates. At low levels of production, many resources may not be used fully or most efficiently. At high production levels, diminishing returns cause variable costs to accelerate. Between the extremes, most resources are used efficiently and variable costs rise more slowly. Fixed Costs Fixed costs are the portions of the total cost that remain constant regardless of changes in the level of activity over the relevant range (see below). Rent is a good example of a fixed cost. A coffee shop that rents a sophisticated espresso machine pays the same monthly rental fee whether it makes 12 cups that month or 120. Fixed costs are independent of the level of production. However, the following items are important to understand with regard to fixed costs: • Very few costs are completely fixed. Most will change if there is a large enough change in activity (i.e., above or below the relevant range). If the capacity of the espresso machine is 1,000 cups per month and the coffee shop suddenly needs to make 1,200 cups per month, it would most likely need to rent a second espresso machine, and its fixed costs would increase.
• Fixed costs create confusion when expressed on a per-unit basis because the average fixed cost per unit increases and decreases inversely with changes in activity. For example, the average cost per cup of coffee will fall as more cups are sold, because the U.S. $500 monthly rental cost of the machine will be spread out over more cups. Conversely, the average cost per cup of coffee will increase if fewer people buy coffee and the cost of renting the machine is spread over fewer cups. Depreciation, insurance, property taxes, supervisory salaries, administrative salaries, and advertising are examples of fixed costs. To say that a cost is fixed means that it is fixed within some relevant range. The relevant range is the range of activity within which the assumptions about variable and fixed costs are valid. This is typically expressed as specific cost drivers for a specific duration of time. For example, the assumption that the rent for the espresso machine is U.S. $500 per month is valid within the relevant range of 0 to 1,000 cups per month. Increase the number of cups sold per month above 1,000, and an additional espresso machine will be needed, and those fixed costs will increase to a new fixed level (U.S. $1,000/month). Mixed Costs The time horizon is important for determining cost behavior, because costs can change from fixed to variable depending on whether the decision takes place over the short run or the long run. Total costs are all the fixed and variable costs for a cost object. Mixed costs are a combination of fixed and variable costs. All three cost patterns are found in most organizations.
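The fixed-cost, relevant-range, and per-unit relationships described above can be sketched numerically. This is an illustrative example, not from the text's materials beyond the espresso machine figures (U.S. $500 rent per machine, 1,000-cup capacity); the variable cost per cup is an assumed value.

```python
import math

MACHINE_RENT = 500.0          # fixed cost per machine per month (from the text)
MACHINE_CAPACITY = 1000       # cups per machine per month (the relevant range)
VARIABLE_COST_PER_CUP = 0.40  # assumed: beans, milk, cup

def monthly_cost(cups: int) -> dict:
    """Total cost = step-fixed cost (by machine count) + variable cost."""
    machines = max(1, math.ceil(cups / MACHINE_CAPACITY))
    fixed = machines * MACHINE_RENT
    variable = cups * VARIABLE_COST_PER_CUP
    return {
        "machines": machines,
        "fixed": fixed,
        "variable": variable,
        "total": fixed + variable,
        "avg_fixed_per_cup": fixed / cups if cups else None,
    }

# Average fixed cost per cup falls as volume rises within the relevant range...
print(monthly_cost(100)["avg_fixed_per_cup"])   # 5.0
print(monthly_cost(1000)["avg_fixed_per_cup"])  # 0.5
# ...and fixed costs step up to a new level above the relevant range.
print(monthly_cost(1200)["fixed"])              # 1000.0 (a second machine)
```

Note how the sketch captures both points from the text: per-unit fixed cost varies inversely with activity, while total fixed cost is constant only inside the relevant range.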
Absorption/Full Costing and Variable/Direct Costing The essential purpose of any managerial costing system is to provide cost data to help managers plan, organize, direct, and control. However, external financial reporting and tax reporting requirements also influence how costs are accumulated and summarized on managerial reports. Although costing systems will be discussed in more detail in the next topic, we will touch here on the two general approaches used for costing products for the purpose of valuing inventories and cost of goods sold: absorption costing
and variable costing. Absorption Costing Absorption costing (also known as full costing) is a method of inventory costing in which all variable and fixed manufacturing costs are included as inventoriable costs; thus inventory “absorbs” all manufacturing costs. Absorption costing: • Uses a gross margin format on an organization’s income statement. • Is the format required for external financial reporting. • Highlights the differences between manufacturing and nonmanufacturing costs. • Treats each finished unit as having absorbed its share of the fixed manufacturing costs (an inventoriable cost). • Defers fixed manufacturing costs in ending inventory to future periods. In addition, under absorption costing, if more units are produced than sold (inventory is increasing), net income will be higher than under variable costing because the fixed manufacturing costs attached to unsold units remain in inventory. Variable Costing Variable costing (also known as direct costing) is a method of inventory costing in which only variable manufacturing costs are included as inventoriable costs; fixed manufacturing costs are treated as costs of the period in which they are incurred. Variable costing: • Uses a contribution margin format on the income statement. • Highlights the distinction between fixed and variable costs. • Deducts fixed manufacturing costs as an expense. • Expenses fixed manufacturing costs in the period in which the inventory is created.
In addition, under variable costing, if more units are produced than sold, net income will be lower than under absorption costing because not as many costs end up in inventory compared to cost of goods sold. Both variable and absorption costing expense all nonmanufacturing costs (both fixed and variable) in the period in which they occur. The only difference between the methods is how they account for fixed manufacturing costs. Exhibit IV-12 illustrates the classification of costs as product or period costs under absorption and variable costing.
Exhibit IV-12: Classification of Costs Under Absorption and Variable Costing
Absorption Costing
• Product costs: direct materials, direct labor, variable overhead, fixed overhead.
• Period costs: selling expenses, administrative expenses.
Variable Costing
• Product costs: direct materials, direct labor, variable overhead.
• Period costs: fixed overhead, selling expenses, administrative expenses.
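The income difference between the two methods can be illustrated with a small calculation. All figures here are hypothetical; the only mechanic taken from the text is that absorption costing carries fixed manufacturing overhead into inventory while variable costing expenses it immediately. The sketch assumes no beginning inventory.

```python
def income_comparison(units_produced, units_sold, usp, uvc_mfg, fixed_mfg_oh, period_costs):
    """Compare operating income under absorption vs. variable costing
    (assumes no beginning inventory; all nonmanufacturing costs expensed)."""
    revenue = units_sold * usp
    fixed_oh_per_unit = fixed_mfg_oh / units_produced  # absorbed by each unit

    # Absorption: fixed mfg overhead attaches to units; unsold units defer it.
    absorption_cogs = units_sold * (uvc_mfg + fixed_oh_per_unit)
    absorption_income = revenue - absorption_cogs - period_costs

    # Variable: fixed mfg overhead is expensed in full this period.
    variable_cogs = units_sold * uvc_mfg
    variable_income = revenue - variable_cogs - fixed_mfg_oh - period_costs

    return absorption_income, variable_income

# 1,000 units produced but only 800 sold: inventory grows, so absorption
# income is higher by 200 unsold units x $10 of fixed overhead per unit.
absorp, varib = income_comparison(1000, 800, usp=50, uvc_mfg=20,
                                  fixed_mfg_oh=10000, period_costs=5000)
print(absorp, varib)  # absorption: 11000, variable: 9000
```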
Benefits and Limitations of Absorption and Variable Costing Absorption costing is the standard method used in most countries, including the U.S., because it is required for external financial reporting under IFRS and GAAP and for tax reporting by the U.S. Internal Revenue Service. It is also used by the majority of organizations around the world for managerial accounting purposes, because many accountants argue that it better matches costs with revenues. However, the limitations of absorption costing include: • It allows managers to manipulate operating income simply by increasing production. • It can encourage managers to increase inventory even if no additional demand exists if a bonus or some other incentive is tied to operating
income. • It can encourage managers to produce items that absorb the highest fixed manufacturing costs instead of what is best for the company. To counter these and other improper management incentives, an organization may want to use variable costing for internal reporting. Variable costing: • Allows a manager less latitude about what to produce. • Can provide a disincentive for accumulating inventory, such as a percentage carrying charge on all ending inventory. • Emphasizes the impact of fixed costs on profits. • Makes it easier to estimate the profitability of products, customers, and other segments of business. • Ties in with cost control methods such as standard costs and flexible budgets.
Cost Analysis It is important that internal auditors have a solid understanding of basic cost concepts to ensure that managers have the information necessary for appropriate reporting and decision making and to ensure that an organization has sound financial practices and adequate internal accounting controls in place. Business leaders use both cost-benefit and cost-volume-profit analysis to assist in making critical decisions. However, each analysis is used for specific purposes and aids different aspects of planning and management decision making. Using the right analysis at the right time can have a dramatic impact on the efficiency and effectiveness of operations.
Cost-Benefit Analysis Cost-benefit analysis is a managerial accounting approach to making business decisions. This analytical tool assesses the positive and negative
consequences of a proposed action. It quantifies all of the positive factors (benefits) and subtracts all of the negative factors (costs). The difference between the two indicates whether the planned action is advisable. Cost-benefit analysis can include both quantitative and qualitative factors. However, it often works best when most of the costs and benefits can be reduced to financial terms, so they can be more easily compared. The key to a successful cost-benefit analysis is making sure to include all of the costs and all of the benefits and to properly quantify them. This type of analysis attempts to predict the financial impacts and other business consequences of an action. It can identify the following: • Hard dollar savings (actual quantitative savings) • Soft dollar savings (qualitative savings, such as management/labor time or building space) • Cost avoidance (elimination of future costs) In general terms, cost-benefit analysis is used to find a balance between the benefits and costs of specific actions.
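In its simplest quantitative form, the analysis nets quantified benefits against quantified costs. The following sketch is purely illustrative; every figure and category value is hypothetical.

```python
def net_benefit(benefits, costs):
    """Cost-benefit analysis: sum the quantified benefits, subtract the
    quantified costs. A positive result suggests the action is advisable
    on financial grounds (qualitative factors still need judgment)."""
    return sum(benefits.values()) - sum(costs.values())

result = net_benefit(
    benefits={"hard dollar savings": 60000.0,
              "soft dollar savings (freed staff time)": 15000.0,
              "cost avoidance (eliminated future costs)": 10000.0},
    costs={"software license": 40000.0, "training": 12000.0},
)
print(result)  # 33000.0 -> net benefit is positive, so proceed
```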
Cost-Volume-Profit Analysis In contrast, cost-volume-profit (CVP) analysis helps managers understand the interrelationships among cost, volume, and profit by focusing on the interactions among five factors: prices of products, volume or level of activity, per-unit variable costs, total fixed costs, and mix of products sold. CVP analysis has many decision-making applications, including setting prices for products and services, introducing a new product or service, replacing a piece of equipment, deciding whether to make or buy a specific product or service, and performing strategic “what-if” analyses. Additional uses include: • Determining how many units must be sold to earn a target profit level at either a targeted operating income or targeted net income. • Determining the sensitivity of profits (or break-even) to possible changes
in cost or sales volume. • Calculating the break-even point with two or more products using the weighted average contribution margin. CVP analysis is also used for planning purposes. Because of that, it is important to measure the opportunity costs of any investment decision. The cost of using noncash or cash resources to make a product or develop a service should reflect the alternative use of those resources. If cash is borrowed, the interest expense should be included in the analysis as well as the forgone interest (the opportunity cost) of the cash used to make the investment. CVP analysis is based on an explicit model of the relationships among its three factors—costs, revenues, and profits—and it tracks how they change in a predictable way as the volume of activity changes. The CVP model is:

Operating profit = Total revenues − Total costs

or, equivalently, since total costs include both variable and fixed cost elements:

Operating profit = Total revenues − Total variable costs − Total fixed costs

Replacing revenues with the quantity of units sold times the unit selling price and replacing variable cost with unit variable cost times the quantity of units sold, the CVP model is:

Operating profit = (Unit selling price × Quantity sold) − (Unit variable cost × Quantity sold) − Total fixed costs

The symbolic form of the model is:

OP = (USP × Q) − (UVC × Q) − FC

Where:
• USP is the unit selling price. • Q is the quantity sold. • FC is the total fixed cost. • UVC is the unit variable cost. • OP is the operating profit (profits not including unusual or nonrecurring items and income taxes). Assumptions of CVP Analysis The CVP analysis discussed in this section makes these assumptions: • Total costs can be divided into fixed and variable costs with respect to levels of output (the amount of goods produced or services provided by an organization). • Total revenues and total costs have a linear (straight-line) relationship to output units within a relevant range. In other words, within a limited range of output, total costs are expected to increase at a linear rate. Exhibit IV-13 shows a simple representation of this linear relationship. Exhibit IV-13: CVP Graph of Total Revenues, Total Costs, and Output Levels
CVP analysis makes additional assumptions that may or may not be true in a specific scenario: • The selling price is constant. The price of a product or service will not change as volume changes.
• In multiproduct companies, the sales mix (the relative proportion in which a company’s products are sold) is constant. • In manufacturing companies, inventories do not change. The number of units produced equals the number of units sold. Even if these assumptions do not hold true in every instance, the basic validity of CVP analysis typically remains. The benefits of this type of analysis include its simplicity and the low-cost approximation it provides of the profit effect of an investment. However, it is important to acknowledge the basic assumptions of CVP analysis and to use it as one method among many to assess the potential benefits of any investment.
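The symbolic CVP model translates directly into code. This is an illustrative sketch (Python is a choice made here, not part of the text); the figures are taken from the software-product scenario used in the break-even discussion: USP = $200, UVC = $100, FC = $4,000, and Q = 75 units sold.

```python
def operating_profit(usp, q, uvc, fc):
    """CVP model: OP = (USP x Q) - (UVC x Q) - FC."""
    return (usp * q) - (uvc * q) - fc

# At the scenario's sales volume of 75 units:
print(operating_profit(usp=200, q=75, uvc=100, fc=4000))  # 3500

# At 40 units, operating profit is zero -- the break-even point:
print(operating_profit(usp=200, q=40, uvc=100, fc=4000))  # 0
```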
Break-Even Analysis CVP analysis is sometimes referred to as break-even analysis. Technically, break-even analysis is only one part of CVP analysis; however, it is an important determinant and can be used in assessing how various “what-if” decision alternatives will affect operating income. The break-even point is the output level at which total revenues and total costs are equal. At break-even, operating income is zero. Above the break-even point, operating income levels are profitable; below break-even, there is a loss. The break-even point can be determined using three different methods: an equation method, a contribution margin method, and a graph method. The three methods will be described using the following scenario. A computer software maker has introduced a new product. The unit selling price for the product is U.S. $200. The fixed costs for the product are U.S. $4,000. The variable selling costs for the product are U.S. $100 per unit, and the quantity of the product sold is 75.
Equation Method A common equation method for computing the break-even point is:

Revenues − Variable costs − Fixed costs = Operating income

or, in symbolic form:

(USP × Q) − (UVC × Q) − FC = OI
Where: • USP is the unit selling price. • Q is the quantity sold. • UVC is the unit variable costs. • FC is the fixed costs. • OI is the operating income. At the break-even point, operating income is zero. Setting operating income to zero and inserting the numbers in the equation, the break-even point for the scenario (expressed in units) is calculated as follows:

($200 × Q) − ($100 × Q) − $4,000 = $0
$100 × Q = $4,000
Q = 40 units
In this example, selling fewer than 40 units will be a loss, selling 40 units will be break-even, and selling more than 40 will make a profit. Contribution Margin Method The contribution margin method is an algebraic adaptation of the equation method. The contribution margin represents the amount remaining from sales revenue after variable expenses are deducted. It is the amount available to cover fixed expenses and then to provide profits for the period. If the contribution margin is not sufficient to cover the fixed expenses, there is a loss for the period. The contribution margin is found by taking revenues and subtracting all costs of the output that vary with respect to the number of output units. The contribution margin method is based on the following equation:

(UCM × Q) − FC = OI
Where: • USP is the unit selling price. • Q is the quantity sold. • UVC is the unit variable costs. • FC is the fixed costs. • OI is the operating income. • UCM is the unit contribution margin (USP – UVC). Setting operating income to zero and inserting the numbers in the contribution margin method, the break-even point (expressed in units) for the same scenario is calculated as follows:

Q = FC ÷ UCM = $4,000 ÷ ($200 − $100) = 40 units
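Both algebraic methods can be verified with a short script. This is an illustrative sketch using the scenario figures from the text (USP $200, UVC $100, FC $4,000); the function names are our own.

```python
def break_even_equation(usp, uvc, fc):
    """Equation method: solve (USP x Q) - (UVC x Q) - FC = 0 for Q."""
    return fc / (usp - uvc)

def break_even_contribution(usp, uvc, fc):
    """Contribution margin method: Q = FC / UCM, where UCM = USP - UVC."""
    ucm = usp - uvc  # unit contribution margin
    return fc / ucm

# Both methods agree, since one is an algebraic rearrangement of the other:
print(break_even_equation(200, 100, 4000))      # 40.0 units
print(break_even_contribution(200, 100, 4000))  # 40.0 units
```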
Graph Method A CVP graph (or break-even chart) shows the interrelationships among cost, volume, and profit graphically. The activity level (unit volume) is shown on the horizontal (x) axis, and dollars are shown on the vertical (y) axis. Total costs and revenues are both plotted as lines; their point of intersection is the break-even point. Exhibit IV-14 shows a CVP graph of break-even analysis. Exhibit IV-14: CVP Graph of Break-Even Analysis
Topic B: Costing Systems (Level B) Product costing is the process of accumulating, classifying, and assigning direct materials, direct labor, and factory overhead costs to products and services. The way in which a product or service is costed can have a substantial impact on reported net income as well as key management decisions. Product costing provides useful cost information for all types of organizations for: • Inventory management and costing of products and services. • Management planning, cost control, and performance measurement. • Strategic and operational decision making. There are two primary types of product costing systems: cost measurement (allocation) systems and cost accumulation systems. The choice of a particular system depends on the nature of the industry and the product or service, the organization’s strategy and management information needs, and the costs and benefits of acquiring, designing, modifying, and operating a particular system.
Cost Measurement (Allocation) Systems Cost measurement (allocation) systems apply costs to the appropriate products, jobs, or services. Three cost measurement methods are discussed in this topic: • Actual costing. • Normal costing. • Standard costing.
The primary difference among these costing methods is the approach each takes to assigning or allocating overhead costs (all production costs other than direct materials and direct labor) to cost objects. Allocation is necessary because overhead costs are not traceable to individual cost
objects.
Actual Costing An actual costing system records the actual costs incurred for direct materials, direct labor, and overhead (by allocating actual amounts). The actual costs are determined by waiting until the end of the accounting period and then calculating the costs based on the recorded amounts. The primary benefit of actual costing is that it is more accurate than other costing systems. However, strict actual costing systems are rarely used because their limitations far outweigh the benefits. Limitations of an actual costing system include: • Its inability to provide accurate unit cost information on a timely basis. Costs cannot be known until all of the invoices are received, which may not be until the end of the fiscal year or later. • The difficulty of assigning overhead items such as property taxes, organizational employee salaries, and insurance, which do not have the direct relationship that direct materials and direct labor do. For example, how much of a custodian’s salary should be assigned to a unit of product or service? • Distorted period costs due to overhead items such as property taxes that are billed once or twice a year. Overhead costs in those billing periods would be higher than in other periods. Even if an organization averages overhead costs by totaling manufacturing overhead costs for a given period and then dividing this total by the number of units produced, distorted costs can still occur. Because the number of units produced (or services offered) varies from period to period but fixed costs do not vary with these changes, actual costing makes costs per unit vary for products produced in different periods. Organizations interested in smoothing out cost fluctuations in cost per unit turn instead to normal costing.
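The unit-cost distortion described above can be illustrated with a short sketch: when overhead is largely fixed, actual cost per unit moves inversely with volume from period to period. The overhead figure and volumes here are hypothetical.

```python
def actual_unit_overhead(actual_overhead, units_produced):
    """Averaged actual overhead per unit for one period."""
    return actual_overhead / units_produced

# The same $50,000 of (mostly fixed) overhead spread over different volumes:
print(actual_unit_overhead(50000.0, 10000))  # 5.0 per unit in a busy month
print(actual_unit_overhead(50000.0, 5000))   # 10.0 per unit in a slow month
# The product itself is unchanged, yet its reported cost doubles.
```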
Normal Costing Normal costing is the most widely used method of costing. It solves the problems associated with actual costing. A normal costing system applies actual costs for direct materials and direct labor to a job, process, or other cost center and then uses a predetermined rate to assign overhead. This rate is based on the predetermined factory overhead application rate and the activity of a cost driver or allocation base of the cost center. Normal costing is used by most organizations because: • Actual overhead costs are not readily available or cannot be easily allocated within the time frame allowed for period-end statements. • It helps an organization keep product costs current. Using a standard rate for overhead plus actual labor and actual materials costs allows for the immediate calculation of an item’s costs. • It helps an organization smooth out or “normalize” fluctuations in factory overhead rates in order to have the same cost per unit per level of production from one period to the next over the year. Estimated overhead can be found by dividing budgeted annual factory overhead costs by budgeted volume or activity levels. Overhead is applied throughout the year by multiplying the predetermined overhead rate by the actual amount of the allocation base used. Finally, at the end of the year, actual overhead costs are reconciled with applied overhead. Typically, the difference is not large, and the variance can be disposed of by: • Adding to or subtracting from the cost of goods sold account for the period. • Prorating the net difference between the current period’s applied overhead balances in the work-in-process inventory, finished goods inventory, and cost of goods sold accounts.
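The normal-costing mechanics described above (predetermined rate, overhead applied during the year, year-end reconciliation) can be sketched as follows. All figures are hypothetical, and machine hours are an assumed allocation base.

```python
# Predetermined rate set at the start of the year:
budgeted_overhead = 120000.0       # budgeted annual factory overhead
budgeted_machine_hours = 20000.0   # budgeted activity (allocation base)
predetermined_rate = budgeted_overhead / budgeted_machine_hours  # $6.00/hour

# Overhead applied throughout the year from actual activity:
actual_machine_hours = 21500.0
applied_overhead = predetermined_rate * actual_machine_hours  # 129000.0

# Year-end reconciliation against actual overhead incurred:
actual_overhead = 126000.0
variance = applied_overhead - actual_overhead  # 3000.0 overapplied
print(predetermined_rate, applied_overhead, variance)
# A small overapplied balance can simply be subtracted from cost of goods
# sold, or prorated across work-in-process, finished goods, and COGS.
```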
Standard Costing In a standard costing system, costs are assigned to products using quantity and price standards for direct materials, direct labor, and overhead using a
predetermined (standard) rate. Manufacturing, service, food, and nonprofit organizations all make use of standard costing to some extent. Standard costs are the expected or target costs for specific cost objects. A quantity standard is the amount of input that should be used per unit of output. A price standard is the amount that should be paid for the quantity of input to be used. The unit standard cost is computed by multiplying these standards:

Unit standard cost = Quantity standard × Price standard

Establishing standards is the joint responsibility of operations, purchasing, personnel, and accounting. Historical data, organizational policy, market expectations, strategy, time and motion studies, and activity analysis also play a role. Standards are the benchmark or norm for measuring performance. They can be set at an ideal level to encourage a higher level of performance or set at a currently attainable level. The advantages of a standard costing system include: • It is less likely to incorporate past inefficiencies. • It can improve planning and control by providing readily available unit cost information such as materials price variances that can be used for pricing decisions. • It can simplify product costing. • It can be adapted in light of new data indicating changes during the budget period. The disadvantages of a standard costing system include: • Unreasonable standards might be set. • Standards might be authoritarian, inflexible, or secretive. • Standards might be poorly communicated. • Detailed computation of variances may place undue emphasis on profits,
which can produce dysfunctional behavior in just-in-time manufacturing environments. (It may encourage inventories to be purchased in large quantities to take advantage of discounts.)
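The unit standard cost computation (quantity standard times price standard) is simple to express. The materials figures in this sketch are hypothetical.

```python
def unit_standard_cost(quantity_standard, price_standard):
    """Unit standard cost = quantity standard x price standard."""
    return quantity_standard * price_standard

# e.g., 3 lbs. of material allowed per unit at a standard price of $2.50/lb.:
materials_std = unit_standard_cost(3.0, 2.50)
print(materials_std)  # 7.5 per unit
# The full unit standard adds similar quantity x price standards for
# direct labor and overhead.
```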
Actual, Normal, and Standard Costing Compared Exhibit IV-1 summarizes how costs are assigned in actual, normal, and standard costing systems.
Exhibit IV-1: Cost Assignment in Actual, Normal, and Standard Costing
• Actual costing: actual cost for direct materials, direct labor, and overhead.
• Normal costing: actual cost for direct materials and direct labor; budgeted overhead cost applied using a predetermined rate.
• Standard costing: standard cost for direct materials, direct labor, and overhead.
Accumulation Costing Systems Accumulation costing systems accumulate costs and assign them to a particular cost object such as a product or service. Organizations typically use one of two basic types of accumulation costing systems when they need to assign costs to products and services: job costing or process costing.
Job Costing Job costing (also called job-order costing) is a costing system that assigns costs to a specific job (a distinct unit, batch, or lot of a product or service). Job costing is used in situations where many different products are produced each period and each unique job uses a different amount of resources. Job costing systems are often used by manufacturing organizations for capital asset construction such as roads, houses, and airplanes. In the service sector, job costing is used in medical and legal
organizations, advertising agencies, and repair shops. In the merchandising sector, it is used for custom mail-order items and special promotions.
A job costing system assigns costs to individual jobs using the following steps:
• Identify the job by a unique code or other job-specific reference.
• Trace the direct costs for the job.
• Identify the indirect cost pools (overhead) associated with the job.
• Choose the cost allocation bases (cost drivers), such as machine hours or labor hours, to be used in allocating indirect costs to the job.
• Calculate the rate per unit of each cost allocation base.
• Assign costs to the cost object by adding all traced direct costs and allocated indirect costs.
The benefits of job costing systems include the following:
• They provide detailed results of a specific job or operation.
• They can accommodate multiple costing methods, such as actual, normal, and standard costing, and are flexible enough to be used by a wide variety of organizations.
• They can have strategic value for an organization because they give a detailed breakdown of all the different types of costs.
• They can help pinpoint sources of cost overruns across different jobs by providing gross margin and gross profit figures to compare profitability.
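The job costing steps above can be sketched as follows; the allocation bases, rates, and amounts are invented for illustration:

```python
# Minimal job costing sketch: traced direct costs plus overhead
# allocated at predetermined rates per allocation base.

def job_cost(direct_costs, overhead_rates, driver_usage):
    """Total job cost = traced direct costs + allocated overhead.

    overhead_rates: predetermined cost per unit of each allocation base.
    driver_usage: units of each base consumed by this job.
    """
    direct = sum(direct_costs.values())
    allocated = sum(rate * driver_usage[base]
                    for base, rate in overhead_rates.items())
    return direct + allocated

# Hypothetical job: $5,000 materials, $3,200 labor, overhead applied
# at $40 per machine hour and $15 per labor hour.
cost = job_cost(
    direct_costs={"materials": 5_000, "labor": 3_200},
    overhead_rates={"machine_hours": 40.0, "labor_hours": 15.0},
    driver_usage={"machine_hours": 30, "labor_hours": 80},
)
```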
Process Costing A process costing system accumulates product or service costs by process or department and then assigns them to a large number of nearly identical products by dividing the total costs by the total number of units produced. Process costing is appropriate for highly automated, repetitive processes where the cost of one unit is identical to the cost of another.
Process costing systems are common among manufacturers that mass-produce large quantities of similar or identical products, such as paint, newspapers, food, or chemicals. In the service sector, check conversion and postal delivery organizations use process costing, as do services like medical treatments, beautician services, and dry cleaning. In the merchandising sector, process costing systems are used for items such as magazine subscription receipts.
Job Costing and Process Costing Compared Job costing and process costing systems share a number of similarities: • Both systems assign material, labor, and predetermined overhead costs to products and provide a mechanism for computing unit product costs. • Both systems use the same basic accounts, such as manufacturing overhead, raw materials, work-in-process, and finished goods. • The flow of costs through the accounts is basically the same in both systems. However, despite the similarities, the differences between the systems are significant, as shown in Exhibit IV-2.
Exhibit IV-2: Key Differences Between Job Costing and Process Costing

Job Costing:
• Used with a wide variety of distinct products or services.
• Total job costs consist of actual direct materials, actual direct labor, and overhead applied using a predetermined rate or rates.
• Costs accumulate by the individual job or order and are tracked separately.
• Unit cost is computed by dividing total job costs by units produced or served at the end of the job.

Process Costing:
• Used with similar or identical products and a more or less continuous flow of units.
• Costs are assigned uniformly to all units passing through a department during a specific period.
• Costs accumulate by process or department.
• The flow of costs is simplified because costs are traced to fewer processing departments.
• Unit cost is computed by dividing total process costs of the period by the units produced or served at the end of the period.
Many organizations have costing systems that are neither purely job costing nor purely process costing but involve elements of both. Costing systems must be chosen according to an organization's specific operational requirements.
Determining Process Costs
The key document in a process costing system is a departmental production report. This report tracks the number of units moving through the department, provides a computation of unit costs, and shows how costs were charged to the department. There are several steps in preparing a production report:
• Analyze the physical flow of production units; determine beginning work-in-process inventory and all units that enter the production department during an accounting period. Also determine units that are complete and transferred out from the department or are in the work-in-process inventory at the end of a period.
• Measure the total work expended on production during an accounting period by calculating equivalent units (see below) of production for direct materials, direct labor, and factory overhead.
• Determine total costs to account for; these include current costs incurred and the costs of the units in the work-in-process inventory.
• Compute unit costs; costs per unit are calculated for overall costs as well as for direct materials, direct labor, and factory overhead.
• Assign total manufacturing costs; these are assigned to units completed and transferred out during the period and units still in process at the end of the period.
Work-in-Process Inventories As mentioned earlier, organizations producing products or offering services
that are homogeneous and produced repetitively can benefit by using a process costing system. A central concern in process costing is accounting for work-in-process inventories. In some service industries, services are completed so fast that WIP inventories are almost nonexistent. Process costing for these organizations involves computing the unit cost for services performed during a specific period by dividing the total costs for that period by the number of services provided. However, in manufacturing organizations, WIP inventories are more complicated and present two major issues: • Given that process costing essentially divides a continuous process into artificial time periods, how is the unit cost for a product or service computed given that some units produced in a period are complete and some are incomplete? • How should the costs and work of beginning WIP be treated? Should they be counted with the current period’s work and costs, or should they be treated separately? The methods that have been developed to address these concerns use the concept of equivalent units in their calculations.
Equivalent Units In job costing, partially completed units have a cost already attached to them. In process costing, these values are more difficult to determine because costs are assigned to processes and departments, not jobs or items. Since product cost is calculated by determining the cost per unit in each department, partially completed units must be factored into these calculations. At the end of a period, it is necessary to estimate what percentage of units remains incomplete—still on the production line or in work-in-process inventory. To do this, process costing accounts for any WIP inventory as equivalent units. An equivalent unit (EU) is a measure of the amount of work done on partially completed units expressed in terms of how many complete units could have been created with the same amount of work in the period under
consideration. To calculate equivalent units, the number of units that are partially complete is multiplied by the estimated percentage of completion:

Equivalent units = Number of partially completed units × Percentage of completion

For example, direct labor on 100 pairs of tennis shoes that is 90% complete would total 90 equivalent direct labor units. Equivalent units are calculated separately for direct labor, direct materials, and overhead because one category might be more complete than another for the same product.
Equivalent units of production can be measured in two different ways: using the weighted average method or the first-in, first-out (FIFO) method.
Weighted Average Method
The weighted average method calculates the equivalent units of production for a department using the number of units transferred to the next department or to finished goods plus the equivalent units in the department's ending WIP inventory. Essentially, the costs and work carried over from the prior period are counted as if they belong to the current period. In this method, beginning inventory work and costs are pooled with current work and costs, and an average unit cost is computed and applied to both units transferred out and units remaining in ending inventory. Under the weighted average method, a department's equivalent units are computed as follows:

Equivalent units = Units completed and transferred out + (Units in ending WIP inventory × Percentage of completion)

A separate calculation is made for each cost category in each department or process. Under this method, it doesn't matter when a product is started. All units completed in the same period or in the ending inventory of that period are treated the same. The weighted average method is concerned only with the status of the products at the end of an accounting period.
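A minimal sketch of the weighted average calculation, with invented unit counts and cost figures:

```python
# Weighted average equivalent units for one department and one
# cost category. All unit counts and costs are hypothetical.

def equivalent_units_wavg(units_transferred_out, ending_wip_units,
                          ending_wip_pct_complete):
    """EU = units transferred out + (ending WIP units x % complete)."""
    return units_transferred_out + ending_wip_units * ending_wip_pct_complete

# 8,000 units completed and transferred out; 1,000 units in ending
# WIP that are 40% complete -> 8,400 equivalent units of work.
eu = equivalent_units_wavg(8_000, 1_000, 0.40)

# Weighted average pools beginning WIP costs with current-period
# costs and spreads the total over all equivalent units.
total_costs = 12_600 + 71_400      # beginning WIP cost + current cost
unit_cost = total_costs / eu       # cost per equivalent unit
```

A separate run of this calculation would be made for each cost category (materials, labor, overhead), since their completion percentages can differ.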
FIFO Method
The FIFO costing method (which was introduced in Chapter 1, Topic D) is an inventory valuation method that calculates the unit cost using only the costs incurred and work performed during the current accounting period. FIFO considers the beginning inventory as a batch of goods separate from the goods started and completed within the same period. This method assumes that the first work done is to complete the beginning WIP inventory. Therefore, all beginning WIP inventories are assumed to be completed before the end of the current period.
FIFO accounts separately for the cost of the units started in the previous period. That cost was carried into the current period through the beginning WIP inventory. If in the prior month the ending WIP inventory was 80% complete, the remainder, or 20%, is accounted for in the current month; these are called equivalent units to complete beginning inventory.
Under the FIFO method, equivalent units are determined using the following steps:
• Units to be accounted for
• Units accounted for
• Equivalent unit costs (using work done in the current period)
• Costs to be accounted for (beginning WIP inventory + current period costs)
• Costs accounted for
The formula for computing the equivalent units of production under FIFO is more complex than under the weighted average method:

Equivalent units (FIFO) = Units completed and transferred out + (Units in ending WIP inventory × Percentage of completion) − (Units in beginning WIP inventory × Percentage of completion at the start of the period)
As with the weighted average method, a separate calculation is made for each cost category in each department or process. Unlike the weighted average method, FIFO is concerned with the status of
products at both the end and the beginning of an accounting period. By definition, the beginning work-in-process inventory will always be partially complete; otherwise it would have been moved to the next department. Thus, the objective under FIFO is to obtain the correct costs of items completed during the period and items left in work-in-process inventory at the end of the period. Weighted Average and FIFO Methods Compared Exhibit IV-3 compares the weighted average method and the FIFO method.
Exhibit IV-3: Comparison of Weighted Average and FIFO Methods

Weighted Average Method:
• Blends work and costs from the prior period with work and costs in the current period.
• Easier to use because the calculations are simpler.
• Best suited to inventories and manufacturing costs that are stable.
• Less accurate in computing unit costs for current period output and for units in beginning work-in-process.

FIFO Method:
• Equivalent units and unit costs relate only to work done during the current period.
• Separates prior and current periods.
• More closely linked to continuous improvement efforts and gives management greater control over costs and performance evaluation.
• Produces a more current unit cost if changes occur in the prices for the manufacturing inputs from one period to the next.
For organizations with just-in-time or flexible manufacturing systems, choosing between the weighted average method and the FIFO method of process costing is less important, because those systems reduce overall inventory. In addition, if the accounting period is short (up to a month), then the unit costs calculated under both methods are unlikely to differ very much. With this understanding of how to determine process costs, we can see that the benefits of the process costing system include the following: • Continuous operations can take place while organizations receive timely, accurate, and relatively inexpensive cost information each period, due in
part to the use of equivalent units. • Production cost reports provide built-in checks and balances, such as balancing units to be accounted for against units already accounted for.
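Under illustrative assumptions (all counts invented), the relationship between the two methods' equivalent-unit figures can be sketched as:

```python
# One cost category; the same physical flow data under both methods.
units_out = 8_000                   # completed and transferred out
begin_wip, begin_pct = 2_000, 0.30  # beginning WIP, % complete at start
end_wip, end_pct = 1_000, 0.40      # ending WIP, % complete at end

# Weighted average counts prior-period work on beginning WIP as if
# it were done this period.
eu_wavg = units_out + end_wip * end_pct

# FIFO subtracts the work already performed on beginning WIP in the
# prior period, leaving only equivalent units of current-period work.
eu_fifo = eu_wavg - begin_wip * begin_pct
```

With short accounting periods or small inventories (as in just-in-time environments), the two figures converge, which is why the choice of method matters less there.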
Activity-Based Costing (ABC) The traditional cost accounting systems discussed so far suffer from several defects that can distort costs and result in decision making based on inaccurate data: • All manufacturing costs, even those that are not caused by any specific product, are allocated to products. Nonmanufacturing costs that are caused by products, such as set-up and materials-handling costs, are not assigned to products. • Costs of idle capacity are also allocated to products, which essentially charges products with resources they don’t use. • In traditional methods, expenses are typically allocated to products using unit- or volume-based cost drivers such as direct labor hours, machine hours, direct materials costs, and units produced. These can provide inaccurate product costs because products do not consume most support resources in proportion to their production volumes. • The use of volume-based cost drivers to calculate plant-wide or departmental rates produces inaccurate product costs when a large share of factory overhead costs is not volume-based and when organizations produce a diverse mix of products or services with different attributes and features. Unlike traditional costing systems, activity-based costing (ABC) is a method of assigning costs to products, services, and customers based on the consumption of resources caused by activities. ABC is a costing method designed to provide managers with cost information for strategic and other decisions that potentially affect capacity and therefore “fixed” costs. It is often used to supplement, rather than replace, an organization’s more traditional costing system.
To understand ABC, it is necessary to be familiar with the following terminology: • Activity. Any type of action, work, or movement performed within an entity. • Activity center. A logical grouping of activities, actions, movements, or sequences of work. • Resource. An economic element applied or used to perform activities (such as salaries and materials). • Resource cost driver. A measurement of the amount of resources consumed by an activity. Resource costs used in an activity are assigned to a cost pool using a resource cost driver. An example of a cost driver is the amount of leather necessary to make a pair of boots. • Activity cost driver. A measurement of the amount of an activity used by a cost object. Activity cost drivers assign costs in cost pools (batch, lot, product, facility, or unit) to cost objects. An example of an activity cost driver is the number of labor hours required for the activity of performing set-up for a particular product. The premise of the ABC approach is that an organization’s products or services are the result of activities performed and that the required activities use resources, incurring costs. Resources are assigned to activities, and activities are assigned to cost objects based on the activities’ use. The resource cost is calculated using a cost driver; the amount of activity consumed in a period is multiplied by the cost of the activity. The calculated costs are assigned to the product or service. ABC systems can be very helpful in the following instances: • For tracking costs when organizations have expanded into multiple products and/or products that use varying amounts of resources; this includes raw materials and other direct costs and also indirect costs such as customer service, quality control, and supervision • When the cost of inaccurate costing data exceeds the added costs of
collecting more information and implementing an ABC system • When strategic decision making includes product pricing decisions, allocation of funds, and process improvement Exhibit IV-4 presents some of the differences between ABC and traditional costing systems.
Exhibit IV-4: Differences Between ABC and Traditional Costing

ABC:
• Uses activity- and volume-based cost drivers.
• Overhead assigned to activities and then to products or services.
• Focus on processes and costing issues that cross departmental boundaries.
• Nonmanufacturing and manufacturing costs may be assigned to products.

Traditional Costing:
• Uses up to three volume-based cost drivers.
• Overhead assigned to departments and then to products or services.
• Focus on assigning cost and process improvement responsibilities to managers within departments.
• Only manufacturing costs are assigned to products.
Two-Stage Allocation Activity-based costing is a two-stage allocation process: • Stage one—Assign overhead (resource) costs to activity cost pools or activity centers using pertinent resource cost drivers. Cost pools can be either activities or activity centers. • Stage two—Based on how a cost object uses resources (using pertinent activity cost drivers that measure a cost object’s drain on an activity), assign activity costs to cost objects such as products, services, or customers.
Key Steps in ABC There are three key steps in implementing an ABC system:
1. Identify activities and resource costs. Activity analysis determines work performed by each activity and organizes it into activity centers and various levels of activity. Activity levels include:
• Unit—Volume- and unit-based activities.
• Batch—Set-up, purchase orders, inspections, and production scheduling.
• Product-sustaining—Product design, expediting, and implementing engineering changes.
• Facility-sustaining—Environmental health and safety, security, depreciation, taxes, and insurance.
• Customer—Customer service, phone banks, and custom orders.
2. Assign resource costs to activities. Resource costs are assigned to activities using resource cost drivers. A cause-and-effect relationship must be established between the driver and the activity. Common relationships include:
• Number of employees—personnel activities.
• Time worked—personnel activities.
• Set-up hours—set-up or machine activities.
• Machine hours—machine-running activities.
• Number of orders—production orders.
• Square feet—cleaning activities.
• Value added—general and administrative.
3. Assign activity costs to cost objects. After determining activity costs, the activity costs per unit are measured using an appropriate cost driver. The activity cost driver should be directly related to the rise and fall of the cost. The activity cost drivers determine the proportion of a cost to allocate to each product or service using the following formula:

Activity rate = Total activity cost pool ÷ Total activity driver quantity
Cost assigned to a cost object = Activity rate × Activity driver units consumed by the cost object
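A minimal sketch of the two-stage allocation described above; the activity pools, drivers, and usage figures are all invented for illustration:

```python
# Stage one result: resource costs already assigned to activity
# cost pools via resource cost drivers (figures are hypothetical).
activity_pools = {"setups": 20_000, "inspections": 9_000}

# Total driver quantity for each activity across all products.
driver_totals = {"setups": 100, "inspections": 300}

# Activity rate = total activity cost pool / total driver quantity.
rates = {a: activity_pools[a] / driver_totals[a] for a in activity_pools}

# Stage two: assign activity costs to one product based on the
# activity driver units that product consumes.
product_usage = {"setups": 10, "inspections": 45}
product_overhead = sum(rates[a] * product_usage[a] for a in product_usage)
```

A low-volume product that consumes many set-ups and inspections picks up proportionally more overhead here than a volume-based rate would assign it, which is the distortion ABC is meant to correct.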
Benefits and Limitations of ABC The benefits of using activity-based costing include: • It reduces distortions caused by traditional cost allocation methods.
• It gives managers access to relevant costs.
• It measures activity-driving costs, which allows managers to assess how overall cost and value are affected.
• It normally results in substantially greater unit costs for low-volume products than is reported under traditional product costing. This results in better decision making regarding whether or not to add or drop a product line.
The limitations of using activity-based costing include:
• It requires numerous development and maintenance hours to implement and use, even with new software and databases.
• It cannot relate some overhead costs to a particular cost driver, so those costs may need to be allocated arbitrarily.
• It generates a tremendous amount of data, and managers can be misled into concentrating on the wrong data.
• Its reports do not conform to GAAP, so it may not be used as an external reporting system.
Additional Costing Methods
Between the two extremes of traditional costing systems and the ABC system, there are many other costing methods that emphasize different aspects of the costing process. Two of these are operation costing and life-cycle costing.
Operation Costing Operation costing is a hybrid system incorporating elements of job costing and process costing. It assigns direct materials to each job or batch but assigns direct labor and overhead in a manner similar to that for process costing. Operation costing is useful for organizations that have similar processes for high-volume activities but need to use different materials for different jobs. Examples of products for which operation costing may be
useful include clothing, jewelry, furniture, shoes, and electronics.
Life-Cycle Costing Life-cycle costing considers the entire cost life cycle of a product or service. It differs from other costing methods, which measure and report product and service costs for relatively short periods, such as a month or a year. Life-cycle costing provides managers with a more complete view of the total costs of a product or service rather than limiting the analysis to manufacturing costs, which is typical of most costing methods. As with an ABC system, organizations sometimes use life-cycle costing to supplement their usual costing systems. Life-cycle costing can provide strategic cost planning and product pricing information, which can help managers lower the total costs of a product or service over its entire life cycle. In life-cycle costing, the total costs for a service’s or product’s life cycle have three phases: • Upstream costs, such as research and development and design (prototyping, testing, and engineering) • Manufacturing costs, such as purchasing and direct and indirect manufacturing costs • Downstream costs, such as marketing and distribution and service and warranty costs Life-cycle costing places a strategic focus on improving costs in all three phases. For example, poor early design of a product or service could lead to much higher marketing costs, lower sales, and higher service costs over the life of a product or service. Improving product design in the upstream phase and improving the manufacturing process and relationships with suppliers in the manufacturing phase will improve the costs in the downstream phase. Life-cycle costing aims to make managers more proactive in the early phases to avoid having to be reactive in the downstream phase.
Topic C: Costs and Their Use in Decision Making (Level B) Managers are constantly faced with having to make decisions among alternatives. The decisions often involve which products to make or services to offer, which production methods to use, what prices to charge, and what channels of distribution to use. Making decisions about these and other issues often requires sifting through large amounts of data, with only some of it being pertinent. In making a decision, the costs and benefits of one alternative are compared to the costs and benefits of other alternatives. Costs that differ between alternatives are called relevant costs. Note that costs that have already been incurred (sunk costs) are no longer relevant to decision making. Distinguishing between relevant and irrelevant costs is important for two primary reasons: • Irrelevant data can be ignored and need not be analyzed, which saves decision makers time and effort. • Bad decisions can result from mistakenly including irrelevant cost and benefit data when analyzing alternatives. To be successful in decision making, managers need to be able to tell the difference between relevant and irrelevant data and correctly use the relevant data in analyzing alternatives. Internal auditors may be able to play a value-adding assurance coverage role in reviewing the quality, timeliness, and completeness of data used in management decision-making processes. This can include identifying potential issues or deficiencies of the processes and offering related enhancement recommendations as appropriate. Internal auditors can also play this type of key assurance role in other important decision-making processes for the organization.
Cost Behavior and Relevant Costs A relevant cost is a cost yet to be incurred; it is a future cost. Relevant
costs differ for each option available to the decision maker. If a cost will be the same regardless of the alternative selected, it is irrelevant and should not be considered in the decision-making process. Only future costs that differ among options are relevant for a decision.
For example, let's say that the manager of David's Cafe is considering buying a new espresso maker. She is evaluating different espresso maker models and is also strategizing about where to put it. The prices for the different espresso makers are relevant to the decision because those costs differ according to each machine's features and benefits. An example of an irrelevant cost is the monthly rent for the cafe. The building's rent remains the same regardless of whether the manager purchases the new espresso maker, which model she selects, or where the machine is installed.
Relevant costs:
• Can be either fixed or variable costs, but they are often variable because they differ for each option and have not already been committed.
• Depend on changes in supply and demand for resources.
• Are avoidable; they can be eliminated in whole or in part by choosing one alternative over another.
• Are oriented toward the future.
• Are focused on short-term decisions.
• Are different for each alternative choice.
• Should include opportunity costs—the benefit given up when one alternative is selected over another.
Relevant costs should also include both quantitative and qualitative factors. Quantitative factors are outcomes that are measured in numerical terms. These are broken down further into financial measures and nonfinancial measures. Financial measures are expressed in monetary terms and include things like the costs of direct materials, labor, and marketing. Nonfinancial measures are expressed numerically but not in financial terms. These
include a reduction in product development time for a manufacturing company or the percentage of on-time arrivals for an airline company. Qualitative factors cannot be measured in numerical terms and include issues like employee morale, customer goodwill, and the quality of a product or service. Relevant costs typically emphasize quantitative factors because of their financial ramifications. However, any decision should ultimately evaluate the tradeoffs between both of these types of factors.
Common Applications Management accountants face many decisions in which the application of relevant cost analysis is useful. Four of the most common applications for this cost information are make or buy decisions, special order decisions, sell or process further decisions, and keep or drop decisions.
Make or Buy Decisions
Managers are often faced with the decision to make a particular product or offer a service internally or to buy it from an outside vendor (out-sourcing). A manufacturer may need to consider whether to make or buy components used in manufacturing. A manager of a service organization may need to decide whether to provide a service (such as payroll processing, human resources, or IT services) in-house or to out-source it.
Reaching a decision about whether to make or buy generally involves a comparison of the relevant cost to make the item internally with the cost to purchase it externally. If the relevant costs are less than the purchase price, the decision should be to keep production inside. If the outside purchase price is less than these avoidable costs, the logical decision is to out-source.
As mentioned earlier, opportunity costs should also be part of the decision-making process. Common make or buy opportunity costs include:
• Whether some part of the fixed overhead could be reduced by out-sourcing.
• Whether some part of the space being used during internal production
could be used for some other purpose. A make or buy analysis of relevant costs plays a key role in the decision to out-source, but there’s more to successful out-sourcing than potential profit margins. Organizations also need to evaluate the qualitative factors of dealing with an external supplier. These include an external supplier’s ability to: • Ensure on-time delivery and a smooth flow of parts, materials, and services. • Maintain acceptable quality control.
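The make-or-buy comparison described above can be sketched as follows; the costs and the opportunity benefit are hypothetical:

```python
# Make-or-buy: compare the avoidable (relevant) cost of making,
# plus any opportunity benefit of freed capacity, against the
# outside purchase price. All figures are invented.

def should_outsource(avoidable_make_cost, purchase_cost,
                     opportunity_benefit=0):
    """True if buying is cheaper once opportunity benefits are counted."""
    relevant_make_cost = avoidable_make_cost + opportunity_benefit
    return purchase_cost < relevant_make_cost

# Making 1,000 parts in-house has avoidable costs of $23 each; a
# vendor quotes $21 each; the freed production space could earn
# $1,500 in an alternative use.
decision = should_outsource(
    avoidable_make_cost=23 * 1_000,
    purchase_cost=21 * 1_000,
    opportunity_benefit=1_500,
)
```

Note that unavoidable fixed overhead is deliberately excluded; per the passage, only costs that differ between the alternatives are relevant.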
Special Order Decisions A special order pricing decision involves a one-time opportunity to accept or reject an order for a specified quantity of a product or service. Determining whether to accept or reject a special order request involves evaluating profitability based on relevant and opportunity costs and capacity utilization. If there is excess capacity—more than enough to cover the order—the organization needs to identify variable costs associated with the special order that are not normally incurred. These are relevant costs, and they determine the break-even price. (See Topic A in this chapter for more on CVP analysis and break-even points.) If the price offered for the special order is greater than the unit cost, the order is profitable and should be accepted. If the firm is operating at or near capacity, the break-even price is the normal sale price. When there is no excess capacity, a special order should be taken only if the offered price exceeds the normal sale price. A firm must also consider the opportunity costs of accepting the order and evaluate whether doing so would result in the loss of other more-profitable sales.
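The special order logic above can be sketched as follows, with invented prices and costs: with excess capacity the break-even floor is the incremental (variable) unit cost; at capacity it is the normal sale price.

```python
# Special order acceptance rule from the passage, simplified to
# ignore order-specific opportunity costs. Figures are hypothetical.

def accept_special_order(offer_price, variable_unit_cost,
                         normal_price, has_excess_capacity):
    floor = variable_unit_cost if has_excess_capacity else normal_price
    return offer_price > floor

offer = 14.00   # one-time price offered per unit

# With idle capacity, $14 exceeds the $11 variable cost -> accept.
with_capacity = accept_special_order(offer, 11.00, 18.00, True)

# At full capacity, $14 is below the $18 normal price -> reject.
at_capacity = accept_special_order(offer, 11.00, 18.00, False)
```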
Sell or Process Further Decisions Sell or process further decisions concern selling a product or service before an intermediate processing step or deciding to add further processing and
then sell the product or service for a higher price. Common examples include decisions to: • Add features to a product to enhance functionality. • Improve the flexibility or quality of a service. • Repair defective products so they can be sold at the normal sale price rather than at a discount. Sell or process further decisions require analysis of relevant costs and consideration of joint products or services. These involve situations in which two or more products or services are produced from a single common input and have common processes and production costs up to a split-off point. The split-off point is the point in the production process at which the joint products can be recognized as separate products. Joint costs are those costs incurred up to the split-off point. An example of a joint product is cranberries that are harvested and then sold as is (the split-off point) or further processed into juice, sauce, and jelly. Many managers erroneously consider joint costs as relevant to a sell or process further decision. However, joint costs are irrelevant because they are common costs that must be incurred to get the product or service to the split-off point. They are not directly attributable to any of the intermediate products or services; they are irrelevant in deciding what to do from the split-off point forward. For sell or process further decisions, it is profitable to continue processing a product or service as long as the incremental revenue received (the revenue attributable to the added processing) exceeds the incremental processing costs.
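The decision rule above (joint costs are irrelevant; process further only when incremental revenue exceeds incremental cost) can be sketched as, with invented prices:

```python
# Sell-or-process-further: joint costs up to the split-off point are
# sunk and excluded; only the incremental figures matter.

def process_further(price_at_splitoff, price_after_processing,
                    incremental_processing_cost):
    incremental_revenue = price_after_processing - price_at_splitoff
    return incremental_revenue > incremental_processing_cost

# Hypothetical cranberry example: sell as harvested for $2.00/lb, or
# process into sauce selling for $3.50/lb at a further processing
# cost of $1.00/lb. Incremental revenue $1.50 > cost $1.00 -> process.
decision = process_further(2.00, 3.50, 1.00)
```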
Keep or Drop Decisions A decision to keep or drop a product or service or whether to add a new one is largely determined through relevant cost analysis and the impact the decision will have on net operating income. Avoidable costs must be distinguished from unavoidable costs. Only those costs that are avoidable
are relevant to consider in the decision analysis. For example, given a product line made up of three different products, it is generally unwise to drop one of the products from the sales mix based solely on a recent net operating loss. Instead, a manager should attempt to distinguish between traceable fixed expenses and common fixed expenses for the product. The traceable fixed expenses are potentially avoidable costs if the product is dropped. The common fixed expenses are unavoidable costs and will remain whether the product is dropped or kept. Once avoidable costs are identified, their associated contribution margin can be determined and the decision to keep, add, or drop a product or service can be made more confidently. If the avoidable fixed costs saved are greater than the contribution margin amount lost, it will be better to eliminate the segment; overall net operating income should improve. If the avoidable fixed costs saved are not as much as the contribution margin amount that will be lost, it will be better to keep the product or service.
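The keep-or-drop comparison above reduces to one inequality; the segment figures below are invented for illustration:

```python
# Keep-or-drop rule: drop a segment only if the avoidable (traceable)
# fixed costs saved exceed the contribution margin given up.
# Common (unavoidable) fixed costs are excluded as irrelevant.

def drop_segment(contribution_margin_lost, avoidable_fixed_costs_saved):
    return avoidable_fixed_costs_saved > contribution_margin_lost

# A product line shows a net operating loss after allocated common
# costs, but only $30,000 of its fixed costs are avoidable while it
# contributes $42,000 of margin -> dropping it would cut income.
decision = drop_segment(contribution_margin_lost=42_000,
                        avoidable_fixed_costs_saved=30_000)
```

This illustrates the passage's caution: a reported net operating loss alone is not a sound basis for dropping a product.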
Next Steps You have completed Part 3, Section IV, of The IIA’s CIA Learning System®. Next, check your understanding by completing the online section-specific test(s) to help you identify any content that needs additional study. Once you have completed the section-specific test(s), a best practice is to reread content in areas you feel you need to understand better. Then you should complete the Part 3 online post-test. You may want to return to earlier section-specific tests periodically as you progress through your studies; this practice will help you absorb the content more effectively than taking a single test multiple times in a row.
Bibliography
The following references were used in the development of Part 3 of The IIA’s CIA Learning System. Please note that all website references were valid as of April 2018.
“Accounting Standards Update No. 2016-02, Leases (Topic 842).” FASB, www.fasb.org/jsp/FASB/Document_C/DocumentPage?cid=1176167901010&acceptedDisclaimer=true, February 2016.
“All about Ransomware.” Malwarebytes, www.malwarebytes.com/ransomware/.
American Institute of Certified Public Accountants (AICPA). “AU-C Section 240, Consideration of Fraud in a Financial Statement Audit.” www.aicpa.org/research/standards/auditattest/downloadabledocuments/au-c00240.pdf, 2017.
“Assessing Cybersecurity Risk: Roles of the Three Lines of Defense.” Altamonte Springs, Florida: The Institute of Internal Auditors, 2016.
“Business Continuity Management” (previously Global Technology Audit Guide 10 [GTAG® 10]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2009.
Cau, David. “Governance, Risk and Compliance (GRC) Software: Business Needs and Market Trends.” www2.deloitte.com/content/dam/Deloitte/lu/Documents/risk/lu_en_ins_governancerisk-compliance-software_05022014.pdf.
“Change and Patch Management Controls: Critical for Organizational Success,” 2nd ed. (previously Global Technology Audit Guide 2 [GTAG® 2]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2012.
“COBIT 5: Enabling Processes.” www.isaca.org/COBIT/Pages/COBIT-5Enabling-Processes-product-page.aspx.
Committee of Sponsoring Organizations of the Treadway Commission. Enterprise Risk Management—Integrating with Strategy and Performance. Jersey City, New Jersey: American Institute of Certified Public Accountants, 2017.
Committee of Sponsoring Organizations of the Treadway Commission. Internal Control—Integrated Framework (2013). Jersey City, New Jersey: American Institute of Certified Public Accountants, 2013.
Creely, Edel. “5 BYOD Security Implications and How to Overcome Them.” Trilogy Technologies, trilogytechnologies.com/5-byod-securityimplications/, May 26, 2015.
Crowe Horwath LLP. “Enterprise Risk Management for Cloud Computing.” COSO, www.coso.org/Documents/Cloud-Computing-Thought-Paper.pdf, 2012.
“Effective Dates of Major Standards.” FASB, www.fasb.org/cs/Satellite?c=Page&cid=1176169222185&pagename=FASB%2FPage%2FSectionPage.
“Evaluating Corporate Social Responsibility/Sustainable Development” (IPPF Practice Guide). Altamonte Springs, Florida: The Institute of Internal Auditors, 2010.
“FASB Accounting Standards Codification®—About the Codification” (v 4.10). FASB, asc.fasb.org/imageRoot/71/58741171.pdf.
“Framework for Improving Critical Infrastructure Cybersecurity,” Version 1.0. NIST (National Institute of Standards and Technology), www.nist.gov/sites/default/files/documents/cyberframework/cybersecurityframework-021214.pdf, 2014.
“Gartner Says 8.4 Billion Connected ‘Things’ Will Be in Use in 2017, Up 31 Percent from 2016.” Gartner, www.gartner.com/en/newsroom/pressreleases/2017-02-07-gartner-says-8-billion-connected-things-will-be-in-use-in2017-up-31-percent-from-2016, February 7, 2017.
Grassi, Paul A., Michael E. Garcia, and James L. Fenton. “Digital Identity Guidelines” (NIST Special Publication 800-63-3). NIST (National Institute of Standards and Technology), nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-63-3.pdf.
“Identity and Access Management” (previously Global Technology Audit Guide 9 [GTAG® 9]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2007.
“Information Technology Risks and Controls,” 2nd ed. (previously Global Technology Audit Guide 1 [GTAG® 1]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2012.
ISACA, www.isaca.org.
ISO/IEC 27017:2015, “Information technology—Security techniques—Code of practice for information security controls based on ISO/IEC 27002 for cloud services.” www.iso.org/standard/43757.html.
“ITIL Certifications.” Axelos, www.axelos.com/certifications/itilcertifications.
“The ITIL Foundation Certificate in IT Service Management Syllabus,” Version 5.5. Axelos, www.axelos.com/getmedia/b2d6281d-14aa-45fc-abb74d228810c328/The_ITIL_Foundation_Certificate_Syllabus_v5-5.aspx, 2013.
“Leases.” FASB, www.fasb.org/cs/Satellite?c=Page&cid=1351027207574&d=Touch&pagename=FASB%2FPage%2FBridgePage#section.
“Management of IT Auditing,” 2nd ed. (previously Global Technology Audit Guide 4 [GTAG® 4]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2013.
“Managing and Auditing IT Vulnerabilities” (previously Global Technology Audit Guide 6 [GTAG® 6]). Altamonte Springs, Florida: The Institute of Internal Auditors, 2006.
“Measuring Internal Audit Effectiveness and Efficiency” (IPPF Practice Guide). Altamonte Springs, Florida: The Institute of Internal Auditors, 2010.
“The New Mafia: Gangs and Vigilantes: A Guide to Cybercrime for CEOs.” Malwarebytes, www.malwarebytes.com/pdf/white-papers/Cybercrime_NewMafia.pdf.
“Revenue Recognition: Why Did the FASB Issue a New Standard on Revenue Recognition?” FASB, www.fasb.org/jsp/FASB/Page/ImageBridgePage&cid=1176169257359.
Sawyer, Lawrence B., Mortimer A. Dittenhofer, and James H. Scheiner. Sawyer’s Internal Auditing, 5th ed. Altamonte Springs, Florida: The Institute of Internal Auditors, 2005.
Stippich, Warren W., Jr., and Bradley J. Preber. Data Analytics: Elevating Internal Audit’s Value. Altamonte Springs, Florida: The IIA Research Foundation, 2016.
“Supplemental Guidance.” The Institute of Internal Auditors, na.theiia.org/standards-guidance/recommended-guidance/practiceguides/Pages/Practice-Guides.aspx.
Taber, David. “The 11-Point Audit for Your Salesforce.com System.” CIO, www.cio.com/article/3146983/customer-relationship-management/the-11point-audit-for-your-salesforcecom-system.html, December 5, 2016.
Vito, Kelli. Auditing Human Resources, 2nd ed. Altamonte Springs, Florida: The IIA Research Foundation, 2010.
“What Is COBIT 5?” ISACA, www.isaca.org/cobit/pages/default.aspx.
“What Is the Difference Between Differential and Incremental Backups (and Why Should I Care)?” Acronis, www.acronis.com/en-us/articles/incrementaldifferential-backups.
Zamora, Wendy. “Truth in Malvertising: How to Beat Bad Ads.” Malwarebytes, blog.malwarebytes.com/101/2016/06/truth-in-malvertisinghow-to-beat-bad-ads/, December 13, 2017.
Contents
Section IV: Financial Management
Section Introduction
Chapter 1: Financial Accounting and Finance
Topic A: Concepts and Principles of Financial Accounting (Level B)
Topic B: Advanced and Emerging Financial Accounting Concepts (Level B)
Topic C: Financial Analysis (Ratio Analysis) (Level P)
Topic D: Revenue Cycle, Current Asset Management Activities and Accounting, and Supply Chain
Topic E: Capital Budgeting, Capital Structure, Taxation, and Transfer Pricing (Level B)
Chapter 2: Managerial Accounting
Topic A: General Concepts in Managerial Accounting (Level B)
Topic B: Costing Systems (Level B)
Topic C: Costs and Their Use in Decision Making (Level B)
Bibliography
Index